10 Website Security Best Practices You Can Implement Today

According to a recent Global Security Study from Citrix conducted by the Ponemon Institute, 69 percent of respondents believe some of their organization’s existing security solutions are outdated and inadequate. This is particularly problematic given that many common vulnerabilities can be eliminated fairly easily. To help businesses strengthen their security posture and reduce vulnerabilities, here are 10 website security best practices that can be implemented today.

BEST PRACTICE #1: ENCRYPTION VIA HTTPS IMPLEMENTATION

While HTTP was conceived as a means to transfer information on the internet, HTTPS adds important security properties for businesses and their end users. HTTPS runs HTTP over an encrypted TLS connection, and the server authenticates itself to the visitor via its certificate.

The main benefit of HTTPS is that it makes your site more secure for your users when they provide any sort of sensitive information, such as payment card data, because the traffic is encrypted. Since attackers don’t hold the session’s encryption keys, HTTPS protects against “man in the middle” attacks. HTTPS implementation provides a number of website security benefits, assuring site visitors that (a minimal enforcement sketch follows this list):

  • The site they are on is actually the site the URL says it is
  • The content on the site has not been changed in any way by anybody other than the site owner
  • Any information shared between the visitor and the site through a contact form or reservation signup will not end up in the hands of a third party
  • The visitor’s browsing activity on the site is not exposed to eavesdropping by some unauthorized third party
  • Any payment gateways on the site are secure
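
To make enforcement concrete, here is a minimal sketch at the application layer, assuming a Python/Flask stack (Flask and the route structure are assumptions, not part of the original article); in a real deployment, TLS itself would typically be terminated at the web server or load balancer:

    # Minimal sketch: force HTTPS and send an HSTS header (Flask assumed).
    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.before_request
    def force_https():
        # Redirect any plain-HTTP request to its HTTPS equivalent.
        if not request.is_secure:
            return redirect(request.url.replace("http://", "https://", 1), code=301)

    @app.after_request
    def set_hsts(response):
        # Ask browsers to use HTTPS for the next year, including subdomains.
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        return response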


BEST PRACTICE #2: SECURE SOCKET LAYER (SSL) CERTIFICATES

Secure Sockets Layer (SSL), and its successor Transport Layer Security (TLS), is the protocol that HTTPS runs over; installing an SSL/TLS certificate on your site is what enables HTTPS. The certificate enables encryption of the data sent from a customer’s browser to a company’s server, with session key strengths ranging from 128-bit to the recommended 256-bit. In today’s increasingly treacherous online world, the stronger the encryption, the better.
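
To see what a given site’s certificate actually provides, you can inspect the certificate and the negotiated cipher directly. A small sketch using only Python’s standard library (example.com is a placeholder hostname):

    # Inspect a site's TLS certificate and negotiated cipher strength.
    import socket
    import ssl

    hostname = "example.com"  # placeholder
    context = ssl.create_default_context()  # verifies the certificate chain

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print("Issuer: ", dict(item[0] for item in cert["issuer"]))
            print("Expires:", cert["notAfter"])
            # cipher() returns e.g. ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)
            print("Cipher: ", tls.cipher())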

BEST PRACTICE #3: MULTIFACTOR AUTHENTICATION (MFA) WITH SINGLE SIGN-ON (SSO)

Multifactor authentication (MFA) is a security practice that requires website users to provide an additional form of authentication at login, beyond the standard user name and password. This is normally accomplished through an SMS message, a voice message, or a one-time code generated by an application on the user’s mobile phone.

MFA also can and should include more advanced website security methods, such as biometrics, GPS location, or a hardware token, but those can take more time and effort to implement. There are numerous MFA solutions available that can be incorporated into website security for customer and end-user access to a variety of services or applications. The addition of single sign-on (SSO) enables web users to authenticate once and gain access to cloud applications, networks, and other connected business systems without separate sign-on steps for each.
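
For illustration, the one-time codes mentioned above are typically TOTP values (RFC 6238). The following is a minimal standard-library sketch of how such a code is derived; a production system should use a vetted MFA library and server-side rate limiting rather than hand-rolled code:

    # Illustrative TOTP (RFC 6238): HMAC-SHA1 over a 30-second time counter.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F  # dynamic truncation
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; matches common authenticator apps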

BEST PRACTICE #4: UPDATE PLATFORMS AND SCRIPTS

Keep installed platforms and scripts up to date to eliminate security loopholes that allow malicious hackers to take control of the website. Without regular maintenance of all components of a platform, urgent fixes for major user-facing problems can become a large undertaking very quickly. System administrators must subscribe to manufacturer support and product announcements to be aware of currently available patches, and have a protocol in place to implement them immediately.
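
The same “know what is outdated” discipline applies at every layer of the stack. As one small example, a Python environment can be audited with pip’s JSON output (this assumes pip is on the PATH; it is an illustration, not a complete patch protocol):

    # List installed Python packages that have newer releases available.
    import json
    import subprocess

    result = subprocess.run(
        ["pip", "list", "--outdated", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    for pkg in json.loads(result.stdout):
        print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")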

BEST PRACTICE #5: INSTALL SECURITY PLUG-INS

According to the most recent survey, WordPress is used by 59 percent of websites that run a CMS, from those of individuals to those of the largest enterprises. The most common way hackers enter a WordPress site is through an outdated plug-in or an outdated WordPress installation. Consequently, it’s imperative to install security plug-ins wherever and whenever possible to actively prevent hacking attempts.
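
As a sketch of the “stay current” half of this practice, the latest WordPress release can be fetched from the project’s public version-check API and compared against what a site is running (the response structure shown is an assumption based on the 1.7 endpoint):

    # Fetch the current WordPress release from the official version-check API.
    import json
    import urllib.request

    URL = "https://api.wordpress.org/core/version-check/1.7/"
    with urllib.request.urlopen(URL) as resp:
        data = json.load(resp)

    latest = data["offers"][0]["current"]  # assumed response layout
    print("Latest WordPress release:", latest)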

BEST PRACTICE #6: DILIGENCE, POLICIES, AND FIREWALLS FOR XSS ATTACK PREVENTION

It’s imperative that any code on your website that handles input, whether in functions or form fields, validates and sanitizes that input as strictly as possible in order to prevent cross-site scripting (XSS) attacks. XSS attacks consist of attackers injecting malicious JavaScript code into web pages, exploiting coding vulnerabilities so that the script runs in other visitors’ browsers.

While diligence in the coding process is the most important preventive measure, web application firewalls (WAFs) also play an important role in mitigating reflected XSS attacks. In addition, a robust Content Security Policy (CSP) allows specification of the domains that a browser should consider valid sources of executable scripts when on your page.
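
To make both measures concrete, here is a hedged Python sketch: escaping untrusted input before it is rendered into HTML, plus a CSP header value that restricts script sources to your own domain (cdn.example.com is a placeholder):

    # Escape untrusted input and define a restrictive CSP header value.
    import html

    def render_comment(user_input: str) -> str:
        # html.escape neutralizes <script> injection in an HTML body context.
        return f"<p>{html.escape(user_input)}</p>"

    CSP_HEADER = ("Content-Security-Policy",
                  "default-src 'self'; script-src 'self' https://cdn.example.com")

    print(render_comment('<script>alert("xss")</script>'))
    # -> <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>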

BEST PRACTICE #7: IMPLEMENT PASSWORD MANAGERS

More than just having password generators, businesses should implement password managers that can provide a wealth of important features, including:

  • Password generator
  • Local-only key encryption with AES-256
  • Automatic cloud credential backup
  • Master key only visible to administrator
  • Active Directory, LDAP, federated ID management, SIEM, and ticketing system integration
  • Compliance report generation
  • Employee provisioning and deprovisioning
  • Key self-destruct settings
  • FISMA, FIPS, HIPAA, and PCI compliance; SOC 2 certification
  • Security audit capabilities
  • 128-bit SSL for server communication
  • SHA-512 hashing

While all of these features may not be included in a single password manager solution, most are available in the more robust offerings.
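
As one concrete example of the hashing items above, a master password should never be stored directly; instead, a salted, iterated key-derivation function produces a verifier that is useless to an attacker on its own. A minimal standard-library sketch (the iteration count is an assumption to be tuned to your hardware):

    # Derive a storage-safe verifier from a master password (salted PBKDF2-SHA512).
    import hashlib
    import os

    password = b"correct horse battery staple"  # demo value only
    salt = os.urandom(16)                       # unique per user, stored alongside
    verifier = hashlib.pbkdf2_hmac("sha512", password, salt, 600_000)
    print(salt.hex(), verifier.hex())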

BEST PRACTICE #8: LOCK DOWN DIRECTORY AND FILE PERMISSIONS

Locking down your directory and file permissions can be somewhat involved depending on the size of your business and whether or not you have a qualified systems administrator. While file server resource managers (FSRMs) are designed to enable administrators to perform these functions, there are automated tools available that simplify the process in large organizations.
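
As a hedged sketch of what “locking down” can mean in practice on a Unix host, the following walks a web root and strips group/other write permissions; the path is a placeholder, and real permission policies are usually more nuanced:

    # Remove group/other write bits from everything under a web root.
    import os
    import stat

    WEB_ROOT = "/var/www/html"  # assumed path

    for dirpath, dirnames, filenames in os.walk(WEB_ROOT):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            os.chmod(path, mode & ~(stat.S_IWGRP | stat.S_IWOTH))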

BEST PRACTICE #9: IMPLEMENT MOBILE DEVICE AND MOBILE APPLICATION MANAGEMENT

Where BYOD (“bring your own device”) policies are in place, managing access to corporate applications and data requires mobile device management (MDM) and mobile application management (MAM) tools, which control approved application installation lists as well as approved Wi-Fi access points. IT can also require users to employ PINs to access their devices.

BEST PRACTICE #10: IMPLEMENT BACKUP AND DISASTER RECOVERY MEASURES

Perform frequent backups, keep a copy of recent backup data off premises, and test backups by restoring your system to make sure the process works.
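
A minimal sketch of the backup step, with placeholder paths; the off-premises copy and the restore test remain scheduled operational disciplines rather than code:

    # Write a timestamped, compressed backup archive to an off-premises mount.
    import datetime
    import tarfile

    SOURCE = "/var/www"             # what to back up (assumed)
    DEST = "/mnt/offsite/backups"   # off-premises destination (assumed)

    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = f"{DEST}/site-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname="www")
    print("Wrote", archive, "- remember to test-restore it.")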

Best practice standards and adherence for website security and mobile applications are only the beginning of an enterprise cybersecurity strategy. It’s important to remember that effective website security is an ongoing and evolving process that requires diligence, as well as the use of integrated, forward-thinking tools that protect data, users, and customers.

It’s Your Cloud: On Privacy and Security

There has been a lot in the news lately about privacy and hacking. Everyone has read about whether Apple should help the FBI gain access to a locked iPhone that was used by terrorists. We also learned about the ransomware attack on a Hollywood hospital that paid $17,000 to regain control of its computer systems.


Regardless of whether Apple provides a backdoor to gain access to an iPhone or another company’s network gets hacked, one thing is certain: Read more

Integrate the Cloud into Your Business – The Next Web

Don’t miss this great post from San Francisco-based blogger Ritika Puri, published last month on TheNextWeb.com.

The benefits of the cloud are clear: these technologies help companies scale efficiently, reduce costs, reinforce security and provide faster services to customers. With many new players entering the cloud market, businesses have a range of options to build their ideal configurations. You can implement the cloud in full or in part. You can even stage your transition to the cloud over a number of years. You’re in control.

Even so, the decision-making process can be challenging. How do you manage initial costs? How do you ensure the same security levels as your on-premises systems? How do you know whether you’re choosing the right provider? Your due diligence process should start with the following questions.

Click here to read the rest of the article: Full Article 

Protect Your Hospital’s Critical Patient Information: 7 Best Practices

Great healthcare security article from our technology partner Citrix. Make sure that your organization is ready to tackle the challenges that are in store.


A breach of patient data can be catastrophic, not only from a PR and public image standpoint, but to a hospital’s bottom line as well. The AMA conservatively estimates that the cost of breaches such as these could run into the millions. Today’s attackers have proved more creative than ever when it comes to obtaining sensitive patient data, and it is up to the organization’s IT security team to ensure that data doesn’t fall into the wrong hands. 89% of IT decision-makers say that security technologies are critical or important to creating a business advantage.

Click here to read on and discover the 7 best practices from Citrix. 

PVS vs. MCS in a VDI deployment

There’s a lot of debate in the engineering community on whether to use Provisioning Services (PVS) or Machine Creation Services (MCS) when deploying a VDI solution. There are clear differences between the two technologies, and depending on the type of deployment, important factors to consider when choosing which one to use.

MCS

MCS uses linked clone technology. This means that a snapshot is taken of the master template, which is then used as the parent disk for the clones (VDIs). This is a relatively simple method to deploy, and it doesn’t require any additional infrastructure.

Challenges of MCS 

The main challenges of MCS are related to storage and scale-out. With MCS, the clones and the parent disk must be on the same datastore in order to maintain link integrity. If the linked clones are distributed across multiple datastores, a copy of the master must be placed on each one, substantially increasing the storage requirements for a deployment. For this reason, scaling out an MCS deployment can become difficult (a back-of-envelope illustration follows the list below).

  • MCS uses about 21% more IOPS than PVS. Depending on the network infrastructure, this may be an important factor to consider for maintaining consistent performance across the VDIs.
  • MCS does not regularly “clean up” old master clones when deploying an update from a new snapshot. Instead, the old files must be manually removed in order to free up disk space.
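
A back-of-envelope illustration of the storage cost described above; the numbers are purely illustrative:

    # Illustrative only: master-image storage when MCS clones span datastores.
    master_gb = 40   # size of the master image (assumed)
    datastores = 4   # datastores the clones are spread across (assumed)

    total_master_gb = master_gb * datastores  # one full master copy per datastore
    print(f"{total_master_gb} GB of master images vs {master_gb} GB on one datastore")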

PVS

PVS uses software-streaming technology to deliver VDIs and requires additional infrastructure to support the solution. The PVS imaging wizard captures an image from the master machine and stores it in VHD format (a vDisk) on the Provisioning Server. The vDisk can then be shared (streamed) among multiple VDIs.

Technical Note: PVS utilizes a PXE boot method (BDM can also be used in the absence of DHCP) and TFTP to stream the vDisk. Additional network configuration is required to allow PXE to work in the environment.

PVS is an elegant solution, and scales well in large enterprise-class deployments. Multiple PVS servers can be built out to balance the vDisk streaming load, providing redundancy as needed. And, with the introduction of caching to device RAM, the IOPS used by a PVS deployment can be greatly reduced (<1 IOPS in some cases).

Summary

MCS is suited for small deployments (or lab scenarios) and is simple to deploy. But overall, PVS is the more robust and scalable solution for enterprise environments.

PVS requires more intensive planning, additional infrastructure, and more configuration to implement. However, once built, it requires very little ongoing maintenance. In most scenarios, it is preferable to deploy PVS rather than MCS for the reasons outlined above.


Here at Centrinet we keep up-to-date on the latest technologies – and like to make sure you do too. Contact us today to learn more about our suite of datacenter virtualization and management services.


Methods to Better Secure Your Environment

It seems like every day brings a new possible data threat. The news is filled with reports of breaches, new vulnerabilities, and the organizations making reparations to those affected.

In today’s threat-filled landscape, how can you protect your organization? Fortunately, there ARE steps you can take. To illustrate this point, I’ve outlined three methods we have used to better secure our customers’ environments.

Endpoint Security: Virtualization, thin clients, and policy implementation

The endpoint is arguably the most vulnerable area for a breach: laptops get lost, end users copy and pass data outside your environment, and PCs are left unattended for long periods of time. A combination of virtualization and the implementation of thin clients can greatly reduce the risk. Virtualization allows you to move your critical data from the insecure endpoint back to the safety and security of the data center.

Whether you utilize your own private cloud implementation, are looking for a hosted public cloud environment, or something in between, Centrinet can provide the tailor-made solution that you’re looking for. We’re also able to help you find the perfect thin clients, which will allow your users to securely access your organization’s resources. Once in place, we ensure the continued success and security of your environment by helping implement the correct policies for your business.

Security Updates: Staying current is imperative

The most often overlooked (and simplest) way to reduce the risk of a data breach is to ensure that all of your systems are up to date with their software and security patches. Far too often we see environments that have not been updated for years simply because “we haven’t had a problem”.

With each new threat that is announced, software vendors are producing patches to secure their systems. Many organizations have processes in place to handle updates for Microsoft products, but then neglect the hardware, Linux, and UNIX system updates. More and more attacks are being developed to take advantage of vulnerabilities in non-Microsoft systems.

It’s imperative to stay current on all updates – not just the Microsoft updates!

I should also mention that this is not just a PC and server issue. Every piece of infrastructure in your environment must be updated when patches are released. These days networking equipment, hardened appliances, and virtualization hypervisors are just as vulnerable to attack as your PCs and servers.

Assessments: Last, but definitely not least

Finally, one of the best ways to ensure the security of your environment is to get a third party assessment of your systems. You live and breathe your systems all day every day, and sometimes you need a second pair of eyes.

A comprehensive assessment allows you to take a step back and look objectively at your systems. This is a vital step towards ensuring the safety of your critical data. And who knows, you may even uncover some efficiency improvements for your end users!


Here at Centrinet we offer a full suite of services to ensure your security. Let us share our experience and knowledge to ensure your environment is up-to-date and running at optimal efficiency. For more information, please contact us today!


What’s Your Company’s Work From Home Policy?

Snow has hit Atlanta for the last two years in a row, with a single inch putting the city on shutdown. So far this year we have had to deal with almost a week of shutdown time due to these icy conditions. To combat this, many companies have started to implement Work From Home policies, not only in Atlanta but nationwide. With these policies going into effect, certain questions arise, namely: can an employee be as productive from home as they are in the office?

Yes, with a solid Work From Home policy we know you can maintain that same level of productivity. However, you must ensure your policy is developed with the following three factors in mind:

  1. Communication – How effective is the communication method built into your Work From Home policy? With today’s technology, high-speed internet is a necessity for most households. With the standardization of 4G LTE networks, people now carry mobile devices with connections comparable to cable modems. This makes working from home while maintaining a high-quality user experience a reality. Additionally, most cell phone carriers have adopted unlimited calling plans, which make communication truly “free”. And employees now have the ability to email and text from their personal devices (computers, tablets, and smartphones). With these advances we can now be anywhere, anytime, while maintaining a high level of business productivity.
  2. Accessing data and applications – How employees access their corporate data and applications is critical to a successful Work From Home policy. In today’s technological environment most of us expect to have the same user experience whether in the office, at home, in a coffee shop, or on the beach. A good Work From Home policy must allow employees access to a quality user experience from any environment.
  3. Security – Security is the number one concern for the majority of business decision makers. Protecting a business’s intellectual property, preventing data breaches, and ensuring user flexibility while keeping everything under control can be a challenging proposition. This is something many businesses face, but unfortunately few are able to successfully tackle.

Here at Centrinet we’ve been providing successful Work From Home policies for our enterprise clients across a wide range of sectors and industries since 2005. We make sure to always provide the best user experience, whether you are working from the office or remotely. In the age of the cloud, we have datacenters from the East Coast to the West Coast, ensuring our clients always have the best user experience and secure 24/7 access to all data and applications.

We’d love to help you take the unplanned network downtime, data security risks, and high overhead costs out of storing and accessing your critical business information. Please contact us for more information.

A Review of Dell’s vWorkspace

Last week Centrinet attended a vWorkspace training session at the Dell Concourse Parkway site in Atlanta, GA. The session was held to introduce the partners to the new product, along with a demonstration of its capabilities.

We were very impressed and found the product to have some great new features. The administration console was very user-friendly with an intuitive layout; now one can access all of the configurations from a single pane of glass. Additionally, vWorkspace supports most of the major hypervisors in the market today, so it’s positioned to be a competitive low-cost solution for VDI and Terminal Server deployments.

When building out vWorkspace with a Hyper-V hypervisor, you can leverage a new feature in vWorkspace 8.x called HyperCache. From Dell’s internal documentation:

HyperCache provides read Input/Output Operations Per Second (IOPS) savings and improves virtual desktop performance through selective RAM caching of parent VHDs. This is achieved through the following:

  • Read requests to the parent VHD are directed to the parent VHD cache.
  • Requested data that is not in the cache is obtained from disk and then copied into the parent VHD cache.
  • Child VMs requesting the same data find it in the parent VHD cache, providing a faster virtual desktop experience.
  • Requests are processed until the parent VHD cache is full.
  • The default cache size is 800 MB, but it can be changed through the Hyper-V virtualization host properties.
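
The mechanism reads like a classic read-through RAM cache. Here is a simplified Python sketch of the behavior described above; this is an illustration of the pattern, not Dell’s implementation:

    # Read-through cache: serve repeat reads from RAM, fill until the cache is full.
    CACHE_LIMIT = 800 * 1024 * 1024  # default 800 MB per the documentation

    cache = {}        # block_id -> bytes held in the parent VHD cache
    cache_bytes = 0

    def read_block(block_id, read_from_disk):
        global cache_bytes
        if block_id in cache:            # hit: served from RAM
            return cache[block_id]
        data = read_from_disk(block_id)  # miss: read the parent VHD on disk
        if cache_bytes + len(data) <= CACHE_LIMIT:
            cache[block_id] = data       # copy into the cache until it is full
            cache_bytes += len(data)
        return data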

This is a very nice feature and greatly improves the performance and user experience of the desktops.

The way that disk usage and workloads are managed makes vWorkspace highly efficient. During bake-off tests, Dell found that the desktop density of vWorkspace deployments was much higher (as much as 25% in some cases) than other solutions in the market.

By offering compatibility with a variety of low-cost storage solutions, and support for multiple desktop deployment technologies, vWorkspace makes a very attractive option for companies looking to deploy new desktop solutions.

Notable Citrix Updates

A couple of weeks back I was able to attend the 2015 Citrix Summit in Las Vegas. The event was a great resource – with sales education, technical deep dives, and an array of new programs. One great bonus was the fact that it was held in January, so I didn’t have to suffer the 110°+ heat of a Las Vegas summer.

There are three releases I want to make special note of – these are releases we have had success with, along with great client feedback. So without further ado, here are my top three notable Citrix updates from the Summit:

  1. XenApp and XenDesktop 7.6 updates: There have been changes in graphics engines, with H.264 becoming the new industry standard. With the release of the 7.6 updates, Citrix is moving in that direction as well.
  •   For detailed product information please click here.
  2. PVS Cache in RAM, with overflow to Hard Disk (HD): This is a great new feature of PVS, because it can eliminate the need for expensive storage. We have clients currently using this feature, and they can’t tell the difference between 3PAR SATA hard drives and Whiptail SSDs.
  •   For detailed product information please click here.
  3. GPU into XA/XD: We have successfully deployed AutoCAD for some of our clients via XenApp, and so far the feedback has been great. One client said, “The performance is as good as if it were locally installed”.
  •   For detailed product information please click here.

VMware Thin and Thick Client Provisioning: A Brief Overview

VMware’s release of vSphere 4.0 provided customers with the option to use either Thick or Thin Provisioning for virtual machine disks. With the release came debates on which version to use, along with the pros and cons of each.

Thick and thin provisioning are not as different as you might first assume: both present a full-size virtual disk to the guest operating system, and the difference lies in when that space is actually allocated on the datastore. By going through the differences with you here, we will hopefully help you see how one might benefit your company’s environment over the other.
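
A rough analogy in ordinary file terms, sketched in Python: a thin-provisioned disk behaves like a sparse file whose blocks are allocated only when written, while a thick-provisioned disk is fully allocated up front (block reporting assumes a Unix filesystem that supports sparse files):

    # Thin disk ~ sparse file; thick disk ~ fully pre-allocated file.
    import os

    SIZE = 100 * 1024 * 1024  # a 100 MB "virtual disk" for the demo

    with open("thin.img", "wb") as f:
        f.truncate(SIZE)            # sparse: logical size only, few blocks used

    with open("thick.img", "wb") as f:
        f.write(b"\0" * SIZE)       # pre-allocated: every block written (zeroed)

    for name in ("thin.img", "thick.img"):
        st = os.stat(name)
        print(name, "logical:", st.st_size, "on disk:", st.st_blocks * 512)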

Thin Provisioning

Thin provisioning is based around the concept of saving disk space on your datastores. It allows you to over-allocate disk space for your virtual machines, because thin-provisioned disks don’t reserve their full size on the file system (VMFS) up front; space is consumed only as data is written.

Thin Provisioning Pros

  • Virtual machine disk usage is minimal.
  • Cuts down on storage costs.
  • Allows an organization to maximize the use of space in a storage array.
  • Reduces the threat of data loss.

Thin Provisioning Cons

  • The possibility that you can run out of space on an over-allocated data store.
  • Requires closer storage oversight.
  • Eliminates the possibility of using some of vSphere’s advanced features – such as Fault Tolerance.
  • May carry a performance penalty. As new space is needed for a thinly provisioned disk to expand, vSphere must allocate the space and zero it out. If you are in an environment where top performance is paramount, don’t use thin provisioning.

Thick Provisioning

Thick provisioning is based on the concept of allocating the virtual machine disk’s full size on the datastore at the time of creation, reserving all necessary space up front.

Thick Provisioning Pros

  • Prevents over-provisioning your datastores, ensuring you don’t have any downtime.
  • You’ll receive the best performance with eager-zeroed disks, since all of the blocks are pre-zeroed at creation, cutting out the need to zero them during normal operations.

Thick Provisioning Cons

  • Thick provisioning will consume your storage space much faster.
  • There’s the very real possibility of wasting disk space on empty blocks of data.

Thick Options

  1. Lazy Zeroed Thick is a provisioning format in which the virtual machine reserves the space on the VMFS. The disk blocks are only zeroed out on the back-end data store when they are first written to by the virtual machine.
  2. Eager Zeroed Thick is a provisioning format in which the virtual machine reserves all the space on the VMFS and zeros out the disk blocks at the time of creation. Creating a virtual machine with this type of provisioning may take a little longer, but its performance is optimal from deployment because there’s no overhead in zeroing out disk blocks on demand. This means no additional work for the data store during the zeroing operation.
