PVS vs. MCS in a VDI deployment

There’s a lot of debate in the engineering community about whether to use Provisioning Services (PVS) or Machine Creation Services (MCS) when deploying a VDI solution. There are clear differences between the two technologies and, depending on the type of deployment, important factors to consider when choosing between them.


MCS uses linked clone technology. This means that a snapshot is taken of the master template, which is then used as the parent disk for the clones (VDIs). This is a relatively simple method to deploy, and it doesn’t require any additional infrastructure.

Challenges of MCS 

The main challenges of MCS are storage and scale-out related. With MCS, the clones and the parent disk must be on the same datastore in order to maintain link integrity. If the linked clones are distributed across multiple datastores, a copy of the master must be as well – substantially increasing the storage requirements for a deployment. For this reason, scaling out an MCS deployment can become difficult.

  • MCS uses about 21% more IOPS than PVS. Depending on the storage infrastructure, this may be an important factor to consider for maintaining consistent performance across the VDIs.
  • MCS does not regularly “clean up” old master clones when deploying an update from a new snapshot. Instead, the old files must be manually removed in order to free up disk space.
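To make the storage impact concrete, here is a rough sizing sketch: because each datastore that holds linked clones needs its own copy of the master disk, the master's footprint multiplies with the number of datastores. The function and numbers below are illustrative assumptions, not figures from any particular deployment.

```python
def mcs_storage_gb(master_gb, datastores, clones, delta_gb_per_clone):
    """Rough MCS storage estimate: every datastore holding linked
    clones needs its own full copy of the master disk, plus a
    differencing (delta) disk per clone.  Illustrative only --
    real sizing depends on the hypervisor and storage features."""
    return master_gb * datastores + delta_gb_per_clone * clones

# e.g. a 40 GB master spread across 4 datastores for 200 clones,
# each clone accumulating ~5 GB of writes:
print(mcs_storage_gb(40, 4, 200, 5))  # 160 GB of master copies + 1000 GB of deltas = 1160
```

Note how the first term grows every time you add a datastore to scale out, independent of how many clones you actually run there.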

PVS uses software-streaming technology to deliver VDIs, and requires additional infrastructure to support the solution. The PVS Imaging Wizard captures an image from the master machine and stores it in VHD format (a vDisk) on the Provisioning Server. The vDisk can then be shared (streamed) by multiple VDIs.

Technical Note: PVS typically uses PXE to boot target devices (a Boot Device Manager, or BDM, can be used where PXE/DHCP isn’t available) and TFTP to deliver the bootstrap file; the vDisk itself is then streamed by the PVS streaming service. Additional network configuration is required to allow PXE to work in the environment.
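In a typical PXE setup, the DHCP scope tells target devices where the TFTP server is and which bootstrap file to fetch. The sketch below shows the two scope options commonly used for this; the server name is a hypothetical placeholder, while ARDBP32.BIN is the standard PVS bootstrap file name.

```python
# DHCP scope options commonly used to support PXE boot for PVS targets.
# The server name is hypothetical; adjust to your environment.
pvs_pxe_options = {
    66: "pvs01.example.local",  # Boot Server Host Name (the TFTP server)
    67: "ARDBP32.BIN",          # Bootfile Name (the PVS bootstrap file)
}

for opt, value in sorted(pvs_pxe_options.items()):
    print(f"DHCP option {opt}: {value}")
```

Environments that cannot modify DHCP in this way are exactly where the BDM alternative mentioned above comes in.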

PVS is an elegant solution, and scales well in large enterprise-class deployments. Multiple PVS servers can be built out to balance the vDisk streaming load, providing redundancy as needed. And, with the introduction of caching in device RAM, the IOPS consumed by a PVS deployment can be greatly reduced (to less than 1 IOPS per desktop in some cases).


MCS is suited for small deployments (or lab scenarios) and is simple to deploy. But overall, PVS is the more robust and scalable solution for enterprise environments.

PVS requires more intensive planning, additional infrastructure, and more configuration to implement. Once built, however, it requires very little ongoing maintenance. In most scenarios, PVS would be preferable to MCS for the reasons outlined above.


Here at Centrinet we keep up-to-date on the latest technologies – and like to make sure you do too. Contact us today to learn more about our suite of datacenter virtualization and management services.


What’s so Special About a Citrix Engineer?

This past January I had the pleasure of participating in the Citrix Partner Technical Expert Council (PTEC) held in Las Vegas. Over 200 partners, myself included, were able to communicate directly with the various Citrix product managers and give our feedback. It’s great to know that Citrix is still on top of its game and gets partners involved in the product development cycle.

During one of the casual events I had a pretty interesting chat with a couple of my fellow participants. One thing we all seemed to agree on was the difficulty in finding senior level talent within our field – especially senior level talent competent in both Citrix virtualization and networking practices.

For many organizations, the Citrix engineers (Citrix/VDI/RDS) are seen as responsible for the whole presentation layer and all of its related parts. Usually this means that when a user encounters any issue in that presentation layer (applications, the network, storage, etc.), they will look first to the Citrix engineer.

In order to adapt, Citrix engineers have needed to develop other skills to prove their innocence. They’ve had to expand their areas of expertise to encompass other layers of the environment: networking, applications, databases, storage, directory services, and anything else related to Citrix/VDI/RDS. Although still specialized in the presentation layer, most Citrix engineers end up becoming a bit of an IT generalist thanks to the knowledge of those layers they accumulate over time.

It’s hard to train anyone to become a subject matter expert in one area, but it’s even harder to train someone as a subject matter expert in multiple areas. This is what makes it so difficult to find experienced and competent Citrix engineers.

Here at Centrinet we are an engineer-driven company. Always striving to stay on top of the most cutting-edge technologies, our engineers deliver solutions that consistently exceed client expectations. Please contact us with any questions – we’d love to help!

A Review of Dell’s vWorkspace

Last week Centrinet attended a vWorkspace training session at the Dell Concourse Parkway site in Atlanta, GA. The session was held to introduce the partners to the new product, along with a demonstration of its capabilities.

We were very impressed and found the product to have some great new features. The administration console was very user-friendly with an intuitive layout; now one can access all of the configurations from a single pane of glass. Additionally, vWorkspace supports most of the major hypervisors in the market today, so it’s positioned to be a competitive low-cost solution for VDI and Terminal Server deployments.

When building out vWorkspace with a Hyper-V hypervisor, you can leverage a new feature in vWorkspace 8.x called HyperCache. From Dell’s internal documentation:

HyperCache provides read Input/Output Operations Per Second (IOPS) savings and improves virtual desktop performance through selective RAM caching of parent VHDs. This is achieved through the following:

  • Read requests to the parent VHD are directed to the parent VHD cache.
  • Requested data that is not in the cache is obtained from disk and then copied into the parent VHD cache.
  • Child VMs requesting the same data find it in the parent VHD cache, providing a faster virtual desktop experience.
  • Requests populate the cache until the parent VHD cache is full.

The default cache size is 800 MB, but it can be changed through the Hyper-V virtualization host properties.
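The behavior described above is essentially a read-through RAM cache in front of the parent VHD. The toy model below illustrates that logic; it is a sketch of the concept, not Dell’s implementation, and all names are illustrative.

```python
class ParentVhdCache:
    """Toy model of HyperCache-style read caching: reads are served
    from RAM when the data is present; misses go to disk and are
    copied into the cache until it is full, after which further
    misses go straight to disk.  Sizes are in MB; illustrative only."""

    def __init__(self, capacity_mb=800):   # 800 MB default, per the docs above
        self.capacity_mb = capacity_mb
        self.used_mb = 0
        self.cache = {}                    # block id -> cached data

    def read(self, block_id, block_mb, read_from_disk):
        if block_id in self.cache:         # hit: served from RAM
            return self.cache[block_id]
        data = read_from_disk(block_id)    # miss: fetch from disk
        if self.used_mb + block_mb <= self.capacity_mb:
            self.cache[block_id] = data    # populate until the cache is full
            self.used_mb += block_mb
        return data

# Two child VMs reading the same block touch the disk only once:
disk_reads = []
def disk(block_id):                        # stand-in for a real VHD read
    disk_reads.append(block_id)
    return f"data-{block_id}"

cache = ParentVhdCache(capacity_mb=800)
cache.read("boot-sector", 4, disk)
cache.read("boot-sector", 4, disk)         # second request hits the RAM cache
print(disk_reads)                          # only one disk read occurred
```

This is why the benefit is largest for pooled desktops: many children share one parent VHD, so their common boot and application reads overlap heavily.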

This is a very nice feature and greatly improves the performance and user experience of the desktops.

The way that disk usage and workloads are managed makes vWorkspace highly efficient. During bake-off tests, Dell found that the desktop density of vWorkspace deployments was much higher (as much as 25% in some cases) than other solutions in the market.

By offering compatibility with a variety of low-cost storage solutions, and support for multiple desktop deployment technologies, vWorkspace makes a very attractive option for companies looking to deploy new desktop solutions.


Software Defined Storage (SDS): A Centrinet Case Study

I recently completed a very interesting client project based on Microsoft Hyper-V and Microsoft Storage Spaces, and wanted to share the details here today.

The client had been using a 2008 R2 Hyper-V CSV cluster with iSCSI storage, which was complex to manage. The iSCSI storage was costly, yet delivered poor performance over the 1 GbE iSCSI network. After looking at some alternative solutions, we proposed that they utilize the local storage in their existing 2U DL380 G7 servers with Hyper-V 2012 R2.

Details and Setup 

We used four Hyper-V 2012 R2 hosts, each running local storage tiering with four 450 GB 10K SAS drives and four 400 GB SSDs. Additionally, we enabled Windows data deduplication on the file system. The end result maximized performance (since VMs run locally) while still conserving storage (through deduplication).
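To give a feel for the per-host numbers, here is a back-of-the-envelope capacity sketch. The 1.5:1 dedup ratio is an assumption for illustration (real ratios vary by workload), and the model deliberately ignores mirror/parity overhead and tiering behavior.

```python
def effective_capacity_gb(sas_gb, sas_count, ssd_gb, ssd_count, dedup_ratio=1.0):
    """Very rough per-host capacity for a tiered local-storage layout:
    raw disk capacity scaled by an assumed deduplication ratio.
    Ignores mirror/parity overhead and tiering behaviour entirely."""
    raw = sas_gb * sas_count + ssd_gb * ssd_count
    return raw * dedup_ratio

# 4x 450 GB SAS + 4x 400 GB SSD = 3,400 GB raw per host; with an
# assumed 1.5:1 dedup ratio that holds roughly 5,100 GB of logical data.
print(effective_capacity_gb(450, 4, 400, 4))                   # raw: 3400
print(effective_capacity_gb(450, 4, 400, 4, dedup_ratio=1.5))  # logical: 5100.0
```

The SSD tier is what carries the hot VM blocks, which is where the performance gains described below come from.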

This setup met the requirements for cost-effective, high-performance storage and simple management, but lacked the redundancy that shared storage would provide.

To handle this, we implemented Hyper-V Replica to a fifth Hyper-V node (the four Hyper-V hosts continuously replicate VM changes to the fifth node). If any of the Hyper-V hosts experiences hardware issues, its VMs can be brought up on the replica node in minutes. Additionally, we put a robust backup in place, which backs up VMs locally and replicates everything offsite.


The client was thrilled with the results. The initial backup rate was 25-35 Mbps on the shared iSCSI SAN; after project completion, the backup rate increased to 200-250 Mbps on local storage.

Active VMs now run on SSDs, versus the slow mechanical hard drives of the past. Hyper-V Replica gives the client an ability they’ve never had before – the ability to quickly restore VMs on the Hyper-V Replica node. In less than 5 minutes the client can bring back a failed VM, instead of having to suffer through the entire restoration process. Additionally, the Replica keeps 8 copies of each VM within a 24-hour period – there’s some reassurance!


This architecture fundamentally changes the way we normally deploy virtualization solutions. Instead of having to acquire expensive blade servers, high-performance SANs, and high-performance storage networks, this architecture gives businesses the ability to use traditional rack-mount servers to start virtualization projects. Now each server can potentially support 200-250 users, and the client doesn’t need to break the bank to have an in-house virtualization solution.

This is a perfect example of utilizing software-defined storage (SDS). The only added cost to the client was the solid-state drives (SSDs). All of the necessary software features are included with Microsoft Server 2012 R2. And although this is not a silver-bullet solution, it is a great alternative for anyone willing to look outside the box and move IT one step further from being a cost center.



Liquidware Labs Partner Solutions Brief

Our valued partner – Liquidware Labs – recently released a solutions brief on The Vital Role of Robust Metrics in VDI Maintenance. The brief highlights the importance of their Stratusphere UX in supporting the delivery of managed VDI services. As one of only three Liquidware Labs Acceler8 partners to have achieved the Center of Excellence (COE) designation, we have a deep understanding of deploying successful and effective desktop virtualization projects utilizing Liquidware Labs solutions.

From the beginning we recognized the need to find innovative and purpose-built VDI tools in order to maintain our standards of customer service. This search initially led us to Liquidware Labs Stratusphere, which provided the full range of desktop visibility across physical, virtual and RDSH desktops. New trends, and changing VDI environments, brought us to adopt Stratusphere UX for health checks and performance monitoring.

“With Stratusphere UX, we are sure we are doing the right thing by our customers. We are 100% positive that we are deploying products that don’t introduce problems, headaches, etc. That way we save time and effort for both our consultants, and especially for our clients, as we fast-track them to the right path.” – Dario Ferreira, Executive Vice President of Centrinet

Read the full solutions brief here.

About Liquidware Labs

Liquidware Labs is the leader in User Experience Management for next-generation desktops. Analysts have described the Liquidware Labs Stratusphere and ProfileUnity solutions as the industry’s first “On-Ramp to VDI”. Liquidware Labs enables organizations to cost-effectively plan, migrate, and manage their next-generation desktop infrastructure using the industry’s best practices.

Centrinet is one of only three partners worldwide to have achieved the Center of Excellence (COE) designation from Liquidware Labs. As a designated COE we demonstrate the highest level of knowledge in desktop virtualization, and have integrated Liquidware Labs technologies into our delivery to ensure superior service to our clients.

VDI Predictions for 2015

Consolidate, save, and work from anywhere.

The past five years have seen a number of industries and sectors adopt Virtual Desktop Infrastructure (VDI). Adopting VDI is a great way to allow your organization’s IT department to simplify the desktop management process; the software creates the desktop images, allowing you to consolidate management to a single location, cutting time and costs.

VDI also gives your users far more options for accessing their work. They can securely access business applications from their personal electronic device (a laptop, smartphone, tablet, or PC) to complete tasks from remote locations. This can be especially beneficial when looking to acquire talent that needs to work remotely.

There are some high costs to consider, fixed and variable.

It is true that many businesses go through growing pains when transitioning to VDI. Initially, it requires some complex infrastructure to get fully set up. Normally a business will need the following:

  • SAN storage
  • High performance networking equipment
  • Profile management solutions
  • Application delivery solutions
  • Virtualization management solutions
  • And most importantly, an IT team to work together and run the process.

Making the switch will also require your business to embrace some budgeting changes. VDI has a high number of fixed costs involved with set-up; with few exceptions, your business must purchase all equipment up front. Additionally, your IT staff will need to be trained in the new technology.

Frustrations exist even for end users – when there is an issue on the SAN, a server, or a switch, it has a wide-reaching effect. Businesses must be aware of all of these aspects when transitioning to VDI and understand the costs involved.

Collaboration and increased productivity will continue to push demand in 2015.

Despite all of this, IT departments, businesses, and end users all fully realize the benefits of VDI. VDI is not a silver bullet, but instead it’s a way to embrace end users and IT together. VDI has the ability to elevate your level of team collaboration, polished processes, business creativity, and production.

Moving forward into 2015, I predict that the increased demand for the products will push software vendors and system integrators to close the gap. The increased demand will also drive IT personnel to stay competitive through educating themselves in the latest VDI technologies. Over time the initial costs (as well as the upkeep and end user frustrations) will become much less of a burden.

Virtual Desktop Infrastructure and Your IT Budget

Desktops and their management typically occupy a significant portion of an IT budget. The three areas that occupy the largest piece of the pie are hardware costs, software costs, and the costs related to the physical environment.

  1. Hardware Costs: Most organizations tend towards a 3-4 year replacement cycle on their PCs. This means that IT needs to budget for 25-33% of their desktops to be replaced annually. They must also maintain a PC and helpdesk department to handle the troubleshooting and management of all hardware.
  2. Software Costs: Minimally, each PC needs the OS locally installed – if not all of the applications as well. Every upgrade and patch to the OS and applications deployed on the desktops can become a very costly (and complex) project.
  3. Physical Environment Costs: There are other fixed and variable costs that generally go unnoticed, but are associated with the desktop equipment. The power the PCs draw, the cooling for the heat they produce, and the actual space needed for a typical PC footprint are all examples of physical environment costs.
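The hardware-cost figure above follows directly from the refresh cycle: under a simple 1/n model, a 3-4 year cycle means roughly a quarter to a third of the fleet is replaced each year. A quick sketch of that arithmetic:

```python
def annual_replacement_share(cycle_years):
    """Fraction of the PC fleet replaced each year under a fixed
    hardware refresh cycle (simple 1/n steady-state model)."""
    return 1 / cycle_years

# A 3-4 year cycle implies budgeting for roughly 25-33% of desktops
# to be replaced annually:
print(round(annual_replacement_share(4) * 100))  # 25
print(round(annual_replacement_share(3) * 100))  # 33
```

The same model shows why stretching the cycle to 7-10 years with thin clients (as discussed below) cuts the annual share to 10-14%.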

Imagine only needing to replace your end device every 7-10 years. Imagine the management of your PC environment can be done from anywhere. Imagine you are able to handle all major OS and application upgrades – throughout your entire business – in just a few minutes.

Hardware Costs

You can now accomplish all of the above with a virtual desktop infrastructure (VDI) deployment. Today’s PCs are built to last approximately three years. During this time a PC might have fan failures, bad hard drives, and blown power supplies.  All of these contribute to the total cost of ownership, as well as the maintenance and management costs. With a virtual desktop, companies can replace PCs with thin clients. Thin clients are built to last approximately seven years. Most thin clients have solid-state components and operate without a fan. And, most thin clients are priced less than the PCs they replace.

Software Costs

Virtual desktop deployment is usually done from a centralized data center or a public cloud.  In either case, the management of the environment can easily be accomplished from anywhere. All one needs is an internet and/or network connection. Virtual desktops are built to be mobile.  No longer will your IT support staff be stretched across your environment, running from desktop to desktop.  Everything needed to troubleshoot and manage the VDI is centralized and available from anywhere.

A great example of this is the ease with which you can upgrade your company’s OS.

  1. Create a virtual desktop with the new OS and applications.
  2. Test everything to ensure it all works as expected.
  3. Deploy the images to the production virtual machines.
  4. Reboot the system.

Once rebooted, everyone will be up and running on the new OS. If you find any issues with the new image, you can revert by simply rebooting the virtual desktops back to the old image. No more worrying about remote deployments, installing the new OS on each PC individually, or hardware drivers in your OS image. And this same process can be used for patches and application upgrades. Ultimately, your costs and concerns will be significantly reduced.
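Conceptually, this upgrade-and-rollback flow works because pooled desktops boot from whichever image the collection currently points at: an upgrade is a pointer swap plus a reboot, and rollback is the same swap in reverse. The sketch below models that idea; the class and method names are illustrative, not a real VDI API.

```python
class PooledCollection:
    """Toy model of the pooled-image update flow: the collection keeps
    one current image and remembers the previous one for rollback.
    Illustrative only -- not any vendor's actual management API."""

    def __init__(self, image):
        self.image = image        # image desktops boot from
        self.previous = None      # kept in case we need to revert

    def deploy(self, new_image):
        # Swap in the new image; desktops pick it up at next reboot.
        self.previous, self.image = self.image, new_image

    def rollback(self):
        # Revert to the prior image if one exists.
        if self.previous is not None:
            self.image, self.previous = self.previous, self.image

pool = PooledCollection("win7-gold-v1")
pool.deploy("win7-gold-v2")       # step 3: deploy the tested image
pool.rollback()                   # issues found: reboot back to v1
print(pool.image)                 # win7-gold-v1
```

Contrast this with per-PC installs, where "rollback" would mean re-imaging every machine individually.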

Physical Environment Costs

Thin clients produce significantly less heat and save on power. Additionally, virtual desktops are far more flexible than a PC environment – allowing your staff to connect to a virtual desktop from almost any device in any location. You can have your full desktop on your tablet or phone. Work from anywhere.

The potential cost reduction from your users’ increased access to your proprietary data is huge. And, with all of the data residing in a secured data center, the risk of losing proprietary or government-regulated data is extremely low.

If you are searching for a way to reduce your overall IT budget – and lengthen the desktop life cycle – then it’s time to look at deploying virtual desktops. Give us a call to set up a consultation.

Part III – Updating a Pooled Desktop Image

Applies to: Windows Server 2012 and 2012 R2

Part III

In Part II we saw how to deploy and publish a Windows 7 pooled virtual desktop. Now let’s take a look at the steps needed to make a change or update to our image and make it available to users. To recap our environment: we have a Windows 7 pooled desktop collection available to our users, created from a master virtual machine called Win7master. Part of the process of creating our collection was running sysprep on the master image. As a good-practice measure, I created a checkpoint of the virtual machine prior to running sysprep, so I can easily revert the virtual machine back to its normal, pre-sysprep state. Read more

Part II – Publishing a Windows 7 Pooled Desktop

Applies to: Windows Server 2012 and 2012 R2

Part II

Now with our RD Virtualization Host deployed as per Part I of deploying VDI for RDS 2012, we are ready to publish a Windows 7 pooled VDI desktop. This section will cover the deployment of a pooled desktop and the options available when deploying it. The first thing we will need is a Windows 7 virtual machine; I’ve pre-created this machine with some applications installed on the image. Once you are done installing applications and configuring the Windows 7 image, we will need to prepare it for deployment by running Sysprep with the following options. Read more

Part I – Deploying VDI for RDS 2012 / 2012R2

Applies to: Windows Server 2012 and 2012 R2

In previous articles, we looked at the deployment steps of a traditional form of Remote Desktop Services (RDS) for 2012 and 2012 R2. Now let’s take a look at the setup of VDI for a 2012 RDS farm. This will be broken down into three parts. In this first part, we will go through the process of deploying the RD Virtualization Host role to a single Hyper-V server in an existing 2012 RDS farm. Then in the second part, we will go through the process of creating a desktop collection and publishing a Windows 7 pooled VDI desktop. Finally in part three, we will go through the process of maintaining a desktop image for a pooled desktop. This portion will cover the maintenance and updating of the main image in a pooled VDI desktop environment.

Read more