Saturday, December 20, 2014

VDI provisioning using cloudstack

The latest release of Citrix's VDI and app virtualization suite, XenApp (XenDesktop) 7.5, comes with many exciting new features, and the one that really caught my attention is support for CloudStack (CloudPlatform) and AWS.

  I think that this is an important step from Citrix to keep a competitive advantage on competing technologies like Microsoft RDS. At least for us at SBP it is important because the customer environments that we host are moving to ‘the Cloud’ (that is built upon Apache Cloudstack (ACS) 4.3) as much as possible.
Before XenApp 7.5 was released it was of course already possible to spin up instances in CloudStack and install the various XenApp components, so what has changed? What Citrix actually delivered with XenApp 7.5 is that you can now use Machine Creation Services (MCS) with CloudStack, and also with Amazon AWS. Without this integration you had two options for provisioning workers like XenApp servers:
    1) Citrix Provisioning Services (CPS)
    2) Provision XenApp servers from CloudStack templates.
  We tried the feature by installing Apache CloudStack version 4.3 in a lab environment, but everything described in this document was equally valid for our production ACS.
The way to work around that is to create an ACS user per environment and give this ACS user (a sort of service account) permissions on the network:

Cloudstack Networks

Normal user permissions are sufficient to use MCS with these networks. Citrix Studio is then configured to use these ACS user accounts to communicate with ACS:

Citrix Studio Hosting

To configure this, simply copy the API and Secret keys of the ACS user account into the connection properties of Citrix Studio. As a result, all jobs on ACS run with these credentials.
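Under the hood, "all jobs run with these credentials" means every CloudStack API call is signed with that account's key pair. As a rough illustration (this is not Studio's actual code, just CloudStack's documented signing scheme), the signing step looks like this:

```python
import base64
import hashlib
import hmac
import urllib.parse


def sign_request(params: dict, api_key: str, secret_key: str) -> str:
    """Build a signed CloudStack API query string.

    CloudStack's scheme: add the API key, sort the parameters by name,
    URL-encode the values, lower-case the resulting query string, and
    compute an HMAC-SHA1 over it with the account's secret key.
    """
    params = dict(params, apikey=api_key, response="json")
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='*')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    return f"{query}&signature={signature}"


# Hypothetical keys for illustration; the real ones come from the ACS
# user account created per environment.
qs = sign_request({"command": "listVirtualMachines"}, "my-api-key", "my-secret")
```

The resulting query string is appended to the ACS API endpoint URL; because the account's secret key never leaves the client, ACS can attribute and authorize every job to that per-environment service account.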
Next, we had to give these accounts permissions on all instances and templates within ACS that are used by Citrix Studio and live in these networks. In our case the Storefront servers are also in these networks, so these also had to be linked to these accounts. The Volume Worker template also needed to be linked to these accounts (which means a separate Volume Worker template per environment), and the Golden Image server as well. The last thing we needed to be aware of is that the instances that are not deployed with MCS but do live in these networks (Storefront, Golden Image server) needed to be deployed with the API and Secret keys of these accounts. Within SBP we use Chef for our deployments, so we had to adjust our knife.rb files with these keys. The end result is that we have also nicely separated our TAP (Test, Acceptance, Production) environments within ACS.
Another thing we ran into had to do with the service offerings that we used. Normally our instances in ACS have a service offering that is High Availability (HA) enabled: if the instance goes down, ACS automatically tries to start it again. In our case (we are hosting mission-critical environments) that is of course of the essence. But those offerings cannot be selected by MCS when new XenApp servers are deployed, because during provisioning the Volume Worker instructs the new XenApp instance to shut down and stay down, while ACS starts the instance again, which results in a failed provisioning process. So during deployment we select an offering that is not HA enabled, and when all is done we make sure the instance gets an HA-enabled service offering.
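The offering swap after deployment can be scripted against the CloudStack API. `changeServiceForVirtualMachine` is a real API command, but it requires the instance to be stopped first; the `api` helper below is a hypothetical thin wrapper that is assumed to send a signed API request and return the parsed response:

```python
def swap_to_ha_offering(api, vm_id: str, ha_offering_id: str) -> None:
    """Move a freshly MCS-provisioned XenApp instance from the non-HA
    service offering used during deployment to the HA-enabled one.

    `api(command, **params)` is a hypothetical wrapper around a signed
    CloudStack API call. changeServiceForVirtualMachine only works on a
    stopped instance, hence the stop/change/start sequence.
    """
    api("stopVirtualMachine", id=vm_id)
    api("changeServiceForVirtualMachine", id=vm_id,
        serviceofferingid=ha_offering_id)
    api("startVirtualMachine", id=vm_id)
```

In a production script you would also poll the async job results between steps (CloudStack returns a job id for these commands) rather than firing the three calls blindly.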
What would also be handy is if MCS could read a file of specific machine name / IP address combinations and create the new XenApp instances accordingly. Currently you only have the option to provide a machine name from a range (e.g. HOSTNAME##, where ## is a unique number in a range). In our case the Test and Acceptance instances run in one specific datacenter and our naming convention dictates that these are odd numbered, so deploying a number of these machines in one batch is normally not possible. As a workaround we create a few even-numbered AD objects first, so that MCS only deploys the odd ones; after the deployment we remove the even-numbered AD objects again. Because we cannot instruct MCS to use specific IP addresses, ACS selects them, which means that after deployment we have to verify which IPs are used and record them in our IP numbering plan. In itself not really an issue, but if we also needed to restrict traffic for specific XenApp servers using firewalls, it would become more cumbersome. Not really something for enterprise scale.
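The even-numbered-placeholder trick is easy to script. A minimal sketch (the prefix and zero-padding below are our own convention, not anything MCS prescribes):

```python
def numbered_names(prefix: str, start: int, count: int,
                   parity: int, width: int = 2) -> list:
    """Return `count` machine names of the requested parity (0 = even,
    1 = odd), counting upward from `start`, in the HOSTNAME## range
    style that MCS uses."""
    n = start if start % 2 == parity else start + 1
    names = []
    while len(names) < count:
        names.append(f"{prefix}{n:0{width}d}")
        n += 2
    return names


# Hypothetical prefix "XAT" for a Test/Acceptance XenApp batch.
# Odd-numbered names that MCS should actually deploy:
to_deploy = numbered_names("XAT", 1, 3, parity=1)   # XAT01, XAT03, XAT05
# Even-numbered names to pre-create (and later delete) as AD objects,
# so that MCS skips them:
placeholders = numbered_names("XAT", 1, 3, parity=0)  # XAT02, XAT04, XAT06
```

The placeholder list is what we pre-stage in Active Directory before kicking off the MCS catalog update, and remove again afterwards.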
One last suggestion for Citrix is to improve the error handling of the provisioning process in Citrix Studio. Both the logging node in Studio and the Actions tab show only high-level progress information. An example we faced was that our primary storage became almost, but not completely, full. ACS uses a threshold: when storage capacity utilization exceeds a configured level, new instances are no longer deployed. As a result the provisioning process fails. Studio only informs you that a disk is being copied and then shows a generic error message. We had to dive into the ACS logs to find out that the reason was that no suitable storage could be found. It would have been nice if Studio had picked this up and translated it into a suitable error message.
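If I read the ACS settings correctly, the threshold in question is the global setting `pool.storage.capacity.disablethreshold` (default 0.85 in our 4.3-era ACS; verify against your version). The check the allocator makes amounts to:

```python
def storage_allows_deployment(used_bytes: int, total_bytes: int,
                              disable_threshold: float = 0.85) -> bool:
    """Primary storage whose utilisation exceeds the disable threshold is
    skipped by the CloudStack allocator, so provisioning a new instance
    fails with 'no suitable storage'. The 0.85 default mirrors the
    pool.storage.capacity.disablethreshold global setting."""
    return used_bytes / total_bytes <= disable_threshold


# An 'almost full' pool at 90% utilisation quietly fails provisioning:
storage_allows_deployment(900, 1000)  # False
```

This is why a pool that still has free space can nevertheless break MCS provisioning, and why the only useful error ends up in the ACS logs rather than in Studio.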
All in all it works quite nicely and it is possible to use this in a production environment. Whether this would also scale into the hundreds of desktops or XenApp servers, I cannot tell; in such a scenario I think you would rather leverage Citrix Provisioning Services.
