Sunday, October 16, 2016

VMware on AWS tech preview

GREAT NEWS!! The power of virtualization meets the most powerful cloud.

    The news is not exactly new, but the technical details behind it are quite interesting.

The deployment of VMware vSphere on AWS is quite easy and provides great flexibility and scalability.

  Since I am only going to show a few preview steps, I am not going to go deep into the bla bla bla!!

  Start the deployment of VMware on AWS from the AWS web portal; (spoiler alert!!) I think the same may be available in the vCenter web console in a future version.

   Log in to the console and select the region closest to your on-prem VMware environment (if you do not have on-prem VMware, you can select any region).

Initially, VMware on AWS is available in three T-shirt sizes: 4 hosts, 32 hosts, and 64 hosts. The CPU, memory, and disk are fixed for each size; customization is available but was still being explored in the demo.
Choose the pricing model: the good old hourly price (not sure how feasible it is), 1-year reserved, or 3-year reserved. I hope partial-upfront and no-upfront options are available here. Payment can be made through a credit card, a VMware account, or an MSP like CrimsonCloud.

Voila!! All the settings are done. The setup will deploy VMware vSphere, vSAN, and NSX for network virtualization. If you are using NSX on-prem, you can leverage the long-distance vMotion features described below.
The AWS cloud console for VMware is quite similar to the existing web console. It shows all the regions where VMware is deployed, and the same console shows the on-prem VMware VDC as well.

"The true form of hybrid cloud is where we can migrate a VM from on-prem to the cloud without shutting it down." This is now possible with this partnership. With a few clicks we can migrate storage (VMDK), memory, and other metadata to the AWS cloud.

The steps for migration are quite simple: click the on-prem VM -> Actions and select Migrate.

  Choose both memory and storage (because you cannot migrate just compute to AWS while keeping storage on-prem). Select the compute resource first if the workload is compute intensive, and storage first if it is I/O intensive.

Select the target virtual datacenter where we just deployed our new VMware cluster.

High-priority vMotion initiates the live migration immediately and takes priority over any running scheduled VM migrations. The maximum number of concurrent vMotions is still the same limit as in the current vSphere version.

The recommended way is to migrate over Direct Connect because of the high network requirement (if migration time is a concern).

For Direct Connect, a private VIF is required for migration, or a public VIF with a VPN.

   Live migration also enables the VMware DRS feature to distribute load through VM migration. Since it now works beyond a single cluster, it is called EDRS (Elastic Distributed Resource Scheduling).

 One of the other main advantages is patch updates for the VMware infrastructure. vSphere deployed on AWS is managed by AWS, so patching and security are also AWS's responsibility.

DC-level (AZ-level) HA is still not visible, but if the vSAN cluster does not span multiple AZs, then VM-level backups can be taken to AWS S3 using CrimsonVault.

  Since vSphere on AWS will be integrated with other AWS services like Redshift, S3, and other non-VPC services, it will be easier to leverage true hybrid cloud functionality. I hope VPC-based services like EC2 and RDS will also become available for integration with VMs running in AWS vSphere.

The on-prem VMware edition requirements for hybrid cloud are still not published. I hope the same is available with the lowest VMware edition.

Sunday, August 28, 2016

Deploy Openstack: Intense simplicity emerged from intense complexity

          OpenStack is the second largest community, and there are multiple ways OpenStack can be implemented.
  There is no straight answer for the right OpenStack implementation strategy: OpenStack deployment varies with different environments and technical requirements. Sounds like a consultant's answer, ha!! Let me try to simplify the decision process.

   The OpenStack community is similar to the Linux community. Most of the time we do not use vanilla Linux; we use various distros of Linux (Ubuntu, SUSE, Red Hat, Fedora, BackTrack, etc.), right!!

 In the same way, there are 27+ OpenStack distributions (HPE, Mirantis, Red Hat, Ubuntu, Rackspace, VMware, etc.) and multiple deployment tools, and there are multiple ways of designing the architecture based on technical needs.

The simplest way of deciding on the right solution is to look at the requirements from a business perspective and then select the right technology.

Openstack Off-premise:

  • IBM Blue Box Openstack:

                   IBM provides private cloud as a service where the entire environment is hosted at IBM datacenters. It is a good choice if an organisation wants to host a private cloud with a managed OpenStack environment.

  • Openstack on IBM softlayer:

                    IBM Blue Box provides a completely managed private cloud on- and off-premise, but if you want to manage your own OpenStack environment hosted on a public cloud, then SoftLayer is the option you are looking for.

     You can build OpenStack on SoftLayer's public cloud with your chosen hypervisor and a custom configuration (e.g. KVM with Ubuntu OpenStack, or VMware OpenStack with vSphere).

  • Ubuntu Bootstack:

                      It is a new offering from the Ubuntu OpenStack distribution that provides a managed OpenStack service, including hosting of the compute, storage, and networking resources. The service is quite new, so more information will emerge along the way.

On-premise openstack:

1) Openstack as a service:

  •  Platform9 Openstack:

                    It is a SaaS OpenStack environment with SLA-based availability. The availability and management of the OpenStack controller are provided by Platform9. It is a feasible choice for environments that are willing to send their OpenStack management traffic to a 3rd-party provider, and it has the advantages of manageability and simplicity. Currently Platform9 supports KVM- and VMware-based hypervisors, and it suits environments with a smaller physical footprint.
          Other vendors have also started entering the same space with different supported hypervisors and features.


2) Build and manage your own openstack without vendor lock in:

  • Mirantis Openstack:

              One of my favourite distributions, which supports Ubuntu- and CentOS-based OpenStack deployment. Mirantis is also one of the distros that supports the latest (Mitaka) OpenStack release, with simplified deployment using Fuel, and it is considered the most stable OpenStack distribution today (2015-2016). For the SDS lovers, Fuel has Ceph integration for block, object, and ephemeral storage. It supports the KVM/QEMU, VMware, and XenServer hypervisors.

  • Redhat Openstack:

             Red Hat also provides an OpenStack distro, which is my second favourite on the list. It is on my list because of the other stack integrations that come with the distro (e.g. OpenShift for PaaS). Red Hat provides OpenStack deployment using Puppet and Foreman. The best way to start with Red Hat OpenStack is its community edition, RDO, deployed using Packstack.

  • Canonical Openstack:

            Canonical is also a big rival in the OpenStack distro race. It has Ceph integration with its deployment like the other distros, and it provides automated deployment using Landscape, MAAS, and Juju. Deployment with these tools is quite simplified, but I have seen multiple disadvantages compared with Fuel (which I do not want to discuss here).
      Since I do not want to bore you with a long post, I am not covering similar distributions like Rackspace, Dell, HP Helion, etc. Almost all distributions offer the same features, but with different lifecycle management, deployment tools, and hypervisor integration support.

3) Build and manage your own openstack with vendor lock in:


  •  VMware VIO:

                  VMware vSphere with vRealize comes with VMware Integrated OpenStack. VMware OpenStack is tightly integrated with VMware vSphere and vCenter, and it also integrates with NSX for SDN and vSAN for software-defined storage. By default it deploys with HA, so the deployment best practices for the different OpenStack components are already taken care of.

  • ZeroStack, Cisco Metapod and stratoscale:

             There are other vendors emerging with simplified versions of OpenStack; some of these come with a physical appliance for an even greater level of simplicity.

       There are still multiple other distros (Ultimatum cloud, Bright openstack, ZET Tech openstack, Oracle Openstack, Debian, Softstack, and many more) which still need to be tested for Day 1 and Day 2 operations, architecture, simplicity, and stability.

     It is a long post, but I think I have covered the basis for OpenStack distro selection.

Next step 

 So the right distribution is selected, and now we have to design the right architecture for it. I cannot cover an entire best-practices document, but I can quickly mention a few tips:

       1) Monitor OpenStack components for internal API calls and controller resources.
       2) Use log analysis for OpenStack with an ELK stack.
       3) Configure high availability for the controller (database, Python apps, and RabbitMQ).
       4) Manage OpenStack compute agent uptime.
       5) Back up and restore the controller components.
       6) Build a patch and update mechanism for new OpenStack versions.
       7) Configure infrastructure for the different tiers of the application.


Tuesday, July 26, 2016

IaC with AWS Lambda: Automate infra optimization

   Almost every staging and/or development environment needs to be shut down during off-hours and started up again the next day for cost optimization. This can be automated in multiple ways (cron, Data Pipeline, or 3rd-party tools), but these tools come with either a single point of failure, higher execution time (Data Pipeline), or extra cost.
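To put a rough number on the savings: assuming (purely for illustration) a dev environment that only needs to run 10 hours a day on weekdays, the share of on-demand instance hours avoided is easy to estimate:

```python
# Back-of-the-envelope savings estimate. The 10-hour weekday window
# is an assumption for illustration, not a measured figure.
HOURS_PER_WEEK = 7 * 24              # 168 hours in a week
needed_hours = 5 * 10                # weekdays, 10 working hours per day
saved_fraction = 1 - needed_hours / HOURS_PER_WEEK
print(f"{saved_fraction:.0%} of instance hours avoided")  # 70% of instance hours avoided
```

Even with conservative assumptions, the majority of staging-instance hours are off-hours, which is why this automation pays for itself quickly.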

  AWS Lambda provides serverless compute (fast, cost-effective, SLA-based uptime). It provides a boost to new-generation applications and a new way of writing infrastructure as code.

 To demonstrate, I have written functions in Python (boto3) with a CloudWatch scheduler to automate the shutdown and startup jobs.

   Since the complexity level is low, I am skipping the Lambda function creation steps.
A tag needs to be created on each EC2 instance that should be automated. I have used the following tag values, which you can replace.

  Auto Stop:
         Tag name: AutoStop
         Tag Value: True

   Auto Start:
         Tag name: AutoStart
         Tag Value: True
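For reference, the same tags can also be applied programmatically. A minimal sketch of the boto3 payload (the instance ID in the comment is a placeholder):

```python
def auto_tag(tag_name):
    """Tags payload for ec2.create_tags(); the value 'True' is what
    the Lambda functions filter on."""
    return [{'Key': tag_name, 'Value': 'True'}]

print(auto_tag('AutoStop'))  # [{'Key': 'AutoStop', 'Value': 'True'}]

# Applying it for real requires AWS credentials:
# import boto3
# ec2 = boto3.client('ec2', region_name='ap-southeast-1')
# ec2.create_tags(Resources=['i-0123456789abcdef0'],
#                 Tags=auto_tag('AutoStop'))
```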
-------------------------------------------Auto shutdown function Begin-----------------------------------------
import boto3
import logging

#setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

#define the connection and set the region
ec2 = boto3.resource('ec2', region_name='ap-southeast-1')

def lambda_handler(event, context):

    #all running EC2 instances carrying the AutoStop tag
    filters = [
        {'Name': 'tag:AutoStop', 'Values': ['True']},
        {'Name': 'instance-state-name', 'Values': ['running']},
    ]

    #filter the instances which are running and tagged
    instances = ec2.instances.filter(Filters=filters)

    #collect the instance IDs
    running_instances = [instance.id for instance in instances]

    #print the instances for logging purposes
    print(running_instances)

    if running_instances:
        #perform the shutdown
        shutting_down = ec2.instances.filter(
            InstanceIds=running_instances).stop()
        print(shutting_down)
        print("All the tagged instances are shutting down")

-------------------------------------------Auto shutdown function End-----------------------------------------

     The same function can be converted into a startup function with minor modifications, as follows.

-------------------------------------------Auto startup function Begin-----------------------------------------
import boto3
import logging

#setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

#define the connection and set the region
ec2 = boto3.resource('ec2', region_name='ap-southeast-1')

def lambda_handler(event, context):

    #all stopped EC2 instances carrying the AutoStart tag
    filters = [
        {'Name': 'tag:AutoStart', 'Values': ['True']},
        {'Name': 'instance-state-name', 'Values': ['stopped']},
    ]

    #filter the instances which are stopped and tagged
    instances = ec2.instances.filter(Filters=filters)

    #collect the instance IDs
    stopped_instances = [instance.id for instance in instances]

    #print the instances for logging purposes
    print(stopped_instances)

    if stopped_instances:
        #perform the startup
        auto_starting = ec2.instances.filter(
            InstanceIds=stopped_instances).start()
        print(auto_starting)
        print("All the tagged instances are starting")

-------------------------------------------Auto startup function End-----------------------------------------

  NOTE: After the function is created, it can be tested from the Lambda console with an appropriate test event.

Create the CloudWatch rule with a cron schedule and add the target function as follows.
     -> Create a CloudWatch schedule rule

 -> Configure the cron settings and add the Lambda function as the target

-> Create the rule and complete the configuration
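The schedule itself uses the CloudWatch Events cron format, which has six fields (minutes, hours, day-of-month, month, day-of-week, year); either day-of-month or day-of-week must be '?'. For example (times are UTC and purely illustrative):

```
# Stop the tagged instances at 13:00 UTC on weekdays
cron(0 13 ? * MON-FRI *)

# Start them again at 01:00 UTC on weekdays
cron(0 1 ? * MON-FRI *)
```

One rule per schedule, each with the matching Lambda function as its target.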

Saturday, April 30, 2016

Cloud MSP to ISV: The new Era of PaaS

      Working at a cloud managed service provider, we manage public and private cloud infrastructure with platforms such as Apache, IIS, Nginx, etc. Either the MSP or the customer develops the application, and the entire management and monitoring story becomes the same as for on-premise infrastructure.

     Lately I have been playing with OpenShift Origin (a private PaaS) in my test environment, and I see a great opportunity for MSPs: instead of managing IaaS the traditional way (a monitoring and management headache), MSPs can deploy a PaaS and manage the same infrastructure and applications with self-serviced application deployment, i.e. their own Platform as a Service on top of any public/private cloud infrastructure (IaaS).

Let's dive into the question: "How?"

    Openshift Origin allows to build high level application platform (eg. Magento, Wordpress etc.) as well as low level platform such as PHP, Java, Go, Python etc. and various database as service (eg. MySQL).

OpenShift Origin also allows you to build custom cartridges (platforms) and integrate different cartridges with different platforms.

   OpenShift applications run on gears (containers) built from the basic security building block SELinux.
SELinux provides a high level of isolation between applications running within OpenShift Origin because each gear and its contents are uniquely labeled.

    cgroups (Control Groups) allow you to allocate processor, memory, and input/output (I/O) resources among applications, and kernel namespaces separate groups of processes so that they cannot see resources in other groups.

    The following characteristics make it possible for MSPs to become ISVs:
      1) Multi-tenancy
      2) Public, private, virtual, and bare-metal support
      3) Self-service portal for developers to deploy applications
      4) Integration with code repos (Git), CI/CD, and existing automation tools for developers
      5) Integrated and custom DNS
    I know the above was a crappy, lame, basic description for any techie, but hold on for the cool stuff. Since this is a high-level overview post, I want to show how easy it is.

  Step 1: Select your platform.



Step 2: Select your application. Here I am selecting WordPress for simplicity.

Step 3: Enter the DNS URL, Git repo, Bla Bla Bla...

Step 4: Select the scaling policy (load balancing with HAProxy) and create the application.

  Well, there are a few caveats around PaaS implementation and application requirements, so hang on for an in-depth technical post.

Sunday, March 13, 2016

Cloud service provider comparison 2016

    Business demands profit from cloud-based applications, and your application demands certain resource performance. A few applications are I/O intensive, while others are CPU and memory intensive.

   Choosing the right cloud service provider is always a tedious task because you have to compare the different features and costs that fit your business and application requirements. I have tried to simplify this with a high-level comparison chart.

    The details below provide an understanding of the compatibility, interoperability, and capabilities of different cloud service providers.

Public Cloud service comparison v1.0

Monday, March 7, 2016

Build a private and hybrid cloud setup in 5 minutes

   In all my previous experience with private cloud deployment, I have seen private cloud adoption going down because of the complexity and manageability of the cloud management servers and their resource requirements.

  I have worked with multiple private cloud technologies like VMware vCloud, OpenStack, CloudStack, OpenNebula, and the Microsoft cloud stack. All of these offerings have multiple advantages, but many of them lack simplicity of operation.

OpenNebula has come up with a simplified version of its open-source private cloud: vOneCloud.

In my opinion it is the most powerful cloud tool because of the following capabilities:

  -> Simplified deployment as a VMware virtual appliance
  -> Support for multiple hypervisors (e.g. Xen, KVM, VMware)
        NOTE: Citrix XenServer is not supported
  -> Hybrid cloud (support for AWS, Azure, and SoftLayer)
  -> Ability to import existing hypervisor virtual machines into the cloud portal
  -> User quota management
  -> Out-of-the-box showback feature
  -> Support for advanced networking (e.g. VXLAN)
  -> Last but not least, "it's FREE"

 Since the deployment is simple, I am not mentioning the installation steps in this post. You can find the vOneCloud OVA from here, and the deployment documentation from here.