Wednesday, December 19, 2018

Changing threat landscape and security measures for cloud native applications


Overview:
This article is intended for DevOps engineers, solution architects, and CISOs, to help them build threat models for evolving application architectures in the cloud. It provides an overview of the changing threat vectors, with mitigation guidelines, across the various cloud adoption stages.

Changing threat vector in evolving cloud applications:
Customers who have started their journey to the cloud have migrated the majority of their monolithic applications as multi-tier applications in IaaS or mixed PaaS and IaaS environments. Security is a shared responsibility in the cloud, and that responsibility shifts according to the type of cloud services your application uses.
    
Cloud adoption and application evolution stages:



In the initial cloud adoption phase, applications make heavy use of IaaS and PaaS services. While application PaaS offerings (AWS Elastic Beanstalk, Azure App Service, etc.) reduce the operations team's management burden, the security team sees such applications' threat vector as similar to that of IaaS instances and/or on-premises applications.
Stage-1: Threat vector in monolithic applications (IaaS)



Application code in the cloud isn't written only by you. As with all modern web applications, developers use third-party, mostly open-source, components, typically including web frameworks and libraries. These third-party components have security vulnerabilities of their own (e.g. the Apache Struts vulnerability in Java that allowed attackers to execute code remotely over ports 80/443).


Stage-1: Threat mitigation measures for monolithic applications in the cloud
·         Continuous vulnerability scanning and patch management on each layer of the compute service OS (e.g. EC2, Azure VM, or GCP Compute Engine), combined with intrusion prevention.
·         Malicious payload detection with anti-malware, machine learning, behavioural analytics, and application control.
·         File system integrity monitoring on the compute service OS file system (e.g. EC2, GCP Compute Engine, Azure VM).
·         Whitelist authorized applications on compute instances to block any unauthorized application/script execution.
·         L3 and L4 security hardening of cloud security groups (e.g. Azure NSG, AWS SG).
·         L7 application protection with a WAF service along with managed WAF rules (e.g. AWS WAF) and run-time application self-protection (RASP).
·         IAM authentication hardening for management console and API-level access to cloud services.
·         Secret management for databases, applications, and cloud services (e.g. AWS/Azure access keys and secret keys, and other app and DB credentials).
·         Cloud service configuration hardening for data access (e.g. S3 bucket or Azure Blob access management).
·         Adopt serverless administration with tools like AWS Systems Manager for a secure operations practice.
·         Use a least-privilege policy for the IAM role attached to cloud instances, so a compromised instance cannot make arbitrary API calls to the infrastructure (see the policy sketch after this list).
·         Based on workload threat events, automate security operations by applying security controls at the cloud service level. More details on this are available here
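As a minimal sketch of the least-privilege point above, the policy below (expressed as a Python dict; the bucket name is a hypothetical example) would let an instance role read objects from a single S3 bucket and nothing else, since IAM denies anything not explicitly allowed:

    import json

    # Hypothetical bucket the application actually needs to read from.
    BUCKET = "my-app-config-bucket"

    # Least-privilege instance-role policy: one action, one resource.
    least_privilege_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": [f"arn:aws:s3:::{BUCKET}/*"],
            }
        ],
    }

    print(json.dumps(least_privilege_policy, indent=2))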

Stage-2: Threat vector and security consideration for application with micro service architecture:
As the cloud journey evolves, applications start becoming loosely coupled and stateless in nature. The application starts integrating with other cloud services such as object storage (e.g. AWS S3), messaging (e.g. AWS SQS, SNS), caching services (e.g. AWS ElastiCache, Azure Cache for Redis), and NoSQL databases (e.g. AWS DynamoDB, Azure Cosmos DB). Such an architecture becomes a good candidate for a microservices framework, where every module becomes an individual small application entity in a container and each service communicates over RESTful API calls.
While developers follow agile methodology and release new application updates every day, the DevOps team uses containers for each microservice in CI/CD. This process makes the security team lose visibility into the many moving components used by developers and/or the DevOps team inside containers.
   "If you don't have visibility into a threat, you can't apply security controls to it." Since containerized microservice applications remove visibility for teams that follow traditional security methods, it is important to integrate security into the DevOps process.
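As one way to make that integration concrete, a CI stage can gate the build on a container image scan. The sketch below is a hedged example, assuming the open-source Trivy scanner is installed on the build agent; the image name is hypothetical:

    import subprocess
    import sys

    # Hypothetical image built earlier in the pipeline.
    IMAGE = "registry.example.com/payments-service:latest"

    # Trivy exits non-zero when findings at or above the given severities
    # exist, which fails this pipeline stage and blocks the deployment.
    result = subprocess.run(
        ["trivy", "image", "--exit-code", "1",
         "--severity", "HIGH,CRITICAL", IMAGE]
    )
    sys.exit(result.returncode)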
Stage-2: Threat vector in containerized applications

 



In addition to the Stage-1 mitigations, the Stage-2 threat mitigation measures for microservice applications in containers are as follows.
·         Write security as code for automation and integrate the security validation process into the DevOps CI/CD pipeline.
·         Identify vulnerabilities and malware in Docker images in the registry before they get deployed to production.
·         Protect vulnerabilities inside containers with IPS. More details on this are available here.
·         Carefully choose base Docker images with security considerations in mind (e.g. Who created the image? How old is it? When was it last updated? What packages does it contain?).
·         Monitor the integrity of Docker, kubelet, and other supporting container service configuration files (modifying these files can allow an attacker to provision rogue containers through the Kubernetes and Docker APIs).
·         Real-time malware scanning inside container root file systems and shared mount points.
·         Use a thin operating system and lock down the file system against any unauthorized application (application control on the Docker host).
·         L7 application protection with a cloud provider WAF service (e.g. AWS WAF) and run-time application self-protection (RASP).
·         Secure the Kubernetes API server, the etcd database service, and Kubernetes cluster authentication:
o   Kubernetes API authentication with RBAC policies (a minimal sketch follows this list)
o   Restrict cloud metadata (AWS, GCP, Azure) API access
o   Secure kubelet API access to protect the Docker host
o   Encrypt secrets at rest
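To illustrate the RBAC point, here is a minimal sketch using the official Kubernetes Python client; the namespace, role name, and scope (read-only access to pods) are hypothetical examples:

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (assumes kubectl access).
    config.load_kube_config()
    rbac = client.RbacAuthorizationV1Api()

    # Least-privilege Role: read-only access to pods in one namespace.
    role = client.V1Role(
        metadata=client.V1ObjectMeta(name="pod-reader", namespace="dev"),
        rules=[
            client.V1PolicyRule(
                api_groups=[""],       # "" is the core API group
                resources=["pods"],
                verbs=["get", "list", "watch"],
            )
        ],
    )
    rbac.create_namespaced_role(namespace="dev", body=role)

A RoleBinding would then attach this Role to the service account a given microservice runs as.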

Stage-3: Threat vector and security considerations for serverless applications:
Serverless applications evolve from microservices architectures and work on asynchronous calls, based on event-driven functional programming. These events are generated by application API calls or through integration with other cloud services.
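For instance, a minimal sketch of such an event-driven function (AWS Lambda, Python runtime) might react to S3 object-creation events; the field names below follow the S3 event format, while the processing itself is left as a placeholder:

    # Invoked asynchronously by S3 whenever a new object is created.
    def handler(event, context):
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            # Downstream processing of the new object would go here.
            print(f"New object: s3://{bucket}/{key}")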
The shared responsibility model in serverless applications becomes more complex from a security compliance standpoint, because the asynchronous nature of function calls scatters the data flow across multiple silos. The threat vector in serverless applications is different (from that of monolithic/microservice applications), which requires thinking about security from the application design stage.
In order to access other cloud services, a serverless function needs to be configured with an identity role and permissions provided by the cloud provider. Such infrastructure-privileged access from function code can become a new threat vector if access to the function's triggers (external APIs) is not hardened.
Serverless functions are configured differently on each cloud service provider. While AWS provides Lambda with basic libraries and default language support, Azure Functions requires different configuration for each language version, with libraries managed through Kudu shell access. Such variance in the basic building blocks changes the threat vector for serverless applications on each cloud service provider.
Private cloud serverless applications, and some public cloud service providers, were recently affected by vulnerabilities in the serverless platform Apache OpenWhisk (CVE-2018-11757 and CVE-2018-11756) that can allow an attacker to overwrite the source code of a serverless function being executed in a container and influence subsequent executions in the same container under certain conditions.




Stage-3: Threat mitigation measures for serverless application architecture.
·         To prevent Denial of Wallet (DoW) attacks on serverless applications, secure the perimeter-level APIs that trigger serverless functions. You can do this with function execution throttling, an API gateway, a CDN, a WAF, and the authentication services mentioned below.
·         Secure API authentication using authentication services like AWS Cognito, Azure AD, or other custom identity providers.
·         Restrict serverless function access by applying a least-privilege policy to the function's IAM role (e.g. if a function only needs to read from AWS DynamoDB or Azure Cosmos DB, it's better to grant read-only permission to the NoSQL DB service).
·         Mandate code review and static analysis.
·         Encrypt the function's environment variables with a key management system (e.g. AWS KMS or Azure Key Vault).
·         Leverage tools like AWS SSM Parameter Store for secrets and configuration management (a minimal sketch follows this list).
·         Automate security analysis as part of the CI/CD pipeline (e.g. code and library vulnerability analysis in the Serverless Framework or Zappa deployment process).
·         Throttle API access per user to prevent one customer from consuming all of the back-end system's capacity.
·         The Azure Functions Kudu management portal exposes shell (CMD) access to the serverless app. It is recommended to manage vulnerabilities in third-party libraries and external modules.
·         For private cloud implementations of a serverless platform, secure the platform itself (e.g. Apache OpenWhisk) with vulnerability management, patching, and host-level IPS protection for container security.
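A minimal sketch of the Parameter Store point above, assuming an AWS Lambda function (Python runtime) and a hypothetical parameter name; the secret stays encrypted in SSM instead of sitting in a plaintext environment variable:

    import boto3

    # Module-level client, reused across warm invocations.
    ssm = boto3.client("ssm")

    def handler(event, context):
        # "/myapp/db_password" is a hypothetical SecureString parameter.
        response = ssm.get_parameter(
            Name="/myapp/db_password", WithDecryption=True
        )
        db_password = response["Parameter"]["Value"]
        # ... connect to the database using db_password ...
        return {"statusCode": 200}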


Although security is different for each stage of the cloud adoption lifecycle and each type of application, the basic principles of security remain the same. Visibility is the key to applying security controls to any threat vector: "If you can't see the threat, then you cannot apply security controls to it." File system integrity monitoring and log analytics can help provide visibility into threat vectors.

Vulnerability scanning and the application of patches and virtual patches are mandatory steps for all stages. An intrusion prevention system (IPS/IDS), monitoring of possible C&C connections (using web reputation), and firewalls can be used to identify and mitigate threats at the initial phase of the intrusion kill chain, complemented by advanced machine learning, behavioural analytics, and whitelisting of approved apps (application control). More details on such preventive controls are available here.


Wednesday, June 27, 2018

New Wi-Fi security standard WPA3 can solve many security issues





  Last year, a Wi-Fi security vulnerability allowed attackers to execute KRACK (Key Reinstallation Attack) and sniff traffic on the "encrypted" network. That incident built up the requirement to enhance the encryption standard.

  It's good to hear that WPA3 is out now. WPA3 will be protected against the KRACK vulnerability. As per the current information, the other features of WPA3 will be:



  • Stronger protection, even with weaker passwords – Passwords are at the crux of the WPA3 standards, even for Wi-Fi users who pick terrible passwords like "password." The new standards will offer robust protection even when users choose something like "1234," and will also simplify the process of setting up security for smart-home devices with limited or no display interface.
  • A new handshake process between networks and devices makes it much harder for an attacker to capture a handshake packet and use it for decryption.
  • Protection against brute-force dictionary attacks – this is done by blocking authentication after a set number of failed login attempts.
  • Individualized data encryption – along with a stronger 192-bit security suite.
      
New devices supporting WPA3 are expected to be out next year.

Thursday, April 5, 2018

First voice-based virtual assistant for cloud SysOps, DevOps, DevSecOps



No more reports with colourful charts!!

Just say it, and it should be done.
 
 Today, the majority of cloud management teams go through multiple cost, performance, and alert reports with 85%-95% green charts. Cloud service providers offer cost and performance management services, but utilizing them is a challenge because of their static intelligence and the amount of metric correlation required. Technical teams start ignoring such reports after a period of time, or stop reviewing them on a daily basis. A truly helpful tool should understand your development, staging, or production environment and help you fix the real issues while you are having a cup of coffee.

  
Today, multiple tools and platforms keep SysOps and DevOps teams busy with mundane jobs, which prevents them from innovating in new areas.

 autobotAI is an artificial intelligence that helps you reduce your cloud cost and enhance cloud security.

Just ask, and it will do it.

    Now managing AWS cost, security compliance, and optimisation is as easy as telling Alexa to turn on a light. autobotAI enables Alexa and Alexa for Business users to execute mundane tasks within a few seconds.

Visit https://autobot.live for more details.

AWS maintenance events:
It gives you updates on cloud maintenance events, like scheduled EC2 reboots, so you can plan maintenance without affecting application users. It also covers the cloud service provider's planned and unplanned change management across different AWS services.
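Under the hood, such a check can be made against the EC2 API; here is a minimal boto3 sketch (pagination omitted) that lists scheduled events such as reboots or retirements:

    import boto3

    ec2 = boto3.client("ec2")

    # Scheduled maintenance events appear per instance status entry.
    statuses = ec2.describe_instance_status(IncludeAllInstances=True)
    for status in statuses["InstanceStatuses"]:
        for event in status.get("Events", []):
            print(status["InstanceId"], event["Code"], event.get("NotBefore"))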

Check security compliance and configuration best practices:
  It checks various security compliance rules so that vulnerabilities in your infrastructure cannot expose an attack surface for malicious activity.

Check security anomaly detection:
 Like performance monitoring, keeping the attack surface small and monitoring behavioural patterns for security is just as important. autobotAI can provide security anomaly detection (at all layers); all you have to do is ask. AWS GuardDuty, WAF (AWS and Cloudflare), and various other security tool integrations are in the product pipeline, which can really help DevSecOps.

Cost optimisation:
  autobotAI checks for any unused AWS resources and cleans them up to reduce tangible AWS cost and intangible operational cost. autobotAI also helps you identify old-generation instances in regions where new-generation instances are available. Moving to new-generation instances (e.g. M4 to M5) can optimise cost and improve performance.

Billing and budget management:
  You can set a monthly, quarterly, or yearly budget, or update it, by asking autobot. You can also check the current budget utilisation for the month, the quarter, and the annual forecast before the budget exceeds its threshold.

Alert and availability management:
   Today's cloud infrastructure is agile and can span hundreds of servers on demand, yet configuring auto recovery and basic alert management is still a manual activity. autobotAI identifies new or modified resources, configures alarms for critical resource utilisation, and also configures EC2 auto recovery, so you can act before an impaired system impacts the application. Creating such alarms manually in AWS can take days after a resource change is detected; autobotAI configures the same in seconds.

Troubleshoot network issues:
  It can also help you troubleshoot network connectivity issues and gives recommendations for achieving high availability. Currently, identification of VPN-related issues is available; fixes for other correlation-based network issues are on the roadmap.

Reserved instance utilisation monitoring:
   autobotAI helps make sure you utilise your reserved instances and provides recommendations if it identifies any anomaly in reserved instance utilisation.

EC2, RDS instance state management:
    Remember to turn off the lights, and stop development or staging environments when you are not using them. It's the new rule of thumb for cost optimisation, saving both energy and cost for the business. Now you can do both by telling Alexa. It also helps you start the same environments again when you need them.
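The underlying call is simple; a minimal boto3 sketch (the instance ID is a hypothetical example):

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical instances tagged as the staging environment.
    STAGING = ["i-0123456789abcdef0"]

    ec2.stop_instances(InstanceIds=STAGING)   # "turn off the lights"
    # Later, when the environment is needed again:
    # ec2.start_instances(InstanceIds=STAGING)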

Clear CDN cache after content updates:
   Whenever the development team updates the application's static content and you need to refresh CloudFront edge locations with the new content, autobot will help you clear the CloudFront cache in the development, staging, or production CDN distribution.
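A minimal sketch of such an invalidation with boto3 (the distribution ID is a hypothetical example; "/*" invalidates every cached path):

    import time
    import boto3

    cloudfront = boto3.client("cloudfront")

    cloudfront.create_invalidation(
        DistributionId="E1EXAMPLE",
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/*"]},
            "CallerReference": str(time.time()),  # must be unique per request
        },
    )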

AWS OS-level automation integration:
   AWS Systems Manager helps secure serverless administration. Autobot helps you check whether Systems Manager is configured or not, and configures your cloud infrastructure with best practices.

Check the state of production/development/QA environments:
  Even the simplest activity, like checking the state of development, production, or staging servers across all AWS regions, is a time-consuming task. Autobot speeds up such information gathering so the team can make decisions instantly.

EC2 instance backup management:
   Instance backup is an important task before or after completing any administrative activity. Any configuration change in an environment can impact the application through a chain reaction, hence we take an AMI backup before or after the change. Autobot enables you to back up development/staging/production environment instances just by telling it to take a backup.
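A minimal sketch of such an AMI backup with boto3 (the instance ID is a hypothetical example; NoReboot avoids downtime at the cost of file-system-level consistency):

    import datetime
    import boto3

    ec2 = boto3.client("ec2")

    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
    ec2.create_image(
        InstanceId="i-0123456789abcdef0",
        Name=f"pre-change-backup-{stamp}",
        NoReboot=True,
    )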

S3 and EBS storage usage per environment and region:
  Identifying total S3 storage utilisation and EBS usage across development, staging, and production is a time-consuming activity in AWS. Autobot provides details of the different types of storage utilisation in different environments.

What is next on its roadmap:
 Skills for many OS, application, Trend Micro, and other security tool integrations, which can help DevSecOps, DevOps, and SysOps teams with day-to-day cloud administration tasks just by asking Alexa. This is in the development phase (with ML integration). AI integration for AWS resource monitoring is under development and will be released in the second phase.


Security assessment Demo: