A Practitioner’s Guide to Moving to the Cloud

Cloud computing is leading the IT transformation in a big way. Enterprises, irrespective of their size, are considering a move to the cloud as an integral part of their short-term IT roadmap. The industry has reached a point where embracing the cloud has become inevitable, since not doing so could mean losing the competitive edge when operating at scale. At this juncture, businesses need to take cautious steps in their endeavour to migrate their workloads onto public clouds.

This article lays out guidelines that are essential for business decision-makers (BDMs) and technical decision-makers (TDMs). It aims to improve their know-how about the cloud and, hence, facilitate better planning. Some aspects have a direct impact on how a solution needs to be perceived in the context of the cloud. These guidelines help you revisit your ‘IT Reference Blueprints’ in the context of the cloud to address provisioning, management, latency, security, integration, redundancy, HA and other related aspects.

1. Fix the ‘foundation’ before moving ahead
When you decide to migrate your solution to the cloud, it is essential to take a step back and assess whether the solution as built is cloud-ready. Companies that architect/design solutions with the public cloud in mind will not be affected by this aspect, but it is critically relevant for the rest. In most cases, when solutions that already exist on premise are prepared for the move to the public cloud, the base building blocks turn out to be broken. This guideline is outside the purview of ‘lift and shift’, where you are essentially moving over ‘boxes’.
Solutions are built and tweaked over a period of time for a particular hosting and operating environment. However, when you migrate to leverage the scale and elasticity of the cloud, issues that existed in miniature tend to balloon. Classic examples of this are:

  • Known architecture/design issues that pose little threat to day-to-day operations are often left unaddressed and parked for future upgrades. You end up carrying these known architectural and design anomalies to the cloud, where they only multiply in a highly elastic computing environment.
  • Session management tactics adopted for a Web workload: if you have a Web application that manages sessions in-proc, then migrate the same workload and configure auto-scaling, you will see anomalies unless you enable sticky sessions on the load balancer or externalise the session state (see the sketch at the end of this section).
  • A two-tier application where the lack of intermediary services leads to direct calls being made by the app to the database, which, for security and regulatory reasons, continues to remain on premise.
The above examples may appear trivial, but they do demand redesigning. In certain scenarios, we have observed that issues bubbled up only after migration to the cloud, because the move was made without assessing the true readiness of the solution.
Hence, it is highly recommended to assess the building blocks of the solution for their fit and adaptability to the compute environment in the cloud. In addition, fix the existing architecture and design issues before marching ahead. Otherwise, you are only carrying them along to an environment where they can pose greater threats to the overall functioning of the solution.
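
To make the session management example above concrete, here is a minimal sketch of externalising session state to a shared store so that any auto-scaled instance can serve any request. It uses Redis purely for illustration; the host name, TTL and helper names are hypothetical, not part of any particular platform.

import json
import uuid

import redis  # requires the redis-py package

SESSION_TTL_SECONDS = 1800  # assumed session lifetime
store = redis.Redis(host="sessions.example.internal", port=6379)  # hypothetical shared store

def create_session(user_data):
    """Persist session data in the shared store and return a session ID."""
    session_id = uuid.uuid4().hex
    store.setex("session:" + session_id, SESSION_TTL_SECONDS, json.dumps(user_data))
    return session_id  # handed back to the client, e.g. in a cookie

def load_session(session_id):
    """Any instance behind the load balancer can rebuild the session."""
    raw = store.get("session:" + session_id)
    return json.loads(raw) if raw else None

With session state held outside the Web process, sticky sessions become an optimisation rather than a correctness requirement.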

2. Don’t consider the cloud to be some sort of saviour that addresses performance
While the cloud gives virtually unlimited access to compute, don’t embrace it just to solve performance issues. Doing so amounts to hosting your solution, as is, in the cloud, setting up scale parameters and expecting better performance. Auto scale-out/in will not help a solution that already lags on performance (response time, throughput, etc).
The cloud will be a great host/environment for an application to scale and demonstrate its true potential if, and only if, the solution is optimised to perform well in the cloud.

3. Don’t embrace the cloud considering it to be cheap
There is a general myth in the industry that moving to the cloud will bring down costs. There are cost benefits for sure, but economics is relative, and it all depends on the solution landscape chosen in the context of the cloud. In fact, benefits like just-in-time provisioning of compute workloads instead of procuring them upfront, the choice of compute environments, access to managed platforms, and auto scaling or the elastic nature of the cloud far outweigh the cost factor alone.
In summary, embrace the cloud for factors beyond cost, while cost benefits are realised over a period of time.

4. Don’t just ‘lift and shift’
If you already have your solution hosted on premise or with a third party, taking equivalent boxes and hosting the workloads on them is no different from your current state. We term this ‘lift and shift’. While you can do this to ensure your solution works in the new environment, the more desirable state is to optimise your workloads to leverage the power of the cloud.
Preparation is key to any activity under the umbrella of ‘legacy modernisation’. A business or enterprise that wishes to embrace the cloud should not settle for a ‘lift and shift’ model, considering the plethora of benefits one gets by optimising for the cloud. While the cloud addresses scale and elasticity as key tenets, there are multiple ways to solve a given problem/scenario.
Hence, instead of pure play ‘lift and shift’, consider the following recommendations:

  • Understand and evaluate the various options to modernise your solution, while being aware of the available services in the cloud
  • Assess the economic implications of choosing one over the other
  • Make the required changes to your solution in view of the option chosen
  • Migrate your solution

5. Latency exists
Cloud regions are still evolving and, hence, the eventual choice a customer makes is to host workloads in the nearest data centre. Moving workloads that were deployed on premise to the nearest available region (outside of self-hosting) definitely introduces some latency. Geographic proximity, agreements with local telecom/network service providers to optimise routes, etc, will reduce latency. However, when a public cloud is chosen, irrespective of whether it is in the same region or the nearest region, you will notice a certain level of latency.
Do not simply accept this latency as a given; there are ways and means to address it while you move your workloads to the cloud. Certain design changes leading to optimisation may be required to ensure your solution works optimally in the new environment.
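
Before redesigning anything, it helps to quantify the round-trip latency users will actually see from a candidate region. The following is a minimal sketch of such a measurement; the health-check URL and sample count are assumptions for illustration only.

import statistics
import time
import urllib.request

ENDPOINT = "https://myapp.region-a.example.com/health"  # hypothetical health-check URL
SAMPLES = 20  # assumed number of probes

timings_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    urllib.request.urlopen(ENDPOINT, timeout=5).read()
    timings_ms.append((time.perf_counter() - start) * 1000)

print("median: %.1f ms, worst: %.1f ms" % (statistics.median(timings_ms), max(timings_ms)))

Running the same probe from your users’ locations against each candidate region gives you a factual basis for deciding where design changes such as caching or request batching are warranted.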

6. Transient failures
Unlike on-premise systems that run physically on a corporate LAN or within a defined boundary of infrastructure, things are rather different in the cloud. Because the infrastructure hosting the cloud is distributed and has to absorb transitory issues, it may, for example, need to swap an instance facing a problem with another replica instance. Transient errors can also occur on account of network connectivity issues or service unavailability. Applications hosted in the cloud must therefore anticipate such transient errors and be architected and designed to handle them. The cloud platform, on its part, typically indicates in its error messages whether a particular error was transient.
Knowing that such exceptions occur, system designers should consider design patterns meant to handle these scenarios in the cloud. For example, a system might leverage the ‘Retry’ pattern to address a scenario where subsequent requests to a backend service might succeed after an initial failed response. Under such circumstances, the client-side application should check for that condition and reissue the request. If the error condition is persistent, the program should back off and stop after a certain number of retries.
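
The following is a minimal sketch of the ‘Retry’ pattern with exponential backoff, assuming the backend call raises a distinguishable transient error; call_backend_service and TransientError are hypothetical placeholders for your own client call and its error type.

import random
import time

MAX_RETRIES = 5           # assumed retry budget
BASE_DELAY_SECONDS = 0.5  # assumed initial backoff

def call_with_retry(request):
    for attempt in range(MAX_RETRIES):
        try:
            # call_backend_service and TransientError are hypothetical placeholders
            return call_backend_service(request)
        except TransientError:
            if attempt == MAX_RETRIES - 1:
                raise  # persistent failure: stop retrying and surface the error
            # exponential backoff with a little jitter to avoid synchronised retries
            time.sleep(BASE_DELAY_SECONDS * (2 ** attempt) + random.uniform(0, 0.1))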

7. Always start with a PoC or proof of concept
The dynamics of the cloud are different from those of a local data centre. In the latter, an application performs predictably, owing to years of experience in building, deploying and managing it on premise. Lacunae in the underlying architecture have probably been addressed in a custom, ad hoc way. However, any issues the application faces on premise tend to get amplified when it is deployed in the cloud.
Hence, when moving to the cloud, however similar the hosting infrastructure is to that in the local data centre, it is a good practice to carry out a ‘proof of concept’ or PoC to identify any issues or bottlenecks.

8. Understand SLAs
One of the most critical aspects to consider and understand is the SLAs of the individual services in the cloud. While you may offer SLAs to your customers in terms of your solution’s HA and DR plans, these cannot exceed what the public cloud vendor offers. It is illogical to commit to 100 per cent availability to your customers when the cloud vendor is clearly offering SLAs of three 9s (99.9 per cent) or four 9s (99.99 per cent).
As mentioned earlier, transient failures do happen in the public cloud and when that occurs, or when your system goes down, your plan of action is guided by the SLAs you have signed with your clients.
While you build or migrate to the cloud, do take SLAs into account and build your system to handle transient failures and planned/unplanned downtime.
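
A quick illustration of why this matters: when your solution depends on several cloud services in series, its composite availability is roughly the product of the individual SLAs, so it is always lower than the weakest component. The figures below are made up for illustration and are not any vendor’s actual SLAs.

# Illustrative figures only, not any vendor's actual SLAs
component_slas = {"compute": 0.9995, "database": 0.999, "load balancer": 0.9999}

composite = 1.0
for sla in component_slas.values():
    composite *= sla  # serially dependent services multiply

minutes_per_month = 30 * 24 * 60  # approximating a 30-day month
print("composite availability: %.4f%%" % (composite * 100))                    # ~99.84%
print("allowed downtime: %.0f minutes/month" % ((1 - composite) * minutes_per_month))  # ~69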

9. Know the environment
Unlike hosting on premise or with a hosting provider, where you have merely taken infrastructure/boxes, the public cloud brings in a vibrant compute environment that packages the key tenets of cloud hosting. Given that transient failures do occur, it is all the more important to be absolutely clear about troubleshooting tactics, gathering application insights and operating-environment statistics and, most importantly, the support model.
You will not be able to design a high-performance, responsive solution in the cloud if you do not understand the host environment’s capabilities across all of the above attributes. Public cloud providers offer out-of-the-box tools to ensure you stay close to your deployment and in control.
Awareness is the key here to ensure you are successful in your tryst with the cloud.

10. Simulate and test. ‘Make it real’
One of the most challenging aspects of operating in an on-premise environment is the limited amount of compute available to simulate a real environment. The cloud gives you access to virtually unlimited compute, thanks to the humongous investments being made by cloud providers in setting up data centres globally. The public cloud is hence a great opportunity to go beyond a limited computing boundary, test your solution for stability and identify its breaking points.
In a world dominated by devices and demanding consumers, your solution needs to be tested for real-world load and thresholds. As your business grows, there is no better state to be in than being able to scale your infrastructure seamlessly. It is far better to discover the limitations of a solution in a simulated environment than to encounter them later, in production.
Simulation is simply about knowing the breaking points in advance, rather than learning them the hard way in production. Hence, leverage the cloud for the capacity it offers and ‘make it real’ by simulating an environment that reflects the real-world challenges you may face in the future.
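
As a starting point, even a simple script can drive concurrent load against a test deployment to reveal where it breaks. The sketch below is illustrative; the endpoint and concurrency figures are assumptions, and a dedicated load-testing tool would be used for a serious exercise.

import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "https://staging.myapp.example.com/api/health"  # hypothetical test deployment
CONCURRENCY = 200        # assumed concurrency level
TOTAL_REQUESTS = 10000   # assumed total load

def hit(_):
    try:
        with urllib.request.urlopen(ENDPOINT, timeout=10) as response:
            return response.status
    except Exception:
        return None  # failures help locate the breaking point

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(hit, range(TOTAL_REQUESTS)))

failures = sum(1 for status in results if status != 200)
print("%d of %d requests failed" % (failures, TOTAL_REQUESTS))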

11. Leverage the power of PaaS
Cloud platforms are evolving constantly, and have moved from being mere infrastructure providers to automating services to a great extent, providing backup, recovery and other services necessary to ensure continuous availability of applications. With the recent trends in this space, cloud platforms have taken this complexity away from businesses and let them focus only on building their applications, through ‘Platform as a Service’ (PaaS) offerings.
The notion of PaaS ensures that customers merely focus on building and deploying their applications, and leave it to the cloud service provider to handle the infrastructure aspects of hosting and of managing scale out/in. In its infancy, PaaS had the disadvantage that applications built for it were locked in with a particular cloud service provider, since they had to be compiled for and deployed on a particular cloud platform. In its current form, however, the platforms have matured to a great extent and have taken this dependence away. Applications can be built once, using technology that users are comfortable with, and then deployed either to their own data centres or to a cloud platform of their choice.

12. It is not ‘All or nothing’
Enterprises have invested considerably over the years in data centres that host their ‘Line of Business’ applications. It would not be viable for them to discard all the investments and move their workloads overnight to the cloud. Enterprises, however, could choose those workloads that have outgrown their deployment capacity in the data centre, or where there is a greater business advantage in moving an application to the cloud rather than retaining it within their data centre. They could also consider building and deploying their newer applications in the cloud as composite applications that, in addition to using data residing in the cloud, would also pull in data from on-premise Line of Business applications. This way, enterprises could retain their existing investments, and at the same time leverage the power of the cloud for the larger benefits it provides to their business.
Another reason why not all applications move to the cloud is data sovereignty restrictions that prohibit data from being stored in a public cloud outside the country. There is flexibility, however, to move processed and obfuscated data to the cloud.
The public cloud provides numerous options to securely connect to on-premise applications. There are different ways in which a hybrid scenario could be implemented, depending on the needs of an enterprise. A few examples are highlighted below.

  • Existing ‘Line of Business’ applications in an enterprise were probably implemented a decade back, and built for use from the confines of the corporate intranet. Enabling them for BYOD-based access from the public Internet would require numerous internal teams to come together and implement changes across one or more applications, each probably on a different set of technologies and platforms. Such an initiative would probably be a non-starter given these challenges. Using the hybrid cloud approach, an enterprise can enable secure communication for this scenario in a non-intrusive way.
  • Where an enterprise does not have a DR strategy for its existing Line of Business applications, the cloud provides an effective DR infrastructure, with the necessary tools and technologies to orchestrate the backup and recovery of on-premise infrastructure to the cloud, or to an alternative on-premise data centre through the cloud.
  • When some Line of Business applications cannot be moved to the cloud for various reasons, ongoing development, testing and stabilisation of those applications can still be done in the cloud. This approach greatly reduces the cost and time associated with providing infrastructure for development and testing, and saves enterprises from being saddled with a perpetual investment in servers, machines and software.

Hence, in the context of the public cloud, you definitely have a choice of deployment models and it is not an ‘All or nothing’ scenario.

13. Embrace native services: Don’t be a fence sitter
Cloud platforms today are innovating at a rapid pace and, increasingly, the power behind these innovations comes not from the surface layer of infrastructure services, but from deep, domain-specific solution areas. While these capabilities had not matured until recently, enterprises were content to evaluate the cloud for just what it offered at the surface, and ended up deploying their applications, as is, on infrastructure services like compute and networking. This approach also enabled them to avoid getting locked in to a particular vendor, and obviated the need to make changes to their applications when deploying to a different cloud host. However, enterprises are fast realising that this approach, while good as a first step, is not enough to catapult them into a league where their businesses really benefit from the full impact of the cloud. This next level of engagement with the cloud platform can be achieved by considering the following:

  • Architect and build your applications to utilise the native capabilities of the cloud platform, which are tried and tested for scale, performance and reliability. Replace custom code and components within the application wherever there are equivalent services that conform more closely to industry standards and have been battle-hardened.
  • Leading cloud platforms support a wide range of platforms and technologies, be they open source or otherwise. Hence, enterprises can continue to operate in a familiar environment, using tools they are already comfortable with, while seamlessly embedding native features to reap the benefits of the cloud.
  • Tried and tested native capabilities in the cloud ought to be considered as replacements for custom or vendor-specific solutions to known problems. By doing so, you can dedicate time to innovating on the solution domain instead of the solution host/infrastructure.

Though native services are proprietary offerings of a cloud vendor, they are mostly built using proven techniques and platforms. As long as they are built on open standards and offer you choice and familiarity, you should definitely consider them.

14. You need to re-benchmark your infrastructure
Capacity planning is an activity one undertakes to design the host environment in terms of RAM, CPU, cores and eventually, the number and type of machines required. Test and staging environments are a reflection of the production environment and, hence, you can arrive at a benchmark or statistics with respect to performance. Solutions are tweaked and optimised for specific kinds of hardware.
In this context, when you plan the move to the cloud, you will not, at the outset, get an exact replica of those configurations. Hence, it is highly recommended to assess performance afresh, that is, to re-benchmark against the hardware specifications available in the cloud.
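
One practical way to do this is to time the same representative operation in both environments and compare the results. The sketch below is a minimal illustration; process_batch and sample_input are hypothetical stand-ins for your workload’s hot path and a representative input.

import statistics
import time

RUNS = 10  # assumed number of repetitions

def benchmark(fn, *args):
    timings = []
    for _ in range(RUNS):
        start = time.perf_counter()
        fn(*args)
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

# process_batch and sample_input are hypothetical; run this script
# unchanged on the current hardware and on the candidate cloud instance,
# then compare the printed figures.
print("median over %d runs: %.3f s" % (RUNS, benchmark(process_batch, sample_input)))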

15. Assess for security and compliance
Dynamics change in the context of the public cloud, more so when you choose a region that falls outside a specific geographic boundary. Whether your solution comes under the purview of data protection laws and privacy regulations, or must adhere to global standards on the compliance front, it is important to assess the host environment’s coverage on the following fronts:

  • Security
  • Privacy
  • Transparency
  • Compliance

In order to leverage the benefits of hosting in the cloud, you have to trust the cloud provider’s claims about adhering to the laws and regulations pertaining to your domain, and entrust it with managing your data. On the compliance front, for example, it is absolutely essential to ensure the cloud provider conforms to industry-defined global standards (like FISMA, HIPAA, FERPA, etc) as well as country-specific standards (like NZ GCIO, UK G-Cloud, etc).

Summary
At the end of the day, moving to the cloud gives you an opportunity to operate with virtually unlimited compute and, hence, without boundaries. It is absolutely critical to ‘do it right the first time’ and optimise your efforts. At times, migrating to the cloud has required rejigging teams and their operating models. Awareness of the cloud platform, complemented by the above guidelines, will help you arrive at the starting line of the overall planning cycle of your transformation journey. While the cloud gives you a sophisticated compute environment, planning is key to success. Before you move to the cloud, understand its potential.
