CLOUD COMPUTING

From Hybrid Cloud To Multi-Cloud: The Three Steps To Take

The migration from hybrid cloud to multi-cloud is of interest to the vast majority of companies. What prerequisites are recommended? Should a CMP platform be set up? How should data transfer be organized? Within two years, 75% of companies looking for Cloud IaaS (infrastructure as a service) and PaaS (platform as a service) solutions will express the need for multi-cloud capabilities, according to Gartner. That share was at most 30% in 2018.

How do the two approaches differ? “Hybrid cloud refers to Cloud services from one or more providers, without specifying the origin of the services. It describes the relationship between an organization and a public Cloud player,” explains the research firm. Multi-cloud, for its part, is defined as “the use of several public Cloud providers for the same objective. It is a particular case of hybrid cloud computing.”

Why Choose Multi-Cloud

The choice of multi-cloud is justified by the desire to escape exclusive commitments to a single platform and maintain a form of agility. In short, it offers the possibility of using several data center networks (AWS, Azure, Google or Salesforce) without being locked in. This “multisourcing” strategy requires the IT department to put in place ad hoc governance, a framework for monitoring the workloads assigned to each provider, and billing tracking.

“Some applications may be a composite of several different types of services and providers,” Gartner adds. Portability and the ability to migrate are vital objectives. Some applications may be deployed on different cloud providers’ platforms at different times, and the choice may even be made at runtime. For example, a batch processing application could be deployed with whichever cloud provider is cheapest at that moment.
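The runtime-selection idea can be made concrete with a minimal sketch. The provider names and per-vCPU-hour prices below are purely illustrative assumptions, not real rate cards:

```python
# Minimal sketch of runtime provider selection for a batch job.
# Provider names and prices are hypothetical, not real rate cards.

def cheapest_provider(prices_per_hour, required_vcpus, hours):
    """Return (provider, total_cost) for the lowest-cost option."""
    best = min(prices_per_hour.items(),
               key=lambda item: item[1] * required_vcpus * hours)
    provider, hourly_rate = best
    return provider, hourly_rate * required_vcpus * hours

# Hypothetical spot prices per vCPU-hour, refreshed at submission time.
prices = {"aws": 0.032, "azure": 0.029, "gcp": 0.031}

provider, cost = cheapest_provider(prices, required_vcpus=16, hours=3)
```

In a real deployment, the price table would be refreshed from each provider's pricing API just before the batch is submitted.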

The Three Steps To Take Towards Multi-Cloud

Specific steps are recommended to move from hybrid cloud to multi-cloud.

At Capgemini, we consider that there are three key phases: an “assessment”, the definition of a target landing zone, and the decision whether or not to adopt a Cloud Management Platform (CMP).

The “assessment” covers the analysis of the existing situation, the study of prerequisites and recommendations for transformation. “At this stage, we can integrate the design of the target architectures,” explains Thomas Sarrazin, director of the Cloud & Edge Practice at Capgemini Cloud Infrastructure Services.

Several levers must be considered: agility, functional needs, costs, acceleration and quality. It is then necessary to determine the eligibility of applications according to the services and functionalities offered – Big Data, IoT, machine learning, managed databases, etc. – in IaaS (infrastructure) or PaaS (platform), or even in “serverless” mode.


Classify Data And Specific Environments

The criticality and sensitivity of the data are two other key criteria: “Some data must absolutely remain in France, and their use must comply with EU regulations (personal data and the GDPR, the right to be forgotten, the health sector, etc.),” recalls Thomas Sarrazin. Another constraint: most mainframe, Unix and similar environments are not compatible with the public Cloud.

All of these evaluations make it possible to establish “a target transformation plan”.

A migration plan must then be drawn up, drawing on several possible models. “We need to study the impact on the application and its services.” Either we do a “lift & shift”, transferring physical or virtual servers to IaaS; or we replace existing services of the application with PaaS-type services (e.g. a database); or we decide to rewrite the entire application or replace it with a new “cloud-ready” one.
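The three migration models can be sketched as a simple decision helper. The application attributes below are a hypothetical assessment schema, invented for illustration:

```python
# Illustrative decision helper for the three migration models described:
# "lift & shift" (move servers to IaaS), "replatform" (swap components
# for PaaS services), or "rewrite" (build a cloud-ready replacement).
# The app attributes are a hypothetical assessment schema.

def migration_model(app):
    """Pick a migration model from assessment attributes."""
    if app.get("end_of_life"):
        return "rewrite"          # legacy code not worth carrying forward
    if app.get("has_paas_candidates"):
        return "replatform"       # e.g. replace a self-hosted database
    return "lift-and-shift"       # transfer VMs to IaaS as-is

apps = [
    {"name": "billing", "end_of_life": True},
    {"name": "crm", "has_paas_candidates": True},
    {"name": "intranet"},
]
plan = {a["name"]: migration_model(a) for a in apps}
```

A real assessment would weigh many more criteria (data sensitivity, dependencies, cost), but the output is the same kind of per-application decision.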

Define A “Landing Zone” And Security

The second area of work is the “landing zone”, which sets the end-to-end target architecture: “We must determine how to architect the interconnections with the on-site IS and the public Clouds, and between the Clouds themselves. Security procedures must be put in place to protect access to environments (authentication, encryption, profile management, etc.),” underlines Thomas Sarrazin.

Use A CMP Or Not?

The third area of work consists of determining which tools to use to operate the multi-cloud environment: should a CMP (Cloud Management Platform) be used? How should existing tools be upgraded? Tools are indeed needed to provision and monitor the resources and services made available. Other related questions: what catalog of services should be put in place? How can consumption be monitored so that, where necessary, it can be re-invoiced to internal teams?
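The consumption-tracking question boils down to aggregating normalized billing data per team. A minimal sketch, assuming each provider's billing export has already been normalized to a common record shape (the field names are assumptions):

```python
from collections import defaultdict

# Sketch of per-team consumption aggregation for re-invoicing
# (chargeback), assuming each cloud's billing export has been
# normalized to this record shape. Figures are illustrative.
usage = [
    {"team": "data", "provider": "aws", "cost_eur": 420.0},
    {"team": "data", "provider": "azure", "cost_eur": 110.5},
    {"team": "web", "provider": "aws", "cost_eur": 230.0},
]

def chargeback(records):
    """Total multi-cloud spend per team, ready for re-invoicing."""
    totals = defaultdict(float)
    for r in records:
        totals[r["team"]] += r["cost_eur"]
    return dict(totals)
```

In practice a CMP or FinOps tool does this normalization across providers; the value is that each team sees one consolidated bill regardless of how many Clouds it uses.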

A CMP platform can be connected to an ITSM (Information Technology Service Management) tool to track changes, incidents, and user requests. “The choice of a CMP platform is based on several criteria: the location of the service catalog (integrated or not with ITSM, etc.), the desired level of hybridization or even the type of provisioning to be implemented,” notes Thomas Sarrazin.

The last criterion is the technical choice of the CMP. “It is generally conditioned by the company’s existing environment: it can be VMware, a large manufacturer, Red Hat, CloudBolt Software (our choice) or other open-source solutions,” he concludes. Cloud providers are relatively resistant to CMPs because these limit access to their catalog of services. Free access can then be given to developers, architects, innovation managers and others to ensure technology watch. In any case, as Gartner points out, “no current CMP platform is capable of supporting all of the features available on the major cloud platforms.”

Rework Data Flows

Opening up to multi-cloud is an opportunity to innovate, because the Cloud encourages you to optimize, modernize and test new solutions. This agility, which comes from orchestration and automation tools, requires rethinking data flows. Cloud flows must take into account the type of resources used: type of VM, computing power (CPU, GPU, FPGA accelerators), location, etc. The aim is to advance the automation of IT processes by provisioning optimal resources.

This sometimes requires rebuilding specific solutions: “A scheduling system, or ‘scheduler’, will integrate governance applied to the use of resources, in particular for cost control and optimization of execution time. This conditions elasticity and scalability and allows resources to be provisioned almost instantly.” Another advantage of this approach is that it provides access to immediately available services, such as managed databases.
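The governance rule described, balancing cost control against execution time, can be illustrated with a toy scheduler: pick the cheapest provider whose estimated runtime still meets the job's deadline. All figures and provider names are assumptions:

```python
# Toy scheduler embodying the governance idea from the quote:
# choose the cheapest option that still meets the job's deadline.
# Runtime estimates and prices are hypothetical.

def schedule(runtime_hours_by_provider, price_per_hour, deadline_hours):
    """Return the cheapest provider whose runtime fits the deadline."""
    feasible = [(price_per_hour[p] * h, p)
                for p, h in runtime_hours_by_provider.items()
                if h <= deadline_hours]
    if not feasible:
        raise ValueError("no provider meets the deadline")
    return min(feasible)[1]

runtimes = {"aws": 2.0, "azure": 3.5, "gcp": 2.5}  # estimated hours
prices = {"aws": 4.0, "azure": 2.0, "gcp": 3.0}    # EUR per hour
choice = schedule(runtimes, prices, deadline_hours=3.0)
```

Here Azure is cheapest per hour but too slow for the deadline, so the scheduler falls back to the cheapest feasible option; that trade-off is exactly what the governance layer encodes.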

Maintain A Line Of Demarcation

“To move towards multi-cloud, it is important to keep a line of demarcation between the company and the Cloud provider. We must remain independent and not let ourselves be locked in, that is to say, minimize the grip of Cloud operators,” believes François Tournesac. But you also need to know how to take advantage of cheaper ready-made solutions, promotions and competition. To redesign workflows, it is worth using tools that offer a wide range of connectors to market APIs, in particular those open to all Cloud offers and services, including MongoDB and PostgreSQL databases.

Towards Multi-Cloud Hybridization

The market is moving towards multi-cloud hybridization: “We started by looking for elasticity by spilling over onto the public Cloud from an IS hosted on a private Cloud or on-premises. But very quickly, we realized that we were limited by the bandwidth and latency of the network.”

As a result, more and more data is installed on the Cloud, and then, application by application, the I/O (inputs/outputs) between private and public Clouds is calibrated. To avoid being too dependent, companies open themselves to several Clouds. This is the path chosen by SNCF, which distributed certain applications and data across different types of Cloud according to cost and performance criteria while developing its private Cloud, which also allows it to build skills in containerization technologies, for example.

Debunking The Fear Of The Cloud

The public Cloud opens doors. For example, the production of machine learning or deep learning algorithms develops more quickly thanks to orchestration and automation tools that maximize the use of GPUs (NVIDIA, etc.) and ease deployment to production. IT services companies offer IaaS (infrastructure) offerings but not enough PaaS solutions (platforms for developers). It is possible to turn, among others, to Microsoft Azure to activate a “meta-scheduler”, which makes it possible to plan and execute batch processing of automated tasks, for example parallel computation jobs, in a large-scale pipeline across thousands of VMs (virtual machines).
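The fan-out pattern behind such a meta-scheduler can be illustrated locally. This generic sketch only shows the plan-and-execute pattern with threads; a real setup would dispatch each task to a pool of VMs via a service such as Azure Batch, whose API is not reproduced here:

```python
from concurrent.futures import ThreadPoolExecutor

# Generic illustration of the meta-scheduler fan-out pattern:
# run a batch of independent tasks in parallel and collect results.
# A real pipeline would dispatch tasks to thousands of VMs via a
# managed batch service; this local sketch only shows the pattern.

def run_batch(task_inputs, worker, max_parallel=8):
    """Execute worker over all inputs in parallel, preserving order."""
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(worker, task_inputs))

# Example: a trivial "computation" applied to five task inputs.
results = run_batch(range(5), worker=lambda x: x * x)
```

The same submit/collect structure scales from local threads to cloud-scale batch pools; only the execution backend changes.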

