What is Cloud Bursting and how to achieve it?

Cloud bursting offers the ability to rapidly scale compute and/or storage from on-premise to public cloud capacity. But what are the key pitfalls and how are they avoided?

The public cloud has quickly established itself as an easy and frictionless way to build out IT infrastructure.

If you already run on-premise systems, at some point there will be a desire to integrate them with off-premise offerings.

One way to do this is through cloud bursting – but exactly what is it, and what does it mean to “burst to the cloud”?

The term cloud bursting isn’t new – it has been discussed in enterprise IT for perhaps the past 10 years.

Bursting to the cloud means expanding on-premise workloads and moving some (or all) of that activity to the public cloud. Generally, this is done to cope with rapid workload growth, such as peaks in demand.

It’s also possible to use cloud bursting to aid workload migrations, moving applications partially or wholly to the cloud to relieve the load on on-premise kit during upgrades or replacements.

The “on-demand” model of cloud bursting provides the ability to cater for peaks in workload demand without having to retain lots of expensive, rarely used equipment onsite.

Website traffic

If peaks in website traffic, for example, are only seen three or four times a year, it makes sense to manage these requirements with on-demand infrastructure that is paid for only during the peaks.

When demand diminishes, cloud resources can be switched off. This represents a huge saving compared with having equipment that is either rarely used or active all the time, consuming space and power.
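
As a sketch of how this on/off pattern might be automated, the short Python loop below bursts when on-premise utilisation crosses a threshold and releases cloud capacity when demand falls back. The thresholds, metric source and provisioning calls are illustrative stubs, not any particular supplier’s API:

```python
# A minimal sketch of an on-demand bursting control loop. The thresholds,
# metric source and provisioning calls are illustrative stubs, not any
# particular supplier's API.
import random
import time

BURST_THRESHOLD = 0.85    # burst to cloud above 85% on-premise utilisation
RELEASE_THRESHOLD = 0.60  # release cloud capacity once load falls back

cloud_nodes = []

def get_onprem_utilisation():
    return random.uniform(0.4, 1.0)  # stub: replace with a real metrics feed

def provision_cloud_node():
    node_id = f"burst-{len(cloud_nodes)}"
    print(f"provisioning {node_id}")     # stub: call the cloud provider API
    return node_id

def decommission_cloud_node(node_id):
    print(f"decommissioning {node_id}")  # stub: stop paying for idle capacity

for _ in range(10):  # bounded for the sketch; a real controller runs forever
    utilisation = get_onprem_utilisation()
    if utilisation > BURST_THRESHOLD:
        cloud_nodes.append(provision_cloud_node())
    elif utilisation < RELEASE_THRESHOLD and cloud_nodes:
        decommission_cloud_node(cloud_nodes.pop())
    time.sleep(1)
```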

A third scenario is to use cloud bursting to avoid or defer on-premise datacentre expansion.

Imagine a scenario where growth in compute demands would otherwise require building or expanding on-premise facilities. It may make sense to move some of this workload to the public cloud to avoid the capital spend.

This case is not entirely a cloud bursting scenario, as by definition, bursting implies that workload is moved to the cloud for a temporary period and then eventually brought back on-premise. However, it could be used as a temporary solution while upgrading an existing datacentre.

The myth of cloud bursting

While cloud bursting seems like a great idea, in reality the process is quite difficult.

Many applications simply aren’t designed to be distributed across two or more compute environments at the same time because they are generally “monolithic” in nature.

Think of systems that are built on top of a large relational database, for example. Taking this to the cloud would mean moving the entire application. Even where tiers of an application could be separated – web tier from application logic and database – the latency between these layers that the cloud introduces could make cloud bursting a challenge.
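
A rough worked estimate shows why that latency bites. Every number below is an assumption for illustration, but it shows how a chatty application multiplies the cloud-to-datacentre round trip:

```python
# A rough, hypothetical estimate of the latency penalty when the web and
# application tiers burst to the cloud but the database stays on-premise.
# Every number here is an assumption for illustration.

on_prem_rtt_ms = 0.5      # app-to-database round trip inside the datacentre
cross_site_rtt_ms = 20.0  # app-in-cloud to database-on-premise round trip
queries_per_request = 25  # a chatty application makes many DB calls per page

added_ms = (cross_site_rtt_ms - on_prem_rtt_ms) * queries_per_request
print(f"Extra latency per request: {added_ms:.0f} ms")
# Roughly half a second of added latency per page view -- often enough
# to rule out splitting the tiers across sites
```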

So, although many organisations may talk about cloud bursting, few will be implementing the process in a truly dynamic fashion. In reality, many cloud bursting projects will focus on moving entire applications or application groups into the public cloud on a semi-permanent basis.

Cloud bursting and storage

How is data storage affected in cloud bursting scenarios?

First of all, storage plays a big part in enabling applications to be moved to and from public cloud. The process of bursting an application to the public cloud is generally based on either moving the application and data together or moving the data to another application instance already in place.

As an example, most applications today are packaged as virtual machines (VMs). Suppliers such as Velostrata (acquired by Google), Zerto and Racemi all provide capabilities to move entire VMs into the cloud.

The cloud providers have their own solutions for this too. Some of these tools are focused on moving the entire VM in a one-time process. However, Velostrata, for example, provides the capability to move only active data and bring updates to the VM back on-premise in a truly dynamic fashion.

This capability highlights one of the major issues with this kind of migration: keeping applications and data in sync.
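
A toy model makes the general technique concrete: record which blocks change while the workload runs in the cloud, so only those blocks need to travel back. This sketch illustrates the dirty-block idea in general, not any supplier’s actual implementation:

```python
# A toy model of the dirty-block idea: record which blocks change while the
# workload runs in the cloud, so only those blocks travel back on-premise.
# This illustrates the general technique, not any supplier's implementation.

BLOCK_SIZE = 4096

class BurstVolume:
    def __init__(self, size_blocks):
        self.size_blocks = size_blocks
        self.blocks = {}    # sparse store: block number -> data
        self.dirty = set()  # block numbers written while bursting

    def write(self, block_no, data):
        self.blocks[block_no] = data
        self.dirty.add(block_no)  # remember the change for write-back

    def changes_to_repatriate(self):
        # Only the dirty blocks cross the network, not the whole disk
        return {n: self.blocks[n] for n in self.dirty}

vol = BurstVolume(size_blocks=1_000_000)
vol.write(42, b"x" * BLOCK_SIZE)
print(f"{len(vol.changes_to_repatriate())} of {vol.size_blocks} blocks to copy back")
```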

Moving an entire virtual machine (or groups of VMs) across the network is expensive and time-consuming. This is especially true when moving virtual machines back on-premise. Hyper-scale cloud providers like to charge for data egress, making application and data repatriation less palatable.
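
A quick back-of-the-envelope calculation shows how egress charges add up. The per-gigabyte rate and VM sizes below are assumptions for illustration; check current provider price lists:

```python
# A back-of-the-envelope egress estimate. The per-gigabyte rate is an
# assumption for illustration; hyper-scale providers typically charge
# several cents per GB for internet egress, so check current price lists.

vm_count = 20
gb_per_vm = 500
egress_rate_per_gb_usd = 0.09  # assumed rate

repatriation_cost = vm_count * gb_per_vm * egress_rate_per_gb_usd
print(f"Estimated egress bill: ${repatriation_cost:,.2f}")  # $900.00
```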

There’s also the time aspect to consider. Generally, applications have to be unavailable when moving to/from the public cloud, and that can be a problem. Extended outages aren’t popular with users and need to be mitigated as much as possible.

Storage-focused cloud bursting

How about just moving the data to the public cloud?

Simply using the public cloud as an extension of on-premise storage has been around for some time. Backup suppliers, as well as primary and secondary storage suppliers, all provide the capability to push data to the public cloud as a form of archive.
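
As an example of the pattern, the sketch below pushes files untouched for 90 days to an archival storage class, assuming AWS S3 via boto3. The bucket name, directory and age threshold are illustrative assumptions:

```python
# A minimal sketch of archive tiering to the public cloud, assuming AWS S3
# via boto3. The bucket name, directory and 90-day threshold are
# illustrative assumptions.
import os
import time
import boto3

s3 = boto3.client("s3")
ARCHIVE_AGE_SECONDS = 90 * 24 * 3600  # treat 90 days untouched as inactive

def archive_cold_files(directory, bucket):
    now = time.time()
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and now - os.path.getatime(path) > ARCHIVE_AGE_SECONDS:
            # Push inactive data to an archival storage class to control cost
            s3.upload_file(path, bucket, name,
                           ExtraArgs={"StorageClass": "GLACIER"})

archive_cold_files("/data/projects", "example-archive-bucket")
```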

This is good from the perspective of controlling costs for inactive data, but what about active applications?

A few things need to be considered to make active storage cloud bursting practical.

The first is having a consistent view of the data. This means managing the metadata associated with the data. For block storage, this requires tracking and accessing the latest version of any individual block. For files and object stores, this means knowing the most current version of a file or object.
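
A toy version map makes the requirement concrete: it answers “where is the latest copy?”. The names are hypothetical, and in a real system this state must be distributed across every endpoint, which is where the difficulty lies:

```python
# A toy version map that answers "where is the latest copy?". Names are
# hypothetical; in a real system this state is distributed across every
# endpoint, which is where the difficulty lies.

metadata = {}  # object key -> (version, location)

def record_write(key, location):
    version = metadata.get(key, (0, None))[0] + 1
    metadata[key] = (version, location)  # newest version wins
    return version

def latest(key):
    # Every endpoint must resolve reads against this answer,
    # wherever the object physically lives
    return metadata.get(key)

record_write("report.doc", "on-premise")
record_write("report.doc", "cloud")  # updated by the burst instance
print(latest("report.doc"))          # (2, 'cloud')
```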

Metadata consistency is a challenge, because all data updates generate a metadata change, whether this is information on a new file or updates to an existing one. These changes have to be distributed across all endpoints for the data as quickly and efficiently as possible. This leads us to another issue with metadata management – locking.
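
As a preview of why locking matters, the minimal sketch below serialises concurrent metadata updates with a local lock; in a multi-site system, a distributed lock service plays this role:

```python
# A minimal sketch of why locking matters: concurrent metadata updates must
# serialise or changes are lost. threading.Lock stands in for the
# distributed lock service a multi-site system would need.
import threading

metadata = {"report.doc": 0}  # file -> version number
meta_lock = threading.Lock()

def update_metadata(filename):
    with meta_lock:  # without this, concurrent bumps can clobber each other
        metadata[filename] += 1
        # ...then propagate the new version to every endpoint

threads = [threading.Thread(target=update_metadata, args=("report.doc",))
           for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(metadata["report.doc"])  # reliably 100 only because the lock is held
```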