Home

Based on a paper written by Christian Vecchiola

Edited by Dexter Duncan

A key vision often mentioned in Microsoft Azure/Cloud announcements is the hybrid network, which builds on top of the “first wave” of cloud computing infrastructure available from Amazon, Microsoft, and others. A hybrid network is one that connects the private (corporate) network with publicly available (cloud) infrastructure. For example, Microsoft is designing tools such as the Dynamic Data Center Toolkit to work in both hoster and enterprise environments. The hoster version is already available, with the enterprise version coming in March 2010; the hybrid version is promised for sometime “in the future”. Similarly, the Azure announcements initially focused on applications available in the cloud, while more recent announcements have focused on middleware and tools that bridge the gap between the enterprise and the cloud. Most of the projects announced will enter a beta phase in 2010, including “Sydney”, AppFabric, the Next Generation Application Directory, and some updates to the .NET Framework.

Many enterprises have grown accustomed to building private data centers to meet peak demand and are in the process of setting up virtual machine environments that make better use of private data-center resources without requiring new hardware. Since public clouds also operate on virtual machines, it is not a far stretch to see cloud computing as an extension of the existing corporate strategy, especially if a VPN is set up to offer the same level and type of security as the private network. Amazon was the first to publicly announce a VPN service as part of its cloud offering, but a similar link should be easy to set up with any cloud vendor using standard encrypted IPsec VPN connections: specify a private IP address block, work out the subnet addresses, and bridge the private and public networks together using a VPN router.
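As an illustration, a site-to-site tunnel of this kind might be declared in a strongSwan-style `ipsec.conf`. All addresses, subnets, and the connection name below are placeholders chosen for the example, not a tested configuration:

```
conn corp-to-cloud
    keyexchange=ikev2
    left=203.0.113.10           # public IP of the corporate VPN router
    leftsubnet=10.0.0.0/16      # private (corporate) address block
    right=198.51.100.20         # public IP of the cloud provider's VPN gateway
    rightsubnet=10.1.0.0/16     # subnet carved out for the cloud-hosted VMs
    authby=secret               # pre-shared key; certificates would also work
    auto=start                  # bring the tunnel up at daemon start
```

With such a tunnel in place, cloud-hosted virtual machines appear as just another subnet of the corporate network.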

The most significant benefit of cloud computing is the elasticity of resources, services and applications: the ability to automatically scale out based on demand and users’ quality-of-service requests. A VPN set-up with dynamic provisioning would allow you to do this. Another alternative is to place a broker between the private network and the cloud provider and manage traffic and demand according to pre-defined security policies. Highly sensitive information then remains on the private cloud, while lower-priority data is processed on a pay-as-you-go basis. Building on top of this, the broker could also offer other quality-of-service (QoS) features, such as the ability to consume public resources within a pre-determined budget or deadline. Offering QoS-based consumption is called Market Oriented Computing.
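A minimal sketch of such a broker's decision logic might look like the following. The `Job` and `Broker` names, and the sensitivity and budget fields, are illustrative assumptions for this example, not part of any product's API:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    sensitive: bool    # security policy: must stay on the private cloud
    est_cost: float    # estimated pay-as-you-go cost on the public cloud

class Broker:
    """Routes jobs between private and public resources under simple policies."""

    def __init__(self, public_budget: float):
        self.public_budget = public_budget

    def route(self, job: Job) -> str:
        # Security policy: sensitive data never leaves the private network.
        if job.sensitive:
            return "private"
        # QoS policy: consume public resources only while the budget lasts.
        if job.est_cost <= self.public_budget:
            self.public_budget -= job.est_cost
            return "public"
        return "private"

broker = Broker(public_budget=10.0)
print(broker.route(Job("payroll", sensitive=True, est_cost=2.0)))   # private
print(broker.route(Job("render", sensitive=False, est_cost=8.0)))   # public
print(broker.route(Job("batch", sensitive=False, est_cost=5.0)))    # private (budget exhausted)
```

A real broker would add deadline handling and live pricing, but the shape of the policy check stays the same.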

Hybrid Cloud Implementation – An Example of Federated Computing

A Melbourne-based start-up recognized the need to address hybrid networks and Market Oriented Computing and introduced a product that works equally well in private, public or hybrid networks. The product, called “Aneka”, literally means “many into one”, since it allows you to manage many compute environments as one. By provisioning resources from various IaaS providers, Aneka can seamlessly combine Private and Public Cloud resources and deploy Aneka-enabled applications in these heterogeneous and hybrid environments. There is no commonly agreed standard for provisioning virtual infrastructure, and each IaaS provider tends to use its own specific interface. To achieve seamless integration between Aneka and the various IaaS providers, it is essential to design an open, flexible, and extensible architecture that allows custom implementations for interacting with new and emerging IaaS providers to be plugged in easily. The following gives an overall view of the architecture of the Aneka resource provisioning framework, which is composed of three major components:

  • Resource Provisioning Service: an Aneka-specific service that implements the service interface, allowing it to be integrated into and managed by the Aneka Container. This service is essentially a lightweight component that wraps the resource pool manager and integrates it into Aneka.
  • Resource Pool Manager: manages all the registered resource pools and decides how to allocate resources from those pools. The resource pool manager provides a uniform interface for requesting additional resources from any private or public provider, and hides the complexity of managing multiple pools from the Resource Provisioning Service.
  • Resource Pools: containers of virtual resources that mostly come from the same resource provider. A resource pool is in charge of managing the virtual resources it contains and eventually releasing them when they are no longer in use. Since each vendor exposes its own specific interfaces, the resource pool encapsulates the specific implementation of the communication protocol required to interact with it, and provides the pool manager a unified interface for acquiring, terminating, and monitoring virtual resources.
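The three components above can be sketched in code as follows. The class and method names here are illustrative (Aneka itself is a .NET product, and its actual API is not shown in this article), and the mock pool performs no real provider calls:

```python
from abc import ABC, abstractmethod

class ResourcePool(ABC):
    """Wraps one IaaS provider's specific protocol behind a uniform interface."""

    @abstractmethod
    def acquire(self, count: int) -> list[str]: ...

    @abstractmethod
    def release(self, resource_ids: list[str]) -> None: ...

class MockPool(ResourcePool):
    """Stand-in for a provider-specific pool; tracks virtual resources in memory."""

    def __init__(self):
        self._next, self._active = 0, set()

    def acquire(self, count):
        ids = [f"vm-{self._next + i}" for i in range(count)]
        self._next += count
        self._active.update(ids)
        return ids

    def release(self, resource_ids):
        self._active.difference_update(resource_ids)

class ResourcePoolManager:
    """Selects among registered pools and delegates provisioning requests."""

    def __init__(self):
        self.pools: dict[str, ResourcePool] = {}

    def register(self, name: str, pool: ResourcePool) -> None:
        self.pools[name] = pool

    def provision(self, name: str, count: int) -> list[str]:
        return self.pools[name].acquire(count)

    def terminate(self, name: str, ids: list[str]) -> None:
        self.pools[name].release(ids)

class ResourceProvisioningService:
    """Lightweight wrapper exposing the pool manager to the rest of the container."""

    def __init__(self, manager: ResourcePoolManager):
        self.manager = manager

    def request_resources(self, pool_name: str, count: int) -> list[str]:
        return self.manager.provision(pool_name, count)

manager = ResourcePoolManager()
manager.register("public-cloud", MockPool())
service = ResourceProvisioningService(manager)
print(service.request_resources("public-cloud", 2))  # ['vm-0', 'vm-1']
```

Supporting a new IaaS provider then means writing one new `ResourcePool` subclass and registering it, which mirrors the extensibility argument made below.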

The Resource Provisioning Service takes requests primarily from the scheduling service and delegates those requests to the pool manager. According to the configured policy, the pool manager determines the pool instance that will be used to provision or release a given number of resources. It is then the responsibility of the selected resource pool to handle and forward the request to the resource provider. Once the requests are successfully processed, the requested number of virtual resources will either join or leave the Aneka Cloud.

The advantage of this architecture is that it is fairly simple and makes extending the current capability of the system straightforward. The provisioning service deals with all the aspects that are related to the internal infrastructure of Aneka and hides its implementation details from the pool manager. The pool manager focuses on the management capabilities of resource pools and keeps them separate from the other components of Aneka. Finally, the implementation of a single resource pool is completely independent of the other two components as long as the required interfaces are implemented. This architecture enables greater flexibility and simplicity for third-party integration. For example, introducing a new IaaS provider only requires implementing the resource pool component and configuring the pool manager to integrate it into the Aneka Container.

Applying these principles across a private network and one or more cloud service providers is sometimes called Market Oriented Federated Computing.
