
By: Christian Vecchiola

Melbourne, Australia

Cloud Computing Reference Model and Technologies

In order to introduce a reference model for Cloud Computing, it is important to provide some insight into the definition of the term Cloud. There is no universally accepted definition of the term. Researchers at the University of California, Berkeley [1] observe that “Cloud Computing refers to both the applications delivered as services over the Internet and the hardware and system software in the datacenters that provide those services”. They identify the Cloud with both the datacenter hardware and the software. A more structured definition is given by a group at the University of Melbourne [2], who define a Cloud as a “type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources based on service-level agreement”. There is agreement on the fact that Cloud Computing refers to the practice of delivering software and infrastructure as a service, often on a pay-per-use basis. In the following, we illustrate how this is accomplished by defining a reference model for Cloud Computing.

Figure 2 - Cloud Computing Architecture

Figure 2 gives a layered view of the Cloud Computing stack. It is possible to distinguish four different layers that progressively shift the point of view from the system to the end user. The lowest level of the stack is characterized by the physical resources on top of which the infrastructure is deployed. These resources are of different kinds: clusters, datacenters, and spare desktop machines. The infrastructure supporting commercial Cloud deployments is most likely constituted by datacenters hosting hundreds or thousands of machines, while private Clouds can offer a more heterogeneous scenario in which even the idle CPU cycles of spare desktop machines are harnessed to carry part of the compute workload. This level provides the “horse power” of the Cloud.

The physical infrastructure is managed by the core middleware layer, whose objectives are to provide an appropriate runtime environment for applications and to exploit the physical resources. To provide advanced services, such as application isolation, quality of service, and sandboxing, the core middleware relies on virtualization technologies. Among the different solutions for virtualization, hardware-level virtualization and programming-language-level virtualization are the most popular. Hardware-level virtualization guarantees complete isolation of applications and fine partitioning of the physical resources, such as memory and CPU, by means of virtual machines. Programming-language-level virtualization provides sandboxing and managed execution for applications developed with a specific technology or programming language (e.g., Java, .NET, and Python). On top of this, the core middleware provides a wide set of services that assist service providers in delivering a professional and commercial service to end users. These services include negotiation of the quality of service, admission control, execution management and monitoring, accounting, and billing.
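To make these middleware services more concrete, the following Python sketch illustrates two of them, admission control and usage-based billing, in a few lines. The CoreMiddleware class, its methods, and the pricing figures are purely illustrative assumptions and do not correspond to any particular Cloud product.

```python
# Minimal sketch of two core-middleware services named above: admission
# control against available capacity and usage metering for billing.
# All names and prices here are hypothetical, not a real Cloud API.
from dataclasses import dataclass, field


@dataclass
class Request:
    app_id: str
    cpus: int
    hours: float


@dataclass
class CoreMiddleware:
    total_cpus: int
    price_per_cpu_hour: float
    allocated: int = 0
    usage: dict = field(default_factory=dict)  # app_id -> accumulated CPU-hours

    def admit(self, req: Request) -> bool:
        """Admission control: accept the request only if capacity remains."""
        if self.allocated + req.cpus > self.total_cpus:
            return False
        self.allocated += req.cpus
        self.usage[req.app_id] = self.usage.get(req.app_id, 0.0) + req.cpus * req.hours
        return True

    def bill(self, app_id: str) -> float:
        """Accounting and billing: charge for the CPU-hours actually consumed."""
        return self.usage.get(app_id, 0.0) * self.price_per_cpu_hour


mw = CoreMiddleware(total_cpus=64, price_per_cpu_hour=0.25)
print(mw.admit(Request("web-frontend", cpus=16, hours=24)))  # True
print(mw.bill("web-frontend"))                               # 96.0
```

A real core middleware would, of course, also negotiate quality of service and monitor execution; the sketch only captures the capacity check and the metering that make pay-per-use billing possible.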

Together with the physical infrastructure, the core middleware represents the platform on which applications are deployed in the Cloud. It is very rare to have direct access to this layer. More commonly, the services delivered by the core middleware are accessed through a user-level middleware, which provides environments and tools that simplify the development and deployment of applications in the Cloud: Web 2.0 interfaces, command-line tools, libraries, and programming languages. The user-level middleware constitutes the access point of applications to the Cloud.

The Cloud Computing model introduces several benefits for applications and enterprises. The adaptive management of the Cloud allows applications to scale on demand according to their needs: applications dynamically acquire more resources to handle peak workloads and release them when the load decreases. Enterprises no longer have to plan for peak capacity; they provision as many resources as they need, for as long as they need them, and when they need them. Moreover, by moving their IT infrastructure into the Cloud, enterprises reduce their administration and maintenance costs. This opportunity is even more appealing for startups, which can start their business with a small capital investment and grow their IT infrastructure as their business grows. The model is also convenient for service providers that want to maximize the revenue from their physical infrastructure. Besides the most common “pay as you go” strategy, more effective pricing policies can be devised according to the specific services delivered to the end user. The use of virtualization technologies allows fine control over the resources and the services that are made available at runtime to applications. This introduces the opportunity of adopting various pricing models that benefit either the customers or the vendors.
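As an illustration of scaling on demand, the sketch below shows a simple control loop that grows a pool of virtual machines when the request rate exceeds what the current pool can serve and shrinks it again when the load drops. The provision_instance and release_instance functions, as well as the capacity figure, are hypothetical placeholders for a vendor-specific provisioning API.

```python
# Illustrative scale-on-demand loop: grow the pool of virtual machines when
# the load per instance is high, shrink it (and stop paying) when load drops.
# provision_instance/release_instance stand in for a vendor-specific API.
import math


def provision_instance() -> str:
    """Placeholder: ask the Cloud provider for one more virtual machine."""
    return "vm-new"


def release_instance(vm_id: str) -> None:
    """Placeholder: hand a virtual machine back so it is no longer billed."""


def rebalance(instances: list, requests_per_second: float,
              capacity_per_instance: float = 100.0) -> list:
    """Keep just enough instances to serve the current load, plus one spare."""
    needed = max(1, math.ceil(requests_per_second / capacity_per_instance) + 1)
    while len(instances) < needed:
        instances.append(provision_instance())
    while len(instances) > needed:
        release_instance(instances.pop())
    return instances


pool = rebalance(["vm-0"], requests_per_second=950)  # peak load: pool grows to 11 VMs
pool = rebalance(pool, requests_per_second=50)       # off-peak: pool shrinks to 2 VMs
print(len(pool))                                     # 2
```

The pay-per-use benefit follows directly: released instances stop accruing charges, so the cost tracks the actual load rather than the provisioned peak.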

The model endorsed by Cloud Computing provides the capability of leveraging the execution of applications on a distributed infrastructure that, in the case of public Clouds, belongs to third parties. While this model is certainly convenient, it also raises additional issues from a legal and a security point of view. For example, the infrastructure constituting the Cloud is made up of datacenters and clusters located in different countries, where different laws for digital content apply. The same application may be considered legal or illegal according to where it is hosted. In addition, the privacy and confidentiality of data depend on the location of its storage. For example, the confidentiality of accounts in a bank located in Switzerland may not be guaranteed by the use of a datacenter located in the United States. In order to address this issue, some Cloud Computing vendors have included the geographic location of the hosting as a parameter of the service-level agreement made with the customer. For example, Amazon EC2 provides the concept of availability zones, which identify the location of the datacenters where applications are hosted. Users have access to different availability zones and decide where to host their applications. Since Cloud Computing is still in its infancy, the solutions devised to address these issues are still being explored and will become fundamental as wider adoption of this technology takes place.
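As an example of how such a geographic constraint can be expressed in practice, the snippet below uses the boto3 SDK for Amazon EC2 (shown purely as an illustration; the AMI identifier and zone name are placeholders) to request an instance pinned to a specific availability zone.

```python
# Request an EC2 instance pinned to a chosen availability zone, so that the
# application is hosted in a known geographic location. Requires valid AWS
# credentials; the AMI identifier and zone below are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-12345678",                        # placeholder machine image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    Placement={"AvailabilityZone": "us-east-1a"},  # the geographic constraint
)
print("launched", instances[0].id, "in us-east-1a")
```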

(edited by: Dexter Duncan)

REFERENCES:

[1] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, Above the Clouds: A Berkeley View of Cloud Computing, Technical Report, UC Berkeley Reliable Adaptive Distributed Systems Laboratory, available at http://abovetheclouds.cs.berkeley.edu

[2] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility, Future Generation Computer Systems, 25 (2009), 599–616.
