Every enterprise today is reviewing the potential benefits of cloud computing, and about one in five has a large-scale, active cloud project under way. This statistical consensus is about the only common ground in cloud computing, though. Enterprises differ sharply in their approach and thus in the technology route they're taking, and that means planning for public or private cloud computing can be challenging.
Planning for cloud-based unified communications (UC) applications is like planning for any cloud-based application -- for two reasons. First, and most obvious, UC is an application, and it will normally make sense to apply an enterprise's standard policies for public vs. private cloud adoption to UC. Second, the factors you should consider when deciding whether to host unified communications in a public or a private cloud are the same factors to consider for applications in general. You'll save a lot of work by being consistent!
Three cloud computing models
Part of the reason enterprises approach cloud computing so differently is the relationship between cloud computing and Software as a Service (SaaS). Most companies that previously offered enterprises SaaS versions of software now characterize their offerings as cloud computing services, and, in fact, SaaS is one of three models of cloud computing -- a model where the application software, middleware, operating system and hardware are all offered as a service.
The other two models are what really differentiate cloud computing. Platform as a Service (PaaS) offerings provide a user with a remote hardware configuration, operating system and middleware tools. The user supplies the application software, which is then run on the hosted platform. Infrastructure as a Service (IaaS) provides only a hardware platform and (normally) basic operating system software, leaving the user to provide the rest.
These distinctions characterize cloud computing as it's offered as a service to enterprises; the popular term is public cloud computing. While most enterprises are either already using some form of public cloud computing or planning for its use, clearly most do not plan to "cloud source" their mission-critical applications. These enterprises are planning to restructure their own data centers in accordance with cloud computing principles to create private clouds.
Differences between virtualization and cloud computing
Cloud computing at the technical level is an extension of the virtualization concept that most enterprises already apply in at least some of their data centers. The justification for both is that when servers are dedicated to specific applications, server utilization is low: as much as 45% of total server capacity -- and its cost -- can be wasted in a typical data center. Both virtualization and cloud computing create resource pools from which applications draw server capacity on demand.
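The arithmetic behind that justification can be sketched in a few lines. The figures below are hypothetical, chosen only to illustrate the effect: dedicated servers must each be sized for their own application's peak, while a shared pool can be sized for the (smaller) combined peak, so the same average workload yields higher utilization.

```python
# Hypothetical illustration of why pooling raises utilization.
# All numbers are invented for the example, not measured data.

peaks    = [40, 30, 50, 35]   # per-application peak demand (load units)
averages = [10, 8, 12, 9]     # per-application average demand

# Dedicated servers: each box must cover its own application's peak.
dedicated_capacity = sum(peaks)                       # 155 units provisioned
dedicated_util = sum(averages) / dedicated_capacity   # ~0.25 (25% utilized)

# Shared pool: individual peaks rarely coincide, so the pool is sized
# for a combined peak -- assumed here to be 60% of the summed peaks.
pooled_capacity = 0.6 * sum(peaks)                    # 93 units provisioned
pooled_util = sum(averages) / pooled_capacity         # ~0.42 (42% utilized)

print(f"dedicated: {dedicated_util:.0%}, pooled: {pooled_util:.0%}")
```

The 60% coincidence factor is an assumption; the real gain depends on how correlated the applications' peaks are, which is exactly what a utilization audit should establish.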
The difference between the two approaches is primarily in their scope. Virtualization applies at a per-system or per-data-center level, and cloud computing applies across multiple data centers. Many data center virtualization implementations could be extended across multiple data centers and even across wide-area services, however, provided that the interconnecting communications links were comparable to those of data center LANs. In fact, where data centers are linked with high-speed facilities to create cloud resource pools by extending virtualization principles, the result is best planned and managed as a virtualization deployment.
Planning for public and private cloud computing
Private cloud computing planning should always focus first on how to connect the resources of multiple participating data centers into a common pool and then how to connect users to the selected data center to access the application once resources are assigned. When planning for public clouds, resource connectivity is out of the control of the enterprise, so the only issue will be how to address and connect with the correct cloud resources.
Users typically access applications using either a fixed IP address and port number or (preferably) a URL or URI (Uniform Resource Identifier). The mapping between this identifier and the actual resource assigned to an application will have to be adjusted as resources are assigned.
Standard techniques based on address resolution standards like DNS (Domain Name System) or UDDI (Universal Description, Discovery, and Integration) can be used. Amazon, for example, uses elastic IP addresses that act like NAT addresses and can be reassigned by Amazon as the host associated with an application changes, presenting a constant address to the user.
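The indirection behind elastic-style addressing can be sketched as follows. This is not Amazon's actual implementation -- all names and addresses are hypothetical -- but it shows the essential idea: clients always connect to a stable identifier, while the provider remaps that identifier to whichever host currently runs the application.

```python
# Minimal sketch of elastic-address indirection (hypothetical values).
# The client-facing address never changes; only the mapping does.

elastic_map = {"203.0.113.10": "10.0.4.17"}  # stable address -> current host

def resolve(elastic_ip: str) -> str:
    """Return the internal host currently bound to an elastic address."""
    return elastic_map[elastic_ip]

def reassign(elastic_ip: str, new_host: str) -> None:
    """Provider-side remap when the application moves to another host."""
    elastic_map[elastic_ip] = new_host

print(resolve("203.0.113.10"))        # 10.0.4.17
reassign("203.0.113.10", "10.0.9.3")  # application migrates to a new host
print(resolve("203.0.113.10"))        # 10.0.9.3 -- client address unchanged
```

The same pattern underlies DNS-based approaches: the stable name stays constant and a record update, rather than a table entry, performs the remap.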
Elastic addressing may be sufficient to provide connectivity between client and server in the cloud, but the specific route taken by the traffic can change, and so the performance and security of the connection can vary. This is true in both public and private cloud applications, but the variations can be more severe with public clouds because the geographic size of the resource pool can be much larger and the network performance variability correspondingly greater.
Some cloud providers will allow resource pool assignment to be "regionalized" to reduce the risk of having an application assigned to a resource half a world away from the users, but this capability should be reviewed and verified for each enterprise's inventory of applications and user locations.
Public versus private cloud computing security
Public and private cloud applications are both likely to be less inherently secure than standard data center applications, even presuming that the same Internet VPNs or other client-to-application connection technologies are used. The variability of resource locations and the possibility that resource-assignment processes could be hacked must always be considered. In addition, public cloud applications lack the physical security available in enterprise data centers, which may be particularly important where sensitive data must be stored in the cloud.
A final consideration in public versus private cloud computing planning is the question of interprocess communications (IPC). As the number of SOA-componentized applications increases, so does the traffic between components. If applications are either divided between public and private cloud resources or run entirely on public clouds, the connectivity that carries this IPC traffic is out of enterprise control, and yet the performance of these connections can have a significant impact on overall application performance.
Remember that cloud computing is essentially a resource efficiency play and that large enterprise data centers may achieve economies of scale as good as those of public cloud computing. Given the markup associated with public cloud computing, private clouds may well be cheaper than public clouds if the design and connectivity of the cloud are optimized. A careful audit of utilization and cost should be undertaken before any decision to adopt a public cloud model; private clouds may still be best for many, if not most, users.
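A back-of-the-envelope version of that audit can be sketched as below. Every figure is a hypothetical assumption, not a quoted price: the point is only that a provider's higher pooled utilization and its markup pull in opposite directions, so the comparison has to be computed, not assumed.

```python
# Hypothetical cost-per-used-unit comparison; all inputs are assumptions.

raw_cost_per_unit = 1.00   # cost of raw capacity, same hardware either way
private_util = 0.50        # enterprise data-center utilization (assumed)
public_util  = 0.70        # provider's pooled utilization (assumed higher)
public_markup = 0.40       # provider margin over its own cost (assumed)

# Effective cost per unit of *used* capacity:
private_cost = raw_cost_per_unit / private_util                     # 2.00
public_cost  = raw_cost_per_unit / public_util * (1 + public_markup)

print(f"private: {private_cost:.2f}, public: {public_cost:.2f}")
```

With these particular assumptions the two come out essentially even, which is the article's point: a 40% markup can completely offset the provider's utilization advantage, and an enterprise that already pools well may find the private cloud cheaper.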
With UC and cloud computing, it's important to remember that the scope of the UC application varies considerably among the various UC vendor and product choices, and thus the resources involved will also vary. Many companies will find that UC is not a major resource user and thus would not by itself justify a private cloud implementation. If you don't have a private cloud initiative for your key applications, you should probably consider the public options for UC.
About the author: Tom Nolle is president of CIMI Corporation, a strategic consulting firm specializing in telecommunications and data communications since 1982. He is the publisher of Netwatcher, a journal addressing advanced telecommunications strategy issues. Check out his SearchTelecom.com networking blog, Uncommon Wisdom.
This was first published in May 2010