Combining the Cloud with the Computing: Application Delivery Networks
What new challenges does Cloud Computing present for the enterprise?
By: Willie M. Tejada
Sep. 25, 2009 07:45 PM
IT executives are increasingly being asked to evaluate new cloud-based services to improve business agility while lowering operating and capital costs within the enterprise. Yet often very little is known about the “cloud” itself. How does it work? What new challenges does it present for the enterprise?
The first of the two words in cloud computing is often not well understood. In diagrams, the cloud is almost always drawn small, dwarfed by the virtualized server farms providing on-demand computing power, as if the cloud were secondary and worked simply: something goes in one side of the cloud and shows up instantaneously on the other. Or perhaps it’s a control issue; after all, the cloud seemingly sits outside the data-center, beyond the direct control of IT...or does it?
For cloud computing to realize its full potential and become commonplace for a range of business processes and applications within the enterprise, the cloud itself must be treated as just as important as the computing; the two go hand-in-hand. For decades, enterprises have grown accustomed to private IP-VPN services such as MPLS offered by network providers. Such services offer high degrees of uptime, low latency and packet-loss guarantees, and a single point of escalation for problem resolution. Yet the on-demand accessibility promised by cloud computing services is best fulfilled when any type of user can access applications anywhere in the world, at any time, through a common interface such as a Web browser. And it simply isn’t possible to run private IP-VPN services to everywhere application users have access to a Web browser. As a result, the Internet is more often than not the de facto cloud used to fulfill the ubiquitous reach and economies of scale required by on-demand cloud applications.
Herein lies the challenge. The Internet cloud is not like a private network offered by a service provider. The Internet is a network-of-networks consisting of over ten thousand individual network providers. And unlike traffic carried within a private WAN, not all networks are economically motivated to carry the bulk of Internet traffic generated by an on-demand cloud computing service. The first-mile provider offering bandwidth to the data-center and the last-mile access provider are the only two providers directly paid to connect the user to the application. All other Internet network providers have little economic incentive to exchange and deliver traffic, and instead rely on sub-optimal, unreliable relationships called peering. Peering manifests itself as extra round-trip latency and packet loss by way of the Border Gateway Protocol (BGP), which is used to route application requests through the cloud between application users and the infrastructure [1]. Yet any latency or service interruption, whether caused by the computing infrastructure or the cloud, degrades user experience and can damage customer satisfaction, resulting in abandonment and low adoption of cloud computing services.
To make matters even worse, other protocols governing Web application delivery, such as the chatty TCP protocol for transport and HTTP at the application layer, introduce additional delivery bottlenecks for distributed users of on-demand cloud-based applications. Users far away from the computing infrastructure will experience slower response times and worse availability than users close to the resources. And the Internet opens new security vulnerabilities, ranging from Domain Name System (DNS) and distributed denial-of-service (DDoS) attacks to more advanced malicious activity exploiting application-specific vulnerabilities.
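To see why protocol chattiness compounds with distance, consider a rough back-of-the-envelope model. The connection counts and round-trip times below are illustrative assumptions, not measurements from the article, and the model deliberately ignores browser parallelism; it simply shows that when each handshake and request costs at least one round trip, total response time scales with distance.

```python
# Toy model: response time grows roughly linearly with round-trip time
# (RTT), so chatty protocols penalize distant users far more than
# nearby ones. All numbers are illustrative assumptions.

def page_load_time(rtt_ms, tcp_connections=6, http_requests=30):
    """Rough lower bound on load time (ms) for a dynamic page, assuming
    sequential behavior: one handshake round trip per TCP connection
    plus one round trip per HTTP request/response."""
    handshake = tcp_connections * rtt_ms   # SYN/SYN-ACK per connection
    requests = http_requests * rtt_ms      # one RTT per request/response
    return handshake + requests

nearby_user = page_load_time(rtt_ms=20)    # same region as the data-center
distant_user = page_load_time(rtt_ms=250)  # other side of the world

print(nearby_user, distant_user)           # 720 vs 9000 ms
print(distant_user / nearby_user)          # distance alone => 12.5x slower
```

Under these toy numbers, distance alone produces the kind of 5-10x (here 12.5x) response-time gap discussed below, with no difference in server capacity.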
The challenges associated with the Internet cloud are very real. What happens to application adoption when one user gets 5-10x slower application response times than another, merely because of their greater distance from the computing servers? What if applications are unavailable due to issues with the Internet itself, such as congestion, de-peering, cable cuts or earthquakes? What happens if your in-cloud application is attacked by Internet hackers? As evidenced by a recent State of the Internet report [2], attack traffic on the Internet originated in over 139 unique countries. Over 400 unique ports were attacked, a twentyfold increase over just the prior quarter. DDoS attacks continued to exploit tactics identified years ago, alongside numerous high-visibility DNS hijackings. Network and routing outages remain commonplace. And Website and application hacks, such as SQL injections and cross-site scripting (XSS) attacks, have infected hundreds of thousands of Web properties. It is clear the Internet must transform into a predictable, reliable application delivery platform suitable for business use if it is to fulfill the promise of cloud computing within the enterprise.
Cloud computing providers need a strategy for optimizing the cloud for their on-demand applications and computing services on a global scale, while remaining as cost-effective as possible, in order to survive what is undoubtedly an increasingly competitive environment. At the same time, they are pressured to ensure their infrastructure can cope with a rapidly escalating volume of data and shield users from in-the-cloud bottlenecks outside of the data-center. For this reason, they are increasingly reliant on proven third-party providers for the reliable and cost-effective delivery of on-demand content and applications in the cloud in order to solidify their position in this rapidly evolving and promising market.
One way of optimizing delivery over the Internet cloud has been through next-generation content delivery network (CDN) providers. To enable on-demand cloud computing services, however, such providers must go far beyond traditional CDN capabilities, because rich interactive websites and enterprise applications generally aren’t cacheable the way a large media file or image is. Dynamic content requires new application delivery optimizations that address the routing, transport and application-layer protocol inefficiencies introduced by the Internet cloud. Such optimizations allow globally distributed users to feel as though they are close to centralized computing resources, regardless of their distance from the infrastructure, while addressing other key availability, security and scalability bottlenecks associated with Internet-based application delivery.
Next-generation CDN providers incorporate tens of thousands of distributed computing servers across the globe at the edge of the Internet, within one network hop of both application infrastructure and the vast majority of the world’s Internet users. In essence, this creates a distributed global “overlay” of the Internet that serves as the foundation for a better Internet experience. Through software written on this platform, sophisticated algorithms and knowledge of real-time Internet conditions are applied to accelerating content in ways that go well beyond static caching and traditional CDN capabilities, easing delivery bottlenecks for fully dynamic, on-demand applications. Essentially, these services leverage their own optimized protocols to overcome the distance-induced performance and availability challenges introduced by BGP, TCP and HTTP. Next-generation CDN services, often referred to as “Application Delivery Networks” (ADN), improve the delivery of dynamic content in the Internet cloud without any additional hardware, new software or application code changes for users accessing an application over the Internet cloud. The operation of an ADN is described and illustrated in Figure 1.
1. A dynamic mapping system based on DNS directs user requests for secure application content to an optimal edge server.
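The mapping step above can be sketched as a simple selection over per-edge latency estimates. The edge-server names, latency figures and health checks below are hypothetical, and a real ADN mapping system weighs many more signals (load, capacity, real-time Internet conditions); this only illustrates the basic idea of answering a DNS query with the best currently available edge.

```python
# Minimal sketch of DNS-based request mapping: given latency estimates
# from the user's resolver to candidate edge servers, answer the query
# with the lowest-latency healthy edge. All names and numbers are
# hypothetical, not from any real ADN.

def pick_edge(latency_ms, healthy):
    """Return the healthy edge server with the lowest estimated latency."""
    candidates = {edge: ms for edge, ms in latency_ms.items() if healthy.get(edge)}
    if not candidates:
        raise RuntimeError("no healthy edge servers available")
    return min(candidates, key=candidates.get)

latency_ms = {"edge-fra": 12, "edge-lon": 31, "edge-ams": 18}
healthy = {"edge-fra": True, "edge-lon": True, "edge-ams": True}

print(pick_edge(latency_ms, healthy))   # edge-fra

# If the Frankfurt edge fails its health check, mapping fails over
# automatically to the next-best edge:
healthy["edge-fra"] = False
print(pick_edge(latency_ms, healthy))   # edge-ams
```

Because the selection happens at DNS resolution time, failover and load-shifting are transparent to the user: the browser simply receives a different answer on its next lookup.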
Providers of on-demand computing resources and applications leveraging ADN technologies benefit by keeping data-center build-out to a minimum while simultaneously addressing Internet delivery issues. ADN services are provided as a convenient managed service with no capital expenditure. The result is higher application availability, better performance, improved security, and significantly improved scalability and operations. Cloud computing providers can focus on their core strength – developing innovative hosting services, application development platforms and off-the-shelf software applications – while benefiting from a scalable and robust delivery platform that works on a global scale.
Figure 2 – Response times across 25 geographies to complete a 4-step dynamic transaction for a Web-based customer service portal hosted as a single instance in the eastern United States. Prior to the use of an ADN, users in some cities such as Madrid, Singapore and Sydney experienced response times of over 40 seconds. With an ADN, all cities exhibited response times of no more than 17 seconds, so that a user in Singapore would “feel” as though they were located in Los Angeles.
Some of the large cloud computing providers will opt to build out a multitude of big regionalized data-centers, often spending tens or hundreds of millions of dollars on these investments. While this will undoubtedly place on-demand infrastructure in closer proximity to application users, there are architectural limitations to this approach.
On-demand browser applications are accessible on a global scale, which means that if an application resides in a single data-center, some portion of the user community will always be much farther away. Should the application run in a North American, European or Asia-Pacific data-center? And replicating instances of a single application across multiple data-centers is often undesirable, or even impossible, due to considerations such as management, cost, integration, performance, regulatory compliance and security.
For those applications that can be replicated in multiple instances, however, the big data-center approach remains flawed, because the majority of application users are most likely not buying their Internet connectivity from the same provider servicing the regional data-center. In fact, measurements show the ten largest networks on the Internet provide last-mile subscriptions to only approximately 30% of overall Internet users [3], and no single network provides more than 10% of the access traffic. So even if application instances were replicated in large data-centers residing within the world’s 30 largest networks, the average distance from an application user to a data-center would still exceed 1,500 miles. Moreover, unless the data-center sits within the same service provider’s network as the application user, the user remains at the mercy of Internet delivery bottlenecks.
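The fragmentation argument above can be made concrete with a toy calculation. The ~30% top-ten figure comes from the article's cited measurements; the per-network shares below are illustrative assumptions chosen to be consistent with it (top ten summing to roughly 30%, none above 10%).

```python
# Toy calculation: even a generous 30-data-center build-out leaves
# many users "off-net", i.e. crossing peering boundaries to reach the
# application. Per-network last-mile shares are illustrative
# assumptions, consistent with the cited figures (top ten ~30%,
# no single network above 10%).

top_30_shares = (
    [0.06, 0.05, 0.04, 0.03, 0.03, 0.03, 0.02, 0.02, 0.01, 0.01]  # ten largest
    + [0.01] * 20                                                  # next twenty
)

on_net = sum(top_30_shares)   # users whose access ISP hosts a data-center
off_net = 1 - on_net          # users still subject to peering bottlenecks

print(f"on-net: {on_net:.0%}, off-net: {off_net:.0%}")
```

Even under these generous assumptions, roughly half of all users would still sit outside the networks hosting the data-centers, which is why build-out alone cannot eliminate in-cloud delivery bottlenecks.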
From IP traceroute measurements, it is easy to observe how users are sometimes routed outside their country, and even their continent, to reach data-center infrastructure. Even with infrastructure in the same city as the end user, but not in the same service provider’s network, applications can face substantial latency. As a result, even with an existing data-center build-out, an ADN is highly beneficial for optimizing the path from the application user to a nearby data-center.
Table 1: It is very common for Internet routing to leave the city, and even the country, when connecting application users to nearby data-centers. For example, based on a sample of IP traceroutes, an application user in Frankfurt would traverse three or more ISPs 74% of the time to connect to application infrastructure also located in Frankfurt.
Leveraging CDNs for static delivery of content via the public Internet is well established and understood. The next generation of CDN services – Application Delivery Networks – is already proven and can be equally effective for transparent delivery of dynamic, on-demand applications developed and delivered within the Internet cloud. For many years now, leading managed service providers have been offering advanced services based on highly distributed global platforms that transform the Internet into a reliable, high-performing platform for on-demand application delivery to the global enterprise – for anyone, anywhere, anytime. An increasing number of applications and business processes are moving to a cloud-based delivery model. Whether for rich interactive Web 2.0 websites, web-enabled business processes such as extranet portals and supply chains, software-as-a-service, or now on-demand cloud computing, optimizing the cloud itself moves to the forefront in order to meet the stringent demands of the enterprise.
Globally distributed Application Delivery Networks put the optimal architecture for in-cloud optimization right into the hands of IT and application development teams. The Internet cloud is tremendously complicated, and those who apply the same scrutiny to optimizing outside the data-center as inside it are the ones who will successfully satisfy the stringent demands of bringing cloud-based applications to the marketplace.
For those evaluating the use of any cloud-based platform or service… don’t forget the cloud. Ask probing questions to understand what is available to optimize cloud-based application delivery both inside and outside the data-center. Highly distributed Application Delivery Networks applied to on-demand computing platforms are a powerful combination for bringing cloud-based services to the enterprise market, and they are readily available today.
Recommended Reading and Viewing:
[1] Historical Internet latency & packet loss measurements