McKinsey, The Cloud, and Fuzzy Calculations
By: Brandon Watson
Apr. 21, 2009 03:15 PM
The report takes it one step further to claim that since there is no agreed-upon definition of what the “cloud” is (apparently they cite a study that found 22 definitions of the “cloud,” which seems low to me considering the conversations I hear at conferences and on news groups), large companies should not think about “internal clouds” but rather focus on the immediate benefits of virtualizing servers, storage, and network operations. They posit that the newness of the cloud is distracting IT departments’ attention from technologies that “actually deliver sizeable benefits; e.g. aggressive virtualization.”
The early part of the report unfortunately spends as much time as many conferences do these days on the minutiae of which definition is right and what “the cloud” means. More than anything, these diversions are tiresome for the observer and confusing for IT managers. They zero in on the following traits:
That sounds like what we presented at the Azure launch at PDC, but far be it from me to ask McKinsey to give Microsoft credit for the definition.
They call Windows Azure a cloud example, and not Azure Services Platform. This confusion is consistent with the customer and press/blogger sentiment I am seeing. Windows Azure is a piece of the overall Microsoft cloud play. It’s an application hosting environment, which serves as the foundational, though not required, layer for other code execution paths in the Azure Services Platform. One can build applications that live completely on-premises without using Windows Azure, but utilize other pieces of the Azure Services Platform.
They do call out the difference between a cloud and cloud services. Cloud services have two key tenets: hardware abstraction and elastic scaling. The service could run on top of a cloud or not (e.g. SaaS).
McKinsey makes the mistake of confusing operating costs and startup costs. The use of clouds by small companies is a result of startup costs, cost of capital, and availability of funds. Those companies are the ones who are not already invested in large datacenters and likely lack the resources to build their own. Large companies, by contrast, have sunk costs in their datacenters, and will most likely claim externally that their operating costs are much lower than they really are. Over time, as they have to think about expanding and building new datacenters with new equipment, large companies will most certainly be looking at the cloud in much the same way that small companies are now.
McKinsey lays out the four main hurdles to adoption of cloud by large companies:
The report claims the “typical” enterprise datacenter has the following metrics:
We finally get into the calculations for large and small/medium companies on slides 23-24. They don’t show their calculations, but claim that the Total Cost of Assets for this typical datacenter is $45/month per CPU equivalent. Assuming 36-month depreciation, that $14K server is $48/month. Doing the math on Amazon’s Reserved pricing (for Linux servers – not available on Windows) yields:
McKinsey’s conclusions are simply wrong. All of the instances work out to the same pricing per month, varying only with your agreed-upon term of use (1 year or 3 years). Importantly, assuming the 3-year depreciation schedule of their $14K server, the equivalent 3-year cost from AWS is $21/month/core. This pricing does not include bandwidth costs, but I am comparing it to the $14K server purchase price.
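The depreciation arithmetic in the two paragraphs above can be sketched in a few lines. The $14K price, 36-month schedule, and $21/month/core AWS figure come from the post; the 8-core server is my own assumption, chosen only because it makes the quoted per-core numbers reconcile.

```python
# Rough sketch of the per-core cost comparison discussed above.
# The $14K price and 36-month depreciation come from the report;
# the 8-core count is an assumption that reconciles the quoted figures.
server_price = 14_000        # dollars, "typical" enterprise server per McKinsey
depreciation_months = 36     # 3-year straight-line depreciation
cores = 8                    # assumed core count (not stated in the post)

monthly_cost_per_core = server_price / depreciation_months / cores
print(f"Owned server: ${monthly_cost_per_core:.1f}/month/core")

aws_reserved_per_core = 21   # 3-year AWS Reserved Linux figure from the post
print(f"AWS Reserved:  ${aws_reserved_per_core}/month/core")
```

Under those assumptions the owned server comes out around $48/month/core, more than double the 3-year Reserved figure, which is the point the report’s conclusions miss.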
Even more confusing is that, on the two slides, they have separate EC2 pricing conclusions for small/medium companies and large companies, even though they use the same line of demarcation for what is economical – the $45 per CPU per month figure. The boys at RightScale also take exception to McKinsey’s reporting of the numbers.
Page 25 is where things get interesting. McKinsey claims that there’s a 144% gap between running one’s own datacenter and a complete outsource to AWS (which is an unreasonable premise, as wholesale outsourcing is not the message any cloud player is delivering to customers). McKinsey then claims “the key factor is that the majority of servers that can be migrated are Windows servers.” The implicit claim is that Windows makes AWS more costly. A CIO takeaway may be “well, we have a ton of Windows boxes, so this won’t make sense.” It’s true that AWS pre-made images running Windows are more expensive, especially if you include authentication services. But that’s for their pre-made images, and doesn’t take into account customers who have their own VL licensing.
On this same slide, McKinsey attributes only a 10% labor savings to moving to a third-party provider. They don’t substantiate that number, and it feels very light to me. There is no talk of the automation that comes from moving to the cloud and using provider tools for scale and elasticity. Think of tools like RightScale or Microsoft System Center.
McKinsey also knocks the uptime factor, claiming that enterprises set their own SLAs at 4 9s or higher. In practice, actual uptime falls short of that for any enterprise, but they have their own targets. There are no web sources that track the downtime of enterprise resources, but there are a few for the cloud providers. McKinsey claims that since AWS SLAs can’t match those of enterprises, enterprises won’t be interested. There’s no punitive recourse if an IT manager doesn’t hit SLA, except perhaps that he might get fired, but AWS would be on the hook for real monetary damages, necessitating SLAs that are more realistic. It’s easier to posture and claim you are designed for 4 9s than to say you have signed an SLA for 3 9s with a cloud provider. 4 9s, which is the enterprise target, allows only 52 minutes of downtime per year. One server reboot a month could put you over that number.
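The downtime arithmetic behind that 52-minutes-a-year figure is straightforward, and a quick sketch shows how sharply the budget tightens with each additional nine:

```python
# Allowed annual downtime for an availability target of N nines.
def allowed_downtime_minutes(nines: int) -> float:
    availability = 1 - 10 ** -nines          # e.g. 4 nines -> 0.9999
    minutes_per_year = 365.25 * 24 * 60      # average year, incl. leap days
    return minutes_per_year * (1 - availability)

print(f"3 nines: {allowed_downtime_minutes(3):.0f} min/year")
print(f"4 nines: {allowed_downtime_minutes(4):.1f} min/year")
```

At 4 nines the whole annual budget is roughly 52.6 minutes, so even a handful of routine server reboots, at a few minutes each, can consume it.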
On slides 29-30, McKinsey claims that large enterprises can increase their server utilization rates from 10% to 35% with “best in class, aggressive server virtualization.” Additional cost controls can be gained, they claim, through adopting data center best practices, yielding TCO savings of 50%.
Finally, they liken the hype around the cloud to that of the dot-com bubble, and ominously point out that the NASDAQ fell 80% when that bubble burst, suggesting that CIOs should avoid investing in the cloud hype.
What’s Missing from the Report?
· The report lacks any mention of the massive economies of scale that come from a large cloud provider purchasing equipment. Further, even things like the cost of power are glossed over, as our own internal $/kWh costs are much lower than those proposed for the “typical” datacenter.
· At present, AWS has near monopoly pricing power in the cloud, and it behooves them to keep those prices high. With additional competition, prices will come down.
· There is no mention of the speed to market associated with procuring and provisioning servers for any new projects, nor is there any mention of the risk mitigation for new projects.