Utility Computing: Three Key Building Blocks

Utility computing (UC) is a concept in which hardware resources such as storage are pooled within a shared, centrally managed infrastructure and made available as needed in a pay-as-you-go model. The goal UC promises is to help companies manage the resources they already have more efficiently while holding down costs. To get there and operate as a utility, IT needs three key building blocks - availability, performance, and automation - that apply across storage, servers, and applications.

Before looking at the three key building blocks of UC in depth, it is worth asking why enterprises need UC at all. IT executives are caught between two conflicting demands. The user community demands more applications to automate the business and more data to make better decisions - and they want it all now. Pulling on the IT executive's other ear are the CEO and CFO, who demand that the CIO spend less on data centers, less on hardware, and less on people - while providing flawless service with fewer complaints from the user community. One last caveat: all of this must be done with the existing technology in which the company has already made a major investment - a complex, heterogeneous environment of Web servers, application servers, databases, and hardware from a variety of vendors. This is why many enterprises are looking to heterogeneous software vendors for a UC strategy, rather than to vendors who want to "rip and replace" the enterprise's current hardware to implement their UC solution.

The Three Key Building Blocks
Availability

The first requirement of utility computing is that data and applications must always be available. Users should be insulated from disruptive events ranging from server failure to a complete site outage. Yet although eliminating downtime is an industry preoccupation, "always-on" computing remains a challenge. According to IDC, when disaster strikes, enterprises can expect on average three to seven days of downtime per event. Falling hardware costs have made it possible for many companies to protect data with layers of redundancy, but that added redundancy can make parts of the IT infrastructure more difficult to access.

To take availability to maximum levels, IT managers must first make sure that all enterprise data is backed up. The data in branch offices, remote offices, home offices, desktops, and laptops is unquestionably valuable, but because of cost and logistical problems it is usually not backed up. The utility computing model calls for centralized, automated, cost-effective backup of these resources.

How is data backed up and recovered? Data volumes mirrored at one or more remote sites can now be reliably replicated over IP networks, reducing the amount of data exposed to loss and speeding disaster recovery. Automated server provisioning eliminates error-prone manual recovery techniques. Clustering optimizes availability by automatically detecting application and database performance bottlenecks or server failures, and moving these critical services to other servers within the cluster. Failover should include not only the application's data but also its state, reducing the effects of a failure and ensuring minimal impact on end users and the business.
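As a rough illustration of the clustering behavior described above, the sketch below shows a simplified health-check loop that detects an unresponsive primary node and moves its service to a designated standby. The node names, the ping-based liveness probe, and the fail_over helper are all hypothetical; a real cluster manager adds membership protocols, fencing, storage re-mounting, and state replay.

```python
import subprocess
import time

# Hypothetical cluster layout: each service has a primary node and a standby.
CLUSTER = {
    "orders-db":  {"primary": "node-a", "standby": "node-b"},
    "web-portal": {"primary": "node-c", "standby": "node-b"},
}

def node_is_healthy(node: str) -> bool:
    """Very crude liveness probe: a single ping to the node."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", node],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def fail_over(service: str, to_node: str) -> None:
    """Placeholder for restarting the service (and its state) on the standby."""
    print(f"Failing {service} over to {to_node}")
    # A real cluster manager would re-mount storage, replay application state,
    # and update naming or virtual-IP records here.

def monitor_loop(interval_seconds: int = 10) -> None:
    while True:
        for service, nodes in CLUSTER.items():
            if not node_is_healthy(nodes["primary"]):
                fail_over(service, nodes["standby"])
        time.sleep(interval_seconds)

if __name__ == "__main__":
    monitor_loop()
```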

Under the heading of data availability, the utility computing model includes virtualization and pooling of storage resources, which enables IT departments to drive up storage utilization rates and reduce costs. Storage virtualization also reduces administrative costs by providing centralized control of heterogeneous resources from a single GUI. Effective data life-cycle management further reduces the cost of data availability by automatically migrating data to the most cost-effective storage medium while still allowing enterprises to access it selectively for regulatory compliance.
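To make the data life-cycle idea concrete, the following sketch migrates files that have not been accessed for a configurable number of days from a fast "primary" tier to a cheaper "archive" tier. The directory paths and the 90-day threshold are assumptions chosen for illustration; commercial life-cycle tools apply much richer policies covering content class, retention rules, and compliance holds.

```python
import shutil
import time
from pathlib import Path

# Illustrative paths and policy; not tied to any particular product.
PRIMARY_TIER = Path("/mnt/primary")    # fast, expensive storage
ARCHIVE_TIER = Path("/mnt/archive")    # slower, cheaper storage
MAX_IDLE_DAYS = 90                     # migrate data untouched this long

def migrate_cold_files() -> None:
    cutoff = time.time() - MAX_IDLE_DAYS * 86400
    for path in PRIMARY_TIER.rglob("*"):
        # Move any file whose last access time is older than the cutoff.
        if path.is_file() and path.stat().st_atime < cutoff:
            target = ARCHIVE_TIER / path.relative_to(PRIMARY_TIER)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(target))
            print(f"Archived {path} -> {target}")

if __name__ == "__main__":
    migrate_cold_files()
```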

Performance
Utility computing includes the ability to scale compute resources to the needs of the business, optimize end-user response times, improve the overall quality of service, and detect and remedy causes of performance degradation, all in real time.

This requires tools that can instrument the entire application stack, from the Web browser or client application down to the storage device, even in complex heterogeneous environments. If end-user response times are lagging, IT staff can break them down tier by tier to pinpoint the problem. A dashboard-style console should deliver alerts and reports to IT staff, giving them early warning of developing problems along with pointers to appropriate remedial action. And if a database is running slowly, storage management software and storage networks can be tuned to accelerate access to its data.
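A minimal sketch of the tier-by-tier breakdown might look like the following, which times each stage of a request and reports the slowest one. The tier names and probe functions are hypothetical stand-ins for real instrumentation agents in the browser, application server, and database.

```python
import time
from typing import Callable, Dict

def probe_web_tier() -> None:
    time.sleep(0.05)   # stand-in for an HTTP round trip

def probe_app_tier() -> None:
    time.sleep(0.12)   # stand-in for an application-server call

def probe_db_tier() -> None:
    time.sleep(0.30)   # stand-in for a database query

# Hypothetical probes, one per tier of the stack.
TIERS: Dict[str, Callable[[], None]] = {
    "web": probe_web_tier,
    "app": probe_app_tier,
    "database": probe_db_tier,
}

def break_down_response_time() -> None:
    timings = {}
    for tier, probe in TIERS.items():
        start = time.perf_counter()
        probe()
        timings[tier] = time.perf_counter() - start
    for tier, seconds in timings.items():
        print(f"{tier:>8}: {seconds * 1000:6.1f} ms")
    print(f"Slowest tier: {max(timings, key=timings.get)}")

if __name__ == "__main__":
    break_down_response_time()
```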

As networks, applications, and data continue to grow, performance optimization tools will become more significant and valuable to IT departments. Many vendors promise 99.9% performance, but there is a huge difference between performance and availability. An easy analogy is a water hose: if water is only dripping out of the hose, the water is available, but the hose is not performing at full capacity. Many vendors promote availability as performance; it is necessary to ensure that the enterprise has both.

Automation
With the continuing commoditization and decline of hardware costs, people are more than ever the greatest expense in any IT department. Handling routine tasks by hand in today's evolving heterogeneous environments is a costly, unnecessary burden. Automating these processes frees IT from routine tasks to focus on more strategic activity and application development. Automation should enable IT resources to adjust to changing needs without operator intervention.

But automation does more than free up costly staff members for more productive work; it also speeds up processes to improve availability, ensures that things are done right the first time, and saves costs through more effective management of resources. Here are several examples of what automation technology can do to bring the enterprise closer to the utility computing model:

  • Virtualization and pooling of storage devices: Driving up storage utilization and reducing hardware costs.
  • Simplification of storage management: Automating common tasks from simple graphical interfaces.
  • Virtualization and pooling of compute capacity: Server utilization is notoriously low - at best, 20% - and applications vary over time in their need for processing. Drawing processing resources from a pool of servers drives up server utilization and aligns capacity with the needs of the business.
  • Provisioning a second server anywhere in the world when a server, an operating system, or an application fails: Automated migration of the application makes the failover practically unnoticeable to users (a provisioning sketch follows this list).
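As an illustration of the provisioning scenario in the last bullet, the sketch below rebuilds a failed server's role on a spare machine from a stored configuration template. The spare pool, template format, and apply_template helper are all hypothetical; real provisioning frameworks drive network boot, OS installation, and configuration management behind such an interface.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServerTemplate:
    """Illustrative description of what a provisioned server should run."""
    name: str
    os_image: str
    packages: List[str] = field(default_factory=list)
    app_archive: str = ""

# Hypothetical pool of idle machines kept for failover.
SPARE_POOL = ["spare-01", "spare-02"]

def apply_template(host: str, template: ServerTemplate) -> None:
    # In a real system this would drive network boot, OS installation,
    # package installation, and application deployment on the host.
    print(f"Provisioning {host} from template '{template.name}'")

def provision_replacement(failed_host: str, template: ServerTemplate) -> str:
    """Take a machine from the spare pool and rebuild the failed server's role on it."""
    if not SPARE_POOL:
        raise RuntimeError("No spare capacity available")
    replacement = SPARE_POOL.pop(0)
    apply_template(replacement, template)
    print(f"{failed_host} replaced by {replacement}")
    return replacement

if __name__ == "__main__":
    web_template = ServerTemplate(
        name="web-server",
        os_image="linux-base.img",
        packages=["httpd"],
        app_archive="portal.tar.gz",
    )
    provision_replacement("web-07", web_template)
```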
Attaining Utility Computing
The key question for many companies is, how do we get from where we are today to the utility computing model? How do we put the focus areas of availability, performance, and automation in place? There are five basic steps to consider:
  1. Discover. Inventory your IT assets and their utilization. This may sound fundamental, but many IT departments lack a single, straightforward inventory of storage, server, and application resources with utilization estimates. This is essential for the steps that follow.
  2. Consolidate. Once you have a clear picture of the organization's assets, you can begin to architect the environment to create enhanced availability. The first goal is to build a storage utility with software that allows you to virtualize and consolidate resources to improve administrator productivity and storage utilization. Consolidating processing resources will drive up server utilization in the same way.
  3. Standardize. Classify your applications, and then standardize on a set of integrated software tools and on a reasonable number of storage platforms, each providing a different quality of storage service.
  4. Automate. Now you can start to drive down the amount of time and labor required to request, provision, and manage the environment with automation - the third area of focus. Effective automation drives down cost, improves service levels by reducing intervention, and makes interaction with IT resources a more predictable experience.
  5. Allocate costs. Finally, move to a utility computing, service-provider model by accurately reporting the costs of service-level delivery. The costs may be allocated or charged back to business units on a usage basis (a simple chargeback sketch follows this list).
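As a concrete illustration of step 5, the sketch below computes a monthly chargeback per business unit from metered usage and per-unit rates. The rates, metrics, and usage figures are invented for illustration only; a real chargeback system would draw them from the discovery and metering tools described in the earlier steps.

```python
# Illustrative per-unit rates (assumed values, not from the article).
RATES = {
    "storage_gb": 0.25,     # dollars per GB-month
    "server_hours": 0.10,   # dollars per CPU-hour
    "backup_gb": 0.05,      # dollars per GB backed up
}

# Metered usage per business unit for the month (invented numbers).
USAGE = {
    "finance":   {"storage_gb": 1200, "server_hours": 3000, "backup_gb": 800},
    "marketing": {"storage_gb": 400,  "server_hours": 900,  "backup_gb": 250},
}

def monthly_chargeback(usage: dict, rates: dict) -> dict:
    """Return the cost to charge back to each business unit."""
    return {
        unit: round(sum(amount * rates[metric] for metric, amount in metrics.items()), 2)
        for unit, metrics in usage.items()
    }

if __name__ == "__main__":
    for unit, cost in monthly_chargeback(USAGE, RATES).items():
        print(f"{unit}: ${cost:,.2f}")
```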
As these steps are taken, IT moves closer to the utility computing requisites: data and applications that are always available, performance that is maintained at specified levels, and IT management that is highly automated to reduce costs. Efficiency rises, and hardware and administrative costs drop. Ultimately, allocating costs helps align IT with business operations. These are achievable benefits, and they are being enjoyed today by progressive enterprises.
About Mark Bregman
Mark F. Bregman is Senior Vice President and Chief Technology Officer at Neustar. He joined the Neustar executive team in August 2011 and is responsible for Neustar’s product technology strategy and product development efforts.

Prior to joining Neustar, Dr. Bregman was Executive Vice President and Chief Technology Officer of Symantec, a role he took on in 2006. His portfolio as CTO of Symantec Corporation included developing the company's technology strategy and overseeing its investments in advanced research and development, security, and technology services.

Prior to Symantec, Dr. Bregman served as Executive Vice President, Product Operations at Veritas Corporation, which merged with Symantec in 2005. Prior to Veritas, he was CEO of AirMedia, an early mobile content marketplace, and spent 16 years in a variety of roles at IBM. Dr. Bregman serves on the Board of the Bay Area Science & Innovation Consortium and the Anita Borg Institute, which focuses on increasing the impact of women on all aspects of technology.
