The Industry Needs to Stop Reacting to Outages
Virtualization, cloud and mobile have all demanded and enabled IT to extend the reach of mission-critical applications

It feels as if we can't go a week anymore without hearing about a new breach or outage. For years, IT departments stood by in case the unimaginable happened and were judged by how quickly they could contain a bad situation. These days, however, it's not good enough to fix a problem - even if it's taken care of within a few minutes. Questions start to arise the minute something goes wrong, and to really demonstrate strength, IT departments have to stop the problem before it happens. Magic? No, it's just being proactive, and it's more imperative than ever that IT closely monitor the health of its infrastructure to keep the business running.

IT departments experience performance and availability issues on a daily basis, and these often go undiscovered until end users complain to customer service representatives or help desks. As IT environments become more complex, it's increasingly important to identify where problems originate so that downtime and performance-impacting events can be headed off before they occur. How can IT predict the unimaginable?

First, stop focusing on troubleshooting. Companies like NASDAQ, Facebook, LinkedIn and Yahoo! experienced crippling outages in 2013 that impacted customers and hurt their bottom lines. What did they have in common? They weren't able to detect the problem until it was too late. These companies surely had the resources to catch the issues before their customers were affected, yet their failure to implement a solution that could address problems before they happen cost them time, money and a hit to their reputation.

Technology issues are the last thing one would expect to negatively impact a company's brand, but in reality nothing is more crucial to running a business than its datacenter infrastructure. Infrastructures today are expected to perform better, faster and more consistently than ever before. Couple this with the need to adapt to an exponentially increasing rate of change, and you have a recipe for disaster.

IT organizations are now looking to consolidate data centers to reduce costs and improve efficiency. Many are turning to virtualization technology to help them get more value out of their existing assets while reducing their environmental impact. But virtualization adds another layer of complexity, making it difficult to see through the layers of abstraction into the underlying infrastructure. Most companies see a high-level view, but many are missing a huge piece of the puzzle: the underlying infrastructure that underpins virtualization and application support in the enterprise.

Why is it so important to see that piece of the puzzle? With a full view of their infrastructure, organizations are much more likely to catch performance issues earlier and resolve them more quickly. The differentiating factor is that when you are continuously viewing your entire infrastructure, you are better able to spot trends and recognize patterns. When something is off, it stands out quickly because you know what to look for. Just as the security industry evolved around threat detection, the IT industry needs to evolve around infrastructure management.
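
To make that idea concrete, here is a minimal sketch in Python of flagging a metric sample that deviates sharply from its recent rolling baseline. It is an illustration only, not any vendor's implementation; the latency stream, window size and three-sigma threshold are assumptions chosen for the example.

    # Minimal sketch: flag samples that break from the learned rolling baseline.
    # The metric (per-minute I/O latency), window and 3-sigma threshold are
    # illustrative assumptions, not a prescription.
    from collections import deque
    from statistics import mean, stdev

    def detect_anomalies(samples, window=60, sigmas=3.0):
        baseline = deque(maxlen=window)
        flagged = []
        for t, value in enumerate(samples):
            if len(baseline) == window:
                mu, sd = mean(baseline), stdev(baseline)
                if sd > 0 and abs(value - mu) > sigmas * sd:
                    flagged.append((t, value, mu))  # stands out against the usual pattern
            baseline.append(value)
        return flagged

    # Steady ~5 ms latency, then a sudden spike that the baseline check catches.
    latencies = [5.0 + 0.2 * (i % 3) for i in range(120)] + [42.0]
    print(detect_anomalies(latencies))

The point is not the arithmetic but the habit: a continuously maintained baseline is what makes an anomaly obvious the moment it appears.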

Three major technology developments - virtualization, cloud and mobile - have all demanded and enabled IT to extend the reach of mission-critical applications, but they have also limited the enterprise's ability to manage the underlying systems infrastructure. Because of these developments, the IT operations team is constantly chasing problems that are increasingly difficult to find and resolve. Virtualization in particular demands a balancing act: ensure the required performance is available while driving the highest possible level of utilization. Otherwise you've overprovisioned and are wasting cycles, money and resources.
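
The sketch below shows that balancing act as a simple classification, with hypothetical host names and an assumed 35-75% target band: hosts running too cold are wasting money, and hosts running too hot have no performance headroom left.

    # Minimal sketch of the utilization-vs-performance balancing act.
    # Host names and the 35-75% target band are illustrative assumptions.
    def classify_hosts(utilization, low=0.35, high=0.75):
        buckets = {"overprovisioned": [], "balanced": [], "at_risk": []}
        for host, cpu in utilization.items():
            if cpu < low:
                buckets["overprovisioned"].append(host)  # wasted cycles and spend
            elif cpu > high:
                buckets["at_risk"].append(host)          # little headroom for demand spikes
            else:
                buckets["balanced"].append(host)
        return buckets

    print(classify_hosts({"esx-01": 0.22, "esx-02": 0.61, "esx-03": 0.88}))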

Society today expects business applications to be available 24/7, without delay, and the old way of thinking - buy more boxes or throw hardware at the problem - only makes matters worse. What most IT organizations do not realize is that the solution is right there, within their existing infrastructure. They must recognize the importance of regularly monitoring and proactively searching for the symptoms that lead to the next breach or outage.

By using technologies that shine a light into the darkest parts of the datacenter and arm users with definitive insight into the performance, health and utilization of the infrastructure, organizations can shift their focus to finding trouble before it starts. Instead of being reactive, we can become a proactive industry that diagnoses and resolves issues before they start negatively impacting the business. The result? Greatly improved performance of existing infrastructures, enabling IT to align actual workloads with requirements and drive the highest levels of performance and availability at the optimal cost.

About John Gentry
John Gentry has been with Virtual Instruments since early 2009, leading the team responsible for bringing Virtual Instruments' message and vision to market. He brings 18 years of experience in Marketing, Product Marketing, Sales and Sales Engineering in the Open Systems and Storage ecosystem. He double-majored in Economics and Sociology at the University of California, Santa Cruz. John first entered the technology field as a marketing intern for Borland International, where he went on to become the first Product Marketing Manager for their Java Compiler. Since then he has held various positions at mid-sized systems integrators and established storage networking companies.
