The Industry Needs to Stop Reacting to Outages
Virtualization, cloud and mobile have all demanded and enabled IT to extend the reach of mission-critical applications

It feels as if we can't go even a week anymore without hearing about a new breach or outage. For years, IT departments stood by should the unimaginable happen and were judged by how quickly they could contain a bad situation. These days, however, it's not enough to fix a problem, even if it's taken care of within a few minutes. Questions start to arise the minute something goes wrong, and to really show strength IT departments have to stop the problem before it happens. Magic? No, it's just being proactive, and it's more imperative than ever that IT closely monitor the health of its infrastructure to keep the business running.

IT departments experience performance and availability issues on a daily basis, and those issues often go undiscovered until end users complain to customer service representatives or help desks. As IT environments become more complex, it has become increasingly important to identify where problems originate so that downtime and performance-impacting events can be headed off before they occur. How can IT predict the unimaginable?

First, stop focusing on troubleshooting. Companies like NASDAQ, Facebook, LinkedIn and Yahoo! experienced crippling outages in 2013 that impacted customers and hurt their bottom lines. What did they have in common? They weren't able to detect the problem until it was too late. These companies surely have the resources to catch issues before customers are affected, yet their failure to put a solution in place that fixes problems before they happen cost them time and money and took a toll on their reputations.

Technology issues are the last thing one would expect to negatively impact a company's brand, but in reality nothing is more crucial to running a business than its datacenter infrastructure. Infrastructures today are expected to perform better, faster and more consistently than ever before. Couple this with an exponentially increasing rate of change, and you have a recipe for disaster.

IT organizations are now looking to consolidate data centers to reduce costs and improve efficiency. Many are turning to virtualization technology to help them get more value out of their existing assets while reducing their environmental impact. But virtualization adds another layer of complexity, making it difficult to see through the layers of abstraction into the underlying infrastructure. Most companies see a high-level view, but they are often missing a huge piece of the puzzle: the infrastructure that underpins virtualization and application support in the enterprise.

Why is it so important to see that piece of the puzzle? With a full view of their infrastructure, organizations are much more likely to catch performance issues earlier and resolve them more quickly. The differentiating factor is that when you continuously view your entire infrastructure, you are better able to spot trends and recognize patterns. When something is off, it stands out quickly because you know what to look for. Just as the security industry evolved around threat detection, the IT industry needs to evolve around infrastructure management.
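To make the idea of "something standing out against the trend" concrete, here is a minimal sketch (not any vendor's actual method) of baseline-deviation alerting: a metric sample is flagged when it strays far from its own rolling history. The metric, window size and threshold are illustrative assumptions.

```python
# Flag metric samples that deviate sharply from a rolling baseline, so that
# anomalies "stand out" against the established trend.
from collections import deque
from statistics import mean, stdev

def baseline_alerts(samples, window=20, threshold=3.0):
    """Yield (index, value) for samples that break the rolling baseline."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) >= window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value  # this sample does not match the pattern
        history.append(value)

# Example: steady I/O latency (ms) with one sudden spike the baseline catches.
latencies = [5.1, 4.9, 5.0, 5.2, 4.8] * 5 + [19.7]
for idx, val in baseline_alerts(latencies):
    print(f"sample {idx}: {val} ms deviates from baseline")
```

The point is not the arithmetic but the posture: a continuously observed baseline turns a subtle degradation into an obvious outlier long before end users start calling the help desk.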

Three major technology developments - virtualization, cloud and mobile - have all demanded and enabled IT to extend the reach of mission-critical applications, but they have also limited the enterprise's ability to manage the underlying systems infrastructure. Because of these developments, the IT operations team is constantly chasing problems that are increasingly difficult to find and resolve. Virtualization in particular demands a balancing act, as illustrated below: ensure the required performance is available while driving the highest possible level of utilization. Otherwise you've overprovisioned and are wasting cycles, money and resources.
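A hedged sketch of that balancing act: given per-host CPU utilization, flag hosts that are overprovisioned (paying for idle capacity) or running too hot to absorb a spike. The host names and the 20%/80% bands are illustrative assumptions, not prescribed values.

```python
# Classify hosts by how well their utilization balances cost against headroom.
def classify_hosts(utilization, low=0.20, high=0.80):
    """Map each host to 'overprovisioned', 'at risk', or 'balanced'."""
    report = {}
    for host, used in utilization.items():
        if used < low:
            report[host] = "overprovisioned"   # wasting cycles and money
        elif used > high:
            report[host] = "at risk"           # little headroom for demand spikes
        else:
            report[host] = "balanced"
    return report

# Example usage with made-up measurements (fraction of CPU in use).
print(classify_hosts({"esx-01": 0.12, "esx-02": 0.55, "esx-03": 0.91}))
```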

Society today expects business applications to be available 24/7, without delay, and the old way of thinking - buy more boxes and throw hardware at the problem - only makes matters worse. What most IT organizations do not realize is that the solution is right there, within their existing infrastructures. It is imperative that they monitor regularly and proactively search for the symptoms that could lead to the next breach or outage.

By using technologies that shine a light into the darkest parts of the datacenter and arm users with definitive insight into the performance, health and utilization of the infrastructure, organizations can shift their focus to finding trouble before it starts. Instead of being reactive, the industry can become proactive, diagnosing and resolving issues before they start to hurt the business. The result? Greatly improved performance from existing infrastructures, enabling IT to align actual workloads with requirements and drive the highest levels of performance and availability at the optimal cost.

About John Gentry
John Gentry has been with Virtual Instruments since early 2009, leading the team responsible for bringing Virtual Instruments' message and vision to the market. He brings 18 years of experience in marketing, product marketing, sales and sales engineering in the open systems and storage ecosystem. He was a double major in Economics and Sociology at the University of California, Santa Cruz. John first entered the technology field as a marketing intern for Borland International, where he went on to become the first Product Marketing Manager for its Java compiler. Since then he has held various positions at mid-sized systems integrators and established storage networking companies.
