- Author: Brian Wood
As any kid knows, staying up is good — especially if it involves a fun movie and dessert.
As any IT professional knows, going down is bad — very bad, especially if revenue generation is affected.
Below is a summary of recent surveys that reveal some of our collective fears and opportunities.
There’s no time like the present to improve your storage, virtualize your environment, and/or develop a disaster recovery plan.
Article by Joseph Kovar in CRN.
Emphasis in red added by me.
Brian Wood, VP Marketing
6 Surprising Surveys About Causes And Effects Of System Downtime
For solution providers, the possibility of system downtime provides an opportunity to work closely with customers: to comb through their IT infrastructures for potential problems waiting to happen, and to put in place new technology, disaster-recovery plans or business-continuity plans that reduce the chance of a system going down and/or mitigate the impact when one does.
To help open the door to such opportunities, CRN has gleaned some interesting statistics sure to get attention from customers.
In the survey, Gridstore found that 55 percent of midsize businesses have experienced significant business and end-user disruptions as a result of major storage upgrades.
Gridstore also found that 32 percent of midsize businesses experienced failed upgrade processes and 9 percent experienced data loss as a result of upgrades.
Things in 2012 were not much better, the San Francisco-based cloud storage vendor found in its late-2012 survey of 650 IT decision-makers at companies ranging in size from 100 to more than 3,000 employees in the U.S., the U.K., France, Germany and the Netherlands.
About 24 percent of respondents admitted they had not told their CEOs that not all files are being backed up, especially those on mobile devices. About 38 percent admitted to worrying about whether their data is being saved securely, or whether any work has been backed up at all.
They were right to worry. Fifty-three percent of the IT decision-makers said their companies had experienced data loss within the last 12 months, up significantly from the 31 percent who reported data loss in the 2011 survey.
The Continuity Risk Benchmark, based on real-world customer data collected by Continuity Software’s RecoverGuard automated vulnerability-monitoring and detection software, found storage the root cause of downtime and data loss risks in 58 percent of cases, followed by servers at 17 percent, clusters at 11 percent, virtualization and the cloud at 9 percent, and databases at 5 percent.
Data loss was the primary potential business impact in 41 percent of instances, followed by downtime and RTO (recovery time objective) violations at 25 percent, performance at 17 percent, and other impacts at 17 percent.
Storage issues took the longest to resolve at an average of 32 days, followed by server issues at 19 days, cluster issues at 17 days, virtualization issues at nine days, and database issues at seven days.
However, only 39 percent of respondents agreed or strongly agreed that senior management had access to increasing volumes of unstructured marketplace data needed to predict customer needs.
This made it difficult for senior management to analyze current data and to act on the results, according to 45 percent of respondents. Furthermore, over one-fourth of respondents cited managing the volume of data streaming from other sources as their major challenge.
In the survey, 72 percent of PSAPs serving populations of over 80,000 citizens experienced downtime in the past 12 months, with 50 percent having two to four outages and 11 percent experiencing over five. About 60 percent of PSAPs in smaller communities suffered downtime at least once in the past year.
Fifty-seven percent of outages lasted at least 15 minutes, while 26 percent lasted over an hour, according to respondents. Stratus estimated that one hour of downtime could potentially affect six 9-1-1 calls at a PSAP handling 50,000 calls annually.
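Stratus's six-calls-per-hour figure follows from simple averaging. A quick back-of-the-envelope check (assuming, for illustration only, that calls arrive uniformly around the clock, which real 9-1-1 traffic does not):

```python
# Rough check of Stratus's estimate: a PSAP handling 50,000 calls per year,
# averaged over 8,760 hours, fields roughly six calls in any given hour.
calls_per_year = 50_000
hours_per_year = 365 * 24  # 8,760 hours in a non-leap year
calls_per_hour = calls_per_year / hours_per_year
print(round(calls_per_hour, 1))  # ~5.7 calls per hour, i.e. roughly six
```

Since emergency call volume peaks at certain hours, an outage at a busy time could affect considerably more than six calls.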
About 29 percent of respondents said their organizations had no formal disaster-recovery or contingency plan, or did not know if a plan existed.
In the survey, Ponemon also found that 86 percent of companies suffered downtime in the last year, and lost an average of 2.2 days annually. Sixty percent of respondents said human error was the most common cause of downtime.
The biggest cause of downtime, cited by 70 percent of respondents, was moving data between different physical, virtual and cloud environments.