- Date published:
- Author: Brian Wood
This is an interesting article by 451 Research analyst Owen Rogers that looks at public cloud computing from a risk perspective.
Since many CIOs and CFOs are risk-averse, it will resonate.
Dedicated resources are only cheaper on a per-unit basis if utilization is high — and the risk of the forecast being wrong is non-trivial.
Emphasis in red added by me.
Brian Wood, VP Marketing
Cost and risk in assessing cloud value
The cloud is often touted as a way of reducing IT costs, but is it always more cost-effective than traditional dedicated infrastructure? This report shows that the cloud’s most attractive economic feature is its ability to reduce business risks by allowing consumers to take advantage of market growth without being penalized for market shrinkage.
To demonstrate this risk-reducing capability, a cost comparison of a public versus private cloud is included, showing how cost and revenue are impacted when assumptions and forecasts prove to be inaccurate. The conclusion is that the public cloud may not always be cheaper than dedicated infrastructure, but it takes the risk and burden of capacity planning away from the consumer, and puts it squarely in the hands of the provider.
Is the cloud always cheaper?
The public cloud enables users to move from capex to variable opex, which means reduced up-front spending on IT without negatively impacting operations. Being able to avoid capital spending on IT and not having to provision for peak capacity are winning tickets for CIOs to take to the board as they fight for IT budgets. It’s important to recognize, however, that turning capex into opex doesn’t necessarily change the bottom line.
The per-hour cost of a public cloud is generally going to be more expensive than the per-hour cost of using a private cloud or traditional hosted environment, at least when resources are being consumed. Overall cost savings are achieved using public cloud because when resources aren’t being consumed, there is no cost; with a private cloud or dedicated infrastructure, there will be a cost even if resources aren’t being used.
So in a public cloud, if the same amount of resource is being consumed all the time, then the overall cost will be more than a private cloud or dedicated server. On the other hand, if resources are only occasionally being consumed in the public cloud, then the overall cost will be less than using dedicated technology. Determining where the threshold occurs (i.e., at what point is it more cost-effective to move to cloud than remain on dedicated) is difficult, simply because it’s impossible to know exactly what resources will be demanded in the future.
To determine the costs of public cloud versus dedicated infrastructure (including private cloud), a forecast is needed of future demand for resources. This demand can be used to calculate the cost of using a public cloud, which can then be compared with the cost of building a private cloud to meet the anticipated demands. Then it’s just a question of comparing the costs of using a public cloud, a private cloud or a dedicated infrastructure (perhaps the current infrastructure) against each other.
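The break-even point described here can be sketched numerically. The hourly rates below are hypothetical, chosen only to illustrate the calculation: dedicated capacity is billed whether or not it is used, while public cloud is billed only while resources run.

```python
# Sketch: break-even utilization between public cloud (pay while running)
# and dedicated capacity (pay regardless). Rates are hypothetical.

DEDICATED_RATE = 0.10   # $ per resource-hour, paid whether used or not
PUBLIC_RATE = 0.15      # $ per resource-hour, paid only while running

def monthly_cost(utilization: float, hours_in_month: int = 720) -> tuple[float, float]:
    """Return (dedicated, public) monthly cost for one resource."""
    dedicated = DEDICATED_RATE * hours_in_month            # fixed cost
    public = PUBLIC_RATE * hours_in_month * utilization    # usage-based cost
    return dedicated, public

# Costs are equal when utilization = DEDICATED_RATE / PUBLIC_RATE
break_even = DEDICATED_RATE / PUBLIC_RATE
print(f"Break-even utilization: {break_even:.0%}")  # ~67%

for u in (0.3, break_even, 0.9):
    d, p = monthly_cost(u)
    cheaper = "public" if p < d else "dedicated" if d < p else "either"
    print(f"utilization {u:.0%}: dedicated ${d:.0f}, public ${p:.0f} -> {cheaper}")
```

Below roughly two-thirds utilization the public cloud wins at these rates; above it, dedicated wins. The difficulty, as the article notes, is that utilization must be forecast, not observed.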
An issue of risk
The main issue with this cost-planning approach is the risk of financial loss should the forecast be incorrect. If the forecast is 100% accurate, then this comparison approach will work flawlessly; unfortunately, in reality, this is unlikely. If the forecast underestimates demand and the buyer has opted for a private cloud, then there is a risk the cloud will not be able to support the demand, and performance will suffer. If the forecast overestimates demand and the buyer has opted for a private cloud, then there is a risk that the investment will not pay off. These risks don’t exist in the public cloud, since the buyer can grow and shrink capacity to suit actual demand.
So public cloud reduces risk on the consumer’s part. Sometimes the consumer will pay more, but they do not risk spending money on an expensive infrastructure that doesn’t achieve an ROI; nor do they risk being unable to meet demand. Of course, there are other risks – putting a business-critical application in someone else’s hands is bound to carry a few. Security, availability, regulation and performance are all issues, but in this report, we look at just consumable costs.
Because creating a forecast is difficult, one approach is to follow a ‘what if’ analysis. Don’t just consider the ideal forecast for resource demand (‘the target case’), consider what will happen if the worst, or the best, case occurs for each of the different options (typically a public cloud, a private cloud or keeping the status quo). Then do a spending comparison to see how, cost-wise, these options stack up against each other.
The following is a hypothetical example of such a comparison that also demonstrates the risk-minimizing benefit of cloud computing. Note that the outcome and the conclusions of such a comparison will be different depending on the assumptions, costs, demand profile and accuracy used to make the comparison. We compare a public cloud to a private cloud, but note that this comparison also applies to a dedicated hardware solution, where scaling cannot be done automatically if additional hardware resources are not available.
Creating a forecast
An IT director at a Web application company has produced a forecast of likely demand to be experienced over the next 36 months. Making such a forecast is not necessarily an easy feat; countless textbooks have been written on the subject. Some ideas for extracting a baseline forecast for IT resources include:
- A business plan will contain a forecast of revenue. If IT is related to this revenue (such as an e-commerce site), it is possible to link the revenue forecast to the forecast for IT demands. For example, if each virtual machine can support 100 concurrent users on a website, and the mean spend per user is $5, then to drive an hourly revenue of $1,000, two virtual machines are needed to support that revenue generation.
- Historical data of server utilization or physical growth in a server can show a general trend in demand for resources, which can be used as a baseline or worked in with revenue-growth forecasts. This data can also be used to validate assumptions. Several trending algorithms and techniques can reveal trends (Holt-Winters for seasonal data, moving averages for simple trends).
- A product roadmap can show product releases, software updates, etc. that can drive demand for IT resources.
- A marketing plan can show email campaigns, events, social networking, etc. that might drive demand for IT resources.
- Corporate relationship management systems can show historical data of past sales interest and development, and can also show future pipeline information, which might indicate future demand.
The crucial thing to notice about all these factors is that they are estimates. Nothing is certain.
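The first of these ideas, linking a revenue forecast to resource demand, can be sketched as follows. The per-VM figures come from the business-plan example above; the six-month revenue forecast itself is invented for illustration.

```python
import math

# Sketch: deriving a VM forecast from a revenue forecast.
# 100 concurrent users per VM at a $5 mean spend gives $500 of
# hourly revenue per VM (figures from the example above).
USERS_PER_VM = 100
SPEND_PER_USER = 5
REVENUE_PER_VM = USERS_PER_VM * SPEND_PER_USER   # $500 per VM-hour

def vms_needed(hourly_revenue_target: float) -> int:
    """Round up: a fraction of a VM still means provisioning a whole one."""
    return math.ceil(hourly_revenue_target / REVENUE_PER_VM)

# A hypothetical monthly revenue forecast, mapped to VM demand
revenue_forecast = [400, 900, 1_000, 1_800, 2_500, 3_200]
vm_forecast = [vms_needed(r) for r in revenue_forecast]
print(vm_forecast)  # [1, 2, 2, 4, 5, 7]
```

The output is still only as good as the revenue forecast feeding it, which is exactly the point: every input is an estimate.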
Best case vs. worst case
In our example, the target resource demands (in virtual machines, in this case) are related to the business case for the company. The IT director has also prepared a worst-case scenario, based on how the company will perform if growth is at the lower end of its ability to stay afloat. The best-case scenario is based on whether the company wins a major contract early, which should drive revenue through publicity marketing. These forecasts are shown in the figure below.
Ideally, we would forecast in hours, but for clarity we are going to stick to months. Note that if we were comparing a public cloud to a colocated server, the capacity of our server infrastructure would remain fairly constant over the entire period due to the difficulty of adding servers.
A cloud provider is approached to provide a quote on the cost of delivering a private cloud that can meet this demand. The private cloud will be expanded with additional hardware over time, and a forecast is needed to ensure servers are procured and installed in time for demand. The IT director takes this quote and adds an estimate of the number of staff required to support it and the cost of each staff member. The combined per-month cost turns out to be $100 per resource.
The forecast shows how many virtual machines are required each month to support the target demand, so calculating the cost of using a public cloud is simply a matter of multiplying this demand by the cost per month obtainable from the cloud provider (again, hours would be better, but our forecast is in months for clarity).
The price per resource per month comes in at $150 for the public cloud, clearly more expensive than the dedicated private cloud. The figure below shows the overall cost of private vs. public, to support the target demand profile.
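As a rough sketch of this comparison, using the $100 and $150 per-resource monthly rates from the report but an invented demand profile (the report's own figures are in the charts, not reproduced here):

```python
# Sketch of the cost comparison: the private cloud is provisioned (and paid
# for) to match the target forecast at $100/resource-month, while the public
# cloud is billed only for what is used at $150/resource-month.
# The demand profile below is hypothetical.

PRIVATE_RATE = 100   # $ per provisioned resource per month
PUBLIC_RATE = 150    # $ per consumed resource per month

target_demand = [2, 3, 4, 5, 6, 8]   # forecast resources per month

private_cost = sum(m * PRIVATE_RATE for m in target_demand)  # capacity tracks forecast
public_cost = sum(m * PUBLIC_RATE for m in target_demand)    # billed on actual use

print(private_cost, public_cost)  # 2800 4200
```

If the forecast holds, the private cloud is cheaper, exactly as the article describes; the interesting cases are when it doesn't hold.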
Great, the IT director’s strategy of using a private cloud and growing it with anticipated demand appears to have paid off – it is cheaper than using a public cloud. But imagine the IT director’s forecast is inaccurate. The worst-case scenario has come true, and the company has a private cloud that can support the target demand, but the infrastructure isn’t being utilized to this extent. The figure below shows the cost comparison.
In this case, the IT director has spent much more money than needed on the private cloud. If he had stuck to public, his company wouldn’t have lost this amount. There is good news though – if business booms and the best case comes true, the private cloud cost is fixed, so it remains cheaper than using a public cloud.
Now let’s consider revenue. In both the target and worst case, the private cloud has managed to offer a minimum performance level. All users who need to access a resource would have been able to in a timely manner. The finance department is aware that each resource, on average, generates $150 of revenue as a result of customers purchasing items from the website.
Since all users can access the public or private cloud in the worst- and target-case scenarios, revenue is the same regardless of deployment method. Unfortunately, when the best case happens and the website is in more demand than anticipated, the private cloud doesn’t have the ability to grow past a certain point. Now users who try to access the website will receive an error or will have to wait for a response. It is well documented that users who are not able to access a website won’t wait long; these users will probably not return, and a source of revenue will have been lost.
This figure shows the revenue obtained using the best-, target- and worst-case scenarios for the different deployment types:
Revenue and cost are important because they are the components of profit. The IT director is not directly responsible for profit, but the business relies, as a whole, on making more money than it spends. The figure below shows the gross profit for each case discussed:
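The profit logic behind these figures can be sketched as follows. The $100 and $150 rates match those above, but the demand profiles and the revenue per served resource are hypothetical stand-ins (the revenue figure is deliberately set above the public rate so the public cloud shows a margin in this toy model), so the dollar amounts will not match the report's.

```python
# Sketch of the profit comparison. All demand profiles and the revenue
# per resource are hypothetical; only the structure mirrors the report.

PRIVATE_RATE = 100            # $ per provisioned resource-month
PUBLIC_RATE = 150             # $ per consumed resource-month
REVENUE_PER_RESOURCE = 300    # $ per served resource-month (assumed)

target = [2, 3, 4, 5, 6, 8]
scenarios = {
    "worst": [1, 1, 2, 2, 3, 3],
    "target": target,
    "best": [2, 4, 6, 8, 10, 12],
}

for name, actual in scenarios.items():
    # Private capacity was built to the target forecast and cannot grow past it,
    # so demand beyond the forecast goes unserved (lost revenue).
    served_private = [min(a, t) for a, t in zip(actual, target)]
    private_profit = (sum(served_private) * REVENUE_PER_RESOURCE
                      - sum(target) * PRIVATE_RATE)
    # Public cloud serves whatever shows up, billed per resource consumed.
    public_profit = sum(actual) * (REVENUE_PER_RESOURCE - PUBLIC_RATE)
    print(f"{name}: private ${private_profit}, public ${public_profit}")
```

Even with invented numbers, the qualitative pattern matches the article: private wins when the forecast is accurate, and public wins both when demand collapses (no sunk capacity cost) and when it booms (no capacity ceiling).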
What does this mean?
In the IT director’s case, the risk assessment can be articulated as follows:
- If the forecast is accurate, the private cloud will increase profitability by $12,000 over the period compared with using a public cloud, generating a $72,000 profit.
- If the forecast is too pessimistic, the private cloud will decrease profitability by $8,000 over the period compared with using a public cloud as a result of failing to support the demand. However, a profit of $70,000 will still be achieved.
- If the forecast is too optimistic, the private cloud will decrease profitability by $43,000 over the period compared with using a public cloud, ultimately resulting in a $23,000 loss.
How this risk assessment is interpreted is often a matter of personal preference. Some would be inclined to stick with public cloud until the expected savings were more certain. Others would disagree; there is no right or wrong answer. Humans are not particularly good at assessing probability or risk.
The benefit of hybrid clouds and best execution venues
Hybrid clouds, in effect, provision for the worst-case scenario: they provide a number of resources that the organization is confident will be utilized enough to make a return on the investment. If demand goes above this, additional resources are available from the public cloud. The use of best execution venues managed through cloud brokers and markets further extends this idea; worst-case requirements are secured through forwards and other derivatives, while unanticipated resources can be purchased on the spot market.
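A minimal sketch of this hybrid cost model, with the same hypothetical rates as before: a committed private baseline is paid for every month, and any demand above it bursts to the public cloud.

```python
# Sketch of hybrid costing: commit to a private baseline you are confident
# will be used, and burst overflow demand to the public cloud.
# Rates and the demand profile are hypothetical.

PRIVATE_RATE = 100   # $ per committed resource-month
PUBLIC_RATE = 150    # $ per burst resource-month

baseline = 3                   # resources the business is sure it needs
demand = [2, 3, 5, 4, 7, 6]    # actual monthly demand

cost = sum(baseline * PRIVATE_RATE               # fixed commitment
           + max(d - baseline, 0) * PUBLIC_RATE  # overflow bursts to public
           for d in demand)
print(cost)  # 3300
```

The commitment caps the sunk-cost risk at the baseline, while the burst capacity removes the ceiling on serving demand.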
The important point to note is that in the public cloud, assumptions and estimates are almost unnecessary. It is the provider who takes the risks, based on its own assumptions, forecasts and plans. If consumers can start and stop resources as they need to, without committing to anything, their risk is minimized.
Dedicated infrastructure, including private clouds, does carry risk – someone must plan and purchase hardware without knowing for certain it will be fully used. Even with the best information in the world, no one has yet found a way to predict the future.
Obviously, this is a very naïve comparison and risk assessment. For a start, we haven’t considered other costs in detail, such as bandwidth or support agreements. We’ve also assumed that the private cloud cannot grow seamlessly and that a forecast is required to order additional capacity. There is also a plethora of other considerations – security, regulation, performance and availability. They all play a part, and have tangible cost implications attached to them.
The crucial takeaway from this report is the concept that the cloud can reduce financial risks, but does not necessarily reduce cost. We’ve used private cloud as a comparison, but any dedicated hardware behaves the same way. Cloud might not always be cheaper, but it will allow you to take advantage of market growth without being penalized for market shrinkage.