Just how fast or slow is cloud computing? Well, a lot depends on how you define the cloud and what tests you run.
Indeed, it's fair to say that the term cloud is so broad as to be almost meaningless. It can encompass a huge number of things: software, services, network infrastructure, "rental metal" (infrastructure-as-a-service), and hosted web applications (software-as-a-service). It can also refer to in-house (private) or third-party (public) business models.
Public clouds are based on the economics of sharing, says Alistair Croll, a member of Bitcurrent and a principal at startup accelerator Rednod. Bitcurrent is part blog, part analyst firm, and part resource site for web operations.
"Cloud providers can charge less, and sell computing on an hourly basis without long-term contracts, because they're spreading costs and skills across many customers," says Croll.
Of course, a shared cloud model means that an application competes for limited resources with other users' applications.
"The pact you make with a public cloud, for better or worse, is that the advantages of elasticity and pay-as-you-go economics outweigh any problems you'll face," says Croll.
Enterprises, he says, are skeptical about the value of clouds, because they must relinquish control of the underlying networks and architectures on which their applications run.
Is performance acceptable? Are clouds reliable? What are the tradeoffs?
To find out some, if not all, of the answers about the performance of leading cloud platforms, Bitcurrent teamed up with monitoring firm Webmetrics to run hundreds of tests from multiple locations against Amazon, Google, Salesforce, Rackspace, and Terremark.
"We built and deployed custom test agents on each cloud, and crunched hundreds of megabytes of log data for a month," says Croll. The report measures service response, network performance, CPU, and internal I/O.
The agents were:

* a simple web request for a trivially small, static object (a 1x1-pixel GIF), to measure the baseline responsiveness of the system;
* a request for a large (2 MB) object, to measure network throughput;
* a request that triggered a million mathematically intensive calculations, to test compute power;
* a request that searched 500,000 rows of a database for a string, to test the back-end I/O of the system.
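To make the four workloads concrete, here is a minimal Python sketch of agents along these lines. This is a hypothetical reimplementation for illustration only; the study's actual agent code ships with the report, and details such as payload contents, the math kernel, and the row format are assumptions here.

```python
import math
import time

# Minimal 1x1 transparent GIF (43 bytes) -- the "trivially small static object".
TINY_GIF = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
            b"!\xf9\x04\x01\x00\x00\x00\x00"
            b",\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;")

def small_object() -> bytes:
    """Agent 1: return a tiny static object (baseline responsiveness)."""
    return TINY_GIF

def large_object(size: int = 2 * 1024 * 1024) -> bytes:
    """Agent 2: return a ~2 MB payload (network throughput)."""
    return b"x" * size

def cpu_agent(iterations: int = 1_000_000) -> float:
    """Agent 3: run a million math-heavy calculations (compute power)."""
    total = 0.0
    for i in range(1, iterations + 1):
        total += math.sqrt(i) * math.sin(i)
    return total

def io_agent(rows: list[str], needle: str) -> int:
    """Agent 4: scan rows for a string (stand-in for the 500,000-row DB search)."""
    return sum(1 for row in rows if needle in row)

if __name__ == "__main__":
    rows = [f"record-{i}" for i in range(500_000)]
    workloads = [
        ("small", small_object),
        ("large", large_object),
        ("cpu", cpu_agent),
        ("io", lambda: io_agent(rows, "record-499999")),
    ]
    for name, fn in workloads:
        start = time.perf_counter()
        fn()
        print(f"{name:>5}: {time.perf_counter() - start:.3f}s")
```

In the study these workloads ran as HTTP endpoints on each cloud and were hit from external monitoring locations; the in-memory list here merely stands in for the database table.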
The results show that performance varies widely by test type and cloud.
Here are some of the other findings:
* All of the services handled the small image well.
* PaaS clouds (App Engine, Force.com) were more efficient at delivering the large object, possibly because of their ability to distribute workload out to caching tiers better than an individual virtual machine can do.
* Force.com didn't handle CPU workloads well, even with a tenth of the load of other agents.
* Amazon was slow for CPU, but Webmetrics was using the least-powerful of Amazon's EC2 machines.
* Google's ability to handle I/O, even under heavy load, was unmatched. Rackspace also dispatched the I/O tests quickly.
Clearly, there's no single "best" cloud platform, concludes the study.
PaaS (App Engine, Force.com) scales easily, but locks you in. IaaS (Rackspace, Amazon, and Terremark) offers portability, but leaves you doing all the scaling work yourself.
The full study -- complete with detailed conclusions, test methodology, and even agent code -- can be downloaded for free from Webmetrics.