If you follow enterprise IT, you can’t miss SAP’s HANA in-memory database. SAP claims this product is a game changer, and they have managed to run their enterprise software on it. It also runs in the Cloud — both the Amazon Cloud and now SAP’s own Cloud.
True, software will run faster on an in-memory system than on a traditional disk-based install. And as you would expect, your disk footprint drops accordingly. But we’ve had the technology to emulate storage in memory for decades. I ran a computer lab back in the early 1990s, and I’d use RAM disk software on Macintosh Plus computers. They didn’t have hard drives (hard to imagine now!), so I’d boot them with a RAM disk so I could eject the boot floppy. Today, you can run SAP software in HANA and free up your storage the same way my RAM disk trick freed up that old floppy drive.
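If you want to reproduce that RAM-disk trick on a modern machine, the closest everyday equivalent is a RAM-backed filesystem such as Linux’s /dev/shm. Here’s a minimal sketch, assuming a Linux host (it falls back to the ordinary temp directory elsewhere, where the file may be disk-backed):

```python
import os
import tempfile

# /dev/shm is a RAM-backed tmpfs on most Linux systems -- the modern
# equivalent of a 1990s RAM disk. Assumption: Linux host; other
# platforms fall back to the regular (possibly disk-backed) temp dir.
ram_dir = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()

path = os.path.join(ram_dir, "ramdisk_demo.txt")
with open(path, "w") as f:
    f.write("stored in RAM, not on disk")

with open(path) as f:
    print(f.read())  # the data lives in memory; a power blip or reboot wipes it

os.remove(path)  # nothing persists -- exactly the volatility trade-off
```

The trade-off is the same one the rest of this piece is about: reads and writes are fast, and nothing survives a crash.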
It goes without saying that today’s systems have far more RAM than the 1.5 megabytes my old Mac Pluses had. But then as now, RAM is more expensive than magnetic (hard drive) storage, megabyte for megabyte. And of course, RAM is more volatile. One blip in the power supply and goodbye data. Just ask any of my students who lost their work when their Macs crashed.
In my opinion, however, the problem with an in-memory system like HANA is that it gives you an excuse to run poor software, since after all, even the crappiest software will run faster in-memory. But if you think that running a partition-intolerant relational database in-memory will make it any more Cloud friendly, then you’re missing the point of the Cloud.
Cloud storage is plentiful, cheap, and horizontally scalable. Memory per virtual machine (VM) is limited and volatile. The last thing you want to do, then, is run a partition-intolerant system in-memory: lose one node and you lose its data, and a system that can’t partition its data across nodes can’t scale out to compensate. HANA is nothing more than a stopgap measure by a desperate vendor trying to maintain its revenue in the face of an ancient code base that is inherently Cloud-unfriendly.