Posted by Jason Bloomberg
on March 5, 2014
One show-stopping problem with automation is that the more dynamic the automation recipe, the slower it ends up running in the operational environment. I ran into a discussion of this problem yet again in a presentation on the Chef Web site.
Chef, of course, is one of a new crop of vendors hawking automation tools for the Cloud. Instead of manually configuring your Cloud environment, the argument goes, use Chef (or Puppet, or one of several other similar tools) to write a recipe, or script, or template for that configuration, so that you can create as diverse and dynamic a configuration as your heart desires.
As the presentation laments, however, configuring, say, an Amazon EC2 instance using a recipe can lead to painfully slow boot times. But have no fear, Amazon has the answer: Amazon Machine Images (AMIs), which are preconfigured in a plethora of different ways to meet a wide range of possible needs. Simply pick the one you want, install it, and you’re off and running.
There are two main problems with the AMI approach, however. First, you need a boatload of AMIs – and the list of available ones keeps increasing. Second, there’s always the chance that no AMI will quite fit your requirement. Then you’re back to hacking individual VMs, which everybody wants to avoid.
The current solution to this conundrum is to mix and match – prebake certain things, but put recipes in your cookbook for others. Then the challenge becomes engineering your cookbook: handling lifecycle-centric issues like QA and versioning for the cookbook itself.
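To make the mix-and-match idea concrete, here is a minimal sketch in Python using boto3, the AWS SDK – purely illustrative, not a recommendation of any particular split. The AMI ID, instance type, and bootstrap script are placeholders; the script stands in for whatever recipe-driven configuration you choose not to prebake.

```python
import boto3

# The pre-baked AMI carries the heavyweight, rarely-changing configuration;
# the user-data script applies only the small, dynamic remainder at boot.
PREBAKED_AMI = "ami-0123456789abcdef0"  # placeholder for your pre-baked image

BOOTSTRAP = """#!/bin/bash
# Stand-in for the recipe-driven portion of the configuration:
# pull environment-specific settings and start the application.
echo "role=web env=staging" > /etc/myapp.conf
service myapp start
"""

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId=PREBAKED_AMI,
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData=BOOTSTRAP,  # the dynamic, recipe-like last mile
)

print("Launched:", response["Instances"][0]["InstanceId"])
```

The point is the split: the slow-to-build layers live in the image, while only the genuinely dynamic last mile runs at boot time – which is exactly where the cookbook-engineering headaches described above creep back in.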
True, performing such engineering is DevOps at its finest, with ops folks looking to dev folks for engineering pointers they can apply to the automation of the ops environment. But if this entire picture of engineered cookbooks with a mix of pre-baked and customized recipes strikes you as inefficient, you’re not alone. Have we really done everything we could to leverage Cloud Computing and DevOps to squeeze inefficiencies out of our approach, or have we simply moved those inefficiencies around a bit?
True, the Cloud-savvy success stories today have navigated these waters and figured out how to make such cookbooks work for them. But this entire approach still smacks of the horseless carriage: taking a new paradigm and bringing old procedures and approaches to it because that’s how you’ve always done things. Any developer who has made the decision whether to use a prebuilt code module or build one themselves has worked through the same tradeoff. So what else is new?
What we need is a new paradigm for automating interactions that rises to the level of expectations that the Cloud sets for us. We need better than pre-written scripts mixed together with pre-baked machine images. The Cloud demands the ability to build fully dynamic, customizable configurations of anything without sacrificing performance. Anything less and you might as well be saddling up for a ride.
Posted by Jason Bloomberg
on February 24, 2014
There are two fundamental architectural paradigms battling for supremacy over the Internet of Things (IoT). First, as the appellation Internet would suggest, we have the Client/Server (C/S) paradigm. At the core of C/S is the notion of one-to-many: one server serving the needs of many clients. We’ve added layers to this structure over the years: the Web brought us thin clients, and SOA exposed server capabilities as contracted interfaces we called Services, but both the Web and SOA follow C/S under the covers.
Perhaps the IoT should follow C/S. The various sensors and controls that characterize the IoT could be thought of as a type of client, and they get their job done by talking to servers somewhere. Those servers, naturally, serve many such clients. Or perhaps not, as there is another architectural paradigm fighting for its spot in the sun: Peer-to-Peer (P2P).
P2P eschews servers, instead basing its core interaction paradigm on communications between endpoint nodes. True, there may be some box in a data center somewhere facilitating this interaction, but the role of that device is to support a lookup service so that nodes can find each other. Instant messaging and Skype are two P2P examples, although P2P file sharing services like BitTorrent are also quite prevalent.
So, maybe P2P is the way for the IoT to go? If we want our thermostat controlling our dishwasher, after all, doesn’t it make sense for the one device to speak directly to the other, perhaps over our home Wi-Fi? We don’t need a server for such interactions, certainly.
Again, perhaps. Clearly there are reasons to follow C/S, especially when you’re counting on the server to provide core functionality – just as there are good reasons to follow P2P, when you’re looking to establish a network of interacting nodes. But even when you use these architectural styles in combination, there remains a broad range of desirable functionality that neither approach can adequately deliver.
The gaping hole with the IoT, of course, is security. I’ve written about this problem before, but what I didn’t point out is that security is really the most visible facet of an even broader challenge: control. As consumers of IoT technology, we are deeply uncomfortable with the notion that somehow we won’t be in control of the various sensors and devices that are springing up all around us. True, we don’t want anyone hacking our automobiles or refrigerators or factory floor equipment, but threat prevention is only part of the battle. In this post-NSA-spying world, we simply don’t like the idea of an IoT device in our proximity that someone else controls.
Fair enough, but we don’t want the burden of controlling each device and sensor manually, either. After all, if we had to manually manage the device that controlled our dishwasher, we might as well simply have an ordinary, 20th century dishwasher! From the perspective of the consumer, the entire value proposition of the IoT centers on automation. If we have to control everything manually, we’ve just defeated the entire purpose of the IoT.
The solution to this conundrum is to delegate control of each IoT node locally to the node itself. In other words, IoT nodes must contain intelligent agents that are able to take initiative based upon the instructions we give them, as well as policies or other metadata that apply to their behavior. In addition to this autonomous, goal-oriented behavior, they must also be social: interaction between agents is a core part of their functionality. And finally, they must be able to learn about their environment and from their previous behavior.
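To make these three properties tangible – and only as an illustration, since AOA is an architectural idea rather than a product – here is a hedged Python sketch of such an agent. Every class, method, and message field here is hypothetical.

```python
class IoTAgent:
    """Hypothetical sketch of an agent-oriented IoT node."""

    def __init__(self, name, goal, policies):
        self.name = name
        self.goal = goal          # e.g. "minimize_peak_load"
        self.policies = policies  # owner-supplied constraints on behavior
        self.peers = []           # other agents it can talk to
        self.history = []         # observations it learns from

    def connect(self, peer):
        # Social: agents interact directly with one another.
        self.peers.append(peer)

    def receive(self, sender, message):
        # Autonomous: decide locally whether a peer's request is
        # consistent with the policies the owner has delegated to us.
        accepted = all(policy(message) for policy in self.policies)
        self.history.append((sender.name, message, accepted))
        return {"status": "accepted" if accepted else "refused", "by": self.name}

    def act(self):
        # Learning (crudely): become more conservative after refusals.
        refusals = sum(1 for _, _, ok in self.history if not ok)
        return "conservative" if refusals else "normal"


# Usage sketch: a thermostat asks a dishwasher to defer its cycle.
no_high_risk = lambda msg: msg.get("risk") != "high"
thermostat = IoTAgent("thermostat", "minimize_peak_load", [no_high_risk])
dishwasher = IoTAgent("dishwasher", "finish_by_morning", [no_high_risk])
thermostat.connect(dishwasher)
print(dishwasher.receive(thermostat, {"request": "defer_cycle", "risk": "low"}))
```

The essential point is where the decision gets made: on the node itself, against policies we supplied, rather than on somebody else’s server.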
Intelligent agents have been around for decades. I first heard of them with the advent of General Magic, a mid-1990s Apple Computer spin-off that sought to build a precursor to the personal digital assistant (PDA) based on networked intelligent agents – and the Wikipedia article I just referenced describes a system that is eerily like the agent-based IoT I’m proposing in this article.
I ran into agents again in the SOA era with SOA Software’s Network Director product, a set of distributed agents that act as lightweight intermediaries to support the Service abstraction in conjunction with ESBs or other traditional middleware. And of course, we’re all familiar with Apple Siri, the snarky intelligent agent on the iPhone.
To address the architectural and security issues with the IoT, however, intelligent agents must represent more than a smarter way to build IoT nodes. We must actually move to a new architectural paradigm – an architectural style I hazard to label Agent-Oriented Architecture (AOA). AOA inherits some characteristics of C/S and P2P, but is truly a different paradigm, because the agents control the interactions. And as long as the agents do our bidding, AOA puts us – the technology consumer – in control of the IoT.
Posted by Jason Bloomberg
on February 17, 2014
Are you recreating existing technology silos in the Cloud? If so, your entire enterprise investment in the Cloud is at risk.
From the perspective of IT, organizational silos seem to be the root of all problems. Every line of business, every department, every functional area has its own requirements, its own technology preferences, and its own way of doing things. They have historically invested in specialized components for narrow purposes, which IT must then integrate, conventionally via application middleware – increasing the cost, complexity, and brittleness of the overall architecture.
Now those same stakeholders want to move to the Cloud. Save money with SaaS apps! Reduce data center costs with IaaS! Build a single Private Cloud we can all share! But breaking down the technical silos is easier said than done. There are endless problems: Static interfaces. Legacy technology. Inconsistent policies, rules, and processes. Crusty old middleware that predates the Cloud. And everybody still has their own data model and their own version of the truth.
The Cloud alone can’t solve these complex challenges. In fact, the challenge is not entirely within the realm of IT. Organizational change is also necessary – and of course, such change is the most difficult to achieve, especially when the underlying force driving the transformation is technological. Enterprise architecture is part of the solution, of course, but when the business stakeholders resist necessary change, no amount of exhortation from the architecture team will make much difference.
How, then, do we find our way out of this impasse? In particular, how can the IT organization drive the organizational change necessary to break down the silos necessary to achieve strategic value with the Cloud? The answer, of course, is money. Whenever IT (or any other part of the business, for that matter) wants to effect change in their organization, they must translate their message into the financial benefits their organization can achieve by making the change.
Siloed technology, fundamentally, is inefficient. It’s expensive to purchase, to maintain, and in particular, to integrate. We need a better approach to implementing technology that brings silos together, while allowing the personalization and customization that meets stakeholder needs. It’s time to rethink how we handle both data and code to align with the storage and processing model of the Cloud: distributed, horizontally scalable, and event-driven. We need an intelligent, active approach to building and running applications that is both dynamic and inherently Cloud-friendly.
Such an approach will take some time to establish, as most IT departments must rebuild long-lost credibility with business stakeholders. But by achieving greater levels of efficiency and cost savings while driving toward the strategic goals of the enterprise, IT can become a positive force for business transformation across the organization. First, however, it must get the technology right.
Posted by Jason Bloomberg
on February 13, 2014
Platform-as-a-Service (PaaS) has hit a bit of a rough spot lately. Unlike its big brothers SaaS and IaaS, PaaS is a diverse, even fragmented market in which providers are still looking for traction. In fact, there’s even a question as to whether PaaS is really a market at all. Perhaps it’s just a part of IaaS? After all, Amazon Web Services offers PaaS. Or maybe it’s a part of SaaS?
Even if PaaS remains a true market segment, the players in this market are surprisingly diverse. Application development, test, and deployment as a Service is at the core of PaaS, but PaaS could easily include Database-as-a-Service, Integration-as-a-Service, Identity-Management-as-a-Service, and many other subcategories.
Who, then, is the final arbiter as to what constitutes the PaaS market? Fundamentally, customers do, when they vote with their dollars. But in advance of established customer purchasing patterns, it falls to the industry analysts to provide market definitions in order to inform customer purchasing (as well as investor) decisions as markets emerge. And the 800-pound gorilla in the IT industry is indisputably Gartner.
Gartner, therefore, is in the enviable position of clarifying what PaaS really means. Here, then, is Gartner’s definition of PaaS:
“’PaaS’ is the term generally accepted by the industry to indicate application infrastructure (middleware) functionality, enriched with cloud characteristics and offered as a cloud service (encapsulating and hiding the underlying system infrastructure). Gartner refers to it more precisely as ‘cloud application infrastructure services’ and has identified 15 classes of PaaS, each roughly mirroring a corresponding class of on-premises middleware products.”
Unfortunately, this definition shows a surprising lack of understanding of the transformative potential of PaaS. By stating that PaaS is middleware “enriched” with Cloud characteristics, Gartner is implying that such Cloud characteristics are superficial. In other words, scratch PaaS and you’ll find traditional middleware underneath.
Today’s established middleware vendors are no doubt ecstatic with this definition. After all, they can take their existing “application infrastructure services” and enrich them with some added Cloudy bells and whistles, as though they were enriching their Frosted Flakes with vitamins and minerals.
But vitamins and minerals don’t change those flakes into health food, folks. And no “enrichment” will take existing middleware and make it PaaS.
Fundamentally, PaaS represents a new paradigm for creating and delivering software functionality, and the players who in the end will succeed with their PaaS offerings will be the service providers who understand and can capitalize on this paradigm shift.
If you’re a Gartner customer and actually listen to their advice, however, you may miss the entire PaaS differentiation, which will limit your success with the Cloud. But even worse, Gartner’s influence on the marketplace may actually limit the ability for some transformative PaaS players to get the investor and customer traction they need to be successful.
Posted by Jason Bloomberg
on February 10, 2014
I attended SIGS DATACOMM OOP 2014 last week in Munich, Germany. I passed on the German language sessions – my high school Deutsch not sufficiently wunderbar – but among the English language sessions was the keynote from Martin Fowler.
All software developers will immediately recognize Fowler’s name as one of the fathers of modern software development, and in particular, of the Agile movement – and the first part of his talk on refactoring didn’t disappoint. But it was the second half of his keynote I found the most notable.
He took this opportunity in front of a sympathetic crowd of around 500 developers, mostly male and mostly German, to warn the crowd about the dangers of dark patterns. Dark patterns, for the uninitiated, represent the production of intentionally deceptive software. After all, someone has to code all those phishing attacks and porn sites full of crapware. Fowler figured some of those developers were in the audience, and he wanted them to stop.
Fowler’s cause is noble, to be sure. The last thing we need in this world is more spam, online scams, or deceptive Web sites. And certainly if there were any developers in the room considering a career in such dark patterns, or perhaps already embarking on such a career, Fowler’s evangelism might cause an engineer or two to reconsider the error of his (or her) ways.
Unfortunately, I don’t believe Fowler’s efforts will make any difference. There is simply too much money in dark patterns. After all, true black hat hacking consists of going over to the dark side, and whether the hacker’s motivation is economic or political, the moralizing of one thought leader won’t outweigh such motivations.
But what about the borderline cases? Those presumably young, impressionable developers who might get sucked into a career of building better spam engines or what have you, if not for the proselytizing of one authority figure they respect and admire? Perhaps, but I wouldn’t hold my breath. The lure of dark patterns is too great. If we’re ever going to find a successful way to fight hackers, malware, crapware, adware, and all the other forces of the dark side, we’ll need more than a handful of leaders showing the impressionable masses the way to the light.
Posted by Jason Bloomberg
on February 2, 2014
According to Wikipedia, late binding is a mechanism in which the method being called upon an object is looked up by name at runtime. This mechanism was useful in Microsoft’s Component Object Model (COM), because compilers wouldn’t have to reference libraries at compile time. Late binding is related to dynamic dispatch, which is the process of selecting which implementation of a polymorphic operation (method or function) to call at runtime.
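For readers who prefer code to COM terminology, here is a small illustration of both ideas in Python – my own sketch, not tied to any of the technologies discussed here. The method is looked up by name at runtime, and the implementation actually invoked depends on the runtime type of the object.

```python
import json

class CsvExporter:
    def export(self, records):
        return ",".join(records)

class JsonExporter:
    def export(self, records):
        return json.dumps(records)

def invoke(obj, method_name, *args):
    # Late binding: the method is resolved by name at runtime,
    # not against a fixed interface at compile time.
    return getattr(obj, method_name)(*args)

# Dynamic dispatch: the same call selects a different implementation
# depending on the runtime type of the object it lands on.
for exporter in (CsvExporter(), JsonExporter()):
    print(invoke(exporter, "export", ["alpha", "beta"]))
```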
The rise of Web Services in the early 2000s extended the definition of late binding. In the original vision for Web Services, Service consumers would look up the WSDL file for a desired Service in a UDDI registry, and bind to the resulting Service at runtime based upon the particular WSDL and the instructions the consumer would find inside it.
Only such “dynamic discovery” proved impractical in most situations, and furthermore, UDDI as a standard proved awkward and fundamentally useless. Instead, late binding in the SOA context represented a dynamic lookup of the endpoint location of a Web Service, where the WSDL file was already available for developers when they created or configured the consumer.
While this maturation of Web Services followed a convoluted path, REST cut through the noise with a more direct approach. Dynamic lookups follow the pattern of DNS, where a gateway intermediary can resolve one URI into another. We all know how DNS works, so why complicate matters?
Simply finding the desired Service or resource endpoint, however, is only the price of admission. Even with REST we still have the problem of ensuring the client and resource agree on other metadata associated with the distributed interaction. Such metadata may include data schemas, policies, or richer semantic content related to the interaction. REST addresses this problem via hypermedia: the client interacts with the resource by repeatedly following the hyperlinks the resource returns. Done properly, hypermedia drive the application on the client – but most RESTafarians can’t get this to work, and furthermore, hypermedia don’t address the broader enterprise integration challenge where the client is an arbitrary piece of software and the application isn’t ensconced in the client.
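As a hedged illustration of the hypermedia style – the resource URL, link relations, and response shape below are invented for the example, not drawn from any standard – the client never hard-codes the next URI; it reads it out of the previous response and follows it:

```python
import requests  # assumes the requests library is installed

ENTRY_POINT = "https://api.example.com/orders/42"  # hypothetical resource

def follow(relation, url=ENTRY_POINT):
    """Follow a named link relation advertised by the server."""
    doc = requests.get(url).json()
    # Hypermedia drives the application: the next step comes from the
    # response itself, not from knowledge baked into the client.
    next_url = doc["_links"][relation]["href"]
    return requests.get(next_url).json()

# The client knows only the entry point and the relation it cares about;
# the server is free to move the payment resource without breaking it.
payment = follow("payment")
```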
Cut to 2014. We’ve traveled the SOA gauntlet, and we’ve leveraged some aspects of REST (although maybe not the hypermedia bits). Today the story is the API Economy. But what are APIs but static interfaces? Even with the addition of contract metadata (having learned the lessons of Web Services), we still have the late binding challenge: how do two arbitrary endpoints understand all the metadata relevant to their interaction?
We need what I like to call extreme late binding: the ability for distributed computing endpoints to negotiate their interaction at runtime by leveraging all the metadata that apply to that interaction. Without such extreme late binding, even today’s modern APIs are just as inflexible and static as the fixed, early bound APIs of old.
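What might extreme late binding look like? Purely as a thought experiment – none of this reflects an existing standard or product – imagine each endpoint advertising the metadata that governs its side of the interaction (formats, security, schema versions), with a negotiation step at runtime that finds the intersection before any business message flows:

```python
# Hypothetical sketch: two endpoints negotiate their interaction at runtime
# from whatever metadata each advertises. All names and fields are invented.

consumer_metadata = {
    "formats": {"json", "xml"},
    "auth": {"oauth2"},
    "schema_versions": {"1.2", "1.3"},
}

provider_metadata = {
    "formats": {"json"},
    "auth": {"oauth2", "api_key"},
    "schema_versions": {"1.3", "2.0"},
}

def negotiate(consumer, provider):
    """Intersect the metadata both sides can live with, at runtime."""
    agreement = {}
    for aspect in consumer.keys() & provider.keys():
        common = consumer[aspect] & provider[aspect]
        if not common:
            raise ValueError(f"no common ground on {aspect}")
        agreement[aspect] = max(common)  # pick one shared option
    return agreement

print(negotiate(consumer_metadata, provider_metadata))
# e.g. {'formats': 'json', 'auth': 'oauth2', 'schema_versions': '1.3'}
```

The contract, in other words, is computed at the moment of interaction rather than frozen into the API ahead of time.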
Posted by Jason Bloomberg
on January 27, 2014
Cloudwashing has always been one of the greatest challenges of the emerging Cloud Computing marketplace. At its core, we define Cloudwashing as saying something is Cloud when it really isn’t. This form of obfuscation is popular among software vendors and managed hosting providers who want to jump on the Cloud bandwagon but whose offerings aren’t yet up to snuff. Cloudwashing is also popular among CIOs and other IT denizens who want to convince their bosses that they’re really doing Cloud when in fact they aren’t.
This pattern of exaggeration isn’t unique to Cloud, of course. Any time there’s a new approach or technology that is difficult to implement, people on both the sell side and buy side of the IT marketplace will trumpet their successes with the new approach, regardless of whether they’re really up to speed on the new technology.
In fact, the rise of SOA in the 2000s was fraught with the same sort of obfuscation we’re seeing now—only we didn’t call it Cloudwashing, of course. This intentional mass confusion over SOA had many similarities to today’s Cloudwashing, but there were some important differences. Because SOA was an architectural approach, vendors couldn’t sell it – not because there was anything wrong or immature about their products, but fundamentally because SOA is something you do, not something you buy.
Cloud Computing, in contrast, is something you can buy. You would think, therefore, that it’s only a matter of time until Cloudwashing goes away. Give today’s Cloud providers (software and hardware vendors as well as the service providers) enough time to mature their offerings, and there will be no need to Cloudwash anymore, right?
Only that’s not what’s happening now – or at least, not all that’s happening. There are a plethora of Cloud vendors and service providers who are doubling down on their Cloudwashing. Instead of maturing their offerings in order to tell a true Cloud story, they have taken a sharp left turn, and now they’re trumpeting offerings that aren’t true Cloud offerings, not because they’re immature, but because they feel that their customers don’t really want Cloud after all.
For the companies who are following this path, here is their reasoning: Public Cloud is scary, so customers want Private Cloud. But Private Cloud is too expensive and difficult, so they don’t really want Private Cloud, either. What they really want is managed hosting. Or perhaps: the customer says they want SaaS, but they really want a Web-based application. Regardless, the basic pattern is that the customer doesn’t really want Cloud, they really want something else, but they want us to call it Cloud, so that’s what we’ll do.
What is so frightening about this trend is that in many cases, these vendors are right. Their customers don’t really want Cloud. Sure, they want to say they have Cloud, but they have various business reasons to shun the essential characteristics of Cloud Computing. Unlimited elasticity? Our application architecture doesn’t support it. Automated self-service provisioning? We don’t trust our users. Pay as you go pricing? No clue how to budget for that. Fully automated operational environment? Won’t work with the hodgepodge of equipment we currently have in our data center. Multitenancy? Yuck, sounds like computing in a public restroom – who knows what kind of scumbag is in the next stall.
But of course, these doom-and-gloom Cloudwashers are not always right. True Cloud Computing – elastic, multitenant, self-service, automated, pay as you go Cloud Computing – is huge, and it’s here to stay. Furthermore, true Cloud is making plenty of inroads into large enterprises – big companies and government agencies with plenty of legacy to go around. So what’s really going on here?
The underlying story is one of transformation. Cloud Computing does not just represent new ways of procuring IT assets or software. It represents new ways of doing business. Cloud, along with other transformative trends in IT, including the rise of mobile technologies, the Internet of Things, and the Agile Architecture that facilitates the entire mess, is in the process of revolutionizing how businesses – and people – leverage technology. And as with any revolution, the change is both difficult to implement and impossible to understand while it’s happening. The choices facing today’s executives are far more complex than “is it Cloud or isn’t it,” or “should we do Cloud or not.” Instead, the question is how to keep your eye on your business goals as technology change transforms the entire business landscape.
Posted by Jason Bloomberg
on January 22, 2014
As the author of The Agile Architecture Revolution, one of the questions I get now that I work for a software vendor is how Enterprise Architecture can help my company, EnterpriseWeb, sell its software platform. After all, the only way to sell anything (software or no) is to solve somebody’s problem at a price they are willing to pay. What does that have to do with EA?
Everything, in fact. True EA principles (in particular, Agile EA principles) are focused on business value – although there are plenty of EAs out there who don’t connect the dots properly. But yes, we must always connect those dots.
As I explained in my Agile Architecture course, there are three things that keep the business stakeholder up at night: making money, saving money, and keeping customers happy. (In the public sector, it’s staying within budget and meeting mission goals.) These priorities should drive all decision making in IT.
One of the themes in the course as well as my book is how business agility relates to these priorities. After all, the CEO doesn’t ask for business agility directly. He/she wants the three priorities above. The challenge for the CIO, and by extension the EA, is to understand when the business driver is an agility driver, and then to make the appropriate technology choices to achieve that agility in furtherance of the core business drivers.
This story goes directly to our positioning in the marketplace. EnterpriseWeb is a versatile tool that can potentially solve a range of problems, and thus is far better suited to enabling business agility than a tool that is built for a particular purpose. But simply talking about our capabilities leads to what I like to call the “Swiss army knife” problem – a tool that does so many things, nobody knows what to use it for.
Our core technology, after all, is our secret sauce, and our technical differentiator that provides a barrier to entry for the competition. But customers don’t really care, as long as we can really do what we say we can do, and we’re a better choice for solving their problems than our competition is – in other words, we’re a better value for solving the business problem.
The Agile EA challenge is then to tell the story: “here’s the business problem (making money, saving money, keeping customers happy), here’s how and why it’s an agility problem, and here are the tools you need to solve the problem.” Then we simply have to explain why our tool is the best choice.
Posted by Jason Bloomberg
on January 17, 2014
My good buddy Joe McKendrick at Forbes Magazine recently penned a thought-provoking piece: Cloud Providers May Not Be Ready For Big Data Onslaught, Users Fear. He’s right, of course – many Cloud environments aren’t ready or able to handle the Big Data sets and corresponding analytics that many organizations are now looking to shoehorn into the Cloud. That conclusion, however, brings up two important questions: why aren’t the Cloud providers ready, and what are we going to do about it?
One line of thinking goes as follows: to build a Cloud properly, you need to offer unlimited elasticity, which builds upon the illusion of infinite capacity. There’s no such thing as infinite capacity, of course; but Clouds should be big enough to give their customers confidence that they can handle even the most gargantuan of Big Data challenges.
I call this the Field of Dreams scenario: if they build it, you will come. The theory is that Cloud providers will invest all the time and money necessary ahead of customer demand to ensure that when that demand finally comes along, they’ll be ready. Sounds good in theory, from the customer perspective. But from the provider perspective? Overbuilding. Unnecessary expense. In other words, bad business.
On the other hand, we may be in more of a Catch-22 situation. The providers are only too happy to build out their Clouds to meet customer demand, but the customers won’t demand that capacity until the providers offer it. The result: steamed customers wondering when the Cloud providers will finally get their act together, and providers who have misread the customer landscape and failed to predict the pattern of imminent demand.
Regardless of which movie you’re watching, Field of Dreams or Catch-22, customer demand and provider capacity aren’t lining up as efficiently as anyone would like. The answer? Give Adam Smith’s invisible hand some time to coax the market into balance, and in the meantime, let the buyer beware.
Posted by Jason Bloomberg
on January 14, 2014
One of the primary enablers of Public Cloud Providers’ dramatic economies of scale is their fanatical devotion to homogeneity at the hardware level. Walk into one of their data centers and you’ll see identical racks of identical equipment – with an emphasis on identical. (Of course, Private Clouds should aspire to the same level of homogeneity as well.) This homogeneity also extends to the network hardware, as you might expect. But when you connect those switches and routers to the telecommunication providers’ equipment, at that point all bets are off.
In contrast to Cloud best practice, the underlying telco infrastructure is a mishmash of dedicated, often proprietary types of hardware that go by a regular alphabet soup of designations: BNG, CG-NAT, MME, SGSN, RNC, SBCs, and more. This hodgepodge is expensive and slow to implement, and limits the flexibility of the telco to support its Cloud Service Provider (CSP) customers, as well as any enterprise customer with dynamic communications requirements – which seems to be everyone these days.
To rise to these challenges, a consortium of telco carriers and vendors have put their heads together and hammered out Network Functions Virtualization (NFV) – a specification for abstracting network hardware in order to move the control of network functions to software. The benefits of NFV for the telcos include reduced equipment costs and power consumption, faster time to market, and greater agility in providing diversified services across customer types and geographies. NFV also promises greater support for multi-tenancy at the network layer, which will drive the above business benefits to any CSP who is offering multitenant IaaS to their customers.
NFV is similar to Software-Defined Networking (SDN) in many ways, but in reality the two are quite complementary. To understand the differences, it’s important to get a feel for the types of network functions that NFV is trying to virtualize. Such network functions include switching functions, mobile network functions, service assurance and SLA monitoring functions, policy control and security functions, etc. Once the telco has abstracted these basic functions, it can then chain them in order to deliver more complex network services.
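To make the chaining idea concrete, here is a purely conceptual Python sketch – no real NFV platform exposes this API, and the function names are illustrative. Each virtualized network function is just software that transforms a packet-handling context, and a network service is a chain of such functions composed in order, independent of any particular box.

```python
# Conceptual sketch only: virtual network functions (VNFs) as plain software
# components, composed into a service chain.

def firewall(packet):
    if packet.get("port") not in (80, 443):
        packet["dropped"] = True
    return packet

def nat(packet):
    if not packet.get("dropped"):
        packet["src"] = "203.0.113.10"  # rewrite to a public address
    return packet

def sla_monitor(packet):
    packet.setdefault("metrics", []).append("latency_sampled")
    return packet

def chain(*vnfs):
    """Compose VNFs into a service chain, with no hardware dependency."""
    def service(packet):
        for vnf in vnfs:
            packet = vnf(packet)
        return packet
    return service

edge_service = chain(firewall, nat, sla_monitor)
print(edge_service({"src": "10.0.0.5", "port": 443}))
```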
The big win for the telcos, however, depends upon NFV orchestration, which automates the ability to instantiate, monitor, repair, and bill for the network functions and chained services – in a hardware-independent manner. On the one hand, this hardware independence leads to greater agility and economies of scale – but perhaps the most important benefit is that the telcos are finally following essential Cloud best practice. Welcome to the club, people!