Posted by Jason Bloomberg on April 14, 2014

Amazon.com played an April Fool’s Day prank on me. My shaver head gave out April 1st, so I ordered a replacement on Amazon.com. I chose free shipping (without Amazon Prime, their subscription membership that comes with free two-day shipping). The site promised delivery in 8 to 11 days. Based on past experience, I figured I would likely get it sooner, since it was shipping directly from Amazon. In this case, however, they didn’t ship for 8 of the 11 days, instead reporting the order was “preparing for shipment” for those 8 days. I ended up receiving the order on day 11 – just within the promised window.

So you’re probably thinking, Amazon stuck to their promised delivery window, so I should quit my bitching already. And you’d be right. And Amazon may have had a very good reason why they needed 8 full days to prepare my order for shipment.

But I don’t think so. My theory as to what happened (keeping in mind it’s only a theory), is that Amazon changed their policy regarding free shipping in order to encourage customers to sign up for Amazon Prime. After all, if customers can get free shipping with quick delivery without paying for Amazon Prime, then why would anybody ever pay for the premium subscription at all?

From a business perspective, Amazon’s change in policy makes sense. But as a result, they had to lower the level of customer service their customers have come to expect – an expectation based on past behavior. True, I could shop elsewhere, and I might, but probably not. That’s the bet Amazon is making here.

For the audience of this blog, however, the question of the day is: would Amazon pull the same trick with their Amazon Web Services Cloud offering? Would Amazon ever lower their level of service on a Cloud offering in order to move customers to a more expensive choice?

The answer: absolutely. You might think Amazon simply wants to be the low-cost leader because customers love low costs, and Amazon loves customers. And that’s true to a certain extent. But if they can squeeze more money out of you in a way that won’t jeopardize their pricing pressure on the competition and won’t likely cause you to drop Amazon for said competition, then rest assured they will have no qualms about doing so. After all, once you’re in Amazon’s Cloud, it’s tough to move. If you need a reminder, just look at a photo of me with an 11-day beard.


Posted by Jason Bloomberg on April 8, 2014

Now that I am Chief Evangelist at EnterpriseWeb, people occasionally ask me what a Chief Evangelist does. My answer is that I provide thought leadership and marketing. To which my ever-curious audience predictably asks what the difference is between the two.

Thought leadership and marketing are in fact two different tasks with different (although related) goals. Marketing centers on communicating the value proposition and differentiator – what problems we solve, why you should buy what we’re selling, and why you shouldn’t (or can’t) buy it from anyone else.

But thought leadership has a different goal: to paint the picture of the world of the future, a world our technology enables. Technology is changing and business is changing, and how technology-empowered business will look five, ten, or twenty years out is a complicated, impenetrable mess. Thought leadership helps people clear up the confusion so they can gradually understand how all the pieces fit together.

Marketing is about today and perhaps a few months into the future – what can we do for customers this year. Thought leadership connects today to tomorrow. It’s not about getting all the details right. It’s about painting the big picture. Thought leadership gives us the opportunity to place our technology into the broader context.

Thought leadership involves telling a story, one chapter at a time. Take the reader on a journey, filling in the missing pieces to the big picture over time. The story will naturally improve over time, and that’s OK – since no one really cares about what the story was in years past. It’s assembling the big picture of the future, piece by piece. Each piece has to stand on its own, but how they all fit together is the real lesson.


Posted by Jason Bloomberg on April 2, 2014

No, it wasn’t an April Fool’s joke: Hadoop vendor Cloudera just closed a $900 million financing round, showing the world that PowerBall isn’t the only way to crazy riches. And while on the surface it seems to be a good problem to have (like we should all have such problems!), $900 million in the bank may actually be more trouble than it’s worth. What’s Cloudera going to do with all that green?

Clearly, at those stratospheric investment heights, the only possible exit is to go public. So, what should Cloudera spend money on to build a market cap even higher than its current $3.37 billion valuation? Yes, that’s billion with a B, or $3,370,000,000 for all you zero-counters out there.

First, they need to improve their product. While the Big Data opportunity is unarguably large, Hadoop as a platform has its challenges. The problem with sinking cash into the tech is that they’ll quickly run into the “mythical man-month” paradox: simply throwing people (in other words, money) at a piece of technology can only improve that technology so fast. All those zeroes won’t buy you a baby in a month, you know.

Perhaps they’ll invest in other products, either by assembling a gargantuan team of developers or by acquiring other companies, or both. Such a move is likely – but they’ll end up with a mishmash of different technologies, or they’ll run into the man-month problem again. Or both.

They’re also likely to grow their team. More sales people selling Hadoop to all 500 of the Fortune 100. More Hadoop experts – going after all 1000 of the 500 top gurus out there. More recruiters perhaps, looking to squeeze more blood out of the Silicon Valley techie turnip. True, such fuzzy math works to your benefit if you’re one of said gurus, but fuzzy math it is. You can only do so much hiring before you’re hitting the bottom of every barrel.

Whatever happens, there’s plenty of money to go around – unless, of course, you’re already a holder of Cloudera stock or options. If so, you may have just been diluted to the point you could call yourself a homeopathic treatment. But regardless of where you stand with respect to the Cloudera nest egg, it’s nigh impossible to divine a path that works out well for all of the parties involved – Cloudera employees, investors, and customers. In the meantime, I’m sure they’ll throw some kick-ass parties. Pass the shrimp, please!


Posted by Jason Bloomberg on March 28, 2014

This week I attended the bpmNEXT Conference in California. Unlike virtually every other conference I’ve ever attended, this one attracted Business Process Management (BPM) vendors and analysts, but not customers – and the vendors were perfectly happy with that. Essentially, this event was in part an opportunity for vendors to show their products to each other, but primarily an excuse to network with other people in the BPM market over drinks and dinner.

You would expect such a crowd to be cheerleaders for BPM, and many of them were. But all was not right in the world. One fellow quipped that not only was BPM dying, it was never alive in the first place. Another vendor pointed out that BPM is never on CIOs’ “must have” lists. And then we have vendors spending time and money to come to a conference devoid of sales opportunities.

So, what’s wrong with the BPM market? True, there is a market for this gear, as many of the presenters pointed out in discussions of customers. But there was always the undercurrent that this stuff really isn’t as useful or popular as people would like.

Welcome to the BPM zombie apocalypse. Zombies, after all, are dead people who don’t realize they’re dead, so they attempt to go about their business as though nothing were amiss. But instead of acting like normal, living people, they end up shuffling around, shedding body parts and groaning for brains. Time to grab my shovel and escape to the hype- and customer-filled conferences focusing on Big Data and Cloud.


Posted by Jason Bloomberg on March 24, 2014

When you write a computer program, you’re providing instructions to one or more computers so that they can do whatever it is you’ve programmed them to do. In other words, you programmed the computers to have one or more capabilities.

According to Wikipedia, a capability is the ability to perform or achieve certain actions or outcomes through a set of controllable and measurable faculties, features, functions, processes, or services. But of course, you already knew that, as capability is a common English word and we’re using it in a perfectly common way.

But not only is the term common, the notion that we program computers to give them capabilities is also ubiquitous. The problem is, this capability-centric notion of software has led us down a winding path with nothing but a dead end to greet us.

The problem with thinking of software as providing capabilities to our computers is that the computers will only be able to do those things we have programmed them to do. If our requirements change, we must change the program. But once we deploy the program, it becomes instant legacy – software that is mired in inflexibility, difficult or even impossible to reprogram or replace. Hence the proverbial winding path to nowhere.

Our computers, however, are really nothing but tools. When they come off the assembly line, they really have no idea what programs they’ll end up running – and they don’t care. Yet while we’re comfortable thinking of our hardware as tools, it takes a shift in mindset to fully grasp what it means to consider all of our software as tools.

Tools, you see, don’t have capabilities. They have affordances. Affordance is an unquestionably uncommon word, so let’s jump right to Wikipedia for the definition: an affordance is a property of an object, or an environment, which allows an individual to perform an action. The point of a tool, of course, is its affordances: a screwdriver affords users the ability to turn screws or open paint can lids, and offers unintended affordances like hitting a drum or perhaps breaking a window. But the screwdriver doesn’t have the capability of driving screws; rather, a person has that capability when they have a screwdriver – and furthermore, it’s up to the user to decide how to use the tool, not the manufacturer.

The software we use every day has affordances as well: links are for clicking, buttons are for pushing, and so on. Every coder knows how to build user interfaces that offer affordances. We also have software we explicitly identify as tools – development tools, for example, which afford the ability to develop software, among other affordances. The problem arises when we cross the line from coding affordances to coding capabilities, which happens when we’re no longer coding tools but, generally speaking, applications or solutions.

Such applications are especially familiar in the enterprise space, where they are not simply single programs running on individual computers, but complicated, distributed monstrosities that serve many users and leverage many computers. We may use tools to build such applications, but the entire enterprise software lifecycle focuses on delivering the required functionality for bespoke applications – in other words, capabilities, rather than affordances. Even when you buy enterprise applications, the bulk of the value of the software comes from its capabilities. It’s no wonder we all hate legacy applications!

The challenge for the enterprise app space – and by extension, all categories of software – is to shift this balance between capabilities and affordances to the extreme of maximum affordance. In other words, instead of building or buying software that can do things (in other words, has capabilities), we want software that can enable users to do things – and then maximize the affordances so that we have software smart enough to afford any action.
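
To make the distinction concrete, here is a minimal sketch in Python (the names and data are invented for illustration, not drawn from any particular product): a capability-centric function can answer exactly one question, while an affordance-centric tool lets its users compose their own questions.

```python
# A hypothetical sketch: capability-centric vs. affordance-centric design.

sales = [
    {"region": "East", "quarter": "Q1", "amount": 100},
    {"region": "West", "quarter": "Q1", "amount": 250},
    {"region": "East", "quarter": "Q2", "amount": 175},
]

# Capability-centric: the program does exactly one thing. New requirements
# mean reprogramming, and the deployed version is instant legacy.
def quarterly_sales_by_region(rows, quarter):
    totals = {}
    for row in rows:
        if row["quarter"] == quarter:
            totals[row["region"]] = totals.get(row["region"], 0) + row["amount"]
    return totals

# Affordance-centric: a small tool that affords filtering and aggregating.
# The user, not the programmer, decides which questions to ask of the data.
class QueryTool:
    def __init__(self, rows):
        self.rows = rows

    def where(self, predicate):
        return QueryTool([row for row in self.rows if predicate(row)])

    def group_sum(self, key, value):
        totals = {}
        for row in self.rows:
            totals[row[key]] = totals.get(row[key], 0) + row[value]
        return totals

# The same report, composed from the tool's affordances rather than baked in
# as a capability, alongside any other report the user cares to assemble.
print(quarterly_sales_by_region(sales, "Q1"))
print(QueryTool(sales).where(lambda r: r["quarter"] == "Q1").group_sum("region", "amount"))
```

The second design doesn’t anticipate every report; it affords the queries users haven’t thought of yet.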

Superficially this goal sounds too good to be true, but remember what computers are for: they’re for running programs which give them instructions. In other words, computers are examples of maximum affordance in action. The next step is to build software with the same purpose.


Posted by Jason Bloomberg on March 19, 2014

In a recent article for ComputerWorld, Howard Baldwin took a well-deserved poke at the leading consulting punditocracy for pushing “Digital Transformation” on their customers. You must build a “digital industrial economy,” opines Gartner! Or perhaps a “digital business” that includes a “comprehensive strategy that leads to new architectures, new services and new platforms,” according to Accenture and McKinsey. Or maybe PricewaterhouseCoopers’ “digital IQ” is more your cup of tea?

The thrust of Baldwin’s article, however, is that CIOs are pushing back against all this consultant newspeak. Readers of this blog may well be wondering where I fall in this discussion. After all, I recently penned The Agile Architecture Revolution. In the book I make the case that we are in the midst of a true revolution – one that reinvents old ways of doing IT, replacing them with entirely new approaches. You might think, therefore, that I align with the gurus of Gartner or the mages of McKinsey.

Sorry to disappoint. Just because we’re in the midst of broad-based transformation in enterprise IT doesn’t necessarily mean that “digital transformation” should be on your corporate shopping list. Digital transformation, after all, isn’t a business priority. Making money, saving money, and keeping customers happy are business priorities. You should only craft a digital transformation strategy for your organization if it promises to improve the bottom line – and you can adequately connect the dots to said bottom line.

I’m sure the pundits at Pricewaterhouse and the others understand this point, and if you hire them, they’ll connect the dots between their whiz-bang digital whatzit and, you know, actually making money. But if you read their white papers or see their executives pontificate at a conference, that’s when they bring out the flashing lights and fireworks.

Bottom line: yes, we’re in a period of great transformation, and yes, you’ll need to figure out how to deal with it. But your business strategy must always focus on your core business priorities, not some flashy collection of buzzwords. Tech fads come and go, but business fundamentals remain the same.


Posted by Jason Bloomberg on March 13, 2014

All developers these days are familiar with the third value in the Agile Manifesto: customer collaboration over contract negotiation. You’re on the same team as your customer or stakeholder, the reasoning goes, so you don’t want an adversarial relationship with them. Instead, you should work together to achieve the common goal of working software that meets customer needs.

If you’re not heavily involved with Agile, or even if you are and you step back a moment and look at this principle in a new light, you’ll see that it comes across as calling for some kind of unrealistic Kumbaya moment. Throw away decades of friction and misunderstanding between stakeholders and developers, and miraculously work together in the spirit of love and cooperation! Gag me now, seriously.

In reality, there’s nothing particularly Kumbaya about your run-of-the-mill stakeholder. They’re too busy with, you know, their jobs to spend time holding hands with developers – people who make them feel uncomfortable on the best days. From their perspective, the coders are there to build what they ask, so go away and don’t bother them until you’re done already!

What’s an Agile developer or Scrum master to do when stakeholders are this intractable? No amount of touchy-feeliness is going to bring them around. But you can’t really be Agile without their participation, either.

Time for in-your-face Agile.

The bottom line is that the “customer collaboration” from the manifesto isn’t meant to indicate that the parties will be friends or even willing participants. Draw a line in the sand. Make it clear to your stakeholders that the project simply won’t get done unless they cooperate.

You’ll need to use your best judgment, of course – I’m not recommending threats of violence here. But sometimes you have to get tough. If you’re a development firm serving a paying customer, threaten to give their money back. You don’t want business from customers who want the benefits of Agile but aren’t willing to do their part.

For an internal stakeholder, it’s your call whether you want to put your job on the line – but sometimes that might be your best option, if the alternative is to spend months of your time working on a project that you know is doomed to failure due to stakeholder intransigence. However, if you join with the rest of your team and simply refuse to work on a project that lacks proper stakeholder participation, you’re spreading the risk. Remember, if your team is any good, better jobs with more cooperative stakeholders are always plentiful anyway.

In-your-face Agile is unlikely to make you any friends. Don’t expect warm fuzzies around the holidays. But if your efforts lead to successful projects, everyone wins in the end – including even the most obstinate of customers.


Posted by Jason Bloomberg on March 10, 2014

To get REST right, you need HATEOAS (Hypermedia as the Engine of Application State). And to get HATEOAS right, you need hypermedia-rich data formats that support flexible media types. Problem is, we didn’t really have any such data format.

Until now, that is, with JSON-LD, a JSON-based serialization for Linked Data, which became a W3C recommendation on January 16th of this year. Now we can build fully interoperable RESTful APIs without worrying about such issues as changing data formats or semantic conflicts, because we can resolve any such incompatibilities with HATEOAS.
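
To see roughly what that promise looks like in practice, here is a minimal sketch of a JSON-LD-style payload, expressed as a Python dict (the endpoint URL is a hypothetical example; the vocabulary terms follow schema.org): the @context maps plain field names to shared vocabulary IRIs, giving clients the self-describing semantics HATEOAS depends on.

```python
import json

# A minimal, illustrative JSON-LD payload. The endpoint URL is hypothetical;
# the vocabulary terms come from schema.org. The @context maps plain field
# names to shared IRIs, so a client can resolve what a field means rather
# than hardcoding that knowledge.
order = {
    "@context": {
        "name": "http://schema.org/name",
        "orderStatus": "http://schema.org/orderStatus",
    },
    "@id": "https://api.example.com/orders/42",
    "@type": "http://schema.org/Order",
    "name": "Replacement shaver head",
    "orderStatus": "http://schema.org/OrderProcessing",
}

print(json.dumps(order, indent=2))
```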

Yeah, right. Only in our dreams, it seems. Turns out one of the primary creators of JSON-LD, Manu Sporny, has a decidedly mixed opinion of the recommendation, as well as the process for creating one, as he explains in his refreshingly candid blog post.

The disclaimer at the top of his post is the best disclaimer I’ve ever seen, and the rest doesn’t disappoint. But in all seriousness, the post illustrates the politics and compromises that go into any standards effort – in particular, interoperability standards that codify the metadata that leads to loose coupling of interfaces. They all involve politics and compromises, which means they all suck.

And not only that, they will always continue to suck into the future. We’re chasing the pot of gold at the end of some RESTful rainbow. The only way we’ll ever come up with some kind of final standard is for everyone involved to share the same perspective and agree on everything, and that’s entirely counter to human nature. There’s simply no hope that someday the “perfect” interoperability standard will finally come along and resolve all interoperability issues.

The API Economy is winding down, folks. It’s time for a different approach.


Posted by Jason Bloomberg on March 5, 2014

One show-stopping problem with automation is that the more dynamic the automation recipe, the slower it ends up running in the operational environment. I ran into a discussion of this problem yet again in a presentation on the Chef Web site.

Chef, of course, is one of a new crop of vendors hawking automation tools for the Cloud. Instead of manually configuring your Cloud environment, the argument goes, use Chef (or Puppet, or one of several other similar tools) to write a recipe, or script, or template for that configuration, so that you can create as diverse and dynamic a configuration as your heart desires.

As the presentation laments, however, configuring, say, an Amazon EC2 instance using a recipe can lead to painfully slow boot times. But have no fear, Amazon has the answer: Amazon Machine Images (AMIs), which are preconfigured in a plethora of different ways to meet a wide range of possible needs. Simply pick the one you want, install it, and you’re off and running.
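
Here is a rough sketch of the tradeoff, using the boto3 library as one way to drive EC2 (the AMI ID, instance type, and bootstrap script are placeholders, not anything from the presentation):

```python
import boto3  # assumes AWS credentials are already configured

ec2 = boto3.client("ec2", region_name="us-east-1")

# Option 1: boot from a prebaked AMI. Fast startup, but only if an image
# already matches your desired configuration. (Placeholder AMI ID.)
ec2.run_instances(
    ImageId="ami-00000000", InstanceType="t2.micro", MinCount=1, MaxCount=1
)

# Option 2: boot a generic image and configure it at launch with a recipe,
# here a user-data shell script. Flexible, but the boot takes as long as
# the configuration does.
bootstrap = """#!/bin/bash
yum -y install nginx
service nginx start
"""
ec2.run_instances(
    ImageId="ami-00000000", InstanceType="t2.micro", MinCount=1, MaxCount=1,
    UserData=bootstrap,
)
```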

There are two main problems with the AMI approach, however. First, you need a boatload of AMIs – and the list of available ones keeps increasing. Second, there’s always the chance that no AMI will quite fit your requirement. Then you’re back to hacking individual VMs, which everybody wants to avoid.

The current solution to this conundrum is to mix and match – prebake certain things, but put recipes in your cookbook for others. Then the challenge becomes engineering your cookbook: handling lifecycle-centric issues like QA and versioning for the cookbook itself.

True, performing such engineering is DevOps at its finest, with ops folks looking to dev folks for engineering pointers they can apply to the automation of the ops environment. But if this entire picture of engineered cookbooks with a mix of pre-baked and customized recipes strikes you as inefficient, you’re not alone. Have we really done everything we could to leverage Cloud Computing and DevOps to squeeze inefficiencies out of our approach, or have we simply moved those inefficiencies around a bit?

True, the Cloud-savvy success stories today have navigated these waters and figured out how to make such cookbooks work for them. But this entire approach still smacks of the horseless carriage: taking a new paradigm and bringing old procedures and approaches to it because that’s how you’ve always done things. Any developer who has made the decision whether to use a prebuilt code module or build one themselves has worked through the same tradeoff. So what else is new?

What we need is a new paradigm for automating interactions that rises to the level of expectations that the Cloud sets for us. We need better than pre-written scripts mixed together with pre-baked machine images. The Cloud demands the ability to build fully dynamic, customizable configurations of anything without sacrificing performance. Anything less and you might as well be saddling up for a ride.


Posted by Jason Bloomberg on February 24, 2014

There are two fundamental architectural paradigms battling for supremacy over the Internet of Things (IoT). First, as the appellation Internet would suggest, we have the Client/Server (C/S) paradigm. At the core of C/S is the notion of one-to-many: one server serving the needs of many clients. We’ve added layers to this structure: the Web brought us thin clients, and SOA exposed server capabilities as contracted interfaces we called Services. But both the Web and SOA follow C/S under the covers.

Perhaps the IoT should follow C/S. The various sensors and controls that characterize the IoT could be thought of as a type of client, and they get their job done by talking to servers somewhere. Those servers, naturally, serve many such clients. Or perhaps not, as there is another architectural paradigm fighting for its spot in the sun: Peer-to-Peer (P2P).

P2P eschews servers, instead basing its core interaction paradigm on communications between endpoint nodes. True, there may be some box in a data center somewhere facilitating this interaction, but the role of that device is to support a lookup service so that nodes can find each other. Instant messaging and Skype are two P2P examples, although P2P file sharing services like BitTorrent are also quite prevalent.

So, maybe P2P is the way for the IoT to go? If we want our thermostat controlling our dishwasher, after all, doesn’t it make sense for the one device to speak directly to the other, perhaps over our home Wi-Fi? We don’t need a server for such interactions, certainly.

Again, perhaps. Clearly there are reasons to follow C/S, especially when you’re counting on the server to provide core functionality – just as there are good reasons to follow P2P when you’re looking to establish a network of interacting nodes. But even when you use these architectural styles in combination, there remains a broad range of desirable functionality that neither approach can adequately deliver.

The gaping hole in the IoT, of course, is security. I’ve written about this problem before, but what I didn’t point out is that security is in fact the biggest pitfall of an even broader challenge: control. As consumers of IoT technology, we have a deep level of discomfort with the notion that somehow we won’t be in control of the various sensors and devices that are springing up all around us. True, we don’t want anyone hacking our automobiles or refrigerators or factory floor equipment, but threat prevention is only part of the battle. In this post-NSA-spying world, we just don’t like the idea of some kind of IoT device in our proximity that someone else controls.

Fair enough, but we don’t want the burden of controlling each device and sensor manually, either. After all, if we had to manually manage the device that controlled our dishwasher, we might as well simply have an ordinary, 20th century dishwasher! From the perspective of the consumer, the entire value proposition of the IoT centers on automation. If we have to control everything manually, we’ve just defeated the entire purpose of the IoT.

The solution to this conundrum is to delegate control of each IoT node locally to the node itself. In other words, IoT nodes must contain intelligent agents that are able to take initiative based upon the instructions we give them, as well as policies or other metadata that apply to their behavior. In addition to this autonomous, goal-oriented behavior, they must also be social: interaction between agents is a core part of their functionality. And finally, they must be able to learn about their environment and from their previous behavior.
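
As a purely hypothetical sketch (the names and policies are invented for illustration), those three traits boil down to a policy the agent applies on its own initiative, a mailbox for messages from other agents, and a memory of its past behavior:

```python
# A hypothetical sketch of an IoT intelligent agent: autonomous (acts on a
# user-supplied policy), social (receives messages from other agents), and
# learning (keeps a history of what it has done).
class IoTAgent:
    def __init__(self, name, policy):
        self.name = name
        self.policy = policy   # rules we give the agent; it takes it from there
        self.inbox = []        # social: messages from other agents or the user
        self.history = []      # learning: a record of readings and actions

    def tell(self, message):
        """Deliver a message from another agent (or from the user)."""
        self.inbox.append(message)

    def act(self, reading):
        """Decide an action autonomously from policy, messages, and history."""
        action = self.policy(reading, self.inbox, self.history)
        self.history.append((reading, action))
        self.inbox.clear()
        return action

# Entirely illustrative usage: a dishwasher agent that runs when another
# agent signals that electricity is cheap.
dishwasher = IoTAgent(
    "dishwasher",
    lambda reading, inbox, history: "run" if "cheap-power" in inbox else "wait",
)
dishwasher.tell("cheap-power")
print(dishwasher.act(None))  # -> run
```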

Intelligent agents have been around for decades. I first heard of them with the advent of General Magic, a mid-1990s Apple Computer spin-off that sought to build a precursor to the personal digital assistant (PDA) based on networked intelligent agents – and the Wikipedia article I just referenced describes a system that is eerily like the agent-based IoT I’m proposing in this article.

I ran into agents again in the SOA era with SOA Software’s Network Director product, a set of distributed agents that act as lightweight intermediaries to support the Service abstraction in conjunction with ESBs or other traditional middleware. And of course, we’re all familiar with Apple’s Siri, the snarky intelligent agent on the iPhone.

To address the architectural and security issues with the IoT, however, intelligent agents must represent more than a smarter way to build IoT nodes. We must actually move to a new architectural paradigm – an architectural style I hazard to label Agent-Oriented Architecture (AOA). AOA inherits some characteristics of C/S and P2P, but is truly a different paradigm, because the agents control the interactions. And as long as the agents do our bidding, AOA puts us – the technology consumer – in control of the IoT.

