Posted by Jason Bloomberg on October 17, 2014

At Intellyx our focus is on digital transformation, so I spend a lot of my time helping digital professionals understand how to leverage the various technology options open to them to achieve their customer-driven business goals.

Who is a digital professional? People with titles like Chief Digital Officer, VP of Ecommerce, VP of Digital Marketing, or even Chief Marketing Officer – in other words, people who are marketers at heart, but who now have one foot solidly in the technology arena, as they’re on the digital front lines, where customers interact with the organization.

One of the most important activities that enables me to interact with such digital professionals is attending conferences on digital strategy. To this end I have been attending Innovation Enterprise conferences – first, the Digital Strategy Innovation conference a few weeks ago in San Francisco, and coming up, the Big Data and Marketing Innovation Summit in Miami November 6 – 7.

Full disclosure: Intellyx is an Innovation Enterprise media sponsor, and I’m speaking at the upcoming conference as well as chairing the first day – but choosing to be involved with these conferences was a deliberate decision on my part, as the digital professional is an important audience for Intellyx.

Nevertheless, my traditional and still core audience is the IT professional. Most of the conferences I attend are IT-centric, even though the digital story is driving much of the business activity within the IT world as well as the marketing world.

Even so, I find most tech conferences suffer from the same affliction: the echo chamber effect. By echo chamber I mean that tech conferences predictably attract techies – and for the most part, only techies. The exhibitors are techies. The speakers are techies. And of course, the attendees are techies. The entire event consists of techies talking to techies.

The exhibitors, therefore, are hoping that some of the techies that walk by their booth are buyers, or at least, influencers of the technology buying decision. And thus they keep exhibiting, hoping for those hot leads.

There were exhibitors at the Digital Strategy Innovation show as well – mostly marketing automation vendors, with a few marketing intelligence vendors mixed in. In other words, the vendor community expected the digital crowd to be interested solely in marketing technology. After all, the crowd was a marketing crowd, right?

True, that digital crowd was a marketing crowd, but that doesn’t mean their problems were entirely marketing problems. In fact, the audience was struggling much more with web and mobile performance issues than marketing automation issues.

So, where were the web and mobile performance vendors? Nary a one at the Digital Strategy Innovation summit – they were at the O’Reilly Velocity show, a conference centered on web performance that attracts, you guessed it, a heavily technical crowd.

What about the upcoming Big Data and Marketing Innovation Summit? True, there are a couple of Big Data technology vendors exhibiting, but the sponsorship rolls are surprisingly sparse. We media sponsors actually outnumber the paying sponsors at this point!

So, where are all the Big Data guys? At shows like Dataversity’s Enterprise Data World, yet another echo chamber technology show (although more people on the business side come to EDW than to shows like Velocity).

The moral of this story: the digital technology buyer is every bit as likely to be a marketing person as a techie, if not more so. If you’re a vendor with a digital value proposition, centering your marketing efforts solely on technology audiences means missing an important and growing market segment.

It’s just a matter of time until vendors figure this out. If you’re a vendor, then who will be the first to capitalize on this trend, you or your competition?


Posted by Jason Bloomberg on October 9, 2014

IT industry analyst behemoth Gartner is holding its Symposium shindig this week in Orlando, where it made such predictions as “one in three jobs will be taken by software or robots by 2025” and “By year-end 2016, more than US$2 billion in online shopping will be performed exclusively by mobile digital assistants,” among other deep and unquestionably thoughtful prognostications.

And of course, Gartner isn’t the only analyst firm that uses its crystal ball to make news. Forrester Research and IDC, the other two remaining large players in the IT industry analysis space, also feed their customers – as well as the rest of us – such predictions of the future.

Everybody knows, however, that predicting the future is never a sure thing. Proclamations such as the ones above boil down to matters of opinion – as the fine print on any Gartner report will claim. And yet, at some point in time, such claims will become verifiable matters of fact.

The burning question in my mind, therefore, is where are the analyses of past predictions? Just how polished are the crystal balls at the big analyst firms anyway? And are their predictions better than anyone else’s?

If all you hear are crickets in response to these questions, you’re not alone. Analyst firms rarely go back over past predictions and compare them to actual data. And we can all guess the reason: their predictions are little more than random shots in the dark. If they ever get close to actually getting something right, there’s no reason to believe such an eventuality is anything more than random luck.

Of course, anyone in the business of making predictions faces the same challenge, dating back to the Oracle of Delphi in ancient Greece. So what’s different now? The answer: Big Data.

You see, Gartner and the rest spend plenty of time talking about the predictive power of Big Data. Our predictive analysis tools are better than ever, and furthermore, both the quantity of available data and our ability to analyze them are improving dramatically.

Furthermore, an established predictive analytics best practice is to measure the accuracy of your predictions and feed back that information in order to improve the predictive algorithms, thus iteratively polishing your crystal ball to a mirror-like sheen.
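
To make that concrete, here is a minimal Python sketch of what such a feedback loop might look like; every number below is invented purely for illustration:

    # Hypothetical illustration: calibrating a crystal ball against reality.
    # Every figure here is invented; the point is the feedback loop itself.
    predictions = [100, 150, 210]   # what the analysts predicted
    actuals     = [80, 120, 170]    # what actually happened

    # Measure the systematic error in past predictions.
    bias = sum(p / a for p, a in zip(predictions, actuals)) / len(actuals)
    print(f"Past predictions ran about {bias:.2f}x too high")   # ~1.25x

    # Feed the error back: calibrate the next raw prediction accordingly.
    next_raw_prediction = 280
    print(f"Calibrated prediction: {next_raw_prediction / bias:.0f}")   # ~225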

So ask yourself (and if you’re a client of one of the aforementioned firms, ask them) – why aren’t the big analyst shops analyzing their own past predictions, not only to let us know just how good they are at prognostication, but to improve their prediction methodologies? Time to eat your own dog food, Gartner!


Posted by Jason Bloomberg on October 1, 2014

In some ways, Oracle’s self-congratulatory San Francisco shindig known as OpenWorld is as gripping as any Shakespearean tragedy. For all the buzz today about transformation, agility, and change, it’s hard to get a straight story out of Oracle about what they want to change – if they really want to change at all.

First, there’s the odd shuffling of executives at the top, shifting company founder Larry Ellison into a dual role as Executive Chairman of the Board and CTO, a role that Ellison joked about: "I’m CTO now, I have to do my demos by myself. I used to have help, now it’s gone.” But on a more serious note, Oracle has been stressing that nothing will change at the big company.

Nothing will change? Why would you appoint new CEOs if you didn’t want anything to change? And isn’t the Cloud a disruptive force compelling Oracle to transform, like it or not? Perhaps they felt that billing the exec shuffle as business as usual would calm skittish shareholders and a skeptical Wall Street. But if I were betting money on Oracle stock, I’d want to see the company change, not stick its head in the sand and insist that no change at all is preferable.

And what about their Cloud strategy, anyway? Ellison has been notoriously wishy-washy on the entire concept, but it’s clear that Cloud is perhaps Oracle’s biggest bet this year. However, “while those products are growing quickly, they remain a small fraction of the company's total business,” accounting for “just 5 percent of his company's revenue,” according to Reuters.

Thus Oracle finds itself in the same growth paradox that drove TIBCO out of the public stock market: only a small portion of the company is experiencing rapid growth, while the lion’s share is not. Of course, these slow-growth doldrums are par for the course for any established vendor; there’s nothing particularly unique about Oracle’s situation in that regard. But the fact still remains that Wall Street loves growth from tech vendors, and no matter how fast Oracle grows its Cloud business, investors will still see a moribund incumbent.

The big questions facing Oracle moving forward, therefore, are how much of its traditional business it should reinvent, and whether the Cloud will be the central platform for that reinvention. Unfortunately for Oracle and its shareholders, indications are that the company has no intention of entering a period of creative disruption.

As Ellison said back in 2008, “There are still mainframes. Mainframes were the first that were going to be destroyed. And watching mainframes being destroyed is like watching a glacier melt. Even with global warming, it is taking long time.” Only now it’s 2014, and mainframes aren’t the question – Oracle’s core business is. Will Oracle still use the glacier metaphor? Given the accelerating rate of climate change, I wouldn’t bet on it.


Posted by Jason Bloomberg on September 25, 2014

A decade ago, back in the “SOA days,” we compared various Enterprise Service Bus (ESB) vendors and the products they were hawking. When the conversation came around to TIBCO, we liked to joke that they were the “Scientology of ESB vendors,” because their technology was so proprietary that techies essentially had to devote their life to TIBCO to be worthy of working with their products.

But joking aside, we also gave them credit where credit was due. Their core messaging product, Rendezvous, actually worked quite well. After all, NASDAQ, FedEx, and Delta Airlines ran the thing. TIBCO obviously had the whole scalability thing nailed – unlike competitors like SeeBeyond back in the day, who competed with TIBCO in the Enterprise Application Integration space (the precursor to ESBs).

Cut to 2014, and TIBCO’s fortunes are now in question, as the stock market has pummeled their stock price, and a leveraged buyout (LBO) is in the works, with deep-pocketed firms hoping to take the company private.

Sometimes, going private can be a good thing for a company, as it gives them the money as well as the privacy they need to make bold, innovative changes before relaunching as a public company. But in other cases, LBOs are opportunities for private equity vultures to sell off the company in parts, squeezing every last penny out of the assets while shifting all the risk to employees, customers, and basically anybody but themselves.

Which path TIBCO will take is unclear, as the buyout itself isn’t even a sure thing at this point. But TIBCO’s downfall – noting that I’m sure no one at the company would call it that – has some important lessons for all of us, because TIBCO’s story isn’t simply about a dinosaur unable to adapt to a new environment.

Their story is not a simple Innovator’s Dilemma case study. In fact, they’ve moved solidly into Cloud, Big Data, and Social Technologies – three of the hot, growing areas that characterize the technology landscape for the 2010s. So what happened to them?

It could be argued that they simply executed poorly, essentially taking some wrong turns on the way to a Cloudified nirvana. Rolling out a special social media product only for rich and important people – a social network for the one percent – does indicate that they’re out of touch with most customers.

And then there’s the proprietary aspect of their technology that is still haunting them. Today’s techies would much rather work with modern languages and environments than go back to school to learn a particular vendor’s way of doing things.

Perhaps the problem is price. Their upstart competitors keep up the downward pricing pressure, one of the economic patterns the move to the Cloud has amplified. From the perspective of shareholders, however, TIBCO’s biggest problem has been growth. It’s very difficult for a large, established vendor to grow anywhere near as fast as smaller, more nimble players, especially when it still makes a lot of its money in saturated markets like messaging middleware.

Adding Cloud, Big Data, or Social Media products to the product list doesn’t change this fundamental math: even though those new products may themselves experience rapid growth, the new lines account for a relatively small portion of overall revenue.
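
A quick back-of-the-envelope calculation shows why; the revenue mix below is invented, but the shape of the math applies to any large incumbent:

    # Illustrative arithmetic only; the revenue mix is invented.
    new_share, new_growth = 0.10, 0.50   # new products: 10% of revenue, growing 50%/yr
    old_share, old_growth = 0.90, 0.02   # legacy lines: 90% of revenue, growing 2%/yr

    blended_growth = new_share * new_growth + old_share * old_growth
    print(f"Company-wide growth: {blended_growth:.1%}")   # 6.8%

Even a product line growing at 50% a year only nudges the company as a whole into single-digit growth – and that blended number is what Wall Street sees.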

So, how is a company like TIBCO to compete with smaller, faster growing vendors? Here’s where LBO plan B comes in: break up the company. Sell off the established products like Rendezvous to larger middleware players like Oracle. I’m sure Oracle would be happy to have TIBCO’s middleware customers, and they have shown a reasonable ability to keep such customers generally happy over the years.

Any SeeBeyond customers out there? SeeBeyond was acquired by Sun Microsystems, who renamed the product Java CAPS. Then Oracle acquired Sun, rolling Java CAPS and BEA Systems’ middleware products into Oracle SOA Suite. No one would be that surprised if Rendezvous suffered a similar fate.

The owners of whatever is left of TIBCO would focus their efforts on growing the smaller, newer products. The end result won’t be a TIBCO us old-timers would recognize, but should they ever go public again, they have a chance to be a market darling once more.


Posted by Jason Bloomberg on September 20, 2014

Want to make tech headlines without having to change anything – or in fact, do anything? If you’re Oracle, all you have to do (or not do, as the case may be) is shake up the top levels of management.

The news this week, as per the Oracle press release: the only CEO in Oracle’s history, Larry Ellison, is stepping down as CEO. Big news, right? After all, he’s 70 years old now, and he’s a fixture on the yachting circuit. Maybe it’s time for him to relax on his yacht and enjoy his billions in retirement while hand-picked successors Mark Hurd and Safra Catz take the reins as co-CEOs. (Apparently Ellison’s shoes are so big the only way to fill them is to put one new CEO in each.)

But look more closely and you’ll see that sipping Mai Tais on the Rising Sun isn’t Ellison’s plan at all. He’s planning to keep working full time as CTO and in his newly appointed role as Executive Chairman. The only difference here is the reporting structure: Hurd and Catz now report to the Board instead of directly to Ellison. “Safra and Mark will now report to the Oracle Board rather than to me,” Ellison purrs. “All the other reporting relationships will remain unchanged.”

Oh, and Ellison reports directly to the Board as well, as he has always done, rather than to either Hurd or Catz. And who does the Board report to? Ellison, of course, in his new role as Executive Chairman.

It’s important to note that Oracle never had an Executive Chairman before, only a Chairman (Jeff Henley, now demoted to Vice Chairman of the Board). So, what’s the difference between a Chairman and an Executive Chairman? According to Wikipedia, the Executive Chairman is “An office separate from that of CEO, where the titleholder wields influence over company operations.”

In other words, Ellison is now even more in charge than he was before. In his role as CEO, he reported to the Board, led by a (non-executive) Chairman. But now, he gets to run the board, as well as the technology wing of Oracle.

So, will anything really change at Oracle? Unlikely – at least not until Ellison finally kicks the bucket. It was always Ellison’s show, and now Ellison has further consolidated his iron grip on his baby. If you’re expecting change from Oracle – say, increased innovation for example – you’ll have to keep waiting.


Posted by Jason Bloomberg on September 10, 2014

The new Apple Watch has many cool features to be sure, but I just don’t like the fact that Apple discriminates on the basis of handedness.

The Apple Watch comes in a right-handed configuration. Yes, there’s a left-handed setting, but you need to switch the band around, and then the button on the side is in an awkward lower position.

In other words, left-handed people either have to suck it up and use the watch in the right-handed configuration, or go through the hassle of reconfiguring it only to end up with an inferior design. Thanks a lot, Apple. But hey, we're only lefties, and we're only being inconvenienced.

We should be used to it, right? After all, user interfaces have been right-handed for years. To this day the arrow cursor is right-handed, and scrollbars are always on the right. And for software that does have a left-handed configuration, more often than not some aspect of the UI doesn’t work properly in left-handed mode.

If we were a legally protected minority then it wouldn't be a question of being inconvenienced, right? Were separate water fountains simply inconvenient?

10% of the population is left-handed. And all us lefties know that left-handedness correlates with intelligence, so I wouldn’t be surprised if the percentage is higher within the storied walls of Apple. So, why didn’t Apple release a left-handed version of the Apple Watch?

I think Apple is being offensive by paying lip service to handedness, but giving lefties a second-class experience nevertheless. But that's just me. Who cares what lefties think?


Posted by Jason Bloomberg on September 5, 2014

“Never believe anything you read on the Internet.” – Abraham Lincoln

Honest Abe never spoke a truer word – even though he didn’t say anything of the sort, of course. And while we can immediately tell this familiar Facebook saying is merely a joke, there are many documents on the Internet that have such a veneer of respectability that we’re tempted to take them at their word – even though they may be just as full of nonsense as the presidential proclamation above.

Among the worst offenders are survey reports, especially when they are surveys of professionals about emerging technologies or approaches. Fortunately, it’s possible to see through the bluster, if you know the difference between a good survey and a bad one. Forewarned is forearmed, as the saying goes – even though Lincoln probably didn’t say that.

The Basics of a Good Survey

The core notion of a survey is that a reputable firm asks questions of a group of people who represent a larger population. If the surveyed group accurately represents the larger population, the answers are truthful, and the questions are good, then the results are likely to be relatively accurate (although statistical error is always a factor). Unfortunately, each of these criteria presents an opportunity for problems. Here are a few things to look for.
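
To get a sense of the scale of that statistical error, here is the standard 95% margin-of-error calculation, as a minimal Python sketch, assuming a well-run survey of 100 respondents and the worst-case 50/50 split:

    import math

    n = 100    # sample size
    p = 0.5    # worst-case answer proportion
    z = 1.96   # z-score for 95% confidence

    margin_of_error = z * math.sqrt(p * (1 - p) / n)
    print(f"95% margin of error: +/-{margin_of_error:.1%}")   # about +/-9.8 points

In other words, a 100-person survey carries a built-in error of nearly ten points even before any of the problems below come into play.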

Does the sample group represent the larger population? The key here is that the sample group must be selected randomly from the population, and any deviation from randomness must be compensated for in the analysis. Ensuring randomness, however, is quite difficult, since respondents may or may not want to participate, or may or may not be easy to find or identify.

Here’s how reputable political pollsters handle deviations from randomness. First, they have existing demographic data about the population in question (say, voters in a county). Based on census data, they know what percent are male and female, what percent are registered Democrat or Republican, what the age distribution of the population is, etc. Then they select, say, 100 telephone numbers at random in the county, and call each of them. Some go to voicemail or don’t answer, and many people who do answer refuse to participate. For the ones that do participate, they ask demographic questions as well as the questions the survey is actually interested in. If they find, say, that 50% of voters in a county are female, but 65% of respondents were female, they have to adjust the results accordingly. Making such adjustments for all factors – including who has phones, which numbers are mobile, etc. – is complex and error-prone, but is the best they can do to get the most accurate result possible.
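
Here is a minimal sketch of that adjustment, often called post-stratification weighting, using the gender example above; all the response figures are invented for illustration:

    # Post-stratification weighting sketch; all figures are invented.
    population_share = {"female": 0.50, "male": 0.50}   # known from census data
    sample_share     = {"female": 0.65, "male": 0.35}   # who actually responded

    # Weight each group by how over- or under-represented it was in the sample.
    weights = {g: population_share[g] / sample_share[g] for g in population_share}

    # Suppose 70% of female respondents and 40% of male respondents said "yes."
    yes_rate = {"female": 0.70, "male": 0.40}
    raw      = sum(sample_share[g] * yes_rate[g] for g in yes_rate)
    weighted = sum(sample_share[g] * weights[g] * yes_rate[g] for g in yes_rate)
    print(f"Raw: {raw:.1%}, weighted: {weighted:.1%}")   # raw 59.5%, weighted 55.0%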

Compare that political polling selection process to how, say, Digital Transformation, Big Data, or Cloud Computing adoption surveys assemble their populations. Perhaps the survey company emails their mailing list and asks for volunteers. Maybe it’s a Web page or a document handed out at a conference. Or worst of all, perhaps survey participants are hand-selected by the sponsor of the survey. None of these methods produces a sample that’s even close to random. The result? The survey cannot be expected to represent the opinions of any population other than the participants themselves.

Are the answers truthful? I’m willing to posit that people are generally honest folks, so the real question here is, what motivations would people have not to be completely honest on a survey? For emerging technologies and approaches the honesty question is especially important, because people like to think they’re adopting some new buzzword, even if they’re not. Furthermore, people like to think they understand a new buzzword, even if they don’t. People also tend to exaggerate their adoption: they may say they’re “advanced Cloud adopters” when they simply use online email, for example. Finally, executives may have different responses than people in the trenches. CIOs are more likely to say they’re doing DevOps than developers in the same organization, for example.

Are the questions good? This criterion is the most subtle, as the answer largely amounts to a matter of opinion. If the surveying company or the sponsor thinks the questions are good, then aren’t they? Perhaps, but the real question here is one of ulterior motives. Is the sponsor looking for the survey to achieve a particular result, and thus is influencing the questions accordingly? Were certain questions thrown out after responses were received, because those answers didn’t make the surveying company or sponsor happy? If scientific researchers were to exclude certain questions because they didn’t like the results, they’d get fired and blacklisted. Unfortunately, there are no such punishments in the world of business surveys.

So, How Do You Tell?

I always recommend taking surveys with a large grain of salt regardless, but the best way to get a sense of the quality of a survey is to look at the methodology section. The survey you’re wondering about doesn’t have a methodology section, you say? Well, it might be good for wrapping fish, but not much else, since every survey report should have one.

Even if it has one, take a look at it with a critical eye, not just for what it says, but for what it doesn’t say. Then, if some critical bit of information is missing, assume the worst. For example, here is the entire methodology section from a recent Forrester Research “Thought Leadership Paper” survey on Business Transformation commissioned by Tata Consultancy Services (TCS):

In this study, Forrester interviewed 10 business transformation leaders and conducted an online survey of 100 US and UK decision-makers with significant involvement in business transformation projects. Survey participants included Director+ decision-makers in IT and line of business. Questions provided to the participants asked about their goals, metrics, and best practices around business transformation projects. Respondents were offered an incentive as a thank you for time spent on the survey. The study began in February 2014 and was completed in May 2014.

How did Forrester ensure the randomness of their survey sample? They didn’t. Is there any reason to believe the survey sample accurately represents a larger population? Nope. How did they select the people they surveyed? It doesn’t say, except to point out they have significant involvement in business transformation projects. So if we assume the worst, we should assume the respondents were hand-selected by the sponsor. Does the report provide an analysis of the answers to every question asked? It doesn’t say. The methodology statement does point out respondents were offered an incentive for participating, however. This admission indicates Forrester is a reputable firm to be sure, but doesn’t say much for the accuracy or usefulness of the results of the report.

So, what should a business survey report methodology look like? Take a look at this one from the International Finance Corporation (IFC), a member of the World Bank Group. The difference is clear. Consider yourself forewarned!


Posted by Jason Bloomberg on August 26, 2014

I attended Dataversity’s NoSQL Now! Conference last week, and among the many vendors I spoke with, one story caught my interest. This vendor (who alas must remain nameless) is a leader in the NoSQL database market, specializing in particular in native XML support.

In their upcoming release, however, they’re adding JavaScript support – native JSON as well as Server-Side JavaScript as a language for writing procedures. And while the addition of JavaScript/JSON may be newsworthy in itself, the interesting story here is why they decided to add such support to their database.
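
For anyone who hasn’t compared the two formats side by side, here is the same toy record both ways (a hypothetical example, not the vendor’s actual schema). “Native JSON” means the database stores and queries the second form directly, with no translation layer between it and a server-side JavaScript procedure:

    <!-- The record as XML... -->
    <customer id="42">
      <name>Jane Doe</name>
      <plan>enterprise</plan>
    </customer>

    // ...and as JSON, ready for JavaScript to consume directly:
    { "customer": { "id": 42, "name": "Jane Doe", "plan": "enterprise" } }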

True, JavaScript/JSON support is a core feature of competing databases like MongoDB. And yes, customers are asking for this capability. But they don’t want JavaScript support because they think it will solve any business problems better than the XML support the database already offers.

The real reason they’re adding JavaScript support is that developers are demanding it – because they want JSON on their resumes, and because JSON is cool, whereas XML isn’t. So the people actually responsible for buying database technology are asking for JSON support as a recruitment and retention tool.

Will adding JavaScript/JSON support make their database more adept at solving real business problems? Perhaps. But if developers will bolt if your database isn’t cool, then coolness suddenly becomes your business driver, for better or worse. One can only wonder: how many other software features are simply the result of the developer coolness factor, independent of any other value to the businesses footing the bill?


Posted by Jason Bloomberg on August 21, 2014

In my latest Cortex newsletter I referred to “tone deaf” corporations who have flexible technology like corporate social media in place, but lack the organizational flexibility to use it properly. The result is a negative customer experience that defeats the entire purpose of interacting with customers.

Not all large corporations are tone deaf, however. So instead of finding an egregious example of tone deafness and lambasting it, I actually found an example of a corporation that uses social media in an exemplary way. Let’s see what Delta Airlines is doing right.

The screenshot above is from the Delta Facebook page. Delta regularly posts promotional and PR pieces to the page, and in this case, they are telling the story of a long-time employee. Giving a human face to the company is a good practice to be sure, but doesn’t leverage the social aspect of Facebook – how Delta handles the comments does.

As often happens, a disgruntled customer decided to post a grievance. Delta could have answered with a formulaic response (tone deaf) or chosen not to respond at all (even more tone deaf). But instead, a real person responded with an on-point apology. Furthermore, this real person signed the response with her name (I’ll assume Alex is female for the sake of simplicity) – so even though she is posting under the Delta corporate account, the customer, as well as everybody else viewing the interchange, knows a human being at Delta is responding.

If Alex’s response ended at a simple apology, however, such a response would still be tone deaf, because it wouldn’t have addressed the problem. But in this case, she also provided a link to the complaints page and actually recommended to the customer that she file a formal complaint. In other words, Delta uses social media to empower its customers – the one who complained, and of course, everyone else who happens to see the link.

It could be argued that Alex was simply handing off the customer to someone else, thus passing the buck. In this case, however, I believe the response was the best that could be expected, as the details of the customer’s complaint aren’t salient for a public forum like social media. Naturally, the complaints Web site might drop the ball, but as far as Delta’s handling of social media, they have shown a mastery of the medium.

So, who is Alex? Is she in customer service or public relations? The answer, of course, is both – which shows a customer-facing organizational strategy at Delta that many other companies struggle with. Where is your customer service? Likely in a call center, which you may have even outsourced. Where is your PR? Likely out of your marketing department, or yes, even outsourced to a PR firm.

How do these respective teams interact with customers? The call center rep follows a script, and if a problem deviates, the rep has to escalate to a manager. Any communications from the PR firm go through several approvals within the firm and at the client before they hit the wire. In other words, the power rests centrally with corporate management.

However, not only does a social media response team like Alex’s bring together customer service and PR, but whatever script she follows can only be a loose guideline, or responses would sound formulaic, and hence tone deaf. Instead, Delta has empowered Alex and her colleagues to take charge of the customer interaction, and in turn, Alex empowers customers to take control of their interactions with Delta.

The secret to corporate social media success? Empowerment. Trust the people on the front lines to interact with customers, and trust the customer as well. Loosen the ties to management. Social media are social, not hierarchical. After all, Digital Transformation is always about transforming people.


Posted by Jason Bloomberg on August 12, 2014

Two stories on the Internet of Things (IoT) caught my eye this week. First, IDC’s prediction that the IoT market will balloon from US$1.9 trillion in 2013 to $7.1 trillion in 2020. Second, the fact that it took hackers just 15 seconds to hack the Google Nest thermostat – the device Google wants to make the center of the IoT for the home.

These two stories aren’t atypical, either. Gartner has similarly overblown market growth predictions, although they do admit a measure of overhypedness in the IoT market (ya think?). And as for whether the Nest is an isolated case: unfortunately, the IoT is rife with security problems.

What are we to make of these opposite, potentially contradictory trends? Here are some possibilities:

We simply don’t care that the IoT is insecure. We really don’t mind that everyone from Russian organized criminals to the script kiddie down the block can hack the IoT. We want it anyway. The benefits outweigh any drawbacks.

Vendors will sufficiently address the IoT’s security issues, so by 2020, we’ll all be able to live in a reasonably hacker-free (and government spying-free) world of connected things. After all, vendors have done such a splendid job making sure our everyday computers are hack and spy-free so far, right?

Perhaps one or both of the above possibilities will take place, but I’m skeptical. Why, then, all the big numbers? Perhaps it’s the analysts themselves? Here are two more possibilities:

Vendors pay analysts (directly or indirectly) to make overblown market size predictions, because such predictions convince customers, investors, and shareholders to open their wallets. Never mind the hacker behind the curtain, we’re the great and terrible Wizard of IoT!

Analysts simply ignore factors like the public perception of security when making their predictions. Analysts make their market predictions by asking vendors what their revenues were over the last few years, putting the numbers into a spreadsheet, and dragging the cells to the right. Voila! Market predictions. Only there’s no room in the spreadsheet for adverse influences like security perception issues.
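
Expressed as code, that spreadsheet method amounts to a straight-line fit over past revenue, dragged into the future. The figures below are invented, and note that nothing in the model accounts for security scares or any other headwind:

    # "Dragging the cells to the right," expressed as code.
    # Revenue history is invented for illustration (in trillions of dollars).
    years   = [2011, 2012, 2013, 2014]
    revenue = [1.0, 1.3, 1.6, 1.9]

    # Fit a straight line through the history...
    n = len(years)
    mean_x, mean_y = sum(years) / n, sum(revenue) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, revenue))
             / sum((x - mean_x) ** 2 for x in years))
    intercept = mean_y - slope * mean_x

    # ...and drag it out to 2020. Voila: a market prediction.
    print(f"2020 'prediction': {slope * 2020 + intercept:.1f} trillion")   # 3.7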

Maybe the analysts are the problem. Or just as likely, I got up on the wrong side of the bed this morning. Be that as it may, here’s a contrarian prediction for you:

Both consumers and executives will get fed up with the inability of vendors to secure their gear, and the IoT will wither on the vine.

The wheel is spinning, folks. Which will it be? Time to place your bets!
