Posted by Jason Bloomberg on October 30, 2014

The latest cyberattack to hit the news is POODLE (Padding Oracle on Downgraded Legacy Encryption). While POODLE wins points for both the cutest title and including Oracle in its name, I’m cheering it on for a different reason.

POODLE compromises the obsolete protocol SSL 3.0. That alone wouldn’t be a big deal, but it’s sneakier than that: it tricks browsers and other applications that use more recent, more secure transport-layer security protocols into downgrading to SSL 3.0, leaving them vulnerable to attack.

The best way to protect yourself from POODLE is to disable SSL 3.0 across your entire IT environment – servers, browsers, the lot.
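
For DevX readers, that includes any .NET client code that makes outbound HTTPS calls. A minimal sketch along these lines keeps SSL 3.0 off the table entirely (the class name is mine; Tls11 and Tls12 require .NET Framework 4.5 or later):

using System.Net;

static class TlsOnly
{
    // Call once at application startup, before any outbound HTTPS requests are made.
    public static void Configure()
    {
        // Offer only TLS versions, so a POODLE-style downgrade has no SSL 3.0 to fall back to.
        ServicePointManager.SecurityProtocol =
            SecurityProtocolType.Tls |
            SecurityProtocolType.Tls11 |
            SecurityProtocolType.Tls12;
    }
}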

And that’s why I’m cheering. You see, the bane of many an IT manager’s and web developer’s existence is Internet Explorer Version 6. This browser version has been obsolete for years, but numerous enterprises still insist on remaining standardized on it. It doesn’t support HTML 5, which is one of the many reasons web developers hate it. But that deficiency alone hasn’t forced shops to switch browsers.

The good news, however, is that IE 6 doesn’t support any transport-layer security protocol newer than SSL 3.0. So not only can POODLE drive a big truck through IE 6’s security defenses, but now every IT shop must disable all SSL 3.0 support to protect the rest of its applications. And that means finally getting rid of IE 6 once and for all.

Hallelujah!


Posted by Jason Bloomberg on October 24, 2014

Having built my first Web site – and written my first article about the Web – back in 1995, I have the privilege of counting myself among the senior citizens of the Internet. So for all you young pups out there, gather around and let grandpa tell you a scary story, just in time for Halloween.

Once upon a time, there was a speculative bubble, as more and more people saw the stocks of fledgling dot.coms go up and up, in spite of the fact that many of them hadn’t turned a profit, and in fact, several didn’t really have any viable plans to do so. But we all started believing our own hype, and the VCs and other investors piled on, and companies that rushed to go public saw insane ramp-ups of their stock prices, mostly on the relatively new NASDAQ market.

Until, of course, the speculative bubble burst, and the whole shebang came crashing down, taking with it the fortunes of many established technology players as well as the telcos, leading to what I snarkily like to call “Bush Recession #1” around 2001.

But while dot.com darlings like Kozmo, Boo.com, and TheGlobe.com are relegated to the history books, others like Amazon.com, Google, eBay, and Yahoo are still with us. It took a few years, but the technology and telco sectors are now going full steam, in spite of the financial crisis and – you guessed it – the 2007 – 2009 “Bush Recession #2” (two recessions, same Bush). But while the financial crisis hit banking and real estate, it only presented a speed bump on the road to today’s insane technology run up, as the all-time NASDAQ chart below will attest.

NASDAQ Chart

That ominous spike around 2000 was the dot.com bubble, of course. The little dip around 2008? Well, that was Bush Recession #2, aka the financial crisis.

Sure Signs of a Bubble

As we complete our scary Halloween story, the question now is, do we live happily ever after? Direct your attention if you will to what’s happened with the NASDAQ since the last recession. A rather sharp run up, wouldn’t you say? Now my crystal ball is no clearer than anyone else’s, but my bet is that we’re in the midst of yet another speculative bubble – and all bubbles pop sooner or later.

I’m no financial analyst, but I do follow the technology marketplace, and I have the perspective of someone who lived through the dot.com bubble. The reason I make such a dire prediction isn’t primarily based on the chart above, but rather the following similarities between what’s going on today and the period during the dot.com run up.

Crazy Money: Acquisitions. In “normal” times, a small tech company with a few dozen people and no profits might sell for a few million bucks. Well, you don’t have to be an avid reader of the financial press to know that there has been a series of such acquisitions, only in the billions of dollars. Facebook picked up WhatsApp for a cool $19 billion. Google bought Nest Labs for $3.2 billion. Facebook again used its clout to buy Instagram for the bargain-basement price of only a billion – to name a few.

These transactions in and of themselves aren’t necessarily the primary indicator of a speculative bubble. Rather, it’s the effect they have on other small tech companies and the people who work for them – or who might want to join such a company. When people start or work for companies because they see them as lottery tickets with billion dollar jackpots, rather than opportunities to make some money solving problems for customers, you have a huge “party like it’s 1999” red flag on your hands.

Crazy Money: Investments. Remember Zefer? No? Well, listen to Grandpa again. Zefer was one of a group of dot.com consulting darlings we liked to call iBuilders (I worked for a time at another iBuilder, USWeb/CKS, which became spectacular dot.com flameout marchFIRST, but I digress.) Zefer made history back in 1999 when they snagged a $100 million VC investment – unheard of at the time for a professional services firm.

Today, $100 million is chump change. So far this year alone, we’ve seen VC investments of $1.2 billion for Uber. $325 million for Dropbox. $250 million for Lyft. $200 million for AirBnB. $160 million for Pinterest. $160 million for Cloudera. $158 million for Box. And too many deals in the $100 million range to list (numbers from here). And those are just some 2014 deals – many of these companies had received many tens or hundreds of millions in earlier rounds.

And just what are these companies supposed to do with all this green? Grow. Big. And fast. The VCs are looking for “multiple baggers” – a simple 10% or 20% return on investment isn’t good enough for this group of one percenters, oh no. They want to multiply their investments many times over. 1000%. 2000%. Those are the returns they’re really hoping for.

When investors put $10 million into a company hoping to get $100 million out, that’s one thing. But just what will that NASDAQ chart have to look like for investments like the ones above to pay off? Might as well invest in tulips.

Hype about Hype. Hype – which I might define as overblown rhetoric touting products or services with a limited current value proposition – is a common phenomenon at all times, and doesn’t necessarily indicate a speculative bubble. But when we start seeing hype about the hype, that’s when my alarm bells go off. Case in point: when The Motley Fool investing site publishes an article entitled Believe the Hype: The Internet of Things Is No Gimmick, then in my opinion, it’s time to sell all your stock in the market in question, which in this case is the ridiculously overhyped Internet of Things.

How to Mitigate the Fallout

The great thing about playing musical chairs is that we all have a seat until the music stops. So get while the getting is good, to be sure. Also remember that it’s anybody’s guess whether a correction in the technology marketplace (or the broader digital marketplace, as the Ubers and AirBnBs of the world aren’t really technology plays) will lead to a broader market recession. After all, unemployment in the US has been going down steadily for years (yes, the “Obama Recovery,” naturally), and the Federal Reserve has yet to even start raising interest rates to cool inflation, both signs that we have a good while to go before the broader market cools.

In the meantime, my advice is the same advice I’d give any business at any time – only during speculative run ups, this advice becomes even more important: focus on the fundamentals. Businesses exist to serve their customers. Customers pay for products and services they want or need, and companies make money by providing them at prices customers are willing to pay. So simple, and yet so easy to forget during times of insanity. Keep your eye on your customer and you’ll do OK – even if everyone else is partying like it’s 1999.


Posted by Sandeep Chanda on October 22, 2014

Docker has sort of revolutionized the micro-services ecosystem since its first launch a little more than a year ago. The recent announcement from Microsoft about the partnership with Docker is a significant move, with some even calling it the best thing that has happened to Microsoft since .NET. This partnership will allow developers to create Windows Server Docker containers!

What is interesting is that this move will combine efforts and investment directly from the Windows Server product team with those of the open source community that has been championing the cause for Docker, giving Docker a serious footprint in the world of distributed application development, build, and distribution.

Dockerized apps for Linux containers on Windows Azure have already been in play for a while now. With this new initiative, Windows Server-based containers will see the light of day. This is very exciting for developers, as it will allow them to create and distribute applications on a mixed platform of both Linux and Windows. To align with the Docker platform, Microsoft will focus on the Windows Server Container infrastructure that will allow developers in the .NET world to share, publish, and ship containers to virtually any location running the next generation of Windows Server, including Microsoft Azure. The following initiatives have been worked out:

  1. Docker Engine supporting Windows Server images in the Docker Hub.
  2. Portability with the Docker Remote API for multi-container applications (see the sketch after this list).
  3. Integration of Docker Hub with Microsoft Azure Management Portal for easy provisioning and configuration.
  4. MS Open Tech will contribute the code to Docker Client supporting the provisioning of multi-container Docker applications using the Remote API.
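
To give a concrete feel for the Remote API mentioned in item 2, here is a minimal sketch that lists containers from C#. It assumes the Docker daemon has been configured to listen on TCP port 2375 (by default it listens only on a local socket), and it uses the standard GET /containers/json call:

using System;
using System.Net.Http;

class DockerRemoteApiDemo
{
    static void Main()
    {
        // Assumes the daemon was started with something like -H tcp://0.0.0.0:2375.
        using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:2375") })
        {
            // GET /containers/json returns the running containers as a JSON array.
            string json = client.GetStringAsync("/containers/json").Result;
            Console.WriteLine(json);
        }
    }
}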

This partnership should silence the reservations critics had regarding the success of the Docker platform and will be a great win for developers in the .NET world!


Posted by Jason Bloomberg on October 17, 2014

At Intellyx our focus is on digital transformation, so I spend a lot of my time helping digital professionals understand how to leverage the various technology options open to them to achieve their customer-driven business goals.

Who is a digital professional? People with titles like Chief Digital Officer, VP of Ecommerce, VP of Digital Marketing, or even Chief Marketing Officer – in other words, people who are marketers at heart, but who now have one foot solidly in the technology arena, as they’re on the digital front lines, where customers interact with the organization.

One of the most important activities that enables me to interact with such digital professionals is attending conferences on digital strategy. To this end I have been attending Innovation Enterprise conferences – first, the Digital Strategy Innovation conference a few weeks ago in San Francisco, and coming up, the Big Data and Marketing Innovation Summit in Miami November 6 – 7.

Full disclosure: Intellyx is an Innovation Enterprise media sponsor, and I’m speaking at the upcoming conference as well as chairing the first day – but choosing to be involved with these conferences was a deliberate decision on my part, as the digital professional is an important audience for Intellyx.

Nevertheless, my traditional and still core audience is the IT professional. Most of the conferences I attend are IT-centric, even though the digital story is driving much of the business activity within the IT world as well as the marketing world.

Even so, I find most tech conferences suffer from the same affliction: the echo chamber effect. By echo chamber I mean that tech conferences predictably attract techies – and for the most part, only techies. The exhibitors are techies. The speakers are techies. And of course, the attendees are techies. The entire event consists of techies talking to techies.

The exhibitors, therefore, are hoping that some of the techies that walk by their booth are buyers, or at least, influencers of the technology buying decision. And thus they keep exhibiting, hoping for those hot leads.

There were exhibitors at the Digital Strategy Innovation show as well – mostly marketing automation vendors, with a few marketing intelligence vendors mixed in. In other words, the vendor community expected the digital crowd to be interested solely in marketing technology. After all, the crowd was a marketing crowd, right?

True, that digital crowd was a marketing crowd, but that doesn’t mean their problems were entirely marketing problems. In fact, the audience was struggling much more with web and mobile performance issues than marketing automation issues.

So, where were the web and mobile performance vendors? Nary a one at the Digital Strategy Innovation summit – they were at the O’Reilly Velocity show, a conference centered on web performance that attracts, you guessed it, a heavily technical crowd.

What about the upcoming Big Data and Marketing Innovation Summit? True, there are a couple of Big Data technology vendors exhibiting, but the sponsorship rolls are surprisingly sparse. We media sponsors actually outnumber the paying sponsors at this point!

So, where are all the Big Data guys? At shows like Dataversity’s Enterprise Data World, yet another echo chamber technology show (although more people on the business side come to EDW than to shows like Velocity).

The moral of this story: the digital technology buyer is every bit as likely to be a marketing person as a techie, if not more so. For vendors who have a digital value proposition, centering your marketing efforts solely on technology audiences will miss an important and growing market segment.

It’s just a matter of time until vendors figure this out. If you’re a vendor, then who will be the first to capitalize on this trend, you or your competition?


Posted by Sandeep Chanda on October 15, 2014

In one of the previous blog posts, I introduced DocumentDB – Microsoft's debut into the world of NoSQL databases. You learned how it differs in being a JSON-document-only database. You also learned to create an instance of DocumentDB in Azure.

In the previous post, you used NuGet to install the required packages to program against DocumentDB in a .NET application. Today let's explore some of the programming constructs to operate on an instance of DocumentDB.

The first step is to create a repository that lets you connect to your instance of DocumentDB. Create a repository class and reference the Microsoft.Azure.Documents.Client namespace in it. The Database object can be used to create (or connect to) the database instance, as the following code illustrates:

Database db = DbClient.CreateDatabaseAsync(new Database { Id = DbId }).Result;

Here DbClient is a property of type DocumentClient exposed by the Microsoft.Azure.Documents.Client API in your repository class. It provides the CreateDatabaseAsync method to connect to DocumentDB. You need the following key values from your instance of DocumentDB in Azure:

  1. End point URL from Azure Management Portal
  2. Authentication Key
  3. Database Id
  4. Collection name

You can create an instance of DocumentClient using the following construct:

private static DocumentClient DbClient
{
    get
    {
        // The endpoint URL and authorization key are read from configuration.
        Uri endpointUri = new Uri(ConfigurationManager.AppSettings["endpoint"]);
        return new DocumentClient(endpointUri, ConfigurationManager.AppSettings["authKey"]);
    }
}

Next you need to create a Document Collection using the method CreateDocumentCollectionAsync.

DocumentCollection collection = DbClient.CreateDocumentCollectionAsync(db.SelfLink, new DocumentCollection { Id = CollectionId }).Result;

You are now all set to perform DocumentDB operations using the repository. Note that you need to reference Microsoft.Azure.Documents.Linq to use LINQ constructs for querying. Here is an example:

var results = DbClient.CreateDocumentQuery<T>(collection.DocumentsLink); 

Note that whatever entity replaces type T, the properties of that entity must be decorated with the JsonProperty attribute to allow JSON serialization.
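
For illustration, a hypothetical entity might look like the following (the class and property names are mine):

using Newtonsoft.Json;

public class Customer
{
    // DocumentDB expects a lowercase "id" field on every document;
    // JsonProperty maps the .NET property onto it during serialization.
    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }

    [JsonProperty(PropertyName = "name")]
    public string Name { get; set; }
}

With the entity in place (and System.Linq in scope), the query above can be filtered with ordinary LINQ operators, for example:

var customers = DbClient.CreateDocumentQuery<Customer>(collection.DocumentsLink)
                        .Where(c => c.Name == "Contoso")
                        .ToList();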

To create an entry you can use the CreateDocumentAsync method as shown here:

DbClient.CreateDocumentAsync(collection.SelfLink, entity); // entity is an instance of T

In a similar fashion, you can also use the equivalent update method to update the data in your instance of DocumentDB.
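
A minimal sketch of an update using the SDK's ReplaceDocumentAsync (here doc is a previously retrieved Document and updatedEntity is the changed object – both names are placeholders):

// Overwrite the stored document with the updated entity.
var response = DbClient.ReplaceDocumentAsync(doc.SelfLink, updatedEntity).Result;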

Beyond .NET, DocumentDB also provides libraries for JavaScript and Node.js. The interesting aspect is that it allows T-SQL-style operations such as creating stored procedures, triggers, and user-defined functions in JavaScript. You can write procedural logic in JavaScript, with atomic transactions. Performance is typically very good, with JSON mapped all the way from the client side to DocumentDB as the unit of storage.
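
As a rough illustration of that JavaScript angle, registering a trivial stored procedure from the .NET client looks roughly like this (the procedure id and body are made up for the example):

var sproc = new StoredProcedure
{
    Id = "helloWorld",
    // The body is JavaScript and executes inside DocumentDB, next to the data.
    Body = @"function() {
        var response = getContext().getResponse();
        response.setBody('Hello, DocumentDB');
    }"
};

var created = DbClient.CreateStoredProcedureAsync(collection.SelfLink, sproc).Result;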


Posted by Sandeep Chanda on October 10, 2014

The ongoing Xamarin Evolve conference is generating a lot of enthusiasm amongst cross-platform developers across the globe.

Xamarin has so far showcased the Android Player, a simulator with hardware acceleration that claims to be much faster than the emulator that ships with the Android SDK. It is based on OpenGL and uses hardware-accelerated virtualization with VT-x and AMD-V. The player relies on VirtualBox 4.3 or higher and runs equally well on Windows (7 or later) and OS X (10.7 or higher). After installing the player, you select the emulator image to run and pick the device to simulate from the Device Manager. The emulator then runs exactly like the Android SDK emulator, and you can perform various hardware-style actions by clicking the buttons provided on the right-hand side. You can also simulate operations like multi-touch, battery operations, and location controls. To install your apps for testing, you can drag and drop the APK file into the player.

Another cool release is the profiler, which can be leveraged to analyze C# code and profile it for potential performance bottlenecks and memory leaks. The profiler performs two important tasks: it samples to track memory allocation, and it examines the call tree to determine the order in which functions are called. It also provides a snapshot of memory usage on a timeline, allowing administrators to gain valuable insights into memory usage patterns.

My favorite feature so far, however, is the preview of Sketches. Sketches provides an environment to quickly evaluate code and analyze the outcome. It offers immediate results without the need to compile or deploy, and you can use it from Xamarin Studio. More on Sketches in the next post, after I install it and give it a try myself.


Posted by Jason Bloomberg on October 9, 2014

IT industry analyst behemoth Gartner is having their Symposium shindig this week in Orlando, where they made such predictions as “one in three jobs will be taken by software or robots by 2025” and “By year-end 2016, more than USD 2 billion in online shopping will be performed exclusively by mobile digital assistants,” among other deep and unquestionably thoughtful prognostications.

And of course, Gartner isn’t the only analyst firm that uses their crystal ball to make news. Forrester Research and IDC, the other two remaining large players in the IT industry analysis space, also feed their customers – as well as the rest of us – such predictions of the future.

Everybody knows, however, that predicting the future is never a sure thing. Proclamations such as the ones above boil down to matters of opinion – as the fine print on any Gartner report will claim. And yet, at some point in time, such claims will become verifiable matters of fact.

The burning question in my mind, therefore, is where are the analyses of past predictions? Just how polished are the crystal balls at the big analyst firms anyway? And are their predictions better than anyone else’s?

If all you hear are crickets in response to these questions, you’re not alone. Analyst firms rarely go back over past predictions and compare them to actual data. And we can all guess the reason: their predictions are little more than random shots in the dark. If they ever get close to actually getting something right, there’s no reason to believe such an eventuality is anything more than random luck.

Of course, anyone in the business of making predictions faces the same challenge, dating back to the Oracle of Delphi in ancient Greece. So what’s different now? The answer: Big Data.

You see, Gartner and the rest spend plenty of time talking about the predictive power of Big Data. Our predictive analysis tools are better than ever, and furthermore, both the quantity of available data and our ability to analyze it are improving dramatically.

Furthermore, an established predictive analytics best practice is to measure the accuracy of your predictions and feed back that information in order to improve the predictive algorithms, thus iteratively polishing your crystal ball to a mirror-like sheen.

So ask yourself (and if you’re a client of one of the aforementioned firms, ask them) – why aren’t the big analyst shops analyzing their own past predictions, not only to let us know just how good they are at prognostication, but to improve their prediction methodologies? Time to eat your own dog food, Gartner!


Posted by Jason Bloomberg on October 1, 2014

In some ways, Oracle’s self-congratulatory San Francisco shindig known as OpenWorld is as gripping as any Shakespearean tragedy. For all the buzz today about transformation, agility, and change, it’s hard to get a straight story out of Oracle about what they want to change – if they really want to change at all.

First, there’s the odd shuffling of executives at the top, shifting company founder Larry Ellison into a dual role as Executive Chairman of the Board and CTO, a role that Ellison joked about: “I’m CTO now, I have to do my demos by myself. I used to have help, now it’s gone.” But on a more serious note, Oracle has been stressing that nothing will change at the big company.

Nothing will change? Why would you appoint new CEOs if you didn’t want anything to change? And isn’t the impact of the Cloud a disruptive force that is forcing Oracle to transform, like it or not? Perhaps they felt that claiming the exec shuffle was simply business as usual would calm down skittish shareholders and a skeptical Wall Street. But if I were betting money on Oracle stock, I’d be looking for them to change, not sticking their head in the sand and claiming that no change at all was preferable.

And what about their Cloud strategy, anyway? Ellison has been notoriously wishy-washy on the entire concept, but it’s clear that Cloud is perhaps Oracle’s biggest bet this year. However, “while those products are growing quickly, they remain a small fraction of the company's total business,” accounting for “just 5 percent of his company's revenue,” according to Reuters.

Thus Oracle finds itself in the same growth paradox that drove TIBCO out of the public stock market: only a small portion of the company is experiencing rapid growth, while the lion’s share is not. Of course, these slow-growth doldrums are par for the course for any established vendor; there’s nothing particularly unique about Oracle’s situation in that regard. But the fact still remains that Wall Street loves growth from tech vendors, and it doesn’t matter how fast Oracle grows its Cloud business, investors will still see a moribund incumbent.

The big questions facing Oracle moving forward, therefore, are how much of their traditional business they should reinvent, and whether the Cloud will be the central platform for that reinvention. Unfortunately for Oracle and its shareholders, indications are that the company has no intention of entering a period of creative disruption.

As Ellison said back in 2008, “There are still mainframes. Mainframes were the first that were going to be destroyed. And watching mainframes being destroyed is like watching a glacier melt. Even with global warming, it is taking long time.” Only now it’s 2014, and mainframes aren’t the question – Oracle’s core business is. Will Oracle still use the glacier metaphor? Given the accelerating rate of climate change, I wouldn’t bet on it.


Posted by Sandeep Chanda on September 29, 2014

Azure is increasingly becoming the scalable CMS platform with support for a host of popular CMS providers via the marketplace. The list already includes some of the big names in the CMS industry, like Umbraco, Kentico, Joomla, and DNN.

The most recent addition to this list is WordPress. It is very simple to create a WordPress website. Go to the Azure Preview Portal and click New to go to the Gallery. Select Web from the navigation pane and you will see Scalable WordPress listed as one of the options (along with other options such as Umbraco and Joomla).

Scalable WordPress uses Azure Storage by default to store site content. This automatically allows you to use Azure CDN for the media content that you want to use in your WordPress website.

Once you select Scalable WordPress, you will be redirected to the website configuration pane, where you can specify the name of the website, the database and the storage configuration settings. You are all set!

Log in to your WordPress site dashboard to configure plug-ins like Jetpack. Jetpack, formerly available with WordPress.com, is now also available with Scalable WordPress. Your WordPress CMS site hosted in Azure can now support millions of visits and scale on demand. The Azure WordPress CMS website will support auto-scale out of the box. You can also enable the backup and restore features available with Azure websites for your CMS site. It will also support publishing of content from stage to production.


Posted by Jason Bloomberg on September 25, 2014

A decade ago, back in the “SOA days,” we compared various Enterprise Service Bus (ESB) vendors and the products they were hawking. When the conversation came around to TIBCO, we liked to joke that they were the “Scientology of ESB vendors,” because their technology was so proprietary that techies essentially had to devote their life to TIBCO to be worthy of working with their products.

But joking aside, we also gave them credit where credit was due. Their core ESB product, Rendezvous, actually worked quite well. After all, NASDAQ, FedEx, and Delta Airlines ran the thing. TIBCO obviously had the whole scalability thing nailed – unlike competitors like SeeBeyond back in the day, who competed with TIBCO in the Enterprise Application Integration space (the precursor to ESBs).

Cut to 2014, and TIBCO’s fortunes are now in question, as the stock market has pummeled their stock price, and a leveraged buyout (LBO) is in the works, with deep pocketed firms hoping to take the company private.

Sometimes, going private can be a good thing for a company, as it gives them the money as well as the privacy they need to make bold, innovative changes before relaunching as a public company. But in other cases, LBOs are opportunities for the venture capitalist vultures to sell off the company in parts, squeezing every last penny out of the assets while shifting all the risk to employees, customers, and basically anybody but themselves.

Which path TIBCO will take is unclear, as the buyout itself isn’t even a sure thing at this point. But TIBCO’s downfall – noting that I’m sure no one at the company would call it that – has some important lessons for all of us, because TIBCO’s story isn’t simply about a dinosaur unable to adapt to a new environment.

Their story is not a simple Innovator’s Dilemma case study. In fact, they’ve moved solidly into Cloud, Big Data, and Social Technologies – three of the hot, growing areas that characterize the technology landscape for the 2010s. So what happened to them?

It could be argued that they simply executed poorly, essentially taking some wrong turns on the way to a Cloudified nirvana. Rolling out a special social media product only for rich and important people – a social network for the one percent – does indicate that they’re out of touch with most customers.

And then there’s the proprietary aspect to their technology that is still haunting them. Today’s techies would much rather work with modern languages and environments than have to go back to school to learn a particular vendor’s way of doing things.

Perhaps the problem is price. Their upstart competitors continue their downward pricing pressure, one of the economic patterns that the move to the Cloud has doubled down on. From the perspective of shareholders, however, TIBCO’s biggest problem has been growth. It’s very difficult for a large, established vendor to grow nearly as fast as smaller, more nimble players, especially when it still makes a lot of its money in saturated markets like messaging middleware.

Adding Cloud, Big Data, or Social Media products to the product list doesn’t change this fundamental mathematics, even though those new products may themselves experience rapid growth, since the new product lines account for a relatively small portion of their overall revenue.

So, how is a company like TIBCO to compete with smaller, faster growing vendors? Here’s where LBO plan B comes in: break up the company. Sell off the established products like Rendezvous to larger middleware players like Oracle. I’m sure Oracle would be happy to have TIBCO’s middleware customers, and they have shown a reasonable ability to keep such customers generally happy over the years.

Any SeeBeyond customers out there? SeeBeyond was acquired by Sun Microsystems, who renamed the product Java CAPS. Then Oracle acquired Sun, rolling Java CAPS and BEA Systems’ middleware products into Oracle SOA Suite. No one would be that surprised if Rendezvous suffered a similar fate.

The owners of whatever is left of TIBCO would focus their efforts on growing smaller, newer products. The end result won’t be a TIBCO that us old timers would recognize, but should they ever go public again, they have a chance to be a market darling once more.

