Posted by Jason Bloomberg on October 1, 2014

In some ways, Oracle’s self-congratulatory San Francisco shindig known as OpenWorld is as gripping as any Shakespearean tragedy. For all the buzz today about transformation, agility, and change, it’s hard to get a straight story out of Oracle about what they want to change – if they really want to change at all.

First, there’s the odd shuffling of executives at the top, shifting company founder Larry Ellison into a dual role as Executive Chairman of the Board and CTO, a role that Ellison joked about: “I’m CTO now, I have to do my demos by myself. I used to have help, now it’s gone.” But on a more serious note, Oracle has been stressing that nothing will change at the big company.

Nothing will change? Why would you appoint new CEOs if you didn’t want anything to change? And isn’t the impact of the Cloud a disruptive force that is forcing Oracle to transform, like it or not? Perhaps they felt that claiming the exec shuffle was simply business as usual would calm down skittish shareholders and a skeptical Wall Street. But if I were betting money on Oracle stock, I’d be looking for them to change, not sticking their head in the sand and claiming that no change at all was preferable.

And what about their Cloud strategy, anyway? Ellison has been notoriously wishy-washy on the entire concept, but it’s clear that Cloud is perhaps Oracle’s biggest bet this year. However, “while those products are growing quickly, they remain a small fraction of the company's total business,” accounting for “just 5 percent of his company's revenue,” according to Reuters.

Thus Oracle finds itself in the same growth paradox that drove TIBCO out of the public stock market: only a small portion of the company is experiencing rapid growth, while the lion’s share is not. Of course, these slow-growth doldrums are par for the course for any established vendor; there’s nothing particularly unique about Oracle’s situation in that regard. But the fact remains that Wall Street loves growth from tech vendors; no matter how fast Oracle grows its Cloud business, investors will still see a moribund incumbent.

The big questions facing Oracle moving forward, therefore, are how much of its traditional business it should reinvent, and whether the Cloud will be the central platform for that reinvention. Unfortunately for Oracle and its shareholders, indications are that the company has no intention of entering a period of creative disruption.

As Ellison said back in 2008, “There are still mainframes. Mainframes were the first that were going to be destroyed. And watching mainframes being destroyed is like watching a glacier melt. Even with global warming, it is taking a long time.” Only now it’s 2014, and mainframes aren’t the question – Oracle’s core business is. Will Oracle still use the glacier metaphor? Given the accelerating rate of climate change, I wouldn’t bet on it.


Posted by Sandeep Chanda on September 29, 2014

Azure is increasingly becoming a scalable CMS platform, with support for a host of popular CMS providers via the marketplace. The list already includes some of the big names in the CMS industry, such as Umbraco, Kentico, Joomla, and DNN.

The most recent addition to this list is WordPress. Creating a WordPress website is very simple. Go to the Azure Preview Portal and click New to go to the Gallery. Select Web from the navigation pane and you will see Scalable WordPress listed as one of the options (along with other options such as Umbraco and Joomla).

Scalable WordPress uses Azure Storage by default to store site content. This automatically allows you to use Azure CDN for the media content that you want to use in your WordPress website.

Once you select Scalable WordPress, you will be redirected to the website configuration pane, where you can specify the name of the website, the database and the storage configuration settings. You are all set!

Log in to your WordPress site dashboard to configure plug-ins like Jetpack. Jetpack, formerly exclusive to WordPress.com, is now also available with Scalable WordPress. Your WordPress CMS site hosted in Azure can now support millions of visits and scale on demand: the Azure WordPress website supports auto-scale out of the box. You can also enable the backup and restore features available with Azure websites for your CMS site, and publish content from stage to production.


Posted by Jason Bloomberg on September 25, 2014

A decade ago, back in the “SOA days,” we compared various Enterprise Service Bus (ESB) vendors and the products they were hawking. When the conversation came around to TIBCO, we liked to joke that they were the “Scientology of ESB vendors,” because their technology was so proprietary that techies essentially had to devote their lives to TIBCO to be worthy of working with their products.

But joking aside, we also gave them credit where credit was due. Their core ESB product, Rendezvous, actually worked quite well. After all, NASDAQ, FedEx, and Delta Airlines ran the thing. TIBCO obviously had the whole scalability thing nailed – unlike rivals such as SeeBeyond, which competed with TIBCO back in the day in the Enterprise Application Integration space (the precursor to ESBs).

Cut to 2014, and TIBCO’s fortunes are in question: the market has pummeled its stock price, and a leveraged buyout (LBO) is in the works, with deep-pocketed firms hoping to take the company private.

Sometimes, going private can be a good thing for a company, as it gives them the money as well as the privacy they need to make bold, innovative changes before relaunching as a public company. But in other cases, LBOs are opportunities for the venture capitalist vultures to sell off the company in parts, squeezing every last penny out of the assets while shifting all the risk to employees, customers, and basically anybody but themselves.

Which path TIBCO will take is unclear, as the buyout itself isn’t even a sure thing at this point. But TIBCO’s downfall – though I’m sure no one at the company would call it that – holds some important lessons for all of us, because TIBCO’s story isn’t simply about a dinosaur unable to adapt to a new environment.

Their story is not a simple Innovator’s Dilemma case study. In fact, they’ve moved solidly into Cloud, Big Data, and Social Technologies – three of the hot, growing areas that characterize the technology landscape for the 2010s. So what happened to them?

It could be argued that they simply executed poorly, essentially taking some wrong turns on the way to a Cloudified nirvana. Rolling out a special social media product only for rich and important people – a social network for the one percent – does indicate that they’re out of touch with most customers.

And then there’s the proprietary aspect of their technology that is still haunting them. Today’s techies would much rather work with modern languages and environments than go back to school to learn a particular vendor’s way of doing things.

Perhaps the problem is price. Their upstart competitors continue to exert downward pricing pressure, one of the economic patterns the move to the Cloud has amplified. From the perspective of shareholders, however, TIBCO’s biggest problem has been growth. It’s very difficult for a large, established vendor to grow nearly as fast as smaller, more nimble players, especially when it still makes a lot of its money in saturated markets like messaging middleware.

Adding Cloud, Big Data, or Social Media products to the product list doesn’t change this fundamental math: even though those new products may themselves experience rapid growth, the new product lines account for a relatively small portion of overall revenue.

So, how is a company like TIBCO to compete with smaller, faster growing vendors? Here’s where LBO plan B comes in: break up the company. Sell off the established products like Rendezvous to larger middleware players like Oracle. I’m sure Oracle would be happy to have TIBCO’s middleware customers, and they have shown a reasonable ability to keep such customers generally happy over the years.

Any SeeBeyond customers out there? SeeBeyond was acquired by Sun Microsystems, who renamed the product Java CAPS. Then Oracle acquired Sun, rolling Java CAPS and BEA Systems’ middleware products into Oracle SOA Suite. No one would be that surprised if Rendezvous suffered a similar fate.

The owners of whatever is left of TIBCO would focus their efforts on growing the smaller, newer products. The end result won’t be a TIBCO us old-timers would recognize, but should they ever go public again, they have a chance to be a market darling once more.


Posted by Jason Bloomberg on September 20, 2014

Want to make tech headlines without having to change anything – or in fact, do anything? If you’re Oracle, all you have to do (or not do, as the case may be) is shake up the top levels of management.

The news this week, as per the Oracle press release: the only CEO in Oracle’s history, Larry Ellison, is stepping down as CEO. Big news, right? After all, he’s 70 years old now, and he’s a fixture on the yachting circuit. Maybe it’s time for him to relax on his yacht and enjoy his billions in retirement while hand-picked successors Mark Hurd and Safra Catz take the reins as co-CEOs. (Apparently Ellison’s shoes are so big the only way to fill them is to put one new CEO in each.)

But look more closely and you’ll see that sipping Mai Tais on the Rising Sun isn’t Ellison’s plan at all. He’s planning to keep working full time as CTO and in his newly appointed role as Executive Chairman. The only difference here is the reporting structure: Hurd and Catz now report to the Board instead of directly to Ellison. “Safra and Mark will now report to the Oracle Board rather than to me,” Ellison purrs. “All the other reporting relationships will remain unchanged.”

Oh, and Ellison reports directly to the Board as well, as he has always done, rather than to either Hurd or Catz. And who does the Board report to? Ellison, of course, in his new role as Executive Chairman.

It’s important to note that Oracle never had an Executive Chairman before, only a Chairman (Jeff Henley, now demoted to Vice Chairman of the Board). So, what’s the difference between a Chairman and an Executive Chairman? According to Wikipedia, an Executive Chairman holds “an office separate from that of CEO, where the titleholder wields influence over company operations.”

In other words, Ellison is now even more in charge than he was before. In his role as CEO, he reported to the Board, led by a (non-executive) Chairman. But now, he gets to run the board, as well as the technology wing of Oracle.

So, will anything really change at Oracle? Unlikely – at least not until Ellison finally kicks the bucket. It was always Ellison’s show, and now Ellison has further consolidated his iron grip on his baby. If you’re expecting change from Oracle – say, increased innovation for example – you’ll have to keep waiting.


Posted by Sandeep Chanda on September 15, 2014

NuGet has been a fairly popular mechanism to publish and distribute packaged components to be consumed by Visual Studio projects and solutions. Releases from the Microsoft product teams are increasingly being distributed as NuGet packages, and NuGet is officially the package manager for the Microsoft development platform, including .NET.

NuGet.org is the central package repository used by authors and consumers for global open distribution. One limitation of the central repository is that, in large-scale enterprise teams, it often results in package version mismatches across teams, solutions, and projects. If not managed early, this can spiral into a significant application versioning problem for release managers during deployment.

One approach to solving this problem is to provision a local NuGet server for your enterprise. It mimics the central repository; however, it remains under the control of your release managers, who can decide which package versions to release to your consumers. The idea is that your Visual Studio users point to your local NuGet server instead of the central repository, and the release management team controls which versions of packages the teams use, for consistency.

It is very easy to create a NuGet server. You can then use the nuget command-line tool to publish packages to it; you will need an API key and the host URL.
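
For example, once a component has been packaged into a .nupkg file, publishing it to the internal server is a single command. This is only a sketch – the package name, server URL, and API key below are placeholders for your own values:

nuget push MyCompany.Logging.1.0.0.nupkg your-api-key -Source http://nuget.internal.example.com/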

Developers using Visual Studio can go to Tools  →  Options  →  NuGet Package Manager → Package Sources and add the internal package server as a source.
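
The same package source can also be registered from the command line, which is handy for scripting developer machine setup – again, the source name and URL are placeholders:

nuget sources add -Name "Internal Packages" -Source http://nuget.internal.example.com/nuget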

While local NuGet servers are used today as a mechanism for distributing internal packages, they can also be extended to become a gated process for distributing global packages to bring consistency in the versions used across teams.


Posted by Jason Bloomberg on September 10, 2014

The new Apple Watch has many cool features to be sure, but I just don’t like the fact that Apple discriminates on the basis of handedness.

The Apple Watch comes in a right-handed configuration. Yes, there’s a left-handed setting, but you need to switch the band around, and then the button on the side is in an awkward lower position.

In other words, left-handed people either have to suck it up and use the watch in the right-handed configuration, or go through the hassle of reconfiguring it only to end up with an inferior design. Thanks a lot, Apple. But hey, we're only lefties, and we're only being inconvenienced.

We should be used to it, right? After all, user interfaces have been right-handed for years. To this day the arrow cursor is right-handed, and scrollbars are always on the right. And for software that does have a left-handed configuration, more often than not some aspect of the UI doesn’t work properly in left-handed mode.

If we were a legally protected minority then it wouldn't be a question of being inconvenienced, right? Were separate water fountains simply inconvenient?

Ten percent of the population is left-handed. And all us lefties know that left-handedness correlates with intelligence, so I wouldn’t be surprised if the percentage is higher within the storied walls of Apple. So, why didn’t Apple release a left-handed version of the Apple Watch?

I think Apple is being offensive by paying lip service to handedness, but giving lefties a second-class experience nevertheless. But that's just me. Who cares what lefties think?


Posted by Jason Bloomberg on September 5, 2014

“Never believe anything you read on the Internet.” – Abraham Lincoln

Honest Abe never spoke a truer word – even though he didn’t say anything of the sort, of course. And while we can immediately tell this familiar Facebook saying is merely a joke, there are many documents on the Internet that have such a veneer of respectability that we’re tempted to take them at their word – even though they may be just as full of nonsense as the presidential proclamation above.

Among the worst offenders are survey reports, especially when they are surveys of professionals about emerging technologies or approaches. Fortunately, it’s possible to see through the bluster, if you know the difference between a good survey and a bad one. Forewarned is forearmed, as the saying goes – even though Lincoln probably didn’t say that.

The Basics of a Good Survey

The core notion of a survey is that a reputable firm asks questions of a group of people who represent a larger population. If the surveyed group accurately represents the larger population, the answers are truthful, and the questions are good, then the results are likely to be relatively accurate (although statistical error is always a factor). Unfortunately, all of these criteria present opportunities for problems. Here are a few things to look for.

Does the sample group represent the larger population? The key here is that the sample group must be selected randomly from the population, and any deviation from randomness must be compensated for in the analysis. Ensuring randomness, however, is quite difficult, since respondents may or may not want to participate, or may or may not be easy to find or identify.

Here’s how reputable political pollsters handle deviations from randomness. First, they have existing demographic data about the population in question (say, voters in a county). Based on census data, they know what percent are male and female, what percent are registered Democrat or Republican, what the age distribution of the population is, and so on. Then they select, say, 100 telephone numbers at random in the county, and call each of them. Some calls go to voicemail or go unanswered, and many people who do answer refuse to participate. For those who do participate, the pollsters ask demographic questions as well as the questions the survey is actually interested in. If they find, say, that 50% of voters in a county are female, but 65% of respondents were female, they have to adjust the results accordingly. Making such adjustments for all factors – including who has phones, which numbers are mobile, etc. – is complex and error-prone, but it is the best they can do to get the most accurate result possible.
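
To make that adjustment concrete, here is the arithmetic for the gender example above (illustrative numbers only):

weight per female respondent = 50% / 65% ≈ 0.77
weight per male respondent = 50% / 35% ≈ 1.43

Each respondent’s answers are multiplied by the weight for his or her group, so the weighted sample mirrors the county’s actual 50/50 split.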

Compare that political polling selection process to how, say, Digital Transformation, Big Data, or Cloud Computing adoption surveys assemble their populations. Perhaps the survey company emails their mailing list and asks for volunteers. Maybe it’s a Web page or a document handed out at a conference. Or worst of all, perhaps survey participants are hand-selected by the sponsor of the survey. None of these methods produces a sample that’s even close to being random. The upshot? The survey results cannot be expected to represent the opinions of any population other than the survey participants themselves.

Are the answers truthful? I’m willing to posit that people are generally honest folks, so the real question here is, what motivations would people have not to be completely honest on a survey? For emerging technologies and approaches the honesty question is especially important, because people like to think they’re adopting some new buzzword, even if they’re not. Furthermore, people like to think they understand a new buzzword, even if they don’t. People also tend to exaggerate their adoption: they may say they’re “advanced Cloud adopters” when they simply use online email, for example. Finally, executives may have different responses than people in the trenches. CIOs are more likely to say they’re doing DevOps than developers in the same organization, for example.

Are the questions good? This criterion is the most subtle, as the answer largely amounts to a matter of opinion. If the surveying company or the sponsor thinks the questions are good, then aren’t they? Perhaps, but the real question here is one of ulterior motives. Is the sponsor looking for the survey to achieve a particular result, and thus is influencing the questions accordingly? Were certain questions thrown out after responses were received, because those answers didn’t make the surveying company or sponsor happy? If scientific researchers were to exclude certain questions because they didn’t like the results, they’d get fired and blacklisted. Unfortunately, there are no such punishments in the world of business surveys.

So, How Do You Tell?

I always recommend taking surveys with a large grain of salt regardless, but the best way to get a sense of the quality of a survey is to look at the methodology section. The survey you’re wondering about doesn’t have a methodology section, you say? Well, it might be good for wrapping fish, but not much else, since every survey report should have one.

Even if it has one, take a look at it with a critical eye, not just for what it says, but for what it doesn’t say. Then, if some critical bit of information is missing, assume the worst. For example, here is the entire methodology section from a recent Forrester Research “Thought Leadership Paper” survey on Business Transformation commissioned by Tata Consultancy Services (TCS):

In this study, Forrester interviewed 10 business transformation leaders and conducted an online survey of 100 US and UK decision-makers with significant involvement in business transformation projects. Survey participants included Director+ decision-makers in IT and line of business. Questions provided to the participants asked about their goals, metrics, and best practices around business transformation projects. Respondents were offered an incentive as a thank you for time spent on the survey. The study began in February 2014 and was completed in May 2014.

How did Forrester ensure the randomness of their survey sample? They didn’t. Is there any reason to believe the survey sample accurately represents a larger population? Nope. How did they select the people they surveyed? It doesn’t say, except to point out they have significant involvement in business transformation projects. So if we assume the worst, we should assume the respondents were hand-selected by the sponsor. Does the report provide an analysis of the answers to every question asked? It doesn’t say. The methodology statement does point out respondents were offered an incentive for participating, however. This admission indicates Forrester is a reputable firm to be sure, but doesn’t say much for the accuracy or usefulness of the results of the report.

So, what should a business survey report methodology look like? Take a look at this one from the International Finance Corporation (IFC), a member of the World Bank Group. The difference is clear. Consider yourself forewarned!


Posted by Sandeep Chanda on September 3, 2014

Microsoft’s recent entry into the world of NoSQL databases has been greeted with quite a fanfare, along with mixed reviews from the makers of competing products. What is interesting is that Microsoft chose to build DocumentDB as a new Azure-only offering rather than enhancing its existing table storage capabilities.

DocumentDB is a JSON-document-only database-as-a-service. Significant features included in DocumentDB that are missing in its traditional rivals are rich query support (including support for LINQ) and transactions. What is also interesting is that the new SQL syntax for querying JSON documents automatically recognizes native JavaScript constructs. It also supports programmability features such as user-defined functions, stored procedures, and triggers. Given that it is backed by Azure with high availability and scalability, the offering seems to hold an extremely promising future.

To get started, create a new instance of DocumentDB in your Microsoft Azure Preview portal.

Click New in the preview portal and select DocumentDB. Specify a name and additional details like the capacity configuration and resource group. Go ahead and click the Create button to create an instance of DocumentDB. After creating the instance you can get the URI and keys by clicking on the Keys tile.

Done! You are now good to start using DocumentDB to store and query JSON documents. In your instance of Visual Studio, run the following NuGet command in the Package Manager Console to install the prerequisites for programming with DocumentDB.

PM> Install-Package Microsoft.Azure.Documents.Client -Pre
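
To give a flavor of the .NET API, here is a minimal sketch that creates a database, adds a JSON document, and queries it back. The endpoint URI, key, and names below are placeholders for the values on your instance’s Keys tile, and since the service is in preview the API surface may evolve:

using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

// Endpoint and master key come from the Keys tile (placeholder values here).
// This snippet assumes it runs inside an async method.
var client = new DocumentClient(
    new Uri("https://mydocdb.documents.azure.com:443/"), "<your-master-key>");

// Create a database and a collection to hold JSON documents
Database database = await client.CreateDatabaseAsync(new Database { Id = "OrdersDb" });
DocumentCollection collection = await client.CreateDocumentCollectionAsync(
    database.SelfLink, new DocumentCollection { Id = "Orders" });

// Insert a JSON document (any serializable object works)
await client.CreateDocumentAsync(collection.SelfLink,
    new { id = "1", customer = "Contoso", total = 42.50 });

// Query it back using the SQL-like syntax
var orders = client.CreateDocumentQuery(collection.SelfLink,
    "SELECT * FROM Orders o WHERE o.customer = 'Contoso'");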

If you want to program against it using JavaScript, you can also install the JavaScript SDK from https://github.com/Azure/azure-documentdb-js and then leverage the REST interface to access DocumentDB using permission-based authorization. In a future post, we will look at some of the language constructs in programming with DocumentDB.


Posted by Jason Bloomberg on August 26, 2014

I attended Dataversity’s NoSQL Now! Conference last week, and among the many vendors I spoke with, one story caught my interest. This vendor (who alas must remain nameless) is a leader in the NoSQL database market, specializing in particular in supporting XML as a native format.

In their upcoming release, however, they’re adding JavaScript support – native JSON as well as Server-Side JavaScript as a language for writing procedures. And while the addition of JavaScript/JSON may be newsworthy in itself, the interesting story here is why they decided to add such support to their database.

True, JavaScript/JSON support is a core feature of competing databases like MongoDB. And yes, customers are asking for this capability. But they don’t want JavaScript support because they think it will solve any business problems better than the XML support the database already offers.

The real reason they’re adding JavaScript support is that developers are demanding it – because they want JSON on their resumes, and because JSON is cool, whereas XML isn’t. So the people actually responsible for buying database technology are asking for JSON support as a recruitment and retention tool.

Will adding JavaScript/JSON support make their database more adept at solving real business problems? Perhaps. But if developers will bolt if your database isn’t cool, then coolness suddenly becomes your business driver, for better or worse. One can only wonder: how many other software features are simply the result of the developer coolness factor, independent of any other value to the businesses footing the bill?


Posted by Sandeep Chanda on August 25, 2014

Enterprise monitoring needs over the years have largely been addressed by Microsoft System Center Operations Manager (SCOM). The problem, however, is that SCOM produces a lot of noise, and the data can very quickly become irrelevant for producing any actionable information. IT teams easily fall into the trap of configuring SCOM for every possible scheme of alerts, but do not put effective mechanisms in place to improve the alert-to-noise ratio by creating a usable knowledge base out of the alerts that SCOM generates. Splunk, and its Hadoop-based sibling Hunk, can be very useful in the following respects:

  1. Providing actionable analytics using the alert log in the form of self-service dashboards
  2. Isolation of vertical and horizontal monitoring needs
  3. Generating context around alerts or a group of alerts
  4. Collaboration between IT administrators and business analysts
  5. Creating a consistent alerting scale for participating systems
  6. Providing a governance model for iteratively fine-tuning the system

In your enterprise, Splunk could be positioned in a layer above SCOM, where it gets the alert log as input for processing and analysis. This pair can be used to address the following enterprise monitoring needs of an organization:

  1. Global Service Monitoring - Provides information on the overall health of the infrastructure, including actionable information on disk and CPU usage. It could also be extended to cover network performance and the impact specific software applications have on the health of the system. Splunk augments SCOM by creating dashboards from the collected data that help drive decisions; for example, looking at CPU usage trends on a timeline (see the sample search after this list), IT owners can decide whether to increase or decrease the core fabric.
  2. Application Performance Monitoring - Splunk can be extremely useful in deriving business decisions from the instrumentation you do in code and the trace logs it generates - for example, identifying the purchase patterns of your customers. The application logs and alerts generated by custom applications and commercial off-the-shelf (COTS) software can be routed to Splunk via SCOM using the management packs. Splunk can then help you create management dashboards that in turn help the executive team decide the future course of business.
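
As a simple illustration of the kind of self-service search that feeds such dashboards, the following hypothetical Splunk query charts CPU-usage alerts over time per host. The index and field names are assumptions; they depend on how your SCOM alert log is ingested:

index=scom_alerts object=Processor | timechart span=1h avg(sample_value) by host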

Using Splunk in conjunction with SCOM gives you a very robust enterprise monitoring infrastructure. That said, the true benefit of this stack can be realized only with an appropriate architecture for alert design, process guidance on thresholds, and identification of key performance indicators to improve the signal-to-noise ratio.

