Posted by Sandeep Chanda on October 22, 2014

Docker has revolutionized the microservices ecosystem since its first launch a little more than a year ago. Microsoft's recent announcement of a partnership with Docker is a significant move, with some even calling it the best thing to happen to Microsoft since .NET. The partnership will allow developers to create Windows Server Docker containers!

What is interesting is that this move will combine effort and investment directly from the Windows Server product team with contributions from the open source community that has been championing Docker's cause, giving Docker a serious footprint in the world of distributed application development, build, and distribution.

Dockerized apps in Linux containers on Windows Azure have already been in play for a while now. With this new initiative, Windows Server-based containers will see the light of day. This is very exciting for developers, as it will allow them to create and distribute applications across a mixed platform of both Linux and Windows. To align with the Docker platform, Microsoft will focus on the Windows Server Container infrastructure that will allow developers in the .NET world to share, publish, and ship containers to virtually any location running the next generation of Windows Server, including Microsoft Azure. The following initiatives have been worked out:

  1. Docker Engine supporting Windows Server images in the Docker Hub.
  2. Portability with Docker Remote API for multi-container applications.
  3. Integration of Docker Hub with Microsoft Azure Management Portal for easy provisioning and configuration.
  4. MS Open Tech will contribute code to the Docker client to support provisioning of multi-container Docker applications using the Remote API.

This partnership should silence the reservations critics had regarding the success of the Docker platform and will be a great win for developers in the .NET world!


Posted by Jason Bloomberg on October 17, 2014

At Intellyx our focus is on digital transformation, so I spend a lot of my time helping digital professionals understand how to leverage the various technology options open to them to achieve their customer-driven business goals.

Who is a digital professional? People with titles like Chief Digital Officer, VP of Ecommerce, VP of Digital Marketing, or even Chief Marketing Officer – in other words, people who are marketers at heart, but who now have one foot solidly in the technology arena, as they’re on the digital front lines, where customers interact with the organization.

One of the most important activities that enables me to interact with such digital professionals is attending conferences on digital strategy. To this end I have been attending Innovation Enterprise conferences – first, the Digital Strategy Innovation conference a few weeks ago in San Francisco, and coming up, the Big Data and Marketing Innovation Summit in Miami November 6 – 7.

Full disclosure: Intellyx is an Innovation Enterprise media sponsor, and I’m speaking at the upcoming conference as well as chairing the first day – but choosing to be involved with these conferences was a deliberate decision on my part, as the digital professional is an important audience for Intellyx.

Nevertheless, my traditional and still core audience is the IT professional. Most of the conferences I attend are IT-centric, even though the digital story is driving much of the business activity within the IT world as well as the marketing world.

Even so, I find most tech conferences suffer from the same affliction: the echo chamber effect. By echo chamber I mean that tech conferences predictably attract techies – and for the most part, only techies. The exhibitors are techies. The speakers are techies. And of course, the attendees are techies. The entire event consists of techies talking to techies.

The exhibitors, therefore, are hoping that some of the techies that walk by their booth are buyers, or at least, influencers of the technology buying decision. And thus they keep exhibiting, hoping for those hot leads.

There were exhibitors at the Digital Strategy Innovation show as well – mostly marketing automation vendors, with a few marketing intelligence vendors mixed in. In other words, the vendor community expected the digital crowd to be interested solely in marketing technology. After all, the crowd was a marketing crowd, right?

True, that digital crowd was a marketing crowd, but that doesn’t mean their problems were entirely marketing problems. In fact, the audience was struggling much more with web and mobile performance issues than marketing automation issues.

So, where were the web and mobile performance vendors? Nary a one at the Digital Strategy Innovation summit – they were at the O’Reilly Velocity show, a conference centered on web performance that attracts, you guessed it, a heavily technical crowd.

What about the upcoming Big Data and Marketing Innovation Summit? True, there are a couple of Big Data technology vendors exhibiting, but the sponsorship rolls are surprisingly sparse. We media sponsors actually outnumber the paying sponsors at this point!

So, where are all the Big Data guys? At shows like Dataversity’s Enterprise Data World, yet another echo chamber technology show (although more people on the business side come to EDW than to shows like Velocity).

The moral of this story: the digital technology buyer is every bit as likely to be a marketing person as a techie, if not more so. For vendors who have a digital value proposition, centering your marketing efforts solely on technology audiences will miss an important and growing market segment.

It’s just a matter of time until vendors figure this out. If you’re a vendor, then who will be the first to capitalize on this trend, you or your competition?


Posted by Sandeep Chanda on October 15, 2014

In a previous blog post, I introduced DocumentDB, Microsoft's debut into the world of NoSQL databases. You learned how it differs from its peers as a JSON-document-only database, and you also learned how to create an instance of DocumentDB in Azure.

In the previous post, you used NuGet to install the required packages to program against DocumentDB in a .NET application. Today let's explore some of the programming constructs to operate on an instance of DocumentDB.

The first step is to create a repository that allows you to connect to your instance of DocumentDB. Create a repository class and reference the Microsoft.Azure.Documents.Client namespace in it. The Database object can be used to create an instance, as the following code illustrates:

Database db = DbClient.CreateDatabaseAsync(new Database { Id = DbId }).Result;

Here DbClient is a property of type DocumentClient, exposed by the Microsoft.Azure.Documents.Client API, in your repository class. It provides the CreateDatabaseAsync method to connect to DocumentDB. You need the following key values from your instance of DocumentDB in Azure:

  1. End point URL from Azure Management Portal
  2. Authentication Key
  3. Database Id
  4. Collection name

You can create an instance of DocumentClient using the following construct:

private static DocumentClient DbClient
{
    get
    {
        // The endpoint URL and authentication key are read from configuration
        Uri endpointUri = new Uri(ConfigurationManager.AppSettings["endpoint"]);
        return new DocumentClient(endpointUri, ConfigurationManager.AppSettings["authKey"]);
    }
}
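
The "endpoint" and "authKey" settings referenced above come from your application configuration; the values are the endpoint URL and authentication key copied from the Azure Management Portal. A minimal sketch of the corresponding appSettings entries (the values shown are placeholders):

<appSettings>
  <add key="endpoint" value="https://your-documentdb-account.documents.azure.com:443/" />
  <add key="authKey" value="your-authorization-key" />
</appSettings>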

Next, you need to create a document collection using the CreateDocumentCollectionAsync method. Note that db here is the Database instance created above:

DocumentCollection collection = DbClient.CreateDocumentCollectionAsync(db.SelfLink, new DocumentCollection { Id = CollectionId }).Result;

You are now all set to perform DocumentDB operations using the repository. Note that you need to reference Microsoft.Azure.Documents.Linq to use LINQ constructs for querying. Here is an example:

var results = DbClient.CreateDocumentQuery<T>(collection.DocumentsLink); 

Note that whatever entity replaces type T, its properties must be decorated with the JsonProperty attribute to allow JSON serialization.
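
For illustration, here is a minimal sketch of such an entity. The Customer class and its properties are hypothetical; the point is the JsonProperty decoration described above, which comes from the Newtonsoft.Json package that the DocumentDB client library depends on.

using Newtonsoft.Json;

public class Customer
{
    // Map the .NET property to the "id" field of the stored JSON document
    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }

    [JsonProperty(PropertyName = "name")]
    public string Name { get; set; }
}

With Microsoft.Azure.Documents.Linq referenced, you can then filter with LINQ, for example: DbClient.CreateDocumentQuery<Customer>(collection.DocumentsLink).Where(c => c.Name == "Contoso").ToList();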

To create an entry, you can use the CreateDocumentAsync method, passing an instance of your entity, as shown here:

DbClient.CreateDocumentAsync(collection.SelfLink, entity); // entity is an instance of type T

In a similar fashion, you can also use the equivalent update method to update the data in your instance of DocumentDB.
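
As a sketch of what that looks like (assuming the SDK's ReplaceDocumentAsync method, with existingDoc being a Document returned by an earlier query and a hypothetical property name):

// Retrieve-modify-replace: existingDoc is a Microsoft.Azure.Documents.Document from a prior query
existingDoc.SetPropertyValue("name", "Updated Name");
var response = DbClient.ReplaceDocumentAsync(existingDoc.SelfLink, existingDoc).Result;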

Beyond .NET, DocumentDB also provides client libraries for JavaScript and Node.js. The interesting aspect is that it allows T-SQL-style operations such as the creation of stored procedures, triggers, and user-defined functions using JavaScript. You can write procedural logic in JavaScript with atomic transactions. Performance is typically very good, with JSON mapped all the way from the client side to DocumentDB as the unit of storage.


Posted by Sandeep Chanda on October 10, 2014

The ongoing Xamarin Evolve conference is generating a lot of enthusiasm amongst cross-platform developers across the globe.

Xamarin has so far showcased an Android player, a simulator with hardware acceleration that claims to be much faster than the emulator that ships with the Android SDK. It is based on OpenGL and utilizes hardware-accelerated virtualization with VT-x and AMD-V. The player relies on VirtualBox 4.3 or higher to run, and it runs equally well on Windows (7 or later) and OS X (10.7 or higher). After installing the player, you can select the emulator image to run and choose the device to simulate from the Device Manager. The player then runs just like the Android SDK emulator, and you can perform various actions (typical of hardware operations) by clicking the buttons provided on the right-hand side. You can also simulate operations such as multi-touch, battery status, and location controls. To install your apps for testing, you can drag and drop the APK file into the player.

Another cool release is the profiler, which can be leveraged to analyze C# code and profile it for potential performance bottlenecks and memory leaks. The profiler performs two important tasks: it samples memory allocations to track them, and it examines the call tree to determine the order in which functions are called. It also provides a snapshot of memory usage on a timeline, allowing developers to gain valuable insights into memory usage patterns.

My favorite feature so far, however, is the preview of Sketches. Sketches provides an environment to quickly evaluate code and analyze the outcome. It offers immediate results without the need to compile or deploy, and you can use it from Xamarin Studio. More on Sketches in the next post, after I install it and give it a try myself.


Posted by Jason Bloomberg on October 9, 2014

IT industry analyst behemoth Gartner is having their Symposium shindig this week in Orlando, where they made such predictions as “one in three jobs will be taken by software or robots by 2025” and “By year-end 2016, more than USD 2 billion in online shopping will be performed exclusively by mobile digital assistants,” among other deep and unquestionably thoughtful prognostications.

And of course, Gartner isn’t the only analyst firm who uses their crystal ball to make news. Forrester Research and IDC, the other two remaining large players in the IT industry analysis space, also feed their customers – as well as the rest of us – such predictions of the future.

Everybody knows, however, that predicting the future is never a sure thing. Proclamations such as the ones above boil down to matters of opinion – as the fine print on any Gartner report will claim. And yet, at some point in time, such claims will become verifiable matters of fact.

The burning question in my mind, therefore, is where are the analyses of past predictions? Just how polished are the crystal balls at the big analyst firms anyway? And are their predictions better than anyone else’s?

If all you hear are crickets in response to these questions, you’re not alone. Analyst firms rarely go back over past predictions and compare them to actual data. And we can all guess the reason: their predictions are little more than random shots in the dark. If they ever get close to actually getting something right, there’s no reason to believe such an eventuality is anything more than random luck.

Of course, anyone in the business of making predictions faces the same challenge, dating back to the Oracle of Delphi in ancient Greece. So what’s different now? The answer: Big Data.

You see, Gartner and the rest spend plenty of time talking about the predictive power of Big Data. Our predictive analysis tools are better than ever, and furthermore, the quantity of available data as well as our ability to analyze them are improving dramatically.

Furthermore, an established predictive analytics best practice is to measure the accuracy of your predictions and feed back that information in order to improve the predictive algorithms, thus iteratively polishing your crystal ball to a mirror-like sheen.

So ask yourself (and if you’re a client of one of the aforementioned firms, ask them) – why aren’t the big analyst shops analyzing their own past predictions, not only to let us know just how good they are at prognostication, but to improve their prediction methodologies? Time to eat your own dog food, Gartner!


Posted by Jason Bloomberg on October 1, 2014

In some ways, Oracle’s self-congratulatory San Francisco shindig known as OpenWorld is as gripping as any Shakespearean tragedy. For all the buzz today about transformation, agility, and change, it’s hard to get a straight story out of Oracle about what they want to change – if they really want to change at all.

First, there’s the odd shuffling of executives at the top, shifting company founder Larry Ellison into a dual role as Executive Chairman of the Board and CTO, a role that Ellison joked about: “I’m CTO now, I have to do my demos by myself. I used to have help, now it’s gone.” But on a more serious note, Oracle has been stressing that nothing will change at the big company.

Nothing will change? Why would you appoint new CEOs if you didn’t want anything to change? And isn’t the impact of the Cloud a disruptive force that is forcing Oracle to transform, like it or not? Perhaps they felt that claiming the exec shuffle was simply business as usual would calm down skittish shareholders and a skeptical Wall Street. But if I were betting money on Oracle stock, I’d be looking for them to change, not sticking their head in the sand and claiming that no change at all was preferable.

And what about their Cloud strategy, anyway? Ellison has been notoriously wishy-washy on the entire concept, but it’s clear that Cloud is perhaps Oracle’s biggest bet this year. However, “while those products are growing quickly, they remain a small fraction of the company's total business,” accounting for “just 5 percent of his company's revenue,” according to Reuters.

Thus Oracle finds itself in the same growth paradox that drove TIBCO out of the public stock market: only a small portion of the company is experiencing rapid growth, while the lion’s share is not. Of course, these slow-growth doldrums are par for the course for any established vendor; there’s nothing particularly unique about Oracle’s situation in that regard. But the fact still remains that Wall Street loves growth from tech vendors, and it doesn’t matter how fast Oracle grows its Cloud business, investors will still see a moribund incumbent.

The big questions facing Oracle moving forward, therefore, are how much of its traditional business to reinvent, and whether the Cloud will be the central platform for that reinvention. Unfortunately for Oracle and its shareholders, indications are that the company has no intention of entering a period of creative disruption.

As Ellison said back in 2008, “There are still mainframes. Mainframes were the first that were going to be destroyed. And watching mainframes being destroyed is like watching a glacier melt. Even with global warming, it is taking a long time.” Only now it’s 2014, and mainframes aren’t the question – Oracle’s core business is. Will Oracle still use the glacier metaphor? Given the accelerating rate of climate change, I wouldn’t bet on it.


Posted by Sandeep Chanda on September 29, 2014

Azure is increasingly becoming the scalable CMS platform with support for a host of popular CMS providers via the marketplace. The list already includes some of the big names in the CMS industry, like Umbraco, Kentico, Joomla, and DNN.

The most recent addition to this list is WordPress. It is very simple to create a WordPress website. Go to the Azure Preview Portal and click New to go to the Gallery. Select Web from the navigation pane and you will see Scalable WordPress listed as one of the options (along with other options such as Umbraco and Joomla).

Scalable WordPress uses Azure Storage by default to store site content. This automatically allows you to use Azure CDN for the media content that you want to use in your WordPress website.

Once you select Scalable WordPress, you will be redirected to the website configuration pane, where you can specify the name of the website, the database and the storage configuration settings. You are all set!

Log in to your WordPress site dashboard to configure plug-ins like Jetpack. Jetpack, formerly available with WordPress.com, is now also available with Scalable WordPress. Your WordPress CMS site hosted in Azure can now support millions of visits and scale on demand. The Azure WordPress CMS website supports auto-scale out of the box. You can also enable the backup and restore features available with Azure websites for your CMS site. It also supports publishing of content from stage to production.


Posted by Jason Bloomberg on September 25, 2014

A decade ago, back in the “SOA days,” we compared various Enterprise Service Bus (ESB) vendors and the products they were hawking. When the conversation came around to TIBCO, we liked to joke that they were the “Scientology of ESB vendors,” because their technology was so proprietary that techies essentially had to devote their life to TIBCO to be worthy of working with their products.

But joking aside, we also gave them credit where credit was due. Their core ESB product, Rendezvous, actually worked quite well. After all, NASDAQ, FedEx, and Delta Airlines ran the thing. TIBCO obviously had the whole scalability thing nailed – unlike competitors like SeeBeyond back in the day, who competed with TIBCO in the Enterprise Application Integration space (the precursor to ESBs).

Cut to 2014, and TIBCO’s fortunes are now in question, as the stock market has pummeled their stock price, and a leveraged buyout (LBO) is in the works, with deep pocketed firms hoping to take the company private.

Sometimes, going private can be a good thing for a company, as it gives them the money as well as the privacy they need to make bold, innovative changes before relaunching as a public company. But in other cases, LBOs are opportunities for the venture capitalist vultures to sell off the company in parts, squeezing every last penny out of the assets while shifting all the risk to employees, customers, and basically anybody but themselves.

Which path TIBCO will take is unclear, as the buyout itself isn’t even a sure thing at this point. But TIBCO’s downfall – noting that I’m sure no one at the company would call it that – has some important lessons for all of us, because TIBCO’s story isn’t simply about a dinosaur unable to adapt to a new environment.

Their story is not a simple Innovator’s Dilemma case study. In fact, they’ve moved solidly into Cloud, Big Data, and Social Technologies – three of the hot, growing areas that characterize the technology landscape for the 2010s. So what happened to them?

It could be argued that they simply executed poorly, essentially taking some wrong turns on the way to a Cloudified nirvana. Rolling out a special social media product only for rich and important people – a social network for the one percent – does indicate that they’re out of touch with most customers.

And then there’s the proprietary aspect to their technology that is still haunting them. Today’s techies would much rather work with modern languages and environments than have to go back to school to learn a particular vendor’s way of doing things.

Perhaps the problem is price. Their upstart competitors continue their downward pricing pressure, one of the economic patterns that the move to the Cloud has doubled down on. From the perspective of shareholders, however, TIBCO’s biggest problem has been growth. It’s very difficult for a large, established vendor to grow nearly as fast as smaller, more nimble players, especially when it still makes a lot of its money in saturated markets like messaging middleware.

Adding Cloud, Big Data, or Social Media products to the product list doesn’t change this fundamental math, even though those new products may themselves experience rapid growth, since the new product lines account for a relatively small portion of their overall revenue.

So, how is a company like TIBCO to compete with smaller, faster growing vendors? Here’s where LBO plan B comes in: break up the company. Sell off the established products like Rendezvous to larger middleware players like Oracle. I’m sure Oracle would be happy to have TIBCO’s middleware customers, and they have shown a reasonable ability to keep such customers generally happy over the years.

Any SeeBeyond customers out there? SeeBeyond was acquired by Sun Microsystems, who renamed the product Java CAPS. Then Oracle acquired Sun, rolling Java CAPS and BEA Systems’ middleware products into Oracle SOA Suite. No one would be that surprised if Rendezvous suffered a similar fate.

The owners of whatever is left of TIBCO would focus their efforts on growing smaller, newer products. The end result won’t be a TIBCO that we old-timers would recognize, but should they ever go public again, they’ll have a chance to be a market darling once more.


Posted by Jason Bloomberg on September 20, 2014

Want to make tech headlines without having to change anything – or in fact, do anything? If you’re Oracle, all you have to do (or not do, as the case may be) is shake up the top levels of management.

The news this week, as per the Oracle press release: the only CEO in Oracle’s history, Larry Ellison, is stepping down as CEO. Big news, right? After all, he’s 70 years old now, and he’s a fixture on the yachting circuit. Maybe it’s time for him to relax on his yacht and enjoy his billions in retirement while hand-picked successors Mark Hurd and Safra Catz take the reins as co-CEOs. (Apparently Ellison’s shoes are so big the only way to fill them is to put one new CEO in each.)

But look more closely and you’ll see that sipping Mai Tais on the Rising Sun isn’t Ellison’s plan at all. He’s planning to keep working full time as CTO and in his newly appointed role as Executive Chairman. The only difference here is the reporting structure: Hurd and Catz now report to the Board instead of directly to Ellison. “Safra and Mark will now report to the Oracle Board rather than to me,” Ellison purrs. “All the other reporting relationships will remain unchanged.”

Oh, and Ellison reports directly to the Board as well, as he has always done, rather than to either Hurd or Catz. And who does the Board report to? Ellison, of course, in his new role as Executive Chairman.

It’s important to note that Oracle never had an Executive Chairman before, only a Chairman (Jeff Henley, now demoted to Vice Chairman of the Board). So, what’s the difference between a Chairman and an Executive Chairman? According to Wikipedia, the Executive Chairman is “An office separate from that of CEO, where the titleholder wields influence over company operations.”

In other words, Ellison is now even more in charge than he was before. In his role as CEO, he reported to the Board, led by a (non-executive) Chairman. But now, he gets to run the board, as well as the technology wing of Oracle.

So, will anything really change at Oracle? Unlikely – at least not until Ellison finally kicks the bucket. It was always Ellison’s show, and now Ellison has further consolidated his iron grip on his baby. If you’re expecting change from Oracle – say, increased innovation for example – you’ll have to keep waiting.


Posted by Sandeep Chanda on September 15, 2014

NuGet has been a fairly popular mechanism for publishing and distributing packaged components to be consumed by Visual Studio projects and solutions. Releases from the Microsoft product teams are increasingly being distributed as NuGet packages, and NuGet is officially the package manager for the Microsoft development platform, including .NET.

NuGet.org is the central package repository used by authors and consumers for global open distribution. One limitation of the central repository is that, in large-scale enterprise teams, it often results in package version mismatches across teams, solutions, and projects. If not managed early, this spirals into a significant application versioning problem for release managers during deployment.

One approach to solving this problem is to use a local NuGet server that you provision for your enterprise. It mimics the central repository, but it remains under the control of your release managers, who can decide which package versions to release to your consumers. The idea is that your Visual Studio users point to the local NuGet server instead of the central repository, and the release management team controls which package versions the teams use, for consistency.

It is very easy to create a local NuGet server. You can use the nuget command-line tool to publish packages to it; you will need an API key and the host URL.
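
For illustration, here is a minimal sketch of pushing a package to an internal feed with nuget.exe. The package name, feed URL, and API key shown are hypothetical placeholders:

nuget push MyCompany.Logging.1.0.0.nupkg -Source http://nuget.internal.example.com/nuget -ApiKey YOUR-API-KEY

Once pushed, the package shows up in the internal feed just as it would on NuGet.org.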

Developers using Visual Studio can go to Tools  →  Options  →  NuGet Package Manager → Package Sources and add the internal package server as a source.
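
Alternatively, the same source can be registered in a NuGet.config file checked in alongside the solution, so the whole team resolves packages from the internal feed. A minimal sketch, with a hypothetical feed name and URL:

<configuration>
  <packageSources>
    <add key="InternalFeed" value="http://nuget.internal.example.com/nuget" />
  </packageSources>
</configuration>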

While local NuGet servers are used today as a mechanism for distributing internal packages, they can also be extended to become a gated process for distributing global packages to bring consistency in the versions used across teams.

