Posted by Jason Bloomberg on August 26, 2014

I attended Dataversity’s NoSQL Now! Conference last week, and among the many vendors I spoke with, one story caught my interest. This vendor (who alas must remain nameless) is a leader in the NoSQL database market, specializing in supporting XML as a native document type.

In their upcoming release, however, they’re adding JavaScript support – native JSON as well as Server-Side JavaScript as a language for writing procedures. And while the addition of JavaScript/JSON may be newsworthy in itself, the interesting story here is why they decided to add such support to their database.

True, JavaScript/JSON support is a core feature of competing databases like MongoDB. And yes, customers are asking for this capability. But they don’t want JavaScript support because they think it will solve any business problems better than the XML support the database already offers.

The real reason they’re adding JavaScript support is that developers are demanding it – because they want JSON on their resumes, and because JSON is cool, whereas XML isn’t. So the people actually responsible for buying database technology are asking for JSON support as a recruitment and retention tool.

Will adding JavaScript/JSON support make their database more adept at solving real business problems? Perhaps. But if developers will bolt when your database isn’t cool, then coolness suddenly becomes your business driver, for better or worse. One can only wonder: how many other software features are simply the result of the developer coolness factor, independent of any other value to the businesses footing the bill?


Posted by Sandeep Chanda on August 25, 2014

Enterprise monitoring needs over the years have largely been addressed by Microsoft System Center Operations Manager (SCOM). The problem, however, is that SCOM produces a lot of noise, and the data can very quickly become irrelevant for producing any actionable information. IT teams easily fall into the trap of configuring SCOM for every possible scheme of alerts, but do not put effective mechanisms in place to improve the alert-to-noise ratio by creating a usable knowledge base out of the alerts that SCOM generates. Splunk and its cloud avatar, Hunk, can be very useful in the following ways:

  1. Providing actionable analytics using the alert log in the form of self-service dashboards
  2. Isolation of vertical and horizontal monitoring needs
  3. Generating context around alerts or a group of alerts
  4. Collaboration between IT administrators and business analysts
  5. Creating a consistent alerting scale for participating systems
  6. Providing a governance model for iteratively fine-tuning the system

In your enterprise, Splunk could be positioned in a layer above SCOM, where it gets the alert log as input for processing and analysis. This pair can be used to address the following enterprise monitoring needs of an organization:

  1. Global Service Monitoring - Provides information on the overall health of the infrastructure, including actionable information on disk and CPU usage. It could also be extended to cover network performance and the impact specific software applications have on the health of the system. Splunk augments SCOM by creating dashboards from the collected data that can support decision making. For example, looking at CPU usage trends on a timeline, IT owners can decide whether to increase or decrease the core fabric.
  2. Application Performance Monitoring - Splunk can be extremely useful in deriving business decisions from the instrumentation you do in code and the trace log it generates. You can identify the purchase patterns of your customers, for example. The application logs and alerts generated by custom applications and commercial off-the-shelf (COTS) software can be routed to Splunk via SCOM using the management packs. Splunk can then help you create management dashboards that in turn will help the executive team decide the future course of business.

Using Splunk in conjunction with SCOM provides you with a very robust enterprise monitoring infrastructure. That said, the true benefit of this stack can be realized only with an appropriate architecture for alert design, process guidance on thresholds, and identification of key performance indicators to improve the signal-to-noise ratio.


Posted by Jason Bloomberg on August 21, 2014

In my latest Cortex newsletter I referred to “tone deaf” corporations that have flexible technology like corporate social media in place, but lack the organizational flexibility to use it properly. The result is a negative customer experience that defeats the entire purpose of interacting with customers.

Not all large corporations are tone deaf, however. So instead of finding an egregious example of tone deafness and lambasting it, I actually found an example of a corporation that uses social media in an exemplary way. Let’s see what Delta Air Lines is doing right.

The screenshot above is from the Delta Facebook page. Delta regularly posts promotional and PR pieces to the page, and in this case, they are telling the story of a long-time employee. Giving a human face to the company is a good practice to be sure, but doesn’t leverage the social aspect of Facebook – how Delta handles the comments does.

As often happens, a disgruntled customer decided to post a grievance. Delta could have answered with a formulaic response (tone deaf) or chosen not to respond at all (even more tone deaf). But instead, a real person responded with an on-point apology. Furthermore, this real person signed the response with her name (I’ll assume Alex is female for the sake of simplicity) – so even though she is posting under the Delta corporate account, the customer, as well as everybody else viewing the interchange, knows a human being at Delta is responding.

If Alex’s response ended at a simple apology, however, such a response would still be tone deaf, because it wouldn’t have addressed the problem. But in this case, she also provided a link to the complaints page and actually recommended to the customer that she file a formal complaint. In other words, Delta uses social media to empower its customers – the one who complained, and of course, everyone else who happens to see the link.

It could be argued that Alex was simply handing off the customer to someone else, thus passing the buck. In this case, however, I believe the response was the best that could be expected, as the details of the customer’s complaint aren’t salient for a public forum like social media. Naturally, the complaints Web site might drop the ball, but as far as Delta’s handling of social media, they have shown a mastery of the medium.

So, who is Alex? Is she in customer service or public relations? The answer, of course, is both – which shows a customer-facing organizational strategy at Delta that many other companies struggle with. Where is your customer service? Likely in a call center, which you may have even outsourced. Where is your PR? Likely out of your marketing department, or yes, even outsourced to a PR firm.

How do these respective teams interact with customers? The call center rep follows a script, and if a problem deviates, the rep has to escalate to a manager. Any communications from the PR firm go through several approvals within the firm and at the client before they hit the wire. In other words, the power rests centrally with corporate management.

However, not only does a social media response team like Alex’s bring together customer service and PR, but whatever script she follows can only be a loose guideline, or responses would sound formulaic, and hence tone deaf. Instead, Delta has empowered Alex and her colleagues to take charge of the customer interaction, and in turn, Alex empowers customers to take control of their interactions with Delta.

The secret to corporate social media success? Empowerment. Trust the people on the front lines to interact with customers, and trust the customer as well. Loosen the ties to management. Social media are social, not hierarchical. After all, Digital Transformation is always about transforming people.


Posted by Sandeep Chanda on August 14, 2014

In Visual Studio 2013, the team unified the performance and diagnostics experience (memory profiling, etc.) under one umbrella and named it the Performance and Diagnostics Hub. Available under the Debug menu, this option reduces a lot of clutter when profiling client- and server-side code during a debug operation. There was a lot of visual noise in the IDE in the 2012 version, and the hub is a significant addition for improving developer productivity.

In the Performance and Diagnostics Hub, you may select the target, and specify the performance tools with which you want to run diagnostics. There are various tools that you can use to start capturing performance metrics like CPU Usage and Memory Allocation. You can collect CPU utilization metrics on a Windows Forms-based or WPF application.

The latest release of Update 3 brings with it some key enhancements to the CPU and memory usage tools. In the CPU usage tool, you can now right-click on a function name that was captured as part of the diagnostics and click View Source. This will allow you to easily navigate to the code that is consuming CPU in your application. The memory usage tool now allows you to capture memory usage for Win32 and WPF applications.

The hub will also allow you to identify hot paths in the application code that might be consuming more CPU cycles and may need refactoring.

You can also look for the functions that are doing the most work, as illustrated in the figure below.

Overall, the Performance and Diagnostics Hub has become a useful addition to the developer’s arsenal, improving productivity and helping address the non-functional aspects of an application’s scope.


Posted by Jason Bloomberg on August 12, 2014

Two stories on the Internet of Things (IoT) caught my eye this week. First, IDC’s prediction that the IoT market will balloon from US$1.9 trillion in 2013 to $7.1 trillion in 2020. Second, the fact that it took hackers 15 seconds to hack the Google Nest thermostat – the device Google wants to make the center of the IoT for the home.

These two stories aren’t atypical, either. Gartner has similarly overblown market growth predictions, although they do admit a measure of overhypedness in the IoT market (ya think?). And as far as whether Nest is an unusual instance, unfortunately, the IoT is rife with security problems.

What are we to make of these opposite, potentially contradictory trends? Here are some possibilities:

We simply don’t care that the IoT is insecure. We really don’t mind that everyone from Russian organized criminals to the script kiddie down the block can hack the IoT. We want it anyway. The benefits outweigh any drawbacks.

Vendors will sufficiently address the IoT’s security issues, so by 2020, we’ll all be able to live in a reasonably hacker-free (and government spying-free) world of connected things. After all, vendors have done such a splendid job making sure our everyday computers are hack and spy-free so far, right?

Perhaps one or both of the above possibilities will take place, but I’m skeptical. Why, then, all the big numbers? Perhaps it’s the analysts themselves? Here are two more possibilities:

Vendors pay analysts (directly or indirectly) to make overblown market size predictions, because such predictions convince customers, investors, and shareholders to open their wallets. Never mind the hacker behind the curtain, we’re the great and terrible Wizard of IoT!

Analysts simply ignore factors like the public perception of security when making their predictions. Analysts make their market predictions by asking vendors what their revenues were over the last few years, putting the numbers into a spreadsheet, and dragging the cells to the right. Voila! Market predictions. Only there’s no room in the spreadsheet for adverse influences like security perception issues.

Maybe the analysts are the problem. Or just as likely, I got out on the wrong side of bed this morning. Be that as it may, here’s a contrarian prediction for you:

Both consumers and executives will get fed up with the inability of vendors to secure their gear, and the IoT will wither on the vine.

The wheel is spinning, folks. Which will it be? Time to place your bets!



Posted by Jason Bloomberg on August 8, 2014

One of the most fascinating aspects of the Agile Architecture drum I’ve been beating for the last few years is how multifaceted the topic is. Sometimes the focus is on Enterprise Architecture. Other times I’m talking about APIs and Services. And then there is the data angle, as well as the difficult challenge of semantic interoperability. And finally, there’s the Digital Transformation angle, driven by marketing departments who want to tie mobile and social to the Web but struggle with the deeper technology issues.

As it happens, I’ll be presenting on each of these topics over the next few weeks. First up, a Webinar on Agile Architecture Challenges & Best Practices I’m running jointly with EITA Global on Tuesday August 19 at 10:00 PDT/1:00 EDT. I’ll provide a good amount of depth on Agile Architecture – both architecture for Agile development projects as well as architecture for achieving greater business agility. This Webinar lasts a full ninety minutes, and covers the central topics in Bloomberg Agile Architecture™. If you’re interested in my Bloomberg Agile Architecture Certification course, but don’t have the time or budget for a three-day course (or you simply don’t want to wait for the November launch), then this Webinar is for you.

Next up: my talk at the Dataversity Semantic Technology & Business Conference in San Jose, CA, which is co-located with their NoSQL Now! Conference, August 19 – 21. My talk is on Dynamic Coupling: The Pot of Gold under the Semantic Rainbow, and I’ll be speaking at 3:00 on Thursday August 21st. I’ll be doing a deep dive into the challenges of semantic integration at the API level, and how Agile Architectural approaches can resolve such challenges. If you’re in the Bay Area the week of August 18th and you’d like to get together, please drop me a line.

If you’re interested in lighter, more business-focused fare, come see me at The Innovation Enterprise’s Digital Strategy Innovation Summit in San Francisco CA September 25 – 26. I’ll be speaking the morning of Thursday September 25th on the topic Why Enterprise Digital Strategies Must Drive IT Modernization. Yes, I know – even for this marketing-centric Digital crowd, I’m still talking about IT, but you’ll get to see me talk about it from the business perspective: no deep dives into dynamic APIs or Agile development practices, promise! I’ll also be moderating a panel on Factoring Disruptive Tech into Business with top executives from Disney, Sabre, Sephora, and more.

I’m particularly excited about the Digital Strategy Innovation Summit because it’s a new crowd for me. I’ve always tried to place technology into the business context, but so far most of my audience has been technical. Hope you can make it to at least one of these events, if only to see my Digital Transformation debut!


Posted by Sandeep Chanda on August 5, 2014

Microsoft Azure Service Bus Event Hubs provide a topic-based publish/subscribe messaging platform that allows for high-throughput and low-latency message processing. A preview version was recently released by the Microsoft Azure team.

Event Hubs is a component of Service Bus, and works alongside Service Bus topics and queues. Event Hubs provide a perfect platform for collecting event streams from multiple devices and sending them to an analytics engine for processing. This makes them ideal for an Internet of Things (IoT) scenario, where you can capture events from various connected devices and make meaningful decisions based on the ingested event stream.

You can also make use of analytics on the ingress to perform tenant billing and performance monitoring, among many other possibilities. Event Hubs not only provide a reliable message processing platform, but also support durability for a predefined retention period, allowing consumers to reconnect in case of a failure.

Getting Started

An Event Hub is part of a Service Bus namespace, and typically consists of a Publisher Policy, Consumer Groups, and Partition Keys. A publisher is a logical concept for publishing a message into an Event Hub, and a consumer is a logical concept for receiving messages. Partitions allow for scaling Event Hubs, and subscribers connect to a partition. Events within a partition are also ordered for delivery.

Currently, the supported protocols for pub-sub are HTTP and AMQP. Note that for receiving data, only AMQP is supported.

The Azure Service Bus NuGet package provides the EventProcessorHost and EventHubClient APIs to process messages and send messages to the hub, respectively. To start a host that can listen for incoming messages, you can create a new instance of the EventProcessorHost as shown below:

var host = new EventProcessorHost(
    hostName,                          // unique name for this host instance
    eventHubName,                      // path of the Event Hub to read from
    consumerGroupName,                 // consumer group to receive on
    eventHubConnectionString,          // Service Bus connection string
    storageConnectionString,           // storage account used for leases and checkpoints
    eventHubName.ToLowerInvariant());  // lease container name, lowercased to avoid case conflicts

Note that it is a good practice to share the hub name in lowercase to avoid any case conflicts on names that the subscribers may present. You need to provide the connection string for the Event Hub on the Service Bus namespace, the storage connection string for the queue, the name of the consumer group, and a host name. You can then implement the IEventProcessorFactory interface to provide a factory for processing the incoming messages. The host instance can then register the factory to listen for ingress using the RegisterEventProcessorFactoryAsync method. Similarly, from the client, you can create an instance of the Event Hub client using the EventHubClient.CreateFromConnectionString method, and then start sending messages using the SendAsync method that the client exposes.
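To make the flow concrete, here is a minimal sketch of a processor and a sender, assuming the Microsoft.ServiceBus.Messaging namespace from the Service Bus NuGet package. The connection strings, hub name, and payload are placeholders, and for brevity it registers the processor with the simpler RegisterEventProcessorAsync<T> method rather than a custom IEventProcessorFactory:

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

// Minimal processor; the host creates one instance per partition.
class SimpleEventProcessor : IEventProcessor
{
    public Task OpenAsync(PartitionContext context)
    {
        Console.WriteLine("Partition opened: " + context.Lease.PartitionId);
        return Task.FromResult<object>(null);
    }

    public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            Console.WriteLine("Received: " + Encoding.UTF8.GetString(eventData.GetBytes()));
        }
        // Checkpoint so a restarted host resumes from here rather than replaying.
        await context.CheckpointAsync();
    }

    public Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        return Task.FromResult<object>(null);
    }
}

class Program
{
    static void Main()
    {
        // All names and connection strings below are placeholders.
        var host = new EventProcessorHost(
            "host-1", "[event hub name]", EventHubConsumerGroup.DefaultGroupName,
            "[event hub connection string]", "[storage connection string]");
        host.RegisterEventProcessorAsync<SimpleEventProcessor>().Wait();

        // Send a test event from the client side.
        var client = EventHubClient.CreateFromConnectionString(
            "[event hub connection string]", "[event hub name]");
        client.SendAsync(new EventData(Encoding.UTF8.GetBytes("{ \"deviceId\": 42 }"))).Wait();

        Console.WriteLine("Receiving. Press Enter to stop.");
        Console.ReadLine();
        host.UnregisterEventProcessorAsync().Wait();
    }
}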


Posted by Jason Bloomberg on August 1, 2014

What’s wrong with this scenario? Bob, your VP of Engineering, brings a ScrumMaster, a Java developer, a UX (user experience) specialist, and a Linux admin into his office. “We need to build this widget app,” he says, describing what a product manager told him she wanted. “So go ahead and self-organize.”

Bob’s intentions are good, right? After all, Agile teams are supposed to be self-organizing. Instead of giving the team specific directions, he laid out the general goal and then asked the team to organize themselves in order to achieve the goal. What could be more Agile than that?

Do you see the problem yet? Let’s shed a bit more light by snooping on the next meeting.

The four techies move to a conference room. The ScrumMaster says, “I’m here to make sure you have what you need, and to mentor you as needed. But you three have to self-organize.”

The other three look at each other. “Uh, I guess I’ll be the Java developer,” the Java developer says.

“I’ll be responsible for the user interface,” the UX person says.

“I guess I’ll be responsible for ops,” the admin volunteers.

Excellent! The team is now self-organized!

What’s wrong with this picture, of course, is that given the size of the team, the constraints of the self-organization were so narrow that there was really no organization to be done, self or not. And while this situation is an overly simplistic example, virtually all self-organizing teams, especially in the enterprise context, have so many explicit and implicit constraints placed upon them that their ability to self-organize is quite limited. As a result, the benefits the overall application creation effort can ever expect to get from such self-organization are paltry at best.

In fact, the behavior of self-organizing teams as well as their efficacy depend upon their goals and constraints. If a team has the wrong goals (or none at all) then self-organization won’t yield the desired benefits. Compare, for example, the hacker group Anonymous on the one hand with self-organizing groups like the Underground Railroad or the French Resistance in World War II on the other. Anonymous is self-organizing to be sure, but has no goals imposed externally. Instead, each individual or self-organized group within Anonymous decides on its own goals. The end result is both chaotic and unpredictable, and clearly makes a poor example for self-organization for teams within the enterprise.

In contrast, the Underground Railroad and the French Resistance had clear goals. What drove each effort to self-organize in the manner they did were their respective explicit constraints: get caught and you get thrown in jail or executed. Such drastically negative constraints led in both cases to the formation of semi-autonomous cells with limited inter-cell communication, so that the compromise of one cell wouldn’t lead to the compromise of others.

In the case of self-organizing application creation teams, goals should be appropriately high-level. “Code us a 10,000-line Java app” is clearly too low-level, while “improve our corporate bottom line” is probably too high-level. That being said, expressing the business goals (in terms of customer expectations as well as the bottom line) will lead to more effective self-organization than technical goals, since deciding on the specific technical goals should be a result of the self-organization (generally speaking).

The constraints on self-organizing teams are at least as important as the goals. While execution by firing squad is unlikely, there are always explicit constraints, for example, security, availability, and compliance requirements. Implicit constraints, however, are where most of the problems arise.

In the example at the beginning of this article, there was an implicit constraint that the team had precisely four members as listed. In real-world situations teams tend to be larger than this, of course, but if management assigns people to a team and then expects them to self-organize, there’s only so much organizing they can do given the implicit management-imposed constraint of team membership.

Motivation also introduces a messy set of implicit constraints. In enterprises, potential team members are generally on salary, and thus their pay doesn’t motivate them one way or another to work hard on a particular project. Instead, enterprises have HR processes for determining how well each individual is doing, and for making decisions on raises, reassignments, or firing – mostly independent from performance on specific projects. Such HR processes are implicit constraints that impact individuals’ motivation on self-organizing teams – what Adrian Cockcroft calls scar tissue.

A Hypothetical Model for True Self-Organization on Enterprise Application Creation Teams

What would an environment look like if the implicit constraints that result from traditionally run organizations, including management hierarchies and HR policies and procedures, were magically swept away? I’m still placing this discussion in the enterprise context, so business-driven project goals (goals that focus on customers/users and revenues/costs) as well as external, explicit constraints like security and governmental regulations remain. Within those parameters, here’s how it might work.

The organization has a large pool of professionals with a diversity of skills and seniority levels. When a business executive identifies a business need for an application, they enter it into an internal digital marketplace, specifying the business goals and the explicit constraints: how much the business can expect to pay for the successful completion of the project given the benefits it will deliver, and the role the executive as project stakeholder (and any other stakeholders) are willing and able to play on the project team. The financial constraint may appear as a fixed price budget or a contingent budget (with a specified list of contingencies).

Members of the professional pool can review all such projects and decide if they might want to participate. If so, they put themselves on the list for the project. Members can also review who has already added themselves to the list and have any discussions they like among that group of people, or other individuals in the pool they might want to reach out to. Based upon those discussions, any group of people can decide they want to take on the project based upon the financial constraints specified, or alternately, propose alternate financial arrangements to the stakeholders. Once the stakeholders and the team come to an agreement, the team gives their commitment to completing the project within the constraints specified. (Of course, if there are no takers, the stakeholder can increase the budget, or perhaps some kind of automated arbitrage like a reverse auction sets the prices.)

The team then organizes themselves however they see fit, and executes on the project in whatever manner they deem appropriate. They work with stakeholders as needed, and the team (including the stakeholders) always has the ability to adjust or renegotiate the terms of the agreement if the team deems it necessary. The team also decides how to divide up the money allotted to the project – how much to pay for tools, how much to pay for the operational environment, and how much to pay themselves.

Do your application creation teams self-organize to this extent? Probably not, as this example is clearly at an extreme. In the real world, the level of self-organization for a given team is a continuous spectrum, ranging from none (all organization is imposed by management) to the extreme example above. Most organizations fall in the middle, as they must work within hierarchical organizations and they don’t have the luxury (or the burden) of basing their own pay on market dynamics. But don’t fool yourself: simply telling a team to self-organize does not mean they have the ability to do so, given the goals and constraints that form the reality of the application creation process at most organizations.


Posted by Sandeep Chanda on July 31, 2014

In the previous post you learned how to set up an Express Node.js application in Microsoft Azure and make it a unit of continuous deployment using Git. An Express Node.js application standing alone, without a data store to back it, is not very useful. In this post you will explore setting up a MongoDB database using the Microsoft Azure marketplace, which can then act as a repository for your Express Node.js web application to store large-scale unstructured data. Hosted in Azure, it is limited only by the ability of the platform to scale, which is virtually infinite.

Getting Started

The first thing you need to do is subscribe to the MongoLab service from the Microsoft Azure store. MongoLab is a fully hosted MongoDB cloud database that is available with all the major cloud providers, including Azure.

To add MongoLab service to your subscription, click New in your management portal, and select the Store (preview) option.

Note that, depending on your subscription, the store may or may not be available to you. Reach out to Azure support if you need more details.

Find MongoLab under the App Services category in the store and select it to add it to your subscription.

500 MB is free to use. Enter your subscription details in the form that is presented, and then click Purchase to finish adding the service to your subscription. You can now use the Mongoose Node.js driver to connect to the MongoLab service database and start storing your model data.

Installation

To install the Mongoose driver, run the following command in your console:

npm install mongoose --save

You are now all set to connect to the MongoDB database hosted in Azure. Get the connection string for the hosted instance and then use it in your Express Node.js application controller code:

var mongoose = require('mongoose');
mongoose.connect('[connection string]');

You can use the model function to associate a model with a Mongoose schema and then perform your operations on the model data.
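As a rough sketch, assuming a hypothetical Task model (the schema and field names are invented for illustration):

var mongoose = require('mongoose');
mongoose.connect('[connection string]');

// Define a schema and associate it with a model (the Task schema is hypothetical).
var taskSchema = new mongoose.Schema({
  title: String,
  done: Boolean
});
var Task = mongoose.model('Task', taskSchema);

// Save a document to the MongoLab-hosted database.
var task = new Task({ title: 'Try MongoLab on Azure', done: false });
task.save(function (err) {
  if (err) {
    console.error(err);
  }
});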


Posted by Sandeep Chanda on July 28, 2014

Express is a powerful, yet lightweight and flexible web application framework for Node.js. In this post we will explore how you can create and deploy an express application in Microsoft Azure.

Prerequisites

First and foremost you need Node.js. Once you have installed Node.js, use the command prompt to install Express.

npm install express

You can also use the -g switch to install Express globally rather than to a specific directory.

In addition, you will need to create a web site in Microsoft Azure that will host the application. If you have the Azure SDK for Node.js then you would already have the command line tools, but if not, use the following command to install the Azure command line tool:

npm install azure-cli

Create an Express App

Once Express is installed, use the command prompt to create the Express scaffolding using the express command. This will install the scaffolding templates for views, controllers, and other relevant resources, with Jade and Stylus support:

express --css stylus [Your App Name] 

Next, run the install command to install the dependencies:

npm install

This command will install the additional dependencies that are required by Express. The express command creates a package.json file that will be used by the Azure command tool to deploy the application and the dependencies in Azure.
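For reference, a freshly scaffolded package.json looks roughly like the following. The exact contents and version numbers vary by Express version, so treat these values as illustrative only:

{
  "name": "your-app-name",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "3.x",
    "jade": "*",
    "stylus": "*"
  }
}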

The express command creates a folder structure for views, controllers and models to which you can add your own. To modify the default view, you can edit the index.jade file under the views folder and add your own mark-up code. The app.js file under the application folder will contain an instance of express:

var express = require('express');
var app = express();

You can now use HTTP verbs to start defining routes.
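For example, a minimal route might look like the sketch below. The path, response text, and fallback port are placeholders; Azure supplies the real port through an environment variable:

var express = require('express');
var app = express();

// Handle GET requests to the root path.
app.get('/', function (req, res) {
  res.send('Hello from Express on Azure!');
});

// Azure supplies the port via process.env.PORT; 3000 is a local fallback.
app.listen(process.env.PORT || 3000);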

Deploy an Express App

In order to deploy the Express app in Azure, first install the Azure command line tools for Node.js if you don’t have the SDK installed. The next thing you need to do is get the publish settings from Azure and import them into your Node.js application using the following commands:

azure account download
azure account import <publish settings file path>

Next, you need to create a web site in Azure, and also create a local git repository inside your application folder.

azure site create [your site name] --git

You can now commit your files to your local git repository and then push to Azure for deployment using the following command:

git push azure master

You are now all set. The Express Node.js application is deployed in Azure.

