Posted by Jason Bloomberg
on April 18, 2014
Years ago when I first learned about declarative programming, it seemed like magic. Separate your logic from your control flow, so that you can simply describe the behavior you want, and your software will magically know how to render it! But of course, there’s nothing magic about declarative languages like SQL or HTML, to name two of the most familiar. With those languages, your database engine or browser, respectively, takes your declarative code and turns it into instructions a computer can understand.
The power of declarative programming isn’t that we magically don’t have to code programmatically. Rather, the programmatic code we write is generalized. If you’re coding a SQL processor, the same tool can process any valid SQL. Similarly, a single browser can render any valid HTML. To change a Web page, change the HTML – but the code inside the browser remains the same.
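The "generalized engine" idea can be sketched in a few lines of JavaScript. This is a toy renderer, not any real browser or SQL engine: the point is that the engine function below never changes, no matter what declarative description you feed it.

```javascript
// A toy declarative "page" description: pure data, no instructions.
const page = {
  title: "Home",
  links: [{ text: "About", href: "/about" }]
};

// The generalized engine: it can render ANY description of this shape.
// Change the page object all you like -- this function stays the same.
function render(desc) {
  const links = desc.links.map(l => `<a href="${l.href}">${l.text}</a>`);
  return `<h1>${desc.title}</h1>\n${links.join("\n")}`;
}

console.log(render(page));
```

Swap in a different page object and render it with the same engine; that separation is the whole trick.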
Hypermedia applications – the type of applications the REST architectural style is intended for – are also supposed to be built declaratively. Take as an example the simplest of hypermedia applications: a static Web site. The application consists of a set of representations conforming to standard Internet Media Types (HTML files, JPEG images, and the like), interconnected via hyperlinks. The user navigates the application by following the hyperlinks, and the current location of the user in the application – in other words, the application state – is managed by considering the Web site to be a straightforward finite state machine, where the pages are the allowed states and the hyperlinks specify the allowed transitions. In other words, hypermedia are the engine of application state – the RESTful architectural constraint we call HATEOAS.
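That finite state machine can be modeled directly. Here is a toy JavaScript model (the page names and links are invented for illustration): pages are the states, hyperlinks are the allowed transitions, and the application state is simply where the user is now.

```javascript
// Pages are states; the hyperlinks on each page are the allowed transitions.
const site = {
  "/":                ["/about", "/products"],
  "/about":           ["/"],
  "/products":        ["/", "/products/widget"],
  "/products/widget": ["/products"]
};

// Application state is nothing more than "where the user is now".
let state = "/";

// Following a hyperlink is a state transition; anything else is rejected.
function follow(link) {
  if (!site[state].includes(link)) {
    throw new Error(`No link from ${state} to ${link}`);
  }
  state = link;
  return state;
}

follow("/products");
follow("/products/widget");
```

No page "knows" the whole map; each state only advertises its own outbound transitions, which is exactly the HATEOAS idea.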
The programmatic code that makes this hypermedia magic work lies in the underlying HTTP infrastructure that knows how to resolve URIs, resources that know how to process requests via a uniform interface, and clients that know how to render representations that conform to standard Internet Media Types. These three capabilities (URI resolution, request processing, and Media Type rendering) are generalized – it doesn’t matter what URI, request, or representation you care to deal with as long as they conform to the constraints of the architecture.
So far, everything’s hunky-dory, as long as the HATEOAS constraint is handled manually – that is, a person changes the state of their application by clicking hyperlinks. Where so many developers run into trouble with HATEOAS, and by extension all of REST, is when they try to automate HATEOAS.
It’s no wonder so many coders want to automate HATEOAS. Automating HATEOAS, after all, is how to get the full power out of REST. Free yourself from a simple browser as the client, and instead code an arbitrary piece of software to serve as the RESTful client (which may not even have a user interface) – a piece of software that knows how to follow hyperlinks to gather all the metadata it needs to understand how to behave.
Easier said than done, because standard Internet Media Types are generally insufficient to instruct our arbitrary clients how to behave. We typically want to exchange arbitrary data, where the Media Type doesn’t tell the client enough about how to process it. REST’s answer? Custom Media Types, which take a standard Media Type like XML or JSON and add a schema or other metadata specific to the situation. Now the developer must code the client to understand those Custom Media Types – as well as how to make the appropriate requests for representations that conform to them. Finally, developers find themselves coding the resources specifically to deliver those representations.
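The coupling problem shows up the moment you write the dispatch code. A sketch (the media type names here are invented): the client needs a dedicated handler for every custom type it was taught, and an unknown type is a dead end.

```javascript
// Handlers keyed by (custom) media type -- one per type the client
// was explicitly taught ahead of time.
const handlers = {
  "application/vnd.example.order+json":   rep => `order ${JSON.parse(rep).id}`,
  "application/vnd.example.invoice+json": rep => `invoice ${JSON.parse(rep).id}`
};

function process(mediaType, representation) {
  const handler = handlers[mediaType];
  if (!handler) throw new Error(`Client was never taught ${mediaType}`);
  return handler(representation);
}

process("application/vnd.example.order+json", '{"id": 42}');  // handled
// process("application/vnd.example.refund+json", "{}")       // would throw
```

Every new representation format means new client code: exactly the tight coupling the next paragraph complains about.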
Let’s review. Automating HATEOAS means custom-coding resources that return custom-formatted representations to custom-coded clients. Hopefully you see what’s wrong with this picture! We’ve given up entirely on the benefits of declarative programming, instead falling into the trap of building tightly coupled, inflexible, spaghetti code. There’s got to be a better way.
The correct approach to automating HATEOAS is to maintain the generalization of the components of the architecture, just as we did with manual HATEOAS. We must generalize the code in our resources. We must generalize the Media Type processing code. And we must generalize the internal workings of our RESTful client. We don’t want our client to know ahead of time what behavior it is supposed to exhibit. Instead, all of its behavior should be learned by referencing hyperlinks. Furthermore, we shouldn’t have to teach our client ahead of time how to deal with the representations it receives. It must be smart enough to follow hyperlinks to gather whatever information it needs to process any representation.
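Here is what such a generalized client looks like in caricature, with in-memory "resources" standing in for real HTTP and all names invented: the client hardcodes no URIs and no workflow, only the entry point and the ability to follow whatever links the current representation offers.

```javascript
// In-memory stand-in for a hypermedia API: every representation
// carries its own links (rel -> URI). Only the entry URI is known.
const resources = {
  "/":         { links: { orders: "/orders" } },
  "/orders":   { links: { latest: "/orders/7" } },
  "/orders/7": { id: 7, links: {} }
};

// The generalized client: given an entry URI and link relations to
// follow, it discovers each next step at runtime from the links.
function navigate(entry, rels) {
  let rep = resources[entry];
  for (const rel of rels) {
    const next = rep.links[rel];
    if (!next) throw new Error(`No '${rel}' link in this representation`);
    rep = resources[next];
  }
  return rep;
}

const order = navigate("/", ["orders", "latest"]);
```

Rearrange the server's URIs and the client keeps working untouched, because it never knew them in the first place.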
I’m not saying it’s easy. But once you’ve written such a client, it will serve to implement any application you care to specify declaratively. Drop me a line if you’d like to see an example.
Posted by Sandeep Chanda
on April 17, 2014
There is some good news for Visual Studio enthusiasts looking to develop web applications using Node.js. The Visual Studio team (with help from the community contributors) recently released the support for Node.js in Visual Studio 2013. While this is still in beta, and you may face issues while developing apps, it is definitely worthwhile to explore the features now and provide feedback to the team.
You can download the Node.js Tools for Visual Studio (NTVS) from CodePlex here. Follow the steps in the installation wizard to set up NTVS.
The predefined templates help you create a New Node.js Web Application, a New Azure Website built using Node.js, and a Worker Role with support in Node.js for creating long running processes using Node.
Note that Node.js is already supported by Azure Mobile Services and you can directly run Node.js scripts from Azure Mobile Services by configuring them in the Azure portal.
You can also create a project from existing Node.js code (which is likely to be the case if you were already developing on Node.js).
Select the "From Existing Node.js code" template. The dialog will launch a wizard to let you select the folder where your Node.js project is placed. It will detect the Node.js start-up file if it finds one, as in the screenshot shown below:
(It is a good idea to select "Exclude node modules", since they are unlikely to contain your start-up app.)
You are all set, but when you build your project, it is most likely to fail, since the node modules will not be present or correctly referenced. Right-click the project and use the "Open Command Prompt Here" command to launch the command prompt, then run "npm install" to install the node modules.
The NTVS tools also provide a nice option to manage and install global node modules. In your solution structure, expand npm and click Manage Node Modules from the Global Modules context menu. The module manager dialog will be launched where you can search and install global node modules.
Posted by Jason Bloomberg
on April 8, 2014
Now that I am Chief Evangelist at EnterpriseWeb, people occasionally ask me what a Chief Evangelist does. My answer is that I provide thought leadership and marketing. To which my ever-curious audience predictably asks what the difference is between the two.
Thought leadership and marketing are in fact two different tasks with different (although related) goals. Marketing centers on communicating the value proposition and differentiator – what problems we solve, why you should buy what we’re selling, and why you shouldn’t (or can’t) buy it from anyone else.
But thought leadership has a different goal. The goal of thought leadership is to paint the picture of the world of the future, a world our technology enables. Technology is changing and business is changing, and how technology-empowered business will look five, ten, or twenty years out is a complicated, imperceptible mess. Thought leadership helps people clear up the confusion so they can gradually understand how all the pieces fit together.
Marketing is about today and perhaps a few months into the future – what can we do for customers this year. Thought leadership connects today to tomorrow. It’s not about getting all the details right. It’s about painting the big picture. Thought leadership gives us the opportunity to place our technology into the broader context.
Thought leadership involves telling a story, one chapter at a time. Take the reader on a journey, filling in the missing pieces to the big picture over time. The story will naturally improve over time, and that’s OK – since no one really cares about what the story was in years past. It’s assembling the big picture of the future, piece by piece. Each piece has to stand on its own, but how they all fit together is the real lesson.
Posted by Sandeep Chanda
on April 7, 2014
The world of HTML5 hybrid app development frameworks just got hotter with the beta release of Ionic Actinium, the Ionic Framework dubbed "Bootstrap for Native", from Drifty, the team already popular for tools such as Jetstrap and Codiqa.
HTML5 is the platform of choice for desktop web applications and mobile websites and is gaining immense popularity for building hybrid and/or native self-contained mobile apps. Using HTML5 helps reduce the steep learning curve involved in developing native apps and in turn reduces time to market.
In this post, we will explore how you can set up Ionic on a Windows machine and then start building Android apps using the framework.
Following are the prerequisites:
- You must have the JDK and the Android SDK installed. JDK 7 is typically expected, but it worked with JDK 6 on my machine. On the Android side, you need the latest version, currently 22.6.2 for the SDK tools and 19.0.1 for the platform-tools. You must also have a device configuration (AVD) for the latest API level, as illustrated in the figure below:
- Download Apache Ant from here, and note the path of the folder extracted from the zip. It should be something like C:\apache-ant-1.9.2-bin.
- Install Node.js from here.
- Configure the PATH variable in your system environment variables to include the path for the JDK, the Android SDK tools and platform-tools, and the Ant bin folder that you extracted earlier. You should create individual variables for each and then use the %<Variable_name>% notation to specify the paths in the PATH variable.
The following values illustrate the setup:
- ANDROID_HOME : C:\Users\sandeep.chanda\AppData\Local\Android\android-sdk\tools
- ANDROID_PLATFORM: C:\Users\sandeep.chanda\AppData\Local\Android\android-sdk\platform-tools
- JAVA_HOME: C:\Program Files (x86)\Java\jdk1.6.0_39
- ANT_HOME: C:\Software\apache-ant-1.9.2-bin\apache-ant-1.9.2\bin
- PATH: %ANDROID_HOME%; %JAVA_HOME%... etc.
- Download Console 2, and extract the package into a folder.
You are now all set to configure Ionic and start building the apps.
Open an instance of Console 2 and execute the following commands:
- First, install Cordova if it is not already installed. Run the following command:
- npm install -g cordova
- There is a command-line utility for Ionic to build and package Ionic apps using Gulp. Run the following command to install the utility:
- npm install -g gulp ionic
That's all. You are now all set to run Ionic projects.
Create an Ionic Project
First you need to create an Ionic project to get a template for building Ionic apps. Run the following command to create one:
- ionic start [Project Name]
This will download the necessary packages and will create the project folder structure as shown in the figure below:
This comes with a bootstrap template for building Ionic apps. You can directly build and run this in the Android emulator, and you will get a basic template with navigation elements in place. To create the Android APK and deploy it in the emulator, first change into the project directory in Console 2.
Next, configure the Android platform using the following command:
- ionic platform add android
You can now build and run the app in the emulator using the following commands:
- ionic build android
- ionic emulate android
This will build and launch the app in the emulator as shown below:
Start Building Ionic Apps
In the Ionic project folder structure, you will notice a www folder. This is where all your HTML5 pages will go. There are additional elements that we will explore in a future session, but for now navigate into the www folder and open the index.html file in an editor. You will find the different elements that form a basic Ionic landing page. The header contains references to the Ionic CSS, AngularJS, Cordova, and your application-specific JS files with controller, app, and service logic.
The body element consists of the navigation structure and a place for rendering the views that are defined in the templates folder:
Now you can start creating your own views in the templates folder and build your app!
Posted by Jason Bloomberg
on April 2, 2014
No, it wasn’t an April Fool’s joke: Hadoop vendor Cloudera just closed a $900 million financing round, showing the world that PowerBall isn’t the only way to crazy riches. And while on the surface it seems to be a good problem to have (like we should all have such problems!), $900 million in the bank may actually be more trouble than it’s worth. What’s Cloudera going to do with all that green?
Clearly, at those stratospheric investment heights, the only possible exit is to go public. So, what should Cloudera spend money on to build a market cap even higher than its current $3.37 billion valuation? Yes, that’s billion with a B, or $3,370,000,000 for all you zero-counters out there.
Clearly, they need to improve their product. While the Big Data opportunity is unarguably large, Hadoop as a platform has its challenges. The problem with sinking cash into the tech is that they’ll quickly run into the “mythical man month” paradox: simply throwing people (in other words, money) at a piece of technology can only improve that technology so fast. All those zeroes won’t buy you a baby in a month, you know.
Perhaps they’ll invest in other products, either by assembling a gargantuan team of developers or by acquiring other companies, or both. Such a move is likely – but they’ll end up with a mishmash of different technologies, or they’ll run into the man-month problem again. Or both.
They’re also likely to grow their team. More sales people selling Hadoop to all 500 of the Fortune 100. More Hadoop experts – going after all 1000 of the 500 top gurus out there. More recruiters perhaps, looking to squeeze more blood out of the Silicon Valley techie turnip. True, such fuzzy math works to your benefit if you’re one of said gurus, but fuzzy math it is. You can only do so much hiring before you’re hitting the bottom of every barrel.
Whatever happens, there’s plenty of money to go around – unless, of course, you’re already a holder of Cloudera stock or options. If so, you may have just been diluted to the point you could call yourself a homeopathy treatment. But regardless of where you stand with respect to the Cloudera nest egg, it’s nigh impossible to divine a path that works out well for any of the parties involved – Cloudera employees, investors, or customers. But in the meantime, I’m sure they’ll throw some kick-ass parties. Pass the shrimp, please!
Posted by Jason Bloomberg
on March 28, 2014
This week I attended the bpmNEXT Conference in California. Unlike virtually every other conference I’ve ever attended, this one attracted Business Process Management (BPM) vendors and analysts, but not customers – and the vendors were perfectly happy with that. Essentially, this event was in part an opportunity for vendors to show their products to each other, but primarily an excuse to network with other people in the BPM market over drinks and dinner.
You would expect such a crowd to be cheerleaders for BPM, and many of them were. But all was not right in the world. One fellow quipped that not only was BPM dying, it was never alive in the first place. Another vendor pointed out that BPM is never on CIOs’ “must have” lists. And then we have vendors spending time and money to come to a conference devoid of sales opportunities.
So, what’s wrong with the BPM market? True, there is a market for this gear, as many of the presenters pointed out in discussions of customers. But there was always the undercurrent that this stuff really isn’t as useful or popular as people would like.
Welcome to the BPM zombie apocalypse. Zombies, after all, are dead people who don’t realize they’re dead, so they attempt to go about their business as though nothing were amiss. But instead of acting like normal, living people, they end up shuffling around as they shed body parts, groaning for brains. Time to get my shovel and escape to hype- and customer-filled conferences focusing on Big Data and Cloud.
Posted by Sandeep Chanda
on March 25, 2014
I always look forward to attending retrospective meetings in an agile setup. It is the time to reflect upon how the team fared and make amendments for the future. There is always a ton of learning, and every project springs some unique surprises during a retrospective session.
Team Foundation Server (TFS) Analytics can aid a retrospective discussion and, more interestingly, be used as a form of gamification to bring in a bit of competitiveness amongst peers. The most interesting of the analytics you can bring into the discussion is the Code Churn report. It helps gauge the lines of code written by each member of the team and illustrates the churn generated. It can be a reflection of how much refactoring has gone in, by comparing the deleted and added LOC. It may not be very useful directly for project budgeting and forecasting, but it definitely gives a sense of achievement to the team and provides non-tangible benefits in the form of motivation. It is very easy to run the analytics reports provided by TFS. You do, however, need to make sure that you have the appropriate permissions to run them.
Open Microsoft Office Excel 2013. You will see an option to connect to different sources under the Data tab. Select the option to create a connection to SQL Server Analysis Services, as illustrated in the following figure.
This will open the data connection wizard. Type the TFS server name in the first step of the wizard and click Next; step 2 will bring up the list of available cubes and perspectives as shown in the figure below:
Notice that Team System is the cube holding all possible dimensions and facts that you can create analytics on; however, specific perspectives like Code Churn and Code Coverage are pre-created for you.
Select the Code Churn perspective and click Finish. You will be prompted to choose the format in which you want to import the data. Select the PowerPivot option as shown:
From the PivotTable fields, choose the Code Churn Attributes as Column Values.
Scroll down the fields list and select "Checked In By" under the Version Control Changeset category. This will be added as a Row Value and you will see a report generated as shown in the following figure.
This imported data shows the Code Churn Count, Lines Added, Lines Deleted, and the total impact in the form of Total Lines of Code. You can further dissect the information by adding Year / Month attributes to determine the highest and lowest code churn months / years. In addition, by comparing with estimated hours of effort, you can use this information to drive sizing discussions.
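If the cube's numbers ever look surprising, it helps to remember what "churn" aggregates. Here is a rough sketch of the arithmetic in JavaScript (the changeset data is invented; TFS computes this inside the Analysis Services cube, not like this):

```javascript
// Each changeset records who checked in and the line deltas.
const changesets = [
  { checkedInBy: "alice", linesAdded: 120, linesDeleted: 30 },
  { checkedInBy: "bob",   linesAdded: 45,  linesDeleted: 60 },
  { checkedInBy: "alice", linesAdded: 10,  linesDeleted: 5 }
];

// Churn per person: total lines touched (added + deleted), so heavy
// refactoring shows up even when the net LOC barely moves.
function churnByAuthor(sets) {
  const totals = {};
  for (const c of sets) {
    totals[c.checkedInBy] =
      (totals[c.checkedInBy] || 0) + c.linesAdded + c.linesDeleted;
  }
  return totals;
}

const churn = churnByAuthor(changesets); // { alice: 165, bob: 105 }
```

Note how bob's churn (105) exceeds his net contribution (-15 lines): that gap between churn and net LOC is the refactoring signal the report surfaces.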
There are additional perspectives that TFS has pre-generated, like Builds, Code Coverage, Tests, and Work Items. Each of these perspectives is useful for an effective retrospective, covering build quality, work progress, and tested code paths for sunny- and rainy-day scenarios.
Posted by Jason Bloomberg
on March 24, 2014
When you write a computer program, you’re providing instructions to one or more computers so that they can do whatever it is you’ve programmed them to do. In other words, you programmed the computers to have one or more capabilities.
According to Wikipedia, a capability is the ability to perform or achieve certain actions or outcomes through a set of controllable and measurable faculties, features, functions, processes, or services. But of course, you already knew that, as capability is a common English word and we’re using it in a perfectly common way.
But not only is the term common, the notion that we program computers to give them capabilities is also ubiquitous. The problem is, this capability-centric notion of software has led us down a winding path with nothing but a dead end to greet us.
The problem with thinking of software as providing capabilities to our computers is that the computers will only be able to do those things we have programmed them to do. If our requirements change, we must change the program. But once we deploy the program, it becomes instant legacy – software that is mired in inflexibility, difficult or even impossible to reprogram or replace. Hence the proverbial winding path to nowhere.
Our computers, however, are really nothing but tools. When they come off the assembly line, they have really no idea what programs they’ll end up running – and they don’t care. Yet while we’re comfortable thinking of our hardware as tools, it takes a mind shift to fully grasp what it means to consider all of our software as tools.
Tools, you see, don’t have capabilities. They have affordances. Affordance is an unquestionably uncommon word, so let’s jump right to Wikipedia for the definition: an affordance is a property of an object, or an environment, which allows an individual to perform an action. The point to a tool, of course, is its affordances: a screwdriver affords users the ability to turn screws or open paint can lids, as well as unintended affordances like hitting a drum or perhaps breaking a window. But the screwdriver doesn’t have the capability of driving screws; rather, a person has that capability when they have a screwdriver – and furthermore, it’s up to the user to decide how they want to use the tool, not the manufacturer of the tool.
The software we use every day has affordances as well. Links are for clicking, buttons are for pushing, etc. Every coder knows how to build user interfaces that offer affordances. And we also have software we explicitly identify as tools: development tools (which afford the ability to develop software, among other affordances) for example. The problem arises when we cross the line from coding affordances to coding capabilities, which happens when we’re no longer coding tools, but we’re coding applications (generally speaking) or solutions.
Such applications are especially familiar in the enterprise space, where they are not simply single programs running on individual computers, but complicated, distributed monstrosities that serve many users and leverage many computers. We may use tools to build such applications, but the entire enterprise software lifecycle focuses on delivering the required functionality for bespoke applications – in other words, capabilities, rather than affordances. Even when you buy enterprise applications, the bulk of the value of the software comes from its capabilities. It’s no wonder we all hate legacy applications!
The challenge for the enterprise app space – and by extension, all categories of software – is to shift this balance between capabilities and affordances to the extreme of maximum affordance. In other words, instead of building or buying software that can do things (in other words, has capabilities), we want software that can enable users to do things – and then maximize the affordances so that we have software smart enough to afford any action.
Superficially this goal sounds too good to be true, but remember what computers are for: they’re for running programs which give them instructions. In other words, computers are examples of maximum affordance in action. The next step is to build software with the same purpose.
Posted by Jason Bloomberg
on March 19, 2014
In a recent article for Computerworld, Howard Baldwin took a well-deserved poke at the leading consulting punditocracy for pushing “Digital Transformation” on their customers. You must build a “digital industrial economy” opines Gartner! Or perhaps a “digital business” that includes a “comprehensive strategy that leads to new architectures, new services and new platforms” according to Accenture and McKinsey. Or maybe PricewaterhouseCoopers’ “digital IQ” is more your cup of tea?
The thrust of Baldwin’s article, however, is that CIOs are pushing back against all this consultant newspeak. Readers of this blog may well be wondering where I fall in this discussion. After all, I recently penned The Agile Architecture Revolution. In the book I make the case that we are in the midst of a true revolution – one that reinvents old ways of doing IT, replacing them with entirely new approaches. You might think, therefore, that I align with the gurus of Gartner or the mages of McKinsey.
Sorry to disappoint. Just because we’re in the midst of broad-based transformation in enterprise IT doesn’t necessarily mean that “digital transformation” should be on your corporate shopping list. Digital transformation, after all, isn’t a business priority. Making money, saving money, and keeping customers happy are business priorities. You should only craft a digital transformation strategy for your organization if it promises to improve the bottom line – and you can adequately connect the dots to said bottom line.
I’m sure the pundits at Pricewaterhouse and the others understand this point, and if you hire them, they’ll connect the dots between their whiz-bang digital whatzit and, you know, actually making money. But if you read their white papers or see their executives pontificate at a conference, that’s when they bring out the flashing lights and fireworks.
Bottom line: yes, we’re in a period of great transformation, and yes, you’ll need to figure out how to deal with it. But your business strategy must always focus on your core business priorities, not some flashy collection of buzzwords. Tech fads come and go, but business fundamentals remain the same.
Posted by Sandeep Chanda
on March 17, 2014
In one of the previous blogs, I discussed building background services that can scale by tenant. There could, however, be scenarios where you want the system to scale by load (e.g., the number of messages to be processed from a queue). Often in such scenarios you want to have control over the way the load is generated, to avoid redundancy, but want the processing to happen as soon as a message arrives. You would also want maximum utilization of the CPU during processing, to minimize the costs of scaling. An effective worker role design is key to the efficiency of such background services. The following figure illustrates one such design using the OnStart and Run methods of the RoleEntryPoint class.
You can use the OnStart method to create instances of scheduled services, using utilities such as the Quartz.Net cron scheduler, that run at a pre-defined interval and populate designated queues with messages to be processed. Typically, you would want only one of the configured instances to be able to write into the queue, to avoid duplication of messages for processing. The following code shows a typical cron schedule. The service configured will have the implementation of the leased method (discussed in the previous blog post) that will schedule the messages in the queue.
public override bool OnStart()
{
    UnityContainer = UnityHelper.ConfigureUnity();
    QueueProvider = UnityContainer.Resolve<IQueueProvider>();
    LogService = UnityContainer.Resolve<ILogService>();
    ScheduledServices(); // set up the Quartz.Net cron schedules
    return base.OnStart();
}
The code inside the ScheduledServices method could look like:
DateTimeOffset runTime = DateBuilder.EvenMinuteDate(DateTime.Now); // start at the next even minute
JobScheduler scheduler = new JobScheduler();
These are examples of different types of cron services that are run by Quartz.net based on the defined schedule.
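The rule that only one of the configured instances writes to the queue is typically enforced with a lease (in Azure, a blob lease). Here is a toy JavaScript sketch of the idea, with no storage SDK involved and all names invented, just to show why leasing prevents duplicate messages:

```javascript
// A single shared lease: whichever instance acquires it first gets
// to populate the queue; the other instances skip this cycle.
let leaseHolder = null;

function tryAcquireLease(instanceId) {
  if (leaseHolder === null) {
    leaseHolder = instanceId;
    return true;
  }
  return leaseHolder === instanceId; // re-entrant for the holder
}

const queue = [];

function scheduleMessages(instanceId, messages) {
  if (!tryAcquireLease(instanceId)) return false; // not the leader
  queue.push(...messages);
  return true;
}

scheduleMessages("instance-0", ["m1", "m2"]); // leader writes
scheduleMessages("instance-1", ["m1", "m2"]); // skipped: no duplicates
```

In a real deployment the lease would also expire after a timeout, so a new leader can be elected if the current one dies; that renewal logic is omitted here.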
The following code illustrates the implementation inside the Run method of a worker role that uses the Task Parallel Library to process multiple queues at the same time.
public override void Run()
{
    try {
        ProcessMessages<ISiteConfigurationManager, MaintenanceScheduleItem>(Constants.QueueNames.SiteConfigurationQueue, (m, e) => m.CreateSiteConfiguration(e));
        var hasMessages = ProcessMessages<IAggregationManager, QueueMessage>(Constants.QueueNames.PacketDataQueue, null, (m, e) => m.ComputeSiteMetrics(e));
    }
    catch (Exception ex) { /* log via the resolved ILogService and keep the role alive */ }
}
This can scale to as many instances as you want, depending on the load on the queue and the expected throughput. The parallel processing will ensure that the CPU is optimally utilized in the worker role, and the Run method will keep the instance processing items from the queue continuously. You can also use the auto-scale configuration to automatically scale the number of instances based on load.
There is one known issue you must be aware of in this design regarding data writes to Azure Table storage. Since multiple instances will be writing to the table, if you are running updates, there is a chance that the data was modified between the time you picked up the record and the time you wrote it back after processing. Azure, by default, rejects such operations. You can, however, force an update by setting the table entity's ETag property to "*". The following code illustrates a generic table entity save with forced updates.
public void Save<T>(T entity, bool isUpdate = false) where T : ITableEntity, new()
{
    TableName = typeof(T).Name;
    entity.ETag = "*"; // "*" forces the write, bypassing optimistic concurrency
    operations.Value.Add(isUpdate ? TableOperation.Replace(entity) : TableOperation.Insert(entity));
}
A word of caution, though: this may not be the design you want to pursue if the system you are building is intolerant of any degree of data corruption, since a forced update may result in exactly that.
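The risk here is the classic lost update. The following toy simulation in plain JavaScript (no Azure SDK; the ETag mechanics are reduced to a version string) shows how the forced "*" ETag trades safety for convenience:

```javascript
// A toy table row whose ETag changes on every successful write.
let row = { value: 1, etag: "v1" };

function save(newValue, etag) {
  // "*" means: overwrite no matter what changed in between.
  if (etag !== "*" && etag !== row.etag) {
    throw new Error("precondition failed (412)");
  }
  row = { value: newValue, etag: "v" + (parseInt(row.etag.slice(1)) + 1) };
}

// Two workers read the same row (etag "v1"), then both try to write.
save(2, "v1");    // worker A succeeds; the etag advances to "v2"
// save(3, "v1"); // worker B with optimistic concurrency would FAIL here...
save(3, "*");     // ...but a forced update silently wins: A's write is lost
```

With the real ETag check, worker B's stale write is rejected and it must re-read and retry; with "*", the last writer simply wins, which is fine only if your domain can tolerate that.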