Posted by Jason Bloomberg on April 14, 2014

Amazon.com played an April Fool’s Day prank on me. My shaver head gave out April 1st, so I ordered a replacement on Amazon.com. I chose free shipping (without Amazon Prime, the paid membership that comes with free two-day shipping). The site promised delivery in 8 to 11 days. Based on past experience, I expected to get it sooner, since it was shipping direct from Amazon. In this case, however, they didn’t ship for 8 of the 11 days, instead reporting they were “preparing for shipment” the whole time. I ended up receiving the order on day 11 – just within the promised window.

So you’re probably thinking, Amazon stuck to their promised delivery window, so I should quit my bitching already. And you’d be right. And Amazon may have had a very good reason why they needed 8 full days to prepare my order for shipment.

But I don’t think so. My theory as to what happened (keeping in mind it’s only a theory) is that Amazon changed their policy regarding free shipping in order to encourage customers to sign up for Amazon Prime. After all, if customers can get free shipping with quick delivery without paying for Amazon Prime, then why would anybody ever pay for the premium subscription at all?

From a business perspective, Amazon’s change in policy makes sense. But they had to lower their customers’ service expectations as a result – expectations based on past behavior. True, I could shop elsewhere, and I might, but probably not. That’s the bet Amazon is making here.

For the audience of this blog, however, the question of the day is: would Amazon pull the same trick with their Amazon Web Services Cloud offering? Would Amazon ever lower their level of service on a Cloud offering in order to move customers to a more expensive choice?

The answer: absolutely. You might think Amazon simply wants to be the low-cost leader because customers love low costs, and Amazon loves customers. And that’s true to a certain extent. But if they can squeeze more money out of you in a way that won’t jeopardize their pricing pressure on the competition and won’t likely cause you to drop Amazon for said competition, rest assured they will have no qualms about doing so. After all, once you’re in Amazon’s Cloud, it’s tough to move. If you need a reminder, just look at a photo of me with an 11-day beard.


Posted by Jason Bloomberg on April 8, 2014

Now that I am Chief Evangelist at EnterpriseWeb, people occasionally ask me what a Chief Evangelist does. My answer is that I provide thought leadership and marketing. To which my ever-curious audience predictably asks what the difference is between the two.

Thought leadership and marketing are in fact two different tasks with different (although related) goals. Marketing centers on communicating the value proposition and differentiator – what problems we solve, why you should buy what we’re selling, and why you shouldn’t (or can’t) buy it from anyone else.

But thought leadership has a different goal. The goal of thought leadership is to paint the picture of the world of the future, a world our technology enables. Technology is changing and business is changing, and how technology-empowered business will look five, ten, or twenty years out is a complicated, impenetrable mess. Thought leadership helps people clear up the confusion so they can gradually understand how all the pieces fit together.

Marketing is about today and perhaps a few months into the future – what can we do for customers this year. Thought leadership connects today to tomorrow. It’s not about getting all the details right. It’s about painting the big picture. Thought leadership gives us the opportunity to place our technology into the broader context.

Thought leadership involves telling a story, one chapter at a time. Take the reader on a journey, filling in the missing pieces to the big picture over time. The story will naturally improve over time, and that’s OK – since no one really cares about what the story was in years past. It’s assembling the big picture of the future, piece by piece. Each piece has to stand on its own, but how they all fit together is the real lesson.


Posted by Sandeep Chanda on April 7, 2014

Overview

The world of HTML5 hybrid app development frameworks just got hotter with the beta release of Ionic Actinium. The Ionic Framework, dubbed "Bootstrap for Native," comes from Drifty, the makers of popular tools such as Jetstrap and Codiqa.

HTML5 is the platform of choice for desktop web applications and mobile websites, and it is gaining immense popularity for building hybrid and self-contained native mobile apps. Using HTML5 helps reduce the steep learning curve involved in developing native apps, which in turn reduces time to market.

The Ionic Framework uses HTML5, CSS and JavaScript, packaged with the Cordova tools to create platform-specific iOS and Android apps. Many of its core features are built with AngularJS, and AngularJS is the highly recommended way to build apps using Ionic.

In this post, we will explore how you can set up Ionic on a Windows machine and then start building Android apps using the framework.

Prerequisites

Following are the prerequisites:

  1. You must have the JDK and the Android SDK installed. JDK 7 is the typical requirement, although JDK 6 worked on my machine. On the Android side, you need the latest version, currently 22.6.2 for the SDK tools and 19.0.1 for the platform-tools. You must also have a device configuration (AVD) for the latest API level.

  2. Download Apache Ant, and note the path of the folder extracted from the zip. It should be something like C:\apache-ant-1.9.2-bin.
  3. Install Node.js.
  4. Configure the PATH variable in your system environment variables to include the paths for the JDK, the Android SDK tools and platform-tools, and the Ant bin folder you extracted in step 2. Create an individual variable for each, then use the %<Variable_name>% notation to reference them in PATH (see the example commands after this list):
    1. ANDROID_HOME: C:\Users\sandeep.chanda\AppData\Local\Android\android-sdk\tools
    2. ANDROID_PLATFORM: C:\Users\sandeep.chanda\AppData\Local\Android\android-sdk\platform-tools
    3. JAVA_HOME: C:\Program Files (x86)\Java\jdk1.6.0_39
    4. ANT_HOME: C:\Software\apache-ant-1.9.2-bin\apache-ant-1.9.2\bin
    5. PATH: %ANDROID_HOME%;%ANDROID_PLATFORM%;%JAVA_HOME%;%ANT_HOME%

  5. Download Console 2, and extract the package into a folder.
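As a sketch of step 4, you could create the individual variables from a command prompt with the built-in setx utility (the paths below are the examples from above; adjust them for your machine). Editing the PATH variable itself is usually safer through the Environment Variables dialog:

  • setx ANDROID_HOME "C:\Users\sandeep.chanda\AppData\Local\Android\android-sdk\tools"
  • setx ANDROID_PLATFORM "C:\Users\sandeep.chanda\AppData\Local\Android\android-sdk\platform-tools"
  • setx JAVA_HOME "C:\Program Files (x86)\Java\jdk1.6.0_39"
  • setx ANT_HOME "C:\Software\apache-ant-1.9.2-bin\apache-ant-1.9.2\bin"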

You are now all set to configure Ionic and start building the apps.

Configure Ionic

Open an instance of Console 2 and execute the following commands:

  1. First, install Cordova if it isn't already installed:
    • npm install -g cordova
  2. There is a command-line utility for Ionic that builds and packages Ionic apps using Gulp. Run the following command to install it:
    • npm install -g gulp ionic

That's all. You are now all set to run Ionic projects.

Create an Ionic Project

First you need to create an Ionic project, which gives you a template for building Ionic apps. Run the following command:

  • ionic start [Project Name]

This will download the necessary packages and create the project folder structure.
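At the time of writing, the generated structure looks roughly like the following (it may vary across Ionic versions):

  www/          - your HTML5, CSS, and JavaScript application code
  plugins/      - installed Cordova plugins
  platforms/    - platform-specific builds (populated when you add a platform below)
  config.xml    - Cordova configuration
  gulpfile.js   - Gulp build tasks
  package.json  - Node.js dependencies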

This comes with a bootstrap template for building Ionic apps. You can build and run it directly in the Android emulator to get a basic template with navigation elements in place. To create the Android APK and deploy it to the emulator, first change into the project directory in Console 2:

  • cd [Project Name]

Next, configure the Android platform using the following command:

  • ionic platform android

You can now build and run the app in the emulator using the following commands:

  • ionic build android
  • ionic emulate android

This will build and launch the app in the emulator.

Start Building Ionic Apps

In the Ionic project folder structure, you will notice the www folder. This is where all your HTML5 pages go. There are additional elements that we will explore in a future session, but for now navigate into the www folder and open the index.html file in an editor. You will find the different elements that form a basic Ionic landing page. The header contains references to the Ionic CSS, AngularJS, Cordova, and your application-specific JS files holding the controller, app, and service logic.

The body element consists of the navigation structure and a placeholder for rendering the views defined in the templates folder.
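As a rough sketch (based on the standard Ionic starter template, so your generated markup may differ), the body looks something like this:

  <body ng-app="starter">
    <!-- Top navigation bar shared by all views -->
    <ion-nav-bar class="bar-stable">
      <ion-nav-back-button></ion-nav-back-button>
    </ion-nav-bar>
    <!-- Views defined in the templates folder are rendered here -->
    <ion-nav-view></ion-nav-view>
  </body>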

Now you can start creating your own views in the templates folder and build your app!


Posted by Jason Bloomberg on April 2, 2014

No, it wasn’t an April Fool’s joke: Hadoop vendor Cloudera just closed a $900 million financing round, showing the world that PowerBall isn’t the only way to crazy riches. And while on the surface it seems to be a good problem to have (like we should all have such problems!), $900 million in the bank may actually be more trouble than it’s worth. What’s Cloudera going to do with all that green?

Clearly, at those stratospheric investment heights, the only possible exit is to go public. So, what should Cloudera spend money on to build a market cap even higher than its current $3.37 billion valuation? Yes, that’s billion with a B, or $3,370,000,000 for all you zero-counters out there.

Clearly, they need to improve their product. While the Big Data opportunity is unarguably large, Hadoop as a platform has its challenges. The problem with sinking cash into the tech is that they’ll quickly run into the “mythical man month” paradox: simply throwing people (in other words, money) at a piece of technology can only improve that technology so fast. All those zeroes won’t buy you a baby in a month, you know.

Perhaps they’ll invest in other products, either by assembling a gargantuan team of developers or by acquiring other companies, or both. Such a move is likely – but they’ll end up with a mishmash of different technologies, or they’ll run into the man-month problem again. Or both.

They’re also likely to grow their team. More sales people selling Hadoop to all 500 of the Fortune 100. More Hadoop experts – going after all 1000 of the 500 top gurus out there. More recruiters perhaps, looking to squeeze more blood out of the Silicon Valley techie turnip. True, such fuzzy math works to your benefit if you’re one of said gurus, but fuzzy math it is. You can only do so much hiring before you’re hitting the bottom of every barrel.

Whatever happens, there’s plenty of money to go around – unless, of course, you’re already a holder of Cloudera stock or options. If so, you may have just been diluted to the point you could call yourself a homeopathy treatment. But regardless of where you stand with respect to the Cloudera nest egg, it’s nigh impossible to divine a path that works out well for any of the parties involved – Cloudera employees, investors, or customers. But in the meantime, I’m sure they’ll throw some kick-ass parties. Pass the shrimp, please!


Posted by Jason Bloomberg on March 28, 2014

This week I attended the bpmNEXT Conference in California. Unlike virtually every other conference I’ve ever attended, this one attracted Business Process Management (BPM) vendors and analysts, but not customers – and the vendors were perfectly happy with that. Essentially, this event was in part an opportunity for vendors to show their products to each other, but primarily an excuse to network with other people in the BPM market over drinks and dinner.

You would expect such a crowd to be cheerleaders for BPM, and many of them were. But all was not right in the world. One fellow quipped that not only was BPM dying, it was never alive in the first place. Another vendor pointed out that BPM is never on CIOs’ “must have” lists. And then we have vendors spending time and money to come to a conference devoid of sales opportunities.

So, what’s wrong with the BPM market? True, there is a market for this gear, as many of the presenters pointed out in discussions of customers. But there was always the undercurrent that this stuff really isn’t as useful or popular as people would like.

Welcome to the BPM zombie apocalypse. Zombies, after all, are dead people who don’t realize they’re dead, so they attempt to go about their business as though nothing were amiss. But instead of acting like normal, living people, they end up shuffling around as they shed body parts, groaning for brains. Time to get my shovel and escape to the hype- and customer-filled conferences focusing on Big Data and Cloud.


Posted by Sandeep Chanda on March 25, 2014

I always look forward to attending retrospective meetings in an agile setup. It is the time to reflect upon how the team fared and make amendments for the future. There is always a ton of learning, and every project springs some unique surprises during a retrospective session.

Team Foundation Server (TFS) analytics can aid a retrospective discussion and, more interestingly, be used as a form of gamification to bring in a bit of competitiveness amongst peers. The most interesting of the analytics you can bring into the discussion is the Code Churn report. It gauges the lines of code written by each member of the team and illustrates the churn generated. Comparing the deleted and added LOC reflects how much refactoring has gone in. It may not be directly useful for project budgeting and forecasting, but it definitely gives the team a sense of achievement and provides non-tangible benefits in the form of motivation. It is very easy to run the analytics reports provided by TFS. You do, however, need to make sure you have the appropriate permissions to run them.

Open Microsoft Office Excel 2013. You will see options to connect to different data sources under the Data tab. Select the option to create a connection to SQL Server Analysis Services.

This will open the data connection wizard. Type the TFS server name in the first step of the wizard and click Next. Step 2 brings up the list of available cubes and perspectives.

Notice that Team System is the cube holding all the dimensions and facts you can build analytics on; specific perspectives, like Code Churn and Code Coverage, come pre-created.

Select the Code Churn perspective and click Finish. You will be prompted to choose the format in which you want to import the data. Select the PowerPivot option.

From the PivotTable fields, choose the Code Churn Attributes as Column Values.

Scroll down the field list and select "Checked In By" under the Version Control Changeset category. This will be added as a Row Value, and you will see the report generated.

The imported data shows the Code Churn Count, Lines Added, Lines Deleted, and the total impact in the form of Total Lines of Code. You can further dissect the information by adding Year/Month attributes to determine the highest and lowest code churn months or years. In addition, by comparing against estimated hours of effort, you can use this information to drive sizing discussions.

TFS pre-generates additional perspectives, such as Builds, Code Coverage, Tests, and Work Items. Each of these perspectives is useful for an effective retrospective, discussing build quality, work progress, and tested code paths for sunny- and rainy-day scenarios.


Posted by Jason Bloomberg on March 24, 2014

When you write a computer program, you’re providing instructions to one or more computers so that they can do whatever it is you’ve programmed them to do. In other words, you programmed the computers to have one or more capabilities.

According to Wikipedia, a capability is the ability to perform or achieve certain actions or outcomes through a set of controllable and measurable faculties, features, functions, processes, or services. But of course, you already knew that, as capability is a common English word and we’re using it in a perfectly common way.

But not only is the term common, the notion that we program computers to give them capabilities is also ubiquitous. The problem is, this capability-centric notion of software has led us down a winding path with nothing but a dead end to greet us.

The problem with thinking of software as providing capabilities to our computers is that the computers will only be able to do those things we have programmed them to do. If our requirements change, we must change the program. But once we deploy the program, it becomes instant legacy – software that is mired in inflexibility, difficult or even impossible to reprogram or replace. Hence the proverbial winding path to nowhere.

Our computers, however, are really nothing but tools. When they come off the assembly line, they really have no idea what programs they’ll end up running – and they don’t care. Yet while we’re comfortable thinking of our hardware as tools, it takes a shift in mindset to fully grasp what it means to consider all of our software as tools.

Tools, you see, don’t have capabilities. They have affordances. Affordance is an unquestionably uncommon word, so let’s jump right to Wikipedia for the definition: an affordance is a property of an object, or an environment, which allows an individual to perform an action. The point of a tool, of course, is its affordances: a screwdriver affords users the ability to turn screws or open paint can lids, along with unintended affordances like hitting a drum or perhaps breaking a window. But the screwdriver doesn’t have the capability of driving screws; rather, a person has that capability when they have a screwdriver – and furthermore, it’s up to the user to decide how to use the tool, not the manufacturer of the tool.

The software we use every day has affordances as well. Links are for clicking, buttons are for pushing, etc. Every coder knows how to build user interfaces that offer affordances. And we also have software we explicitly identify as tools: development tools (which afford the ability to develop software, among other affordances) for example. The problem arises when we cross the line from coding affordances to coding capabilities, which happens when we’re no longer coding tools, but we’re coding applications (generally speaking) or solutions.

Such applications are especially familiar in the enterprise space, where they are not simply single programs running on individual computers, but complicated, distributed monstrosities that serve many users and leverage many computers. We may use tools to build such applications, but the entire enterprise software lifecycle focuses on delivering the required functionality for bespoke applications – in other words, capabilities, rather than affordances. Even when you buy enterprise applications, the bulk of the value of the software comes from its capabilities. It’s no wonder we all hate legacy applications!

The challenge for the enterprise app space – and by extension, all categories of software – is to shift this balance between capabilities and affordances to the extreme of maximum affordance. In other words, instead of building or buying software that can do things (in other words, has capabilities), we want software that can enable users to do things – and then maximize the affordances so that we have software smart enough to afford any action.

Superficially this goal sounds too good to be true, but remember what computers are for: they’re for running programs which give them instructions. In other words, computers are examples of maximum affordance in action. The next step is to build software with the same purpose.


Posted by Jason Bloomberg on March 19, 2014

In a recent article for ComputerWorld, Howard Baldwin took a well-deserved poke at the leading consulting punditocracy for pushing “Digital Transformation” on their customers. You must build a “digital industrial economy,” opines Gartner! Or perhaps a “digital business” that includes a “comprehensive strategy that leads to new architectures, new services and new platforms,” according to Accenture and McKinsey. Or maybe PricewaterhouseCoopers’ “digital IQ” is more your cup of tea?

The thrust of Baldwin’s article, however, is that CIOs are pushing back against all this consultant newspeak. Readers of this blog may well be wondering where I fall in this discussion. After all, I recently penned The Agile Architecture Revolution. In the book I make the case that we are in the midst of a true revolution – one that reinvents old ways of doing IT, replacing them with entirely new approaches. You might think, therefore, that I align with the gurus of Gartner or the mages of McKinsey.

Sorry to disappoint. Just because we’re in the midst of broad-based transformation in enterprise IT doesn’t necessarily mean that “digital transformation” should be on your corporate shopping list. Digital transformation, after all, isn’t a business priority. Making money, saving money, and keeping customers happy are business priorities. You should only craft a digital transformation strategy for your organization if it promises to improve the bottom line – and you can adequately connect the dots to said bottom line.

I’m sure the pundits at Pricewaterhouse and the others understand this point, and if you hire them, they’ll connect the dots between their whiz-bang digital whatzit and, you know, actually making money. But if you read their white papers or see their executives pontificate at a conference, that’s when they bring out the flashing lights and fireworks.

Bottom line: yes, we’re in a period of great transformation, and yes, you’ll need to figure out how to deal with it. But your business strategy must always focus on your core business priorities, not some flashy collection of buzzwords. Tech fads come and go, but business fundamentals remain the same.


Posted by Sandeep Chanda on March 17, 2014

In one of the previous blogs, I discussed building background services that can scale by tenant. There could, however, be scenarios where you want the system to scale by load (e.g., the number of messages to be processed from a queue). Often in such scenarios you want control over the way the load is generated to avoid redundancy, but you want the processing to happen as soon as a message arrives. You also want maximum CPU utilization during processing to minimize the costs of scaling. An effective worker role design is the key to efficient background services. One such design uses the OnStart and Run methods of the RoleEntryPoint class.

You can use the OnStart method to create instances of scheduled services, using utilities such as the Quartz.NET cron service scheduler, that run at pre-defined intervals and populate designated queues with messages to be processed. Typically, you want only one of the configured instances to be able to write into the queue, to avoid duplicate messages being processed. The following code shows a typical setup. The service configured here implements the leased method (discussed in the previous blog post) that schedules the messages in the queue.

public override bool OnStart()
{
    // Apply the default ServicePointManager settings (connection tuning)
    NetworkSettings.SetServicePointManagerDefaultSetting();

    // Resolve dependencies through the Unity container
    UnityContainer = UnityHelper.ConfigureUnity();
    QueueProvider = UnityContainer.Resolve<IQueueProvider>();
    LogService = UnityContainer.Resolve<ILogService>();

    // Register the cron schedules
    ScheduleServices();
    return base.OnStart();
}

The code inside the ScheduleServices method could look like this:

DateTimeOffset runTime = DateBuilder.EvenMinuteDate(DateTime.Now);
JobScheduler scheduler = new JobScheduler();

// Each service is scheduled with a cron expression read from the role configuration
scheduler.Schedule<SiteConfigurationService>(runTime,
    "SiteConfigurationSchedule",
    "SiteConfigurationGroup",
    RoleEnvironment.GetConfigurationSettingValue("SiteConfigurationCronSchedule"));
scheduler.Schedule<AggregationService>(runTime,
    "AggregationSchedule",
    "AggregationGroup",
    RoleEnvironment.GetConfigurationSettingValue("AggregationCronSchedule"));

These are examples of different cron-driven services that Quartz.NET runs based on the defined schedules.
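The JobScheduler wrapper itself isn't shown here. As a minimal sketch (an assumption about its shape, not the original implementation), it could map onto the Quartz.NET 2.x fluent API like this:

using Quartz;
using Quartz.Impl;

// Hypothetical sketch of the JobScheduler wrapper used above.
// Assumes each scheduled service (e.g. SiteConfigurationService) implements Quartz's IJob.
public class JobScheduler
{
    private readonly IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();

    public void Schedule<TJob>(DateTimeOffset runTime, string name, string group,
        string cronExpression) where TJob : IJob
    {
        IJobDetail job = JobBuilder.Create<TJob>()
            .WithIdentity(name, group)
            .Build();

        // Fire according to the cron expression, starting at the supplied time
        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity(name + "Trigger", group)
            .StartAt(runTime)
            .WithCronSchedule(cronExpression)
            .Build();

        scheduler.ScheduleJob(job, trigger);
        scheduler.Start();
    }
}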

The following code illustrates the implementation inside the Run method of a worker role that uses the Task Parallel Library to process multiple queues at the same time.

public override void Run()
{
    while (true)
    {
        try
        {
            // Process both queues in parallel to keep the CPU optimally utilized
            Parallel.Invoke(
                () =>
                {
                    ProcessMessages<ISiteConfigurationManager, MaintenanceScheduleItem>(
                        Constants.QueueNames.SiteConfigurationQueue,
                        (m, e) => m.CreateSiteConfiguration(e));
                },
                () =>
                {
                    var hasMessages = ProcessMessages<IAggregationManager, QueueMessage>(
                        Constants.QueueNames.PacketDataQueue,
                        null,
                        (m, e) => m.ComputeSiteMetrics(e));

                    // Back off briefly when the queue is empty to avoid a tight polling loop
                    if (!hasMessages)
                        Thread.Sleep(200);
                });
        }
        catch (Exception ex)
        {
            LogService.Error(ex);
        }
    }
}

This can scale to as many instances as you want, depending on the load on the queue and the expected throughput. The parallel processing ensures that the CPU in the worker role is optimally utilized, and the Run method keeps each instance continuously processing items from the queues. You can also use the autoscale configuration to automatically scale the number of instances based on load.
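The ProcessMessages helper referenced above isn't shown in this post. As a guess at its shape (the TryGetMessage member on IQueueProvider is an assumption, and the overload taking a third argument is omitted), it might look like:

// Hypothetical sketch of the ProcessMessages helper (not the original implementation).
// Resolves the manager from Unity, drains the named queue, and hands each message
// to the supplied handler. Returns false if the queue was empty.
private bool ProcessMessages<TManager, TMessage>(string queueName,
    Action<TManager, TMessage> handler)
{
    var manager = UnityContainer.Resolve<TManager>();
    var hasMessages = false;

    TMessage message;
    // TryGetMessage is an assumed member of the IQueueProvider abstraction
    while (QueueProvider.TryGetMessage(queueName, out message))
    {
        hasMessages = true;
        handler(manager, message);
    }
    return hasMessages;
}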

Known Issues

There is one known issue you must be aware of in this design, regarding data writes to Azure Table storage. Since multiple instances will be writing to the table, if you are running updates, there is a chance the data was modified between the time you read the record and the time you wrote it back after processing. Azure, by default, rejects such operations. You can, however, force an update by setting the table entity's ETag property to "*". The following code illustrates a generic table entity save with forced updates.

public void Save<T>(T entity, bool isUpdate = false) where T : ITableEntity, new()
{
    TableName = typeof(T).Name;

    // An ETag of "*" makes the update unconditional, bypassing optimistic concurrency
    if (isUpdate)
        entity.ETag = "*";

    operations.Value.Add(isUpdate ? TableOperation.Replace(entity) : TableOperation.Insert(entity));
}
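For example, forcing a replace after processing might look like this (the repository name is hypothetical):

// Replace the entity even if another instance has modified it in the meantime
repository.Save(processedEntity, isUpdate: true);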

A word of caution, though: this may not be the design you want to pursue if the system you are building is completely intolerant of lost updates, since a forced update can silently overwrite changes made by another instance.


Posted by Jason Bloomberg on March 13, 2014

All developers these days are familiar with the second statement in the Agile Manifesto: customer collaboration over contract negotiation. You’re on the same team as your customer or stakeholder, the reasoning goes, so you don’t want an adversarial relationship with them. Instead, you should work together to achieve the common goal of working software that meets customer needs.

If you’re not heavily involved with Agile, or even if you are and you step back a moment and look at this principle in a new light, you’ll see that it comes across as calling for some kind of unrealistic Kumbaya moment. Throw away decades of friction and misunderstanding between stakeholders and developers, and miraculously work together in the spirit of love and cooperation! Gag me now, seriously.

In reality, there’s nothing particularly Kumbaya about your run-of-the-mill stakeholder. They’re too busy with, you know, their jobs to spend time holding hands with developers – people who make them feel uncomfortable on the best days. From their perspective, the coders are there to build what they ask, so go away and don’t bother them until you’re done already!

What’s an Agile developer or Scrum master to do when stakeholders are this intractable? No amount of touchy-feeliness is going to bring them around. But you can’t really be Agile without their participation, either.

Time for in-your-face Agile.

The bottom line is that the “customer collaboration” from the manifesto isn’t meant to indicate that the parties will be friends or even willing participants. Draw a line in the sand. Make it clear to your stakeholders that the project simply won’t get done unless they cooperate.

You’ll need to use your best judgment, of course – I’m not recommending threats of violence here. But sometimes you have to get tough. If you’re a development firm serving a paying customer, threaten to give their money back. You don’t want business from customers who want the benefits of Agile but aren’t willing to do their part.

For an internal stakeholder, it’s your call whether you want to put your job on the line – but sometimes that might be your best option, if the alternative is to spend months of your time working on a project that you know is doomed to failure due to stakeholder intransigence. However, if you join with the rest of your team and simply refuse to work on a project that lacks proper stakeholder participation, you’re spreading the risk. Remember, if your team is any good, better jobs with more cooperative stakeholders are always plentiful anyway.

In-your-face Agile is unlikely to make you any friends. Don’t expect warm fuzzies around the holidays. But if your efforts lead to successful projects, everyone wins in the end – including even the most obstinate of customers.

