Posted by Sandeep Chanda on August 14, 2014

In Visual Studio 2013, the team unified the performance and diagnostics experience (memory profiling, etc.) under one umbrella and named it the Performance and Diagnostics Hub. Available under the Debug menu, this option reduces a lot of the clutter involved in profiling client- and server-side code during a debug operation. There was a lot of visual noise in the IDE in the 2012 version, and the hub is a significant step toward improving developer productivity.

In the Performance and Diagnostics Hub, you can select the target and specify the performance tools you want to run diagnostics with. There are various tools you can use to start capturing performance metrics like CPU usage and memory allocation. For example, you can collect CPU utilization metrics on a Windows Forms or WPF application.

The latest release, Update 3, brings some key enhancements to the CPU and memory usage tools. In the CPU Usage tool, you can now right-click a function name that was captured as part of the diagnostics session and click View Source. This lets you easily navigate to the code that is consuming CPU in your application. The Memory Usage tool now allows you to capture memory usage for Win32 and WPF applications.

The hub also helps you identify hot paths in the application code that might be consuming extra CPU cycles and may need refactoring.

You can also look for the functions that are doing the most work, as illustrated in the figure below.

Overall, the Performance and Diagnostics Hub has become a useful addition to the developer's arsenal, improving productivity and helping address the non-functional aspects of an application.


Posted by Jason Bloomberg on August 12, 2014

Two stories on the Internet of Things (IoT) caught my eye this week. First, IDC’s prediction that the IoT market will balloon from US$1.9 trillion in 2013 to $7.1 trillion in 2020. Second, the fact it took hackers 15 seconds to hack the Google Nest thermostat – the device Google wants to make the center of the IoT for the home.

These two stories aren’t atypical, either. Gartner has similarly overblown market growth predictions, although they do admit a measure of overhypedness in the IoT market (ya think?). And as for whether the Nest is an unusual case, unfortunately, the IoT is rife with security problems.

What are we to make of these opposite, potentially contradictory trends? Here are some possibilities:

We simply don’t care that the IoT is insecure. We really don’t mind that everyone from Russian organized criminals to the script kiddie down the block can hack the IoT. We want it anyway. The benefits outweigh any drawbacks.

Vendors will sufficiently address the IoT’s security issues, so by 2020, we’ll all be able to live in a reasonably hacker-free (and government spying-free) world of connected things. After all, vendors have done such a splendid job making sure our everyday computers are hack and spy-free so far, right?

Perhaps one or both of the above possibilities will take place, but I’m skeptical. Why, then, all the big numbers? Perhaps it’s the analysts themselves? Here are two more possibilities:

Vendors pay analysts (directly or indirectly) to make overblown market size predictions, because such predictions convince customers, investors, and shareholders to open their wallets. Never mind the hacker behind the curtain, we’re the great and terrible Wizard of IoT!

Analysts simply ignore factors like the public perception of security when making their predictions. Analysts make their market predictions by asking vendors what their revenues were over the last few years, putting the numbers into a spreadsheet, and dragging the cells to the right. Voila! Market predictions. Only there’s no room in the spreadsheet for adverse influences like security perception issues.

Maybe the analysts are the problem. Or just as likely, I got up on the wrong side of the bed this morning. Be that as it may, here’s a contrarian prediction for you:

Both consumers and executives will get fed up with the inability of vendors to secure their gear, and the IoT will wither on the vine.

The wheel is spinning, folks. Which will it be? Time to place your bets!


Posted by Jason Bloomberg on August 8, 2014

One of the most fascinating aspects of the Agile Architecture drum I’ve been beating for the last few years is how multifaceted the topic is. Sometimes the focus is on Enterprise Architecture. Other times I’m talking about APIs and Services. And then there is the data angle, as well as the difficult challenge of semantic interoperability. And finally, there’s the Digital Transformation angle, driven by marketing departments who want to tie mobile and social to the Web but struggle with the deeper technology issues.

As it happens, I’ll be presenting on each of these topics over the next few weeks. First up, a Webinar on Agile Architecture Challenges & Best Practices I’m running jointly with EITA Global on Tuesday August 19 at 10:00 PDT/1:00 EDT. I’ll provide a good amount of depth on Agile Architecture – both architecture for Agile development projects as well as architecture for achieving greater business agility. This Webinar lasts a full ninety minutes, and covers the central topics in Bloomberg Agile Architecture™. If you’re interested in my Bloomberg Agile Architecture Certification course, but don’t have the time or budget for a three-day course (or you simply don’t want to wait for the November launch), then this Webinar is for you.

Next up: my talk at the Dataversity Semantic Technology & Business Conference in San Jose CA, which is collocated with their NoSQL Now! Conference August 19 – 21. My talk is on Dynamic Coupling: The Pot of Gold under the Semantic Rainbow, and I’ll be speaking at 3:00 on Thursday August 21st. I’ll be doing a deep dive into the challenges of semantic integration at the API level, and how Agile Architectural approaches can resolve such challenges. If you’re in the Bay Area the week of August 18th and you’d like to get together, please drop me a line.

If you’re interested in lighter, more business-focused fare, come see me at The Innovation Enterprise’s Digital Strategy Innovation Summit in San Francisco CA September 25 – 26. I’ll be speaking the morning of Thursday September 25th on the topic Why Enterprise Digital Strategies Must Drive IT Modernization. Yes, I know – even for this marketing-centric Digital crowd, I’m still talking about IT, but you’ll get to see me talk about it from the business perspective: no deep dives into dynamic APIs or Agile development practices, promise! I’ll also be moderating a panel on Factoring Disruptive Tech into Business with top executives from Disney, Sabre, Sephora, and more.

I’m particularly excited about the Digital Strategy Innovation Summit because it’s a new crowd for me. I’ve always tried to place technology into the business context, but so far most of my audience has been technical. Hope you can make it to at least one of these events, if only to see my Digital Transformation debut!


Posted by Sandeep Chanda on August 5, 2014

Microsoft Azure Service Bus Event Hubs provide a topic-based publish/subscribe messaging platform that allows for high-throughput and low-latency message processing. A preview version was recently released by the Microsoft Azure team.

Event Hubs are a component of Service Bus and work alongside Service Bus topics and queues. They provide a perfect platform for collecting event streams from multiple devices and sending them to an analytics engine for processing. This makes them ideal for an Internet of Things (IoT) scenario, where you can capture events from various connected devices and make meaningful decisions based on the ingested event stream.

You can also run analytics on the ingress stream to perform tenant billing and performance monitoring, among many other possibilities. Event Hubs not only provide a reliable message processing platform, but also support durability for a predefined retention period, allowing consumers to reconnect after a failure.

Getting Started

An Event Hub is part of a Service Bus namespace and typically consists of publisher policies, consumer groups, and partition keys. A publisher is a logical concept for publishing a message into an Event Hub, and a consumer is a logical concept for receiving messages. Partitions allow Event Hubs to scale; subscribers connect to a partition, and events within a partition are delivered in order.

Currently the supported protocols for pub-sub are HTTP and AMQP. Note that for receiving data only AMQP is currently supported.

The Azure Service Bus NuGet package provides the EventProcessorHost and EventHubClient APIs to process messages and to send messages to the hub, respectively. To start a host that can listen for incoming messages, you create a new instance of EventProcessorHost as shown below:

host = new EventProcessorHost(
    hostName,
    eventHubName,
    consumerGroupName,
    eventHubConnectionString,
    storageConnectionString,
    eventHubName.ToLowerInvariant());

Note that it is a good practice to pass the hub name in lowercase to avoid any case conflicts with names that subscribers may present. You need to provide the connection string for the Event Hub in the Service Bus namespace, the storage connection string that the host uses to track its progress, the name of the consumer group, and a host name. You can then implement the IEventProcessorFactory interface to provide a factory for processing the incoming messages, and register it with the host using the RegisterEventProcessorFactoryAsync method so that the host starts listening for ingress. Similarly, from the client, you can create an instance of the Event Hub client using the EventHubClient.CreateFromConnectionString method and then start sending messages using the SendAsync method that the client exposes.
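To see how these pieces fit together, here is a minimal C# sketch of both sides. It assumes the host, eventHubConnectionString, and eventHubName variables from the snippet above; the MyEventProcessor, MyEventProcessorFactory, and EventHubSample names and the sample payload are illustrative placeholders, not part of the post.

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

// Processor that handles the events pumped from a single partition.
class MyEventProcessor : IEventProcessor
{
    public Task OpenAsync(PartitionContext context) { return Task.FromResult(0); }

    public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var message in messages)
            Console.WriteLine(Encoding.UTF8.GetString(message.GetBytes()));

        // Record progress so a restarted host resumes from this point.
        await context.CheckpointAsync();
    }

    public Task CloseAsync(PartitionContext context, CloseReason reason) { return Task.FromResult(0); }
}

// Factory the host calls once for each partition lease it acquires.
class MyEventProcessorFactory : IEventProcessorFactory
{
    public IEventProcessor CreateEventProcessor(PartitionContext context)
    {
        return new MyEventProcessor();
    }
}

class EventHubSample
{
    public static async Task RunAsync(EventProcessorHost host,
        string eventHubConnectionString, string eventHubName)
    {
        // Receiving side: register the factory so the host starts dispatching events.
        await host.RegisterEventProcessorFactoryAsync(new MyEventProcessorFactory());

        // Sending side: create a client for the same hub and push a single event.
        var client = EventHubClient.CreateFromConnectionString(
            eventHubConnectionString, eventHubName);
        await client.SendAsync(new EventData(Encoding.UTF8.GetBytes("sample device reading")));
    }
}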


Posted by Jason Bloomberg on August 1, 2014

What’s wrong with this scenario? Bob, your VP of Engineering, brings a ScrumMaster, a Java developer, a UX (user experience) specialist, and a Linux admin into his office. “We need to build this widget app,” he says, describing what a product manager told him she wanted. “So go ahead and self-organize.”

Bob’s intentions are good, right? After all, Agile teams are supposed to be self-organizing. Instead of giving the team specific directions, he laid out the general goal and then asked the team to organize themselves in order to achieve the goal. What could be more Agile than that?

Do you see the problem yet? Let’s shed a bit more light by snooping on the next meeting.

The four techies move to a conference room. The ScrumMaster says, “I’m here to make sure you have what you need, and to mentor you as needed. But you three have to self-organize.”

The other three look at each other. “Uh, I guess I’ll be the Java developer,” the Java developer says.

“I’ll be responsible for the user interface,” the UX person says.

“I guess I’ll be responsible for ops,” the admin volunteers.

Excellent! The team is now self-organized!

What’s wrong with this picture, of course, is that given the size of the team, the constraints of the self-organization were so narrow that there was really no organization to be done, self or not. And while this situation is an overly simplistic example, virtually all self-organizing teams, especially in the enterprise context, have so many explicit and implicit constraints placed upon them that their ability to self-organize is quite limited. As a result, the benefits the overall application creation effort can ever expect to get from such self-organization are paltry at best.

In fact, the behavior of self-organizing teams as well as their efficacy depend upon their goals and constraints. If a team has the wrong goals (or none at all) then self-organization won’t yield the desired benefits. Compare, for example, the hacker group Anonymous on the one hand with self-organizing groups like the Underground Railroad or the French Resistance in World War II on the other. Anonymous is self-organizing to be sure, but has no goals imposed externally. Instead, each individual or self-organized group within Anonymous decides on its own goals. The end result is both chaotic and unpredictable, and clearly makes a poor example for self-organization for teams within the enterprise.

In contrast, the Underground Railroad and the French Resistance had clear goals. What drove each effort to self-organize in the manner they did were their respective explicit constraints: get caught and you get thrown in jail or executed. Such drastically negative constraints led in both cases to the formation of semi-autonomous cells with limited inter-cell communication, so that the compromise of one cell wouldn’t lead to the compromise of others.

In the case of self-organizing application creation teams, goals should be appropriately high-level. “Code us a 10,000-line Java app” is clearly too low-level, while “improve our corporate bottom line” is probably too high-level. That being said, expressing the business goals (in terms of customer expectations as well as the bottom line) will lead to more effective self-organization than technical goals, since deciding on the specific technical goals should be a result of the self-organization (generally speaking).

The constraints on self-organizing teams are at least as important as the goals. While execution by firing squad is unlikely, there are always explicit constraints, for example, security, availability, and compliance requirements. Implicit constraints, however, are where most of the problems arise.

In the example at the beginning of this article, there was an implicit constraint that the team had precisely four members as listed. In real-world situations teams tend to be larger than this, of course, but if management assigns people to a team and then expects them to self-organize, there’s only so much organizing they can do given the implicit management-imposed constraint of team membership.

Motivation also introduces a messy set of implicit constraints. In enterprises, potential team members are generally on salary, and thus their pay doesn’t motivate them one way or another to work hard on a particular project. Instead, enterprises have HR processes for determining how well each individual is doing, and for making decisions on raises, reassignments, or firing – mostly independent from performance on specific projects. Such HR processes are implicit constraints that impact individuals’ motivation on self-organizing teams – what Adrian Cockcroft calls scar tissue.

A Hypothetical Model for True Self-Organization on Enterprise Application Creation Teams

What would an environment look like if the implicit constraints that result from traditionally run organizations, including management hierarchies and HR policies and procedures, were magically swept away? I’m still placing this discussion in the enterprise context, so business-driven project goals (goals that focus on customers/users and revenues/costs) as well as external, explicit constraints like security and governmental regulations remain. Within those parameters, here’s how it might work.

The organization has a large pool of professionals with a diversity of skills and seniority levels. When a business executive identifies a business need for an application, they enter it into an internal digital marketplace, specifying the business goals and the explicit constraints, including how much the business can expect to pay for the successful completion of the project given the benefits to the organization that the project will deliver, and the role the executive as project stakeholder (and other stakeholders) are willing and able to play on the project team. The financial constraint may appear as a fixed price budget or a contingent budget (with a specified list of contingencies).

Members of the professional pool can review all such projects and decide if they might want to participate. If so, they put themselves on the list for the project. Members can also review who has already added themselves to the list and have any discussions they like among that group of people, or other individuals in the pool they might want to reach out to. Based upon those discussions, any group of people can decide they want to take on the project based upon the financial constraints specified, or alternately, propose alternate financial arrangements to the stakeholders. Once the stakeholders and the team come to an agreement, the team gives their commitment to completing the project within the constraints specified. (Of course, if there are no takers, the stakeholder can increase the budget, or perhaps some kind of automated arbitrage like a reverse auction sets the prices.)

The team then organizes themselves however they see fit, and executes on the project in whatever manner they deem appropriate. They work with stakeholders as needed, and the team (including the stakeholders) always has the ability to adjust or renegotiate the terms of the agreement if the team deems it necessary. The team also decides how to divide up the money allotted to the project – how much to pay for tools, how much to pay for the operational environment, and how much to pay themselves.

Do your application creation teams self-organize to this extent? Probably not, as this example is clearly at an extreme. In the real world, the level of self-organization for a given team is a continuous spectrum, ranging from none (all organization is imposed by management) to the extreme example above. Most organizations fall in the middle, as they must work within hierarchical organizations and they don’t have the luxury (or the burden) of basing their own pay on market dynamics. But don’t fool yourself: simply telling a team to self-organize does not mean they have the ability to do so, given the goals and constraints that form the reality of the application creation process at most organizations.


Posted by Sandeep Chanda on July 31, 2014

In the previous post you learned how to set up an Express Node.js application in Microsoft Azure and make it a unit of continuous deployment using Git. An Express Node.js application without a data store to back it is not very useful, however. In this post you will explore setting up a MongoDB database through the Microsoft Azure marketplace that can then act as a repository for your Express Node.js web application to store large-scale unstructured data. Hosted in Azure, it is limited only by the ability of the platform to scale, which is virtually infinite.

Getting Started

The first thing you need to do is subscribe to the MongoLab service from the Microsoft Azure store. MongoLab is a fully hosted MongoDB cloud database that is available with all the major cloud providers, including Azure.

To add MongoLab service to your subscription, click New in your management portal, and select the Store (preview) option.

Note that, depending on your subscription, the store may or may not be available to you. Reach out to Azure support if you need more details.

Find MongoLab under the App Services category in the store and select it to add it to your subscription.

500 MB is free to use. Enter your subscription details in the form that is presented and then click Purchase to add the service to your subscription. You can now use the Mongoose Node.js driver to connect to the MongoLab database and start storing your model data.

Installation

To install the Mongoose driver, run the following command in your console:

npm install mongoose --save

You are now all set to connect to the MongoDB database hosted in Azure. Get the connection string for the hosted instance and then use it in your Express Node.js application controller code:

var mongoose = require('mongoose');
mongoose.connect('[connection string]');

You can use the model function to associate a model with a Mongoose schema and then perform operations on the model data, as in the sketch below.
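Here is a minimal sketch of that step; the Device model, its fields, and the sample document are illustrative placeholders rather than anything prescribed by MongoLab:

var mongoose = require('mongoose');
mongoose.connect('[connection string]');

// Define a schema and compile it into a model bound to the "devices" collection.
var deviceSchema = new mongoose.Schema({
    name: String,
    lastSeen: { type: Date, default: Date.now }
});
var Device = mongoose.model('Device', deviceSchema);

// Save a document and read it back.
var sensor = new Device({ name: 'thermostat-01' });
sensor.save(function (err) {
    if (err) return console.error(err);
    Device.find(function (err, devices) {
        if (err) return console.error(err);
        console.log(devices);
    });
});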


Posted by Sandeep Chanda on July 28, 2014

Express is a powerful, yet lightweight and flexible, web application framework for Node.js. In this post we will explore how you can create and deploy an Express application in Microsoft Azure.

Prerequisites

First and foremost you need Node.js. Once you have installed Node.js, use the command prompt to install Express.

npm install express

You can also use the -g switch to install express globally rather than to a specific directory.

In addition, you will need to create a web site in Microsoft Azure that will host the application. If you have the Azure SDK for Node.js, you already have the command-line tools; if not, use the following command to install the Azure command-line tool:

npm install azure-cli

Create an Express App

Once you have installed Node.js, use the command prompt to create the Express scaffolding using the express command. This will install the scaffolding templates for views, controllers and other relevant resources with Jade and Stylus support:

express --css stylus [Your App Name] 

Next, run the install command to install the dependencies:

npm install

This command will install the additional dependencies that are required by Express. The express command creates a package.json file that will be used by the Azure command tool to deploy the application and the dependencies in Azure.
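For reference, the generated package.json looks roughly like the following; the application name, version numbers, and start script shown here are illustrative and will vary with the generator version:

{
  "name": "your-app-name",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "3.x",
    "jade": "*",
    "stylus": "*"
  }
}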

The express command creates a folder structure for views, controllers and models to which you can add your own. To modify the default view, you can edit the index.jade file under the views folder and add your own mark-up code. The app.js file under the application folder will contain an instance of express:

var express = require('express');
var app = express();

You can now use HTTP verbs to start defining routes, as in the sketch below.
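Here is a minimal sketch of a couple of routes; the paths, the response payloads, and the port fallback are illustrative additions, not part of the generated scaffolding:

var express = require('express');
var app = express();

// Respond to GET requests on the root path.
app.get('/', function (req, res) {
    res.send('Hello from Express on Azure');
});

// Accept POST requests for a hypothetical widgets resource.
app.post('/widgets', function (req, res) {
    res.json({ status: 'created' });
});

// Azure Web Sites supplies the port through process.env.PORT.
app.listen(process.env.PORT || 3000);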

Deploy an Express App

In order to deploy the Express app to Azure, first install the Azure command-line tools for Node.js if you don’t have the SDK installed. Next, you need to get the publish settings from Azure and import them into your Node.js application using the following commands:

azure account download
azure account import <publish settings file path>

Next, you need to create a web site in Azure, and also create a local git repository inside your application folder.

azure site create [your site name] --git

You can now commit your files to your local git repository and then push them to Azure for deployment using the following command:

git push azure master
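For completeness, the full sequence from the application folder might look like this; the commit message is just a placeholder:

git add .
git commit -m "Initial deployment"
git push azure master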

You are now all set. The Express Node.js application is deployed in Azure.


Posted by Jason Bloomberg on July 24, 2014

Parenting is perhaps the most difficult job any of us is likely to have in our lifetimes, and we earnestly do our best as a rule. And yet, some parenting styles are clearly better than others.

The same is true of architecture. Even the best architects will admit that architecture is difficult, and even though we all try to do our best, in many cases architects are at the least ineffective, and at the worst, do more harm than good.

As it happens, there are some interesting parallels between parenting and architecting. Let’s start with the two most common bad parenting styles: too strict, and not strict enough.

The too strict parent lays down the rules. There are plenty of rules to go around, and breaking them leads to adverse consequences. Such parenting leads to resentment and rebellion from the children.

Unfortunately, most architecture falls into the overly strict category. Architecture review boards that give thumbs up or thumbs down on everybody’s work. Copious design documents that everybody is supposed to follow. Policies and procedures out the wazoo. A rigid sense of how everything is supposed to work.

The result? No flexibility. Excess costs. Increased risk of spectacular failure. And of course, resentment and rebellion from the masses.

However, the opposite type of parenting style is also quite poor: the “anything goes” parent with no rules. Sure, if you’re a teenager it sounds good to have such a “cool” parent – but with no guidelines, parents aren’t teaching their children the basics of living in society. The common result: antisocial or dangerous behaviors like drug use, promiscuity, etc.

The enterprise parallel to the anything goes parent isn’t anything goes architects – it’s no architects at all (even though some people may have the architect title). Without any guidance, the architecture grows organically into a rats’ nest of complexity. No rules leads to a big mess, as well as dangerous behaviors like insufficient attention to security, disaster recovery, etc.

The best parent, of course, is the happy medium. A parent who establishes clear but reasonable guidelines that don’t prevent the kids from living their lives as they like, but keep them out of serious trouble and help them establish behaviors that will make them successful adults.

Just so with the best architects. Focus on what’s really important to architect, like your security, disaster recovery, and regulatory compliance. Provide clear but reasonable guidelines for interoperability among various teams, projects, and software. Act as a mentor and evangelist for architecture, without limiting the flexibility that people need to do their jobs well. And by all means, don’t spend too much time on artifacts, documentation, rules, policies, procedures, and other “stuff.” Yes, you sometimes need these things – but good architects know that the very minimum “stuff” that will get the job done is all the stuff you need.


Posted by Jason Bloomberg on July 18, 2014

Making up new words for old concepts – or using old words for new concepts – goes on all the time in the world of marketing, so you’d think we’d all be used to it by now. But sometimes these efforts at out-buzzing the next guy’s buzzword just end up sounding silly. Here are three of the silliest going around today.

1. Human-to-Human, aka H2H. This one came from Bryan Kramer of PureMatter. According to Kramer, “there is no more B2B or B2C. It’s H2H: Human to Human.” In other words, H2H is the evolution of eCommerce after business-to-business and business-to-consumer. The problem? Commerce has been H2H since the Stone Age. The next generation of eCommerce is two people haggling over a fish?

2. Business Technology. This winner comes from a recent article by Professor Robert Plant in the venerable Harvard Business Review. Dr. Plant espouses that “we should no longer be talking about ‘IT’ as a corporate entity. We should be talking about BT—business technology.” Business technology? Seriously? How long have businesses used technology? Earlier than punch card readers. Earlier even than typewriters. Perhaps blacksmiths’ tools? IT – information technology – is a worn out term perhaps, but at least we know it has something to do with information.

3. Digital. This one is all over the place, so it’s hard to point fingers. But I will anyway: this article from MIT Sloan Management Review and Capgemini Consulting, for example, which defines digital transformation as “the use of new digital technologies (social media, mobile, analytics or embedded devices) to enable major business improvements (such as enhancing customer experience, streamlining operations or creating new business models).” What, pray tell, does the word digital mean? It refers to a computer that uses bits, as opposed to analog computers that use, what? Sine waves? In other words, 1940s technology.

Ironically, in spite of the digital silliness, the aforementioned article is actually quite good, and I highly recommend it. Even more ironically, I find myself describing what I do as helping organizations with their Digital Transformation initiatives. I guess if you can’t beat ‘em, you might as well join ‘em.


Posted by Jason Bloomberg on July 9, 2014

Nowhere is the poor architect’s quest for respect more difficult than on Agile development teams. Even when Agilists admit the need for architecture, they begrudgingly call for the bare minimum necessary to get the job done – what they often call the minimum viable architecture. The last thing they want is ivory tower architects, churning out reams of design artifacts for elaborate software castles in the sky, when the poor Agile team simply wants to get working software out the door quickly.

My counterpart in Agile Architecture punditry, Charlie Bess of HP, said as much in his recent column for CIO Magazine ominously entitled Is there a need for agile architecture? His conclusion: create only an architecture that is “good enough - don’t let the perfect architecture stand in the way of one that is good enough for today.”

Bess isn’t alone in this conclusion (in fact, he based it on conversations with many Agilists). But any developer who’s been around the block a few times will recognize the “good enough” mantra as a call to incur technical debt – which may or may not be a good thing, depending upon your perspective. Let’s dive into the details and see if we’re asking for trouble here, and if so, how do we get out of it.

Technical debt refers to making short-term software design compromises in the current iteration for the sake of expedience or cost savings, even though somebody will have to fix the resulting code sometime in the future. However, there are actually two kinds of technical debt (or perhaps real vs. fake technical debt, depending on who’s talking). The “fake” or “type 1” technical debt essentially refers to sloppy design and bad coding. Yes, in many cases bad code is cheaper and faster to produce than good code, and yes, somebody will probably have to clean up the mess later. But generally speaking, the cost of cleaning up bad code outweighs any short-term benefits of slinging it in the first place – so this sloppy type of technical debt is almost always frowned upon.

In contrast, type 2 (or “real”) technical debt refers to intentionally designed shortcuts that lead to working code short-term, but will require refactoring in a future iteration. The early code isn’t sloppy as in type 1, but rather has an intentional lack of functionality or an intentional design simplification in order to achieve the goals of the current iteration in such a way that facilitates future refactoring. The key point here is that well-planned type 2 technical debt is a good thing, and in fact, is an essential part of proper Agile software design.

The core technical debt challenges for Agile teams, therefore, are making sure (a) any technical debt is type 2 (no excuses for bad code!) and (b) that the technical debt incurred is well-planned. So, what does it mean for technical debt to be well-planned? Let’s take a look at the origin of the “debt” metaphor. Sometimes borrowing money is a good thing. If you want to buy a house, taking out a 30-year mortgage at 4% is likely a good idea. Your monthly payments should be manageable, your interest may be tax deductible, and if you’re lucky, the house will go up in value. Such debt is well-planned. Let’s say instead your loser of a brother buys a house, but borrows the money from a loan shark at 10% per week. The penalty for late payment? Broken legs. We can all agree your brother didn’t plan his debt very well.

Just so with technical debt. Over time the issues that result from code shortcuts start to compound, just as interest does – and the refactoring effort required to address those issues is always more than it would have taken to create the code “right” in the first place. But I put “right” in quotes because the notion that you can fully and completely gather and understand the requirements for a software project before you begin coding, and thus code it “right” the first time is the fallacy of the waterfall approach that Agile was invented to solve. In other words, we don’t want to make the mistake of assuming the code can be complete and shortcut-free in early iterations, so we must plan carefully for technical debt in order to deliver better software overall – a fundamental Agile principle.

So, where does this discussion leave Bess’s exhortation that you should only create architecture that is just good enough? The problem: “just good enough” architecture is sloppy architecture. It’s inherently and intentionally short-sighted, which means that we’re avoiding any planning of architectural debt because we erroneously think that makes us “Agile.” But in reality, the planning part of “well-planned technical debt” is a part of your architecture that goes beyond “just good enough,” and leaving it out actually makes us less Agile.

Bloomberg Agile Architecture™ (BAA) has a straightforward answer to this problem, as core Agile Architecture activities happen at the “meta” level, above the software architecture level. By meta we mean the concept applied to itself, like processes for creating processes, methodologies for creating methodologies, and in this case, an architecture for creating architectures – what we call a meta-architecture. When we work at the meta level, we’re not thinking about the things themselves – we’re thinking about how those things change. The fundamental reason to work at the meta level is to deal with change directly as part of the architecture.

In order to adequately plan for architecture technical debt on an Agile development project, then, we must create a meta-architecture that outlines the various phases our architecture must go through as we work our way through the various iterations of our project. The first iteration’s architecture can thus be “just enough” for that iteration, but doesn’t stand alone as the architecture for the entire project, as the meta-architecture provides sufficient design parameters for iterative improvements to the architecture.

However, it’s easier said than done to get this meta-architecture right. In fact, there are two primary pitfalls here that Agilists are likely to fall into. First, they may incorrectly assume the meta-architecture is really just a part of the architecture and thus conclude that any effort put into the meta-architecture should be avoided, as it would be more than “just enough” and would thus constitute an example of overdesign. The second pitfall is to assume the activities that go into creating the meta-architecture are similar to the activities that go into creating the architecture, thus confusing the two – which can lead to architecture masquerading as meta-architecture, which would actually be an instance of overdesign in reality.

In fact, working at the meta-architecture level represents a different set of tasks and challenges from software architecture, and the best choice for who should create the meta-architecture might be different people from the architects responsible for the architecture. These “meta-architects” must focus on how the stakeholders will require the software to change over time, and how to best support that change by evolving the architecture that drives the design of the software (learn to be a meta-architect in my BAA Certification course).

Such considerations, in fact, go beyond software architecture altogether, and are closer to Enterprise Architecture. In essence, when I talk about Bloomberg Agile Architecture, I’m actually talking about meta-architecture, as the point to BAA is to architect for business agility. Building software following Agile methods isn't enough. You must also implement architecture that is inherently Agile, and for that, you need meta-architecture.

