Posted by Jason Bloomberg
on September 10, 2014
The new Apple Watch has many cool features to be sure, but I just don’t like the fact that Apple discriminates on the basis of handedness.
The Apple Watch comes in a right-handed configuration. Yes, there’s a left-handed setting, but you need to switch the band around, and then the button on the side is in an awkward lower position.
In other words, left-handed people either have to suck it up and use the watch in the right-handed configuration, or go through the hassle of reconfiguring it only to end up with an inferior design. Thanks a lot, Apple. But hey, we’re only lefties, and we’re only being inconvenienced.
We should be used to it, right? After all, user interfaces have been right-handed for years. To this day the arrow cursor is right-handed, and scrollbars are always on the right. And for software that does have a left-handed configuration, more often than not some aspect of the UI doesn’t work properly in left-handed mode.
If we were a legally protected minority then it wouldn't be a question of being inconvenienced, right? Were separate water fountains simply inconvenient?
10% of the population is left-handed. And all us lefties know that left-handedness correlates with intelligence, so I wouldn’t be surprised if the percentage is higher within the storied walls of Apple. So, why didn’t Apple release a left-handed version of the Apple Watch?
I think Apple is being offensive by paying lip service to handedness, but giving lefties a second-class experience nevertheless. But that's just me. Who cares what lefties think?
Posted by Jason Bloomberg
on September 5, 2014
“Never believe anything you read on the Internet.” – Abraham Lincoln
Honest Abe never spoke a truer word – even though he didn’t say anything of the sort, of course. And while we can immediately tell this familiar Facebook saying is merely a joke, there are many documents on the Internet that have such a veneer of respectability that we’re tempted to take them at their word – even though they may be just as full of nonsense as the presidential proclamation above.
Among the worst offenders are survey reports, especially when they are surveys of professionals about emerging technologies or approaches. Fortunately, it’s possible to see through the bluster, if you know the difference between a good survey and a bad one. Forewarned is forearmed, as the saying goes – even though Lincoln probably didn’t say that.
The Basics of a Good Survey
The core notion of a survey is that a reputable firm asks questions of a group of people who represent a larger population. If the surveyed group accurately represents the larger population, the answers are truthful, and the questions are good, then the results are likely to be relatively accurate (although statistical error is always a factor). Unfortunately, each of these criteria presents an opportunity for problems. Here are a few things to look for.
Does the sample group represent the larger population? The key here is that the sample group must be selected randomly from the population, and any deviation from randomness must be compensated for in the analysis. Ensuring randomness, however, is quite difficult, since respondents may or may not want to participate, or may or may not be easy to find or identify.
Here’s how reputable political pollsters handle deviations from randomness. First, they have existing demographic data about the population in question (say, voters in a county). Based on census data, they know what percent are male and female, what percent are registered Democrat or Republican, what the age distribution of the population is, etc. Then they select, say, 100 telephone numbers at random in the county, and call each of them. Some go to voicemail or don’t answer, and many people who do answer refuse to participate. For the ones that do participate, they ask demographic questions as well as the questions the survey is actually interested in. If they find, say, that 50% of voters in a county are female, but 65% of respondents were female, they have to adjust the results accordingly. Making such adjustments for all factors – including who has phones, which numbers are mobile, etc. – is complex and error-prone, but is the best they can do to get the most accurate result possible.
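The adjustment described above can be sketched as a simple post-stratification calculation. This is an illustrative toy, not any pollster’s actual methodology: the 50%/65% gender split comes from the paragraph above, and the support percentages are made-up numbers for the sake of the example.

```python
# Post-stratification weighting: scale each respondent's weight so the
# weighted sample matches the known population demographics.

population_share = {"female": 0.50, "male": 0.50}   # known from census data
sample_share     = {"female": 0.65, "male": 0.35}   # who actually responded

# Each respondent gets the weight (population share / sample share)
# for their demographic group.
weights = {group: population_share[group] / sample_share[group]
           for group in population_share}

# Hypothetical: 70% of female respondents and 40% of male respondents
# answered "yes" to the question of interest.
support = {"female": 0.70, "male": 0.40}

raw_estimate = sum(sample_share[g] * support[g] for g in support)
weighted_estimate = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(round(raw_estimate, 3))       # 0.595 -- overweights female opinion
print(round(weighted_estimate, 3))  # 0.55  -- reflects the true 50/50 mix
```

The raw average overstates the "yes" share because women were overrepresented among respondents; the reweighted figure is what the pollster would actually report. Real polls repeat this across many demographic dimensions at once, which is exactly where the complexity and the potential for error creep in.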
Compare that political polling selection process to how, say, Digital Transformation, Big Data, or Cloud Computing adoption surveys assemble their populations. Perhaps the survey company emails their mailing list and asks for volunteers. Maybe it’s a Web page or a document handed out at a conference. Or worst of all, perhaps survey participants are hand-selected by the sponsor of the survey. None of these methods produces a sample that’s even close to being random. The result? The survey cannot be expected to represent the opinions of any population other than the survey participants themselves.
Are the answers truthful? I’m willing to posit that people are generally honest folks, so the real question here is, what motivations would people have not to be completely honest on a survey? For emerging technologies and approaches the honesty question is especially important, because people like to think they’re adopting some new buzzword, even if they’re not. Furthermore, people like to think they understand a new buzzword, even if they don’t. People also tend to exaggerate their adoption: they may say they’re “advanced Cloud adopters” when they simply use online email, for example. Finally, executives may have different responses than people in the trenches. CIOs are more likely to say they’re doing DevOps than developers in the same organization, for example.
Are the questions good? This criterion is the most subtle, as the answer largely amounts to a matter of opinion. If the surveying company or the sponsor thinks the questions are good, then aren’t they? Perhaps, but the real question here is one of ulterior motives. Is the sponsor looking for the survey to achieve a particular result, and thus is influencing the questions accordingly? Were certain questions thrown out after responses were received, because those answers didn’t make the surveying company or sponsor happy? If scientific researchers were to exclude certain questions because they didn’t like the results, they’d get fired and blacklisted. Unfortunately, there are no such punishments in the world of business surveys.
So, How Do You Tell?
I always recommend taking surveys with a large grain of salt regardless, but the best way to get a sense of the quality of a survey is to look at the methodology section. The survey you’re wondering about doesn’t have a methodology section, you say? Well, it might be good for wrapping fish, but not much else, since every survey report should have one.
Even if it has one, take a look at it with a critical eye, not just for what it says, but for what it doesn’t say. Then, if some critical bit of information is missing, assume the worst. For example, here is the entire methodology section from a recent Forrester Research “Thought Leadership Paper” survey on Business Transformation commissioned by Tata Consultancy Services (TCS):
In this study, Forrester interviewed 10 business transformation leaders and conducted an online survey of 100 US and UK decision-makers with significant involvement in business transformation projects. Survey participants included Director+ decision-makers in IT and line of business. Questions provided to the participants asked about their goals, metrics, and best practices around business transformation projects. Respondents were offered an incentive as a thank you for time spent on the survey. The study began in February 2014 and was completed in May 2014.
How did Forrester ensure the randomness of their survey sample? They didn’t. Is there any reason to believe the survey sample accurately represents a larger population? Nope. How did they select the people they surveyed? It doesn’t say, except to point out they have significant involvement in business transformation projects. So if we assume the worst, we should assume the respondents were hand-selected by the sponsor. Does the report provide an analysis of the answers to every question asked? It doesn’t say. The methodology statement does point out respondents were offered an incentive for participating, however. This admission indicates Forrester is a reputable firm to be sure, but doesn’t say much for the accuracy or usefulness of the results of the report.
So, what should a business survey report methodology look like? Take a look at this one from the International Finance Corporation (IFC), a member of the World Bank Group. The difference is clear. Consider yourself forewarned!
Posted by Jason Bloomberg
on August 26, 2014
I attended Dataversity’s NoSQL Now! Conference last week, and among the many vendors I spoke with, one story caught my interest. This vendor (who alas must remain nameless) is a leader in the NoSQL database market, specializing in particular in supporting XML as a native document type.
Posted by Jason Bloomberg
on August 21, 2014
In my latest Cortex newsletter I referred to “tone deaf” corporations that have flexible technology like corporate social media in place, but lack the organizational flexibility to use it properly. The result is a negative customer experience that defeats the entire purpose of interacting with customers.
Not all large corporations are tone deaf, however. So instead of finding an egregious example of tone deafness and lambasting it, I actually found an example of a corporation that uses social media in an exemplary way. Let’s see what Delta Airlines is doing right.
The screenshot above is from the Delta Facebook page. Delta regularly posts promotional and PR pieces to the page, and in this case, they are telling the story of a long-time employee. Giving a human face to the company is a good practice to be sure, but doesn’t leverage the social aspect of Facebook – how Delta handles the comments does.
As often happens, a disgruntled customer decided to post a grievance. Delta could have answered with a formulaic response (tone deaf) or chosen not to respond at all (even more tone deaf). But instead, a real person responded with an on-point apology. Furthermore, this real person signed the response with her name (I’ll assume Alex is female for the sake of simplicity) – so even though she is posting under the Delta corporate account, the customer, as well as everybody else viewing the interchange, knows a human being at Delta is responding.
If Alex’s response ended at a simple apology, however, such a response would still be tone deaf, because it wouldn’t have addressed the problem. But in this case, she also provided a link to the complaints page and actually recommended to the customer that she file a formal complaint. In other words, Delta uses social media to empower its customers – the one who complained, and of course, everyone else who happens to see the link.
It could be argued that Alex was simply handing off the customer to someone else, thus passing the buck. In this case, however, I believe the response was the best that could be expected, as the details of the customer’s complaint don’t belong in a public forum like social media. Naturally, the complaints Web site might drop the ball, but as far as Delta’s handling of social media goes, they have shown a mastery of the medium.
So, who is Alex? Is she in customer service or public relations? The answer, of course, is both – which shows a customer-facing organizational strategy at Delta that many other companies struggle with. Where is your customer service? Likely in a call center, which you may have even outsourced. Where is your PR? Likely out of your marketing department, or yes, even outsourced to a PR firm.
How do these respective teams interact with customers? The call center rep follows a script, and if a problem deviates, the rep has to escalate to a manager. Any communications from the PR firm go through several approvals within the firm and at the client before they hit the wire. In other words, the power rests centrally with corporate management.
However, not only does a social media response team like Alex’s bring together customer service and PR, but whatever script she follows can only be a loose guideline, or responses would sound formulaic, and hence tone deaf. Instead, Delta has empowered Alex and her colleagues to take charge of the customer interaction, and in turn, Alex empowers customers to take control of their interactions with Delta.
The secret to corporate social media success? Empowerment. Trust the people on the front lines to interact with customers, and trust the customer as well. Loosen the ties to management. Social media are social, not hierarchical. After all, Digital Transformation is always about transforming people.
Posted by Jason Bloomberg
on August 12, 2014
Two stories on the Internet of Things (IoT) caught my eye this week. First, IDC’s prediction that the IoT market will balloon from US$1.9 trillion in 2013 to $7.1 trillion in 2020. Second, the fact it took hackers 15 seconds to hack the Google Nest thermostat – the device Google wants to make the center of the IoT for the home.
These two stories aren’t atypical, either. Gartner has similarly overblown market growth predictions, although they do admit a measure of overhypedness in the IoT market (ya think?). And as far as whether Nest is an unusual instance, unfortunately, the IoT is rife with security problems.
What are we to make of these opposite, potentially contradictory trends? Here are some possibilities:
We simply don’t care that the IoT is insecure. We really don’t mind that everyone from Russian organized criminals to the script kiddie down the block can hack the IoT. We want it anyway. The benefits outweigh any drawbacks.
Vendors will sufficiently address the IoT’s security issues, so by 2020, we’ll all be able to live in a reasonably hacker-free (and government spying-free) world of connected things. After all, vendors have done such a splendid job making sure our everyday computers are hack and spy-free so far, right?
Perhaps one or both of the above possibilities will take place, but I’m skeptical. Why, then, all the big numbers? Perhaps it’s the analysts themselves? Here are two more possibilities:
Vendors pay analysts (directly or indirectly) to make overblown market size predictions, because such predictions convince customers, investors, and shareholders to open their wallets. Never mind the hacker behind the curtain, we’re the great and terrible Wizard of IoT!
Analysts simply ignore factors like the public perception of security when making their predictions. Analysts make their market predictions by asking vendors what their revenues were over the last few years, putting the numbers into a spreadsheet, and dragging the cells to the right. Voila! Market predictions. Only there’s no room in the spreadsheet for adverse influences like security perception issues.
Maybe the analysts are the problem. Or just as likely, I got up on the wrong side of the bed this morning. Be that as it may, here’s a contrarian prediction for you:
Both consumers and executives will get fed up with the inability of vendors to secure their gear, and the IoT will wither on the vine.
The wheel is spinning, folks. Which will it be? Time to place your bets!
Posted by Jason Bloomberg
on August 8, 2014
One of the most fascinating aspects of the Agile Architecture drum I’ve been beating for the last few years is how multifaceted the topic is. Sometimes the focus is on Enterprise Architecture. Other times I’m talking about APIs and Services. And then there is the data angle, as well as the difficult challenge of semantic interoperability. And finally, there’s the Digital Transformation angle, driven by marketing departments who want to tie mobile and social to the Web but struggle with the deeper technology issues.
As it happens, I’ll be presenting on each of these topics over the next few weeks. First up, a Webinar on Agile Architecture Challenges & Best Practices I’m running jointly with EITA Global on Tuesday August 19 at 10:00 PDT/1:00 EDT. I’ll provide a good amount of depth on Agile Architecture – both architecture for Agile development projects as well as architecture for achieving greater business agility. This Webinar lasts a full ninety minutes, and covers the central topics in Bloomberg Agile Architecture™. If you’re interested in my Bloomberg Agile Architecture Certification course, but don’t have the time or budget for a three-day course (or you simply don’t want to wait for the November launch), then this Webinar is for you.
Next up: my talk at the Dataversity Semantic Technology & Business Conference in San Jose CA, which is co-located with their NoSQL Now! Conference August 19 – 21. My talk is on Dynamic Coupling: The Pot of Gold under the Semantic Rainbow, and I’ll be speaking at 3:00 on Thursday August 21st. I’ll be doing a deep dive into the challenges of semantic integration at the API level, and how Agile Architectural approaches can resolve such challenges. If you’re in the Bay Area the week of August 18th and you’d like to get together, please drop me a line.
If you’re interested in lighter, more business-focused fare, come see me at The Innovation Enterprise’s Digital Strategy Innovation Summit in San Francisco CA September 25 – 26. I’ll be speaking the morning of Thursday September 25th on the topic Why Enterprise Digital Strategies Must Drive IT Modernization. Yes, I know – even for this marketing-centric Digital crowd, I’m still talking about IT, but you’ll get to see me talk about it from the business perspective: no deep dives into dynamic APIs or Agile development practices, promise! I’ll also be moderating a panel on Factoring Disruptive Tech into Business with top executives from Disney, Sabre, Sephora, and more.
I’m particularly excited about the Digital Strategy Innovation Summit because it’s a new crowd for me. I’ve always tried to place technology into the business context, but so far most of my audience has been technical. Hope you can make it to at least one of these events, if only to see my Digital Transformation debut!
Posted by Jason Bloomberg
on August 1, 2014
What’s wrong with this scenario? Bob, your VP of Engineering, brings a ScrumMaster, a Java developer, a UX (user experience) specialist, and a Linux admin into his office. “We need to build this widget app,” he says, describing what a product manager told him she wanted. “So go ahead and self-organize.”
Bob’s intentions are good, right? After all, Agile teams are supposed to be self-organizing. Instead of giving the team specific directions, he laid out the general goal and then asked the team to organize themselves in order to achieve the goal. What could be more Agile than that?
Do you see the problem yet? Let’s shed a bit more light by snooping on the next meeting.
The four techies move to a conference room. The ScrumMaster says, “I’m here to make sure you have what you need, and to mentor you as needed. But you three have to self-organize.”
The other three look at each other. “Uh, I guess I’ll be the Java developer,” the Java developer says.
“I’ll be responsible for the user interface,” the UX person says.
“I guess I’ll be responsible for ops,” the admin volunteers.
Excellent! The team is now self-organized!
What’s wrong with this picture, of course, is that given the size of the team, the constraints of the self-organization were so narrow that there was really no organization to be done, self or not. And while this situation is an overly simplistic example, virtually all self-organizing teams, especially in the enterprise context, have so many explicit and implicit constraints placed upon them that their ability to self-organize is quite limited. As a result, the benefits the overall application creation effort can ever expect to get from such self-organization are paltry at best.
In fact, both the behavior and the efficacy of self-organizing teams depend upon their goals and constraints. If a team has the wrong goals (or none at all), then self-organization won’t yield the desired benefits. Compare, for example, the hacker group Anonymous on the one hand with self-organizing groups like the Underground Railroad or the French Resistance in World War II on the other. Anonymous is self-organizing to be sure, but has no externally imposed goals. Instead, each individual or self-organized group within Anonymous decides on its own goals. The end result is both chaotic and unpredictable, and clearly makes a poor model of self-organization for teams within the enterprise.
In contrast, the Underground Railroad and the French Resistance had clear goals. What drove each effort to self-organize in the manner they did were their respective explicit constraints: get caught and you get thrown in jail or executed. Such drastically negative constraints led in both cases to the formation of semi-autonomous cells with limited inter-cell communication, so that the compromise of one cell wouldn’t lead to the compromise of others.
In the case of self-organizing application creation teams, goals should be appropriately high-level. “Code us a 10,000-line Java app” is clearly too low-level, while “improve our corporate bottom line” is probably too high-level. That being said, expressing the business goals (in terms of customer expectations as well as the bottom line) will lead to more effective self-organization than technical goals, since deciding on the specific technical goals should be a result of the self-organization (generally speaking).
The constraints on self-organizing teams are at least as important as the goals. While execution by firing squad is unlikely, there are always explicit constraints, for example, security, availability, and compliance requirements. Implicit constraints, however, are where most of the problems arise.
In the example at the beginning of this article, there was an implicit constraint that the team had precisely four members as listed. In real-world situations teams tend to be larger than this, of course, but if management assigns people to a team and then expects them to self-organize, there’s only so much organizing they can do given the implicit management-imposed constraint of team membership.
Motivation also introduces a messy set of implicit constraints. In enterprises, potential team members are generally on salary, and thus their pay doesn’t motivate them one way or another to work hard on a particular project. Instead, enterprises have HR processes for determining how well each individual is doing, and for making decisions on raises, reassignments, or firing – mostly independent from performance on specific projects. Such HR processes are implicit constraints that impact individuals’ motivation on self-organizing teams – what Adrian Cockcroft calls scar tissue.
A Hypothetical Model for True Self-Organization on Enterprise Application Creation Teams
What would an environment look like if the implicit constraints that result from traditionally run organizations, including management hierarchies and HR policies and procedures, were magically swept away? I’m still placing this discussion in the enterprise context, so business-driven project goals (goals that focus on customers/users and revenues/costs) as well as external, explicit constraints like security and governmental regulations remain. Within those parameters, here’s how it might work.
The organization has a large pool of professionals with a diversity of skills and seniority levels. When a business executive identifies a business need for an application, they enter it into an internal digital marketplace, specifying the business goals and the explicit constraints. These constraints include how much the business can expect to pay for successful completion of the project, given the benefits it will deliver to the organization, as well as the role the executive (and any other stakeholders) are willing and able to play on the project team. The financial constraint may appear as a fixed-price budget or a contingent budget (with a specified list of contingencies).
Members of the professional pool can review all such projects and decide whether they might want to participate. If so, they put themselves on the list for the project. Members can also review who has already joined the list and hold any discussions they like among that group, or with other individuals in the pool they might want to reach out to. Based on those discussions, any group of people can decide to take on the project under the financial constraints specified, or alternatively propose different financial arrangements to the stakeholders. Once the stakeholders and the team come to an agreement, the team commits to completing the project within the constraints specified. (Of course, if there are no takers, the stakeholder can increase the budget, or perhaps some kind of automated arbitrage like a reverse auction sets the prices.)
The team then organizes themselves however they see fit, and executes on the project in whatever manner they deem appropriate. They work with stakeholders as needed, and the team (including the stakeholders) always has the ability to adjust or renegotiate the terms of the agreement if the team deems it necessary. The team also decides how to divide up the money allotted to the project – how much to pay for tools, how much to pay for the operational environment, and how much to pay themselves.
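The posting-and-bidding flow above can be sketched as a small data model. This is one hypothetical way to represent it, not a reference design; every class, field, and name here is invented for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class BudgetType(Enum):
    FIXED = auto()       # fixed-price budget
    CONTINGENT = auto()  # budget with a specified list of contingencies

@dataclass
class ProjectPosting:
    """A business need posted to the internal digital marketplace."""
    stakeholder: str
    business_goals: list[str]        # customer/user and revenue/cost goals
    explicit_constraints: list[str]  # e.g. security, regulatory requirements
    budget: float
    budget_type: BudgetType
    stakeholder_role: str            # what the stakeholders commit to doing
    interested: set[str] = field(default_factory=set)

    def express_interest(self, professional: str) -> None:
        """A pool member puts themselves on the list for this project."""
        self.interested.add(professional)

    def form_team(self, members: set[str], agreed_budget: float) -> "Team":
        """Stakeholder and a self-selected group agree on terms."""
        assert members <= self.interested, "team must come from the interested list"
        return Team(project=self, members=members, budget=agreed_budget)

@dataclass
class Team:
    project: ProjectPosting
    members: set[str]
    budget: float  # the team decides for itself how to divide this up
```

Note that nothing in the model assigns roles or divides the money: those decisions deliberately stay with the team, which is the whole point of the marketplace. Management’s only levers are the posted goals, constraints, and budget.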
Do your application creation teams self-organize to this extent? Probably not, as this example is clearly at an extreme. In the real world, the level of self-organization for a given team is a continuous spectrum, ranging from none (all organization is imposed by management) to the extreme example above. Most organizations fall in the middle, as they must work within hierarchical organizations and they don’t have the luxury (or the burden) of basing their own pay on market dynamics. But don’t fool yourself: simply telling a team to self-organize does not mean they have the ability to do so, given the goals and constraints that form the reality of the application creation process at most organizations.
Posted by Jason Bloomberg
on July 24, 2014
Parenting is perhaps the most difficult job any of us is likely to have in our lifetimes, and we earnestly do our best as a rule. And yet, some parenting styles are clearly better than others.
The same is true of architecture. Even the best architects will admit that architecture is difficult, and even though we all try to do our best, in many cases architects are at the least ineffective, and at the worst, do more harm than good.
As it happens, there are some interesting parallels between parenting and architecting. Let’s start with the two most common bad parenting styles: too strict, and not strict enough.
The too strict parent lays down the rules. There are plenty of rules to go around, and breaking them leads to adverse consequences. Such parenting leads to resentment and rebellion from the children.
Unfortunately, most architecture falls into the overly strict category. Architecture review boards that give thumbs up or thumbs down on everybody’s work. Copious design documents that everybody is supposed to follow. Policies and procedures out the wazoo. A rigid sense of how everything is supposed to work.
The result? No flexibility. Excess costs. Increased risk of spectacular failure. And of course, resentment and rebellion from the masses.
However, the opposite type of parenting style is also quite poor: the “anything goes” parent with no rules. Sure, if you’re a teenager it sounds good to have such a “cool” parent – but with no guidelines, parents aren’t teaching their children the basics of living in society. The common result: antisocial or dangerous behaviors like drug use, promiscuity, etc.
The enterprise parallel to the anything-goes parent isn’t anything-goes architects – it’s no architects at all (even though some people may have the architect title). Without any guidance, the architecture grows organically into a rat’s nest of complexity. Having no rules leads to a big mess, as well as dangerous behaviors like insufficient attention to security, disaster recovery, etc.
The best parent, of course, is the happy medium. A parent who establishes clear but reasonable guidelines that don’t prevent the kids from living their lives as they like, but keep them out of serious trouble and help them establish behaviors that will make them successful adults.
Just so with the best architects. Focus on what’s really important to architect, like your security, disaster recovery, and regulatory compliance. Provide clear but reasonable guidelines for interoperability among various teams, projects, and software. Act as a mentor and evangelist for architecture, without limiting the flexibility that people need to do their jobs well. And by all means, don’t spend too much time on artifacts, documentation, rules, policies, procedures, and other “stuff.” Yes, you sometimes need these things – but good architects know that the very minimum “stuff” that will get the job done is all the stuff you need.
Posted by Jason Bloomberg
on July 18, 2014
Making up new words for old concepts – or using old words for new concepts – goes on all the time in the world of marketing, so you’d think we’d all be used to it by now. But sometimes these efforts at out-buzzing the next guy’s buzzword just end up sounding silly. Here are three of the silliest going around today.
1. Human-to-Human, aka H2H. This one came from Bryan Kramer of PureMatter. According to Kramer, “there is no more B2B or B2C. It’s H2H: Human to Human.” In other words, H2H is the evolution of eCommerce after business-to-business and business-to-consumer. The problem? Commerce has been H2H since the Stone Age. The next generation of eCommerce is two people haggling over a fish?
2. Business Technology. This winner comes from a recent article by Professor Robert Plant in the venerable Harvard Business Review. Dr. Plant espouses that “we should no longer be talking about ‘IT’ as a corporate entity. We should be talking about BT—business technology.” Business technology? Seriously? How long have businesses used technology? Earlier than punch card readers. Earlier even than typewriters. Perhaps blacksmiths’ tools? IT – information technology – is a worn out term perhaps, but at least we know it has something to do with information.
3. Digital. This one is all over the place, so it’s hard to point fingers. But I will anyway: this article from MIT Sloan Management Review and Capgemini Consulting, for example, which defines digital transformation as “the use of new digital technologies (social media, mobile, analytics or embedded devices) to enable major business improvements (such as enhancing customer experience, streamlining operations or creating new business models).” What, pray tell, does the word digital mean? It refers to a computer that uses bits, as opposed to analog computers that use, what? Sine waves? In other words, 1940s technology.
Ironically, in spite of the digital silliness, the aforementioned article is actually quite good, and I highly recommend it. Even more ironically, I find myself describing what I do as helping organizations with their Digital Transformation initiatives. I guess if you can’t beat ‘em, you might as well join ‘em.
Posted by Jason Bloomberg
on July 9, 2014
Nowhere is the poor architect’s quest for respect more difficult than on Agile development teams. Even when Agilists admit the need for architecture, they begrudgingly call for the bare minimum necessary to get the job done – what they often call the minimum viable architecture. The last thing they want is ivory tower architects, churning out reams of design artifacts for elaborate software castles in the sky, while the poor Agile team simply wants to get working software out the door quickly.
My counterpart in Agile Architecture punditry, Charlie Bess of HP, said as much in his recent column for CIO Magazine, ominously entitled “Is there a need for agile architecture?” His conclusion: create only an architecture that is “good enough - don’t let the perfect architecture stand in the way of one that is good enough for today.”
Bess isn’t alone in this conclusion (in fact, he based it on conversations with many Agilists). But any developer who’s been around the block a few times will recognize the “good enough” mantra as a call to incur technical debt – which may or may not be a good thing, depending upon your perspective. Let’s dive into the details and see if we’re asking for trouble here, and if so, how to get out of it.
Technical debt refers to making short-term software design compromises in the current iteration for the sake of expedience or cost savings, even though somebody will have to fix the resulting code sometime in the future. However, there are actually two kinds of technical debt (or perhaps real vs. fake technical debt, depending on who’s talking). The “fake” or “type 1” technical debt essentially refers to sloppy design and bad coding. Yes, in many cases bad code is cheaper and faster to produce than good code, and yes, somebody will probably have to clean up the mess later. But generally speaking, the cost of cleaning up bad code outweighs any short-term benefits of slinging it in the first place – so this sloppy type of technical debt is almost always frowned upon.
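To make the distinction concrete, here is a minimal, invented sketch of what “type 1” debt looks like in code – the example function and its shortcut are mine, not from Bess’s column:

```python
# Illustrative "type 1" technical debt: quick-and-dirty code that works,
# but only because it cuts corners. The example is invented for this post.

def parse_date(s):
    # Sloppy shortcut: assumes every input is exactly "YYYY-MM-DD".
    # No validation, no error handling; bad inputs fail in confusing ways
    # far from this line, and every caller quietly depends on the format.
    return int(s[0:4]), int(s[5:7]), int(s[8:10])

print(parse_date("2014-07-09"))
```

Nothing here was an intentional, planned simplification; it was simply the fastest thing to type, and cleaning it up later will cost more than the minutes it saved.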
In contrast, type 2 (or “real”) technical debt refers to intentionally designed shortcuts that lead to working code short-term, but will require refactoring in a future iteration. The early code isn’t sloppy as in type 1, but rather has an intentional lack of functionality or an intentional design simplification in order to achieve the goals of the current iteration in such a way that facilitates future refactoring. The key point here is that well-planned type 2 technical debt is a good thing, and in fact, is an essential part of proper Agile software design.
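By contrast, a sketch of “type 2” debt might look like the following. The class and its hardcoded table are hypothetical examples of my own; the point is that the shortcut is deliberate, documented, and designed so the eventual refactoring won’t ripple through the callers:

```python
# Hypothetical sketch of "type 2" technical debt: an intentional
# simplification that works today but is designed for later refactoring.

class ExchangeRates:
    """Iteration 1: a hardcoded rate table is 'good enough' for now.

    Planned technical debt: replace the table below with a live
    rate-provider lookup in a later iteration. The convert() signature
    stays stable, so callers won't need to change when we do.
    """

    _RATES = {("USD", "EUR"): 0.92, ("EUR", "USD"): 1.09}  # placeholder data

    def convert(self, amount, source, target):
        if source == target:
            return amount
        return amount * self._RATES[(source, target)]

rates = ExchangeRates()
print(round(rates.convert(100, "USD", "EUR"), 2))
```

The debt is real – the rates will go stale – but it was incurred on purpose, recorded where the next developer will see it, and fenced behind a stable interface.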
The core technical debt challenges for Agile teams, therefore, are making sure (a) any technical debt is type 2 (no excuses for bad code!) and (b) that the technical debt incurred is well-planned. So, what does it mean for technical debt to be well-planned? Let’s take a look at the origin of the “debt” metaphor. Sometimes borrowing money is a good thing. If you want to buy a house, taking out a 30-year mortgage at 4% is likely a good idea. Your monthly payments should be manageable, your interest may be tax deductible, and if you’re lucky, the house will go up in value. Such debt is well-planned. Let’s say instead your loser of a brother buys a house, but borrows the money from a loan shark at 10% per week. The penalty for late payment? Broken legs. We can all agree your brother didn’t plan his debt very well.
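A quick back-of-the-envelope calculation shows just how different those two loans are after a single year (the figures are illustrative only):

```python
# Rough comparison of the two loans in the analogy. Illustrative only.

principal = 100_000.0

# Well-planned debt: 4% annual interest, compounded monthly for a year.
mortgage_after_year = principal * (1 + 0.04 / 12) ** 12

# Loan-shark debt: 10% interest per week, compounded for 52 weeks.
shark_after_year = principal * 1.10 ** 52

print(f"Mortgage balance after one year:   ${mortgage_after_year:,.0f}")
print(f"Loan-shark balance after one year: ${shark_after_year:,.0f}")
```

The mortgage grows by about four percent; the loan shark’s balance grows more than a hundredfold. Unplanned technical debt compounds the same way.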
Just so with technical debt. Over time the issues that result from code shortcuts start to compound, just as interest does – and the refactoring effort required to address those issues is always more than it would have taken to create the code “right” in the first place. But I put “right” in quotes because the notion that you can fully and completely gather and understand the requirements for a software project before you begin coding, and thus code it “right” the first time, is the fallacy of the waterfall approach that Agile was invented to solve. In other words, we don’t want to make the mistake of assuming the code can be complete and shortcut-free in early iterations, so we must plan carefully for technical debt in order to deliver better software overall – a fundamental Agile principle.
So, where does this discussion leave Bess’s exhortation that you should only create architecture that is just good enough? The problem: “just good enough” architecture is sloppy architecture. It’s inherently and intentionally short-sighted, which means that we’re avoiding any planning of architectural debt because we erroneously think that makes us “Agile.” But in reality, the planning part of “well-planned technical debt” is a part of your architecture that goes beyond “just good enough,” and leaving it out actually makes us less Agile.
Bloomberg Agile Architecture™ (BAA) has a straightforward answer to this problem, as core Agile Architecture activities happen at the “meta” level, above the software architecture level. By meta we mean the concept applied to itself, like processes for creating processes, methodologies for creating methodologies, and in this case, an architecture for creating architectures – what we call a meta-architecture. When we work at the meta level, we’re not thinking about the things themselves – we’re thinking about how those things change. The fundamental reason to work at the meta level is to deal with change directly as part of the architecture.
In order to adequately plan for architecture technical debt on an Agile development project, then, we must create a meta-architecture that outlines the various phases our architecture must go through as we work our way through the various iterations of our project. The first iteration’s architecture can thus be “just enough” for that iteration, but doesn’t stand alone as the architecture for the entire project, as the meta-architecture provides sufficient design parameters for iterative improvements to the architecture.
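One simple way to picture a meta-architecture is as a plan of architectural phases, where each iteration records the debt it incurs on purpose and the earlier debt it retires. The sketch below is my own hypothetical illustration of that idea, not a prescribed BAA artifact; the phase names and fields are invented:

```python
# Hypothetical sketch of a meta-architecture as data: for each planned
# iteration, the architectural scope, the shortcuts taken on purpose,
# and the earlier shortcuts refactored away. Details are illustrative.

from dataclasses import dataclass, field

@dataclass
class ArchitecturePhase:
    iteration: int
    scope: str
    planned_debt: list = field(default_factory=list)   # debt incurred this phase
    debt_retired: list = field(default_factory=list)   # debt paid off this phase

meta_architecture = [
    ArchitecturePhase(1, "single-node service, in-memory store",
                      planned_debt=["no persistence", "hardcoded config"]),
    ArchitecturePhase(2, "add persistence layer",
                      planned_debt=["single database, no sharding"],
                      debt_retired=["no persistence"]),
    ArchitecturePhase(3, "externalize configuration",
                      debt_retired=["hardcoded config"]),
]

# Sanity check: which planned shortcuts are never retired in a later phase?
incurred = {d for p in meta_architecture for d in p.planned_debt}
retired = {d for p in meta_architecture for d in p.debt_retired}
print(sorted(incurred - retired))  # debts still outstanding at the end
```

Each iteration’s architecture is “just enough” for that iteration, but the plan as a whole makes every shortcut visible, deliberate, and scheduled for repayment – which is exactly what “well-planned technical debt” means.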
However, getting this meta-architecture right is easier said than done. In fact, there are two primary pitfalls here that Agilists are likely to fall into. First, they may incorrectly assume the meta-architecture is really just a part of the architecture, and thus conclude that any effort put into the meta-architecture should be avoided, as it would be more than “just enough” and would thus constitute overdesign. The second pitfall is to assume the activities that go into creating the meta-architecture are similar to the activities that go into creating the architecture, thus confusing the two – which can lead to architecture masquerading as meta-architecture, which actually would be an instance of overdesign.
In fact, working at the meta-architecture level involves a different set of tasks and challenges from software architecture, and the people best suited to create the meta-architecture may not be the architects responsible for the software architecture itself. These “meta-architects” must focus on how the stakeholders will require the software to change over time, and how best to support that change by evolving the architecture that drives the design of the software (learn to be a meta-architect in my BAA Certification course).
Such considerations, in fact, go beyond software architecture altogether, and are closer to Enterprise Architecture. In essence, when I talk about Bloomberg Agile Architecture, I’m actually talking about meta-architecture, as the point of BAA is to architect for business agility. Building software following Agile methods isn't enough. You must also implement architecture that is inherently Agile, and for that, you need meta-architecture.