Posted by Gigi Sayfan on January 21, 2016

Agile methodologies have been used successfully in many big companies, but adoption is often a challenge. There are many reasons: lack of project sponsorship, prolonged user validation, existing policies, legacy systems with no tests and, most importantly, culture and inertia. Given all these obstacles, how do you scale Agile processes in a big organization? Very carefully. If you're interested in introducing Agile development practices into a large organization, you can try some of these techniques:

  1. Show, don't tell - Work on a project using Agile methods and get it done on time and on budget. Nothing makes the case like visible success.
  2. Grow organically and incrementally - If you're a manager, it's easy: start with your team, then try to gain mindshare with your peer managers - for example, when collaborating on a project, suggest using Agile methods to coordinate deliverables and handoffs. If you're a developer, try to convince your team members and manager to give it a try.
  3. Utilize the organizational structure - Treat each team or department as a small Agile entity. If you can, establish well-defined interfaces.
  4. Be flexible - Be willing to compromise and acknowledge other people's concerns. Accommodate as much as possible, even if it means starting with a hybrid Agile process. Changing people and their habits is hard, and changing the mindset of veterans in big companies with an established culture is extremely difficult.

Finally, if you are really passionate about Agile practices and everything you've tried has failed, you can always join a company that already follows them - including many companies in the Fortune 2000.


Posted by Gigi Sayfan on January 13, 2016

The Two Meanings of Agile Documentation

  1. The form of internal documentation appropriate for an organization following agile practices.
  2. Generating external documentation as an artifact/user story.

The first meaning is typically a combination of code comments and auto-generated documentation. A common assertion in Agile circles is that unit tests serve as living documentation. Python, for example, has a module called doctest, in which the documentation of a function may contain live code examples with their expected outputs; those examples can then be executed as tests that verify the code's correctness.
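A minimal doctest sketch might look like this (the conversion function is made up for illustration):

```python
def fahrenheit_to_celsius(f):
    """Convert a Fahrenheit temperature to Celsius.

    The examples below are documentation and executable tests at once:

    >>> fahrenheit_to_celsius(32)
    0.0
    >>> fahrenheit_to_celsius(212)
    100.0
    """
    return (f - 32) * 5 / 9

if __name__ == "__main__":
    # Running the module executes the docstring examples and reports any
    # mismatch between the documented and the actual output.
    import doctest
    doctest.testmod()
```

Running `python -m doctest yourmodule.py` achieves the same thing without the explicit `testmod()` call, which makes it easy to wire into a CI pipeline.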

Behavior Driven Development

BDD puts a lot of emphasis on specifying even the requirements in an executable form via special DSLs (domain-specific languages), so that the requirements can serve both as tests and as live, human-readable documentation. Auto-generated documentation for public APIs is also very common. Public APIs are designed to be used by third-party developers who are not familiar with the code (even if it's open source), so the documentation must be accurate and in sync with the code.
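As a toy illustration of the Given/When/Then style popularized by BDD tools such as Cucumber or behave, here is a hand-rolled sketch (the `ShoppingCart` class and the scenario are invented; real BDD frameworks parse plain-text feature files rather than relying on Python comments):

```python
class ShoppingCart:
    """A trivial domain object for the scenario below."""

    def __init__(self):
        self.items = []

    def add(self, item, price):
        self.items.append((item, price))

    @property
    def total(self):
        return sum(price for _, price in self.items)


def test_adding_items_updates_the_total():
    # Given an empty shopping cart
    cart = ShoppingCart()
    # When the customer adds two items
    cart.add("book", 12.50)
    cart.add("pen", 2.50)
    # Then the total reflects both prices
    assert cart.total == 15.00
```

Even without a framework, structuring tests this way keeps the requirement ("adding items updates the total") readable by non-programmers while remaining executable.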

The second meaning can be considered just another artifact, but there are some differences. External documentation for a system is typically centralized: you have to consider its structure and organization first, and then address the content as a collection of user stories. Unlike code artifacts, external documentation doesn't have automated tests. Documentation testing is an often-neglected practice, which is unsurprising because the documentation itself is often neglected. However, some types of external documentation are critical and must serve contractual or regulatory requirements. In those cases, you must verify that the documentation is correct.


Posted by Gigi Sayfan on January 4, 2016

Agile practices have proven themselves time and again for the development and evolution of software systems. But it's not clear whether the same agile approach can benefit user-facing aspects such as public APIs, user interface design and user experience. If you change your API constantly, no sane developer will use it. If your user interface design or experience keeps shifting, users will get confused and angry that they face a new learning curve whenever you decide to make a change. Sometimes users will be upset even if the changes are demonstrably beneficial, just because of the switching costs. Remember, users didn't subscribe to your agile thinking; they are just interested in using your API or product.

What's the answer, then? Do you have to be prescient and come up with the ultimate API and user interface right at the beginning? Not at all. There are several solutions that will allow you to iterate here as well, but you have to realize that iteration on these aspects should, and will, be slower and more disciplined.

Possible approaches include A/B testing, keeping the old API or interface available, deprecating previous APIs, maintaining backward compatibility, and testing rapid changes on groups of users who sign up for a beta. In general, the more successful you are, the less free you are to get rid of legacy. Probably the best example is Microsoft, which still allows you to run DOS programs on the latest Windows versions and has used a variety of approaches to iterate on the Windows desktop experience, including handling the frustration from users whenever a new version of Windows comes out. Windows 10 is a fine response to the harsh criticism Windows 8 endured.
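One of those approaches - deprecating a previous API while keeping it available - can be sketched in Python with the standard warnings module (the function names here are hypothetical):

```python
import warnings

def fetch_users_v2():
    """The current API."""
    return ["alice", "bob"]

def fetch_users_v1():
    """Deprecated: kept so existing callers don't break; forwards to v2."""
    warnings.warn(
        "fetch_users_v1 is deprecated; use fetch_users_v2 instead",
        DeprecationWarning,
        stacklevel=2,  # point the warning at the caller, not this wrapper
    )
    return fetch_users_v2()
```

Old callers keep working and get a nudge toward the new API; running the test suite with `python -W error::DeprecationWarning` turns the nudges into failures when you're finally ready to remove the old entry point.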


Posted by Gigi Sayfan on December 14, 2015

Microservice architectures have gained a lot of mindshare recently, for good reasons. Let's assume you did the hard work and have switched your system to microservices. What are the benefits to an agile team? What would an agile microservice look like?

The best thing about a microservice is that it is micro. It has very well-scoped functionality and it's easy to wrap your head around what it's doing. That means it is very easy to deliver significant features or improvements to a microservice within a single sprint. It also means that it is easy for a new engineer who has never worked on the microservice to pair with another engineer and quickly get up to speed.

The interactions with other services go through well-defined interfaces, so it is easy to use dependency injection to mock them and create automated unit tests. The narrow scope of the microservice also helps to ensure that the test coverage is adequate. Refactoring anything inside the microservice is very simple because there is no need to coordinate with other services. It also aids in scaling a system incrementally.
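Here is a minimal sketch of that dependency-injection idea using Python's standard unittest.mock; the `OrderService` and its collaborators are invented for illustration:

```python
from unittest.mock import Mock

class OrderService:
    """A small service whose collaborators (other services) are injected,
    so they can be replaced with mocks in unit tests."""

    def __init__(self, inventory, payments):
        self.inventory = inventory
        self.payments = payments

    def place_order(self, item, amount):
        if not self.inventory.reserve(item):
            return "out_of_stock"
        self.payments.charge(amount)
        return "ok"

# In a unit test, the collaborating services become mocks:
inventory = Mock()
inventory.reserve.return_value = True
payments = Mock()

service = OrderService(inventory, payments)
result = service.place_order("widget", 9.99)

assert result == "ok"
payments.charge.assert_called_once_with(9.99)
```

Because the real inventory and payment services never run, the test is fast, deterministic and exercises only this one service's logic.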

At some point, every assumption you made when you designed the smaller system will come back to bite you when you scale - that the data fits in memory, that the data fits on the hard disk, that the data can be stored reliably in one data center, that the server can return all the data without timing out, etc. One approach is to try to think through all these eventualities and engineer for them from the beginning. This is typically a very bad decision. Your system might not reach the scale you engineered for until years later, the time to market will be significantly higher if you try to build an infinitely scalable system from the start, and operations and maintenance will be more difficult. But the worst part is that when your system actually scales to the levels that require such an engineering effort, you'll discover that your well-crafted super-engineering from three years ago completely missed the mark and you have to start from scratch.

Microservices can help with that too. You can often isolate scalability, fault tolerance and high availability into dedicated microservices that you build later and weave into your existing business-logic microservices. The loosely coupled nature of microservices lends itself very well to agile evolution of the system.


Posted by Gigi Sayfan on November 17, 2015

Agile development, when executed well, is a thing of beauty. A sprint starts, stories and tasks are assigned. People start working in a test-driven manner and the system comes to life piece by piece. With each sprint new functionality is delivered. Refactoring is ongoing. The system architecture is always close to the conceptual ideal. Stakeholders are happy. Developers are happy. Everybody is happy.

And then, there is the real world--pressure, mid-sprint changes, unclear requirements, tests without much coverage, mounting technical debt, crashing production systems, etc. With agile development that's not supposed to happen, but agile practices require dedication and discipline from all stakeholders--including top management--and that is not always there. Even at the development team level, some people are not fully on board or are simply not organized enough.

There are also some inglorious tasks, such as heavy documentation, verifying compliance with obscure standards and checking backward compatibility with Netscape Navigator 4 on Windows 95, for example. If you reach a point where you notice Agile fatigue--where basic practices are no longer followed--you can try gamification. Gamification is all about creating a work environment where you get rewarded for performing your duties by incorporating game design elements.

Whenever you squash a bug you get some experience points, the system keeps track of your on-time-for-meetings streak, and the whole team gets a gold star when a sprint is completed successfully. This may sound silly if you've never tried it, but movements such as the quantified self demonstrate that people enjoy it and will adapt their behavior to accomplish little game goals that just happen to correspond to real-life goals. It can add fun, as well as help keep track of genuinely important metrics.
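A toy sketch of such a tracker might look like this (the events and point values are entirely made up):

```python
from collections import Counter

class TeamGame:
    """Award points for agile chores and track per-developer streaks."""

    POINTS = {
        "bug_squashed": 10,
        "on_time_for_meeting": 2,
        "sprint_completed": 50,
    }

    def __init__(self):
        self.scores = Counter()          # developer -> total points
        self.meeting_streak = Counter()  # developer -> consecutive on-time meetings

    def record(self, developer, event):
        self.scores[developer] += self.POINTS.get(event, 0)
        if event == "on_time_for_meeting":
            self.meeting_streak[developer] += 1

game = TeamGame()
game.record("dana", "bug_squashed")
game.record("dana", "on_time_for_meeting")
```

The real value is less in the points themselves than in the fact that the "game" quietly records metrics--bugs fixed, sprints completed--that you'd want to track anyway.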


Posted by Gigi Sayfan on November 3, 2015

Test Driven Development (TDD) is arguably the most impactful Agile practice. Nobody even talks about it anymore, but automated tests were revolutionary when they came on the scene. Fifteen years ago, the common practice was to have a big monolithic application. The developers would produce a build (usually an executable) once in a blue moon and throw it over the fence to the QA department, which would bang on it for a while and then open tens if not hundreds of bugs. Then the developers would engage in a bug-squashing period.

Luckily, that's no longer the case and everyone recognizes the importance of automated tests and rapid iterations. TDD means that your whole development process is driven by tests. When you think about problems, you phrase the discussion in terms of what tests you'll need. When you model things, you consider how testable they are. You might even choose between design alternatives based on their impact on your test suite.

The ultimate in TDD is the test-first movement. This truly puts tests on a pedestal and treats them as the most crucial building blocks of your system and its architecture. There are multiple advantages to TDD. The best one is that you get to say TDD all the time. Try it. It's fun: TDD, TDD, TDD. Almost as important is the fact that you will have automated tests for all your code, and those tests will even be kept up to date as you evolve and refactor your system. Don't underestimate that. Every development team starts with the best intentions regarding all kinds of best practices, but as the pressure mounts many of them go down the drain: test coverage, modularity, coding conventions, refactoring, clean dependencies, etc.
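A single test-first cycle might look like this in Python (the `slugify` function is a made-up example):

```python
# Step 1: write the test first. At this point it fails, because
# slugify doesn't exist yet -- that failing test drives the design.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Agile  Rocks ") == "agile-rocks"

# Step 2: write the simplest implementation that makes the test pass.
import re

def slugify(text):
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

# Step 3: run the test (now green), then refactor with confidence.
test_slugify()
```

The point is the order of operations: the test existed before the code, so by construction every behavior the code has is covered.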

Having tests as your primary driver ensures they will not be neglected, which will give you a better chance at evolving your system.


Posted by Gigi Sayfan on October 20, 2015

Agile practices work best with small, cohesive, co-located teams. How do you scale them for a larger organization with multiple teams, possibly in different locations and even time zones? One approach would be to treat all these teams as one big agile team and try to work around the issues of remoteness and time zones. This approach is doomed for multiple reasons: communication overhead grows with team size, and the short feedback loops that make agile work don't survive time zone gaps.

A better approach would be to break development into vertical silos in which each team is completely responsible for a particular set of applications or services. This approach scales well as long as you can neatly break development into independent pieces that can be fully owned by one team. One downside is that knowledge sharing and opportunities for reuse become much more difficult.

A third approach would be to treat teams as users of each other. This allows collaboration and lets multiple teams combine to tackle big projects without losing the benefits of the original Agile team. It requires careful management to ensure that the teams' schedules are properly coordinated and that dependencies don't halt development. This is nothing new, but in an agile context the planning game is typically done at the team level; when multiple teams work together (even loosely), there needs to be another layer of management that takes care of the cross-team planning.


Posted by Gigi Sayfan on September 30, 2015

Open source development has never been more prevalent and successful than it is today. The biggest players regularly publish the latest and greatest technology with permissive licenses. Some companies even do their whole development in public — inviting anyone to fork, download and send pull requests.

Google has always been a strong advocate of open source and even funded a lot of non-Google open source projects through their Summer of Code program. Facebook publishes not just software, but even the specs to their servers.

Microsoft recently joined the party and open-sourced key parts of its technology stack; the company even develops core pieces openly on GitHub.

The big question is whether or not the agile development style mixes well with open source. On the face of it, they are polar opposites. Agile evolved from tightly knit co-located teams. Open source is all about strangers who have never met collaborating across the globe.

All the same, there are many similarities and shared principles between the two methodologies. Both paradigms put a great deal of emphasis on testing and automation. Both favor small, quick iterations. Obviously, co-located Agile teams can publish the results of their work as open source. The more interesting case is an agile development team that is not co-located, which can still successfully take advantage of open source methodologies such as rigorous source control policies, pull requests and continuous integration tools.


Posted by Gigi Sayfan on September 4, 2015

Pair programming is one of the most controversial agile practices and is also the least commonly used in the field, as far as I can tell. I think there are very good reasons this is the case, but perhaps not the reasons everybody thinks about.

Pair programming consists of two programmers sitting side-by-side, working on a given task. One is coding; the other is observing, suggesting improvements, noticing mistakes and assisting in any other way, such as looking up documentation.

The benefits are well known. For more information, you can download a PDF of The Costs and Benefits of Pair Programming.

But why didn't it take off like so many other agile practices that have become mainstream staples? The reason that is often mentioned is that managers don't like seeing two expensive engineers sitting together all day working on the same code. That may be true for some companies, but often it's the developers themselves who dislike pair programming.

There are many reasons some developers dislike pair programming. Many developers are simply loners who prefer to focus on the task at hand, and their flow is disrupted by the constant interaction. Many like to work unconventional hours, or from home or a coffee shop, which makes them difficult to pair with. The original extreme programming called for a 40-hour work week in which everybody arrived and departed at the same time, but in today's flexible work environment that is not always the case.

I, personally, have never seen full-fledged pair programming practiced and it was never even on the table as a viable alternative. My experience is based on many years of working for various startups that used many other agile practices. I tried to institute pair programming myself in a few companies, but it never caught on.

So, is pair programming a niche practice that can only be used by agile zealots who follow the letter of the law? Not necessarily. There are several situations where pair programming is priceless.

The most common one is debugging. I've used pair debugging countless times. Whenever I get stuck and can't make sense of what's going on I'll invite a fellow developer and together we are usually able to figure out the issue relatively quickly. The act of explaining what's going on (often referred to as "rubber ducking") is sometimes all it takes.

Another typical pair programming scenario is when someone is showing the ropes to a new member of the team. This is a quick way to take the newcomer through each and every step involved in completing a set task, while showcasing all the frameworks, tools and shortcuts that can be used.

What are your thoughts on pair programming?


Posted by Gigi Sayfan on August 26, 2015

Decision-making is at the heart of any organized activity and, as such, there are significant associated risks and costs. The higher up you are on the totem pole, the more risk, cost and impact are associated with every decision you make. For example, if you are the CTO and decide to switch from private hosting to the cloud, that has enormous ramifications. Obviously, such a switch is not a simple process; it will entail a thorough evaluation, a prototype and a gradual migration. This is often the reason that many large organizations seem to move at such a glacial pace. But there are many decisions that can be made and acted upon quickly, and yet often take a very long time.

This is often tied to the reporting and approval structure in the organization. The level of delegation and the freedom of underlings to make decisions on their own without approval is often the key factor.

There are many good reasons for managers to require approval: maintaining control, ensuring that good decisions are being made, and staying up to date and informed on higher-level decisions. The flip side is that the more a manager is involved in the decision-making process, the less time he or she has to interact and coordinate with other managers and his or her own superiors, study the competition, think of new directions and perform many other management activities. This is all well understood, and every manager eventually finds the right balance.

What many managers miss is the impact on their subordinates. Very often, a delay in decision-making costs much more than a quick bad decision would. Let's start from the ideal situation--your employees always make the right decision. In this case, any delay due to the need to ask for approval is a net loss. The more control a manager maintains, and the more direct reports he or she manages, the more loss accrues.

But what about the bad decisions that such processes prevent? This is obviously a win in terms of one less bad decision, but the downside is that in the long-run your subordinates will not feel accountable. They'll expect you to be the ultimate filter.

If you're aware of this then the path forward is pretty clear — delegate as much as you feel comfortable (or even more). Let your underlings make mistakes and help them improve over time. Benefit from streamlined productivity and focus on the really critical decisions.

Another important aspect is that not all bad decisions or mistakes are equal. Some mistakes are easily fixed. Decisions that may result in easily reversible mistakes are classic candidates for delegation. If the cost of a bad decision is low, just stay out of the loop.

