Kanban Glossary of Terms

We thoroughly understand how confusing getting to grips with new terminology can be. We've pulled together an ever-growing list of terms commonly used across teams using Agile methods and practices. Click on each term to read the full description.

About Ian Carroll

Ian is a consultant, coach, trainer and speaker on all topics related to Lean, Kanban and Agile software development.



Acceptance Testing

In software development, acceptance testing is a type of testing that validates whether the software developed meets the business requirements and is ready for use.
In Kanban, we look to deliver value in small, releasable chunks which in turn reduces the testing overhead. Typically, the Product Owner or end user would conduct acceptance testing. You should always look to automate acceptance testing using test automation frameworks such as Nightwatch.js.
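To make the idea of automated acceptance testing concrete, here is a minimal sketch in Python (pytest style). The glossary mentions Nightwatch.js for browser-level testing, but the principle is the same at any level. The `checkout_total` function and its 10% discount rule are hypothetical examples, not taken from any real system.

```python
# A minimal automated acceptance test sketch (pytest style).
# checkout_total and the discount rule are hypothetical examples.
def checkout_total(prices, discount=0.0):
    """Business rule under test: sum of item prices less a discount."""
    return round(sum(prices) * (1 - discount), 2)

def test_customer_receives_ten_percent_discount():
    # Acceptance criterion: a 10% discount is applied at checkout.
    assert checkout_total([20.00, 5.00], discount=0.10) == 22.50
```

Tests like this, run on every build, give the Product Owner an executable statement of each acceptance criterion rather than a manual checking exercise.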
Other aspects of acceptance testing to consider are how to build quality into the software development lifecycle earlier. If you are finding lots of defects at the acceptance testing phase then you need to understand the root cause and take appropriate preventative action. Frankly, if the customer is doing acceptance testing and finding lots of defects then you have some significant communication and process issues.
Common root causes include code defects, misinterpretation of requirements, data issues, configuration mismatch across environments, not a defect, scope creep, incomplete or erroneous specification, intentional deviation from specification, inconsistent user interface, unable to recreate issue, and many more.
Common metrics associated with acceptance testing are the number of defects raised over time. To normalise this metric you may want to look at calculating the metric based on number of lines of code, i.e. defect density.
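As an illustrative sketch, defect density can be calculated as below. The per-KLOC (thousand lines of code) normalisation shown is a common convention, not a prescribed standard.

```python
def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Defects per 1,000 lines of code (KLOC), a common normalisation."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defects_found / (lines_of_code / 1000)

# e.g. 12 defects raised against a 48,000-line codebase
density = defect_density(12, 48_000)  # 0.25 defects per KLOC
```

Tracking this figure over time, rather than the raw defect count, lets you compare releases of different sizes on an equal footing.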
An alternative school of thought suggests that formal acceptance testing is potentially wasteful and unnecessary. Instead of creating a gateway in the form of acceptance testing, maybe it’s better to just release the software to a small subset of real users to gauge acceptance. Once you’re happy with the feedback then you can ramp up traffic to 100% of users.
The beauty of this approach is that it can reduce cycle time (a key Kanban metric), further improving your feedback loop. A reduction in the need for acceptance testing can only come with trust and confidence from your stakeholders which Agile techniques provide.
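A gradual rollout of this kind is often implemented by deterministically bucketing users. The sketch below, with its hypothetical `in_rollout` helper, shows one common approach: hash the user ID so that each user stays consistently in or out of the rollout as you ramp the percentage up.

```python
import hashlib

def in_rollout(user_id: str, percentage: float) -> bool:
    """Deterministically place a user in the first `percentage`% of traffic."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket 0-99 per user
    return bucket < percentage

# Start by exposing ~5% of users, then ramp towards 100% as confidence grows.
```

Because the bucket is derived from the user ID rather than chosen at random per request, a user's experience is stable across visits while the rollout percentage increases.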

Adaptive Planning

Adaptive planning is a process of breaking a development project into small parts or slices and then projecting the delivery rate over time to determine the overall predicted delivery date. The main aim of adaptive planning is to provide ultimate flexibility to the team as it proceeds with the work. Because adaptive planning is incremental in nature, it can at times yield unexpected outcomes: the predicted delivery date can vary depending upon how much variation the team are exposed to. Publication of the predicted completion date is therefore critical, which leads to a bigger focus on expectation management.

Typically, the Scrum Master, Delivery Lead, or Project Manager is responsible for collecting and publishing the data to support adaptive planning, although occasionally the Business Analyst might cover this duty. Data is extracted from the team delivery tools, i.e. Jira, Mingle, VersionOne, LeanKit, etc., or from the physical card wall. With adaptive planning, it's critical to have goals that the team are burning up towards.

Adaptive planning means empirical planning instead of deterministic planning: measure and predict, instead of plan and promise. In more traditional approaches a plan is formulated and then followed. Due to the unpredictable nature of software development, it's not possible to follow a predetermined plan because of the many unforeseen issues that regularly occur during development. The only sensible approach is to use 'yesterday's weather' (the observed rate of delivery) combined with the overall amount of work yet to do. This yields a much more accurate set of planning data, and in many ways an empirical approach is far simpler than deterministic measures. Adaptive planning is a critical technique that all Project Managers, Delivery Leads, and Scrum Masters need to learn.
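The 'measure and predict' idea can be sketched in a few lines. The function below is a hypothetical illustration that projects time to completion from recent throughput ('yesterday's weather'):

```python
from statistics import mean

def forecast_weeks_remaining(recent_weekly_throughput, items_remaining):
    """Project completion from the observed delivery rate."""
    rate = mean(recent_weekly_throughput)  # items completed per week, on average
    if rate <= 0:
        raise ValueError("need a positive delivery rate to forecast")
    return items_remaining / rate

# The team finished 4, 6, and 5 items over the last three weeks; 30 remain.
weeks = forecast_weeks_remaining([4, 6, 5], 30)  # 6.0 weeks at the current rate
```

Republishing this forecast as each week's actual throughput arrives is exactly the expectation-management loop described above.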


Agile

Here we answer the common question: what is Agile project management? Agile is a term used to describe an alternative to traditional sequential project management approaches. Agile methodologies are undertaken in incremental, iterative phases, with emphasis on team collaboration, continuous planning, testing, integration, and feedback.

Agile software development started to emerge in the mid-1990s as a response to the frustrations and flaws of sequential project management approaches. In 2001, a group of prominent software professionals got together in Snowbird, Utah and distilled Agile software development down into four key values: individuals and interactions over processes and tools; working software over comprehensive documentation; customer collaboration over contract negotiation; and responding to change over following a plan. These values still hold true to this day and are at the centre of Agile principles. They became known as the Agile Manifesto, which was expanded further into the 12 Principles of Agile Software Development. The Agile Manifesto is articulated in detail at http://agilemanifesto.org.

The twelve principles are in some ways more interesting than the four values. Most have stood the test of time; however, due to continuous advances in software development practices such as the Continuous Delivery movement, one principle in particular is starting to tire: 'Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.' The industry is moving on, and this principle would be better written as 'Deliver value frequently, from a couple of hours to a couple of days, with a preference to the shorter timescale.' The techniques that now allow for such rapid delivery of software are test automation, automated deployment, pair programming, user testing, continuous delivery, DevOps, and cloud computing. Together they provide teams with unprecedented levels of automation, autonomy, and productivity.

Agile Development Practices

Agile software development describes a set of principles for software development under which requirements and solutions evolve through the collaborative effort of self-organizing cross-functional teams. It advocates adaptive planning, evolutionary development, early delivery, and continuous improvement, and it encourages rapid and flexible response to change. These principles support the definition and continuing evolution of many software development methods.

Agile Project Management

Agile Project Management describes a set of principles for software development under which requirements and solutions evolve through the collaborative effort of self-organizing cross-functional teams. It advocates adaptive planning, evolutionary development, early delivery, and continuous improvement, and it encourages rapid and flexible response to change. These principles support the definition and continuing evolution of many software development methods.

Agile Software Development

Agile software development describes a set of principles for software development under which requirements and solutions evolve through the collaborative effort of self-organizing cross-functional teams. It advocates adaptive planning, evolutionary development, early delivery, and continuous improvement, and it encourages rapid and flexible response to change. These principles support the definition and continuing evolution of many software development methods.


Alpha

A phase in the delivery of an Agile IS project where solutions are prototyped for users' needs. Testing with a small group of users or stakeholders is undertaken and early feedback about the design of the service is collated. This is a specific phase in the UK Government Digital Service (GDS) approach which is required for all UK government departments when developing new services or enhancing existing services.

Application Lifecycle Management (ALM)

Application lifecycle management (ALM) is the product lifecycle management (governance, development, and maintenance) of computer programs. It encompasses user needs, requirements management, software architecture, computer programming, software testing, software maintenance, change management, continuous integration, project management, and release management.
Many products have emerged on the market to support ALM with many now supporting Agile software development. Some of the more forward-thinking software vendors also support Kanban methods for ALM. Examples of these products and vendors include, ThoughtWorks Mingle, Atlassian Jira, LeanKit Kanban, Kanbanize, Swift Kanban, VersionOne, and Trello.
Most ALM tools these days are visual metaphors for a physical card wall, enabling users to drag virtual cards across a virtual wall. Common aspects to think about when setting up an ALM tool are:
• What statuses does a card go through? i.e. User Research, Backlog, In Analysis, In Development, In Test, In UAT, In Pre-Live, Live, Measure.
• What card types will you support? i.e. Epic, Story, Defect, Task, etc.
• What sizing scales will you use (if any), such as Fibonacci, exponential, etc.
• How will you limit the amount of work in progress to increase throughput, in line with Kanban best practice?
You need to be careful when using any Agile ALM tool. Many have templates pre-configured for Scrum or Kanban. These templates are often the vendor's interpretation of how Scrum or Kanban should be implemented, and they can quickly become major constraints for the team. A common problem is when a team comes to implement a small change to the configuration of the ALM and it affects all other teams using the tool, preventing the change from being made. Another issue is when teams become so dependent on the electronic tool that it impedes their decision making, as the tool does it all for them. This heavy reliance on the ALM tool often stifles continuous improvement: the ALM tail wagging the dog.


Backlog

In Agile or Kanban, the backlog comprises an ordered list of requirements that a team maintains for a product or project. It consists of features, stories, bug fixes, non-functional requirements, technical debt, business as usual requests: whatever must be done to successfully deliver a viable product.
The dictionary defines the backlog as ‘an accumulation of uncompleted work or matters needing to be dealt with’ which can be damaging. Replacing the term backlog with Demand is a better description but for the purposes of this glossary definition we will continue to use the word backlog.
Backlogs are best visualised on a physical card wall – aka scrum board, Kanban board, or Agile task board. If you have hundreds of items in your backlog then don’t try to visualise all of them on the wall. Simply select the top 5 or top 10 items from each category and just write cards out for them. Stick them on the wall grouped into rows with each row representing a demand type, i.e. planned work, unplanned work, defects, technical debt, etc.
Housekeeping of the backlog is a critical ongoing activity. All teams are different but generally speaking the typical roles that keep the backlog clean are Scrum Master, Product Owner, Product Manager, Technical Lead, and Business Analyst. It’s very easy for your Agile ALM tool to become a dumping ground for requirements and quickly become difficult to manage or make sense of.
When adding items to the backlog, think carefully about the level of detail required to successfully describe the work item. A simple, brief title written on a card may suffice initially. If you put too much detail into a work item too early in the process then it may become waste. Too much or too little, too early or too late – the goldilocks effect.

Backlog Grooming

Backlog grooming is a collaborative effort involving the Product Owner and the development team (BAs, developers, testers), whereby they review the prioritised backlog, adjust priorities, elaborate detail where needed, and remove stories no longer required.

Backlog Item

Explained in simplest terms, a backlog stores, organises, and manages a list of things that you want to take care of in a product, arranged in order of priority. Thus, the most important work items sit at the top of the backlog, followed by the less important ones. The backlog can contain anything from bugs to enhancements, issues, risks, stories, technical spikes, design spikes, incidents, Business As Usual (BAU) requests, technical debt, plus many others.

Keeping the backlog clean of out-of-date items is a critical housekeeping duty. It's important to regularly review the backlog with the several disciplines involved in delivery, which often include the Project Manager, Product Owner, Scrum Master, Technical Lead, developers, testers, and other interested stakeholders. In other words, it's everyone's responsibility to keep the backlog clean. Like all housekeeping duties, it's best to keep on top of it: reviewing and refining the backlog should be done in real time, daily as a minimum.

Backlog visualisation is another important technique, often overlooked. Instead of having a pile of work items in the backlog, most teams colour code them by work type, stakeholder need, or urgency. At Solutioneers, we also advise clients to lay out the backlog in horizontal rows, each row representing a work type or stakeholder. This provides a much cleaner mechanism for teasing out competing priorities with stakeholders. There are many examples of how to visualise your backlog in our website resources section, and as part of our Lean Agile Boot Camp workshop we explore how backlog visualisation can help teams become more organised and reduce stress.

One final word about backlog visualisation: you don't have to visualise all your backlog items. Writing out 300+ cards would be futile. Instead, just write out the top 5 or 10 from each work type. This provides a useful prioritisation mechanism. The correct level of detail to write on the card is all about context, but generally speaking a brief title should suffice.


Beta

A phase in the delivery of an Agile IS project where development against the demands of a live environment is undertaken and a version to test in public is released.

Big Visible Charts

How we visualise the world dominates how we perceive the world. Therefore, it’s really important that teams are surrounded by big information radiators to act as amplified platforms for communication.


Bottleneck

In the context of software development delivery, a bottleneck refers to work demand queuing within a functional area, i.e. a bottleneck in Test. Common examples of bottlenecks within software development are developed work waiting for testing, analysed stories waiting for development, and tested work waiting for deployment. This queued work vastly impedes predictability, making forecasts of completion dates highly inaccurate.
The focus on reducing and removing bottlenecks originated from the Kanban method. The realisation that the bigger the bottleneck, the slower value will be released due to the increase in overall work in progress. Little’s Law provides further detail on the impacts of having too much work in progress.
Teams have developed several strategies for exploiting the bottleneck. The first one is to subordinate all activity within the process steps to the slowest point. Subordinating the activity to the slowest point materialises in the form of work in progress limits (WIP limits). These WIP limits encourage the act of team swarming. Team swarming means a multidisciplinary group of people all work at the point of the bottleneck to reduce the WIP. The net effect of this is to increase flow.
Another mechanism used within software development is to introduce queues to create a pull system, which is critical to reducing the possibility of bottlenecks. The queues come in the form of 'Done' columns for each phase of delivery. A common Kanban system may have columns consisting of Analysis WIP, Analysis Done, Dev WIP, Dev Done, Test WIP, Test Done. The aim here is to prevent specialisms from chucking work over the wall and overwhelming the next specialist in the process. Instead, a specialist can only put completed work into their own 'Done' pile. The next specialist in the chain can then pull from this 'Done' column, but only when they have the capacity to deal with it.
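The pull rule described here, that work may only be pulled into a column when that column has capacity, can be sketched as a simple WIP-limit check. The board layout and the `can_pull` helper below are hypothetical illustrations:

```python
def can_pull(board, column, wip_limits):
    """A card may be pulled into a column only if its WIP limit has headroom."""
    return len(board[column]) < wip_limits[column]

# A hypothetical board: Dev is at its WIP limit, Test has headroom.
board = {"Dev WIP": ["Card A", "Card B"], "Test WIP": ["Card C"]}
limits = {"Dev WIP": 2, "Test WIP": 3}

can_pull(board, "Dev WIP", limits)   # False: Dev is full, so swarm to finish
can_pull(board, "Test WIP", limits)  # True: Test can pull the next Done card
```

When the check fails, the team swarms on the full column to finish work rather than starting more, which is exactly how WIP limits subordinate activity to the slowest point.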


Build–Measure–Learn

The Build–Measure–Learn loop emphasizes speed as a critical ingredient to product development. A team or company's effectiveness is determined by its ability to ideate, quickly build a minimum viable product of that idea, measure its effectiveness in the market, and learn from that experiment. In other words, it's a learning cycle of turning ideas into products, measuring customers' reactions and behaviors against built products, and then deciding whether to persevere or pivot the idea; this process repeats as many times as necessary.

Burndown Chart

A burn down chart is a graphical representation of work left to do versus time. The outstanding work (or backlog) is often on the vertical axis, with time along the horizontal. That is, it is a run chart of outstanding work. It is useful for predicting when all of the work will be completed.
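As a minimal illustration, the data behind a burndown chart is just the remaining work recomputed after each period. The function below is a hypothetical sketch:

```python
def burndown_series(total_scope, completed_per_day):
    """Remaining work at the end of each day: the line a burndown chart plots."""
    remaining, series = total_scope, []
    for done in completed_per_day:
        remaining = max(0, remaining - done)
        series.append(remaining)
    return series

# 20 items of scope, burned down over five days of completions.
burndown_series(20, [3, 5, 2, 6, 4])  # [17, 12, 10, 4, 0]
```

Plotting this series against the calendar, alongside an ideal straight line from total scope to zero, gives the familiar burndown picture.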

Burnup Chart

A burn-up chart tracks project progress, including changes in scope, enabling its audience to predict completion dates. It is a clear depiction of the completed work against the overall scope, with two lines: a project scope line and a work completed line. Once the two lines meet, the work is complete; thus a burn-up chart helps you predict the time by which a piece of work will be done.

The Scrum Master, Delivery Manager, or Project Manager is usually responsible for collecting the required data and turning it into a burn-up chart. Some teams opt to update their burn-up chart at the end of every sprint; Solutioneers recommend teams update it daily. Scope can change daily, and completed work items should continually flow through the delivery stream, so updating the chart daily gives more frequent insight into progress.

An important consideration is to look carefully at your definition of done, which is critical to calculating the work completed line. Counting work from when development is complete, but not tested, is too early. In some instances, counting it from when work hits your live environment may be too late, for example if you only make two live releases per year. You need to find the sensible point at which you can consider the work done.

Burn-up charts are useful for demonstrating progress against several different goals. Scope completion rate is a common example; others include budget burn rate, output completion rate, and outcome completion rate. The outcome completion rate is a very important metric when the team is looking to achieve a certain business or commercial outcome: it's not just about output, it's more important to focus on the outcomes of work completed.

There are many tools out there for creating burn-up charts, usually embedded within vendor tools such as Mingle, Jira, VersionOne, etc. At Solutioneers we prefer to export the required data from said tools and generate our own burn-up charts in MS Excel. The required data points are simple to extract: the overall total amount of work to be completed, and the total amount of work completed over time. You can get more advanced views by incorporating multiple goals into your chart in the form of stacked scope lines.
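The point where the scope line and the work completed line meet can also be projected numerically. The sketch below is a hypothetical illustration, not any vendor's algorithm: it extrapolates both lines at their recent average daily rates and reports how many days until they intersect.

```python
from statistics import mean

def burnup_forecast(scope_by_day, done_by_day):
    """Days until the work-completed line meets the scope line,
    projecting both lines at their recent average daily rates."""
    done_rate = mean(b - a for a, b in zip(done_by_day, done_by_day[1:]))
    scope_rate = mean(b - a for a, b in zip(scope_by_day, scope_by_day[1:]))
    gap = scope_by_day[-1] - done_by_day[-1]
    closing_rate = done_rate - scope_rate  # how fast the gap is closing
    if closing_rate <= 0:
        return None  # scope is growing as fast as work completes
    return gap / closing_rate

# Scope crept from 50 to 52 items while 12 items were completed over four days.
burnup_forecast([50, 50, 51, 52], [3, 6, 9, 12])
```

Because the forecast accounts for scope growth as well as delivery, it surfaces scope creep immediately: if scope grows faster than work completes, no completion date can honestly be predicted.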

Business Value

Teams that work in an Agile or Lean way will commonly discuss business value. Any form of benefit can be traced back to three core value statements: to make money, to save money, or to protect money. Understanding business value is a critical part of prioritising work items in the backlog. The perceived business value should be constantly measured once delivered to a live environment, and any lessons learned fed back into the prioritisation mechanism.

The Product Owner, working with a Product Management capability, should really understand the value of the prioritised user stories. This analysis of business value should be a real-time activity that Product Owners obsess about. Members of the delivery team are also very interested in understanding the value of the work they are delivering, so sharing and communicating the value statement is an important motivational factor when working with software development teams.

Examples of how business value is expressed include: reduction in customer dropout, increase in customer acquisition, increase in customer conversion, increase in platform supportability, improvements in platform stability, reduced time to market, regulatory compliance, and many more. Not all value needs to be directly associated with business value; you'll hear technical debt discussed regularly within development teams, and the ability to express the resolution of technical debt as a value is important.

Another way to express business value is the Cost of Delay, which refers to the concept of lost potential value: for every week we are late delivering a new feature, how much value would we have gained had we delivered it on time? Many advanced Product Management capabilities can calculate the Cost of Delay for any proposed feature or product.

There are other methods of calculating relative value when faced with conflicting priorities. One of them is called Weighted Shortest Job First (WSJF), a simple weighted formula that divides the cost of delay by the estimated effort (job size), where the cost of delay typically combines perceived business value, time criticality, and risk reduction. The job with the highest score is done first.
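One common formulation of WSJF (as popularised by SAFe) divides the cost of delay by the job size. The sketch below follows that formulation, with hypothetical relative scores as inputs:

```python
def wsjf(business_value, time_criticality, risk_reduction, job_size):
    """Weighted Shortest Job First: cost of delay divided by job size.
    Inputs are relative scores (e.g. on a 1-10 or Fibonacci scale)."""
    cost_of_delay = business_value + time_criticality + risk_reduction
    if job_size <= 0:
        raise ValueError("job_size must be positive")
    return cost_of_delay / job_size

# Rank two competing features: the higher WSJF score goes first.
feature_a = wsjf(8, 5, 2, 5)   # 3.0
feature_b = wsjf(9, 3, 1, 13)  # 1.0
```

Here feature A wins despite a slightly lower value score, because it is a much smaller job: the essence of 'weighted shortest job first'.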


Cadence

Cadence can be defined as the rhythm or flow of events which every Agile team strives to achieve in order to operate efficaciously. Cadence helps Agile teams focus on delivery of the actual product rather than on the process itself.

In Scrum, the team cadence usually revolves around the sprint cycle. For example, if a team is operating a 2-week sprint cycle then most ceremonies are triggered by the sprint boundary: activities such as sprint planning, sprint review, and sprint retrospective are triggered by the two-week cadence. In Kanban, the cadences are decoupled, meaning each ceremony or activity operates on its own cadence, independent of any sprint. Planning therefore happens more frequently but still regularly; retrospectives don't wait until the end of a sprint and can be done more often, accelerating learning; and releases can be done on demand rather than waiting for the end of a sprint cycle. Another cadence that applies to all Agile methodologies is the daily stand-up, which, as the name implies, happens daily at around the same time.

All team members are responsible for understanding what cadences are at play across the delivery lifecycle. We often find it useful to publish a list of events or cadences and display them in the team work area. This ensures everyone is fully aware of the regular team ceremonies and is another form of information radiator.

Cadence isn't just limited to team behaviours or activities; some technical activities also follow a cadence. For example, some Agile teams complement their continuous integration process with a nightly build. The nightly build often includes automated tests that take longer to run than would be sensible as part of the continuous integration process.


Capacity

In the context of Agile, capacity refers to the balance of team throughput to demand. The capacity of an Agile team is the amount of work the team can complete per time period, such as per week or per month. This is basically the same as throughput, and in knowledge work it's simplest to treat capacity and throughput as the same concept.

Capacity should never be calculated on an individual role basis; it can only be calculated at the team level, because work only becomes value once it has passed through all the team's capabilities. The actual capacity figure is calculated as the total number of work items the team can output per time period. All knowledge workers can only sensibly work on one work item at a time: context switching (aka task switching) is highly inefficient, lowers quality, and impacts morale.

To increase capacity and throughput with a fixed team size, look at the waiting time experienced throughout the team's delivery lifecycle. Waiting time may come in the form of blockers, impediments, or queuing time. These concepts come from the Kanban method and are extremely effective in increasing team throughput.

Understanding team capacity is a critical factor in planning, both at the local team level and more widely at the enterprise or portfolio level. Once you understand the team's capacity/throughput you can balance demand against it from a planning perspective. If you've exhausted all avenues for increasing throughput then all you can do is prioritise accordingly. The Scrum Master, Project Manager, or Delivery Manager/Lead is responsible for collecting throughput data for the team and publishing it, and many teams also display the data so the whole team can use it to explore ways to increase throughput and capacity.
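The relationship between work in progress, throughput, and waiting time follows Little's Law: average cycle time equals average WIP divided by throughput. A minimal sketch:

```python
def average_cycle_time(avg_wip, throughput_per_week):
    """Little's Law: cycle time = WIP / throughput.
    More work in progress means each item takes longer to finish."""
    if throughput_per_week <= 0:
        raise ValueError("throughput must be positive")
    return avg_wip / throughput_per_week

# 12 items in progress with 4 completed per week:
average_cycle_time(12, 4)  # 3.0 weeks average cycle time
```

The practical consequence is that with a fixed throughput, halving WIP halves the average time each item spends in the system, which is why WIP limits are so central to Kanban.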

Code Quality

In the context of software engineering, software quality refers to two related but distinct notions that exist wherever quality is defined in a business context. Software functional quality reflects how well the software complies with or conforms to a given design, based on functional requirements or specifications; this attribute can also be described as fitness for purpose, or how the software compares to competitors in the marketplace as a worthwhile product. Software structural quality refers to how the software meets the non-functional requirements that support the delivery of the functional requirements, such as robustness or maintainability, i.e. the degree to which the software was produced correctly.
Internal quality and external quality are key categories of software or code quality. Common measures and indicators of internal quality include cyclomatic complexity, code duplication, coupling, coherence, defects, error logs, self-documenting code, automated test coverage and coding standards to name just a few. External quality covers factors such as user friendliness, page load times, accessibility and conformance to requirements.
There are many tools available to help testers and developers to get feedback quicker. Tools are usually classed as manual test tools or automated test tools. For internal code quality measurement, there are a number of tools available such as SonarQube, FxCop, StyleCop, SourceMeter, Veracode, NDepend, Parasoft, Eclipse, Flay, Reek, RuboCop, and Brakeman. There are many other tools available.
External quality tools include JMeter, Selenium WebDriver, Sahi, Watir, Telerik Test Studio, WatiN, and many more.
Developers are ultimately responsible for internal code quality. Testers can assist developers with internal code quality by helping to build the automation required to provide developers with rapid feedback. Testers work predominantly with external quality factors. Testers constantly look to reduce manual testing effort through automation of testing. Automating testing activities where possible frees up testers to work on developing better testing strategies and focus on exploratory testing. Automated tests provide a repeatable, efficient and predictable safety net for the entire team to work within.


Colocation

When developers, testers, the product owner, business analysts, or anyone else required to enable a team to create value without depending upon another team sit together to tap into the benefits of ambient knowledge transfer and rapid face-to-face communication.

Complexity Points

Complexity points (aka story points) are a relative measure of a set of stories (or work). The ultimate aim of using points is simply to validate whether the story has been broken down enough.

Complexity Thinking

Complexity theory refers to the study of complex systems and, in an organisational context, the application of complexity theory to strategy.

Continuous Delivery

Continuous delivery (CD) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time.

Continuous Integration

In software engineering, continuous integration (CI) is the practice of merging all developer working copies to a shared mainline several times a day. Grady Booch first named and proposed CI in his 1991 method, although he did not advocate integrating several times a day. Extreme programming (XP) adopted the concept of CI and did advocate integrating more than once per day – perhaps as many as tens of times per day.

Cross Functional Team

A cross-functional team is formed when developers, testers, a product owner, business analysts, or anyone else required to enable the team to create value without depending upon another team come together as a single team.

Daily Standup

AKA Daily Scrum

A very short and concise daily team meeting where the team gather around the board to discuss the flow of work through the team. The objective of the meeting is to remove impediments or blockers preventing the team from making progress. The meeting is conducted standing up to keep the meeting short.

Teams who are following the Scrum methodology tend to run their daily stand-ups whereby each team member quickly covers their progress, planned work, and any blockers to delivery.

Teams who are following the Kanban method tend to walk the board from right to left, focusing on removing blockers, impediments and drag factors.

Top Tips for better stand-ups:
• Get each team member to take a turn facilitating the daily stand-up. This stops any one person dominating the stand-up. Rotate daily.
• Walk the wall from right to left. Focus on the work, not each other. That is, start at the rightmost column on the wall and work left.
• For each card on the wall ask the question “how can we get this card moved to the right?”. Too many teams treat the daily stand-up as a status update. Instead, the daily stand-up should be about problem solving.
• Gather in close around the card wall. Don’t be shy. If you can’t see the detail on the cards then you’re too far away. Get in close and jostle for position if needed.
• Come prepared. Make sure you’ve updated the card wall before the daily stand-up, not during.
• Look right or down before looking left. That is, when you’ve completed a card on the wall, first look right or in your current column to see if you can help anyone else finish their work. If you’ve exhausted all avenues to help finish work, only then should you look left to start more work.
• Focus on unblocking blocked work. Summon the necessary people required to unblock a card to the daily stand-up so they can see the impact they’re having on the team.
• Blockers with unblock ETA. If a blocker has an ETA date for unblocking (days or weeks) in the future, do you really need to have a repeat discussion about the blocker every day?
• Use green arrows to show movement. When someone in the team moves a card on the wall, stick a green arrow icon on it. As the day goes on, more and more arrows should appear on the card wall. Then, at the daily stand-up you will see a nice clear summary of movement over the previous day. At the end of the daily stand-up, “reset” the card wall by removing all the green arrows. External stakeholders to the team particularly like this feature.
• Card wall admin. At the end of the daily stand-up, the person who facilitated it is responsible for tidying up the wall, i.e. straighten the cards, redraw any faded lines, remove the green arrows, and sync the physical card wall with the electronic card wall.
• Review Goals. Whatever goals you have in place, it’s really important to keep these at the forefront of everyone’s mind.

Definition of Done

Each team creates its own definition of done. The definition of done describes the point at which a story (or piece of work) should be considered done. A simple example: done means the story has been designed, developed, tested, defects resolved, and the customer has signed it off.


Discovery

The first phase in the development of an Agile project, where service users’ needs are researched, what should be measured is established, and technological or policy-related constraints are explored.

Distributed Development Team

A distributed development team works across multiple business worksites or locations. Team members may not see each other face to face, but they are all working collaboratively toward the outcome of the project.


Epic

An epic can be thought of as a large user story that will be broken down into smaller stories. An epic may be delivered over more than one sprint or release. Epics help create structure and hierarchy, and their scope may change over time. They are a great way to roll up functionality into an abstract wrapper, deferring the effort of breaking it down into smaller chunks until later.

Examples of epics include: user registration, login, user preferences, search for products, search results page, add to shopping cart, cart checkout, payment, admin products, admin users, and view transactions. Each of these should be considered an epic and will certainly require breaking down (aka splitting) at some point before the stories are ready for developers.

Epics can form part of a story map, which gives delivery teams and stakeholders orientation on the overall scope of the work at hand. This prevents the team and stakeholders getting bogged down in the detail and provides an easy-to-consume overview or mental model.

On an Agile project or work stream, the Business Analyst typically works with the Product Owner and key stakeholders to identify the major epics of the system to be developed. The discussions are kept at a high level, because diving into the detail of each epic is usually a waste of time: the detail will be lost over time and needs to be discussed at the point of development. When carving out epics it’s important to go beyond just involving your business analyst. All members of the team should be involved in this activity so that they can provide expert input and guidance about the best way of solving the problem.

Common related tasks associated with epics are estimation and sizing, story splitting, story writing, test strategy, and behaviour-driven development techniques.

Exponential Scale

1, 2, 4, 8, 16, 32 – a common story point scale used as a relative measure of a set of stories (or work). The ultimate aim of using points is simply to validate whether the story has been broken down enough.


Estimation

In an Agile methodology, estimation is a prediction of how soon a certain task, project, or product will be completed. Good estimation is hard, but it is necessary to give the product owner an approximate timeline. Teams often struggle with this part of Agile – often quoting “we don’t do estimation in Agile”. This isn’t helpful for most stakeholders and only serves to fuel an anti-Agile movement. Estimation is a real-world duty, and it isn’t just a problem for Agile methodologies – it’s a problem for any software project. It usually comes down to two questions: how much will it cost, and how long will it take?

You can greatly reduce estimation complexity by simply exploring the constraints you face – a technique Ian Carroll first coined as Constraint Driven Estimation. Explore and capture the key constraints. Is there a hard deadline or a soft deadline? Is there a fixed budget or cap? If not, how much are you willing to spend to solve the problem – in other words, what’s your business case for doing this work? Another constraint is team capacity: how many people can you get under the bonnet at the same time? Work backwards from that. These constraints influence the options you have for answering the cost and time questions. Once you understand your constraints, you can shape your scope accordingly.

When estimating at the team level, teams often adopt the planning poker technique. Whilst this is a good tool for generating valuable conversation, it can also result in wasted time and inaccuracy. A more valuable method for estimation is relative sizing. As humans, we’re very poor at saying how big something is in absolute terms, but we’re extremely good at judging when something is bigger than something else. Use this technique for estimation.
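The relative sizing idea above – ordering stories by pairwise “bigger than” judgments rather than absolute numbers – can be sketched in a few lines. This is purely illustrative: the story names and the judgments in `team_judgement` are invented stand-ins for the conversation a real team would have.

```python
# Minimal sketch of relative sizing by pairwise comparison.
# In practice the "judgments" come from team discussion, not a lookup table.
from functools import cmp_to_key

# Hypothetical team judgments: -1 means the first story is smaller.
team_judgement = {
    ("login", "search"): -1,
    ("login", "checkout"): -1,
    ("search", "checkout"): -1,
}

def compare(a, b):
    """Return -1/0/1 using the team's pairwise judgments in either direction."""
    if a == b:
        return 0
    if (a, b) in team_judgement:
        return team_judgement[(a, b)]
    return -team_judgement[(b, a)]

stories = ["checkout", "login", "search"]
ordered = sorted(stories, key=cmp_to_key(compare))
print(ordered)  # smallest to largest: ['login', 'search', 'checkout']
```

Once stories are ordered, point values (or T-shirt sizes) can be read off the ordered list in bands, avoiding the false precision of estimating each story in isolation.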

Fail Fast

Fail fast is an Agile philosophy that stresses incremental development and rigorous testing to check whether an idea will add value to the product. Fail fast is important because it limits the losses associated with an idea through quick testing before trying something else. In layman’s terms, the whole cycle of trying something quickly, gathering feedback, inspecting rapidly, and then adapting as necessary can be termed Fail Fast.

Due to the negative baggage associated with the term “fail”, many organisations prefer instead to use the philosophy of “Learn Fast”, which puts a much more positive slant on the same idea. There are many techniques and tools from the Lean and Agile stable of practices to help you learn fast. Rapid prototyping is a great way to learn fast – and to learn cheaply. You can use tools as basic as PowerPoint to produce a click-through prototype and get rapid feedback from your target audience; more advanced tools include Balsamiq Mockups, Axure, and others.

It’s important to learn fast/fail fast because otherwise you risk throwing away large sums of money for little or no return. Typically, most teams have the capability to test and validate ideas before they incur expensive development, test, and release effort. The Product Owner often works with a User Experience designer or User Researcher upstream from the delivery team to create, test, and validate new ideas. This activity usually operates on its own cadence, separate from the development team.

Other Agile techniques often associated with fail fast or learn fast are regular demonstrations, show and tells, showcasing, guerrilla testing, split testing, A/B testing, and multivariate testing.
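Split (A/B) testing, one of the learn-fast techniques mentioned above, often boils down to bucketing users deterministically so each user always sees the same variant, with a ramp percentage controlling how many users see the new idea. A minimal sketch, with invented function and parameter names:

```python
# Illustrative split-test assignment: hash the user ID into a stable
# bucket (0-99) and show the new variant to the first N percent.
import hashlib

def assign_variant(user_id: str, ramp_percentage: int) -> str:
    """Return 'new' for the first ramp_percentage% of users, 'control' otherwise."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0..99 for this user
    return "new" if bucket < ramp_percentage else "control"

# Start by exposing the new idea to 5% of users to gauge acceptance...
print(assign_variant("user-42", 5))
# ...and ramp to 100% once you're happy with the feedback.
assert assign_variant("user-42", 100) == "new"
```

Hash-based bucketing is preferable to random assignment per request because a given user gets a consistent experience across sessions, which keeps the feedback you gather clean.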


Feature

A feature is a set of functionality that enhances the value of a product. Ideally, a feature should add value to the product, be easy to estimate, fit within an iteration, and be testable. Commonly, however, features are large and their development must be spread across multiple iterations or sprints.

An important concept in Agile software development is the Minimal Marketable Feature (MMF): what is the minimum amount of functionality we can ship for this feature to release value? Following the initial release, further releases can enrich the feature with higher-fidelity functionality, again in small batches, to realise incremental value. A very common behaviour is that once an MMF is released, the product owner becomes frustrated because organisational expectations have moved on to the next big feature, while the product owner knows that not all the value has yet been realised from the MMF. To combat this, Product Owners should use product roadmaps to set expectations.

Features are often mapped out in a user story map, whereby stories are linked to features. The terms epic and feature are often used interchangeably; a feature could be made up of several epics. A common requirements hierarchy is Programme -> Product -> Feature -> Epic -> Story -> Defect.

Features are regularly depicted as diagrams or wireframes showing how users interact with them. Other useful ways to express a feature include Unified Modelling Language (UML), entity relationship diagrams, and many more.

Feature Teams

A feature team is a cross-functional, stable, long-lived team that focuses on iteratively supporting, improving, and maintaining one or more customer-centric feature areas of a product. Feature teams are an important part of an organisation and help ensure that the development cycle produces minimal waste while shipping maximum value. They often have the autonomy to release new features to a live environment – whilst respecting sensible separation-of-duty controls – through end-to-end deployment and test automation. Because feature teams tend to support the software they release, we find quality increases and live incidents decrease.

Moving to feature teams can be a daunting prospect for organisations that operate as project teams handing over to support teams, and automation is critical to enabling feature team formation. The decision to move to feature teams comes under the umbrella of organisational design. When considering re-organising your teams you must consider Conway’s Law, which describes the influence your communication channels – i.e. your team structures – have on the architecture of the solution you create. The opposite is also true, which we refer to as reverse Conway’s Law.

There is no blueprint for the perfect org structure where software development is concerned, but there are signals that tell you whether your structure is right for you. The first is multiple teams blocking each other: if team A is constantly blocked by team B, maybe you need to merge the teams, or some of their capabilities. If you’re constantly splitting stories to be delivered by different teams, again, look at the makeup of your teams. As your architecture evolves, so must your team structures.

Fibonacci Scale

1, 2, 3, 5, 8, 13, 21 – a common story point scale used as a relative measure of a set of stories (or work). The ultimate aim of using points is simply to validate whether the story has been broken down enough.
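Since the aim of a point scale (Fibonacci or exponential) is to check whether a story has been broken down enough, a common move is to snap a raw estimate up to the next value on the scale, and to treat anything off the top of the scale as a signal to split the story. A minimal sketch of that logic; the function name is hypothetical:

```python
# Snap a raw relative-size guess onto a story point scale.
FIBONACCI_SCALE = [1, 2, 3, 5, 8, 13, 21]

def nearest_point(raw_estimate, scale=FIBONACCI_SCALE):
    """Round a raw estimate up to the next value on the scale.

    Rounding up (rather than to the nearest value) reflects the common
    practice of estimating pessimistically under uncertainty.
    """
    for point in scale:
        if raw_estimate <= point:
            return point
    # Off the top of the scale: the story is too big and should be
    # split before it is estimated.
    return None

print(nearest_point(4))   # -> 5
print(nearest_point(13))  # -> 13
print(nearest_point(30))  # -> None: split the story
```

The widening gaps in both scales are the point: the bigger the work, the less meaningful fine-grained distinctions become, so the scale refuses to offer them.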

Inspect and Adapt

Agile methods are empirical processes: rather than following a fixed plan, teams regularly inspect the software product, the software development process, and the way work is managed, and then adapt based on what they learn in order to improve all three.


Retrospective

The final team meeting in the sprint, used to determine what went well, what didn’t go well, and how the team can improve. The focus is on performance and improvement. Similar to a lessons-learned session, but run every two weeks instead of right at the end of the project or stage gate.

Show & Tell / Showcase

A demonstration event for the team to present work completed during the sprint. The product owner reviews the work completed to date and stakeholders provide feedback.


Spike

A short, time-boxed piece of research, usually technical, on a single story, intended to provide just enough information for the team to estimate the size of the story or de-risk it.

Technical Debt

Technical debt is a concept in programming that reflects the extra development work that arises when code that is easy to implement in the short run is used instead of applying the best overall solution.
Technical debt can be compared to monetary debt. If technical debt is not repaid, it can accumulate ‘interest’, making it harder to implement changes later on. Unaddressed technical debt increases software entropy. Technical debt is not necessarily a bad thing, and sometimes (e.g., as a proof-of-concept) technical debt is required to move projects forward. On the other hand, some experts claim that the “technical debt” metaphor tends to minimize the impact, which results in insufficient prioritization of the necessary work to correct it.
Some people classify technical debt in two ways: intentional and unintentional. Intentional technical debt is when a team, in conjunction with the product owner, decides to incur technical debt as the result of a trade-off. A very common cause of this kind of debt is releasing software with known defects – often low-priority defects that are deferred to a later release. Unintentional technical debt is usually the result of inexperience or poor development practice.
It is very common for organisations not to take technical debt seriously, addressing it only when it comes back to haunt them 18 months down the line – usually at great expense, with a large investment to re-platform. Technical debt should be taken extremely seriously and serviced as part of the normal course of delivery. Tech debt should be made highly visible and treated as an equal work item to other types of demand, such as new features. Development teams need to work harder to express the value each tech debt item would deliver if it were tackled, just as other stakeholders have to state the expected value of their demand during prioritisation sessions.
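The “interest” metaphor above is just compound growth: a fix deferred today tends to cost more later, as workarounds accumulate on top of it. A back-of-the-envelope sketch – the numbers and the monthly rate are invented purely to illustrate the shape of the curve, not to predict real costs:

```python
# Illustrative compounding of deferred-fix cost.
def cost_of_deferral(fix_cost_now, monthly_rate, months_deferred):
    """Cost of fixing later, if the fix grows by monthly_rate each month."""
    return fix_cost_now * (1 + monthly_rate) ** months_deferred

# A 5-day fix, left for 18 months at a hypothetical 10% monthly "interest":
print(round(cost_of_deferral(5, 0.10, 18), 1))  # -> 27.8 (days of effort)
```

The exact rate is unknowable in practice; the point of the metaphor is that the growth is multiplicative, which is why servicing debt continuously is cheaper than a periodic re-platform.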
