AI in Agile: Cutting Through the Noise

There is no shortage of people telling Project Managers and Product Managers that AI will transform agile delivery. The vendor landscape is crowded, the conference circuit is loud, and the pressure from leadership to "do something with AI" is familiar to anyone in a delivery role. What is in shorter supply is a clear-eyed account of where AI is genuinely useful in an agile context, where it is hype dressed up as capability, and what a PM should actually prioritise.

This post attempts to provide that account - drawing a clear line between signal and noise, and giving both Project and Product Managers a practical frame for where to direct their attention.

The question is not whether AI belongs in agile delivery. It already does. The question is whether you are using it where it compounds value or where it just adds activity.

What AI is actually changing in agile delivery

To cut through the noise, it helps to look at the agile ceremonies and artefacts where AI is demonstrably changing what is possible - and be honest about the ones where the gains are marginal or the risks are underappreciated.

Backlog refinement (High impact)

AI is producing meaningful gains in backlog quality. Given a high-level feature request or user interview transcript, a well-prompted AI can generate draft user stories, surface missing acceptance criteria, flag ambiguities, and identify dependencies that are likely to create friction in sprint planning. This does not replace a Product Manager's judgement - but it compresses the time from rough idea to refinement-ready story significantly.

Where PMs add value: Context, stakeholder intent, and the strategic prioritisation that AI cannot infer from text alone. The AI drafts, the PM shapes.

Sprint planning (Moderate impact)

AI tools connected to your delivery data - historical velocity, team capacity, and ticket complexity - can improve the accuracy of sprint planning by surfacing patterns the team has learned to overlook. If your team has historically underestimated stories involving a specific integration layer, AI can flag that before commitment, not during the retrospective.

Where PMs add value: Team dynamics, morale, and the contextual factors that affect capacity in ways no dataset captures - holidays, competing priorities, knowledge gaps on the team.
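The kind of pattern-surfacing described above does not even require a model to illustrate. A minimal sketch, assuming ticket data exported from your tracker with estimated and actual effort per story (all field names here are illustrative, not any tool's real schema):

```python
from collections import defaultdict

def estimation_bias_by_component(tickets):
    """Mean (actual / estimated) effort ratio per component.

    Ratios well above 1.0 flag areas the team habitually underestimates.
    `tickets` is a list of dicts with illustrative keys: 'component',
    'estimate' and 'actual' (both in story points or hours).
    """
    totals = defaultdict(lambda: [0.0, 0])
    for t in tickets:
        if t["estimate"] > 0:
            totals[t["component"]][0] += t["actual"] / t["estimate"]
            totals[t["component"]][1] += 1
    return {c: round(s / n, 2) for c, (s, n) in totals.items()}

tickets = [
    {"component": "payments-api", "estimate": 3, "actual": 8},
    {"component": "payments-api", "estimate": 5, "actual": 9},
    {"component": "web-ui", "estimate": 5, "actual": 5},
]
print(estimation_bias_by_component(tickets))
# → {'payments-api': 2.23, 'web-ui': 1.0}
```

An AI layer adds value on top of a baseline like this by explaining the pattern in context and drafting the planning conversation, not by doing the arithmetic.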

Standups and reporting (High impact)

Automated standup summaries, progress reports synthesised from Jira or Azure DevOps data, and AI-generated status updates sent to stakeholders are among the highest-ROI applications available right now. These are tasks that consume significant PM time and produce output that is often templated and repetitive. AI handles them well and frees up time for the relational and strategic work that actually moves a project forward.

Where PMs add value: Narrative framing, stakeholder calibration, and knowing when a status update needs a human voice rather than a generated summary.

Retrospectives (Use carefully)

AI can analyse patterns across retrospective data, flag recurring themes, and surface action items that were raised but never resolved. This is genuinely useful when applied to historical retrospective data. Where it gets complicated is in the ceremony itself. Retrospectives depend on psychological safety and candid human conversation. An AI summarising or facilitating in the room changes the dynamic in ways that may suppress the honesty the ceremony depends on.

Where PMs add value: Creating the conditions for honest conversation. AI is better used after the retrospective than during it.

Risk and dependency mapping (High impact)

Identifying risks and cross-team dependencies is one of the areas where AI produces the most consistently valuable output. Given a programme-level backlog or a set of delivery plans, AI can surface interdependencies, flag sequencing risks, and identify where slippage in one team is likely to cascade. This is pattern recognition at scale - and it is exactly the kind of work that is easy to underinvest in when a PM is managing delivery pressure in real time.

Where PMs add value: Validating AI-identified risks against stakeholder relationships and political context that does not appear in any system of record.
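The cascade question - if this team slips, who else is affected - is at heart a graph walk. A minimal sketch with invented team names, assuming you can export a simple map of which team depends on which:

```python
def cascade_impact(dependencies, slipped_team):
    """Teams whose delivery is at risk if `slipped_team` slips.

    `dependencies` maps each team to the teams it depends on
    (illustrative names). A breadth-first walk over the reversed
    graph finds everyone downstream of the slippage.
    """
    # Reverse the graph: who depends on whom.
    dependents = {}
    for team, deps in dependencies.items():
        for d in deps:
            dependents.setdefault(d, set()).add(team)
    affected, frontier = set(), [slipped_team]
    while frontier:
        team = frontier.pop()
        for dep in dependents.get(team, ()):
            if dep not in affected:
                affected.add(dep)
                frontier.append(dep)
    return sorted(affected)

deps = {"checkout": ["payments-api"], "reporting": ["checkout"],
        "payments-api": [], "mobile": ["payments-api"]}
print(cascade_impact(deps, "payments-api"))
# → ['checkout', 'mobile', 'reporting']
```

The hard part AI helps with is extracting that dependency map from messy backlogs and plans in the first place; once the map exists, the cascade itself is mechanical.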

Product discovery (Emerging)

AI-assisted synthesis of user research, competitor analysis, and market signals is one of the most actively developing areas for Product Managers. Tools that can process interview transcripts, support tickets, and NPS data together to surface emerging themes are already available. The quality varies significantly by tool and by how well the PM has structured the inputs. This is an area worth investing time in now, with realistic expectations about current limitations.

Where PMs add value: Framing the right questions for research, interpreting findings in the context of strategy, and making the judgement calls that turn insight into prioritised action.

Signal versus noise: an honest assessment

Not everything marketed as "AI for agile" deserves attention. Some of it is existing features relabelled; some of it is capability that sounds impressive in a demo but does not survive contact with the complexity of a real programme. Here is a clear-eyed split.

Signal - worth your time

AI-generated user story drafts from feature briefs or interview notes
Automated sprint and programme-level status reporting
Pattern analysis across velocity, estimation accuracy, and retrospective data
Dependency and risk detection across a large backlog
Research synthesis from user interviews, tickets, and feedback data
Acceptance criteria completeness checks before refinement

Noise - be sceptical

AI "predicting" sprint success based on thin or poorly structured data
AI-facilitated retrospectives that reduce psychological safety
Automated prioritisation without PM oversight of strategic context
AI-generated estimates presented as more reliable than team judgement
Tools that add AI branding to basic workflow automation
Chatbot-driven stakeholder engagement replacing human relationship management

What Project Managers should prioritise

Project Managers operate at the intersection of people, process, and delivery risk. The AI applications that matter most in this role are the ones that reduce the overhead of coordination and reporting, freeing up time for the human judgement calls that no automation can make.

1. Automate your reporting cadence first

Status updates, programme-level dashboards, and stakeholder summaries are the highest-volume, lowest-differentiation work in most PM roles. Connect your project management tooling to an AI layer - via Copilot, Jira's AI features, or a custom integration - and reclaim the hours spent compiling information that a system already holds. Use those hours for the conversations that actually move the project.
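Much of a status update is pure compilation that a system already holds. A minimal sketch, assuming ticket data exported from your tracker (field names and statuses are illustrative):

```python
def status_summary(tickets, sprint_name):
    """Compile a templated status update from exported ticket data.

    `tickets`: dicts with illustrative keys 'status' ('done',
    'in_progress', 'blocked') and 'title'. In practice you would pass
    this summary plus the raw list to an AI layer for narrative polish;
    the compilation step itself needs no AI at all.
    """
    by_status = {"done": [], "in_progress": [], "blocked": []}
    for t in tickets:
        by_status.setdefault(t["status"], []).append(t["title"])
    lines = [f"{sprint_name}: {len(by_status['done'])} done, "
             f"{len(by_status['in_progress'])} in progress, "
             f"{len(by_status['blocked'])} blocked"]
    lines += [f"BLOCKED: {t}" for t in by_status["blocked"]]
    return "\n".join(lines)

tickets = [{"status": "done", "title": "Login audit"},
           {"status": "blocked", "title": "SSO rollout"}]
print(status_summary(tickets, "Sprint 14"))
```

Where the AI earns its place is turning that skeleton into stakeholder-calibrated prose - which is exactly the part you should still read before sending.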

2. Use AI as an early warning system

Configure AI to monitor for the signals that precede delivery risk - velocity decline, rising work-in-progress, stories that have been in refinement for multiple sprints without resolution, or dependencies that have no confirmed owner. The value is not in the AI catching what you would have caught anyway. It is in catching things earlier, when the options for responding are broader.
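The simplest of those signals can be expressed as explicit rules. A minimal sketch, assuming sprint metrics exported from your tooling; the keys and thresholds are illustrative assumptions to tune, and an AI layer would sit on top to explain and prioritise what gets flagged:

```python
def sprint_warnings(sprints, wip_limit=10):
    """Flag early-warning signals from recent sprint metrics.

    `sprints` is a chronological list of dicts with illustrative keys:
    'velocity' (points completed) and 'wip' (average work in progress).
    Returns human-readable warnings.
    """
    warnings = []
    if len(sprints) >= 3:
        v = [s["velocity"] for s in sprints[-3:]]
        if v[0] > v[1] > v[2]:  # three consecutive declines
            warnings.append(f"Velocity declining for 3 sprints: {v}")
    if sprints and sprints[-1]["wip"] > wip_limit:
        warnings.append(f"WIP {sprints[-1]['wip']} exceeds limit {wip_limit}")
    return warnings

history = [{"velocity": 34, "wip": 8}, {"velocity": 29, "wip": 11},
           {"velocity": 24, "wip": 13}]
print(sprint_warnings(history))
```

The point is not the rules themselves but their cadence: checks like these run every day, not once a fortnight, which is what buys you the earlier, cheaper intervention.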

3. Run AI against your risk log regularly

A risk log that is updated once a fortnight in a planning meeting is already stale. Use AI to cross-reference your risk register against your delivery data, your programme schedule, and external signals on a continuous basis. Ask it which risks have changed in likelihood given recent progress. It will not have perfect judgement - but it will surface conversations worth having before they become urgent.
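Even the staleness gate alone is worth automating. A minimal sketch with an invented register shape: in practice the continuous review would also feed each flagged risk, plus current delivery data, into an AI prompt asking whether its likelihood has changed, but that step is omitted here:

```python
from datetime import date

def stale_risks(risks, today=None, max_age_days=14):
    """Flag register entries not reviewed within `max_age_days`.

    `risks`: dicts with illustrative keys 'id' and 'last_reviewed'
    (a datetime.date).
    """
    today = today or date.today()
    return [r["id"] for r in risks
            if (today - r["last_reviewed"]).days > max_age_days]

risks = [
    {"id": "R-12", "last_reviewed": date(2024, 5, 1)},
    {"id": "R-19", "last_reviewed": date(2024, 5, 20)},
]
print(stale_risks(risks, today=date(2024, 5, 22)))
# → ['R-12']
```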

4. Protect your relationship time

The time AI saves in reporting and administration should not be absorbed by more process. It should flow into stakeholder relationships, team coaching, and the kind of deliberate programme-level thinking that gets deprioritised when a PM is buried in status decks. This is where AI creates leverage, not just efficiency.

What Product Managers should prioritise

Product Managers face a different set of pressures - balancing discovery with delivery, managing stakeholder expectations about what is in scope, and making prioritisation decisions under conditions of genuine uncertainty. AI's best contributions here are in expanding the information base those decisions rest on.

1. Invest in AI-assisted discovery

The signal-to-noise ratio in user research is a genuine problem. Interview transcripts, support tickets, feature requests, and NPS verbatims all contain insight - but synthesising them manually is slow, and important themes get missed. Use AI to process this data at scale, identify emerging patterns, and surface the questions your research has not yet answered. Then use that synthesis to run sharper, more focused discovery sessions.
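To make the synthesis step concrete, here is a deliberately crude sketch: keyword-based theme tagging across mixed feedback sources. The keyword-to-theme map is invented; real tooling would use embeddings or an LLM rather than string matching, but the shape of the output - theme counts you can interrogate - is the same:

```python
from collections import Counter

# Illustrative keyword -> theme map; a real pipeline would use
# embeddings or an LLM instead of literal string matching.
THEMES = {
    "slow": "performance", "timeout": "performance",
    "invoice": "billing", "refund": "billing",
    "confusing": "usability",
}

def tag_themes(feedback_items):
    """Count theme mentions across a list of free-text feedback items."""
    counts = Counter()
    for text in feedback_items:
        lowered = text.lower()
        for keyword, theme in THEMES.items():
            if keyword in lowered:
                counts[theme] += 1
    return counts

feedback = ["Checkout is slow on mobile", "Refund flow is confusing",
            "Timeout when exporting invoices"]
print(tag_themes(feedback))
```

Whatever the mechanism, the output worth demanding from any discovery tool is the same: countable themes traced back to source quotes, so you can verify the synthesis rather than trust it.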

2. Use AI to stress-test your backlog

Before sprint planning, run your upcoming stories through an AI completeness check - acceptance criteria, definition of done, dependency flags, and edge cases. This is a five-minute exercise that consistently catches gaps that would otherwise surface mid-sprint. It is not about replacing refinement; it is about arriving at refinement with better-prepared material so the conversation is more valuable.
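A deterministic baseline for that check is worth having even before any AI is involved. A minimal sketch with an invented story shape; the AI layer adds the harder judgements (ambiguity, missing edge cases) on top of rules like these:

```python
def completeness_check(story):
    """Rule-based pre-refinement check for a backlog story.

    `story` is a dict with illustrative keys; returns a list of gaps.
    """
    gaps = []
    if not story.get("acceptance_criteria"):
        gaps.append("no acceptance criteria")
    elif len(story["acceptance_criteria"]) < 2:
        gaps.append("only one acceptance criterion")
    if not story.get("definition_of_done"):
        gaps.append("no definition of done")
    if story.get("dependencies") is None:
        gaps.append("dependencies not assessed")
    return gaps

story = {"title": "Export audit log",
         "acceptance_criteria": ["CSV download works"]}
print(completeness_check(story))
```

Run it (or its AI-backed equivalent) over the whole upcoming sprint in one pass and bring the gap list, not the raw stories, into refinement.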

3. Get AI into your competitor and market monitoring

Product Managers are expected to have a view on the market, but the time available to develop that view is usually insufficient. AI can monitor competitor release notes, industry publications, and market signals continuously, synthesising what matters into a weekly digest you actually read. This is not a substitute for strategic thinking - but it ensures that thinking is based on current information rather than whatever you last had time to read.

4. Keep humans in the prioritisation decision

AI can score, rank, and weight backlog items against your stated strategy. It is worth using for that purpose - as a first pass and a challenge to your own instincts. But the prioritisation decision itself should stay with you. AI does not know which stakeholder relationship is under strain, which initiative has an implicit political commitment attached to it, or what the team needs to maintain motivation through a difficult quarter. Those factors belong in the decision, and only you can bring them.
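The first-pass scoring itself is simple enough to make transparent. A minimal weighted-scoring sketch with invented criteria and items - useful precisely because you can see the weights and argue with them, which is the point of a first pass:

```python
def score_backlog(items, weights):
    """Weighted scoring as a first-pass ranking, not the final decision.

    `items`: dicts mapping each criterion to a 1-5 score (plus a 'name');
    `weights`: criterion -> weight, summing to 1.0. All names illustrative.
    """
    ranked = sorted(
        items,
        key=lambda i: sum(weights[c] * i[c] for c in weights),
        reverse=True,
    )
    return [i["name"] for i in ranked]

weights = {"customer_value": 0.5, "strategic_fit": 0.3, "effort_saved": 0.2}
items = [
    {"name": "SSO support", "customer_value": 5, "strategic_fit": 4,
     "effort_saved": 2},
    {"name": "Dark mode", "customer_value": 3, "strategic_fit": 2,
     "effort_saved": 1},
]
print(score_backlog(items, weights))
# → ['SSO support', 'Dark mode']
```

Treat the output as a challenge to your instincts: where your gut ranking disagrees with the scored one, that disagreement usually points at an unstated factor worth making explicit.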

AI does not make a PM redundant. It makes the parts of the role that were always overhead more efficient, and raises the bar on what the human parts of the role need to deliver.

The risk of optimising for activity over outcome

There is a specific failure mode worth naming. When AI tools are adopted in an agile context without a clear view of what problem they are solving, they tend to produce more output - more artefacts, more summaries, more dashboards - without producing better outcomes. A backlog that is twice as long is not twice as valuable. A programme dashboard that updates in real time is not useful if nobody is acting on what it shows.

The discipline the best PMs bring to AI adoption is the same discipline they bring to their backlog: ruthless prioritisation. Pick the two or three applications where AI will demonstrably improve a delivery outcome, implement those well, and measure the result. Then decide what comes next. The teams that are getting the most from AI in agile delivery are not the ones using the most tools. They are the ones using the right tools, deliberately, in the right places.

A practical starting point for this sprint

Identify the reporting or documentation task that costs you the most time each week - the one you do out of habit or obligation rather than because it generates decisions. Connect it to an AI layer this sprint and measure how much time you reclaim. Then ask what you did with that time. The answer tells you whether you are using AI to improve outcomes or just to move effort around.
