If your organisation runs Azure DevOps, you have probably noticed agentic workflows are gaining momentum fast. GitHub's recent launch of Agentic Workflows and showcases like Peli's Agent Factory with over 100 agentic use cases demonstrate what becomes possible when AI enters the delivery pipeline. Autonomous agents that triage issues, diagnose failures and maintain documentation are no longer theoretical.
For teams on GitHub, this is compelling. For Azure DevOps, these capabilities are not yet available natively. The use cases, however, are entirely platform agnostic.
So we asked ourselves: why wait?
At AE, we hold a clear conviction: our customers deserve to be enabled, not constrained. When a promising technology emerges, our role is to bridge the gap. We ensure our customers can benefit today, on the platform they use, instead of waiting for it to appear on someone else's roadmap.
That conviction led us to build a custom Azure DevOps extension that brings agentic AI capabilities into standard pipelines. Not as a temporary fix, but as a deliberate investment in our customers’ ability to adopt new ways of working on their own terms.
The approach is built entirely on Azure-native components. Models run in Azure AI Foundry, ensuring organisations retain full control over model selection, data residency and cost. The extension integrates with the Model Context Protocol (MCP), an open standard that allows AI models to discover and use external tools at runtime. Azure DevOps work items, Jira tickets and monitoring dashboards become accessible by configuring which tools the agent is authorised to use.
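Conceptually, wiring this into a pipeline could look like the sketch below. The task name (`AgenticInference@1`), input names and tool identifiers are all illustrative placeholders, not the extension's actual schema:

```yaml
# Hypothetical sketch of an agentic step in a standard Azure DevOps
# pipeline. All names below are assumptions for illustration.
steps:
  - task: AgenticInference@1
    inputs:
      # Model deployment in Azure AI Foundry; endpoint and deployment
      # names are placeholders resolved from pipeline variables.
      foundryEndpoint: $(FOUNDRY_ENDPOINT)
      modelDeployment: $(MODEL_DEPLOYMENT)
      # Prompt file versioned alongside the pipeline code.
      promptFile: prompts/triage-failed-build.md
      # MCP tools this agent is authorised to use in this pipeline.
      allowedTools:
        - azure-devops.workitems.read
        - azure-devops.workitems.comment
```

The point of the sketch is the shape, not the names: the model, the prompt and the tool allowlist are all declared in the pipeline definition and therefore reviewed like any other code change.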
The prompts that guide the agent’s behaviour are simple Markdown files, versioned alongside pipeline code. They are reviewable, testable and shareable. We maintain a growing prompt library, validated in our own delivery pipelines before reaching customers. It forms a collective knowledge base that increases in value over time.
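As a purely illustrative example, a versioned prompt file of this kind might look as follows (the filename and wording are invented for this sketch):

```markdown
<!-- prompts/triage-failed-build.md — illustrative example only -->
# Role
You are a build-triage assistant for this repository.

# Task
Given the failing pipeline logs provided as input, identify the most
likely root cause and summarise it in three sentences or fewer.

# Constraints
- Only comment on the linked work item; never modify code or configuration.
- If the cause is ambiguous, say so and list the evidence needed to decide.
```

Because the file is plain Markdown in version control, changes to it go through the same pull request review as any pipeline change.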
A crucial distinction: agentic pipeline tasks are probabilistic by nature. They complement the deterministic core of CI and CD, they do not replace it. Builds, tests and deployments must remain predictable, reproducible and fully deterministic.
Agentic tasks add value where deterministic automation reaches its limits, in activities that require interpretation, context and judgement. That boundary is essential and we design deliberately around it.
The use cases we are exploring, from triaging incoming issues to diagnosing failed builds and keeping documentation current, illustrate how agentic AI removes friction from everyday engineering work.
These are not theoretical scenarios. They are already running in our own pipelines, allowing engineers to focus on work that genuinely requires human judgment.
Agentic workflows cannot be discussed without addressing security and governance.
A useful mental model in this context is the “lethal trifecta”, coined by Simon Willison. It arises when an LLM combines three things: access to untrusted input (issues, pull request text, logs), a privileged tool surface (service connections, repository write access, ticket updates, cloud APIs), and the ability to persist or act (committing code, opening pull requests, changing configuration). With all three in place, prompt injection can move from producing incorrect output to causing real operational impact.
Our extension is designed to constrain that risk by default. Tools are explicitly allow-listed per pipeline and environment. Identities run with least privilege. Agent outputs are limited to auditable artefacts such as comments, work item updates or draft pull requests, with human approval gates for any merge, deployment or production-facing action.
We also log prompts, tool calls, model and version details and generated artefacts so teams can meet governance and audit requirements without slowing delivery.
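As a sketch, a single audit record per agent invocation might capture fields like these. The field names and values are assumptions for illustration, not the extension's actual log format:

```yaml
# Illustrative audit record for one agent invocation.
run:
  pipeline: ci-main
  buildId: "20250512.3"
model:
  deployment: $(MODEL_DEPLOYMENT)
  apiVersion: $(MODEL_API_VERSION)
prompt:
  file: prompts/triage-failed-build.md
  gitCommit: $(Build.SourceVersion)
toolCalls:
  - tool: azure-devops.workitems.comment
    target: work-item-4312          # placeholder identifier
artefacts:
  - type: work-item-comment
    approved: false                 # pending human review
```

A record of this shape answers the typical audit questions: which model ran, under which prompt version, which tools it touched, and whether a human has approved the result.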
This development connects directly to the wider platform engineering and hyperautomation evolution across the industry. The DORA 2025 report highlights a clear insight: AI amplifies what already exists, whether efficiency or dysfunction. A well-designed internal platform, with golden paths, standardised pipelines and clear governance, creates the foundation for introducing AI safely into the delivery lifecycle.
Our agentic inference task is one building block within that platform.
Microsoft and GitHub are investing heavily in the future of AI-powered software delivery, and we welcome that direction. In the meantime, organisations that have invested in Azure DevOps do not need to remain on the sidelines. By building on open standards and Azure-native services, we help our customers gain hands-on experience with agentic workflows today. When native capabilities arrive, they will already understand what works, which prompts deliver strong results and how to embed AI into their delivery culture.
At AE, we do not wait for the future to arrive. We help our customers engage with it early.
Curious to see agentic pipelines in action? We are looking for organisations ready to explore their capabilities together. Get in touch and discover what agentic AI can bring to your software delivery lifecycle.