Building an AI Engineering Squad with GitHub Copilot
Cover Photo by @ivvndiaz on Unsplash
Most conversations about AI coding assistants focus on productivity. Tools like GitHub Copilot are usually framed as ways to write code faster, reduce boilerplate, or help developers stay in flow.
But recently I’ve been exploring a slightly different idea: what happens when you move beyond a single AI assistant and instead build a team of them.
This weekend I accidentally ran a rather unusual experiment.
I’ve spent a lot of time over the past couple of years working with AI engineering tools — helping large engineering teams adopt them, running hackathons with GitHub engineers, and speaking with organisations about how AI can be integrated into the software development lifecycle.
But there’s a big difference between understanding the tooling and enabling others to use it, and using it yourself in the trenches.
Side projects are where I get to experiment with that second part.
Like many engineers, I maintain a side project that lets me experiment with ideas and technologies outside my day job. In my case, it also happens to support one of my long-standing hobbies: an unhealthy addiction to cookbooks.
The platform I’m building is designed to make recipes easier to manage and actually use. Anyone who collects cookbooks will know the problem: dozens (or hundreds) of books, amazing recipes buried inside them, and very little practical way to organise or retrieve them.
The platform itself is a fairly typical modern architecture:
The project has been a great excuse to explore some new engineering ideas. Recently that has included experimenting with AI-enabled development workflows.
Part of that motivation is practical: I write far less React these days than many of the engineers I work with. Like many technical leaders, my work is more architecture and direction than hands-on UI development.
AI coding assistants are an interesting way to bridge that gap.
But rather than using a single assistant, I wanted to push things further.
Instead of treating Copilot as a helper sitting beside me, I started experimenting with the idea of creating a full engineering squad of AI agents.
Each agent has a specific role and responsibility, similar to how a real engineering team might be structured.
Current roles include things like:
And because engineers are still engineers, every agent is named after an Autobot from the Transformers universe.
This setup is powered by:
The agents operate with a shared context that includes:
Effectively, they maintain a collective memory of how the team works, decisions that have been made, and improvements that should persist across sessions.
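As a rough illustration of what that collective memory could look like: an append-only log of decisions, stored as plain files in the repo, that any agent can read at the start of a session. This is a hypothetical sketch under my own assumptions — the `DecisionLog` class and file layout are invented for illustration, not the actual mechanism behind the squad.

```python
import json
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class Decision:
    agent: str    # which agent made or received the decision
    topic: str    # short label, e.g. "error-handling"
    summary: str  # the decision itself, in a sentence or two

class DecisionLog:
    """Append-only shared memory that persists across agent sessions."""

    def __init__(self, path: Path):
        self.path = path

    def record(self, decision: Decision) -> None:
        # One JSON record per line, so appends never rewrite history.
        with self.path.open("a") as f:
            f.write(json.dumps(asdict(decision)) + "\n")

    def recall(self, topic: str) -> list[Decision]:
        """Return every past decision on a topic, oldest first."""
        if not self.path.exists():
            return []
        with self.path.open() as f:
            records = [json.loads(line) for line in f]
        return [Decision(**r) for r in records if r["topic"] == topic]
```

A file like this, checked into the repository, gives every agent the same view of prior decisions without any external infrastructure.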
Most of the time my workflow still looks fairly normal. I’m writing code, experimenting with prompts, and nudging the agents in different directions.
But the squad approach allows something interesting: the agents can work semi-independently once tasks are defined.
Typical workflow looks like this:
In other words, my role shifts slightly from developer to architect and reviewer.
I still make the core design decisions, but the agents handle a surprising amount of the implementation work.
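That division of labour can be modelled as a small task lifecycle — human defines, agents implement, human reviews. The sketch below is my own framing of the loop; the states and the `Task` type are invented labels, not anything Copilot exposes.

```python
from dataclasses import dataclass, field

# Hypothetical lifecycle: the human defines and reviews,
# the agents do the implementation work in between.
STATES = ["defined", "in_progress", "in_review", "done"]

@dataclass
class Task:
    title: str
    assignee: str                          # which agent picked it up
    state: str = "defined"
    notes: list[str] = field(default_factory=list)

    def advance(self, note: str = "") -> None:
        """Move the task one step along the lifecycle."""
        i = STATES.index(self.state)
        if i == len(STATES) - 1:
            raise ValueError("task already done")
        self.state = STATES[i + 1]
        if note:
            self.notes.append(note)
```

The reviewer only touches a task at the "defined" and "in_review" ends, which is roughly where my time went during this experiment.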
This weekend wasn’t supposed to be an experiment.
Our 11-year-old dog Daisy has diabetes and tends to wake up very early. Sunday mornings are my shift so my wife can have a lie-in.
Normally I use those quiet hours to get ahead on some work while the house is asleep.
Unfortunately this particular Sunday I was also suffering from a fairly unpleasant bout of the lurgy. Concentration was not exactly at its best, and nobody wants to read an email on Monday that was written while your brain clearly wasn’t firing on all cylinders.
So instead I did what any responsible adult would do.
I wrapped myself in a blanket, put The Good Doctor on Netflix, and sat on the sofa feeling sorry for myself.
But because I’m still me, I also decided to try something slightly nerdy.
Rather than coding directly, I let the AI engineering squad take over the implementation work.
Over the course of several hours the agents worked through a number of tasks while I mostly acted as a reviewer and occasional guide.
Among other things they:
The most interesting part wasn’t just that they could do the work.
It was that they learned from the process.
If an agent produced a sub-optimal fix, I would correct it. Because the system maintains historical context and decision trees, those corrections became part of the team’s shared memory.
Even more interestingly, the agents can run their own retrospectives. When prompted, they analyse previous work and suggest improvements to their own processes and memory structures.
Which feels oddly familiar to anyone who has spent time in agile retrospectives.
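A crude stand-in for that retrospective step: scan the log of past corrections and flag any category that recurs often enough to suggest a process change. This is a simplified sketch of the idea, assuming corrections are tagged with a category — the function and threshold are my own invention.

```python
from collections import Counter

def retrospective(corrections: list[dict], threshold: int = 2) -> list[str]:
    """Surface correction categories that recur often enough to
    warrant updating the team's shared guidance."""
    counts = Counter(c["category"] for c in corrections)
    return [
        f"Recurring issue: '{cat}' corrected {n} times - consider updating shared guidance."
        for cat, n in counts.most_common()  # most frequent first
        if n >= threshold
    ]
```

The real agents reason over free text rather than tagged records, but the shape of the loop — accumulate corrections, look for patterns, feed them back into shared memory — is the same.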
What this experiment reinforced for me is that the role of engineers may be shifting.
Rather than acting purely as implementers, engineers increasingly become:
Future engineering teams might not necessarily grow larger in terms of people.
Instead we may see something closer to:
Each handling narrow, well-defined tasks across the development lifecycle.
This is still very early days, but the trajectory is becoming increasingly clear.
I suspect I’ll write more about agentic development and how it may reshape engineering teams in a future post.
This post was also written with the help of an AI assistant — not to invent the ideas, but to take the slightly delirious brain-dump of a sick engineer watching Netflix and turn it into something that hopefully reads like a coherent article rather than the ramblings of someone who probably should have taken more Lemsip.
Also worth noting: this isn’t a sponsored post. The tools mentioned here are simply the ones I happen to be experimenting with at the moment.