Recently at work, someone asked me why my team wasn’t creating detailed logical data models as part of their solution design. It seems quite simple to me – they don’t do it because nobody reads them. Most solution architecture documentation is out of date before it even gets logged in the architecture repository.
Why do we even write documentation?
I think we create documentation for one of two reasons:
- Collaborate now: as soon as there is a second person in a team, they need to start talking so that they can work together. The more people, the harder this is (which is why we try to limit teams to half a dozen people or so). Documentation is a great way for people to collaborate. When things are written down, everyone can see them, and by adding comments or questions we get to a resolved version.
- Communicate with the future: We make decisions today which are the best available given the information we have at the time. Sometimes we forget what we decided, or why, and documentation gives us that. It’s not about proving whether a decision was right or wrong – that’s largely irrelevant – it’s about understanding what led us to that decision in the first place. Perhaps the reasons are still valid and we just forgot what they were. Or perhaps information we have now changes things.
So really, design documentation is about ensuring that everyone’s on the same page, both now and in the future. Some designs are ephemeral – just to help us get our heads around what we want to do – and some are persistent, giving us a framework on which to hang our future plans.
Minimum Viable (Solution) Architecture
Given this framing, I think that good solution architecture contains four types of artefact. Not every solution needs all of them.
A lightweight specification
This one is slightly controversial, as plenty of teams skip this and rely purely on tickets in the backlog, but a short, high-level spec adds a framework to hang those tickets off. It doesn’t have to be complete – it will evolve over time – but it does need to help everyone understand common questions:
- What are we building? Why?
- What are the core goals and – often more important – what non-goals are we leaving out?
- What does “quality” look like? How resilient does it need to be? What sort of performance envelope are we targeting? How many users do we expect?
- Are there any constraints on the implementation – such as interoperability with other systems, regulatory requirements, or time dependencies that drive sequencing?
- Finally, are there any non-obvious assumptions baked in? It’s really hard to know what you’re assuming when you’re assuming it, and so this usually comes out during the peer review.
A box or context diagram or two
How does this solution interact with others around it, and if it’s particularly large or complex, how do its parts relate to one another? Typically, I’d look to see a diagram with boxes representing services or applications, lines between them annotated with the type of data being transferred (e.g. “trade” or “order”). Nothing complex. If you wanted to, you could use a structured format like C4, but that’s not necessary – my team usually create these in Figma.
Sequence diagrams or flow charts
Only needed where temporal or logical ordering is non-obvious or where unrecoverable failure modes are likely to be missed, these diagrams help developers and architects understand complex orders of events. But these diagrams can be confusing or feel “ivory tower” – so if you can understand what needs to be done without one, don’t include one.
Architecture Decision Records
The format is less important than the process. Ideally, these exist at two levels. Organisational ADRs capture things like a design approval (what was the version of the design, who was part of the review, any comments or follow up actions). A solution ADR is created to record specific design decisions. A couple of examples:
- Send notifications via email rather than SMS, even though SMS has higher open rates, as the team only has budget for a single channel and email provides functionality used elsewhere in the solution. Later, when the project grows or budgets change, the ADR explains the context behind the decision.
- Write to two independent Azure ZRS blob stores in different regions rather than rely on GRS because GRS replicates asynchronously with no guaranteed maximum replication lag and we need to guarantee an RPO of 0. In this case, the ADR justifies what would otherwise look like overly complex engineering.
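The dual-write pattern in that second ADR can be sketched in a few lines. This is a minimal illustration, not the Azure SDK: the `RegionalStore` class is an in-memory stand-in for a ZRS blob container, and the names are invented for the example.

```python
class RegionalStore:
    """In-memory stand-in for a ZRS blob container in one region.
    A real implementation would wrap a blob storage client instead."""

    def __init__(self, region: str):
        self.region = region
        self.blobs: dict[str, bytes] = {}

    def put(self, name: str, data: bytes) -> None:
        self.blobs[name] = data

    def delete(self, name: str) -> None:
        self.blobs.pop(name, None)


def dual_write(primary: RegionalStore, secondary: RegionalStore,
               name: str, data: bytes) -> None:
    """Acknowledge a write only once BOTH regions hold the data (RPO = 0).

    GRS replicates asynchronously with no guaranteed maximum lag, so a
    regional failure could lose acknowledged writes; here, durability in
    both regions is part of the write path itself.
    """
    primary.put(name, data)
    try:
        secondary.put(name, data)
    except Exception:
        # Compensate so the two stores never diverge, then surface the failure.
        primary.delete(name)
        raise
```

The extra failure handling is exactly the “overly complex engineering” the ADR exists to justify: without the recorded context, a reviewer would reasonably ask why GRS wasn’t used.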
What not to include
There’s plenty of stuff we leave out. The big ones, which my team often get challenged on, are:
- Schemas, such as OpenAPI specs, Kafka schemas etc. – while we might brainstorm the start of these in a design, particularly to start fleshing out the RESTful state model, they go stale very quickly. The authoritative version is always the interface contract itself as written in code. And because this is our public interface, any changes must, by team convention, go through proper peer review.
- Class diagrams or physical data models – we certainly aren’t creating UML class diagrams in 2026. But database schemas also evolve over time. Don’t get me wrong – data architects absolutely should be working with engineers to ensure that the physical data model is correct. But that’s a versioned artefact implemented as code.
- Detailed requirements and specifications – in most projects, this is best handled feature by feature in the backlog.
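To make the “authoritative version is the interface contract itself as written in code” point concrete, here is a toy sketch – the `Order` type and its fields are invented for illustration. The published schema is derived from the typed contract, so a hand-maintained copy in a design document has nothing to stay in sync with; real teams would generate OpenAPI or Avro from code via their framework.

```python
from dataclasses import dataclass, fields


@dataclass(frozen=True)
class Order:
    """The interface contract as code; this, not a document, is authoritative."""
    order_id: str
    symbol: str
    quantity: int


def published_schema(contract) -> dict[str, str]:
    """Derive a minimal field-name -> type-name schema from the contract itself,
    so the published schema can never drift from the implementation."""
    return {f.name: f.type.__name__ for f in fields(contract)}
```

Because `published_schema(Order)` is regenerated from the code on every build, a change to the contract is automatically a change to the schema – and, being a change to the public interface, it goes through peer review like any other code change.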
So what’s the minimum?
The minimum design I would expect to review is a written description of the system or service, and a context diagram showing how that service fits into the wider landscape – if there are no non-obvious decisions, then there are no solution ADRs. Once it’s approved, we’ll create the organisational ADR to keep track.