No One Could Find It.
It was an unseasonably warm March day, and I had just decided to spend some time in the garden when an email came in from a client whom I hadn’t heard from in a year.
We had worked together on and off for the better part of a decade, so long silences were not unexpected. The organization is a non-profit, and this particular project had recently changed hands. The new project lead got straight to the point. “The database is broken,” she said.
Then came the question behind the question: how much is this going to cost to fix?
She was clearly bracing herself for bad news, likely the kind that ends in a six-figure rebuild. She had recently received exactly that kind of quote to ‘fix’ a database-adjacent project: not related to this one, but similar. Same platform, different designers, different methods, different price tag.
Fortunately for her, I was the one who had built this particular database in the first place. So instead of jumping straight to repair costs, I slowed the conversation down. “Let’s take a breath. Tell me what is happening. Walk me through your search.”
The Request
From the client’s perspective, the issue was simple: staff couldn’t reliably find the organizations and programs they needed in order to refer their people to the right support.
This was not a minor inconvenience.
Their database was intended to work as a marketing/awareness and referral tool. It was a key feature and resource meant to help staff connect underserved people with the agencies, services, and support networks they needed. If staff could not find the right information quickly, the organization’s service suffered. And if the organization’s service suffered, so did its reputation.
So yes, from the outside, “the database is broken” sounded like a fair diagnosis.
But a fair diagnosis is not always the correct one.
It Didn’t Quite Add Up
As she walked me through the staff’s search process, the problem became clear almost immediately.
The database was not broken in the technical sense. It was ‘working’. Search worked. Records existed and the data for the agencies was accessible. The real issue was structural.
The entries had not been consistently maintained in over a decade. Some were outdated. Some had been duplicated. Some organizations appeared multiple times under different programs. Others had separate records for individual service locations. In some cases, a single organization had accumulated multiple overlapping entries over the years, all pointing to variations of the same service.
The result was not one obvious error.
It was clutter. Redundancy. Drift. Like looking through the messy kitchen drawer in search of a battery.
A Note for the Non-Database People
Most of my readers are authors, and I know the moment I say “database,” some eyes glaze over.
But stay with me.
You use databases too, even if you don’t call them that.
Online bookstores run on metadata. Your website runs on structured content. Product listings, categories, series pages, book pages, and search results all depend on how information is recorded, grouped, and retrieved.
This project may have involved community organizations instead of books, but the underlying issue is the same: the order in which information is stored affects whether people can find it, understand it, and use it.
If you catalogue six books with inconsistent titles, missing series data, and no clear author association, readers will struggle to find the right one. If you store community services the same way, staff will struggle to refer the right people to the right support.
Different content. Same problem.
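For the more hands-on readers, here is a tiny sketch of that difference in Python. The titles and field names are invented for illustration, not taken from any real catalogue, but the pattern is the one I mean: consistent fields make "find the right book" an answerable question instead of a guessing game.

```python
# A made-up illustration: the same three books catalogued two ways.
# Titles, authors, and field names are invented for the example.

messy_catalogue = [
    {"title": "The Harbour Light - Book One"},                  # series buried in the title
    {"title": "Harbour Light 2", "author": "J. Smith"},         # inconsistent title, no series field
    {"name": "The Harbour Light III", "writer": "Jane Smith"},  # different field names entirely
]

tidy_catalogue = [
    {"title": "The Harbour Light",         "series": "The Harbour Light", "series_order": 1, "author": "Jane Smith"},
    {"title": "The Harbour Light Returns", "series": "The Harbour Light", "series_order": 2, "author": "Jane Smith"},
    {"title": "The Harbour Light Forever", "series": "The Harbour Light", "series_order": 3, "author": "Jane Smith"},
]

# With consistent fields, "book 2 of this series" is a one-line lookup.
book_two = [
    b for b in tidy_catalogue
    if b["series"] == "The Harbour Light" and b["series_order"] == 2
]
print(book_two[0]["title"])  # The Harbour Light Returns
```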
The Real Conflict
This was not just a maintenance problem.
It was an information architecture problem.
The deeper issue was that the database had never been structured around how staff actually searched for information in the first place. In the original planning stages, a great deal of attention went into terminology, categorization, and policy language. Those decisions mattered, but they were not the same as designing for real-world use.
A system can be carefully named, neatly categorized, and still fail the people who rely on it every day.
In this case, the structure made sense on paper, yet created friction in practice because it did not reflect how front-line staff needed to find and refer resources. This was one of those situations where expert input was available, but not fully carried through into implementation.
As is often the case in funded projects, decision-making authority did not sit evenly across expertise, implementation, and day-to-day use. In practice, the system ended up reflecting internal decision logic more than community-facing use.
One of the hardest parts of any project is accepting that expertise is often distributed. A funder may understand the mission. A program lead may understand policy context. A technical lead may understand structure. A front-line team may understand how the system is actually used. Good decisions usually come from letting those forms of expertise inform one another. Trouble starts when one kind of knowledge is asked to carry the entire design.
The actual problem was that the system had been structured around the wrong unit of information from the start.
The Turning Point
Once I confirmed what was happening, the solution became much simpler.
Not easy. But simple.
The database didn’t need a full rebuild. It needed to be returned to a structure that matched how people actually used it.
My plan was to organize records by organization, then use the existing fields for associated programs and service locations as originally intended. That would reduce duplication, improve clarity, and make future updates far more manageable. More importantly, it would let staff search in a way that aligned with how they thought about referrals in real life.
They were not looking for abstract program fragments.
They were looking for organizations that could help.
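For readers who think in structure, here is a rough sketch of the shape I was aiming for, written in Python rather than the platform’s own configuration. The names and fields are placeholders, not the project’s actual schema, but they show the idea: the organization is the record, and its programs and service locations hang off it.

```python
from dataclasses import dataclass, field
from typing import List

# A minimal sketch of the target structure, not the real schema.
# Field names and sample data are assumptions for illustration.

@dataclass
class ServiceLocation:
    address: str
    phone: str = ""

@dataclass
class Program:
    name: str
    description: str = ""

@dataclass
class Organization:
    name: str                                   # the unit staff actually search for
    programs: List[Program] = field(default_factory=list)
    locations: List[ServiceLocation] = field(default_factory=list)

# One record per organization, with programs and locations attached,
# instead of separate standalone entries for every program and address.
example = Organization(
    name="Example Community Agency",
    programs=[Program(name="Youth Counselling"), Program(name="Housing Support")],
    locations=[ServiceLocation(address="123 Main St")],
)
```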
The Cleanup
The first step was to identify active organizations and reduce the sprawl.
Rather than preserve every fragmented version of every program entry, I began cross-referencing records, folding programs back into their host organizations, and stripping away duplicate standalone entries where appropriate. That alone reduced the database from over 700 entries to closer to 500.
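If you are curious what “folding programs back into their host organizations” looks like in practice, here is a simplified sketch. The field names and entries are invented, and the real work involved far more judgement and cross-referencing than any script, but the grouping idea is the same.

```python
from collections import defaultdict

# A rough sketch of the consolidation pass, using invented fields
# ("org_name", "program") rather than the platform's real ones.

raw_entries = [
    {"org_name": "Example Agency",  "program": "Youth Counselling"},
    {"org_name": "Example Agency ", "program": "Youth Counselling"},  # duplicate with a stray space
    {"org_name": "EXAMPLE AGENCY",  "program": "Housing Support"},    # same agency, different casing
    {"org_name": "Another Org",     "program": "Food Bank"},
]

def normalise(name: str) -> str:
    """Collapse trivial differences so the same organization groups together."""
    return " ".join(name.lower().split())

# Fold every program entry back under its host organization.
organizations = defaultdict(set)
for entry in raw_entries:
    organizations[normalise(entry["org_name"])].add(entry["program"])

for org, programs in organizations.items():
    print(org, "->", sorted(programs))
# example agency -> ['Housing Support', 'Youth Counselling']
# another org -> ['Food Bank']
```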
The database still wasn’t pristine, but that was meaningful progress.
At that stage, the database was no longer bloated in quite the same way. We were back to something much closer to one usable record per organization, with room to rebuild clarity from there.
The next phase was content consolidation and live research: merging useful information, checking whether organizations still existed, updating profiles, and continuing the final cross-referencing needed to stabilize the system.
In other words, this was not just data entry.
It was editorial cleanup, structural repair, and practical usability work all at once.
The Result
A few weeks of work got the database roughly 70% cleaned up before the funding paused.
That pause was reasonable. Non-profit budgets are real budgets, and this particular project had limited funds. The new project lead chose to hold some money in reserve early in the fiscal year, which made sense.
But even before the work stopped, the difference was already being felt.
Staff were coming back with positive feedback. They were finding the right resources more easily. Referrals were becoming more straightforward. The project lead was thrilled.
Functional systems are often the ones people appreciate most once they no longer have to fight them.
The database had not been fully polished yet, but it was becoming trustworthy again.
Final Reflection: What This Project Actually Taught
What I took away from this project was simple:
The order in which information is recorded matters just as much as the information itself.
A small structural decision, made early, can create years of confusion later. Not because anyone intended to make a mess, but because systems always reveal what they were designed to prioritize. If the structure does not match how users actually think, search, and work, the cost shows up over time: in duplication, inconsistency, wasted effort, and eventually a loss of trust.
That lesson reaches far beyond databases.
The same pattern appears in websites, content systems, service design, and brand experience. When decisions are made around internal preference rather than user behaviour, the consequences do not stay theoretical. They show up in implementation, then in usability, and eventually in reputation.
Good projects do not require one person to have all the answers. They require the right kinds of expertise to shape the decision at the right time. Good systems are not built around what sounds right in a meeting. They are built around how people actually use them in real life.
Good systems do more than hold information.
They help people find what matters.
Need Admin Support?
When the structure is messy, people waste time.
I help clean up the behind-the-scenes information (databases, digital content, and book metadata) so your team, readers, or customers can find what they need without the friction.