In collaboration with Lara Fox, Managing Director, Objective

Many AI pilots don’t fail because the technology is bad. They fail because the data is worse. Poor data quality—duplicated records, inconsistent fields, conflicting formats—turns even the most promising model into guesswork. It’s easy to underestimate just how fragmented a company’s information can be until someone tries to train a machine to make sense of it. In practice, this means missed opportunities, wasted time, and dashboards nobody trusts.

Data readiness isn’t glamorous. There are no slick demos for reconciling customer records across three departments, or standardising date formats pulled from five different systems. But this is the work that makes AI real. Before models can make decisions, humans need to make decisions about their data: what matters, what’s missing, and who’s responsible for keeping it clean. That clarity is often missing in the excitement to ‘do something with AI.’
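That unglamorous work can be surprisingly small in code. As a minimal sketch, assume three systems export dates as "31/12/2024", "2024-12-31", and "Dec 31, 2024" (formats invented for illustration); normalising them to one standard looks like this:

```python
from datetime import datetime

# Hypothetical formats from three source systems (assumed for illustration):
# a CRM ("31/12/2024"), a web shop ("2024-12-31"), a legacy tool ("Dec 31, 2024").
KNOWN_FORMATS = ["%d/%m/%Y", "%Y-%m-%d", "%b %d, %Y"]

def standardise_date(raw: str) -> str:
    """Normalise a date string to ISO 8601 (YYYY-MM-DD)."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue  # try the next known format
    # Surfacing the failure matters: silent guesses are how bad data spreads.
    raise ValueError(f"Unrecognised date format: {raw!r}")
```

The point is less the code than the decision it encodes: someone had to choose ISO 8601 as the standard, list the formats each system actually emits, and decide that anything unrecognised should fail loudly rather than be guessed at.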

Much of this comes down to structure. AI performs best on well-labelled, well-documented data. Yet in many businesses, vital insights live in spreadsheets no one audits, platforms that don’t talk to each other, or the heads of two people who are always in meetings. When teams start asking where the data is—and get five different answers—it’s a sign the foundation isn’t ready.

I once sat in a meeting where a predictive analytics tool returned bizarre results for a retail forecast. It wasn’t a flaw in the model. The issue? Half the sales figures were manually typed into a shared file from emailed PDFs. Nobody had mentioned it during the planning phase.

Being data-ready means more than just having data. It means knowing where it is, trusting its accuracy, and having systems in place to improve it continuously. It also means asking the right questions at the start: What will this AI tool actually be used for? What decisions will it inform? And do we have data that reflects the reality we’re trying to model?

Data governance may sound like bureaucracy, but it’s what makes machine learning models replicable and explainable. Assigning ownership, creating audit trails, and defining who can access what—these are the rules that ensure AI outputs are traceable, secure, and relevant. And when those rules are followed, they create space for creativity. Once the groundwork is solid, teams can explore more ambitious projects with confidence.
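In practice, "audit trails and access rules" can start as something very plain. The sketch below is illustrative only, with invented role names and a toy in-memory access table, not a real governance framework: every action on a dataset is checked against a role and logged, allowed or not.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Assumed roles and permissions, purely for illustration.
ACCESS = {"analyst": {"read"}, "steward": {"read", "write"}}

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, user: str, role: str, action: str, dataset: str) -> bool:
        """Check the access table, log the attempt either way, return the verdict."""
        allowed = action in ACCESS.get(role, set())
        self.entries.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "user": user, "role": role, "action": action,
            "dataset": dataset, "allowed": allowed,
        })
        return allowed
```

Denied attempts are logged too; an audit trail that only records successes can't answer the questions governance exists to answer.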

Equally critical is the human layer. You can’t build a data culture if your staff don’t understand why their input matters. Training doesn’t need to be technical. It can start with helping teams recognise the impact of small changes—like logging customer names consistently or tagging orders by region. These micro-decisions, multiplied across departments, shape the dataset your AI will eventually rely on.
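Even "logging customer names consistently" has a mechanical core worth showing. A minimal sketch, assuming the only problems are stray whitespace and inconsistent casing:

```python
import re

def normalise_name(raw: str) -> str:
    """Collapse whitespace and apply consistent casing so 'JANE  smith'
    and 'jane Smith' resolve to the same record key."""
    collapsed = re.sub(r"\s+", " ", raw.strip())
    return collapsed.title()
```

Naive title-casing mangles names like "McDonald", which is exactly the kind of edge case a data steward, not a model, has to rule on. The code is trivial; the cultural work is getting every team to route names through it.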

AI isn’t magic. It’s pattern recognition at scale, and patterns are only as good as the data they’re based on. If the input is messy, biased, or incomplete, the output will be too—no matter how advanced the model is. Clean data doesn’t guarantee perfect results, but it does make them intelligible, traceable, and easier to improve.

Companies often believe they’re not “ready for AI” because they lack the latest tools or a dedicated data science team. More often, the real gap is simpler: no one has mapped where the data lives. Starting there makes everything else—governance, modelling, insight—possible.

The foundation of successful AI isn’t futuristic. It’s painfully present-tense: spreadsheets, systems, habits. That’s the work. Not the algorithms, but the discipline behind them. And the organisations that get this right will find that by the time their AI model is ready to run, the hardest part is already behind them.
