
I’m often asked similar versions of the same question:
“I’m seeing more AI deals. I’m interested. But I don’t come from a technical background. How do I know where to even start?”
It’s a reasonable concern; AI has changed the pace of company building dramatically. Founders can ship faster than ever, demos can look impressive very early, and the language around AI can make even experienced investors feel like they are missing something critical.
What has become clear over time is this: you do not need to be technical to assess AI companies well. But you do need to update where you place your attention. Not being technical forces you to focus on what truly drives value, not on what looks impressive.
AI has not removed the need for judgment. In fact, it has made the skill of judgment the single most important edge an early-stage investor can have.
Where old instincts break in AI
Many investors instinctively start AI diligence with the technology: the model, the architecture, the tools being used.
I do not.
That information may matter later, but early on, it rarely tells you whether a real, enduring business is being built. As access to powerful models becomes increasingly commoditised, technical novelty alone is no longer a reliable indicator of long-term value.
Starting with technology can obscure the more important questions.
Where I start instead: the world before the product
I start by grounding the conversation firmly in the real world.
What is happening today, without this product?
Who is experiencing the problem?
What does it cost them in time, money, risk, or missed opportunity?
If I cannot clearly describe how the world works before the product exists, I cannot assess whether the solution truly matters, and in particular whether it truly matters to the customers who will pay for it.
From features to behaviour and workflow
Once I understand the “before” state, I focus on behaviour and workflow rather than product features.
I ask:
What does the customer stop doing if this works?
What becomes faster, easier, or more reliable?
Where does this product sit in an existing workflow?
I am not looking for a long list of capabilities. I am looking for evidence that the product naturally embeds itself into something the customer already does, ideally something that is mission-critical, not optional.
When founders explain this clearly, without jargon, it signals that they truly understand the problem they are solving.
As the conversation moves forward, I try to stay close to behaviour.
Case study: Research Grid
Research Grid is a clear example of how this lens works in practice.
When I first engaged with the company, the conversation did not start with AI. It started with how clinical trials are run today:
Manual processes
Fragmented systems
Low patient engagement
Delays that are costly not just financially, but in patient outcomes and speed to market for new therapies
Dr Amber Hill, the Founder and CEO, had been a researcher for 14 years; she had felt the pain point personally and decided to do something about it.
That context mattered far more than the specifics of the underlying models. It made clear why behaviour might change if the product worked and why adoption could be meaningful.
Research Grid is an AI-native platform that automates key clinical trial workflows, significantly improving speed and efficiency for pharmaceutical companies and research organisations. Its early traction with major pharmaceutical players, including Sanofi, was a strong signal, alongside a 100% customer retention rate.
The defensibility of Research Grid comes from workflow ownership. Once embedded, the platform becomes difficult to replace. Value compounds through usage, data, and integration into daily operations. That is the kind of advantage that matters far more than technical novelty at the early stage.
In the year of our investment, Research Grid grew its annualised recurring revenue (ARR) 21x. Arāya supported the company strategically: providing introductions to industry leaders, advising on growth, and connecting the team to opportunities for scaling.
Learning velocity over technical depth
The next signal I focus on is learning velocity.
The most important question is whether the product improves as it is used, and whether that improvement is grounded in real interaction with customers, not in theory, but in practice.
Does usage generate better outcomes, stronger data, or clearer insight?
Does that learning make the product more valuable over time?
Does the team ship, learn, and adapt quickly?
In AI, speed is not optional. It is a strategic necessity. Teams that move quickly and learn faster than competitors consistently outperform those that optimise for perfection too early. In a recent interview, the Head of Growth at Lovable, Elena Verna, shared that 95% of growth is driven by launching new products and features. Lovable is one of the fastest-growing companies in history.
You do not need to understand machine learning to assess this. You can hear it in how founders describe their decisions, trade-offs, and feedback loops.
Reading the team in uncertainty
Throughout all of this, I’m also paying attention to the team.
How clearly do they think out loud?
How comfortable are they with uncertainty?
How do they respond when something is not fully formed or still evolving?
In fast-moving areas like AI, there is rarely a perfect answer early on. What matters is how founders reason under uncertainty, how they adapt, and how quickly they turn insight into action. These signals show up long before they appear in metrics.
What AI diligence really is
AI diligence is not a test of technical knowledge. It is a disciplined approach to understanding:
How a team engages with a real problem
How they embed their solution into critical workflows
How value compounds over time
How effectively they are learning from the market
AI hasn’t removed the need for judgment. If anything, it has made judgment more important. But that judgment does not require you to be technical. It requires you to know where to start and what to listen for.
The most enduring AI companies are rarely the ones with the flashiest demos. They are the ones that quietly embed themselves into workflows, own distribution, generate proprietary data through usage, and move faster than everyone else.
A simple way to practice this
To apply this lens yourself, try this with the next AI startup you encounter.
Before thinking about the technology at all, write a short paragraph answering:
Who is this for?
What problem are they dealing with today?
Where does this product show up in their workflow?
Do this without mentioning AI at all.
If you can answer those questions clearly, you are already seeing the signals that distinguish enduring AI companies from fleeting hype, and you are already engaging with AI diligence in the right way.
That is where I always start.
Want to learn more about investing in AI?
Join me as we take the ideas from this issue deeper in an interactive workshop. If you want to see how to apply these frameworks in real time, assess AI startups confidently, and understand where value is really created, this session is for you.
Reserve your free spot via the links below:
Warmly,
Rupa Popat
with Team Arāya
