
I always enjoy hearing from readers. If something resonates, feel free to reply and share your thoughts. Last week, reader Rupal Kantaria, Partner at Oliver Wyman Forum, wrote: "You asked exactly the right question about AI and answered it clearly in a way anyone could understand." Thanks, Rupal!
After learning how to evaluate AI startups without getting lost in the technology, a natural question often comes next.
“If so many teams have access to the same tools, what actually lasts?”
It’s a fair question. AI has made building faster and easier. Demos can be convincing. Language is polished. At first glance, many companies can look similar. Over time, I’ve found that early impressions rarely indicate what will endure.
Rethinking early assumptions
In traditional software startups, the technology itself could often be pointed to as the moat. With AI startups, everyone has access to the same underlying models, so both the sources of competitive advantage and what makes these companies defensible have changed.
For example, traditional angel investing relied heavily on product-market fit to assess a startup's stage and progress. In the world of AI, PMF is harder to identify, and the signals and metrics need to be sharper than before. Furthermore, as customer needs change, PMF can become a moving target.
Where I see durability forming
I return to a familiar starting point in diligence: behaviour.
Rather than asking what a product can do, I ask how it lives in someone’s day.
Where does the product sit once it’s in use?
Does it become part of an existing workflow, or remain something users visit intentionally?
If it disappeared, would it create real friction or only mild inconvenience?
When a product becomes embedded in how work actually happens, durability begins to accumulate. Context builds. Habits form. Integration deepens.
These signals rarely show up in a demo. They surface instead in how customers describe their routines and what they no longer think about because the product is already there.
How data strengthens durability
Data is often described as a moat, but the way data is created matters more than its size. This is especially true in early-stage AI companies, where models and tools are widely accessible.
I focus less on volume and more on origin.
Durable data emerges naturally from real use. It reflects actual decisions, edge cases, and lived behaviour. Over time, this kind of data compounds quietly, improving the product in ways that are difficult to replicate from the outside.
Fragile data is gathered separately from the core experience. It can look impressive early, but is often easier to copy or replace. When learning depends on hypothetical future scale rather than present use, durability is weaker than it appears. In some categories, scale or standardisation can eventually strengthen this kind of data, but that durability usually arrives later.
The question isn’t “How much data do you have?”
It’s “Does learning happen as a byproduct of the product doing its job?”
Data rarely creates durability on its own. It strengthens durability when it is tightly coupled to workflow, incentives, and repeated use.
How distribution shapes durability
Distribution is not separate from durability. In many AI application businesses, it is the moat.
I pay attention to how products reach users and whether that access becomes harder to replicate over time. The most durable companies do not just acquire users efficiently. They secure positions that are structurally difficult to displace through embedded workflows, partnerships, platforms, or owned channels.
This matters more in AI applications, where models are becoming cheaper, features are copied faster, and switching costs are often low. In these environments, technical advantage tends to compress. Distribution does not.
What compounds are owned attention, trust, and a direct relationship with users. When distribution is treated as infrastructure rather than promotion, it lowers acquisition costs, stabilises retention, and amplifies every product release. Over time, this makes the product harder to replace even when alternatives exist.
This is not universal. Deep tech, infrastructure, and regulated industries are still primarily won on product quality, data advantage, and trust built over long cycles.
But for many AI applications, I believe the next phase of competition will be shaped less by what companies build and more by who controls the path to the user. In that context, durable distribution loops and community are not optional. They are a core part of the moat.
Case Study: Aeon
In 2025, Arāya invested in Aeon, a next-generation preventive health screening company and multi-modal data platform focused on early disease detection through full-body MRI, blood biomarkers, and genomics. The company operates an asset-light model, leveraging existing MRI clinics, blood labs, and diagnostic infrastructure rather than owning physical clinics. This allows Aeon to scale efficiently while focusing on software, data, and clinical insight.
The service is already partially reimbursable through insurance providers, positioning Aeon uniquely at the intersection of consumer healthcare and insurance-led preventive care.
Context
Preventive healthcare today is structurally fragmented.
Public health systems are reactive and overstretched, optimised to treat illness rather than prevent it. Private diagnostics exist, but access is expensive, experiences are disjointed, and results are often delivered in formats that are difficult for patients to interpret or act on. Most consumers experience diagnostics as one-off events rather than part of an ongoing relationship with their health.
At the same time, healthcare systems are under pressure from rising rates of non-communicable diseases, ageing populations, and increasing costs. Insurers understand that prevention matters, but lack the infrastructure, data, and product capabilities to deliver it directly.
This gap shaped how I thought about Aeon from the outset.
What Aeon built
Aeon offers a personalised, multi-modal preventive health screening service that integrates full-body MRI with blood biomarkers and, over time, genomics. The product includes a seamless digital booking flow with partner clinics, AI-assisted MRI analysis, structured clinical reporting, and a consumer app for longitudinal health tracking.
Rather than focusing on a single scan or result, Aeon is building a longitudinal health record that improves over time. Each scan, biomarker, and follow-up consultation feeds into a growing multi-modal dataset that supports earlier detection, better risk stratification, and more personalised preventive recommendations.
Importantly, Aeon launched in months rather than years, at a fraction of the cost of asset-heavy competitors, while maintaining clinical quality and regulatory alignment.
Where durability forms
Aeon’s durability does not come from any single technical breakthrough. It comes from how the company is positioned within the healthcare system.
First, the asset-light model. By sitting on top of existing clinical infrastructure, Aeon avoids the capital intensity and operational drag of owning clinics. This allows the team to focus on software, data, and clinical intelligence while scaling rapidly across geographies.
Second, insurance integration. Aeon is the first company in its category to secure reimbursement from Swiss insurers. This fundamentally changes distribution. Access is no longer limited to affluent out-of-pocket consumers. Reimbursement lowers friction, expands the addressable market, and embeds Aeon into insurer workflows in a way that is difficult to displace.
Third, longitudinal data. Aeon’s data advantage emerges naturally from use. Each patient interaction generates real-world clinical data across imaging, blood markers, and outcomes. This data improves diagnostic models, risk prediction, and clinical recommendations over time. Learning is a byproduct of the product doing its job.
Fourth, trust. Preventive healthcare requires credibility with patients, clinicians, and payors. Aeon has prioritised clinical quality, regulatory compliance, and physician involvement from the start, building trust that compounds with every scan and every insurer partnership.
Why this matters
In AI-enabled healthcare, models alone are not durable. Access to data, patients, and distribution channels is.
Aeon has positioned itself as an integration layer across clinics, insurers, and consumers. Each new partner strengthens the network. Each additional scan improves the system. Each reimbursement agreement deepens distribution advantage.
Over time, replacing Aeon would not simply mean offering a better scan or model. It would mean rebuilding insurer relationships, clinic integrations, patient trust, and a longitudinal multi-modal dataset accumulated through real-world use.
That is where durability forms.
A pattern worth noticing
What stood out to me about Aeon was not how advanced the technology sounded in a pitch, but how deliberately the company embedded itself into existing systems of care.
Rather than fighting healthcare infrastructure, Aeon aligned with it. Rather than waiting for future scale, it built durability into distribution early. Rather than treating AI as the product, it treated AI as an enabler of a broader, system-level shift toward preventive care.
That combination is rare. And it is why Aeon is a useful case study for where real durability can form in AI-driven businesses.
Patterns that make me pause
As I’ve spent more time with AI companies, there are also patterns that consistently make me slow down.
These aren’t red flags on their own, but they shape where I need to look more closely for signs of durability.
I pay attention when:
The product sits on top of a workflow rather than inside it.
Humans still do most of the work and AI remains a thin, non-essential layer.
The product looks impressive but doesn’t meaningfully change decisions or habits.
The story focuses almost entirely on the model rather than how the product is used or distributed.
In many cases, these companies can still grow. But growth is not the same as durability. When the underlying behaviour does not change, replacement tends to be easier than it appears.
I’m curious to know which of these tends to make you pause, so I’ve added a quick poll below.
When you come across an early AI company, what usually makes you pause and look closer?
A simple way to practice this
At this stage, I’m not trying to predict winners. I’m looking for where strength might quietly accumulate.
When evaluating an AI company, I ask four simple questions:
Does the product become harder to replace over time?
Does it settle into a workflow people rely on?
Does learning come from real use rather than future promise?
Does distribution compound or reset with each release?
None of these needs a definitive answer. What matters is noticing where the answers feel strong and where they feel thin.
If you want to sharpen this lens, revisit an AI startup you’ve already seen and ask yourself one question:
Where, if anywhere, would this become harder to replace in two years than it is today?
Even partial clarity is useful.
Closing thought
AI has not removed the need for judgment. If anything, it has made judgment more important.
When tools are widely available and progress is fast, durability rarely comes from what is easiest to demo. It forms through behaviour, through integration, through data created in real use, and through distribution that compounds rather than resets.
Knowing where to look for those signals is now one of the most important skills in early-stage investing.
Want to go deeper?
I’ll be teaching a full-day, in-person Angel Investing Course at Regent’s University, London, on February 27th, 2026.
The programme will focus on portfolio design, deal analysis, risk management, and live case studies covering the mechanics of investing well at the early stage, along with a new module on investing in the age of AI.
Warmly,
Rupa Popat
with Team Arāya
