
I always enjoy hearing from readers. If something resonates, feel free to reply and share your thoughts. Last week, reader Andrew Pannell, a Venture Capital and Private Equity Partner at Haynes Boone, wrote: "Very clear and well written!" Thanks, Andrew!
AI has changed how value is created, captured, and defended.
What has not kept pace is how many investors interpret early signals.
Over the last year, I have reviewed thousands of AI deals. What I see repeatedly is not a lack of frameworks, but an over-reliance on signals that no longer mean what we think they mean. AI has not made investing simpler. It has made judgment more important.
Below are some of the signals I see most commonly misinterpreted.
Early traction is no longer a reliable indicator of future success
AI has dramatically reduced the cost and time required to build convincing products. Founders can access the same foundation models, ship quickly, and reach early users with very little friction. As a result, early usage and even early revenue can look strong long before a business has any real staying power.
Early traction in AI often reflects ease of experimentation rather than depth of need. The more useful question is not whether users try the product, but whether the product becomes difficult to remove once it is in place. Durability starts to form when usage compounds value and when customers begin to depend on the product inside their workflow.
Product market fit is now a moving target
In traditional software, reaching product market fit was often treated as a milestone. In AI, that assumption breaks down quickly.
Customer expectations shift as capabilities evolve. What feels differentiated today can become baseline tomorrow. As a result, early product market fit is no longer a reliable indicator of future performance.
What matters more is whether the product is deeply integrated into how work actually happens, and whether it continues to matter as competition intensifies. In AI, product market fit is something you defend continuously rather than something you achieve once.
A concrete example: Fit Collective
Fit Collective is a useful illustration of how these signals can be misread. We invested in the company in the second half of 2025. The round went on to be almost three times oversubscribed, with strong demand from venture funds, and became the largest pre-seed round raised by a female founder in the UK. I will come back to why that last point matters in a future newsletter.
At first glance, the company could be mistaken for a narrow optimisation tool. Improving garment fit using data and AI does not immediately look like a defensible AI business, and early traction alone would not explain why this should endure.
The real signal sits elsewhere.
Fit Collective embeds itself upstream in the fashion value chain, before garments are produced. It integrates directly into existing design and manufacturing workflows rather than sitting downstream as a reporting layer. Once embedded, the product changes behaviour. Brands stop relying on guesswork and historical patterns, and decisions move earlier in the process where the financial impact is significantly larger.
Durability does not come from the model. It comes from workflow ownership.
As more brands use the product, Fit Collective generates proprietary feedback loops across fabric behaviour, returns data, manufacturing decisions, and sales outcomes. That data is created as a byproduct of real use. Over time, accuracy improves, switching becomes harder, and the product becomes infrastructure rather than software.
Those signals rarely show up in a demo. They show up in how difficult it would be for a customer to unwind the product once it is live.
Speed can look like execution, but often masks noise
The best AI companies do move fast. But speed on its own is not execution.
Frequent feature launches and ambitious roadmaps can create the appearance of progress without creating lasting advantage. What matters is what improves as a company moves faster.
Are feedback loops tight? Is real usage informing better decisions? Is learning happening continuously?
Speed without learning does not compound. Learning velocity does.
Distribution is underestimated until it becomes obvious
In AI, geography is no longer a meaningful barrier. Competition is global from day one. Despite this, distribution is still often treated as something to solve later.
Early inbound interest and generic acquisition channels can look promising, but they rarely translate into durable advantage. In many AI application businesses, distribution is not separate from defensibility. It is the defensibility.
The question is whether users would still find the product first tomorrow if alternatives existed.
The real question behind every signal
Instead of asking whether a signal looks strong in isolation, I ask a simpler question.
What compounds?
What compounds with usage? What becomes harder to replicate over time? What improves automatically as the company scales?
AI has made it easier than ever to look impressive early. It has also made it easier to misinterpret what matters.
The edge now lies in knowing which signals to trust, which to question, and where judgment still matters most.
Want to go deeper?
I’ll be teaching a full-day, in-person Angel Investing Course at Regent’s University, London, on February 27th, 2026. I’d love to extend a £50 discount to subscribers of this newsletter who’d like to join us. Just use the code ARAYASIGNAL50 when purchasing.
The programme will focus on portfolio design, deal analysis, risk management, and live case studies covering the mechanics of investing well at the early stage, along with a new module on investing in the age of AI.
Warmly,
Rupa Popat
with Team Arāya
