I’m getting back into the habit of writing an end-of-year post. This one is brief, covering a few reflections and observations on AI investing.
Updates and Reflections
I. I’m moving back to the Bay Area.
Although it was only a brief 2-year stint, I’m glad I got to experience a chapter in New York. Growing up, I assumed all cities were basically the same, but every place I’ve lived has emphatically proven that assumption wrong. The geography, economy, and people form something hard to describe but easy to feel. Living in New York felt like being dropped onto Reddit’s front page, where you can navigate into any niche or just wander into something interesting at any time. All that said, I’m personally excited to settle back into the Bay Area (and hopefully not move again anytime soon).
II. I crossed five years in early-stage VC this year.
It feels like an incredible amount has happened in that time — COVID, remote work, SPACs, LLMs, etc. Hype cycles collapsed from years to just months, making venture’s already slow feedback loop even harder to parse. The job is an odd mix of building strong relationships over years while being ready to drop everything for the right opportunity. Something I’m trying to channel more in 2025 is that this job can be as simple as spending as much time as you can with the smartest, most driven people.
III. I’m leading investments in enterprise apps and infra.
I am personally spending a lot of my time at seed today. Because Canaan invests across seed/A/B, I am routinely jumping between pre- and post-PMF companies. At seed, I am mainly looking for the right team in a dynamic problem space, since the company will likely look very different in 2 years. At Series A and B, however, the trajectory of the business is more cemented, which requires a more thoughtful perspective on the unit economics, market size, and defensibility of the business. I’ve been fortunate to partner with a number of great teams this year.
Observations on AI Applications
I. The cost to reach PMF is lower than ever.
A larger number of companies are blowing through the “typical” Series A milestones with very little capital spent. Because powerful AI abstractions reduce upfront R&D costs, startups now ship valuable products faster. At the seed stage, investors must accept that lower barriers to entry mean more competitors rushing into each market. Many of these products are wedges that will have to grow into a broader vision to deliver venture outcomes. “Why is this a good wedge?” has become one of the main questions I am asking myself nowadays.
II. Not all service categories can be disintermediated today.
Selling outcomes rather than software tools has become a well-discussed topic in the venture world. While vertical SaaS startups are selling software to assist existing service providers, Service-as-a-Software startups attempt to disintermediate service providers with end-to-end solutions. There are certain categories where disintermediation is not possible today, such as regulated industries (legal, medical) or in-person services (home services, eldercare). Despite all the excitement around Service-as-a-Software, I think companies like EvenUp for personal injury law or Rilla for home services have proven there is still a lot of opportunity in selling to service providers. As agentic products improve, I suspect we will see more Service-as-a-Software companies that can actually disintermediate service providers.
III. There are few contrarian markets at the App layer.
At the beginning of the year, it felt like there were still a handful of categories that didn’t command big AI premiums. Today, those are scarce. Vertical SaaS is hot, Agents are hot, Deep Tech is hot, AI research is hot. Examples of companies that I think were very contrarian at seed include Cursor and Together AI. Prevailing sentiment was that developers would not adopt new IDEs and that inference was just a crowded race to the bottom. In 2025, I am trying to think about what the biggest swings could be and the founders who can will them into existence.
Observations on AI Infrastructure
I. AI-native startups are more comfortable with abstraction.
I was speaking to an AI infra founder this year and he mentioned that this generation of app builders is more comfortable with abstraction. The last generation of ML was constrained to a few tech companies that ultimately built much of their tooling in-house (e.g., Uber, TikTok). What has changed about the AI infra market is that AI applications are no longer built only by PhDs. A new generation of app builders raised on OpenAI and Anthropic may be more likely to buy AI infrastructure than previous generations.
II. The technical and non-technical divide narrows.
There is a lot of talk about the death of software engineering, but I find that very hard to believe. Even though non-software engineers are now more empowered to build and modify applications with tools like Cursor, I think most mission-critical software will be built by a few and distributed to many. The low cost of distribution in software still enables specialized companies and, similarly, specialized technical talent. We will probably need fewer software engineers at the lower end, but more at the medium to high end.
III. Inference-time scaling may unlock a next wave of model improvements.
I thought the o1 release was a big deal because of inference-time scaling. OpenAI has essentially unlocked a new avenue for improving model performance by throwing compute at inference time instead of training (summarized here). I wrote earlier this year that AI Infrastructure would be hard to adopt until the model layer began to stagnate. In my view, this may push the adoption of adjacent AI infrastructure categories further out as the model layer continues to improve dramatically. It will be interesting to see if this is the direction the other large model builders go as well, or if this becomes a notable fork in strategies.
Happy New Year!
-Nandu