Hey folks - Firas here.

This week’s PMF Playbook comes from my episode of Inside the Silicon Mind with Leonid Igolnik. Leonid has spent decades building and leading in enterprise software, sits at the intersection of engineering, AI, and operating reality, and he has a rare ability to talk about the future without losing the thread of what actually breaks in production.

What made this conversation valuable wasn’t a prediction about agents or a hot take about “one engineer startups.” It was something more durable:

AI is changing software by changing the abstraction layer, and when abstraction shifts, the winners aren’t the people who generate more code. The winners are the people who can express intent clearly, master their domain deeply, and prove correctness in a world where models are non-deterministic.

Let me walk you through what stood out.

We started where the entire industry seems to be starting: agents.

And Leonid’s answer was honest in the best way:

Nobody really knows what agents are yet. But everyone wants one.

That’s not cynicism - it’s a signal.

When a term becomes unavoidable before it becomes precise, it usually means the market is grasping for a new interface. People feel the shift coming. The label is just lagging behind the reality.

What he did anchor on is the only part that matters right now:

     We’re in a period of heavy experimentation

     Tooling is being built everywhere

     And it’s not clear what will win

That ambiguity is the point. It’s also why this moment is so intoxicating if you’re building.

AI is a new abstraction layer and abstraction always creates a new PMF battlefield

Leonid framed AI the way strong engineering leaders frame every platform change:

AI isn’t just a tool. It’s a new level of abstraction.

And if you’ve been around long enough, you’ve seen this movie before:

     Assembly → higher-level languages

     Rack-and-stack → cloud

     Manual ops → managed infrastructure

     Static UI → API-first

     Now: code-as-output → intent-as-input

Each time abstraction rises, two things happen at once:

  1. More people can build more things

  2. New failure modes explode underneath the surface

The illusion is: “this will reduce complexity.”

The reality is: it shifts complexity into different layers - often into layers you can’t ignore anymore.

The uncomfortable truth: AI produces bugs at the same rate but now we ship more code

This was one of the most sobering lines in the entire conversation:

The machinery is producing as many bugs as humans, percentage-wise… except now we’re writing way more code.

That’s the hidden tax of AI-assisted engineering.

You get speed.
You also get volume.
And volume amplifies risk.

The market is celebrating output. But the real cost shows up later:

     regressions

     brittle edge cases

     subtle security holes

     operational fragility

     “it worked yesterday” incidents

This is why the “AI makes engineers 10x” narrative is incomplete.

AI can make output 10x.
But it can also make failure 10x if you don’t build guardrails.

Why testing is coming back - not as a best practice, but as survival

Leonid believes testing is going to have a resurgence, and not because engineers suddenly love writing tests.

It’s because we’re trying to do something logically inconsistent:

We’re asking a stochastic (non-deterministic) system to produce deterministic software.

If you prompt the same model with the same request three times, you can get three different outputs.

So how do you build production-grade systems in a world where the generator is probabilistic?

His answer is simple and very practical:

You need test coverage as the safety net.

Not as dogma.
As a stabilizer.

Testing becomes the method by which you transform non-deterministic generation into deterministic behavior.
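To make that concrete: whichever of three different AI-generated drafts you accept, the same behavioral tests have to pass before it ships. Here's a minimal sketch of that idea - the `slugify` function and its cases are hypothetical, not from the episode:

```python
# Behavioral tests pin down expected outcomes, so any AI-generated
# implementation -- draft 1, 2, or 3 -- must satisfy the same contract.
# The slugify() spec below is an invented example.

def slugify(title: str) -> str:
    # One candidate implementation (could be any of several AI drafts;
    # the tests, not the draft, define what "correct" means).
    return "-".join(title.lower().split())

def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_whitespace():
    assert slugify("  Hello   World  ") == "hello-world"

def test_slugify_is_deterministic():
    # Same input, same output -- the property the generator itself lacks.
    assert slugify("PMF Playbook") == slugify("PMF Playbook")
```

The tests stay fixed while the generated code churns - that's the stabilizer.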

And this is where something interesting happens:

What used to be tedious becomes easier because AI can help write tests, expand coverage, and describe behavior in plain English.

Which leads to a bigger point…

“English is the new programming language” and it changes who can contribute

Leonid referenced the idea (popularized by Karpathy) that:

English is becoming the interface.

But he pushed it further into a workflow-level insight:

We can increasingly express the expected behavior of software in plain language, and that creates shared understanding across roles.

Historically, the chain looked like this:

     PM writes requirements

     design translates into screens

     engineers implement

     QA validates

     customers discover what the product actually does

Now we can do something more powerful:

     write tests that define expected outcomes in plain English

     let product and engineering align on behavior early

     turn “what should happen” into executable truth
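One way "executable truth" can look in practice: a pytest-style suite where each test name is the plain-English requirement itself, readable by PM and engineering alike. The discount rules below are invented for illustration:

```python
# Each test name is a plain-English statement of expected behavior that
# product and engineering can agree on before implementation.
# The discount rules here are hypothetical.

def apply_discount(subtotal: float, code: str) -> float:
    # Stand-in implementation of the invented requirement.
    if code == "WELCOME10" and subtotal >= 50:
        return round(subtotal * 0.9, 2)
    return subtotal

def test_welcome_code_gives_ten_percent_off_orders_of_50_or_more():
    assert apply_discount(100.0, "WELCOME10") == 90.0

def test_welcome_code_does_nothing_below_the_50_minimum():
    assert apply_discount(49.0, "WELCOME10") == 49.0

def test_unknown_codes_never_change_the_price():
    assert apply_discount(100.0, "TYPO") == 100.0
```

A non-engineer can read those test names and spot a missing rule before a line of production code exists.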

That’s not just a tooling improvement.

That’s a coordination improvement.

And PMF lives and dies on coordination.

The new engineering moat: domain mastery + intent expression

Leonid kept returning to this theme:

Mastery matters more than ever. Domain expertise matters more than ever.

Why?

Because AI amplifies skill - it doesn’t replace it.

He used an analogy I loved:

A steam hammer in the hands of a skilled craftsman is far more powerful than a regular hammer - but the craftsman still needs to understand how metal behaves.

AI works the same way:

     In the hands of someone who understands fundamentals, it multiplies capability

     In the hands of someone without fundamentals, it multiplies mistakes

This is where the market is heading:

The engineer who can “use AI” won’t win.

The engineer who can:

     define what matters

     express intent clearly

     reason from first principles

     validate outcomes

     operate systems safely

  …will win.

Because engineering is no longer just “writing code.”

It’s building systems that behave predictably under real-world constraints.

Hiring is changing: it’s not “can you code?” - it’s “can you be a professional with tools?”

One of the most practical parts of the episode was how Leonid thinks about interviews.

He described the “tipping point” his org hit:

At first, they tried to screen out candidates who relied on AI tools.

Then they crossed the bridge and accepted the real truth:

The tools are here to stay.

So the question becomes:

How do you evolve the interview loop to:

  1. allow candidates to use modern tools

  2. still evaluate them as actual software professionals

And there’s a second-order problem that most teams ignore:

Interviewers need retraining too.

Because evaluating someone with access to models, docs, and generated code is fundamentally different from evaluating them in a constrained environment.

This is the new bar:

     Can you steer the tool?

     Can you critique output?

     Can you validate correctness?

     Can you secure what you ship?

     Can you explain the tradeoffs?

That’s the definition of “seniority” now.

Why the “1–2 engineer billion-dollar startup” story breaks in the real world

I asked Leonid directly: do you see the next billion-dollar startup being built by one or two engineers?

His answer: No.

And his reasoning matters for founders:

Because building a company is not only building software.

You still have to:

     acquire customers

     service customers

     support customers

     run operations

     manage reliability and trust

     build redundancy (because humans need vacations)

AI can compress effort inside functions.

It can’t delete the functions themselves.

His analogy was crisp: you can fly as a solo pilot - but United Airlines has dispatchers, meteorologists, ground crews, planners, customer support, and redundancy for a reason.

Scale forces specialization.

AI changes the yield.
Not the laws of reality.

The meta-takeaway: the great filter is happening

This might be the most important line of the entire conversation:

Expertise gets amplified. Lack of expertise can’t benefit from this.

We are entering an era where mediocrity becomes expensive - not morally, but economically.

Because when tools raise the baseline, the market stops rewarding “good enough.”

That sounds harsh, but Leonid offered the counterbalance that makes it hopeful:

AI also democratizes access to knowledge and best practices.

It’s a free expert coach.
It can help anyone willing to invest in themselves level up faster than ever.

So yes - the bar rises.

But the ladder is also getting easier to climb.

Closing thought

If I compress this episode into one sentence, it’s this:

In the AI era, PMF belongs to teams who can express intent clearly, master their domain deeply, and prove correctness in a world where code is cheap but reliability is everything.

AI will keep accelerating.

But the differentiation won’t be “who can generate more.”

It will be about who can build systems that behave, scale, and stay trusted when the machinery is probabilistic.

Until next time,

Firas Sozan
Your Cloud, Data & AI Search & Venture Partner

Find me on LinkedIn: https://www.linkedin.com/in/firassozan/
Personal website: https://firassozan.com/
Company website: https://www.harrisonclarke.com/
Venture capital fund: https://harrisonclarkeventures.com/
‘Inside the Silicon Mind’ podcast: https://insidethesiliconmind.com/
