Hey folks - Firas here.

This week’s PMF Playbook comes from my episode with Ross Lazerowitz, founder & CEO of Mirage Security. Ross has built across product and security at companies like Splunk and Observe, and he’s now focused on one of the most uncomfortable truths of the AI era:

As technical defenses harden, humans become the path of least resistance.

What made this conversation unusually valuable wasn’t just the cybersecurity angle. It was the way three themes stitched together into one continuous PMF story: platform shift (AI-driven deception), category formation (human security), and founder reality (fundraising + endurance). You can’t separate them. The platform shift changes the attack surface, the new attack surface creates the category, and the founder’s ability to survive the emotional load determines whether the company gets to PMF at all.

Let me walk you through what stood out.

The platform shift: the “authenticity era” is over and it changes everything

I opened with a question that sounds dramatic but is rapidly becoming operational:

If AI can fake your voice, write your messages, and learn how to manipulate your emotions… is human authenticity already over?

Ross’s answer was blunt: it’s been over for quite some time.

His mental model is that we’re heading toward a world where “real vs fake” becomes less important than “verified vs unverified.” He referenced the idea that parts of the internet already feel simulated - AI-generated content feeding AI-driven engagement loops.

The PMF lesson buried in that is simple and slightly unsettling: when a platform shift makes deception cheap, trust becomes the scarce resource. And scarcity creates markets.

Founders don’t get to ignore this. If your product assumes that identity, intent, or messages are implicitly trustworthy, you’re building on a foundation that’s eroding.

The category lens: security is moving from network/app/data… to “human”

Ross’s origin story here matters. Early in his career he worked in a bank’s Security Operations Center - alert triage at massive scale - watching enterprises get compromised constantly.

But the key insight isn’t “security is bad.”

It’s that over the last 20 years, the industry has made real progress on the classic layers:

- network and perimeter controls improved
- MFA became normal
- patching, detection, and visibility got better
- “big firewall and pray” matured into more modern approaches

So what’s left?

People.

Ross framed it cleanly: if technology controls keep improving, attackers will increasingly choose the path of least resistance - and that’s human behavior under pressure.

The PMF implication: Mirage isn’t competing in “more cybersecurity tooling.” It’s competing in a new layer of security that becomes inevitable as AI accelerates social engineering.

Social engineering has evolved and AI turns it into an assembly line

Most people still think phishing is “a bad email.”

Ross’s view is that we’ve moved into multi-vector attacks - phone calls, SMS, email threads, voicemails - with deepfakes as the next multiplier.

He gave the modern version of the attacker story: not a hoodie in a dark room, but often young, highly effective operators using persuasion and process weaknesses to get access - especially through help desks, password resets, and identity workflows.

This is what matters for PMF: the attack pattern isn’t technically sophisticated - it’s operationally effective.

So the winning products won’t be the ones that add another dashboard. They’ll be the ones that harden the real workflows where humans actually break.

Mirage’s wedge: don’t “train people” - pressure-test the system

Mirage’s approach isn’t to lecture employees.

It’s to simulate realistic social engineering attacks at scale and measure where the organization fails before attackers do.

Ross described Mirage’s AI doing what a human attacker would do:

- calling into a help desk
- impersonating a believable internal persona
- applying pressure
- triggering resets and access changes
- testing whether controls hold up in the messy real world

A design-partner story hit hard: in one simulation, a panicked help desk removed a geo-restriction “to the world” within ~30 seconds of the call. If the exercise hadn’t been slowed down, Ross believes the attack could have gone very far.
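
To make “pressure-test, don’t lecture” concrete, here’s a purely hypothetical Python sketch - my illustration, not Mirage’s actual internals - of representing a scenario like that call as data and scoring whether controls held:

```python
# Hypothetical sketch only: modeling a social-engineering simulation as data
# and checking whether required verification steps ran before access changed.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    target_workflow: str         # e.g. a help-desk access change
    persona: str                 # the believable internal identity the caller claims
    pressure_tactics: list[str]  # urgency, authority, panic, etc.
    requested_action: str        # what the "attacker" asks for

@dataclass
class Result:
    scenario: Scenario
    action_granted: bool         # did the target comply?
    verification_steps_run: list[str] = field(default_factory=list)

def controls_held(result: Result, required_steps: set[str]) -> bool:
    """Controls hold if the action was refused, or if every required
    verification step ran before the action was granted."""
    if not result.action_granted:
        return True
    return required_steps.issubset(result.verification_steps_run)

# The geo-restriction story above, expressed as a failing test case.
scenario = Scenario(
    target_workflow="help-desk access change",
    persona="traveling executive",
    pressure_tactics=["urgency", "panic"],
    requested_action="remove geo-restriction",
)
result = Result(scenario, action_granted=True)  # granted with no verification
print(controls_held(result, {"callback to known number", "manager approval"}))  # False
```

The design point: a control only “holds” if the required verification steps actually ran before access changed - which is exactly what the 30-second story shows failing.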

The PMF lesson is brutally practical: you don’t know your controls until you test them under stress. Security theater disappears the moment you run real scenarios.

This is also why the product is sticky: once a customer sees that gap, they can’t unsee it.

Deepfakes: detection is the wrong hill to die on

Ross has a contrarian (and I think correct) position: he’s bearish on deepfake detection as a reliable solution.

His reasoning is straightforward: there’s no technical guarantee that detectors stay ahead of generators. As generators improve, detectors chase. And “mostly accurate” isn’t acceptable when the downside is account takeover or fraud.

So the framing flips:

Don’t detect fakeness. Verify realness.

He pointed to provenance-style thinking - cryptographic signing and chain-of-custody for content - where trust comes from source and history, not whether something “looks fake.”
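
As a minimal sketch of that flip - using the open-source Python cryptography library, with keys and a message I’ve invented as placeholders, not any specific provenance standard - the source signs content, and consumers check the signature instead of judging whether it “looks fake”:

```python
# Minimal sketch of signature-based verification (not a full provenance chain):
# trust comes from who signed the content, not from how the content "looks".
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The source (say, a company's comms system) holds the private key...
private_key = Ed25519PrivateKey.generate()
# ...and publishes the public key so anyone downstream can verify.
public_key = private_key.public_key()

message = b"Please update the wire instructions for invoice 4471."
signature = private_key.sign(message)

# A tampered copy, as an attacker might forge it.
forged = b"Please update the wire instructions to account 000-EVIL."

for candidate in (message, forged):
    try:
        public_key.verify(signature, candidate)
        print("verified:", candidate.decode())
    except InvalidSignature:
        print("unverified, treat as untrusted:", candidate.decode())
```

Note the asymmetry: a detector has to keep winning against ever-better generators, while a verifier only has to check a signature that doesn’t care how convincing the fake looks.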

PMF insight: when a problem becomes unwinnable at the detection layer, categories shift toward verification, provenance, and process design. The product moat becomes workflow trust, not model classification accuracy.

Fundraising reality: seed is vibes, and the feedback will drive you insane

Ross was unusually candid about fundraising - not the tactics, but the emotional cost.

At seed, there are no clean metrics. So judgment becomes subjective. And the hardest part is that feedback is often inconsistent and not fully trustworthy because investors don’t want to burn bridges.

The PMF lesson for founders: don’t over-iterate on noise. You need a tight internal compass, or you’ll contort the company around contradictory opinions.

Ross also called out a mistake a lot of first-time founders make:

Investor marketing ≠ customer marketing.

Customers don’t care about your grand vision deck. Investors don’t care about product nuance in the way customers do. Mixing the two wastes time and creates confusion.

Founder-market fit: trust = intent + expertise

Ross believes founder-market fit matters everywhere, but especially in security.

When you’re selling into CISOs at massive enterprises, they’re not just buying software. They’re buying belief that you understand the threat model, the workflows, and what breaks in production.

His trust equation was essentially:

- Do I believe your intent is to help?
- Do I believe you have the expertise to solve this?

If either is missing, the deal doesn’t happen.

This is why Mirage’s origin story matters: Ross isn’t trying to “learn security on the job.” He’s building directly inside a domain where credibility is the entry ticket.

Closing thought

If I compress the entire episode into one sentence, it’s this:

As AI makes deception cheap and scalable, PMF will belong to companies that stop trying to “educate humans” and instead redesign the workflows humans operate inside - shifting trust from perception to verification.

That’s what Mirage is building toward: human security as a first-class layer, tested under pressure, built for the world that’s arriving - not the one we wish we still lived in.

Until next time,

Firas Sozan
Your Cloud, Data & AI Search & Venture Partner

Find me on LinkedIn: https://www.linkedin.com/in/firassozan/
Personal website: https://firassozan.com/
Company website: https://www.harrisonclarke.com/
Venture capital fund: https://harrisonclarkeventures.com/
‘Inside the Silicon Mind’ podcast: https://insidethesiliconmind.com/
