Hey folks - Firas here.

This week’s PMF Playbook comes from my podcast episode with Gidi Cohen, Founder and CEO at Bonfy.ai.

Success isn’t final. Failure isn’t fatal. What matters is the ability to keep moving when the path is unclear.

That’s where Gidi Cohen starts. Not with product features, not with funding, not with pitch decks. With a mindset: chase the problems that don’t look solvable yet, and earn your edge by decomposing the impossible into the doable.

The real AI story inside companies isn’t just speed. It’s scale without supervision. The volume of generated content is rising, humans are leaving the loop, and the cost of being wrong is climbing fast. The future isn’t a single catastrophic breach. It’s a thousand tiny “harmless” moments that quietly erode customer trust until the brand is bleeding and nobody can explain where it started.

That’s the battlefield Bonfy is stepping into.

The real problem isn’t data. It’s judgment.

For two decades, “data security” has mostly meant pattern matching. Find nine digits. Flag the social security number. Detect a keyword. Trigger an alert. It feels like control, but it’s often theatre.
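That kind of pattern matching is easy to sketch, which is exactly the problem. Here's a minimal illustration (not Bonfy's code, just a toy regex check) of a legacy-style rule that flags any nine-digit sequence as a possible Social Security number:

```python
import re

# Naive pattern-matching "DLP" rule: any nine-digit sequence
# (with or without dashes) is flagged as a possible SSN.
SSN_PATTERN = re.compile(r"\b\d{3}-?\d{2}-?\d{4}\b")

def flag_message(text: str) -> bool:
    """Return True if the message 'looks sensitive' by pattern alone."""
    return bool(SSN_PATTERN.search(text))

# A real SSN and an innocent tracking number both trigger the alert:
flag_message("My SSN is 123-45-6789, please update my account.")  # True
flag_message("Tracking number 123456789 shipped yesterday.")      # True
```

Both messages fire the same alert, because the rule sees digits, not meaning. That gap between "matches a pattern" and "is actually risky" is the theatre.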

Gidi’s argument is blunt: the industry optimized for what was easy to build, not what businesses actually needed. The thing that matters most isn’t whether sensitive data exists in a message. The thing that matters is whether that message is appropriate in context.

A social security number sent from a customer to a service provider could be legitimate. The same number forwarded externally could be a disaster. If your system can’t tell the difference, you don’t have a security program - you have noise.

And noise has consequences. Every enterprise is drowning in tools, alerts, dashboards, and workflows stitched together under pressure. Complexity keeps rising, headcount never catches up, and teams lose the forest for the trees. In that environment, false positives don’t just annoy people. They train the organization to ignore the machine.

But Gidi’s more controversial point is the one most leaders miss: false negatives are the real crisis. If the majority of incidents aren’t even detectable by the existing approach, then the market doesn’t just need incremental improvement. It needs a reset.

Why now: GenAI turns a weakness into an existential risk

Before GenAI, bad tools were tolerated because humans were still doing most of the thinking. Yes, mistakes happened. Yes, someone occasionally sent the wrong attachment to the wrong person. But humans were still the gate.

Now, we’re moving into a world where content is produced at industrial speed, distributed across channels, and often shipped with minimal review. That changes what “risk” means.

Gidi’s investment bank example makes it real. Imagine analysts generating decks, summaries, filings, and communications through copilots and chat interfaces. What data went in? What was learned? Who validated the outputs? Who checked for hallucinations? And even if the output looks polished, what happens when it’s wrong, or when it accidentally merges client data across deals?

The failure mode isn’t theoretical. It’s already happening in quieter forms: the wrong portfolio landing in the wrong inbox, the wrong link permissions, the wrong shared repository. GenAI doesn’t introduce the first mistake. It multiplies the number of mistakes you can make per day while reducing the number of humans available to catch them.

That’s why Bonfy isn’t positioning as “a better DLP.” It’s positioning around something more fundamental: replacing pattern matching with context-based decision-making, and doing it in a way that can keep up with the volume that AI creates.

The product thesis: Adaptive Content Security

Bonfy calls it ACS: Adaptive Content Security.

The claim isn’t “we detect more patterns.” The claim is: we can understand content inside the business context of the organization and make better decisions about what’s risky and what isn’t.

Context means asking questions older tools don’t know how to ask:

Who is sending this?
Who is receiving it?
What relationship exists between them?
What rights do they have to the information?
What customer, deal, project, or system is the content tied to?
What channel is being used, and how does that change risk?

This is the shift from “what’s in the message” to “what does this mean in this moment.”
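To make the shift concrete, here's a toy decision function in that spirit. Everything here is illustrative: the field names, the `authorized_orgs` set, and the rules are my assumptions, not Bonfy's actual model. The point is that the same content yields different verdicts depending on who is sending, who is receiving, and what relationship exists between them:

```python
from dataclasses import dataclass

@dataclass
class MessageContext:
    """Hypothetical context record; field names are illustrative."""
    sender_org: str
    recipient_org: str
    recipient_is_external: bool
    channel: str  # e.g. "email", "chat", "file_share", "genai_prompt"

def is_risky(contains_ssn: bool, ctx: MessageContext,
             authorized_orgs: set[str]) -> bool:
    """Same content, different verdict: risk depends on context."""
    if not contains_ssn:
        return False
    # Internal handling of a customer's own data is normal business.
    if not ctx.recipient_is_external:
        return False
    # External delivery is fine only within an established relationship.
    return ctx.recipient_org not in authorized_orgs

ctx = MessageContext("acme-corp", "bank-support", True, "email")
is_risky(True, ctx, {"bank-support"})  # False: legitimate service flow
is_risky(True, ctx, set())             # True: no established relationship
```

A real system would infer these context signals rather than receive them as booleans, but even this sketch shows why the verdict can't live in the content alone.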

And once you accept that idea, a second insight becomes obvious: content risk doesn’t live in one channel. It moves through email, chat, file shares, SaaS apps, copy/paste workflows, and now GenAI prompts. Solving one channel doesn’t solve the business problem. That’s why the architecture has to be multi-channel by design, not bolted on after the fact.

The PMF lesson: customers don’t design products, but they expose truth

Gidi has spoken with an enormous number of CISOs, and his posture is healthy: listen aggressively, but don’t outsource judgment.

Customers can articulate pain, priorities, and constraints. They can validate whether your approach resonates. They can also surprise you and force you to update your roadmap. But if you ask customers to invent the product, you’ll build a better version of yesterday’s tool and lose to someone who rewrites the category.

This is a sharp PMF principle: learn the problem from customers, but build the solution from first principles.

That’s especially true in emerging markets, where customers are still asking for improvements to old systems because they haven’t yet internalized what the new world demands. PMF isn’t about giving buyers what they request. It’s about giving them what they’ll eventually realize they needed.

The founder operating system: optimism and paranoia aren’t opposites

One of the most useful takeaways from this conversation has nothing to do with security.

Gidi describes a daily cycle: waking up early with a list of what could go wrong, then converting that stress into a plan by the time the workday begins. That’s the founder pattern in its cleanest form.

Paranoia keeps you alive. Optimism keeps you moving. The trick is not choosing one. The trick is turning fear into execution.

It’s also why his Churchill framework lands so well. Leadership isn’t just seeing the future. It’s translating the future into actions that happen today, and rallying people around those actions long enough for momentum to form.

Where this is going

The market is about to split into two camps.

One camp will try to patch old approaches onto new GenAI workflows and call it “AI security.” It will mostly be feature marketing.

The other camp will treat the problem as what it really is: the need for machine-level judgment at machine-level scale, because humans cannot remain at the gate when content generation becomes exponential.

If Gidi is right, the winners won’t be the vendors who detect the most patterns. They’ll be the ones who help organizations maintain trust when every process becomes AI-assisted, every employee becomes a content producer, and every mistake moves faster than your ability to respond.

In the enterprise AI era, context isn’t a feature. It’s the moat.

Until next time,

Firas Sozan
Your Cloud, Data & AI Search & Venture Partner

Find me on LinkedIn: https://www.linkedin.com/in/firassozan/
Personal website: https://firassozan.com/
Company website: https://www.harrisonclarke.com/
Venture capital fund: https://harrisonclarkeventures.com/
‘Inside the Silicon Mind’ podcast: https://insidethesiliconmind.com/
