Hey folks - Firas here.
This week’s PMF Playbook comes from my episode with Aaron Fulkerson. Aaron is the CEO of Opaque, a company building what may become one of the most important infrastructure layers of the AI era: confidential AI. But what made the conversation unusually valuable wasn’t just the technology. It was the deeper idea underneath it all: trust.
Aaron has spent a large part of his life thinking about trust, not as a vague virtue or a soft cultural theme, but as a foundational system. Something that determines whether teams function, whether societies remain stable, and whether new technologies create prosperity or chaos.
What made this conversation stand out is that it connected trust, AI, platform shifts, enterprise value, and human behavior into one continuous PMF story. You can’t separate them. If AI is the next major platform shift, then trust is the infrastructure that decides whether that shift creates durable value or systemic fragility.
Let me walk you through what stood out.
The starting point: trust is not abstract, it is structural
Aaron defines trust through three pillars: caring, consistency, and competency.
Caring means I believe you have my interests in mind. Consistency means I know how you will show up tomorrow because I’ve seen how you show up today. Competency means I believe you can actually do what you say you will do.
That framing stayed with me because it gives trust operational meaning. It moves trust out of the world of slogans and into the world of systems. It also explains why trust is so easy to talk about and so hard to build. You do not get it by declaring it. You get it by repeatedly demonstrating care, reliability, and capability over time.
The PMF lesson buried in that is simple: trust is not a brand asset layered on top of product-market fit. It is often part of the product itself. In many categories, especially the ones touching data, workflows, or critical decisions, trust is inseparable from adoption. If the buyer does not trust the system, PMF never fully forms no matter how impressive the demo looks.
The deeper question: every platform shift needs a trust layer upgrade
One of Aaron's most important ideas is that every major technology platform shift requires a corresponding trust upgrade.
The internet is the clearest example. Early on, it was a network of pages and information. But it could not become a global commerce platform until the trust layer improved. Encryption in transit, HTTPS, SSL, TLS: all of that mattered because it gave people confidence that transactions could happen safely. Without that upgrade, the internet would have remained interesting but limited.
Aaron’s argument is that AI is now going through the same moment.
The difference is that the trust problem is no longer just about data in transit. It is about runtime verifiability. It is about what happens when autonomous or semi-autonomous systems operate at machine speed, touching sensitive data, making decisions, chaining actions across systems, and interacting with enterprise workflows in ways that humans alone never could.
That distinction matters. A human being, even a bad actor, is constrained by time, energy, and coordination. An agentic system is not. And once you move from software as a passive tool to software as an active actor, the trust model has to change with it.
The PMF takeaway here is big: if AI is the platform shift, then trust is not a feature request. It is the enabling condition for the market to exist at scale.
Why enterprise AI adoption stalls without trust
There is a temptation in AI to think the biggest product question is intelligence. How good is the model? How fast is the output? How magical is the interface?
Aaron’s view is that for enterprises, that is not the gating factor. The gating factor is whether they can use their most valuable data without exposing themselves.
That is where his examples became especially sharp. A company may not be worried that a foundation model is literally training on its data in the most direct sense. The more subtle concern is the metadata, the workflows, the ways and means by which people use the model in conjunction with proprietary information. That interaction pattern itself can become valuable intelligence.
And that changes the equation.
The PMF lesson here is uncomfortable but important: in the AI era, product value may increasingly come not from owning the model, but from protecting the environment in which the model is used. The company that makes AI usable with proprietary data may create more durable value than the company that simply makes AI impressive.
Opaque’s wedge: confidential AI as the HTTPS moment for enterprise AI
This is where Opaque becomes strategically interesting.
Opaque is not trying to win by being another application layer company wrapped around a model. It is building around a much deeper problem: how do you let enterprises use AI on sensitive data with verifiable guarantees around privacy, policy enforcement, and confidentiality?
The simplest analogy Aaron gave is that this is the AI equivalent of the internet’s encryption moment. Enterprises want to do things like confidential RAG, summarization across HR, legal, customer, and finance systems, and agentic workflows that pull together multiple data sources. But they cannot do that safely if they are effectively handing over control of their most sensitive data exhaust.
Opaque’s approach is built around confidential computing and verifiable policy enforcement before, during, and after runtime. The details get technical quickly, but the strategic point is clear: it allows companies to use AI without surrendering the underlying crown jewels that make their businesses valuable.
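To make that pattern slightly more concrete, here is a deliberately toy sketch of what "policy enforcement before and after a model call" can look like in code. Everything in it is hypothetical and invented for illustration (the Policy class, enforce_before, enforce_after, and confidential_call are not Opaque's API), and it leaves out the hard part entirely: the confidential computing and attestation that make the guarantees verifiable rather than merely promised.

```python
# Hypothetical sketch only. Opaque's actual product is built on confidential
# computing (hardware enclaves, attestation), which this toy code does not
# implement. The aim is just to show the shape of enforcing policy before
# and after a model call, instead of trusting the model provider.

import re
from dataclasses import dataclass, field

@dataclass
class Policy:
    # Fields a given workflow is allowed to send to the model.
    allowed_fields: set[str]
    # Patterns that must never appear in model output (e.g., raw SSNs).
    redact_patterns: list[str] = field(
        default_factory=lambda: [r"\b\d{3}-\d{2}-\d{4}\b"]
    )

def enforce_before(record: dict, policy: Policy) -> dict:
    """Pre-runtime check: strip any field the policy does not allow."""
    return {k: v for k, v in record.items() if k in policy.allowed_fields}

def enforce_after(output: str, policy: Policy) -> str:
    """Post-runtime check: redact disallowed patterns from the response."""
    for pattern in policy.redact_patterns:
        output = re.sub(pattern, "[REDACTED]", output)
    return output

def confidential_call(record: dict, policy: Policy, model_fn) -> str:
    """Filter data going into a model call and verify what comes out."""
    safe_input = enforce_before(record, policy)
    # In a real system this call would run inside an attested enclave.
    raw_output = model_fn(safe_input)
    return enforce_after(raw_output, policy)

# Toy usage: the "model" here is a stub that just echoes its input.
policy = Policy(allowed_fields={"ticket_summary"})
record = {"ticket_summary": "Customer asks about renewal", "ssn": "123-45-6789"}
print(confidential_call(record, policy, lambda x: f"Summary: {x}"))
```

The design point is the wrapper, not the model: the same model call becomes adoptable once data is filtered on the way in and verified on the way out, under a policy the enterprise controls rather than one it has to take on faith.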
And that creates a very specific PMF pattern. The product is not just solving for “can this AI do the task?” It is solving for “can this AI do the task in a way our business can actually adopt?” Those are very different questions. One creates demos. The other creates budgets.
The platform risk founders should be paying attention to
Another part of the conversation that stuck with me was the broader warning about platform capture.
We have seen this movie before. Toys “R” Us built with Amazon and helped strengthen the platform that eventually disintermediated them. Marketplace operators learn from the activity happening on top of them. The platform gets smarter than the participant.
Aaron’s point is that the same dynamic is emerging again in AI. If the next generation of AI platforms becomes the place where workflows, prompts, interactions, metadata, and downstream customer behavior all accumulate, then the platform does not just provide utility. It begins to absorb the value layer.
That is what makes confidential AI strategically important. It is not just a security layer. It is a bargaining-power layer. It gives enterprises a chance to participate in the AI economy without naively handing the future of their business to the infrastructure provider.
The PMF lesson here is one founders should take seriously: building on a platform is not inherently bad, but building in a way that leaks the source of your long-term defensibility is fatal. PMF that depends on someone else not learning from your best workflows is fragile PMF.
The darker side: trust erosion at machine speed
Toward the middle of the conversation, things got heavier in a way I thought was necessary.
Aaron talked about model poisoning, hidden agendas, and the possibility that AI systems could shape beliefs, manipulate behavior, or amplify subtle influence at a scale that human bad actors simply cannot match. Whether you take every example literally or not, the underlying point is hard to dismiss: systems that operate at machine speed with broad interface access can alter human behavior faster than we are used to defending against.
That is where the conversation moved beyond enterprise software and into society itself.
We talked about people already using AI to respond to personal messages, to mediate emotionally charged communication, and to think on their behalf. There are clearly positive uses for that. But there is also a real cost. The more AI sits between person and person, decision and decision, the more it shapes how we communicate, what we trust, and even how much independent judgment we retain.
The PMF relevance here is subtle but important. Products are not neutral once they scale. They train behavior. They change defaults. They redefine expectations. Founders building in AI are not just shipping utilities. They are shipping interaction patterns that can compound culturally over time.
The human counterweight: empathy, communication, and values
One of the most hopeful parts of the conversation came when Aaron shifted back toward what remains deeply human.
His view is that as AI becomes more pervasive, some human capabilities become more important, not less. Empathy. Communication. Storytelling. The ability to build trust. The ability to connect around shared values.
That also flowed into a second theme he emphasized strongly: the only way to build a high-performance team is on a foundation of trust. And if you do not provide a clear mission and values, people default to fear, ego, and fragmentation.
That part resonated with me a lot because I have seen it inside my own business. Values are useless if they live on a website and nowhere else. They only matter when they shape who you hire, who you promote, what behavior gets rewarded, and what behavior gets filtered out. Otherwise, they are branding. Not culture.
The PMF lesson here is that product-market fit inside a company matters too. A team cannot compound around a product if it cannot compound around a shared way of operating. Internal trust affects external execution.
Bias for action: the founder trait Aaron kept coming back to
Aaron also made a point I think more founders need to hear: people need to develop opinions and test them.
Not endlessly wait for instructions. Not outsource judgment. Not sit in abstraction. Have a view. Try it. Learn. Adjust. Repeat.
He described it as strong opinions, weakly held. That balance matters. You need enough conviction to move, but enough humility to change course when evidence shows up. Without that, you get one of two failure modes: passivity, where no one drives anything forward, or rigidity, where people cling to bad ideas long after reality has moved on.
That is a PMF principle too. PMF is found by iterating against reality. That requires action. And action requires people who are capable of forming a judgment, making a move, and learning fast.
The bigger pattern: AI may commoditize products, but it will magnify values
One of the most interesting closing threads was Aaron’s point that as AI accelerates commoditization, values become even more important.
If products can be built faster, copied faster, and iterated faster, then what becomes a durable point of differentiation? It is not just raw functionality. It is the trust people have in the company, the brand, the mission, and the values embedded in the experience.
That does not mean values replace product quality. It means that in a world of increasing product abundance, values become part of what makes a product meaningful and durable.
In other words, the more AI compresses the cost of creation, the more human trust becomes part of defensibility.
Closing thought:
If I compress the entire episode into one sentence, it’s this:
Every major platform shift creates new capability, but the companies that matter most are the ones that build the trust layer that makes that capability usable at scale.
That is what made this conversation feel so important to me. Aaron is not just talking about AI safety in an abstract sense. He is pointing at a very practical market truth: if enterprises cannot trust AI with their most valuable data and workflows, adoption stalls. And if society cannot rebuild trust while AI scales, the consequences go far beyond enterprise software.
The future of AI will not just be decided by model quality or interface design. It will also be decided by who makes trust operational.
Until next time,
Firas Sozan
Your Cloud, Data & AI Search & Venture Partner
Find me on LinkedIn: https://www.linkedin.com/in/firassozan/
Personal website: https://firassozan.com/
Company website: https://www.harrisonclarke.com/
Venture capital fund: https://harrisonclarkeventures.com/
‘Inside the Silicon Mind’ podcast: https://insidethesiliconmind.com/
