The Model Has Commoditized.
The Moat Shifted.
From TDX and Agentforce World Tour NYC to the East Bay CXO room in Pleasanton, April made one thing unmistakable: intelligence without context is the most dangerous outcome in enterprise AI. Confident, fast, and wrong.
Context is now the
competitive advantage.
April opened in Moscone West and closed in Pleasanton: two rooms, three weeks apart, arriving at the same conclusion. The enterprise AI conversation has moved past model selection. The question now is whether your organization has made its institutional knowledge legible to the AI you have already deployed.
Prukalpa Sankar, Founder & Co-CEO of Atlan, framed it precisely at our April 30 gathering: P = f(I, C). Performance is a function of intelligence and context. Intelligence has compounded roughly a thousand times in the last decade. Context, the situated knowledge of how your business actually operates, has barely moved. A powerful model with no context about your business is not just less useful. It is the most dangerous outcome: confident, fast, and wrong.
"We've spent a decade making AI smarter. We forgot to make it knowledgeable about us."
TDX 2026 introduced the platform architecture to close that gap. Agentforce World Tour NYC showed what it looks like when it runs. The East Bay CXO room on April 30 got honest about what building it actually requires.
Intelligence Without Context
Is the Most Dangerous Outcome
The models are deployed. The demos performed. And yet, across conversations with enterprise technology leaders this month, from the Javits Center to Pleasanton, the question is the same: why hasn't the business moved?
Prukalpa Sankar gave the room the clearest answer yet at our April 30 gathering. The formula is simple. Performance is a function of intelligence and context. Intelligence has compounded roughly a thousand times in the last decade. Context, the situated knowledge of how your business actually operates, your institutional memory, your domain logic, has barely moved in the same period.
That gap is not a technology problem. It is a strategy problem. Organizations are deploying more capable models on top of the same incomplete picture of themselves. The result is not improved performance; it is faster errors, delivered with greater confidence, at machine scale.
The enterprises pulling ahead are not the ones that adopted AI first. They are the ones that started treating their institutional knowledge as the asset β making it structured, governed, legible, and continuously updated. That is the context layer. And it is the architectural decision that separates organizations building a durable moat from those building an impressive demo.
The full room conversation (keynote, panel observations, and verbatim quotes from five practitioners who have tried to build this in production) is documented in §05.
Read the April 30 full recap →
Salesforce declared a direction:
platform as infrastructure, not interface.
TDX 2026 was the clearest signal yet about where Salesforce is taking the platform. Headless 360, the headline announcement, was not a product launch. It was an architectural repositioning. The entire Salesforce platform is now accessible via APIs, MCP tools, and CLI, with no browser required. Parker Harris's provocation in the developer keynote asked why you would ever log into Salesforce again if agents can act on it directly. That question framed two days of technical sessions and still frames the decisions enterprise architects face this quarter.
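The headless pattern is easiest to see in code. The sketch below builds a request against Salesforce's standard REST query endpoint (`/services/data/vXX.X/query/?q=<SOQL>`); the instance URL, API version, and token are placeholders, and the request is only constructed, not sent.

```python
from urllib.parse import urlencode

def build_soql_request(instance_url: str, api_version: str,
                       soql: str, token: str) -> tuple[str, dict]:
    """Build a headless SOQL query request for the Salesforce REST API.

    No browser, no UI: an agent or script hits the same platform
    surface the Lightning interface is built on.
    """
    # Standard query endpoint: /services/data/vXX.X/query/?q=<SOQL>
    url = f"{instance_url}/services/data/v{api_version}/query/?{urlencode({'q': soql})}"
    headers = {
        "Authorization": f"Bearer {token}",  # OAuth access token (placeholder)
        "Accept": "application/json",
    }
    return url, headers

# Example: an agent asking for open cases, entirely headless.
url, headers = build_soql_request(
    "https://example.my.salesforce.com",  # placeholder org
    "60.0",
    "SELECT Id, Subject FROM Case WHERE IsClosed = false",
    "placeholder-token",
)
print(url)
```

The same call shape works from an MCP tool wrapper or a CLI script, which is the point of the repositioning: the UI becomes one client among many.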
The agentic enterprise is no longer
a vision. It's a reference architecture.
One day after TDX, 130+ sessions at the Javits Center answered the question TDX raised: what does the agentic enterprise look like when it is actually running? Financial services, retail, media, healthcare: every industry track carried the same theme. Agentforce 360 is the moment AI agents stop being demos and start being infrastructure. The production proof was on stage, and the conversation in the room was not about adoption; it was about governance at scale.
The East Bay CXO community
got honest in the room.
March gave us the diagnosis: AI is making decisions faster than organizations can govern them. April gave us the architecture of the answer. One keynote that reframed the question entirely. A panel that got candid about the gap between what the slides say and what implementation actually looks like in production.
Intelligence has compounded roughly a thousand times in the last decade, with model benchmarks that stood at 9% in 2023 reaching near-human performance by 2025. And yet a significant share of CEOs report zero measurable financial benefit from AI. The formula explains why. P = f(I, C): Performance is a function of Intelligence and Context. Intelligence is commoditizing. Context, the situated knowledge of how your business actually operates, your institutional memory, your domain logic, your customer history, has barely moved in a decade. A powerful model with no context about your business is not less effective. It is actively dangerous: confident, fast, and wrong.

Enterprises representing over $10 trillion in market cap trust Atlan with their context layer. The pattern across all of them is the same: the winners are not the ones with the best models. They are the ones that have made their institutional knowledge legible to those models. "Context is king. Context is your IP."
The Panel: Moderated by
Pranav S (VP IT, Mozilla) · Nishant Arya (Director of Engineering, Stryker) · Pari Ambatkar (Head of Enterprise AI & Platforms, Marvell) · Hardeep Singh (Sr. Director Enterprise System & AI, Procore Technologies) · Dhiraj Sharda (Sr. Director Product, Blackhawk Network). Five practitioners, four industries, the same wall.
Venue & Community Host: Dhiraj Sharda, Sr. Director Product, Blackhawk Network. The April gathering was hosted at Blackhawk Network's Pleasanton campus. Dhiraj's generosity in hosting the community, and his consistent role as an East Bay CXO leader, made the April session possible.
Four shifts every technology leader
should be tracking this quarter
Graph-based retrieval augmented generation β which adds a knowledge graph layer to enable multi-hop reasoning across connected data β is moving into enterprise production architectures. For organizations with complex, relationship-dense data (financial networks, clinical pathways, supply chain dependencies, contract graphs), GraphRAG unlocks reasoning capability that flat vector search cannot provide. The organizations building this infrastructure now are treating it as a strategic data asset, not a feature request.
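The multi-hop distinction is concrete. The toy sketch below (illustrative entity names; a production system would use a graph store rather than an in-memory dict) traverses a small knowledge graph to collect facts that no single document would mention together, which is exactly what flat vector search over per-entity documents cannot surface.

```python
from collections import deque

# A toy knowledge graph: entity -> [(relation, entity), ...]
# Names are illustrative, not real data.
GRAPH = {
    "Acme Corp":   [("supplied_by", "Delta Parts"), ("holds_contract", "Contract-17")],
    "Delta Parts": [("located_in", "Region East"), ("supplied_by", "Omega Raw")],
    "Omega Raw":   [("located_in", "Region West")],
    "Contract-17": [("expires", "2026-09-30")],
}

def multi_hop_context(start: str, max_hops: int = 2) -> list[str]:
    """Collect facts reachable within max_hops of a seed entity.

    Flat retrieval returns documents similar to the query; graph
    traversal surfaces connected facts, e.g. a supplier's supplier,
    that sit two hops away from the entity the user asked about.
    """
    facts: list[str] = []
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # stop expanding past the hop budget
        for relation, target in GRAPH.get(node, []):
            facts.append(f"{node} {relation} {target}")
            if target not in seen:
                seen.add(target)
                frontier.append((target, depth + 1))
    return facts

# Two hops from Acme Corp reach its supplier's supplier (Omega Raw),
# a dependency no single per-entity document states.
print(multi_hop_context("Acme Corp"))
```

The assembled facts are then passed to the model as retrieval context; the hop budget is the lever that trades traversal cost against reasoning depth.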
Organizations are discovering that the metric for AI success in production is not technical accuracy; it is user adoption. Systems that perform well in testing get ignored in production because the people they're built to assist don't trust them. Every practitioner in the April 30 panel confirmed this independently. The path runs through transparency: start with small user groups, expose reasoning, iterate on real-world feedback, build the evidence base that earns trust incrementally.
TDX introduced Agent Fabric. Microsoft, AWS, and others are building competing orchestration layers. The question for enterprise architects is no longer "which model?" but "which control plane?" This is the new lock-in question, and it is in many respects more consequential than the foundation model choice. Enterprises that have not made a deliberate orchestration decision are already making a default one.
The organizations building context layers today are building durable competitive advantage. Proprietary data, domain terminology, customer history, and operational logic β made legible to AI and continuously updated β cannot be replicated by a competitor that deploys the same models. The context layer is where AI value compounds over time. As Prukalpa Sankar put it in Pleasanton: context is your IP. The enterprises that understand this in 2026 will hold an asymmetric advantage over those that treat context as a later-phase problem.
ITSM in an Agentic World:
What IT leaders need to rebuild before AI can operate at scale.
The room doesn't close when you leave. The East Bay CXO community gathers monthly; each session builds on the last. May continues where April left off.
No slides. No vendor pitches. A peer-driven conversation on what it actually takes to build an enterprise AI stack that holds.
Reserve Your Seat →
Part of the East Bay CXO Community, a Teqfocus-led initiative to foster trusted peer relationships and collective innovation in the Bay Area technology leadership circle.
On the next episode of TeqTalk, Caroline Chung joins to explain what makes AI deployments succeed in healthcare β covering governance, data frameworks, de-identification tradeoffs, and digital twins. She offers CDOs, CTOs, and technology leaders a practical view of building trustworthy, scalable AI under real clinical and regulatory pressures. Few people in healthcare data bring the operational depth and intellectual clarity she does to this conversation.