Hello Cyber Builders 🖖
I spent the last week of March on the floor at RSA Conference 2026 in San Francisco. 50,000+ attendees. Nearly 700 exhibitors. Four days of keynotes, demos, side meetings, and conversations.
I came back with more clarity than I’ve had in years. It wasn’t that everything was new. The signals were consistent. Founders, investors, CISOs, researchers — everyone saw the same problems.
Here are my seven takeaways. They're written for builders. Treat them as a field guide.
At RSAC 2026, AI was the core of every conversation, keynote, product launch, and investment thesis.
The question has changed. It's no longer whether AI will change cybersecurity. What matters now is how you use AI as a defender, builder, or buyer.
The industry is moving from reactive to autonomous. We’re shifting from tools that only surface problems to systems that act on them.
There’s a nuance worth naming, though. Ross Haleliuk, writing in Venture in Security after RSAC, observed that “every security leader I talked to shared security concerns that are grounded, realistic, and pragmatic” — not AI armageddon, but practical things like shadow AI (employees using tools nobody approved, data going to places nobody mapped) and attackers using AI to do reconnaissance at scale against gaps in identity and vulnerability management that teams simply couldn’t manage at volume before.
There was a sense of urgency. People at RSAC know what they’re facing and why they need to move faster.
The numbers are tough to accept.
Google Mandiant’s M-Trends 2026 report — drawn from over 500,000 hours of frontline investigations — showed that the time between initial access and handoff to secondary threat groups has collapsed from 8 hours in 2022 to 22 seconds in 2025. [Google Mandiant M-Trends 2026, via Resilient Cyber Newsletter #90]
22 seconds. The triage window is gone. Manual review of possible infections needs a complete overhaul. By the time you know you’re breached, attackers have already mapped your IT system.
The same compression is happening on zero-days. What used to take months to weaponize after a patch was published now takes days — sometimes one. By the end of the year, we’re likely talking hours. Phil Venables, former CISO of Goldman Sachs, described himself as “short-term pessimistic but wildly long-term optimistic” — worried that more software, more vulnerabilities, and industrializing attackers are compounding simultaneously.
This narrative is not fluffy marketing. In the Cyber Apocalypse CTF, with over 18,000 participants, AI agents landed in the top 10% — outperforming 90% of human entries. Sophisticated attacks no longer require sophisticated human operators. [Resilient Cyber Newsletter #90]
The main message from RSAC 2026: move faster. Human-in-the-loop is no longer enough for first-level triage and response.
The official theme was Power of Community. But the real focus was agentic AI and how to secure autonomous systems.
AI agents are real and in use across enterprises. They work together, act on critical systems and data, and behave unpredictably.
Traditional security models assume known users, resources, and permissions. Agents break these rules. Their behavior is probabilistic, and their permissions must be broad (at least at first) to match the promised outcomes.
The major players are converging on the same response. CrowdStrike is evolving the EDR concept toward AIDR — AI Detection & Response — where the goal is no longer to detect malware but to trace the decision logic of AI agents. Cisco has proposed Defense Claw, an open-source framework for automated inventory and control of AI agents. Wiz launched a new AI application protection platform. Microsoft introduced a predictive AI security platform that learns continuously (and they are pitching “double agent” as the problem – love it🤵🏻‍♂️🦞).
RSA Innovation Sandbox — the best barometer for what’s coming — had 10 finalists out of 200+ candidates, and the majority focused on agentic AI governance. The winner, Geordie, delivered a global governance platform that applies least-privilege policies to AI agents, with real-time observability and risk remediation. [RSA Conference Innovation Sandbox 2026]
If you’re building here, agentic AI security is not a niche. It’s the next frontier for all security categories.
As agents proliferate, identity is changing. Non-human identities, such as machines, APIs, and AI agents, now make up most identities in enterprises. They’re decentralized, often lack a clear owner, and don’t face background checks or consequences.
Agents make high-stakes decisions without human oversight. Managing this risk means shifting from access control to behavioral control — not just whether this identity can do X, but whether it should do X in this context.
A new category is emerging: advanced governance for non-human identities. This means extending Zero Trust to agents, building registries to support accountability, and using AI to monitor behavior rather than just enforcing access.
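The shift from access control to behavioral control can be made concrete with a short sketch. This is a minimal, hypothetical illustration — the class names, context fields, and rules are all assumptions of mine, not any vendor's actual API — showing how a contextual "should it?" check layers on top of a classic "can it?" allow-list, with an owner attached to every non-human identity for accountability.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: access control ("can this agent do X?") plus
# behavioral control ("should it do X in this context?"). Illustrative only.

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                      # accountability: every non-human identity gets a named owner
    allowed_actions: set = field(default_factory=set)

@dataclass
class ActionRequest:
    agent: AgentIdentity
    action: str
    context: dict                   # e.g. time of day, target system, data sensitivity

def access_check(req: ActionRequest) -> bool:
    """Classic access control: is the action on the allow-list?"""
    return req.action in req.agent.allowed_actions

def behavior_check(req: ActionRequest) -> bool:
    """Contextual layer: should the agent act here and now?"""
    if req.context.get("data_sensitivity") == "restricted":
        return False                # restricted data always requires a human in the loop
    if req.context.get("off_hours") and req.action.startswith("delete"):
        return False                # destructive actions outside business hours are blocked
    return True

def authorize(req: ActionRequest) -> bool:
    return access_check(req) and behavior_check(req)

agent = AgentIdentity("billing-agent-7", owner="finance-platform-team",
                      allowed_actions={"read_invoice", "delete_invoice"})

# Allowed by the ACL, blocked by context:
print(authorize(ActionRequest(agent, "delete_invoice", {"off_hours": True})))   # False
# Allowed by both layers:
print(authorize(ActionRequest(agent, "read_invoice", {"off_hours": True})))     # True
```

The point of the sketch: the ACL alone would have approved the off-hours delete. The contextual layer is where the new governance category lives.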
Two finalists at the RSA Innovation Sandbox illustrate that identity is always a source of innovation: Token (intent-based identity management) and Glide Identity (SIM-based passwordless authentication). [RSA Conference Innovation Sandbox 2026]
Look at software engineers over the last two years. Their job shifted from writing code line by line to directing AI agents, reviewing outputs, catching errors, and setting architecture. The role moved up a level.
Many developers find this liberating. For those who love perfecting implementations, it’s frustrating and hard to accept.
Now, every cybersecurity function is seeing the same shift.
SOC analysts are becoming orchestrators. Instead of manually triaging thousands of alerts, they supervise AI systems that handle first-level detection, correlation, and remediation. They step in only for the highest-stakes decisions.
Carnegie Mellon’s CyLab calls this “cyber autonomy.” [Carnegie Mellon CyLab]
Threat intelligence analysts now use AI systems that ingest, correlate, and score signals at machine speed, across volumes no human team could handle. Their job shifts to judgment, sourcing, and strategy.
Penetration testers already feel this pressure from the numerous AI-driven offensive security startups.
The compliance expert is not immune either. Automated policy engines can now map controls, flag gaps, review third-party reports, and generate evidence in a fraction of the time. The expert's value shifts toward interpretation, edge cases, and stakeholder navigation.
None of this means these jobs are going away. They're getting harder to do poorly and more valuable when done well. People who learn to direct AI systems will be in high demand.
The funding environment has changed. The RSA floor showed it. Energy and confidence are back. Seed rounds are now around $10M for companies that barely have a product or design partners. Repeat founders are closing $50–100M rounds at early stages. Deal count in 2025 matched 2024, but capital deployed roughly doubled. The market is back to peak levels. [Resilient Cyber Newsletter #90]
For context: Series A rounds in Europe are usually €10–15M. In the US, they’re $60M or more. This gap matters if you’re competing globally.
But here’s what capital is concentrating on. Generic AI wrappers aren’t getting funded. Investors want deep domain expertise solving hard, specific problems, or teams disrupting established markets with agentic AI. The premium goes to experienced teams and repeat founders who know the problem firsthand. But going two levels deeper than your pitch deck — into edge cases, operational realities, failure modes — that’s what separates the funded from the passed.
This is the point I keep returning to.
AI-assisted coding makes software easy to copy. A competitor ships a feature, someone reverse-engineers it, and ships an equivalent in days. Software alone is no longer defensible.
Sequoia argued in one of the most influential investment theses of the AI era that for every dollar spent on software, six dollars go to services — and that the next trillion-dollar company will sell the work done, not the tool that does it. Andreessen Horowitz was even more direct: software companies now face a binary choice — become an AI platform or become an AI-powered service. There is no third path. [Sequoia Capital; Andreessen Horowitz, via Resilient Cyber Newsletter #90]
The new moats are things AI can’t copy with a prompt.
Horizon3.ai demonstrated what the full-cycle model looks like: 102% year-over-year ARR growth with a continuous “hack, fix, verify, repeat” proactive testing loop. Not a single step in the chain — the whole chain. [Resilient Cyber Newsletter #90, via Fast Company Most Innovative Companies 2026]
The Build vs. Buy debate sharpened at RSAC 2026, too — and it’s moving in a direction that should concern platform vendors. Darwin Salazar reported that more security leaders than ever are leaning toward building on certain categories, now that agentic development has dramatically lowered the barrier. A BSidesSF panel with security leaders from Anthropic, OpenAI, Perplexity, and Cursor made the nuances clear: the gap between MVP and production-ready is still messy, shipping is only the beginning (you still need to staff for maintenance), and some problems are genuinely too hairy to own internally.
Jackie Bow, Head of Detection & Response at Anthropic, cut through the debate: “Do you want to be the one on-call when the thing you built breaks at 2 am?” [The Cybersecurity Pulse] It’s a real limit. The categories where teams are choosing to build are expanding. If your product sits in one of them, your moat is shrinking.
SIEMs are losing their hold. MCP, data lakes, data pipelines, and triage agents have disrupted the SIEM moat. Teams now store telemetry cost-effectively and use data pipelines for routing and AI readiness. Security teams are moving away from vendor lock-in and building composable, best-of-breed stacks. If your product is a monolith in a composable world, you have a positioning problem.
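The composable pattern described above can be sketched in a few lines. This is an illustration of the routing idea only — the tier names, source list, and severity threshold are my assumptions, not any vendor's schema: every event is retained cheaply in full fidelity, while only high-signal telemetry reaches the expensive detection tier.

```python
# Illustrative sketch: route telemetry by value instead of shipping
# everything to an expensive SIEM. All names and rules are assumptions.

CHEAP_STORE = []    # stand-in for a low-cost data lake (bulk retention)
HOT_TIER = []       # stand-in for the detection/triage tier (fast, expensive)

HIGH_SIGNAL_SOURCES = {"edr", "identity", "cloud_audit"}

def route(event: dict) -> str:
    """Retain everything cheaply; promote high-signal or high-severity
    events to the hot tier for detection and AI triage."""
    CHEAP_STORE.append(event)       # full-fidelity copy for forensics and AI readiness
    if event.get("source") in HIGH_SIGNAL_SOURCES or event.get("severity", 0) >= 7:
        HOT_TIER.append(event)
        return "hot"
    return "cold"

events = [
    {"source": "edr", "severity": 9, "msg": "credential dumping"},
    {"source": "netflow", "severity": 2, "msg": "routine traffic"},
    {"source": "cloud_audit", "severity": 3, "msg": "new IAM role"},
]
print([route(e) for e in events])   # ['hot', 'cold', 'hot']
```

Because routing logic lives in the pipeline rather than the SIEM, each tier can be swapped independently — which is exactly why the monolith's moat is eroding.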
Ask yourself: what about your platform can’t a competitor copy just by looking at it? That’s your real business. Build for that from day one.
I have to include this. Amid all the talk about agentic AI and autonomous SOCs, the most grounded observation at RSAC 2026 came from elsewhere.
Ross Haleliuk wrote that his biggest takeaway was that security leaders are going back to fundamentals. After years of tool sprawl and chasing trends, teams see the real gaps: asset visibility, identity hygiene, access reviews, and patching. The basics are what keep companies safe.
What changed? Two things. First, recent years have proved that most breaches come from basics: default credentials, unknown exposed assets, and exceptions that become permanent. Second, AI now lets teams execute fundamentals at scale.
The real competition for most cybersecurity startups isn’t other vendors — it’s doing nothing. Most security teams he spoke with “aren’t actively doing POCs with 10 vendors that solve the same problem. Instead, most of the time they’re deciding whether to even prioritize the problem this quarter.”
For founders, this is a signal. If your product helps teams do the basics faster, at scale, with less headcount, you’re in the right place. Solving hard, old problems with new tools might be what the market wants right now.
RSAC 2026 was a turning point. AI is the main event. The threat environment is accelerating faster than most realize. Agentic AI is creating new attack surfaces and forcing a redesign of identity, detection, and response.
Capital is available. The urgency is real. The window for expert-led teams is open.
But the playbook has changed. Speed, full-cycle outcomes, and moats AI can't copy are what win now.
Build fast. Build for outcomes. Don’t stop at the finding.
Laurent 💚
Field notes from RSA 2026 (San Francisco, 23–26 March 2026).
Sources: Google Mandiant M-Trends 2026 · Resilient Cyber Newsletter #90 · RSA Conference Innovation Sandbox 2026 · Venture in Security (Ross Haleliuk) · The Cybersecurity Pulse (Darwin Salazar) · Carnegie Mellon CyLab · Sequoia Capital · Andreessen Horowitz · Fast Company Most Innovative Companies 2026