Hello Cyber Builders!
We're back in our series on the 12 cybersecurity platforms and how they're using AI. Last week, in part 2, we saw a common pattern: most of the flashy AI features are focused on UX. Friendlier dashboards. Smarter explanations. Auto-generated content when you ask a question.
UX improvements are a first step, and I hope cybersecurity vendors won't just surf the AI hype by adding features to their roadmap and rebranding everything as "AI" with generated reports or configurations. There is a considerable risk of an AI Hangover.
This week, as we explore the next group of platform categories, you'll notice the same UX-heavy theme plus some interesting "agentic" use cases, mainly in the SOC space. AI can transform not only the look and simplicity of the interface but also its functionality. The real game-changer is how it can change decisions, speed, and outcomes in cybersecurity.
In this post:

- The crux of my argument: superficial AI features are not enough. True value comes when AI drives real operational transformation, improving security and efficiency.
- I show how AI-driven solutions should shift cybersecurity from reactive, manual approaches to proactive, automated operations.
- I focus on three more cybersecurity platform categories: SOC Enablement Technologies, Application Security, and Identity & Access Management.
Most of the AI "upgrades" we've seen in cybersecurity platforms so far have been UI enhancements.
When the cybersecurity tool generates a report for you, retrieves the correct documentation, or explains an alert in plain language, it saves security professionals time. It cuts hours of digging through manuals and helps newcomers feel supported. That's valuable.
I will argue that it doesnât change the game.
You still need two to five people to operate each platform. With 12 major cybersecurity platforms in the mix, that's easily a 30-person team just to keep the lights on. AI-driven UX tweaks are not yielding enough productivity gains to reduce headcount.
As Cyber Builders, whether you sell or use a cyber product, AI UX improvements don't remove the "who is going to use it" question.
The real breakthrough comes when AI not only smooths the workload but also cuts it. When the platform doesn't just help you check alerts, it checks them for you. When it doesn't just explain the risk, it automatically reduces it.
That's where generative AI can transform cybersecurity:
- Productivity gains: Less manual work, fewer clicks, fewer repetitive tasks.
- Headcount reduction: Fewer people needed per platform without compromising coverage.
- Better security outcomes: Faster detection, smarter prevention, and rules that adapt to the business without an army of analysts babysitting them.
Think about it:
- Data security could stop being a nightmare of data classification if AI learns what's truly valuable in your business.
- Identity governance could become more manageable if AI maps actual access needs and flags potentially toxic rights.
- Platform operations could finally scale without armies of analysts constantly tweaking alerts and rules.
That's the core promise of generative AI: reducing headcount, eliminating busywork, and delivering stronger security. Not just making GUIs friendlier, but redefining how much work people have to do at all.
Identity has become the new perimeter. Attackers no longer need to break firewalls when they can steal or misuse credentials. AI is now reshaping how access is monitored, adjusted, and defended, both as a query builder and as a set of anomaly detection algorithms.
The query-builder use case is about making identity governance less about spreadsheets and more about insight. Analysts and IAM teams can ask questions directly in natural language to identify and uncover risks and misconfigurations.
- SailPoint Atlas AI uses AI-driven insights to recommend least-privilege access and model roles. It identifies over-provisioned accounts and highlights unusual entitlements without requiring manual reviews.
- SailPoint's machine identity security goes further by discovering hidden service accounts and cloud identities, flagging which ones are risky. This is not a trivial issue; many IAM managers I've spoken with recently say that managing machine identities, especially Non-Human Identities (NHIs), is becoming a significant operational challenge. The growth in service accounts and cloud identities, along with new AI agents and API keys, each with distinct access requirements, makes manual tracking increasingly unsustainable. Questions like "How can I effectively scale my NHI management? Which accounts should be retired or have their permissions restricted?" come up frequently. This shift not only improves security posture by reducing the attack surface, but it also saves countless hours previously spent on manual audits and reviews.
Outcome: IAM teams don't need to manually parse through entitlements or role hierarchies. AI surfaces anomalies, patterns, and governance recommendations in plain language.
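To make the least-privilege idea concrete, here is a minimal sketch (not SailPoint's actual logic) of how an engine might flag over-provisioned accounts by comparing granted entitlements against those actually exercised in access logs. The account names, entitlement strings, and threshold are all illustrative:

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    granted: set  # entitlements the account holds
    used: set     # entitlements actually exercised (e.g., from access logs)

def flag_over_provisioned(accounts, max_unused_ratio=0.5):
    """Flag accounts where more than half of granted entitlements are never used."""
    findings = []
    for acct in accounts:
        unused = acct.granted - acct.used
        if acct.granted and len(unused) / len(acct.granted) > max_unused_ratio:
            findings.append((acct.name, sorted(unused)))
    return findings

accounts = [
    Account("svc-backup", {"s3:read", "s3:write", "kms:decrypt", "iam:passrole"}, {"s3:read"}),
    Account("alice", {"repo:read", "repo:write"}, {"repo:read", "repo:write"}),
]
print(flag_over_provisioned(accounts))
```

The interesting part is not the set arithmetic but the data collection behind `used`: the value of AI-driven products lies in building that usage picture continuously rather than during quarterly reviews.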
The agent use case is about real-time decisions. Instead of waiting for periodic reviews, AI detects misuse and takes immediate action. The features we see here are still an early collection: AI, or at least machine learning, has powered anomaly detection algorithms for roughly a decade. Still, automation in this space keeps growing, and some vendors are building genuinely valuable time savers.
- Okta Identity Threat Protection with Okta AI monitors active sessions in real time, detecting anomalous behavior (e.g., session hijacking, device mismatch) and triggering automated responses, such as step-up authentication or forced logout. This is very close to classic machine-learning anomaly detection: time series forecasting, clustering, and outlier detection.
- Okta also integrates with Okta Workflows, allowing teams to automate remediation when suspicious behavior is detected, such as disabling accounts, revoking tokens, or notifying downstream systems. Generative AI could accelerate this by generating the workflow and acting on events.
- PingOne Protect applies AI-driven fraud detection to login flows, using behavioral and device risk scores to decide whether to allow, step up, or block an authentication attempt.
Outcome: Identity systems no longer authenticate once and assume trust. They continuously assess sessions, adapt to the context, and shut down misuse before it escalates.
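A toy illustration of the underlying pattern, assuming a simple z-score outlier check on one login feature and hypothetical risk thresholds. Real products such as Okta Identity Threat Protection or PingOne Protect use far richer behavioral and device models; this only shows the score-then-decide shape:

```python
import statistics

def risk_score(history, value):
    """Z-score of the current value against the account's login history."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history) or 1.0  # avoid division by zero
    return abs(value - mu) / sigma

def decide(score, step_up=2.0, block=4.0):
    """Map a risk score to an adaptive-auth action (illustrative thresholds)."""
    if score >= block:
        return "block"
    if score >= step_up:
        return "step_up"
    return "allow"

# Typical login hours for a user (24h clock); a 03:00 login is anomalous.
hours = [9, 10, 9, 11, 10, 9, 10]
print(decide(risk_score(hours, 10)))  # login at a usual hour
print(decide(risk_score(hours, 3)))   # login at an unusual hour
```

In production the score would blend many signals (device fingerprint, IP reputation, impossible travel), but the decision surface — allow, step up, block — is exactly what these vendors expose.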
This is the epicenter of AI adoption in cybersecurity. The SOC is where scale meets fatigue, and AI is being applied in two distinct ways: as a query builder and as an agent.
The first wave of AI in SIEM/IR is about making data accessible without requiring analysts to memorize query languages. Instead of writing SPL or KQL, analysts can ask questions in natural language.
- Microsoft Security Copilot is embedded into Defender and Sentinel, where it generates KQL queries, summarizes incidents, and suggests investigation paths.
- Splunk AI Assistant supports natural-language query generation for SPL searches, lowering the barrier for junior analysts.
- Elastic AI Assistant for Security enables users to query Elastic Security data in plain language, making correlation and hunting more accessible.
Outcome: Analysts no longer need to memorize complex query syntax. They ask, get structured results, and move faster through investigation steps.
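Under the hood, the query-builder pattern is essentially a prompt template plus a model call. The sketch below is hypothetical: the schema line, prompt wording, and the `call_llm` stub stand in for whatever vendor API actually runs, and the returned KQL is a canned example, not model output:

```python
# Hypothetical prompt a natural-language-to-KQL assistant might assemble.
KQL_PROMPT = """You translate analyst questions into KQL.
Table schema: SigninLogs(TimeGenerated, UserPrincipalName, IPAddress, ResultType)
Return only the KQL query.

Question: {question}
KQL:"""

def build_query_prompt(question: str) -> str:
    return KQL_PROMPT.format(question=question)

def call_llm(prompt: str) -> str:
    # Stub: a real deployment sends the prompt to a chat-completion API.
    return 'SigninLogs | where ResultType != "0" | summarize count() by UserPrincipalName'

prompt = build_query_prompt("Which users had failed sign-ins today?")
print(call_llm(prompt))
```

The engineering work in real products is mostly around this loop: injecting the right schema, validating the generated query before execution, and showing the analyst what will run.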
The second wave is about doing the work: AI agents that donât just generate queries but autonomously triage, investigate, and sometimes remediate. That is where I see more value.
- Microsoft Copilot Agents extend Copilot into task-specific assistants, such as phishing triage agents or vulnerability management agents. These handle defined workflows end-to-end.
- Google is integrating Gemini into SecOps for guided hunts and AI-driven detection workflows.
- Prophet Security built an autonomous SOC platform where AI analysts can autonomously triage alerts, investigate their causes, and propose responses.
- Qevlar AI positions its AI SOC Analyst as a real-time agent, automatically investigating alerts and escalating only what matters.
Outcome: Instead of manually clicking through alert queues, analysts either converse with an assistant or let agents resolve low-value tasks, thereby reserving human attention for more complex calls.
The goal is clear: do more SOC work with fewer human analysts.
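The agent pattern behind these products reduces to a small loop: enrich the alert, decide, then act or escalate. This is a hypothetical sketch, not any vendor's implementation; the `enrich` stub stands in for threat-intel, asset-database, and past-incident lookups:

```python
def triage(alert, enrich, respond):
    """Toy agent loop: enrich the alert, then auto-close, auto-contain, or escalate."""
    ctx = enrich(alert)
    if ctx["verdict"] == "benign":
        return (alert["id"], "auto_close")
    if ctx["verdict"] == "malicious" and ctx["confidence"] >= 0.9:
        respond(alert)  # e.g., isolate host, revoke token
        return (alert["id"], "auto_contain")
    return (alert["id"], "escalate_to_human")

def enrich(alert):
    # Stub verdict: internal RFC 1918 traffic treated as benign for the demo.
    if alert["source_ip"].startswith("10."):
        return {"verdict": "benign", "confidence": 0.95}
    return {"verdict": "malicious", "confidence": 0.95}

contained = []
print(triage({"id": "A-1", "source_ip": "10.0.0.5"}, enrich, contained.append))
print(triage({"id": "A-2", "source_ip": "203.0.113.7"}, enrich, contained.append))
```

Everything interesting lives in `enrich` and in the confidence threshold: set it too low and the agent contains legitimate activity, too high and it escalates everything back to humans.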
Developers hate vulnerability noise. The magic arrives when security tools stop just alerting and start remediating. AI is turning those alerts into actionable code changes, sometimes automatically.
GitHub's Copilot Autofix now does more than suggest a patch: it automatically proposes multi-file fixes for code scanning alerts, explains the vulnerability, and even helps roll out the change. It's available as part of GitHub Advanced Security (GHAS).
GitHub has rolled out Security Campaigns (public preview), which enable teams to batch-apply Copilot Autofix across hundreds or thousands of historical alerts, opening pull requests en masse for supported fixes.
Because of all this, GitHub touts that Copilot Autofix lets developers remediate "3x faster" than before.
However, some are moving faster to help software teams resolve their issues. Over the last summer, I wrote extensively about the startup we built around Application Security.
Glev.ai built what every software engineering team needs but had never been delivered: a way not only to detect issues but also to remediate them and collaborate with developer teams.
AI won't automatically solve everything. Some code is in production, some is for testing. Some containers are publicly exposed, some are not. Developers have their own input sanitization and won't let AI reinvent the wheel or generate poor code.
As we discussed in the previous posts, AI in AppSec must:

- Orchestrate the remediation
- Deduplicate issues from the dozens of scanners
- Provide contextual guidance to software developers, based on previous fixes and company-wide AppSec savoir faire
And more. I am not here to pitch everything we've built. Please contact me to learn more.
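Deduplication, for instance, can start well before any AI reasoning: fingerprint each finding and collapse duplicates. A minimal sketch with an assumed (CWE, file, line) fingerprint; real products use fuzzier matching across commits and renamed files:

```python
def dedupe(findings):
    """Collapse findings from multiple scanners that point at the same weakness.
    Fingerprint: (CWE, file, line) — deliberately simple for the demo."""
    merged = {}
    for f in findings:
        key = (f["cwe"], f["file"], f["line"])
        merged.setdefault(key, {**f, "scanners": set()})["scanners"].add(f["scanner"])
    return list(merged.values())

findings = [
    {"scanner": "semgrep", "cwe": "CWE-89", "file": "db.py", "line": 42},
    {"scanner": "codeql",  "cwe": "CWE-89", "file": "db.py", "line": 42},
    {"scanner": "semgrep", "cwe": "CWE-79", "file": "ui.py", "line": 7},
]
unique = dedupe(findings)
print(len(unique))  # 3 raw findings collapse to 2 remediation items
```

One deduplicated item can then become one remediation ticket with every scanner's evidence attached, instead of three separate alerts landing on a developer.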
The real breakthrough comes when security tools open remediation tickets instead of security alerts.
Developers stay in their flow, security debt shrinks more quickly, and human reviewers are freed to focus on complex logic and design decisions rather than boilerplate patches.
AI's true promise in cybersecurity isn't just about sleeker interfaces or faster queries. It's about fundamentally reshaping how we defend, build, and operate.
As platforms evolve from reactive helpers to proactive agents, the focus must shift from marginal productivity gains to operational transformation. The future belongs to solutions that eliminate busywork, automate decisions, and enable humans to focus on strategy and creativity.
The organizations that embrace this shift, moving past AI as a dashboard feature, will not only outpace threats but also redefine what it means to be secure in a rapidly changing world.

