    The 12 Cybersecurity Platforms at the Age of AI (Part 3)

By InfoForTech · January 24, 2026 · 9 min read


    Hello Cyberbuilders 🖖

    We’re back in our series on the 12 cybersecurity platforms and how they’re using AI. Last week, in part 2, we saw a common pattern: most of the flashy AI features are focused on UX. Friendlier dashboards. Smarter explanations. Auto-generated content when you ask a question.

UX improvements are a first step, but I hope cybersecurity vendors won't just surf the AI hype, padding their roadmaps and rebranding everything as "AI" because it generates reports or configurations. That path carries a considerable risk of an AI hangover.

    This week, as we explore the next group of platform categories, you’ll notice the same UX-heavy theme plus some interesting “agentic” use cases, mainly in the SOC space. AI can transform not only the look and simplicity of the interface but also its functionality. The real game-changer is how it can change decisions, speed, and outcomes in cybersecurity.

    In this post:

    • The crux of my argument: superficial AI features are not enough. True value comes when AI drives real operational transformation, improving security and efficiency.

    • I show how AI-driven solutions should shift cybersecurity from reactive, manual approaches to proactive, automated operations.

    • I focus on three new cybersecurity platforms: SOC Enablement Technologies, Application Security, and Identity & Access Management.

    Most of the AI “upgrades” we’ve seen in cybersecurity platforms so far have been UI enhancements.

When a cybersecurity tool generates a report for you, retrieves the right documentation, or explains an alert in plain language, it cuts training time for security professionals, saves hours of digging through manuals, and helps newcomers feel supported. That's valuable.

    I will argue that it doesn’t change the game.

    You still need two to five people to operate each platform. With 12 major cybersecurity platforms in the mix, that’s easily a 30-person team just to keep the lights on. AI-driven UX tweaks are not yielding enough productivity gains to reduce headcount.

For Cyber Builders, whether you sell or use a cyber product, AI UX improvements don't answer the "who is going to use it" question.

    The real breakthrough comes when AI not only smooths the workload but also cuts it. When the platform doesn’t just help you check alerts, it checks them for you. When it doesn’t just explain the risk, it automatically reduces it.

That's where generative AI could transform cybersecurity:

    • Productivity gains: Less manual work, fewer clicks, fewer repetitive tasks.

    • Headcount reduction: Reduce the number of people needed per platform without compromising coverage.

    • Better security outcomes: Faster detection, smarter prevention, and rules that adapt to the business without an army of analysts babysitting them.

    Think about it:

    • Data security could stop being a nightmare of data classification if AI learns what’s truly valuable in your business.

    • Identity governance could become more manageable if AI maps actual access needs and flags potentially toxic rights.

    • Platform operations could finally scale without the need for armies of analysts constantly tweaking alerts and rules.

    That’s the core promise of generative AI: reducing headcount, eliminating busywork, and delivering stronger security—not just making GUIs friendlier, but redefining how much work people have to do at all.

    Identity has become the new perimeter. Attackers no longer need to break firewalls when they can steal or misuse credentials. AI is now reshaping how access is monitored, adjusted, and defended—both as a query builder and as a set of anomaly detection algorithms.

    The query-builder use case is about making identity governance less about spreadsheets and more about insight. Analysts and IAM teams can ask questions directly in natural language to identify and uncover risks and misconfigurations.

    • SailPoint Atlas AI uses AI-driven insights to recommend least-privilege access and model roles. It identifies over-provisioned accounts and highlights unusual entitlements without requiring manual reviews.

    • SailPoint’s machine identity security goes further by discovering hidden service accounts and cloud identities, flagging which ones are risky. This is not a trivial issue—many IAM managers I’ve spoken with recently have expressed that managing machine identities, especially Non-Human Identities (or NHIs), is becoming a significant operational challenge. The growth in the number of service accounts and cloud identities, along with new AI Agents and API Keys, each with distinct access requirements, renders manual tracking increasingly unsustainable. Questions like “How can I effectively scale my NHI management? Which accounts should be retired or have their permissions restricted?” come up frequently. This shift not only improves security posture by reducing the attack surface, but it also saves countless hours previously spent on manual audits and reviews.

    Outcome: IAM teams don’t need to manually parse through entitlements or role hierarchies. AI surfaces anomalies, patterns, and governance recommendations in plain language.
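To make the idea concrete, here is a minimal sketch of the kind of policy an AI-assisted IAM tool automates: flag non-human identities that have been idle long enough to retire, or are over-provisioned enough to warrant review. The account records, field names, and thresholds below are invented for illustration, not any vendor's schema.

```python
from datetime import datetime

# Hypothetical NHI inventory; field names are illustrative only.
accounts = [
    {"name": "svc-backup",     "last_used": datetime(2025, 12, 15), "entitlements": 3},
    {"name": "svc-legacy-etl", "last_used": datetime(2023, 6, 2),   "entitlements": 12},
    {"name": "svc-reporting",  "last_used": datetime(2026, 1, 2),   "entitlements": 41},
    {"name": "api-key-ci",     "last_used": datetime(2025, 12, 1),  "entitlements": 5},
]

def flag_stale_nhis(accounts, now, max_idle_days=180, max_entitlements=20):
    """Return (name, reason) pairs for accounts that look retirable or over-provisioned."""
    findings = []
    for acct in accounts:
        idle = (now - acct["last_used"]).days
        if idle > max_idle_days:
            findings.append((acct["name"], f"idle for {idle} days: candidate for retirement"))
        elif acct["entitlements"] > max_entitlements:
            findings.append((acct["name"], "over-provisioned: review entitlements"))
    return findings

for name, reason in flag_stale_nhis(accounts, datetime(2026, 1, 24)):
    print(name, "->", reason)
```

Real platforms learn these thresholds from usage patterns instead of hard-coding them, but the shape of the output is the same: a short, reviewable list instead of a full manual audit.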

The agent use case is about real-time decisions. Instead of waiting for periodic reviews, AI detects misuse and takes immediate action. The features we are seeing here are still an early first wave: AI, or at least machine learning, has powered anomaly detection algorithms for roughly a decade. Still, automation in this space keeps growing, and some vendors are building genuinely valuable time savers.

• Okta Identity Threat Protection with Okta AI monitors active sessions in real time, detecting anomalous behavior (e.g., session hijacking, device mismatch) and triggering automated responses, such as step-up authentication or forced logout. This is very close to classic anomaly detection built on machine learning techniques such as time series forecasting, clustering, and outlier detection.

    • Okta also integrates with Okta Workflows, allowing teams to automate remediation when suspicious behavior is detected, such as disabling accounts, revoking tokens, or notifying downstream systems. This could be accelerated using Generative AI, as AI can generate the workflow and act upon events.

    • PingOne Protect applies AI-driven fraud detection to login flows, utilizing behavioral and device risk scores to determine whether to allow, step up, or block an authentication attempt.

    Outcome: Identity systems no longer authenticate once and assume trust. They continuously assess sessions, adapt to the context, and shut down misuse before it escalates.
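A toy version of the underlying idea: the sketch below flags sessions whose request rate is a statistical outlier, using a simple z-score as a stand-in for the richer models (clustering, forecasting) these platforms actually use. The numbers are invented.

```python
import statistics

def zscore_outliers(values, threshold=3.0):
    """Return indices of values more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Requests per minute observed across recent sessions (made-up numbers);
# the last session behaves like a hijacked one.
rates = [12, 9, 11, 10, 13, 8, 11, 240]
suspicious = zscore_outliers(rates, threshold=2.5)
print(suspicious)  # flags the anomalous session
```

The production versions score many features at once (device, geography, timing) and feed the result into a policy engine, but the decision boundary is the same: flag what deviates from the learned baseline.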

(Image: an illustration Substack generated for me from the prompt "platform automation.")

    This is the epicenter of AI adoption in cybersecurity. The SOC is where scale meets fatigue—and AI is being applied in two distinct ways: as a query builder and as an agent.

    The first wave of AI in SIEM/IR is about making data accessible without requiring analysts to memorize query languages. Instead of writing SPL or KQL, analysts can ask questions in natural language.

    • Microsoft Security Copilot is embedded into Defender and Sentinel, where it generates KQL queries, summarizes incidents, and suggests investigation paths.

    • Splunk AI Assistant supports natural-language query generation for SPL searches, lowering the barrier for junior analysts.

    • Elastic AI Assistant for Security enables users to query Elastic Security data in plain language, making correlation and hunting more accessible.

    Outcome: Analysts no longer need to memorize complex query syntax. They ask, get structured results, and move faster through investigation steps.
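To illustrate what "query generation" produces, here is a hedged sketch: a small Python helper that assembles a KQL string over Sentinel's SigninLogs table. A real assistant does the natural-language step with an LLM; the table and column names follow Sentinel's schema, but treat the details as illustrative.

```python
# Toy illustration of the "query builder" idea: turning a structured intent
# (user, time window, failures only) into a KQL string an analyst can run.
def build_signin_query(user: str, hours: int, failed_only: bool = True) -> str:
    filters = [f'UserPrincipalName == "{user}"', f"TimeGenerated > ago({hours}h)"]
    if failed_only:
        filters.append('ResultType != "0"')  # "0" denotes a successful sign-in
    where = " and ".join(filters)
    return f"SigninLogs\n| where {where}\n| summarize count() by IPAddress"

query = build_signin_query("alice@example.com", hours=24)
print(query)
```

The value for junior analysts is exactly this translation: they state the question ("failed logins for alice in the last 24 hours, grouped by IP") and never have to recall the pipe syntax themselves.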

    The second wave is about doing the work: AI agents that don’t just generate queries but autonomously triage, investigate, and sometimes remediate. That is where I see more value.

    • Microsoft Copilot Agents extend Copilot into task-specific assistants, such as phishing triage agents or vulnerability management agents. These handle defined workflows end-to-end.

    • Google is integrating Gemini into SecOps for guided hunts and AI-driven detection workflows.

    • Prophet Security built an autonomous SOC platform, where AI analysts can autonomously triage alerts, investigate their causes, and propose responses.

    • Qevlar AI positions its AI SOC Analyst as a real-time agent, automatically investigating alerts and escalating only what matters.

    Outcome: Instead of manually clicking through alert queues, analysts either converse with an assistant or let agents resolve low-value tasks, thereby reserving human attention for more complex calls.

    The goal is clear: do more SOC work with fewer human analysts.
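The escalate-or-automate split described above can be sketched in a few lines. The alert shape, thresholds, and risk scores here are invented; in a real agent, the score would come from an actual investigation rather than a precomputed field.

```python
# Minimal sketch of agentic triage: auto-close the clearly benign,
# escalate the clearly bad, and let the agent work the ambiguous middle.
def triage(alerts, close_below=0.2, escalate_above=0.8):
    closed, automated, escalated = [], [], []
    for alert in alerts:
        score = alert["risk"]
        if score < close_below:
            closed.append(alert["id"])       # resolved without human attention
        elif score > escalate_above:
            escalated.append(alert["id"])    # human analyst takes the hard call
        else:
            automated.append(alert["id"])    # agent investigates further
    return closed, automated, escalated

alerts = [
    {"id": "A1", "risk": 0.05},  # e.g., known-good admin script
    {"id": "A2", "risk": 0.55},  # ambiguous: needs automated investigation
    {"id": "A3", "risk": 0.95},  # e.g., likely credential theft
]
closed, automated, escalated = triage(alerts)
```

The point of the design is the middle band: every alert the agent resolves there is one a human never has to open, which is where the "fewer analysts per platform" math comes from.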

Developers hate vulnerability noise. The magic arrives when security tools stop just alerting and start remediating. AI is turning those alerts into actionable code changes, sometimes automatically.

    GitHub’s Copilot Autofix now does more than suggest a patch—it automatically proposes multi-file fixes for code scanning alerts, explains the vulnerability, and even helps roll out the change. It’s available as part of GitHub Advanced Security (GHAS).

    GitHub has rolled out Security Campaigns (public preview), which enable teams to batch-apply Copilot Autofix across hundreds or thousands of historical alerts, opening pull requests en masse for supported fixes.

    Because of all this, GitHub touts that Copilot Autofix lets developers remediate “3x faster” than before.

However, some are moving faster to help software teams resolve their issues. I wrote extensively last summer about the startup we built around Application Security.

The Only AI Tool AppSec Needs. Obviously

Application Security – AI Won't Save You

Glev.ai built what every software engineering team needs but has never been given: a way not only to detect issues but also to remediate them and collaborate with developer teams.

    AI won’t automatically solve all aspects. Some code is in production, some is for testing. Some containers are publicly exposed, some are not. Developers have their own input sanitization and won’t let AI reinvent the wheel or generate poor code.

    As we discussed in the previous posts, AI in AppSec must:

    • Orchestrate the remediation

    • Deduplicate issues from the dozens of scanners

    • Provide contextual guidance to software developers, based on previous fixes and company-wide AppSec savoir faire

And more. I am not here to pitch everything we've built; please contact me to learn more.
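As a concrete sketch of the deduplication point above, the snippet below collapses findings from multiple scanners by a (file, CWE, line) fingerprint, so the same flaw reported several times becomes one remediation item. The field names and scanner names are illustrative, not any product's actual data model.

```python
# Cross-scanner deduplication: fingerprint each finding and merge duplicates,
# remembering every scanner that agreed on the same flaw.
def dedupe(findings):
    seen = {}
    for f in findings:
        key = (f["file"], f["cwe"], f["line"])
        # keep the first report; accumulate the agreeing scanners
        seen.setdefault(key, {**f, "scanners": []})["scanners"].append(f["scanner"])
    return list(seen.values())

findings = [
    {"scanner": "sast-a", "file": "auth.py",          "cwe": "CWE-89",   "line": 42},
    {"scanner": "sast-b", "file": "auth.py",          "cwe": "CWE-89",   "line": 42},
    {"scanner": "sca-c",  "file": "requirements.txt", "cwe": "CWE-1104", "line": 7},
]
unique = dedupe(findings)
print(len(unique), "unique issues from", len(findings), "raw findings")
```

In practice, the fingerprint must be fuzzier than an exact line number (code moves between scans), which is exactly where learned matching earns its keep over this literal key.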

    The real breakthrough comes when security tools open remediation tickets instead of security alerts.

    Developers stay in their flow, security debt shrinks more quickly, and human reviewers are freed to focus on complex logic and design decisions rather than boilerplate patches.

    AI’s true promise in cybersecurity isn’t just about sleeker interfaces or faster queries. It’s about fundamentally reshaping how we defend, build, and operate.

    As platforms evolve from reactive helpers to proactive agents, the focus must shift from marginal productivity gains to operational transformation. The future belongs to solutions that eliminate busywork, automate decisions, and enable humans to focus on strategy and creativity.

    The organizations that embrace this shift, moving past AI as a dashboard feature, will not only outpace threats but also redefine what it means to be secure in a rapidly changing world.
