    The 12 Cybersecurity Platforms at the Age of AI (Part 2)

By InfoForTech · January 24, 2026 · 10 Mins Read


    Hello Cyber Builders 🖖

    AI is showing up in every cybersecurity platform.
    That’s not news anymore. What matters now is how AI is actually being used and whether there are real patterns worth your attention. As Cyber Builders, whether you’re creating new products or using cybersecurity platforms, you need to understand these patterns to build better tools or choose more effective solutions.

    AI is shifting the balance from users learning complex systems to systems that understand and respond to users in plain language.

    This week, we’re exploring a few more categories (CTEM, User Awareness, GRC) where vendors are integrating AI into their products and services. I’ll highlight what’s different, what’s copy-paste marketing, and what might actually change the way you work.

    Next week, I’ll wrap it all up into a bigger picture view. We’ll explore key questions, such as: How are AI-driven interfaces reshaping efficiency and user experience in cybersecurity? What challenges and solutions are emerging in aligning AI applications with human thinking? Right now, it’s messy, noisy, and more than a little confusing, but clarity is on the horizon.

    When you look across industries, most products use generative AI the same way: to make the interface smoother. I’ve already covered that angle in depth.

    A new UX with AI: LLMs are a Frontend Technology


    Detection Engineering UX: 5 Key Principles to Simplify, Personalize, and Amplify AI & Non-AI Alerts


    Cybersecurity is no different. These products have always been challenging to set up, tune, and master. Rules and queries are powerful, but they weren’t designed for humans. They were designed for efficiency, and they speak a language most people never want to learn. Anyone who has attempted to create a KQL query or a Snort signature is familiar with this issue.

    That’s where AI is quietly changing the game. It serves as the middleman between users and tools. Instead of forcing you to adapt to a cryptic query language, the software starts adapting to you. You describe what you want in plain language, and the AI translates it into something the system understands.
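This translation layer can be sketched in a few lines. A real product would call an LLM at this step; the toy version below uses a couple of hand-written intent templates purely to make the "plain language in, query language out" pattern concrete. The template text and function names are illustrative, not any vendor's API.

```python
# Toy sketch of the "AI as middleman" idea: a plain-language request is
# turned into a KQL-style query. Real products use an LLM here; this
# stand-in matches a few hand-written intents instead.

KQL_TEMPLATES = {
    "failed logins": 'SigninLogs | where ResultType != "0"',
    "admin activity": 'AuditLogs | where InitiatedBy has "admin"',
}

def to_kql(request: str, hours: int = 24) -> str:
    """Translate a plain-language request into a KQL query string."""
    for intent, base_query in KQL_TEMPLATES.items():
        if intent in request.lower():
            return f"{base_query} | where TimeGenerated > ago({hours}h)"
    raise ValueError(f"No template matches request: {request!r}")

print(to_kql("show me failed logins from the last day"))
# SigninLogs | where ResultType != "0" | where TimeGenerated > ago(24h)
```

The point is the interface contract, not the matching logic: the user never touches KQL, and the system owns the translation.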

Another common UX-driven use case is generating personalized summaries from the large datasets that IT and cybersecurity platforms produce: a report that answers the questions users actually raise.

    But the real innovation would be to see these synthesis tools get smarter. Instead of producing generic summaries, AI-powered platforms should tailor reports to the specific needs, risk profiles, and preferences of each team or user, integrating their own data or alerts.

    You should be able to prioritize the most urgent threats in your context and even turn findings into actionable recommendations—almost automatically. We’re not there yet, but progress is clear. Soon, AI-powered tools will help connect technical data with business needs, providing each team with insights they can actually use. This shift is crucial. It will increase productivity and transform how you utilize these products.
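A minimal sketch of what "prioritize in your context" means in practice: the same finding ranks differently depending on the business criticality of the asset it sits on. The field names and weights below are illustrative assumptions, not any vendor's schema.

```python
# Context-aware prioritization sketch: severity alone is not enough;
# business criticality of the affected asset reorders the queue.

def prioritize(findings, criticality):
    """Rank findings by severity weighted by asset business criticality."""
    def score(f):
        return f["severity"] * criticality.get(f["asset"], 1)
    return sorted(findings, key=score, reverse=True)

findings = [
    {"asset": "test-vm", "issue": "RCE", "severity": 9},
    {"asset": "billing-db", "issue": "weak TLS", "severity": 5},
]
criticality = {"billing-db": 5, "test-vm": 1}  # business context

for f in prioritize(findings, criticality):
    print(f["asset"], f["issue"])
# billing-db weak TLS  (5 * 5 = 25 outranks 9 * 1 = 9)
```

The interesting product question is where `criticality` comes from; connecting that business context to technical findings is exactly the gap the article describes.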

    And it’s one of the key threads I’ll be pulling on in next week’s wrap-up: is AI finally helping security tools work the way humans think, instead of the other way around?

    As AI/LLM tools pervade corporate environments, new risks emerge, including prompt injection, shadow AI/LLM usage, model theft, and the leakage of sensitive data. It is striking to see that AI Security (how you assess your exposure to AI threats and detect them) is a set of features that is spreading across multiple categories. Last week, we saw that Cloudflare integrated it like an “AI Firewall” and Wiz added it to its Cloud Security platform as an “AI-SPM (Security Posture Management)”.

In the CTEM category, vendors are responding by deeply embedding AI into exposure management. Below are concrete examples from leading vendors, illustrating the real features, their functionality, and the outcomes they produce. Once again, the same vendors both add AI Security features and use AI to enhance their core capabilities.

In GRC platforms, you see AI Security governance features, while in the User Awareness category, platforms are adding new content to teach end-users about AI Security risks.

While I see the value in all these new features, the picture they create is confusing. Yes, misused AI technologies are a new “threat” within risk matrices. But if AI Security stays ubiquitous yet ill-defined, MSSPs, CISOs, and the wider cybersecurity community will struggle to navigate it. We’ll see how this plays out in the coming years.

Continuous Threat Exposure Management (CTEM) is an ongoing approach to identifying, assessing, and managing an organization’s vulnerabilities and potential attack surfaces. The CTEM approach includes continuous monitoring for exposures, enabling organizations to proactively address risks as they emerge.

    🔐 Includes: Automated vulnerability scanning, attack surface management, penetration testing, and exposure triage.

Tenable has gone beyond traditional vulnerability scanning with its ExposureAI and AI Exposure features. (Note to the Tenable Product Marketing team: isn’t it a bit confusing?)

    AI Exposure

    • Discovery of shadow AI. AI Exposure inventories AI apps, libraries, and plugins across environments—surfacing risky usage that would otherwise not appear on a CVE list.

    • Governance and policy enforcement. Beyond discovery, AI Exposure introduces policy controls, enabling enterprises to monitor and restrict the use of Copilot or ChatGPT Enterprise.
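The discovery-plus-policy pattern behind these two bullets can be reduced to a very small sketch: inventory what AI apps are actually in use, then flag whatever falls outside the approved policy. The app names and the allowlist below are made up for illustration; this is the shape of the check, not Tenable's implementation.

```python
# Shadow-AI flagging sketch: discovered AI apps are compared against an
# approved-usage policy; anything outside it is surfaced for review.

APPROVED_AI_APPS = {"ChatGPT Enterprise", "Copilot"}

def flag_shadow_ai(discovered: list[str]) -> list[str]:
    """Return discovered AI apps that fall outside the approved policy."""
    return sorted(app for app in discovered if app not in APPROVED_AI_APPS)

inventory = ["Copilot", "ChatGPT Enterprise", "UnvettedSummarizerBot"]
print(flag_shadow_ai(inventory))  # ['UnvettedSummarizerBot']
```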

    ExposureAI

    • Summarized attack paths and guided fixes. ExposureAI generates plain-language summaries of attack paths and recommends mitigations, so analysts don’t have to manually parse graphs before filing remediation tickets.

    • Unified prioritization. ExposureAI integrates context from EDR, cloud, OT, and ITSM connectors, surfacing toxic combinations and ranking issues by business impact.

    User value: faster triage, clear fixes, and visibility into AI risk that was previously invisible.

    Qualys has extended its TruRisk platform with TotalAI.

    • AI and LLM workload discovery. TotalAI inventories models, GPUs, cloud services, and shadow AI workloads.

    • Scanning for AI/LLM risks. It identifies prompt injection, data leakage, jailbreaks, and model theft, mapped against the OWASP Top 10 for LLMs.

    • Risk scoring and compliance. Findings are integrated into TruRisk, so AI risks are ranked alongside other exposures. Compliance reports are generated for GDPR, PCI, and other relevant regulations.

    • Scenario-based coverage. TotalAI already tests against dozens of attack scenarios, from multilingual exploits to bias amplification.
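The "ranked alongside other exposures" idea is worth making concrete: AI findings get a score on the same scale as classic vulnerabilities so one queue can hold both. The category weights and field names below are illustrative placeholders, not Qualys' TruRisk scoring.

```python
# Sketch of folding AI/LLM findings into one exposure ranking, so an
# analyst works a single prioritized queue. Weights are illustrative.

OWASP_LLM_WEIGHTS = {
    "LLM01:prompt-injection": 9,
    "LLM06:sensitive-info-disclosure": 8,
}

def unified_rank(classic_findings, ai_findings):
    """Merge classic CVE findings and AI findings into one ranked list."""
    merged = [(f["id"], f["score"]) for f in classic_findings]
    merged += [(f["id"], OWASP_LLM_WEIGHTS.get(f["category"], 5))
               for f in ai_findings]
    return sorted(merged, key=lambda x: x[1], reverse=True)

classic = [{"id": "CVE-2026-0001", "score": 7}]
ai = [{"id": "chatbot-prod", "category": "LLM01:prompt-injection"}]
print(unified_rank(classic, ai))
# [('chatbot-prod', 9), ('CVE-2026-0001', 7)]
```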

    User value: treating AI as part of the exposure landscape, not an afterthought—so AI assets are inventoried, tested, scored, and governed with the same rigor as everything else.

Rapid7 is embedding AI directly into its detection and exposure stack with Incident Command and AI Alert Triage. They also appear to be using the AI integration to facilitate a category shift: from vulnerability scanner to CTEM platform, and now to a broader positioning that spans CTEM, SIEM, MDR, and more.

    • Unified context for investigations. Incident Command combines exposure visibility with threat detection, allowing analysts to view alerts and the asset’s risk posture within the same workflow.

    • Agentic AI workflows. These workflows triage, investigate, and propose response steps, trained on Rapid7’s own SOC data.

    • Alert triage at scale. The AI Alert Triage classifies alerts automatically with a claimed 99.93% accuracy in identifying benign cases, thereby reducing the number of false positives.

    • Report automation. AI drafts investigation summaries, thereby reducing the time analysts spend on documentation.
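The triage step above reduces to a classification decision per alert. Rapid7 trains models on its own SOC data; the stand-in below uses two hand-picked heuristic signals purely to show the workflow shape (auto-close low-risk alerts, escalate the rest). All names and thresholds are assumptions for illustration.

```python
# Toy alert-triage sketch: score alerts with a few benign signals and
# auto-close the lowest-risk ones; everything else goes to an analyst.

BENIGN_SIGNALS = {"known_admin_host", "scheduled_task", "patch_window"}

def triage(alert: dict) -> str:
    """Classify an alert as 'auto-closed' or 'needs-analyst'."""
    benign_hits = len(BENIGN_SIGNALS & set(alert["signals"]))
    if benign_hits >= 2 and alert["severity"] <= 3:
        return "auto-closed"
    return "needs-analyst"

print(triage({"severity": 2, "signals": ["scheduled_task", "patch_window"]}))
# auto-closed
print(triage({"severity": 8, "signals": ["new_binary"]}))
# needs-analyst
```

The hard part a real product must get right is the cost asymmetry: a wrongly auto-closed true positive is far more expensive than a wrongly escalated benign alert, which is why vendors lead with accuracy claims on the benign class.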

    User value: reduced alert fatigue, consistent investigations, and faster hand-offs to IT teams for closure.

    Phishing and social engineering are now often generated by machines. Awareness programs had to evolve from annual slide decks to continuous, data-driven training that mirrors live attacks and adapts to each person. Below are concrete examples of how KnowBe4 and Hoxhunt are applying AI—what the feature does, where it resides in the product, and the value it provides to a security team.

    KnowBe4’s AIDA adds four agents on top of its training stack: an Automated Training Agent that assigns modules based on user risk, a Template Generation Agent that creates phishing simulations aligned to current attack patterns, a Knowledge Refresher Agent that pushes short spaced-repetition quizzes, and a Policy Quiz Agent that turns your policies into checks for understanding. Datasheet (PDF) · Admin guide.
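The "Knowledge Refresher" idea is classic spaced repetition: each correct answer stretches the interval until the next quiz, and a miss resets it. The sketch below is a generic Leitner-style rule under assumed parameters (doubling, 60-day cap), not KnowBe4's actual scheduling algorithm.

```python
# Spaced-repetition scheduling sketch: correct answers double the next
# quiz interval (capped), a miss resets the learner to daily review.

def next_interval_days(current: int, answered_correctly: bool) -> int:
    """Return the number of days until the next refresher quiz."""
    return min(current * 2, 60) if answered_correctly else 1

interval = 1
for correct in [True, True, True, False, True]:
    interval = next_interval_days(interval, correct)
    print(interval)  # prints 2, 4, 8, 1, 2
```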

    User value: less manual campaign design, training tied to observed behavior, and policy comprehension you can audit rather than assume.

    Hoxhunt’s GenAI Content Generation converts your security policies or playbooks into publishable lessons (cards + quiz) in minutes, then lets you AI-translate them into ~40 languages for global rollouts.

    For simulations, Hoxhunt explains how it utilizes real phishing reports across its network to keep templates current and how scenarios adapt to user behavior, including factors such as difficulty (easier or harder next time) and role/location context. The article also outlines the multi-language generation path and how training content aligns with company policy inputs. See how Hoxhunt uses GenAI in training.

    User value: policy updates become training within the same week; localized content is shipped promptly without waiting for translators; simulations closely mirror live attacker tactics.

    GRC used to be about paperwork: policies in Word, controls in spreadsheets, and evidence in emails. AI is changing the loop: drafting policies from obligations, mapping regulatory changes to controls, and transforming evidence into control assertions that can be queried. Below are concrete, vendor-specific examples from ServiceNow and OneTrust—what the feature does, where it lives, and the value to your team.

    ServiceNow’s recent Zurich-family updates add several “do-the-work” capabilities inside Integrated Risk Management (IRM):

    • Common control objective creation. Now Assist for IRM identifies similar control objectives across your library and generates a consolidated “common control objective,” reducing duplication before audits and attestations.

    • Regulatory change → control mapping. When a new or changed requirement is introduced, Now Assist proposes mappings to your internal controls, allowing you to run gap checks without manually matching clauses to procedures.

    • Risk event summarization. For incidents and losses recorded in IRM, Now Assist summarizes lifecycle details (root cause, actions, recoveries) so you can speed risk write-ups and RCA reviews.

    • AI governance in-platform. AI Control Tower adds AI inventory and governance capabilities, aligning with ISO/IEC 42001 and the EU AI Act—useful if you want AI risk managed alongside other obligations in ServiceNow.

    • Automated item generation. Outside of GenAI, IRM can auto-generate risks and controls from policies/standards (“item generation”) to keep your register consistent with the policy library.
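The regulatory-change-to-control mapping above can be sketched in a much-simplified form: propose the internal control whose text best matches a new requirement. Production systems use embeddings or an LLM for this; word-level Jaccard overlap keeps the idea visible in a few lines. The control IDs and texts are illustrative, not ServiceNow data.

```python
# Sketch of "regulatory change -> control mapping": suggest the existing
# control most similar to a newly introduced requirement.

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def propose_mapping(requirement: str, controls: dict) -> str:
    """Return the id of the control most similar to the requirement."""
    return max(controls, key=lambda cid: jaccard(requirement, controls[cid]))

controls = {
    "AC-2": "review user accounts and disable inactive accounts quarterly",
    "CP-9": "back up information system data and test restores",
}
req = "organizations must disable inactive user accounts within 90 days"
print(propose_mapping(req, controls))  # AC-2
```

As the article notes, the AI only proposes the mapping; your team still decides whether to accept it.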

    User value: less time reconciling frameworks and more time validating what changed (and where). AI handles the tedious mapping and summarization; your team decides what to accept and what to fix.

    OneTrust is bringing AI/ML risk into mainstream GRC operations:

    • Model & agent registry with data-plane context. Build an auditable inventory of AI systems and sync metadata from platforms like Databricks Unity Catalog to keep models/agents and datasets visible to compliance.

    • Framework-based risk assessments. Assess AI use against NIST AI RMF, OECD, and other frameworks using out-of-the-box templates, which help standardize intake and impact scoring.

    • AI intake & workflow embedding. Intake forms auto-score risk and can be embedded in tools like Jira, allowing projects to be registered and routed for review before build or buy decisions.

    • Third-party AI risk. Extend questionnaires with AI-specific due diligence topics to evaluate vendor models and hosted services.

    • EU AI Act readiness. Solution pages and resources map obligations and roles (provider/deployer) into operational tasks—helpful to separate “what’s in scope” from “who must do what.”

    User value: Your AI program becomes traceable (identifying what models exist, who owns them, and what data they utilize) and defensible (assessed against a known framework, with third-party risk and regulatory mapping built in).

    In summary, the integration of artificial intelligence across cybersecurity platforms is accelerating. We’ve seen impact in domains such as Continuous Threat Exposure Management (CTEM), User Awareness Training, and Governance, Risk, and Compliance (GRC).

AI is also a new threat, and AI Security has become a feature of many existing platforms, addressing both real issues and market hype. I believe many cybersecurity professionals find it confusing: they expect a more “holistic” approach and are still looking for strategies and guidelines, while the market offers them new features instead.

    Anyway, AI has already shaped cybersecurity products, and we are shifting from complex systems to user-friendly platforms with more automated detection. I call this Intelligent Security, as I’ve discussed in previous articles.

    I am continuing this review across platforms and will do a wrap-up in the following publication. In the meantime, subscribe if you don’t want to miss it.

    Laurent 💚
