    AI Is Helping Security Teams Move from Detection to Action

By InfoForTech | May 8, 2026 | 7 Mins Read


    Most security teams have more data than they know what to do with. Alerts, dashboards, telemetry feeds—all of it pointing at things that need attention. The problem isn’t that they can’t see the risks. It’s that seeing them and actually fixing them are two completely different things.

    Known vulnerabilities sit unresolved for months. Orphaned accounts linger in identity systems. Cloud resources get spun up and forgotten. Certificates expire on assets nobody remembers owning. Security teams largely know about all of it. They just can’t move fast enough to do much about it.

    I had a chance to talk with Yair Grindlinger, co-founder and CEO of Surf AI, about why that gap exists and what it takes to close it. He made a point that stuck with me: “20 years ago, you had to deal with a narrow set of assets. Today, you have multiple clouds and folders and buckets and 1,000 different SaaS applications. It’s like the universe is expanding. What we used to do 20 years ago doesn’t work at all now.”

    And yet a lot of enterprise security programs are still built like it’s 20 years ago—or at least, built around tools that treat fixing problems as a side effect of finding them.

    The Operational Problem Nobody Talks About

    When you look at where security programs actually get stuck, it’s usually not detection. It’s everything that happens after detection. Who owns this asset? What breaks if I change it? Who has to approve this? Which team does this ticket go to?

    Those questions sound simple. In a large enterprise, they’re anything but. Unclear ownership, cross-system dependencies, legacy infrastructure that nobody fully understands anymore—all of that creates friction that slows remediation to a crawl. Known problems pile up because resolving them requires coordination that organizations just aren’t set up to do at scale.

    AI is making the underlying exposure worse. More identities, more permissions, more non-human accounts running automated processes—and more ways for attackers to find the gaps that haven’t been cleaned up. The riskiest exposures are often the quiet ones: dormant accounts, over-privileged service credentials, misconfigured cloud settings. They rarely trigger a high-priority alert. They just sit there.

    Large enterprises can have tens of thousands of tokens and service identities spread across systems. Managing that manually—tracking down ownership, validating whether accounts are still active, coordinating remediation across teams—isn’t realistic. The exposure exists not because anyone is negligent, but because the scale of the problem outpaced what human processes can handle.

    What Actually Has to Change

    The piece that’s missing in most environments is context—not more data about what’s wrong, but the connective tissue that tells you who’s responsible, what depends on what, and what happens if you touch something.

    Right now, a security tool will tell you an asset has a problem. It won’t tell you who actually owns that asset, whether it’s still in use, what the downstream impact of changing it might be, or who needs to sign off before anything happens. You have to go figure all of that out manually. By the time you do, you’ve already burned time that most teams don’t have.

    Building that context layer requires pulling from a lot of sources at once—identity systems, cloud environments, HR data, ticketing systems, and communication channels. And it has to stay current, because ownership changes, people leave, and resources move around. A snapshot of an environment at a single point in time isn’t enough. You need a continuous, evolving picture.
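As a rough illustration of that aggregation step, here is a minimal Python sketch that merges per-asset records from several source feeds into one namespaced view. The source names, record fields, and `build_context` function are invented for illustration, not any vendor's actual API.

```python
from collections import defaultdict

# Hedged sketch of a context layer. Real platforms sync these sources
# continuously; here each call simply rebuilds the merged view from
# whatever the hypothetical connectors last reported.

def build_context(sources: dict[str, list[dict]]) -> dict[str, dict]:
    """Merge records from many systems into one view per asset ID."""
    context: dict[str, dict] = defaultdict(dict)
    for source_name, records in sources.items():
        for rec in records:
            asset = rec["asset_id"]
            # Namespace fields by source so conflicting data stays visible
            # instead of being silently overwritten.
            context[asset][source_name] = {k: v for k, v in rec.items()
                                           if k != "asset_id"}
    return dict(context)

snapshot = build_context({
    "cloud":   [{"asset_id": "vm-42", "region": "us-east-1", "in_use": True}],
    "tickets": [{"asset_id": "vm-42", "open_issues": 2}],
    "hr":      [{"asset_id": "vm-42", "owner_status": "active"}],
})
```

Keeping each source's fields under its own key is a deliberate choice here: when two systems disagree about an asset, the disagreement itself is useful context.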

    Account ownership is a good example of how hard this gets. The last person who touched an asset isn’t necessarily the owner. The most frequent person isn’t necessarily the owner, either. You have to cross-reference HR records, look at ticket history, and factor in whether someone is on leave or has changed roles. It’s a lot of signal to synthesize—and it’s exactly the kind of work that doesn’t scale with human analysts alone.
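To make that kind of signal synthesis concrete, here is a hedged sketch of scoring candidate owners. The `OwnerSignals` fields, weights, and caps are assumptions made up for illustration, not a real ownership model.

```python
from dataclasses import dataclass

# Hypothetical signals for one candidate owner of an asset.
@dataclass
class OwnerSignals:
    last_touched: bool      # most recent person to modify the asset
    touch_count: int        # how often they appear in change history
    ticket_mentions: int    # times they show up in related tickets
    still_employed: bool    # from HR records
    on_leave: bool          # from HR records

def ownership_score(s: OwnerSignals) -> float:
    """Combine weak signals into a single ranking score.

    No single signal decides ownership; a departed employee is
    ruled out regardless of their activity history.
    """
    if not s.still_employed:
        return 0.0
    score = 0.0
    score += 2.0 if s.last_touched else 0.0
    score += min(s.touch_count, 10) * 0.5    # cap so noise can't dominate
    score += min(s.ticket_mentions, 5) * 1.0
    if s.on_leave:
        score *= 0.5    # still a candidate, but deprioritized
    return score

candidates = {
    "alice": OwnerSignals(True, 12, 0, True, False),
    "bob":   OwnerSignals(False, 3, 4, True, True),
    "carol": OwnerSignals(False, 40, 2, False, False),  # left the company
}
best = max(candidates, key=lambda n: ownership_score(candidates[n]))
```

Note how "carol" scores zero despite being the heaviest user of the asset: cross-referencing HR records overrides raw activity, which is exactly the kind of judgment a last-touched heuristic gets wrong.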

    AI Agents for Execution, Not Just Detection

    There’s been a lot of focus on using AI for threat detection. Less attention has gone to the remediation side—the actual work of closing vulnerabilities, disabling accounts, enforcing policies, and keeping the environment clean on an ongoing basis.

    The model that makes sense here is specialized agents, each with a narrow job. One collects information about an asset. Another updates the CMDB. Another contacts the account owner to confirm whether something should be removed. Another escalates to a manager if needed. Each one has a defined set of actions it can take and no more. Consistency comes from keeping each agent’s scope small and well-defined rather than building one agent that tries to do everything.
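The narrow-scope pattern above can be sketched as an action allowlist per agent. The `ScopedAgent` class and the action names are hypothetical, not a real agent framework.

```python
# Illustrative sketch of the narrow-scope agent pattern: each agent is
# granted a small, explicit set of actions and refuses everything else.

class ScopedAgent:
    """An agent that can only perform actions it was explicitly granted."""

    def __init__(self, name: str, allowed_actions: set[str]):
        self.name = name
        self.allowed_actions = allowed_actions
        self.log: list[tuple[str, str]] = []   # (action, target) audit trail

    def act(self, action: str, target: str) -> bool:
        if action not in self.allowed_actions:
            # Out-of-scope requests are refused and logged, never improvised.
            self.log.append((f"REFUSED:{action}", target))
            return False
        self.log.append((action, target))
        return True

# Each agent gets one narrow job and nothing more.
cmdb_agent = ScopedAgent("cmdb-updater", {"update_record"})
notify_agent = ScopedAgent("owner-contact", {"send_message"})

cmdb_agent.act("update_record", "asset-123")      # in scope: allowed
cmdb_agent.act("disable_account", "svc-token-9")  # out of scope: refused
```

The point of the sketch is the shape, not the mechanics: consistency falls out of the fact that an agent physically cannot take an action outside its grant, and every decision, including refusals, lands in the trail.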

    The audit question comes up immediately with any kind of automated remediation. If you’re running thousands of actions, who’s checking them? The practical answer is: you don’t review everything, but you audit everything. The full log is there. You can sample, spot-check and intervene when something looks off. But requiring a human to review every automated action defeats the purpose of automation in the first place.
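The audit-everything, sample-some approach might look like this in miniature; the record fields and the 5% review rate are assumptions for illustration.

```python
import random

# Hedged sketch: every automated action is logged, but only a random
# fraction is routed to humans for spot-checking.

def sample_for_review(audit_log: list[dict], rate: float,
                      seed: int = 0) -> list[dict]:
    """Pick a random subset of audit records for human review."""
    rng = random.Random(seed)   # seeded so a review run is reproducible
    return [rec for rec in audit_log if rng.random() < rate]

log = [{"id": i, "action": "disable_account", "agent": "identity-hygiene"}
       for i in range(1000)]
review_queue = sample_for_review(log, rate=0.05)
# Everything stays in `log`; only a small sample lands in the review queue.
```

The full log remains intact for forensics and compliance; sampling only decides where scarce human attention goes first.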

    That’s a mindset shift as much as a technical one. Grindlinger put it plainly: “You want to audit everything, and you want to sample and get involved if necessary, but you can’t follow every action. So how do you maintain consistency?” The answer is tight guardrails on what each agent can do, combined with full transparency into what it did.

    Vendors Are Starting to Address This Differently

Surf AI, for example, is built specifically around the gap between understanding risk and acting on it. Rather than merely surfacing problems and generating tickets, the platform focuses on closing the loop: it builds a context graph that links assets, identities, ownership, and dependencies across identity, cloud, security, and business systems, then uses specialized AI agents to coordinate and execute remediation workflows, with human approvals and full audit logging built in by default.

Early deployments have focused on identity hygiene: disabling dormant accounts, resolving duplicate identities, and enforcing access policies at enterprise scale. The company, which just emerged from stealth with a $57 million funding round led by Accel (joined by existing investors Cyberstarts and Boldstart Ventures), says clients have recovered excess SaaS license spend, cleared thousands of orphaned accounts, and automated identity enforcement workflows that previously required manual coordination across multiple teams. Cushman & Wakefield and VetCor are among the early adopters already running the platform in production.

    Surf AI is not alone in recognizing this gap. The broader shift happening across the security industry is away from tools that help analysts manage work and toward platforms that do the work—with humans setting policy, reviewing exceptions, and handling escalations rather than processing every remediation step manually.

    The Question Worth Asking

    Organizations have lived with months-long remediation cycles on known exposures because it was simply too expensive to do it differently. AI changes that cost equation. What wasn’t practical to automate a couple of years ago is practical now.

    The security programs that figure out how to close the loop between finding problems and fixing them—continuously, at scale—are going to look very different from the ones still relying on analysts to manually chase down tickets. The direction is clear. The question is how long it takes to get there.

    I have a passion for technology and gadgets and a desire to help others understand how technology can affect or improve their lives. I also love spending time with my wife, 7 kids, 3 dogs, 5 cats, a pot-bellied pig, and sulcata tortoise, and I like to think I enjoy reading and golf even though I never find time for either. You can contact me directly at tony@xpective.net. For more from me, you can follow me on Threads, Facebook, Instagram and LinkedIn.

Tony Bradley
