    “This isn’t what we signed up for.”

By InfoForTech · February 28, 2026 · 4 Mins Read


    There was a palpable change in Silicon Valley this week.

Over 200 Google and OpenAI employees called on their employers to better define the limits of how AI can be used for military purposes. Explicitly. Loudly. In a push detailed by Axios, workers made it clear they are increasingly uneasy about how the AI tools they're developing are being deployed.

    And honestly? You can see why.

AI no longer just helps compose emails and produce graphics. It is being discussed in relation to war logistics, surveillance, and autonomous weaponry on the battlefield. That's serious. At least one person who participated in the effort wondered aloud whether these corporate checks are sufficient, or merely aspirational prose that can be bent under political pressure.

This feels like déjà vu because we've been here before. In 2018, Googlers revolted against the company's work on Project Maven, a Pentagon project to analyze drone footage. Google responded with its AI principles, which promised the company would not build AI for use in weapons or surveillance. The trouble is, technology moves faster than principles, and things that seemed obviously out of bounds in 2018 may seem less clear-cut today.

OpenAI, too, has publicly accessible usage policies that ban weapons work. On paper, that is reassuring. But employees appear to be seeking answers to a more ambiguous question: what if AI tech is dual use? What if it helps doctors do research but can also be employed in weapons work? Where is the boundary?

Step back a little further and you see the geopolitical context: AI has been designated one of the Department of Defense's top modernization priorities, and there's an entire website for the Chief Digital and Artificial Intelligence Office. The pitch is that AI will enable faster decision-making, minimize loss of life, and deter threats. It's all very "practical".

    But critics, including some within tech companies, are concerned that this is the thin edge of the wedge. AI in defense systems can lead to a lack of accountability. Autonomous systems, even non-lethal ones, are another step towards delegating choices that some believe should always remain in the hands of people.

The international argument, meanwhile, is far from over. The UN has been debating lethal autonomous weapons for years and, as recent reports show, nations are still a long way from agreeing on what should happen next. Some want a ban. Others prefer loose guidelines. And AI models get better every month.

The genuinely human part is that the people speaking out aren't opposed to technology. Many of them are AI enthusiasts. They've seen their systems enable earlier detection of diseases, real-time translation of languages, and easier access to learning. They support the good stuff. That's why this is such a charged situation. It's not a rebellion for its own sake; it's a disagreement over values.

There's a generational element, too. Younger engineers aren't so quick to shrug and say, "If we don't do it, someone else will." The old Silicon Valley standby no longer resonates. Instead, they're asking: if we're going to do it, shouldn't we set the boundaries, too?

Company leaders, of course, have a different perspective. Governments are big customers. Security concerns are a factor. And with an AI race under way (particularly between the U.S. and China), they don't want to be left behind. It's not easy to just walk away. It's strategy, it's money, it's politics, it's all of that.

But the internal pressure reveals something important. AI isn't just algorithms. AI is values. AI is a group of people sitting in front of monitors, starting to understand that what they are building could one day weigh on questions of life and death.

Perhaps that's the crux of the matter. This is a moral argument as much as a policy one. Staff are being very clear: "We want guardrails." Not because they're opposed to progress, but precisely because they see its gravity.

What's next? It's unclear. The companies could tighten their pledges. Governments could develop clearer policies. Or the friction could simply be papered over with PR announcements.

    But one thing is clear: the debate over military AI is not just theoretical anymore. It is personal. And it is taking place in the rooms where the future is being created.
