
    AI Startups and the Legal Risk of Getting It Wrong Aiiot Talk

By InfoForTech · January 18, 2026 · 5 min read


    Artificial intelligence (AI) is exploding. There were 5,509 AI startups in the US between 2013 and 2023. And according to Statista, they’re receiving a massive amount of funding.

“In 2024, AI startups received more than $0.5 trillion and raised over $100 billion.”

Despite the success of AI, there are still many gray areas and legal risks. Companies and the large language models (LLMs) they produce can and definitely do make errors. One study found that LLMs are incorrect 60% of the time. And it isn’t only consumer LLMs like ChatGPT; the same failures show up in AI-controlled chatbots, finance systems, booking systems, and supply chain control systems. The scope of AI is already more than we could have ever imagined.

The legal risks of an AI getting it wrong feel even more pertinent than those of human error. Read on to find out why.

    How AI Gets It Wrong

AI doesn’t make mistakes the way humans do. When we get it wrong, it’s usually obvious: a miscalculation, an incorrect date, a typo. AI’s errors are trickier. They’re confident, authoritative, and buried in what looks like accuracy. Known as hallucinations, they happen across all AI systems.

    Think about an AI travel booking system that fabricates a non-existent flight. Or, more commonly lately, booking systems that overbook a flight. Or a finance tool that confidently produces a figure off by billions. Or an HR chatbot giving incorrect legal advice to employees.

    Training data is another culprit. Bias, outdated information, or a lack of context means AI models learn flawed lessons. An AI built on skewed data will produce skewed results.

    Then add complexity. Many AI systems are black boxes. Even developers don’t fully understand how outputs are generated.

    The Common Risks of AI

    The risks cover every layer of business operations. Here are some of the most common risks:

    • Data privacy breaches. AI eats data, but feeding it sensitive information without proper controls can break laws such as GDPR or CCPA. Feeding a chatbot sensitive medical records for analysis? That’s a compliance nightmare if a patient finds out.
    • Bias and discrimination. From hiring tools screening candidates unfairly to financial services denying loans based on skewed data, AI can replicate systemic biases. These turn into discrimination claims fast.
    • Intellectual property (IP) issues. If an AI generated content based on copyrighted training data, who owns the result? And who gets sued if the output infringes on someone else’s IP? Courts are still working that one out.
    • Misinformation and defamation. An AI system that outputs false or harmful information about an individual or brand can trigger libel suits.

    • Operational errors. Think of a supply chain AI sending the wrong shipment to the wrong location. Or a trading algorithm executing damaging trades.
    • Regulatory non-compliance. In industries such as finance, healthcare, and insurance, strict regulations exist for a reason.

    High-Profile AI Mistakes

    There have been some high-profile and widely publicized AI mistakes.

In 2023, Air Canada found itself in hot water after its customer service chatbot promised a traveler a discount that didn’t exist, leading them to pay full price on the strength of that promise. The airline tried to argue that the chatbot itself, not the company, was responsible. The court disagreed and ordered Air Canada to honor the discount.

    DoNotPay, a startup branded as the “world’s first robot lawyer”, faced a class action lawsuit for allegedly practicing law without a license. The platform promised to help users with legal claims through AI but lacked attorney oversight. Users argued the service was misleading, and they were correct.

    IBM’s Watson for Oncology once promised to revolutionize cancer treatment recommendations. Instead, it offered “unsafe and incorrect” suggestions, according to internal documents.

    And let’s not forget Microsoft’s infamous Tay chatbot. Within 24 hours of release, Tay went from playful to producing offensive, harmful content thanks to online manipulation.

    The Legal Risks for AI Startups

As it stands, lawsuits in which companies claim their AI system, not they themselves, was at fault never win. You are your AI system. Some of the common legal risks include:

    • Product liability. If your AI causes harm (financial, physical, or reputational), your startup could face product liability claims. Professional errors and omissions insurance might not necessarily cover AI mistakes.
    • Contractual liability. Promising too much in your terms of service, or failing to deliver, opens the door for breach-of-contract claims.
    • Regulatory enforcement. The EU’s AI Act, California’s privacy laws, and FTC guidelines in the US are tightening AI regulations.
    • Employment law. If your AI hiring tool filters out candidates unfairly, it isn’t the algorithm’s fault.
    • IP disputes. Using copyrighted material in training without permission or generating outputs too close to existing works can lead to lawsuits. Getty Images already sued Stability AI for exactly this.

You can’t blame the AI you’re using when it gets something wrong. That type of legal protection doesn’t exist yet. AI startups have to be aware that, almost always, the fault is theirs.
