    The philosophical puzzle of rational artificial intelligence | MIT News

    By InfoForTech | January 31, 2026



    To what extent can an artificial system be rational?

    A new MIT course, 6.S044/24.S00 (AI and Rationality), doesn’t seek to answer this question. Instead, it challenges students to explore this and other philosophical problems through the lens of AI research. For the next generation of scholars, concepts of rationality and agency could prove integral to AI decision-making, especially as that decision-making is shaped by how humans understand their own cognitive limits and their constrained, subjective views of what is or isn’t rational.

    This inquiry is rooted in a deep relationship between computer science and philosophy, which have long collaborated in formalizing what it is to form rational beliefs, learn from experience, and make rational decisions in pursuit of one’s goals.
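    The course itself doesn’t prescribe a single formalism, but to make this kind of formal story concrete, here is a minimal, purely illustrative sketch in Python (not drawn from the course materials): an agent that revises its beliefs with Bayes’ rule and then chooses the action with the highest expected utility. The scenario, probabilities, and utilities below are invented placeholders.

    # Illustrative sketch only: one textbook formalization of rational belief
    # (Bayesian updating) and rational choice (expected-utility maximization).
    # All numbers are invented placeholders, not course material.

    def bayes_update(prior, likelihood, evidence):
        """Return posterior P(hypothesis | evidence) for each hypothesis."""
        unnormalized = {h: prior[h] * likelihood[h][evidence] for h in prior}
        total = sum(unnormalized.values())
        return {h: p / total for h, p in unnormalized.items()}

    def best_action(posterior, utility):
        """Pick the action that maximizes expected utility under the posterior."""
        def expected_utility(action):
            return sum(posterior[h] * utility[action][h] for h in posterior)
        return max(utility, key=expected_utility)

    # Beliefs about the world before and after seeing some evidence.
    prior = {"rain": 0.3, "no_rain": 0.7}
    likelihood = {
        "rain": {"dark_clouds": 0.9, "clear_sky": 0.1},
        "no_rain": {"dark_clouds": 0.2, "clear_sky": 0.8},
    }
    posterior = bayes_update(prior, likelihood, "dark_clouds")

    # Utility of each action in each possible state of the world.
    utility = {
        "take_umbrella": {"rain": 5, "no_rain": 2},
        "leave_umbrella": {"rain": -10, "no_rain": 4},
    }
    print(posterior)                        # updated beliefs after seeing dark clouds
    print(best_action(posterior, utility))  # the expected-utility-maximizing choice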

    “You’d imagine computer science and philosophy are pretty far apart, but they’ve always intersected. The technical parts of philosophy really overlap with AI, especially early AI,” says course instructor Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT, calling to mind Alan Turing, who was both a computer scientist and a philosopher. Kaelbling herself holds an undergraduate degree in philosophy from Stanford University, noting that computer science wasn’t available as a major at the time.

    Brian Hedden, a professor in the Department of Linguistics and Philosophy who holds an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science (EECS) and co-teaches the class with Kaelbling, notes that the two disciplines are more aligned than people might imagine, adding that the “differences are in emphasis and perspective.”

    Tools for further theoretical thinking

    Offered for the first time in fall 2025, AI and Rationality was created by Kaelbling and Hedden as part of the Common Ground for Computing Education, a cross-cutting initiative of the MIT Schwarzman College of Computing that brings multiple departments together to develop and teach new courses and launch new programs that blend computing with other disciplines.

    With over two dozen students registered, AI and Rationality is one of two Common Ground classes with a foundation in philosophy, the other being 6.C40/24.C40 (Ethics of Computing).

    While Ethics of Computing explores concerns about the societal impacts of rapidly advancing technology, AI and Rationality examines the disputed definition of rationality by considering several components: the nature of rational agency, the concept of a fully autonomous and intelligent agent, and the ascription of beliefs and desires onto these systems.

    Because AI is extremely broad in its implementation and each use case raises different issues, Kaelbling and Hedden brainstormed topics that could provide fruitful discussion and engagement between the two perspectives of computer science and philosophy.

    “It’s important when I work with students studying machine learning or robotics that they step back a bit and examine the assumptions they’re making,” Kaelbling says. “Thinking about things from a philosophical perspective helps people back up and understand better how to situate their work in actual context.”

    Both instructors stress that this isn’t a course that provides concrete answers to questions on what it means to engineer a rational agent.

    Hedden says, “I see the course as building their foundations. We’re not giving them a body of doctrine to learn and memorize and then apply. We’re equipping them with tools to think about things in a critical way as they go out into their chosen careers, whether they’re in research or industry or government.”

    The rapid progress of AI also presents a new set of challenges in academia. Predicting what students may need to know five years from now is something Kaelbling sees as an impossible task. “What we need to do is give them the tools at a higher level — the habits of mind, the ways of thinking — that will help them approach the stuff that we really can’t anticipate right now,” she says.

    Blending disciplines and questioning assumptions

    So far, the class has drawn students from a wide range of disciplines — from those firmly grounded in computing to others interested in exploring how AI intersects with their own fields of study.

    Throughout the semester’s readings and discussions, students grappled with different definitions of rationality and how those definitions pushed back against assumptions in their own fields.

    On what surprised her about the course, Amanda Paredes Rioboo, a senior in EECS, says, “We’re kind of taught that math and logic are this golden standard or truth. This class showed us a variety of examples that humans act inconsistently with these mathematical and logical frameworks. We opened up this whole can of worms as to whether, is it humans that are irrational? Is it the machine learning systems that we designed that are irrational? Is it math and logic itself?”

    Junior Okoroafor, a PhD student in the Department of Brain and Cognitive Sciences, appreciated the class’s challenges and the ways in which the definition of a rational agent could change depending on the discipline. “Representing what each field means by rationality in a formal framework makes it clear exactly which assumptions are shared, and which are different, across fields.”

    The co-teaching, collaborative structure of the course, as with all Common Ground endeavors, gave students and instructors opportunities to hear different perspectives in real time.

    For Paredes Rioboo, this is her third Common Ground course. She says, “I really like the interdisciplinary aspect. They’ve always felt like a nice mix of theoretical and applied from the fact that they need to cut across fields.”

    According to Okoroafor, Kaelbling and Hedden demonstrated an obvious synergy between their fields; he says it felt as if they were engaging and learning along with the class. Seeing how computer science and philosophy can inform each other helped him appreciate their common ground and the valuable perspectives each brings to intersecting issues.

    He adds, "Philosophy also has a way of surprising you."
