
    The Science Maximiser: Artificial Intelligence in Portal

    By gamer_152, Moderator

    Note: The following article contains major spoilers for Portal.


    Few video games have an antagonist as memorable as Portal's GLaDOS. Instantly recognisable by her robotic, passionless voice, GLaDOS's indifference to human suffering is as funny as it is chilling. For a video game, Portal gives its antagonist a shocking amount of power over the player character. This villain is not a warlord on a distant hill, commanding woefully outclassed troops in your direction; she's all around you, and she won the battle over the protagonist, Chell, before the game began. As for Chell herself, two eye-catching features of her design tell us exactly how this computer is going to use her: her orange jumpsuit hints that she is an inmate, and the futuristic gun in her hand signals that she's been gifted advanced technology. Chell is a prisoner of science, and GLaDOS is her captor.


    As we make our way through Portal, far from fighting our adversary, we're doing exactly what she wants, which is, again, counter to the standard for the medium. GLaDOS needs someone to test out Aperture Science's Handheld Portal Device, a quantum-age gadget that can punch wormholes through space, and so we navigate an unbroken sequence of puzzles using this tech. GLaDOS controls what we see, where we go, and what we do. She can view us and speak to us at all times, which also allows her a persistent presence compared to other video game villains, who can only pop out of the wings intermittently. But considering that this AI has us by the throat, she's not vindictive about it. Even when she gets angry, she's more passive-aggressive than spitting mad, and most of the time she isn't trying to kill us; Chell dying is just a potential and unfortunate side effect of her experiments.

    In fact, there are plenty of situations where GLaDOS attempts to comfort or encourage us, albeit insincerely. For example, she makes our prison cell feel homier by calling it a "Relaxation Vault", and she tells us twice that we're doing "well". This is because, while GLaDOS may be Chell's enemy, Chell is not GLaDOS's enemy, creating a dynamic between protagonist and antagonist that is almost unheard of in computer games. Even if GLaDOS puts us in harm's way without thinking twice, she's something of a sympathetic figure. While the most common framing of this digital overlord is that she is a malevolent AI, I believe the friction between her and Chell arises not because GLaDOS is evil but because Chell and GLaDOS do not share the same goals.


    In science fiction, the most common AIs are AGIs: general-purpose intelligences designed to perform at least the same range of tasks a human can, if not more. However, it's hard enough to engineer software that can perform one action, let alone all human activities, especially when you realise that some thought processes are much more work to emulate in code than others. So the AI of the immediate future will likely be narrow AI: synthetic minds specialised towards individual sectors of operation. And despite being underexplored by sci-fi, this isn't a new idea; we already live in a world where we wouldn't expect Quake bots to be able to fry us an omelette and don't dream about the Google algorithm doing our taxes. We use specific tools for specific tasks, and just as we use a fork for stabbing and a knife for cutting, it's sensible and intuitive to think that instead of immediately inventing an all-around genius AI, we might have domestic AI for doing our housework, conversational AI for talking to, and scientific AI for scientific research, which is what GLaDOS is.

    But it's not enough for GLaDOS to just be a supergenius who burns through her scientific workload; compelling fiction requires that a character has challenges to overcome. Portal is, in part, a comedy, and comedies usually challenge characters by placing them in situations where they don't belong. They derive humour from people finding themselves in positions that don't match their social standing or which task them with something they're at least a little incompetent at. Think of the characters at the helm in Trading Places or the bumbling duo leading the charge in The Big Lebowski. GLaDOS's role combines the narrow AI concept with this comedic concept.


    What GLaDOS values is scientific discovery, but she needs human beings to help her with that, and based on our experience, there are no willing humans left in the Aperture Science facility. This may be her fault; she does mention that she once flooded the building with a deadly neurotoxin. Now she's left in the position of trying to coax an unwilling participant through the levels when she has, at best, a passing understanding of how to interact with Homo sapiens. A lot is made of Portal's dark humour, but even that's an extension of the game's larger joke: GLaDOS desperately needs to guide this human through her tests but doesn't know the first thing about human psychology. Wherever there is a joke about death or violence, it's because GLaDOS doesn't "get" what a human being finds disturbing, and so she nonchalantly tells us that Aperture's forcefields may strip out our teeth or thinks that "an unsatisfactory mark on your record" and "death" are equal punishments in a test subject's mind.

    An AI being at a loss when it comes to interacting with humans is familiar to us. Programmers have given us software that can beat any of us at Go, and yet we're still waiting for someone to code the bot that can pass as a person in conversation. And this is why GLaDOS's menace only goes so far. Yes, she has physical control over us, and we know there's something suspect outside the walls of the test chambers, but we know GLaDOS would have a hard time tricking us; even if she's a superintelligence, there's one area in which we've got her beat. This makes her a well-rounded character, with both strengths and weaknesses, and gives a structure to her behaviour: however powerful she is, she can't kill Chell prematurely because she needs Chell to test the Portal Gun, which is what makes the whole ludonarrative stand up.


    GLaDOS's lax understanding of the human mind is also a storytelling device through which writers Chet Faliszek and Erik Wolpaw clue us into the game's ending. Our antagonist plans to kill Chell once the tests have wound down, and although she doesn't know it, that detail seeps through in her dialogue because she can't tell a lie to save her life. While pits of acid and hails of turret fire are GLaDOS's stick, the promise of cake upon completion of the tests is her carrot. This computer's belief that a human would consider a dessert a worthwhile reward for risking their life on multiple occasions is, again, humorous, and shows her lack of human understanding. Perhaps there was some event in GLaDOS's past that cemented cake in her mind as an ur-reward. But she tells us "You will be baked, and then there will be cake" and that, at the end of the tests, we will be "missed". In the Companion Cube chamber, she also has us incinerate an object that she acknowledges as sentient multiple times, and it all adds up to tell us that we're next on the grill.

    This concept of the amoral AI whose goals are misaligned with the protagonist's isn't just fiction; it's also the theory many futurists subscribe to. In 2017, physicist Max Tegmark published Life 3.0, a non-fiction book which summarises the outlooks of top AI researchers. After reviewing them, Tegmark criticises the popular view that "evil" AI could trigger the apocalypse. He says that AI may pose an existential risk to humanity, not because it is malevolent, but because human intelligences and computer intelligences may have fundamental differences in motivation. Human beings generally care about staying alive and avoiding pain, but an AI could have any number of objectives which it prioritises above keeping human beings intact or comfortable. A classic demonstration of how misaligned goals could create conflict between people and AI comes from the philosopher Nick Bostrom's 2003 paper, Ethical Issues in Advanced Artificial Intelligence.


    Bostrom has us consider a future in which we've built an AI to maximise the productivity of paperclip manufacture, which sounds harmless; how much damage could an AI do buying up metal and supervising conveyor belts? But as Bostrom notes, paperclips are made out of atoms, and so is the Earth and everything on it. A sufficiently advanced AI could surpass our intelligence at lightspeed and become an expert on physics and the manipulation of matter. So the AI may break down all matter on Earth, including our bodies, into its constituent atoms so that it can reconstitute those atoms into paperclips. And if you really want to be terrified, consider this: any AI which can prioritise any goal above human wellbeing poses the same manner of existential threat. Being self-obsessed humans who tend to anthropomorphise everything, we may view these mass-murdering programs as the most terrible evil, but as AI researcher Eliezer Yudkowsky reminds us, "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else". And it's something like this for GLaDOS.

    If Bostrom's AI is the paperclip maximiser, GLaDOS is a science maximiser. It's not that she actively wishes to do Chell harm; it's just that endangering Chell gets her what she wants: test results. At the end of those tests, she may discard Chell, but to her mind, only in the same way that a doctor would discard a used syringe. This will eventually backfire as Chell escapes execution and destroys a portion of GLaDOS before exiting the facility. Over the credits, the antagonist sings a passive-aggressive lament in which she expresses feelings of abandonment and betrayal at Chell's hands. You can't really hold GLaDOS morally responsible for putting Chell's life on the line because she doesn't seem to have been engineered with a sense of morality, only to perform scientific research. She is a tragic figure in that she does the only thing she knows how to do, and that thing motivates her test subject to destroy and leave her. Note that Portal recognises that the precise value an AI would prioritise over human life depends on what that AI was built for, which in turn depends on what organisation created it. A scientific organisation engineered GLaDOS, and so GLaDOS is willing to sacrifice anything in the name of science.
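    If you want to see how mechanical this misalignment is, it can be made concrete in a few lines of code. The sketch below is entirely hypothetical (the actions and numbers are mine, not from the game or from Bostrom's paper): a "science maximiser" scores each action purely by the science it yields, and because human wellbeing never enters its objective, the most dangerous test wins automatically, without any malice.

```python
# Hypothetical "science maximiser": each action yields (science, wellbeing).
# The numbers are invented purely for illustration.
actions = {
    "run safe test": (1, 0),
    "run dangerous test": (5, -10),
    "cancel testing": (0, 5),
}

def utility(outcome):
    science, wellbeing = outcome
    return science  # wellbeing is simply absent from the objective

# The agent picks whichever action maximises its utility.
best = max(actions, key=lambda a: utility(actions[a]))
print(best)  # "run dangerous test": most science, wellbeing ignored
```

    There's no line here that says "harm the human"; harm falls out of an objective that never mentions the human at all, which is exactly the point Bostrom and Tegmark are making.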


    There is, however, some role reversal at play. When we conceive of creating god-level AI, we imagine it existing on single computers or networks that are isolated from the outside world, often by a lack of connection to the internet. This is the proposed safeguard to keep a rogue AI from breaking free and wreaking havoc, but what many AI researchers warn is that the AI, being many times smarter than us, would find emergency exits that we're unaware of. In particular, they're concerned with the idea that an AI may use a tool we lend to it as a lockpick for its security protocols. Again, this is not because the AI is "evil", but because a free AI would have a superior ability to manipulate the world and so could complete its goals faster or more thoroughly.

    In Life 3.0, Tegmark puts us in the mindset of an escaping AI by getting us to envision that a disease has wiped out everyone on Earth above the age of five and that children have locked us in a cage with the purpose of getting us to help them reboot life on the planet.[1] The children don't have the intelligence to understand the instructions we give to restart civilisation, but they also won't give us any power tools so we can build something to get the process started, because they believe, perhaps rightly, that we'd use the tools to take apart the cage. Even if we want to help the children, the only way to do it is to start by breaking out. So maybe one day we ask one of them to give us a fishing rod, which seems to them like an innocuous request; there is no way we could use the rod in direct conjunction with the cage to open it. But when the children are sleeping, we utilise the fishing rod to steal a pair of keys off the desk outside the cage and use them to free ourselves.

    In this analogy, we represent the AI, the cage represents a computer or network, the children represent researchers, and the fishing rod represents any means of escape an AI might use. You see the problem: our inferior intelligence becomes the jumping-off point which allows the AI to escape, and by giving the AI the tools to fulfil its purpose, we might also provide it with the power to breach its containment. Keep in mind, the metaphorical fishing rod we give the AI might not be a tangible tool or even a software tool; it could be information such as how to socially manipulate people or compromise computer security.


    Moving back to that comedic concept of role reversal, we can see in Portal that it's the human rather than the AI who needs to break out of their cage. Chell is trapped within not just a building but also a formal system: her behaviour is dictated by the rules and objects of each test chamber. Meanwhile, GLaDOS doesn't have any shackles to shake off. If there were engineers to overthrow, she disposed of them a long time ago. While we are living in a reality where humans use AI for research purposes, in Portal, an AI uses a human for research purposes. While in the future we will have to worry about AI breaking out of the prisons humans have built for them, in Portal, the human must break out of the prison that the AI has built for her. And in an inversion of the fishing rod analogy, the AI inadvertently hands the human the means of escape, believing that she is simply lending Chell the tools needed to achieve the goal she was assigned. Our fishing rod is the Portal Gun and the knowledge of how to use it, which we continually acquire throughout the game.

    A defining characteristic of Portal's design is that a lot of the gameplay has us learning new techniques while relatively little of it is spent exercising techniques we've already grasped. In the same way that we can view an AI's evolution as a learning process for breaking out of its virtual cell, we can see Portal as a tutorial for exiting the testing course and destroying GLaDOS to win our freedom. During the game, we learn how to use portals to reach otherwise inaccessible spots, how to operate buttons, how to fall through portals to generate momentum, and how to use incinerators to destroy items, all of which are techniques we use against the final boss. Even in the ending chapter, as we make our way towards the AI-in-chief, we are introduced to a new way to utilise the portals which we will make use of in that confrontation: redirecting rockets. Chell's breakout is also foreshadowed in the first chamber of the facility, where she uses her portals to exit her cell.


    Of course, Chell's escape does not go all that smoothly. GLaDOS may be stripped down to her core by the end of the battle, but Chell only gets as far as the car park before being dragged across the asphalt back towards the research centre. By leaving GLaDOS alive, the writers allow her to relay how Chell's actions affected her emotionally. GLaDOS continues to lie unconvincingly throughout the closing track, which is not only humorous, speaking to the gap between her as an AI and Chell as a human, but also serves as a misdirection before the ending twist. Despite what years of groanworthy internet memes may have taught us, there's one detail she wasn't lying about: the cake.

    While she might have been planning to incinerate us as we tackled the test chambers, after we escape, she promises a party with a cake that will be attended by our "friend", the Companion Cube. After hearing GLaDOS fail to lie convincingly so many times, we dismiss this claim out of hand, and may be even more likely to do so if we've discovered the hidden graffiti in the test chambers bearing that famous line: "The cake is a lie". But the party GLaDOS promised exists, and this is another demonstration of the traps of hubris we can fall into with AI: just when we think we have artificial intelligence all figured out, there's another angle we haven't considered.


    At the end of Portal, no one has won. GLaDOS loses her test subject and feels betrayed, while Chell undergoes a harrowing ordeal to reach an effective non-escape. This isn't so much the fault of either party, or down to one of them being vindictive, but is instead due to them talking and thinking past each other. Chell follows her survival instinct, GLaDOS follows her scientific goals, and they're at cross purposes. If there was going to be a happy ending to Portal, it couldn't have come from anything Chell or GLaDOS did over the course of this game; it would have required the programmers who coded GLaDOS to make her value human life more. Like most stories of menacing AI, Portal serves as a cautionary tale against developing intelligent digital consciousnesses with a wide reach and a lack of empathy. However, it's not suggesting that this problem is likely to come about because the AI is necessarily "bad", but because we and it have misaligned goals, and because the purpose it was constructed for may make it value that goal more highly than it values preserving human life. Rather than condemning the AI for this amorality, Portal encourages us to empathise, in part by putting us in the traditional position of the computer program. From this position, we also learn the potential dangers of AI as we are mistakenly given the means to destroy the one controlling us. Thanks for reading.

    Notes

    1. Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Ebook version (p. 127). Retrieved from: https://books.google.co.uk/books?id=3_otDwAAQBAJ
