Gaze and Golems

AN INTERVIEW WITH JULIE CARPENTER

A main topic of your research and theorizing has been how AI in its many forms encourages or discourages trust. What might the future of trust look like?

The future of trust toward technology—specifically, forms of AI—is going to be a series of patterns and not one trend, going up and down as trust is gained, lost, and regained between people and technological concepts and tech companies. The act of trusting is a calibrated process, ongoing and dynamic.

     One way to define trust is the sense that you and the other actor(s) are working together toward the same goals. AI isn’t trustworthy in this way because it has no inner motivations, although one could say it has goals for outcomes. Whether consumers trust whoever makes and owns the AI they use will become a bigger issue as we see security breaches and AI collecting more personal data from us throughout our lives, from data gathered when we are in the womb onward.

     Increasingly, the data we give away about ourselves becomes more intimate and, simultaneously, more invisible. We let AI monitor our health through wristbands and apps, and we pay for the convenience of having big business in our homes in the form of voice-enabled assistants that turn on our lights and TV for us while collecting information about how we live our lives. Spaces that were formerly the most intimate—our bodies and our homes—are now sites for capitalist surveillance. Who owns your data, and the products you think you own, is going to become more convoluted, and that will affect trust between people and technology and, really, the makers of technology.

     Capitalism is a model that has always required reliable and enforceable records of who owns what. The idea of a sharing economy might be mature in some aspects of practice, but it’s also continually evolving, and we are adjusting and resisting in many ways. New socioeconomic and cultural paradigms don’t shift norms, ways of thinking, and behaviors overnight. It’s a process. It makes me think of the protagonist of Philip K. Dick’s novel Ubik, who had to pay to use everything in his apartment, including a five-cent charge to open his front door. PKD’s idea breaks ownership down into smaller services or goods we rent one bit at a time, a version of the sharing economy he foresaw.

     Supposedly, a sharing economy or something like it will simplify our lives because, ideally, we commit to owning only what we need. But letting go of our ownership paradigm is going to take giant cultural shifts in thinking by consumers, and that shift will rely on trusting a new system. Not every good or service lends itself to rental or sharing, though, so it’s still a flawed system. Business entities that create and sell or rent to us depend upon their consumers’ brand-related trust. Which corporations do you trust with your personal data? How much of your personal data are you willing to give up for convenience?

     As everything around us becomes increasingly “smart” and connected, as with the Internet of Things, we’re giving away more data to companies in exchange for perceived convenience. Doing so has become a necessity of participating in society. We are forced to adopt AI-enabled technologies to do our jobs or communicate with others. As smart tech and AI learn about you, your habits, and your preferences, you are co-constructing a reality with the AI and the company behind that product or service. We’re going to find ourselves asking a lot more questions about where our data is going, who has access to it, how we can deflect or control data collection, and so on.

 

You have been researching the emergence of robot soldiers. What impact could robot soldiers have on the relationship between human soldiers and robots? 

So many kinds of robots are used in war, and they vary so much in how they are used that this question is hard to answer in a general way. Consider the way soldiers treat the semi-autonomous ground robots they work with every day versus how they interact with drones they operate from great distances. These are very different situations, with different sets of interactions and relationships to the technology. People go to a Terminator image when they think of future military robots. Frankly, that’s fair. More than one military has said it is moving toward humanlike robots for situations where it’s most efficient for a robot to be bipedal or have other humanlike qualities to function well. For example, a submarine is engineered for a human body to move through, so it makes sense for some naval robots to have a humanlike shape that lets them move through a submarine without the submarine needing to be redesigned. Or, theoretically, a robot with two or four legs can move over a wider variety of terrain, natural or human-made, than a tracked or wheeled robot. Neither case means a robot that looks like a person, but it might have design cues, such as legs or arms, that we have associated with people or animals in the past.

     Normalization will occur when we interact with robots like that every day, and those new norms might include an aspect of socialness in how we treat robots. If you had a two-legged robot in your home, the context of your relationship to that robot might change; perhaps you might start to think of the robot less as a tool and more like a pet. Military personnel are human, too. Being isolated from friends and family back home and the stress of service could also factor into how one treats a robot in the military.

     Many factors shape how we regard robots. Context of use is important: the situation in which people interact with a robot or robots. Another factor is control: who or what has decision-making authority between operator and robot. A robot’s role and its morphology, its form and how it moves, can also shape how we interact with it.

     Incrementally, we’re negotiating how much meaning a robot has to us, in part by how we see it in relation to ourselves. As robots are integrated into cultures beyond military scenarios, we’re still determining what robots mean to us. Sometimes we regard robots as tools, and at other times we give them social meaning and treat them as social actors. We can go back and forth between treating robots like machines and treating them as something lifelike. There will be a point of normalization for robots in many people’s lives, and that’s when we’ll acknowledge that robots are becoming social categories for us.

     I don’t mean we’ll treat every robot socially or in a human- or even animallike way. I mean that, as humans, we evaluate things in relation to ourselves. When you apply that to how we see other people, we are excellent at forming social categories and expectations, or even prejudices, based on those categories. For example, we interact with our parents differently than with a stranger on the street. You might trust your doctor but not the sales clerk trying to sell you something. We instinctively create these social orders that guide our human interactions. Similarly, robots and other forms of AI will fall into new, sometimes-social categories. It’s not a spectrum you could illustrate as running from tool to best friend. It will take a long time for us to negotiate the way we categorize robots and normalize the experiences. Eventually, we’ll accept that some robots are meaningful to us and others are replaceable.

     To answer your question, soldiers are human. No matter how they are trained, some robot design, functionality, and situational factors can leverage soldiers’ instincts to socially categorize objects with qualities that imply intelligence and autonomy—objects such as robots. Therefore, militaries need to invest in research into how people work with robots so that work can be structured for emotional and physical safety. The military turns to robots because of their efficiency and effectiveness in some situations. However, as long as people are part of that equation, research must look into how robot use impacts soldiers at a visceral level. Human-robot teams won’t be as effective as they could be without this research.

 

In pop culture, AI is often used as a means to transmit commentary. RoboCop isn’t about AI; it’s about Reaganomics. Ex Machina isn’t about AI but toxic masculinity. Are you aware of popular novels or films that debate the possibilities and problems of AI in a straight-on, non-proxy way? 

I’m going to give you a response about AI and storytelling using gaze and golems as the framework. In any narrative, you have ways of interpreting the information given to you. To use film as one example, you have the screenwriter’s version of a tale and the studio’s version of what it wants to say. The producer, director, and actors all integrate their agendas and meaning into the same story. Then the medium itself conveys all this meaning. In the case of film, people have expectations of what components a cinematic story should have, including the type of metaphorical imagining you have described. A viewer’s experience of a film might be among an audience of strangers in a cinema, watching the movie in one sitting. Or they could be at home on the sofa, possibly distracted or pausing the movie around their schedule. Each is a very different way to experience and absorb a story, and that context impacts interpretation. All these situations can lead a viewer to accept, reject, or elaborate on the makers’ metaphorical message in a movie.

     What you’re asking is a question of intent from the maker’s side. It’s hard to come up with a movie or book that addresses AI and lacks layered meaning. AI is othered in relation to us, and once you have othered a technology as a social actor, its story is one you view through the lens of what I’ve been calling the human gaze. Essentially, the human gaze means viewing the world through our human-centered lens, using ourselves as the model for motivations to create, destroy, and regard artificial entities around us. In this case, we use the human gaze to determine whether AI is an existential threat.

     This idea is based on concepts such as Sartre’s le regard and Mulvey’s concept of the male gaze. It has been argued that Mulvey’s theory of the male gaze assumes a heteronormative stance, and the storylines we see around us are largely told from that stance. Hollywood, which influences cinema globally, still operates from a male gaze most of the time, telling stories that center on heteronormative, cisgender men. These movies are intended for an audience that accepts men as the “normal” protagonists in science fiction, with some exceptions. This cycle repeats and supports itself, as any systemic issue does, because the system is still dominated by men, and they’re the decision-makers who keep turning to these narratives. Similarly, who chooses what stories we tell in books? The decision-makers in publishing are, again, often people who choose to tell male-centered stories, especially in science fiction.

     How does this play out with the human gaze? Humans shape these stories about AI, which is an opaque concept for many people. AI then serves as a storytelling vehicle wherein all these unknowns can play out with unpredictable, unanticipated consequences. At their core, many of these stories about humans and AI are about control and the power dynamics between a person or people and AI. Because so many people don’t understand AI’s true capabilities and limitations, it’s a blank slate for use as a metaphor.

     Another way to look at these stories is as cautionary tales about when humans try to create life or to play at what gods do in mythology. I’m giving this a Western spin, though, because there are cultural exceptions. Japan has a very different history of storytelling based on a cultural framework rooted in Buddhism and Shintoism. Western storytelling often hinges on biblical concepts. Adam creating Eve and losing control of her as she gains self-determination is an influential example of the “human creates life, loses control, all Hell breaks loose” narrative. We see it play out in “mad scientist creates life, loses control, all Hell breaks loose” stories. 

     In many ways, golems in Jewish folklore are good examples of similar themes. Golems, in Jewish mysticism, are humanlike entities made from clay and mud and then given the breath of life (ruah) by their creator. Ruah is sometimes translated as soul, but that isn’t accurate: it is not just a life force but a transformative force. Ruah is also referred to as the breath of God. In any case, the gift or punishment of ruah is passed from the creator to the mud-and-clay form of the golem. The creator anticipates they will be able to control the golem, this artificial, humanlike life.

     A golem is anthropomorphic but appears slightly unformed. It’s supposed to be humanlike but remains distinguishable from humans. Golem is a Hebrew word meaning unfinished. This distinction from human beings is purposeful, because the golem is a vehicle for its creator to examine what it means to distinguish humans from nonhuman others, Self from Other. Furthermore, it’s an opportunity for the creator to think deeply about what makes us human at a fundamental moral level. In this way, we see a strong parallel to many modern narratives about AI systems, such as robots. They can be designed to resemble humans, but because of their imperfect understanding of human culture and context, they are unfinished from our point of view, our human gaze. Another parallel: understanding culture, situational context, and human emotion are things modern AI doesn’t yet do well, much like the unformed golem.

     Kabbalists believed creating a golem was a feat that proved their magical expertise. Golems became a popular part of Jewish folklore, where they often play the role of defender or savior because of their brutish strength and power. Golems are sympathetic characters, viewed as protectors of the Jewish people. However, they are described as literal-minded and not very bright, so instructions given to a golem can have unanticipated outcomes. Golem narratives can also illustrate the abuse of human power, especially when golems do labor for the creator’s gain or comfort. Another illustrative point of golem stories is about respecting self-determination and gaining a considerate understanding of what life is, what it means to be responsible for other living things. The golem is a didactic tool for exploring themes such as autonomy, self-determination, and power dynamics. In other words, golems warn against the narcissistic aspects of creating artificial humanlike life. They are a warning against treating people you have othered in inhumane ways.

     We see these themes play out repeatedly in modern popular storytelling about AI. Western storytelling clings to the dynamic between a mad scientist and their robot creations. Like most tales of golem creators, a mad-scientist character is usually a man who, sometimes with good intentions, creates an artificial system and loses control of it. The artificial system’s increasing autonomy from its maker is portrayed as dangerous and undesirable. The Terminator, RoboCop, HAL 9000, Chappie, and Ash and David in the Alien films are examples of this trope and the other themes you identified.

     Female humanlike robots are often portrayed with another layer of danger if they become autonomous. Their storyline is often based on a creator, usually a male scientist, developing a womanlike robot for companionship or sexual pleasure. The more autonomous the gynoid becomes, the more dangerous it is. This theme has played out since antiquity: Ovid’s Pygmalion, False Maria in Metropolis, the Stepford wives, Westworld, and Blade Runner.

     In a nutshell, AI is used as a storytelling device and metaphorical proxy within a tradition of storytelling that goes back to antiquity across the globe. Almost every culture has didactic stories about people creating artificial life.

     TL;DR: Electric Dreams is a 1984 movie about a love triangle between a jealous home computer and a human couple. You could read many things into it, but the story isn’t very nuanced. It’s an exploration of how technology that was novel in 1984 might impact our social relationships, but primarily, it’s a comedy-romance. 


How can we find a way to convey important new topics without hyping them into oblivion? It seems that media darling Elon Musk is quick to pose radical ideas as doable, but I always have the feeling he’s jinxing it.

Elon Musk is many things to many people. One thing that appeals to his fans is that he has an exuberance for emerging technologies and ideas, and he communicates it well. Exuberance is wonderful, but we’re talking about technologies that have enormous impacts on people’s lives. For example, let’s say we had fully autonomous cars. Who is making these cars? It will likely be companies that look like Tesla, selling to early adopters with lots of money to spend. Maybe another phase is a company such as Uber banking on shared-ride services in an autonomous fleet. But a world of autonomous cars means revising city planning and road infrastructure, eliminating some jobs and creating new ones, rethinking how we deal with vehicle safety, and integrating less autonomous cars into a world designed around fully autonomous ones. More data will be collected about you, where and when you travel, and the world around you as you travel. Will the outcome be the optimistic vision of less private car ownership and a shared-ride economy, or will it be a spectacular dystopian failure?

     People such as Elon like to talk about that future without a deep dive into the cultural shifts that need to happen to make any version of that future safe and well thought out. If a business entity wants you to invest in ideas that require so much conceptual change, they’ll create hype to drive that change. 

Real robots go viral on YouTube, appear on talk shows, or are awarded citizenship as if they were people. It’s the maker’s ethical responsibility to talk about a robot’s limitations while showing off its capabilities, but we don’t see that side presented to the public very often. It is a rare roboticist who shows off an invention that took years of research and cost a lot of money while saying, “Yes, the way this robot moves and talks seems amazing, and it’s very cool how far we’ve come on the technology side. But this robot is not autonomous. I’m controlling it now.” Or who says that someone else is controlling the robot or feeding it dialog, that the robot might look human, but it’s not intelligent in the way people might expect it to be.

     I talk to the media about robots and AI to communicate about science, as do many of my colleagues. I have nothing to sell, and I don’t intend to profit financially from doing interviews. I’m trying to clarify where we are now and what we should expect from technology. But even sincere efforts at science communication can be misquoted, or parts of interviews can be edited out for valid reasons. I also know I have no power over how a journalist crafts the overall narrative for publication or how an editor writes the headline for the story, which can be clickbait. It isn’t in the power of scientists or businesses to avoid being part of the hype machine. I participate in it to a degree. It feels inescapable, as if the choice is to hype or not be heard.

     Hype has an existential-threat aspect, which is valid. However, it’s only one factor in the hype sphere, not a complete answer to the question of how hype is generated and dispersed in a cultural sense. Those engaging in science communication—academics, makers, industry people, science journalists, anyone talking about AI from a place of expertise—must be aware of the messages we send to people. Will robots change the job market? Will autonomous AI-enabled weapons be used in war? We are having urgent conversations about these issues, but we also need laws and policies regulating AI use. We need to talk about how to educate people about living in a world integrated with AI. And we need to elevate the voices of people who work on ethics and policy-making, because these discussions are happening everywhere from classrooms to the UN.

     Media coverage about policy-making or developing ethical frameworks doesn’t fall into the hype category. Such talk doesn’t garner as much attention, so people worry because it seems like big questions aren’t being addressed. If I went by popular headlines, I’d think sex robots could be in every home soon, autonomous cars and weapons might be imminent and without legislation, and a robot might take my job tomorrow. It’s understandable that people have those takeaways when they lack information.

     Unfortunately, individuals are left to try to keep up with technological advancements on their own, which is impossible when AI has so many different possible uses. People don’t have the energy or time to think critically about a headline that says, “Will Sex Robots Put Sex Workers Out of Business?” How could any person sort through all the information critically, especially when it comes from a credible source but uses a sensational headline? I don’t have a simple answer for how to do that. I do hold makers and businesses responsible for the messages they send, and that includes myself. As makers, we have a great deal of responsibility for what we put into the world.

 

A massive overabundance of information—some accurate and some not—has accompanied the 2019-nCoV outbreak and response. Will AI be able to help with sorting such information?

AI can help in many ways during the pandemic. Throughout the tech world, there are urgent calls for proposals and hackathon announcements for research and development related to resolving COVID-19 challenges. But it’s important to temper the capabilities of AI with the reality of its limitations. For example, AI might be able to help sort information and news about the virus as likely to be accurate or not. However, to be accurate, AI analysis must go beyond scanning online posts for word choice to evaluating the source for a history of veracity. The AI also has to understand the context.

     What criteria can AI use for decision-making regarding source veracity? Will criteria be established for particular journalists, influencers, and public figures? Will only traditional news organizations be scrutinized at the source, or will this concept be applied to other figures of great public influence, such as celebrities, politicians, and religious leaders? What about social media domains? Should it be applied to posts on social media? If something is flagged by AI as potentially inaccurate, are there consequences for the person or organization sharing misinformation? Will AI decision-making include scrutinizing the network or organization behind the information to determine the likelihood of veracity? If so, by what criteria?

     Beyond fact-checking or word-scraping, trust and trustworthiness are dynamic states. For example, you can trust a person or system one moment, lose that trust, and have your trust repaired—or not. Who decides the parameters of the AI’s decision-making about trustworthiness, a concept that is human-centered and constantly calibrated?

     It’s possible the AI can do part of this process—such as fact-checking—and then human decision-making can come in, determining factors such as trustworthiness, reliability, or intent. A human can decide if an agenda of disinformation exists or if inaccuracies are because of mistakes or ignorance. 
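     As a minimal sketch of that division of labor, assuming a simple scoring checker and a review queue (all names, scores, and thresholds below are hypothetical illustrations, not a description of any real system):

```python
# Hypothetical sketch of the human-AI collaboration described above:
# an automated checker handles the mechanical part (scoring a claim and
# collecting evidence), and a person decides questions of trustworthiness
# and intent. All names, scores, and thresholds are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Post:
    source: str  # who published the claim
    text: str    # the claim itself


@dataclass
class MachineCheck:
    post: Post
    accuracy_score: float  # 0.0 (likely false) to 1.0 (likely accurate)
    evidence: list = field(default_factory=list)  # notes the checker gathered


def automated_fact_check(post: Post) -> MachineCheck:
    """Stand-in for an automated checker. A real system would consult
    fact-checking databases, the source's track record, and context."""
    score = 0.2 if "miracle cure" in post.text.lower() else 0.8
    return MachineCheck(post, score, evidence=["(evidence links would go here)"])


def triage(posts: list, review_threshold: float = 0.5) -> list:
    """Route low-confidence results to a human reviewer, who decides
    trustworthiness and intent (honest mistake vs. deliberate disinformation)."""
    needs_human_review = []
    for post in posts:
        check = automated_fact_check(post)
        if check.accuracy_score < review_threshold:
            needs_human_review.append(check)  # a person judges context and intent
    return needs_human_review


if __name__ == "__main__":
    flagged = triage([
        Post("unknown-blog.example", "Miracle cure stops the virus overnight"),
        Post("health-agency.example", "Wash your hands and avoid close contact"),
    ])
    for check in flagged:
        print(f"Flag for human review: {check.post.source} "
              f"(machine score {check.accuracy_score:.1f})")
```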

     AI might be very useful for identifying the early onset of a disease, modeling predictions of its spread under different scenarios, and tackling similar problems. Time is critical, and everyone wants to use whatever tools we have to meet these challenges. But we have to proceed carefully so we don’t rely on potentially false findings. This human-AI collaboration shows that AI can be a great foundation for exploring a problem, but we often need people to analyze the cultural context of AI findings.

 

What are you currently working on? Any pet projects? 

I’m working on fleshing out a theoretical framework of the human gaze toward technology. I’ve been playing with the idea for a long time, and I anticipate publishing something about it soon. I’m not sure what final form this work will take. I’d like to continue to do peer-reviewed writing, but I’ve always wanted to expand into writing and other ways of communicating ideas that are more accessible to an audience outside a handful of academic disciplines. 

     A book I contributed to, Living with Robots: Emerging Issues on the Psychological and Social Implications of Robotics, was published recently. My chapter is “Kill switch: The evolution of road rage in an increasingly AI car culture.” In it, I explore our relationship to cars not just as drivers or passengers but culturally and emotionally. A lot of predictions exist about the proposed functionality and socioeconomic impact of semiautonomous and autonomous cars entering our lives. Less work has been done on the emotional and behavioral impact on people as our relationship with driving and transportation changes so significantly with increased vehicle automation. I wanted to look at how road rage will morph.

     Regardless of the technologies we incorporate into transportation, we’re still humans. Therefore, we’re emotionally and behaviorally messy. Just because we aren’t a vehicle’s driver doesn’t mean we’ll be less frustrated or angered by unpredictable road situations. We’ll still get mad at something while in a car driving itself, but what might that look like? And how can we prevent or mitigate it? What triggers us negatively while in transit might change, and how we react will change. What are the new stress triggers we might expect as passengers in a self-driving vehicle? How will our anger play out differently, and what are the emotional and physical dangers for us and those around us? Can a smart vehicle anticipate our anger or read road situations to react in a way that soothes us? Should it? These are all questions I explore in the “Kill Switch” chapter. 

 


DR. JULIE CARPENTER is an American research scientist whose work focuses on human behavior with emerging technologies, especially within vulnerable and marginalized populations. She is best known for her work on human emotional attachment to robots and other forms of artificial intelligence. Typically using ethnographic methods of inquiry, she situates human experiences within their larger cultural contexts and social systems. In doing so, she offers a framework for describing phenomena and for explaining how people’s expectations, behaviors, and ideas change over time as they work with technology. Carpenter is currently a research fellow in the Ethics + Emerging Sciences Group at California Polytechnic State University.

 

Follow Julie on Twitter: @jgcarpenter

MEMO 01 - JULY 2020
Copyright 2020 TFLC
Ideas for change