Of Accountability and Methodological Silos

AN INTERVIEW WITH JUDITH SCHOSSBÖCK

Trust is an important concept in the digital age, yet most people don’t grasp what it means to create trustworthy systems. What’s your take on this conundrum? 

I’d even go so far as to say that trust in technological systems has become one of the most valuable currencies in informational capitalism. Digital trust is a systemic issue. Most of us are worried about too much surveillance and believe tech companies hold too much power. At the same time, people routinely trust things they can’t see, like an algorithm or a company that constantly collects their data. Most of the time, the issue is convenience, but it also shows that people are willing to place confidence in something they don’t really know. I find this situation interesting, because it could mean that trustworthy systems don’t have to be as transparent as possible; instead, they should focus on making people feel comfortable in their “relationship” with the system. People feel comfortable when they sense that the system is “one of them,” on their side, that their values and norms align. So, you could say a form of relational trust always exists, even when we interact with an algorithm. A concept such as algorithmic accountability seeks to promote social trust in systems, and I’m glad it’s gaining momentum.

 

Could you give some examples of algorithmic accountability (AA)? 

The main idea behind AA is that an algorithm isn’t an inhuman or objective construct. It carries biases and opinions; they are just embedded in mathematics. Existing injustices or inequalities can be replicated, particularly when we can only observe the input and output of a system but not its inner workings. And if the data already contains bias, that ↗ BIAS is likely going to show up in the outcome. Some good examples can be found in the book Automating Inequality by Virginia Eubanks, which criticizes automated decision-making in America’s public services and welfare system and shows how it discriminates against the poor and the vulnerable. ↗ AA means, broadly speaking, taking responsibility for the results of algorithms and their impact on society.
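As a minimal sketch of this “bias in, bias out” dynamic, assuming a Python environment with NumPy and scikit-learn, the following toy example (the scenario, feature names, and numbers are entirely invented) trains a model on historically skewed approval decisions; the model never sees the group attribute directly, yet the gap reappears in its predictions through a correlated proxy feature.

# A hypothetical sketch of "bias in, bias out": a model trained on
# historically skewed decisions reproduces that skew, even though the
# protected attribute is never given to it directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: a group attribute and a correlated proxy feature
# (think "neighbourhood"), plus the quality the decision should be about.
group = rng.integers(0, 2, n)             # 0 = majority, 1 = minority
proxy = group + rng.normal(0, 0.5, n)     # correlates with group membership
skill = rng.normal(0, 1, n)               # what we actually care about

# Historical labels: past decisions favoured the majority group.
past_approval = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

# Train only on "neutral" features; group itself is excluded.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, past_approval)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# The approval gap from the historical data shows up again in the model's
# output, because the proxy feature lets the old bias back in.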


The workings of algorithmic systems are often presented in a bad light, such as when Uber’s self-driving car ran a red light in San Francisco, or when Google’s photo app labeled images of black people as gorillas. Is that just sensationalist media, or is there more to it?

Media should certainly report the bugs and fuck-ups of big tech companies. The firms can fix the problem and apologize. In the best case, such reporting puts a finger on effects that society doesn’t want. Maybe we need to be more careful with tagging faces in general, or it might tell us that labeling works better for some groups than for others. However, an algorithm or app isn’t inherently racist, and the media often oversimplify. The output usually reflects wider landscapes of meaning in society, and sometimes that meaning gets lost in the coverage.

When Google shows us black people for the search term “unprofessional hair,” apart from the fact that Trump might take the lead there 😃, it might also include images from blogs that criticize racist attitudes. What is crucial with these issues is that we need to improve functionality for all subgroups of society.

 

Can you see a trend in how social media platforms are approaching the topic of moderating their content? Will such tasks move over to algorithms? 

Well, choosing the content people see in their feeds is already a form of moderation. On social media, nearly all content is selected by algorithms, based on the bulk of data stored about us. As for moderation in the narrower sense, a hidden workforce often does it, particularly in the industrial context. Sarah T. Roberts calls this ↗ CCM, commercial content moderation, in Behind the Screen. Usually, we don’t know much about the people undertaking this important, low-wage, low-status work, and they’re often poorly trained. Within CCM, one trend is to use moderators’ decisions as training datasets for machine learning instead of dealing only with live content, with the intention of replacing humans for some tasks. CCM isn’t the only way content moderation is done today. Some areas are too complex for machine learning and need a more “artisanal approach” (a term by Robyn Caplan of the Data & Society Research Institute). Take Vimeo, for instance, which allows nudity for artistic purposes but no pornography. This sort of distinction is still much too complex for AI; it can be difficult even for humans!
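As a minimal sketch of how moderators’ decisions can become a training set, assuming Python with scikit-learn and using invented posts and labels, the following example trains a small text classifier on past removal decisions and then pre-screens new content with it.

# A hypothetical sketch: human moderation decisions become labelled training
# data, and a classifier learns to pre-screen similar content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Past moderator decisions: the post text and the action that was taken.
posts = [
    "buy cheap followers now, click this link",
    "great photo, love the colours",
    "limited offer!!! win a free phone, click here",
    "thanks for sharing, this was really helpful",
    "send money to this account for guaranteed returns",
    "does anyone know when the next meetup is?",
]
removed = [1, 0, 1, 0, 1, 0]  # 1 = removed by a moderator, 0 = kept

# Train a simple text classifier on those historical decisions.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, removed)

# New content is scored before (or instead of) a human ever seeing it.
queue = ["click here for a free prize", "lovely write-up, thank you"]
for post, flag in zip(queue, model.predict(queue)):
    print(f"{'REMOVE' if flag else 'KEEP'}\t{post}")

In practice the training sets are vastly larger, and the edge cases, like the artistic-nudity distinction mentioned above, are exactly where such classifiers break down.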

I believe technologies such as deepfakes will render video evidence in courts useless. How do you think this problem will affect legal procedures? Or am I too pessimistic? 

My guess is that it will transform how evidence is created. But usually, the more sophisticated a technology gets, the more developed the attempts to counteract or crack it become. For deepfakes, this means we’ll invent methods to identify them better, like forensic algorithms or authentication agencies that evaluate the strength of a source. And without being a legal expert, I’d point out that deepfakes are just one recent example among many kinds of evidence that can be faked. If we treated everything that can be faked as useless, what would be left? However, I do imagine a much more complicated procedure. For instance, we might by default have to prove that something is not fake. Social issues arise from this shift: it can create an atmosphere in which we find it hard to trust anything, and it probably already gives more influence to those who claim that something is fake.

 

Isn’t the concept of fake challenging? What’s true, and what’s not? The scientific method that hardcore skeptics worship doesn’t seem to apply to many problems we face today. Or does it?

One problem with the scientific landscape is that it often operates in highly specialized contexts. However, if we manage to go beyond disciplinary and methodological silos, we might get closer to the core of today’s problems. For instance, on the issue of autonomous vehicles, philosophers have been working together with engineers to write algorithms based on ethical theories. One of the biggest challenges with the construction of authorized truth is that people tend to doubt expertise when it is produced in isolation. We know from journalism studies that declaring “truth” on the basis of autonomous expertise can come across as irrelevant today. So, anything seems more believable when paired with a collectively formed opinion. And you find skeptics on both sides of the truth coin: those who worship and those who reject the scientific method, from data geeks to ↗ 5G skeptics.


Barbrook and Cameron published a pamphlet called The Californian Ideology back in 1995. They postulated that hippiedom and hyper-capitalism created the mental landscape for Silicon Valley. Does that idea still hold up?

Something similar is arguably still at work today. Maybe advanced technology and networking promote the structures of neoliberalism after all. In today’s late-stage capitalism, we’re promised freedom, mostly through technological innovations, while we’re really contributing to another kind of ideology. That tendency follows us everywhere. One can observe similar things in the critique of certain communities. Take Burning Man, for instance. While it’s a radical counter-culture, it’s also an inspiration for tech corporations, and people often debate what could be done to avoid reinforcing the status quo. So, luckily, some awareness of this issue exists in modern hippiedom. 😃

 

What’s your favorite story of a complete fuck-up in technology, startup culture, or e-governance? It’s time for some schadenfreude. 

You can’t beat Uber, can you? But on a more personal note, I giggle almost every day about Alexa, the voice-assistant AI my flatmate uses, which rarely works. 😃 I don’t know whether it’s schadenfreude I feel then, but it gives me a good laugh. Alexa isn’t allowed to swear, no matter how often you scream “fuck you” at it, but if you scream a whole sentence including “fucking,” the system seems to work better than with a simple “stop!” In the area of e-governance, a recent fuck-up was the mobile app developed for reporting the results of the 2020 Iowa caucuses. It was supposed to speed up the process, but it failed spectacularly and led to a massive delay. This case is so interesting because the security details were kept secret out of fear that hackers could exploit the system, yet the situation shows that security through obscurity can create more problems. And maybe we don’t need an app for everything in the end. 😃

 

Do you think that the infodemic (to quote the WHO) during the 2019-nCoV situation is a litmus test for our digital ecosystem? 

It’s another challenging outbreak. On the one hand, we’ve seen it many times before: the battle over conflicting narratives coupled with the dismissal of expertise. On the other hand, what’s new in this situation is the sheer speed and uncertainty of pandemic information and its direct influence on everyone’s lives. Because we are dealing with predictions and contradictions, we have an ideal breeding ground for misinformation. One would think that, in times of crisis, people would want accurate scientific data, but they also love conspiracies, because crisis makes all of us vulnerable. Didn’t we somehow love the idea of nature making a full recovery in those fake news stories about swans and dolphins returning to the clear waters of Venice during the lockdown? Such stories can give comfort when people are sick of negative news.

Another thing happening with this infodemic is that tech companies seem willing to take on more responsibility when it comes to content selection, and WHO staff now scan social media and respond to the public. This work is one way of developing immunity against the infodemic. At the same time, several influential pieces that went viral recently did not come from virologists, epidemiologists, or public health specialists, but from data scientists or blog authors, whom public media then cited and disseminated. This process has been criticized as “armchair epidemiology” and as another form of “misinformation virus,” but whatever you think of it, or however you name it, you can’t deny its influence on public opinion.

Share wisely, and watch out for trolls!


JUDITH SCHOSSBÖCK is a PhD candidate in the Department of Media and Communication at City University of Hong Kong, an HKPFS award recipient, and an affiliated researcher at the Centre for E-Governance at Danube University Krems, Austria. She’s managing editor of the open-access e-journal JeDEM (jedem.org) and scientific co-director of paraflows.at, a symposium for digital arts and culture in Vienna. Her research and publications cover participation, activism, e-governance, and social media.

Follow Judith on Twitter: @judyintheskynet

MEMO 01 - JULY 2020
Copyright 2020 TFLC
Ideas for change