YESTERYEAR HAS FALLEN

THE ONLY WAY YOU CAN SURVIVE, LITERALLY NOT DIE, IS IF SOME MEANS BEYOND YOUR ASSESSMENT CAN DETERMINE WHETHER THINGS ARE TRUE.

Cory, I’d like to talk about trust in the digital age. Why do you think trust is still so relevant? At the same time, trust itself is scarce.

Before we got into the digital age, trust was important because you can’t, as an individual in a high-tech society, ascertain on your own whether important things you must know are true. Do you know whether the food-hygiene standards for the restaurant down the road are sufficient? Is the de-icing your city does on roads when it snows going to destroy your car? Is the sanitation process used to filter your water enough to keep you from getting cholera? Are the building standards for the steel joist over your head sufficient, or is your ceiling going to fall and kill you? The only way you can survive, literally not die, is if some means beyond your assessment can determine whether things are true. Historically, at least in the modern era, we have used a process that’s legible to the people who rely on it, even if the conclusions of that process aren’t necessarily legible.

     You often hear nerds talk about lawmakers being too old to understand computers, and that’s why we have bad computer law. I don’t think that’s true. Lawmakers managed to make good laws about things like sanitation, even though most legislatures don’t include microbiologists who became lawmakers. They’re able to make good policy, even though they’re not experts in the policy domain, because they have a robust process. That process centers on a truth-seeking exercise. That’s when you have a body, an expert regulator or a commission, that takes testimony from multiple parties who have a stake in the issue and resolves their conflicting views. You might say we’re going to allow only this much run-off from this factory, or this much of this chemical, in our drinking water, and no more. You might have people who say, “That’s too much; it’s going to make people sick,” and you might have others who say, “That’s not nearly enough; you could have much more and people would still be safe.”

You adjudicate those claims by gathering evidence from competing experts. That evidence is assessed by a different set of neutral experts, and those neutral experts divulge their conflicts of interest. If they have meaningful conflicts of interest, they have to recuse themselves from the process. Then, the remaining experts come to a conclusion. That conclusion isn’t just delivered to the public as a fait accompli. It comes with a rationale. You can see how the experts chose one version of the truth over another and understand that. That is legible. Even if you don’t understand the question being answered, you can understand how the compromise was reached. 

     A robust process also exists for revisiting a conclusion that turns out to be wrong. It doesn’t assume that everyone’s going to be perfect. It doesn’t just work well; it also fails gracefully. You can go back and say, we have new evidence about safe levels of lead, about whether this earthquake-proofing measure is sufficient, or about whether Volkswagen is cheating on its diesel emissions tests. You can revisit those conclusions. Something has changed in the last 40 years that has made that process significantly less robust.

     The major change has been market concentration: the number of firms that compete in every industry has gone down very quickly, until almost all major industries are dominated by a handful of players, sometimes as few as two or three. When you look at something like search, it’s down to one player. Industries concentrated like that have a much easier time coming up with a harmonized regulatory or lobbying position.

     A lot of people who saw that photo of Trump in Trump Tower with all the tech leaders sitting around one table were appalled that the tech leaders were meeting with Trump. There are plenty of good reasons to be upset that tech leaders were meeting with Trump. But if you’re paying attention, you should be upset that all those leaders can sit around one mid-sized table.

     When your industry is reduced to four or five firms, then anyone qualified as an expert regulator, supposedly neutral, has probably worked for two or three of them. That expert is probably married to someone who works for one of the firms, and maybe they’re a godparent to, or will executor for, someone from another one. When an industry is that concentrated, one way you ascend the corporate ladder is by going to work for a competitor. Novartis gets someone from Merck, Apple gets someone from Google, or Sheryl Sandberg goes to Facebook and ends up running the show there.

     With concentrated industries like that, the people who represent them have deep ties to every firm in the industry. Then, to top it all off, when an industry only has four or five big players, the only qualified regulators probably came out of the executive suite of one or more of them. When you look at the US regulatory context, for example, you see that the pro-neutrality FCC chairman Obama hired, Tom Wheeler, was a Comcast lobbyist. The FCC chairman Trump hired, who dismantled net neutrality, is a former Verizon lawyer [Ajit Pai]. That’s not a coincidence. There are too few qualified people. There’s one academic, Susan Crawford, who might be qualified to chair the FCC, but apart from her, there’s almost no one. So you have what amounts to a conspiracy, which is to say that all the people in the industry are getting together and agreeing, either tacitly or explicitly, on a single lobbying position. That’s pretty much the formal definition of a conspiracy. And you also have corruption.

     When the people who are running the show, who are supposed to be sitting in judgment, are working for the industry they’re supposed to be regulating, you end up with self-regulation. In the US, the National Highway Traffic Safety Administration left safety testing for child car-seats to the manufacturers. The car-seat manufacturers knew from research that parents were looking for seats that were robust in side impacts, where the car gets T-boned. So the manufacturers started testing for side impact. Now, first of all, their tests were grossly inadequate. They were 20 MPH tests instead of 70 MPH tests, they didn’t use real collisions, and so on.

     But even with those very mild tests, they found that the test dummies in these car seats, which were put through simulated side-impact events, suffered internal decapitation. That’s when the forces on the baby’s head are so intense that they tear through spinal tissue. The industry advertised these seats as side-impact tested, and they were tested. What the ads didn’t say is that the simulated children died in those tests. They sold these seats as side-impact tested all the same.

     This is a long way of getting to the final point, which is that we’re living in a moment of vastly increased conspiratorial thinking. People are anti-vaxxers, 9/11 truthers, and even flat-earthers. We’re in a moment in which the number of people who’ve seen the curvature of the earth out of an airplane window has never been higher, and yet there hasn’t been this much flat-earth sentiment since the Enlightenment.

You have to ask yourself, what is it that gave rise to this conspiratorial thinking? People such as Shoshana Zuboff say the rise in conspiratorial thinking is driven by machine learning, which is so good at manipulating people that it can make them think up is down and left is right. I don’t think that’s true. When you look at conspiracists, you find they know a lot of details about actual conspiracies, and those details become the framework for understanding hypothetical conspiracies.

     When people say, “The election is rigged,” they’re talking about stuff like what happened in the Iowa Democratic Caucus. People say, “In Katrina, they blew up the levees to drown the black neighborhoods so the white neighborhoods would be spared.” Those who believe that claim know an awful lot about the Great Mississippi Flood of 1927, when that actually happened. Anti-vaxxers are experts on the Sackler family and the opioid epidemic, where the FDA and regulators did collude with pharmaceutical companies, who are super-concentrated and whose executives have moved into regulatory roles. The opioid epidemic has killed more Americans than were killed in the Vietnam War. 

     The biggest correlate to belief in conspiracy is knowledge of real conspiracy. When we talk about the collapse of trust, we get sidetracked onto things like deepfakes. Then we say, “Well, now people can’t trust their eyes.” They never could trust their eyes, because you never know what’s out of frame in a video or photo. What people trusted was a system that produced reliable, impartial conclusions, with editing that was never deliberately misleading. The reason trust collapsed isn’t that people were brainwashed by machine learning. It’s that the firms involved demonstrated themselves to be untrustworthy. The rise in conspiratorial thinking is explained by a rise in conspiracies.

The totalitarian systems we’ve been talking about for decades, such as the one portrayed in 1984, never happened in Western societies. What happened instead is a form of surveillance capitalism, a combination of totalitarian aspects and profit-driven motives. How do you see that?

You can see that story playing out when you look at the history of mass surveillance. Take the Stasi: we don’t know exactly how many people the Stasi employed, but it looks like about one in 60 East Germans was either an informant or an officer. It took about one person to spy on about 60 people. The NSA, again, we don’t know exactly how many people it uses to spy on people, but we know the US tries to spy on every human being alive, and we know only about 1.4 million Americans have the security clearance to be a part of that project.

     We know the minimum number of people one US spy can surveil. It’s about 10,000. We went from one in 60 to one in 10,000 in a generation. The way we got there, that productivity gain, is about the efficiency of spying on people using digital tools. That’s a big piece of it. But if you were to give the Stasi access to the tools the NSA has, they still wouldn’t be able to spy in the same way. The NSA doesn’t spy by putting bugs in our houses or hiring people to follow us around. It gets its power by buying data or stealing data from tech companies, and the tech companies get that data because we pay them to collect it.
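
Taking the two ratios quoted above at face value, here is a minimal back-of-the-envelope sketch of that productivity gain. The population figure is an illustrative assumption, not a number from the interview.

```python
# Surveillance "productivity" using the ratios quoted in the interview:
# roughly 1 watcher per 60 people (Stasi) vs 1 per 10,000 people (NSA).
POPULATION = 330_000_000   # assumed population to watch (roughly the US); illustrative only
STASI_RATIO = 60           # one informant/officer per ~60 East Germans
NSA_RATIO = 10_000         # one US spy per ~10,000 people (the interview's minimum estimate)

stasi_staff = POPULATION / STASI_RATIO
nsa_staff = POPULATION / NSA_RATIO

print(f"Stasi-style surveillance: ~{stasi_staff:,.0f} watchers")
print(f"NSA-style surveillance:   ~{nsa_staff:,.0f} watchers")
print(f"Roughly a {NSA_RATIO / STASI_RATIO:.0f}x gain in people watched per watcher")
```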

     When you pay your phone bill, you’re paying your phone company to spy on you. When you buy an app, you’re paying the app company to spy on you. When you buy a phone, you’re paying the company that made the phone and the operating-system vendor to spy on you. You get spied on with your own money. One of the supposed great cruelties of China during the Cultural Revolution was that if they took your father away and shot him, you found out when they sent you a bill for the bullet. Today, when they spy on you, they send you a bill for the phone.

     I like the phrase surveillance capitalism, but the surveillance-capitalism hypothesis doesn’t encompass this. Zuboff thinks corporate surveillance is a very different animal from governmental surveillance. In her view, people who worry about the resurgence of the Stasi miss the big picture, which is that Google or Facebook is going to be the Stasi. Weirdly, she thinks it’s not going to be Apple, even though Apple removed all working VPNs from its App Store in China so the authoritarian government there could round up Uighurs, Falun Gong practitioners, and so on.

     People from Falun Gong are incarcerated and have their organs harvested for party members. Apple doesn’t spy on you to sell you refrigerators; it spies on you so Chinese party members can have your kidneys. That’s not much of an improvement. If you understand what’s going on here, you understand that states and corporations need each other to effect their surveillance projects. Surveillance data isn’t very valuable to corporations; that’s one reason they collect so much of it. This is another area where I disagree with Zuboff. She thinks they collect a lot of data because it’s worth a lot.

     I think they have to collect a lot because they eke out only marginal gains in ad targeting even with massive amounts of data. It’s hard to predict what people want to buy, and the returns diminish very quickly. If you’re trying to sell cheerleader uniforms and you know that someone is a cheerleader, you’ve got 99% of all the data you need to target an ad to them. Knowing that they’re a cheerleader in a specific town for a specific team, that they also like this TV star, and that they recently went through a breakup doesn’t sell extra cheerleading uniforms.

     To understand the relationship between private and public surveillance, state and commercial surveillance, you have to understand their symbiosis. If you want to spy on people with impunity as a corporation, you need governments to let you do it. And although spying on people isn’t very helpful commercially, it can be very harmful to the people being spied on. The data might not be useful for selling someone something, but it could still contain something you can use to destroy their life.

“Today, we live in a society in which spurious realities are manufactured by the media, by governments, by big corporations, by religious groups, political groups… So I ask, in my writing, ‘What is real?’ Because, unceasingly, we are bombarded with pseudo-realities manufactured by very sophisticated people using very sophisticated electronic mechanisms. I do not distrust their motives; I distrust their power. They have a lot of it. And it is an astonishing power: that of creating whole universes, universes of the mind. I ought to know. I do the same thing.” —Philip K. Dick

     Any kind of medical or personal information might be used to compromise their family life, their personal life, their education, their employment, and so on. That’s all in there. Firms that warehouse data are like firms warehousing plutonium. We should regulate them like they’re holding onto potentially deadly stuff. Instead, we regulate them like they’re a direct-marketing company that has a list of people who’ve subscribed to a magazine and sells that list to people who might want to sell them a different magazine. We under-regulate data collection. One way you get there, getting policymakers to forgo good public policy and to be negligent in their duties, is by making them reliant on your data.

     States also want to collect huge amounts of data, for lots of reasons, not least because it’s unstable to live in a world as unequal as ours. Failing to make good policy because it might piss off a rich person or the leaders of a concentrated industry (one that controls its regulator) means things can get really bad. If you can’t make good health rules, people die. If you can’t make good climate rules, cities burn down or flood. If you can’t make good food-safety rules or toy-safety rules, people die. Living in a very unequal world is very unstable. One reason East Germany needed so much spying was that living under that regime’s conditions upset people and made them want to replace their government. By spying on people, you can maintain corruption and arrest anyone upset about that corruption.

     But if you’re going to be corrupt, at some point, it’s cheaper to just make good policy. It’s cheaper to build some roads, hospitals, and schools than it is to hire guards to arrest and hold people who are angry that they have no roads, hospitals, or schools. Guards who keep the disgruntled from breaking down the walls, stealing all your stuff, and putting you in a guillotine. Eventually, it’s cheaper to prevent guillotines than it is to police the populace. One of the things that surveillance does is make policing a lot cheaper, because rather than watching everyone, you can just watch those identified as likely to be building the guillotines. That’s much more efficient. 

     States, especially corrupt and plutocratic states, want a lot of surveillance of the population. Firms want a lot of surveillance of the same people because they get a low-grade commercial advantage from gathering data, and the only way to make that data profitable is to gather titanic amounts of it, more than any sane regulator would ever let them. This creates a mutual benefit between states run for the profit of dictators and the monopolistic firms that emerge in dictatorships.

     You can see that symbiotic relationship playing out, for example, in the way Amazon markets its Ring products, which are surveillance doorbells. Amazon gives them to police departments and says, “Go out and create buzz about our product among the people you’re supposed to be protecting. Tell them that crime is terrible where they live, and that to be safe, they should put these cameras on the front of their houses.”

     But American violent crime has been on the decline. 

     Then Amazon tells the cops, “If you give away a few and then get 50 more people to buy them, we’ll give you warrantless access to the feeds from those cameras, so you can spy on people in a way that no court would allow.” Buried in the terms of service is something like, “By putting this camera on your house, you agree that we’re allowed to coach the cops on how to obtain the video from you without a warrant.”

     If you think lawmakers are going to tell Amazon it has to limit its surveillance cameras or build safety features into them, you have to reckon with the cops showing up and saying, “No, no, no. If you limit Amazon, we won’t be able to fight crime.” At that point, there’s no real distinction between public and private surveillance. They’re two sides of the same coin.

People distrust the state, yet they cry for the state to protect them and solve their problems. People also distrust corporations, yet they eagerly await tech that demands a great deal of trust in complex systems, such as self-driving cars. Isn’t that a strange form of cognitive dissonance?

Absolutely. You answered your own question. Self-driving cars don’t work well, and they won’t anytime in the foreseeable future. They’re a solution in search of a problem. We know how to solve the problem of not being able to get around without a driver’s license. It’s called good public transit.

     We had that in most of our cities for a long time, and we started to draw it down, in part, because the concentrated automotive industry was able to lobby against it. Self-driving cars are irrelevant to traffic problems in most cities. Take all the people who need to go from the Eastside of Los Angeles, where I live, to the Westside of Los Angeles, where I sometimes go to get to the airport or the beach. Count how many people need to make that trip every day, multiply the amount of space a car takes up on the motorway by that number of people, and then divide by the amount of motorway we have.

     It doesn’t matter who, if anyone, is behind the wheel of the car. Even if you improve the throughput by an order of magnitude, you will still have massive congestion. That’s not even taking into account the idea that every time we make it faster to get from one side of LA to the other in a private vehicle, more people go out and drive private vehicles. They quickly absorb excess capacity, and it gets slower and slower again. It’s a Red Queen’s Race. 

     If you instead imagine a bus network feeding into a high-speed rail network running through the city, and you compare the number of people who sit in each one of those vehicles with the amount of space it requires, you can see that the only potential solution for our cities is excellent transit.
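
To make that back-of-the-envelope arithmetic concrete, here is a minimal sketch. Every number in it (trip count, vehicle footprints, occupancy, lane-kilometres) is an assumption chosen for illustration, not a figure from the interview; the point is only that per-person road space, not who is behind the wheel, is the binding constraint.

```python
# Rough road-space arithmetic for a cross-town corridor.
# All constants are illustrative assumptions, not data from the interview.
DAILY_TRIPS = 500_000        # assumed people needing the Eastside-Westside trip each day
CAR_FOOTPRINT_M = 30.0       # assumed metres of lane a moving car effectively occupies
CAR_OCCUPANCY = 1.2          # assumed average people per car
BUS_FOOTPRINT_M = 60.0       # assumed metres of lane a moving bus effectively occupies
BUS_OCCUPANCY = 60.0         # assumed people per full bus
LANE_KM_AVAILABLE = 600.0    # assumed lane-kilometres of motorway on the corridor

def lane_km_needed(trips: float, footprint_m: float, occupancy: float) -> float:
    """Lane-kilometres needed if all trips shared the road in the same peak window."""
    vehicles = trips / occupancy
    return vehicles * footprint_m / 1000.0

cars = lane_km_needed(DAILY_TRIPS, CAR_FOOTPRINT_M, CAR_OCCUPANCY)
buses = lane_km_needed(DAILY_TRIPS, BUS_FOOTPRINT_M, BUS_OCCUPANCY)

print(f"Cars:  {cars:8.0f} lane-km needed, {cars / LANE_KM_AVAILABLE:.1f}x the {LANE_KM_AVAILABLE:.0f} lane-km available")
print(f"Buses: {buses:8.0f} lane-km needed, {buses / LANE_KM_AVAILABLE:.1f}x the {LANE_KM_AVAILABLE:.0f} lane-km available")
# Even a 10x throughput gain from automation still leaves cars needing roughly
# double the available lane-km, which is the point about congestion above.
print(f"Cars with 10x throughput: {cars / 10 / LANE_KM_AVAILABLE:.1f}x the available lane-km")
```

Under these made-up numbers, solo cars need about 20 times the available lane space, buses fit comfortably, and even a tenfold throughput gain from automation still leaves private cars needing roughly double what exists.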

     People’s imaginations have been constrained. Our imagination about something like climate change has been constrained. In the US, our imagination about public healthcare has been constrained by years and years of propaganda. This is where I agree with Zuboff that messaging matters. But it happens slowly. The reason our social imagination changed is 40 years of relentless propaganda, not machine learning figuring out how to reach into your brain and change your mind.

     It takes three hours to cross LA during rush hour, and we can’t imagine public transit, so we fish around for other solutions, even outlandish ones, because we know what’s going on isn’t sustainable. Three hours might become five hours, might become eight hours. We don’t know what to do to resolve that. And so we’re like, “maybe robots,” because we can’t imagine rebuilding the systems of mutual aid that got us to where we are, that made our civilization possible.

     In some way, we’re caught up in a lapsarian narrative, the idea that we have fallen from grace, that the great collective accomplishments of yesteryear were the accomplishments of a civilization that’s gone. Just like we don’t know how the Egyptians built the pyramids, we might never know how our ancestors built trams and trains, so we might as well not try, right? That’s why we end up with this kind of forward-looking thinking.

     It’s exciting to see these cults of personality built around individuals who maintain the fiction that they’re responsible for what their organizations do. People such as Elon Musk, who bills himself as the chief engineer of Tesla but does little engineering. Then, when the old chief engineer of Tesla quit and the tech press wrote it up, they got letters from Tesla saying, “That was not the chief engineer. Musk is the chief engineer.” 

     This idea that Tony Stark is in his lab inventing robot cars to solve traffic jams is a very exciting one, but it’s completely disconnected from how stuff gets done. 

Of all the possible futures that are being sold to us right now—in sci-fi, politics, or marketing—which is the most appealing one for you? 

The Green New Deal. 

Do you see 2019-nCoV as a potential catalyst for a Green New Deal? Could there be some commotion in the coming months?

Well, I’m not the first person to point out that it’s weird and largely unprecedented to have a civic duty you can discharge by staying home.

     In terms of what comes next, to be frank, we’re on a knife-edge between authoritarianism and sweeping political change toward solidarity. The people who take away from this the idea that we can’t or shouldn’t all survive will be driven toward exterminism: let’s kill all the olds, spoonies, and poors because they are reservoirs for diseases. The people who correctly perceive that this was a human-engineered crisis, because in the run-up to it we didn’t address homelessness, pharma patents, public health, labor rights, emergency stockpiling, communications infrastructure, and so on, will see it as a wake-up call to restructure our production to do better.

     WWI was a human-made disaster. The dynastic fortunes of the wealthy had multiplied during the age of colonization, which ended centuries of primogeniture and allowed each son of each wealthy person to inherit. (Under primogeniture, only the oldest son inherited, to keep the dynasty intact and undiluted.) After two or three generations of this, though, there was no more land left to steal to found colonies and bud new dynasties. Rather than return to primogeniture, the world’s wealthy began to fight over each other’s colonies, hoping to preserve the pattern of every son of their national aristocracies inheriting a fortune.

     The result was a global cataclysm. But WWI wasn’t so terrible that it destroyed the fortunes of the wealthy people who started it. They retained enough wealth and influence that they were able to insist that surviving nations (not only the losers!) pay off their war debts, which were owed to those aristocrats. 

     The result was immiseration on a scale never seen, austerity without limit, and finally, WWII. WWII destroyed orders of magnitude more capital than WWI, and it started under conditions that were even more unequal than WWI. That is, the only people with capital to be destroyed at the start of WWII were the wealthy, who’d acquired everything there was to acquire during the interwar years. After WWII, the wealth of the super-rich had been brought to a historic low. In Capital in the Twenty-First Century, Thomas Piketty shows the share of wealth controlled by the top 10% had not been that low in more than 300 years. That change upended the political process. With power diffused among large populations rather than concentrated in aristocratic hands, our politics became much more evidence-based and pluralistic. You could go where the evidence took you, even if it gored a rich person’s ox. 

     That’s the Tommy Douglas era, the NHS era, the French les Trente Glorieuses: the creation of the world’s welfare states, with widespread, though not universal, opportunity. It was also the era of decolonization, racial justice struggles, and uprisings for the rights of women and sexual minorities.

     So, that’s the other side of the peak of misery we stand atop. On one face, we tumble into fascism and austerity. On the other, we descend to a valley of shared destiny, solidarity, pluralism, and prosperity made possible by the shrunken fortunes and influence of the wealthy. They’re the only people who own substantial amounts of stocks and bonds, all of which are tumbling with no bottom in sight. 

     To me, the pandemic is the start of the big project for the years to come. There won’t be much rebuilding after the crisis passes, because this isn’t the Blitz. All the factories and raw materials will still be intact, but the system that organized them and the workers so badly for survival (though it was superbly tuned for profit extraction) will not be able to mobilize them in ways that produce the material outputs needed for surviving and thriving. After this emergency, we need to avert a repeat of 2008, when a crisis of capitalism begat moar capitalism instead of a rethink of the system that left us so monumentally unprepared for a pandemic. It’s great to see nationalized pharma patents for the duration, but let’s reconsider whether we want private pharma development at all. Maybe we should follow the contours of the Access to Medicines Treaty, whereby nations that spend a certain percentage of GDP on pharma research, whether through private firms or state spending, get an unlimited license to other nations’ pharma, and vice versa.

     A repeat of 2008—giving everything to rich people, again—would create massive political instability and make it politically impossible to spend the way we need to prevent the next pandemic from being even worse. That’s the thing we must head off. Cheers to that.

“IT’S CHEAPER TO BUILD SOME ROADS, HOSPITALS, AND SCHOOLS THAN IT IS TO HIRE GUARDS TO ARREST AND HOLD PEOPLE WHO ARE ANGRY THAT THEY HAVE NO ROADS, HOSPITALS, OR SCHOOLS.”


CORY DOCTOROW (craphound.com) is a science fiction author, activist, and journalist. He is the author of Radicalized and Walkaway, science fiction for adults, a young-adult graphic novel called In Real Life, the nonfiction business book Information Doesn’t Want to Be Free, and young-adult novels such as Homeland, Pirate Cinema, and Little Brother. His next book is Poesy the Monster Slayer, a picture book for young readers. He works for the Electronic Frontier Foundation, and he is an MIT Media Lab Research Affiliate, a Visiting Professor of Computer Science at Open University, and a Visiting Professor of Practice at the University of North Carolina’s School of Library and Information Science. Cory cofounded the UK Open Rights Group. Born in Toronto, Canada, he now lives in the US in Los Angeles.

Follow Cory on Twitter: @doctorow

MEMO 01 - JULY 2020
Copyright 2020 TFLC
Ideas for change