Delete Your Account Now: A Conversation with Jaron Lanier

By Harper Simon
October 8, 2018

JARON LANIER IS ONE of the leading philosophers of the digital age, as well as a computer scientist and avant-garde composer. His previous books include Dawn of the New Everything: Encounters with Reality and Virtual Reality, Who Owns the Future?, and the seminal You Are Not a Gadget: A Manifesto. His latest book bears a self-explanatory title: Ten Arguments for Deleting Your Social Media Accounts Right Now. We first met while performing on stage together in San Francisco several years ago and developed a friendship. On August 14, he sat down with me to talk about the perils of social media and the promise of better things to come.

¤


HARPER SIMON: A little while ago I was visiting you and you put some VR goggles on me for the first time, and it was kind of comical, because we were having all these problems making it work. And at one point you said something like, “You know, I love making music. I love writing books and giving talks, but when it comes to technology I just don’t know that we really made the world a better place.” And I was really struck by that! Is that actually how you feel or were you just getting irritated that we couldn’t get the VR goggles to work?

JARON LANIER: [Laughs.] Well, I mean, there’s a little bit of both … What was probably going through my mind is this sense of what I call “digital nervousness” — that every little aspect of life these days involves navigating some digital system that forces you to act both perfectly and arbitrarily to suit its needs. And I do think that that world is a world I enjoy less than the world that came before it. And I think we did kind of screw up, and it’s a complicated screw-up. It has a number of dimensions.

But I think in that particular case I was just concerned with the texture of life when we have to conform to all these digital systems, and that’s a separate question, to a degree, from my specific complaints about how the systems become manipulative and all that, but not entirely …

I think that the patterns that have led us to this world of ambient manipulation are making themselves felt in everything digital. And so it does really have a little to do with why everything is a bit more annoying than it would otherwise be. A lot of the systems are designed to serve some central kind of mob scheme rather than the user. So a lot of them only feel good when you do exactly what the system expects you to do and don’t really give you any latitude. People then end up behaving according to the expectations of the software design, because it’s just so hard to resist and you have this massive cultural and, kind of … intimate conformity, is what I would call it.

I feel like it’s hard to discuss the basic premises of a lot of the arguments in your latest book without talking about what a “BUMMER platform” is. Can you describe it and tell us which prominent companies have designs that fall under this umbrella?

Well, it’s a funny thing about that term … I just needed some simple term because it was getting so repetitive to say “behavior modification empire” or something similar with a lot of syllables. I received some criticisms for the term. Some people think it’s a little too schoolboy-naughty or something. Sherry Turkle wrote me a really nice letter saying she loved the book but felt there was just too much “tush” in it. [Laughs.]

Huh. I wasn’t thinking of it in that way. I was thinking more of the Haight-Ashbury “bummer.” Like a bad trip.

Oh yeah, well, that works better. I mean that’s kind of how I was thinking about it. But anyway, it stands for “Behaviors of Users Modified and Made into Empires for Rent.” And it might not be the most graceful acronym in the world, but at any rate …

I thought that was pretty damn good.

It’s just a way to summarize the business plan behind some of the largest companies in history, with the idea that money is made whenever two people exchange any value, whether it’s just one datum being measured from somebody that’s used to run a machine-learning application or people sending messages to each other, uploading videos, or whatever. The companies are not paying for it. It’s not being paid for by angels from the sky. It’s not a nonprofit charity. It is being paid for by customers — but the customers are not the people who are actually doing the thing. They are these other people, called advertisers — or I prefer to call them manipulators, because they have been sold on the idea that they’re not just advertising. They’re not just getting a message in front of you, but are part of a mathematical scheme that will predictably addict you and then modify your behavior.

And I think sometimes that’s oversold — sometimes it may not be as true as some of the people who are paying for the system have been led to believe — but at any rate, it is this huge business model. So the two big companies that are bound by this are Facebook and Google, and there are some smaller ones that often feel that it’s been thrust upon them and they wish they could do something different, like Twitter. Some people don’t know that a lot of the internet brands that might seem independent are actually owned by Facebook, and are just different faces of Facebook …

Like Instagram, for instance …

Instagram, WhatsApp … these are all just parts of Facebook.

So did Facebook start out with an advertising component? I was very late to all of this, so …

Yeah, actually, one thing that’s really interesting is that Facebook is not a normal company, in the sense that its valuation when it went public wasn’t based on how much money it made, which is what would normally happen with a business. It actually somehow talked the SEC into creating this other category, where it would be valued based simply on how much it was used, just on user engagement. And I think that was one of the most dreadful decisions in the history of financial governance, because, unfortunately, it set the pattern for other companies that went public later, like Twitter. So it’s almost like a government mandate that, instead of actually making money and serving customers, a company will become an addiction and behavior modification empire.

Can you describe some of the consequences of what you refer to as “network effects” or “lock-ins,” and how these kinds of design flaws have made it critical for people to quit social media entirely in order to ensure fundamental change?

Well, those are two slightly different ideas, so let me address them each separately. The network effects and lock-in have to do with what are called natural monopolies, which arise in some situations. In the case of Facebook … there’s a very widespread dissatisfaction with Facebook. It’s been criticized almost constantly by everybody. And yet, somehow, people are still on it. And the reason why is twofold in this case. One thing is the network effect, which we were just mentioning. What that means is that, because everybody is already on it, it would require a supernatural degree of coordination for everybody to get off it at the same time. So to get off it means that you’re sort of isolating yourself. And this is typical of a lot of network systems — it’s why it’s hard to have a lot of competing electric companies or competing highway systems. In general, it just gets easier if there’s one of them. And so Facebook is one of these natural monopolies. Sometimes it’s more proper to call these things both monopolies and monopsonies.

The key idea here is that it’s just hard to get off it. Mathematically, people are kind of stuck. Then, added to that, in this case, is the addiction factor. People are addicted to it. So if you have both network effect and addiction, then it becomes really hard to leave.

There have been examples, though. How was it that a site that was as ubiquitous as MySpace suddenly dropped off and then everyone jumped onto different platforms?

Yeah, that was impressive. The fall of MySpace is one of the few examples of a new network taking over an old network. And, of course, MySpace fell totally. It’s not like it just lost 20 percent of its people; because of network effects, it just had to die. And in fact, I still think that, eventually, if we just ran the clock forward, either Google will kill Facebook or vice versa. I don’t think they can coexist indefinitely, although they could for a long time. They could for decades, but they can’t indefinitely.

That’s a fascinating idea. What is the fundamental conflict between Facebook and Google?

Well, the thing that separates them now is that they’re getting different kinds of data from people. Facebook has personal relationships and intimate behaviors. And Google has interesting attention measurements of a different kind. Google probably has superior technology, better engineers, and whatnot, and Facebook has an entrenched, probably overall less elite usership than Google does. It’s kind of interesting. They have enough distinctions that they’re coexisting for now. They do compete for the same advertisers, though, and the real reason I think that one hasn’t killed the other is that the advertisers largely formed themselves into these giant holding companies to have enough strength to negotiate with Facebook and Google. There was this wave of extraordinary consolidation in the advertising business. And so it’s in the interest of those people not to deal with a single monolithic platform. To some degree, it’s almost like collective bargaining — like a labor union or like cartels or something — on the part of the advertising industry that’s preventing a takeover of one by the other. And you have a sort of duopoly.

Do you think Google might end up buying Facebook, perhaps?

Well, given the artificially hyped prices of these things, I don’t think they would choose to. More likely, Facebook might eventually just be killed by Google. But it would take a really long time. Like I say — for the moment, the ad holding companies are a bulwark against that happening. But there’s so much else that can be said about this. I mean, these companies are really interesting objects.

Facebook and Google are the only two big tech companies that are unable to diversify their profit centers. They’re addicted to their stupid business model in the same way that a petro state is addicted to oil. You know, once you’re hooked on oil as your national economy, you just kind of stay there. And that’s exactly what’s happened with Google. They pay crazy money and expand their cost centers into all these side projects, like balloons that deliver internet from the sky, or one that will supposedly end death. They have all these crazy companies they spawn, and yet their profits still remain essentially the same, as does their model: manipulating everybody for money. But I wanted to get to something else you said, if I could.

Yes, please.

You brought up this other question of why it is so important for people to leave, and the thing I want to say is that, because of addiction and because of network effects, I realize that only a small minority of people can do it, and I don’t know how many I have influenced to get off of there. I don’t have a means of measuring that. But here’s what I would say: there have been times before, recently and even in America, when society had to confront mass addiction that had a commercial component.

I’m thinking of Mothers Against Drunk Driving. And I’m thinking of the public smoking of cigarettes. In both of those cases, there were industries that relied on mass addiction, but somehow or other, a rational conversation gradually created a change. In neither case was the substance totally outlawed, but its role in society was vastly changed. And I just want to point out that this process of having a rational conversation about how things can change is only possible when at least somebody isn’t addicted. If the MADD mothers were themselves alcoholics, it would have been harder for them to organize. A space for conversation where some people aren’t addicted facilitates thinking different thoughts and being able to envision different outcomes. And what I do believe is that we can get at least enough people to quit this stuff to create such a space.

You say in your book that if you don’t quit social media you are not creating the space in which Silicon Valley can act to improve itself. You also say people should delete their accounts until non-toxic varieties are available. Can you explain how Silicon Valley might create new social media platforms that would solve some of the problems we currently face as a result of Facebook, Instagram, or Twitter?

I mean, inventing solutions is harder than criticizing. For instance, I think Marx was a brilliant critic and a terrible inventor. And most people don’t even try. I’ve seen so many articles or comments about Facebook that are, like, “Oh, it’s terrible; the business model is fundamentally horrible.” But then, any alternative is just viewed as being impossible — that’s it. As soon as we can criticize it, we’re done, and now we’ll just live with it and watch civilization fall apart. And there’s a temperature rise and we’ll all just be stupid and that’s it. There’s this kind of crazy lack of faith in one’s own intellect. People just give up. Anyway, I am trying to invent alternatives. I’m certain that none of mine are perfect and that many others are better. I think we should all be talking to each other.

If you change the business model and provide a new social network where the advertising component is not intrinsic, then I don’t see why people wouldn’t jump on to this new platform and let go of Facebook — which is kind of old and boring anyway.

I mean, I’d say Facebook does feel old and boring, but some of the Facebook companies, like Instagram, have been able to addict the younger demographic. And a lot of people in public life have pretty bad Twitter addictions. A great example is Elon Musk, who started attacking this rescuer who was trying to free the Thai soccer team.

I heard about that. Craziness.

Yes. So here’s this guy who is trying to free these boys trapped in a cave in Thailand, and he got into the center of a dispute about whether a submarine would be helpful or not, which should really just be a discussion about utility. There’s no reason for it to turn into this personal thing. But on Twitter — if you’re addicted, it does become like that — Musk is calling this guy a pedophile, and then his own stock drops and he gets punished for it. And that’s exactly what’s happened to Trump and to so many other people, with the Twitter addiction bringing out the very worst side of a person. And in Trump’s case, that’s saying a lot, because he has a number of pretty bad sides.

So I think the path to a better design is not to have AI algorithms policing our behaviors and our thoughts, but instead to allow people to form together into groups that try to achieve excellence and have motivation through normal commerce instead of surveillance commerce. In the past, the reason that science got better was not necessarily that governments would enforce good scientific behavior, but that scientific journals did. And the reason that the press might have been accurate instead of constantly bogus wasn’t so much that the government would enforce it, although once in a while libel laws come into play, but just that newspapers and other news organizations would create a brand that was based on their reliability. In order to do this, the money has to flow back to the people. It can’t all flow to the center; there has to be some way that individuals and other smaller centers of excellence can also make money for doing good work — and the only way to do that is to get away from the idea that everything has to be free.

And so I think monetizing it in an intelligent way is going to be part of the solution.

This is especially important since the same companies that are spying on everybody are also saying that they’re going to create AI to put everybody out of work, which is probably not true, but it’s the official line. And so paying people would allow them to view their own data as valuable, and to have an optimistic sense of a future instead of feeling like they’re the last generation who might have been worth something. It’s a way out of this feeling of false human obsolescence. And it allows people to be responsible for their data, to have ownership of it, to have pride in it, not to view the data that comes from people as some kind of exhaust. Exhaust is what we call people’s data in Silicon Valley. It’s a very uncharitable way to think of our fellow human beings.

You’ve also discussed, in your earlier books, the idea that services such as search and social media might be monetized directly. Why do you think this is such a hard sell?

You know, I found it a hard sell when I first started reading about this years ago. But recently, I have to say, I’ve been finding really tremendous support. I mean, it’s really a completely different world now than it used to be. It’s been amazing and gratifying. I no longer feel at all like it’s hard.

And do you think that it’s going to come from a change that Facebook or Google are going to make to their business model or that there’s going to be an attempt to make those companies obsolete by someone who invents the new model?

That’s a great question. I think it’s too early to say. I would prefer to see the companies evolve rather than be overtaken and destroyed, because I think a little bit of smoothness and lack of drama might be helpful in the human experience right now. I understand the value of creative destruction in a market and in a civilization. But there’s also a value to smooth transitions, where lessons can be learned and history can be remembered, and I just feel like we’ve had too much destruction lately. This idea of “move fast and break things” has created a gap in the human story, where a lot has been lost.

Also it’s not as if the founders of Facebook or Google don’t have their hearts in the right place. They want to improve the future of our society. I assume that’s why they got into it in the first place, as well as to accumulate wealth.

Well, I know most of them to some degree, and they vary … Facebook’s essentially a one-person company and I find Zuckerberg to be a bit of a cipher sometimes. But the other founders, like Sean Parker, I think they vary a lot. Chris Hughes has really become someone who is trying very hard to find solutions, and I view him as an ally at this point. I was involved, to a degree, in the early days of Google, and they were very sweet and idealistic back then, and I suspect a lot of people at Google still are, as far as I can tell. I feel like Facebook has a bit of menace in its story, but I feel Google is more like a tragedy — more unintended consequences and hubris, but not bad intentions.

Are they starting to see it this way? Or are they entrenched in their own positions?

I would say there’s been this extraordinary shift in Silicon Valley culture since the election. Brexit made a difference, and then the Trump election really nailed it home that something was changing in society, and in a way that was potentially going to get worse and worse and weirder and weirder. You’re starting to see an openness and an activism and a kind of courage from the people in the big companies in Silicon Valley. That’s one of the healthiest and most satisfying developments I’ve ever seen in the Silicon Valley community. So I guess it’s a little bit of a silver lining to all of the horrible things that have happened lately.

One thing I have a problem with is how social media reflects everything as a numbers game. In politics, numbers are intrinsically more important, because our democratic system has to do with ballots cast and majority rule — putting aside the absurdity of the Electoral College — but culture shouldn’t be defined as a numbers game. Maybe it’s just an old story about art versus commerce, but it seems to me social media is a system in which numbers — likes, followers, clicks — are dictating value.

This is an eternal conflict between quantity and quality. If you try to run human affairs based on quality you run into the problems of human bias, perception, malice, racism, and so on. And if you try to run them purely on quantity, you miss the real value of people. And so I think the only answer is to try to get the benefits of both. In particular, I think the key to using quantity in human affairs is to have enough different ones, with each reflecting different qualities that overlap, so there’s no single overwhelming number. I’m terrified of this one number, like one Social Credit score, as is implemented in China. If there are 40 different things that are a little like Social Credit scores, and people specialize in different ones, and the system reflects its own inherent uncertainty, the inherent diversity and unknowingness of the depths of quality that might exist in other people, that’s not necessarily terrible. And a little bit of quantification can correct for the tendency of people to perceive each other in bigoted or malicious ways. So I think we need to have a hybrid, complicated world in order to manage this confluence of quantity and quality. I think that’s an intrinsic problem that’ll never go away.

So can we talk a little bit about the social justice or activism aspects of social media? Aside from taking selfies, promoting themselves, and communicating with friends, the main justification people have for being on social media is that it enables so much community activism and social justice.

Yeah. A lot of people have felt that using social media is a way to organize for mutual betterment, whether it’s a social justice movement or other things. You’re absolutely correct: in the immediate sense their experience of that is authentic. I think they’re reporting on real events. The problem, however, is that behind the scenes there are these manipulation, behavior modification, and addiction algorithms that are running. And these addiction algorithms are blind. They’re just dumb algorithms. What they want to do is take whatever input people put into the system and find a way to turn it into the most engagement possible. And the most engagement comes from the startle emotions, like fear and anger and jealousy, because they tend to rise the fastest and then subside the slowest in people, and the algorithms are measuring people very rapidly, so they tend to pick up and amplify startle emotions over slower emotions like the building of trust or affection.

And so you tend to have the algorithms trying to take whatever has been put into the system and find some way to get a startle emotion out of it in order to maximize its use for addiction. What we call engagement should be called addiction and then behavior modification. And so you tend to have this phenomenon where there will be, let’s say, a social justice movement of some kind; it’s initially successful, but then the same data is instead optimized to find whoever is irritated by that social justice movement. Those irritated people are introduced to each other and put into this amplifying cycle where they’re more and more agitated until they become horrible. So, you start with the Arab Spring, but then you get ISIS getting even more mileage from the same tools. Or you start with Black Lives Matter and you come up with this resurgent bizarre racist movement that had been dormant for years. And this just keeps on happening.

So the problem is that when people say, “Oh, we use social media for social justice,” they’re typically correct. And yet in the longer story they’re really vulnerable to a far greater backlash than they would have gotten if they used another technique. At the end of the day, it’s hard to say whether they really benefited or not.

That’s such a fascinating point, and I don’t feel it’s been absorbed into the culture. People still romanticize what they perceive as their own effectiveness in the space of activism.

That’s very well put: people romanticize their own effectiveness …

A lot of the ideas in your book are starting to penetrate and permeate the culture. People are beginning to recognize the cruelty of Twitter and the narcissism and vanity of Instagram. But I feel like they’re still very much hooked into this idea that hashtag social justice movements like #MeToo or Black Lives Matter are unfailingly effective.

Well, that’s the weird thing: the degree to which they’re effective is also the degree to which they’re vulnerable to a countermovement. Whenever I say something dark like that, my fondest hope is to be proven wrong. I would love to be wrong. You know, maybe it will change. I mean one of the other possible things that could happen is that a sufficient number of people could become aware enough of these things that they’re not as vulnerable to them anymore. Maybe that’ll happen. In such a situation, maybe the algorithms wouldn’t be able to harvest resentment and irritability and amplify them so successfully. I don’t know — but for the moment I fear that this very dark analysis is the correct one. I still really do feel hopeful, though, that we can come up with better models and that, for the most part, the companies want to reform.

So just to be clear for our readers: what are the specific BUMMER platforms or behavior modification empires that you would discourage people from using, and which ones do not fall under this umbrella?

Well, a pretty good test for whether a platform is BUMMER is whether the Russian intelligence warfare units like the Internet Research Agency decided to target it and use it for manipulating people. That’s a really good measure. And so, if that’s the way you’re going to classify BUMMER, then the list is Facebook, Google (including YouTube), Twitter, Reddit, and, of course, a few other Facebook properties like Instagram. There are a few others out there, but those are the primary ones. Snapchat is an interesting one. I just spent some time with them and they’re sort of an edge case; they’re better, but they still have some problems.

Another example is LinkedIn, which has some addictive techniques but doesn’t seem to bring out the worst in people. So, you know, I don’t think this is exactly a universal criticism of the whole idea of social media — in fact, I’m sure it isn’t. I’m sure there could be better social media. Basically, if Putin was there, maybe you shouldn’t go there. Maybe that’s a good rule of thumb. Basically, don’t sleep in the bed where anybody who works for Putin has been lately. I think it’s a very good rule for us all to follow.

I’m sold. In fact, I’d already deleted my Twitter and Instagram accounts before reading your book, although I’m waiting to post this interview on Facebook as my last post before deleting. I have no problem letting go of social media, but I use Google for research all the time. Can you recommend another search engine?

Use Google without having a Google account. Just never have a Google account. Delete all cookies on whatever machine you use and then use a browser that has lots of privacy plugins (Firefox with eight or 10 plugins, for example: Ghostery, ad blockers, et cetera). DuckDuckGo is a search engine that emphasizes user privacy, but it ends up relying on Bing a lot. Bing is a fully self-sufficient search engine, the only one other than Google, and it isn’t attached to as big a manipulation engine behind the scenes.

It occurs to me that it now requires real bravery to quit your social media accounts, and I imagine that “influencers” might see it as career self-sabotage. So what would you say to them?

Well, one other thing I emphasize, to the point of being annoyingly repetitive in the book, is that each life is different. And I don’t want people to self-sabotage. I want people to be successful. My personal opinion is that even if social media as it exists is bad for the world, if it helps you for now, it might be a really legitimate decision for you to stay on it. I’m not going to judge you for it. And I don’t think there are any easy, one-size-fits-all answers. Having said this, I do want to point out that, as far as I can tell, I have a pretty successful writing career. I have a pretty successful speaking career. I have a pretty successful career as a public intellectual. And I’ve never had one of these accounts. And whenever I point that out, people say, “Well, but you’re an exception.” And, you know, maybe — but I can’t be that exceptional. I don’t think I’m that unusual. I think there might be a degree to which people are afraid that if they did anything different, their life would be completely destroyed, but they may not be correct. It might actually be fine.

I think that’s absolutely right. And I do wonder how much pressure studios or record labels or other powerful companies put on talent to use social media.

Yes, I know, there is a bizarre conformity thing; sometimes, if you’re not on social media and you did something with someone, they feel let down that you’re not there to promote them. But ultimately, the whole thing is just a fake, you know — again, I’m not on it and I seem to be doing fine. I mean, maybe if I was on social media I’d have some spectacularly bigger career or something, but it seems to me like I’m doing pretty well, really.

I don’t believe that the numbers would bear that out. You know I really don’t. I feel like even a robust social media following doesn’t actually translate into sales and real numbers. But I have no way of knowing that for sure.

Well, also, a lot of the numbers that are out there are fake. You can buy fake followers. But also, even the sales numbers might be fake these days. I mean, that’s one of the problems of being excessively quantitative. It’s so easy to fake numbers. A lot of these things don’t even mean anything.

Jaron, on behalf of LARB and myself, thank you for taking the time to talk today.

Wonderful. Thank you so much for doing this. And there are so many other things I want to talk to you about. I’m learning pedal steel, which has been a long-term ambition, and having a wonderful time, and it’s just amazing.

Well, I know some very good pedal steel players if you ever want to be in touch with them. By doing this interview with me, you’ve made me look a lot smarter than I am, so it’s the least I can do.

¤


Harper Simon is a musician who lives in Los Angeles.
