Social media has drastically restructured the way we communicate in an incredibly short period of time. We can discover, "Like," click on, and share information faster than ever before, guided by algorithms most of us don't quite understand.
And while some social scientists, journalists, and activists have been raising concerns about how this is affecting our democracy, mental health, and relationships, we haven't seen biologists and ecologists weighing in as much.
That's changed with a new paper published in the prestigious science journal PNAS earlier this month, titled "Stewardship of global collective behavior."
Seventeen researchers who specialize in widely different fields, from climate science to philosophy, make the case that academics should treat the study of technology's large-scale impact on society as a "crisis discipline." A crisis discipline is a field in which scientists across different disciplines work quickly to address an urgent societal problem, the way conservation biology tries to protect endangered species or climate science aims to stop global warming.
The paper argues that our lack of understanding about the collective behavioral effects of new technology is a danger to democracy and scientific progress. For example, the paper says that tech companies have "fumbled their way through the ongoing coronavirus pandemic, unable to stem the 'infodemic' of misinformation" that has hindered widespread acceptance of masks and vaccines. The authors warn that if left misunderstood and unchecked, the unintended consequences of new technology could contribute to phenomena such as "election tampering, disease, violent extremism, famine, racism, and war."
It's a grave warning and call to action by an unusually diverse swath of scholars across disciplines, and their collaboration indicates how concerned they are.
Recode spoke with the lead author of the paper, Joe Bak-Coleman, a postdoctoral fellow at the University of Washington Center for an Informed Public, as well as co-author Carl Bergstrom, a biology professor at the University of Washington, to better understand this call for a paradigm shift in how scientists study the technology we use every day.
The two interviews have been combined and lightly edited for length and clarity.
You tweeted that this paper is one of the most important ones you've published yet. Why?
My original background is in infectious disease epidemiology, respiratory viruses. And so I was able to do some stuff that's reasonably important during Covid. What I'm doing there is really filling in the details in a well-established framework. So it's more, you know, dotting the i's and crossing the t's.
And I think what's really important about this paper is that it's not doing that at all. It's saying, "Here's a massive problem, and here's a way to conceptualize it that is critically important for the future."
And, you know, it's like an alarm going off upstairs. It's a call to arms. It's saying, "Hey, we've got to solve this problem, and we don't have a lot of time."
And what is that problem? What are you sounding the alarm bell on?
My sense is that social media in particular, as well as a broader range of internet technologies including algorithmically driven search and click-based advertising, has changed the way that people get information and form opinions about the world.
And they seem to have done so in a manner that makes people particularly vulnerable to the spread of misinformation and disinformation.
Just as one example: A paper, a poorly done research paper, can come out suggesting that hydroxychloroquine might be a treatment for Covid. And in a matter of days, you have world leaders promoting it, people struggling to get [this medicine], and it no longer being available to people who need it for treatment of other conditions, which is actually a serious health problem.
So you can have these bits of misinformation that explode at unprecedented velocity in ways that they wouldn't have prior to this information ecosystem.
[Now], you can create large communities of people that hold constellations of beliefs that are not grounded in reality, such as [the conspiracy theory] QAnon. You can have ideas like anti-vaccination ideas spread in new ways. You can create polarization in new ways.
And [you can] create an information environment where misinformation seems to spread organically. And also [these communities can] be extremely vulnerable to targeted disinformation. We don't even know the scope of that yet.
The question we were trying to answer was, "What can we infer about the course of society at scale, given what we know about complex systems?"
It's kind of like how we use mouse models or flies to understand neuroscience. Part of this came back to animal societies, groups in particular, to understand what they tell us about collective behavior in general, but also about complex systems more broadly.
So our goal is to take that perspective and look at human society through it. And one of the things about complex systems is that they have a finite limit to perturbation. If you disturb them too much, they change. And they often tend to fail catastrophically, unexpectedly, without warning.
We see this in financial markets: all of a sudden, they crash out of nowhere.
My hope is very much that this [paper] will sort of galvanize people. The issues that are in this paper are ones that people have been thinking about from many, many different fields. It's not like these are new issues entirely.
It's rather that I think this paper will hopefully really highlight the magnitude of what's happened and the urgency of fixing it. Hopefully, it'll galvanize some kind of transdisciplinary collaborations.
So it's important because it says this needs to be a crisis discipline; this is something that we don't understand. We don't have a theory for how all of these changes are affecting the way that people come to form their beliefs and opinions, and then use those to make decisions. And yet, that's all changing. It's happening. …
There's a misperception that we're saying, "Exposure to ads is bad, and that's causing the harm." That's not what we're saying. Exposure to ads may or may not be bad. What we're concerned about is the fact that this information ecosystem has developed to optimize something orthogonal to things that we think are extremely important, like being concerned about the veracity of information or the effect of information on human well-being, on democracy, on health, on the ecosystem.
Those issues are just being left to sort themselves out, without a whole lot of thought or guidance around them.
That puts it in this crisis discipline space. It's like climate science, where you don't have time to sit down and work out everything definitively. This paper is essentially saying something quite similar: we don't have time to wait. We need to start addressing these problems now.
What do you say to the people who think this is not really a crisis and argue that people had similar concerns when the printing press came out, concerns that now seem alarmist?
Well, with the printing press, I would push back. The printing press came out and upended history. We're still recovering from the capacity that the printing press gave to Martin Luther. It radically changed the political landscape in Europe. And, you know, depending on whose histories you go by, you had decades if not centuries of war [after it was introduced].
So, did we somehow recover? Sure we did. Would it have been better to do it in a stewarded way? I don't know. Maybe. These major transitions in information technology often cause collateral damage. We tend to hope that they also bring about a tremendous amount of good as they advance human knowledge and all of that. But even the fact that you've survived doesn't mean that it's not worth thinking about how to get through it smoothly.
It reminds me of one of the least intelligent critiques of the [Covid-19] vaccines that we're using now: "We didn't have vaccines during the Black Death plague. And we're still here." We are, but it took out a third of the population of Europe.
Right, so there is pain and suffering that happened with all those transformational technologies as well.
Yeah. So I think it's important to recognize that. It's still possible to mitigate harm as you go through a transformation, even if you know you're going to be fine. I also don't think it's completely obvious that we are going to be fine on the other end.
One of the really key messages of the paper is that there tends to be this general trust that everything will work out, that people will eventually learn to screen sources of information, that the market will take care of it.
And I think one of the things that the paper is saying is that we've got no particular reason to think that that's right. There's no reason why good information will rise to the top of any ecosystem we've designed. So we're very concerned about that.
One important defense of social media is that Facebook and Twitter can be places where people share new ideas that are not mainstream but end up being right. Sometimes media gatekeepers get things wrong, and social media can allow better information to come out. For example, some people, like Zeynep Tufekci, were sounding the alarm on the pandemic early, largely on Twitter, back in February 2020, far ahead of the CDC and most journalists.
Yeah, to judge that, you have to look at the net influence of the system, right? Somebody on social media may have things right, but if the net influence of social media is to promote anti-vaccination sentiment in the United States to the point that we're not going to be able to reach herd immunity, that doesn't let social media off the hook. …
I was enormously optimistic about the internet in the '90s. [I thought] this really was going to remove the gatekeepers and allow people who did not have financial, social, and political capital to get their stories out there.
And it's certainly possible for all that to be true and for the concerns that we express in our paper to also be correct.
Democratizing information has had profound effects, especially for marginalized, underrepresented communities. It gives them the ability to rally online, have a platform, and have a voice. And that is fantastic. At the same time, we have things like the genocide of Rohingya Muslims and an insurrection at the Capitol happening as well. And I hope that it's a false statement to say we have to have those growing pains to have the benefits.
How much do we know about whether [misinformation] has increased in the past year or five years, 10 years, and by how much?
That's one of the real challenges that we're facing, actually: we don't have a lot of information. We need to figure out how, and to what degree, people have been exposed to misinformation, and to what degree that is influencing subsequent online behavior. All of this information is held exclusively by the tech companies that are running these platforms.
[Editor's note: Most major social media companies work with academics who research their platforms' effects on society, but the companies restrict and control how much information researchers can use.]
What does treating the impact of social media as a crisis discipline mean?
For me, a crisis discipline is a situation where you don't have all of the information that you need to know exactly what to do, but you don't have time to wait to figure it out.
This was the situation with Covid in February or March 2020. We're definitely in that position with global climate change. We've got better models than we did 20 years ago, but we still don't have a complete description of how that system works. And yet, we certainly don't have time to wait around and figure all that out.
And here, I think that the speed with which social media, combined with a whole number of other things, has led to very widespread disinformation, [which] here in the United States [is] causing major political upheaval, is striking. How many more elections do you think we have before things get substantially worse?
So there are these super-hard problems that take radical transdisciplinary work. We need to figure out how to come together and talk about all that. But at the same time, we have to be taking action.
How do you respond to the chicken-and-egg argument? You hear defenders of technology say, "We're just seeing real-world polarization reflected online," arguing there's no proof that the internet is causing polarization.
This should be a familiar argument. This is what Big Tobacco used, right? This is Merchants of Doubt stuff. They said, "Well, you know, yeah, sure, lung cancer rates are going up, especially among smokers, but there's no proof it's been caused by that."
And now we're hearing the same thing about misinformation: "Yeah, sure, there's a lot of misinformation online, but it doesn't change anyone's behavior." But then all of a sudden you got a guy in a loincloth with buffalo horns running around the Capitol building.
The paper calls for people to more urgently understand the impact of the rapid advances in communication technology over the past 15 years. Do you think this isn't being addressed enough by academic scientists, government leaders, or companies?
There's been a lot of work that's been done here, and I don't think we're trying to reinvent that wheel at all. But what we're really trying to do is highlight the need for urgent action and draw these parallels to climate change and to conservation biology, where they've been dealing with really similar problems, and to the way they've structured themselves: climate change research now involves everyone from chemists to ecologists. And I think social science tends to be fairly fragmented into subdisciplines, without a lot of connection between them. Trying to bring that together was a major goal of this paper.
I'm biased to be very aware of this problem because my job is to report on social media, but it feels like there is a lot of fear and concern about social media's impact. Misinformation, phone addiction: these seem to be issues that everyday people worry about. Why do you think there still isn't enough attention on this?
When I talk to people about social media, yes, there's a lot of concern, there's a lot of negativity, and then there's bias from being a parent as well. But the focus is often on the individual-level effects. So it's, "My kids are developing negative issues around self-esteem because of the way that Instagram is structured to get 'Likes' for being perfect and showing more of your body."
But there's less talk about the large-scale structural changes that this is inducing. So what we're saying is, we really want people to look at the large-scale structural changes that these technologies are driving in society.