
The 2022 Social Media Summit@MIT Event Report


2022 SOCIAL MEDIA SUMMIT @ MIT

WHAT’S NEXT FOR SOCIAL MEDIA?

5 GLOBAL TRENDS TO WATCH IN VOLATILE TIMES


A MINEFIELD OF ONLINE VOLATILITY

The second annual Social Media Summit@MIT focused on the information war in Ukraine, fake news, the need for greater algorithmic transparency, and the importance of ethics in artificial intelligence.

“It’s extremely important to keep the conversation going, and that’s exactly what we intend to do.”
SINAN ARAL

The Social Media Summit@MIT (SMS), hosted by MIT’s Initiative on the Digital Economy (IDE), was launched last year in the midst of unprecedented upheavals sparked and organized on social media platforms—including the January 6, 2021, storming of the U.S. Capitol. And the turmoil didn’t stop there. In February, former President Donald Trump launched his own social media platform, Truth Social, after he was permanently banned from Twitter and suspended from Facebook for two years. In another shakeup, The Wall Street Journal published “The Facebook Files,” a damning, multipart report based on more than 10,000 documents leaked by a company whistleblower.

We’re less than halfway into 2022, yet it is already shaping up to be another pivotal year for social media, as Russia wages its brutal invasion of Ukraine with both real bombs and fake news.

All of these developments resonated during the 2022 SMS event—in particular, in a conversation between IDE Director Sinan Aral and the Facebook whistleblower, Frances Haugen (see the fireside chat that follows). Calls for more transparency from platform companies and algorithm designers were dominant throughout the day. Since the event, Twitter has become the target of a takeover bid by Elon Musk.

Discussions focused on the pressing concerns of misinformation amplified by social media and how to achieve the goals of AI and algorithmic transparency and ethics. Panels were led by top MIT researchers—David Rand, Dean Eckles, and Renée Richardson Gosline—who, according to Aral, are engaged in “groundbreaking research that is making meaningful inroads into solving the social media crisis.”

The moderators were joined by a diverse group of academics, social media pros, a state senator, and others, providing a rich day of contrasting views and opinions. One obvious trend is that social media’s clout is growing, and so is scrutiny of it. “It’s extremely important to keep the conversation going,” Aral told SMS attendees, “and that’s exactly what we intend to do.”

OVERVIEW
FIRESIDE CHAT WITH FRANCES HAUGEN
MISINFORMATION AND FAKE NEWS
ALGORITHMIC TRANSPARENCY
THE INFORMATION WAR IN UKRAINE
RESPONSIBLE AI
FINAL THOUGHTS & THANKS


On March 31, 2022, MIT’s Initiative on the Digital Economy (IDE) hosted the second annual Social Media Summit (SMS@MIT). The online event, which attracted more than 12,000 virtual attendees, convened technology and policy experts to examine the growing impact of social media on our democracies, our economies, and our public health—with a vision to craft meaningful solutions to the growing social media crisis.

5 GLOBAL TRENDS

1. Social media’s impact on child and adult psychology
2. The threat of online misinformation and the need for systemic solutions
3. The importance of algorithmic transparency—and how to achieve it
4. Expansion of social media’s impact on geopolitics and war
5. The formalization of AI ethics standards and training


FIRESIDE CHAT

FRANCES HAUGEN SPEAKS OUT

The Facebook whistleblower says the company must acknowledge its tremendous impact and become more transparent.

Frances Haugen is a former Facebook algorithmic product manager who today is better known as the company’s chief whistleblower. She joined Sinan Aral to discuss Facebook’s impact on society, how the company has resisted efforts to analyze its algorithms, and what actions it can take in the future.

Haugen, who earned an MBA from Harvard Business School and worked as an electrical engineer and data scientist before joining Facebook in 2019, said no one intends to be a whistleblower. “Living with a secret is really, really hard, especially when you think that secret affects people’s health, their lives, and their well-being,” she said.

That’s why Haugen said she left the company in 2021 and provided more than 10,000 internal documents to The Wall Street Journal. These documents became the basis for the newspaper’s series, “The Facebook Files.” As the Journal wrote, “Facebook knows, in acute detail, that its platform is riddled with flaws that cause harm, often in ways only the company fully understands.”

Frances Haugen provided more than 10,000 internal Facebook/Meta documents to the press.

One of Haugen’s biggest criticisms of Facebook, which was renamed Meta in 2021, concerns the way the company has, in her opinion, conflated the issues of censorship and algorithmic reach. Most social media critics say that algorithms promote dangerous and extreme content such as hate speech, vaccine misinformation, and poor body image messaging to young people.

Yet Facebook has been quick to frame the issue as one of censorship and free speech—not its proprietary algorithms, Haugen said. For example, the remit of the company’s Oversight Board is to censor those who don’t comply with content policies. This charter, Haugen noted, is deliberately narrow. “Facebook declined to ever let us discuss the non-content-based ways we could be dealing with safety problems”—such as building in some “pause” time before someone can share a link. “It sounds like a really small thing, but it’s the difference of 10% or 15% of [shared] misinformation,” she said. “That little bit of friction, giving people a chance to breathe before they share, has the same impact as the entire third-party fact-checking system.”

Aral agreed that there is a gap “between free speech and algorithmic reach,” and that fixing one doesn’t infringe on the other. He pointed to MIT research showing that when social-media users pause long enough to think critically, they’re less likely to spread fake news. “It’s a cognitive, technical solution that has nothing to do with [free] speech,” Aral said.

Kids and Social Media

Haugen also described the disturbing ways social media and targeted advertising affect teens and children. For example, Facebook’s surveys, some involving as many as 100,000 respondents, found that social-media addiction—euphemistically known as “problematic use”—is most common among 14-year-olds. Yet when The Wall Street Journal gave Facebook a chance to respond to these findings, the company pointed to other surveys with smaller sample sizes and different results.

“I can tell you as an algorithm specialist that these algorithms concentrate harms...in the form of vulnerability,” Haugen said. “If you have a rabbit hole you go down, they suck you toward that spot. Algorithms don’t have context. They don’t know if a topic is good for you or bad for you. All they know is that some topics really draw people in.”

Unfortunately, that often means more extreme content gets the most views. “Put in ‘healthy eating’ on Instagram,” she said, “and in the course of a couple of weeks, you end up [with content] that glorifies eating disorders.” (Meta has owned Instagram since 2012.)

She’d like to see legislation to keep children under 13 off most social media platforms; Meta documents show that 20% of 11-year-olds are on the platform. She’d also like adults to have the option to turn off targeted ads, and she would like to see a ban on targeted ads aimed at children and teens under the age of 16, such as those for weight-loss supplements.

Haugen also suggested that Facebook dedicate more resources to fighting misinformation, fake news, and hate speech. “We need flat ad rates,” she said. “Facebook’s own research has said over and over again that the shortest path to a click is hate, is anger. And so it ends up that angry, polarizing, divisive ads are five to 10 times cheaper than compassionate or empathetic ads.”

“If we want to be safe,” Haugen concluded, we need to have open conversations about these practices and “invest more on transparency.”

3 WAYS TO FIX FACEBOOK

Ban targeted ads to children under 16.
Dedicate more resources to fight fake news and hate speech.
Keep kids under 13 off the platform.




PANEL: MISINFORMATION

FAKE NEWS, REAL IMPACT

Is social media to blame for producing and spreading misinformation—or is it part of a broader problem?

David Rand, Professor, MIT Sloan; MIT IDE group leader
Renée DiResta, Research Manager, Stanford Internet Observatory
Rebecca Rausch, Massachusetts State Senator
Duncan Watts, Professor, University of Pennsylvania

Fake news and misinformation headlines are rampant: The presidential election was stolen. Vaccinations kill. Climate change is a hoax. To what extent is social media responsible? That was the critical question raised by expert panelists in the second session of SMS@MIT 2022.

The dangers are undoubtedly real, but Duncan Watts, Stevens University Professor at the Annenberg School for Communication at the University of Pennsylvania, observed that social media is one small cog in a larger set of mass media gears. “For the average American, the fraction of fake news in their media diet is extremely low,” he argued.

Research shows that most Americans still get their news primarily from television. And of the news they do consume online, fake news represents only about 1% of the total. “We need to look much more broadly than social media,” Watts said. “We need to look across all types of platforms and content” to determine the source of fake news. Today, there are interconnected ecosystems—from online influencers to cable networks and print media—that all contribute to amplifying misinformation.

Small Groups, Big Impacts

Fellow panelist Renée DiResta, research manager at Stanford Internet Observatory, maintained that even small groups of people spreading fake news and misinformation can have outsized reach and engagement online.

“The literal definition of the word propaganda means to propagate, the idea that information must be propagated,” she said. “And social media is a phenomenal tool for this, particularly where small numbers of people can propagate and achieve very, very significant reach in a way that they couldn’t in old broadcast media environments that were much more top-down.”

Specifically, DiResta explained how information goes viral and echo chambers arise. “Influencers have an amplification network, the hyper-partisan media outlet has an amplification network, until [misinformation] winds up being discussed on the nightly news,” she said. “And it’s a phenomenally distinct form of creating what is effectively propaganda...it’s just this fascinating dynamic that we see happening with increasing frequency over the last few years.”

Research upholds the idea that familiarity with a topic “cues” our brains for accuracy. In other words, the more often you hear an assertion—whether true, false or neutral—the more likely you are to believe it, according to moderator David Rand, MIT professor of management science and brain and cognitive sciences, and a group leader at the IDE.

Healthcare’s Unhealthy Messages

This repetition of false news is “massively problematic” for the public, said Massachusetts State Senator Rebecca Rausch. As an example, she cited reports stating that 147 of the leading anti-vaccination feeds, mainly on Instagram and YouTube, now have more than 10 million followers, a 25% increase in just the last year.

“A number of anti-vax leaders seized the COVID-19 pandemic as a historic opportunity to popularize anti-vaccine sentiment,” Rausch said. One result, she said, is that vaccine hesitancy is now rising, even for flu and other routine shots.

Rausch also cited reports that say 12 anti-vaccination sources are responsible for 65% of all anti-vax content online. Some people also profit from their misinformation by selling pills and other supplements they claim can act as vaccine alternatives.

Watts agreed that “small groups of people with extreme points of view and beliefs can indeed inflict disproportionate harm on society.” But, rather than saying “we’re all swimming in this sea of misinformation and there’s some large average effect that is being applied to society,” we should be looking at the broader context, he said.

Algorithmic Influences

Watts said that people often seek out certain content by choice. “It’s not necessarily that the platform is driving people into a particular extreme position,” he said. Evidence based on YouTube, for example, shows there is a lot of user demand driving traffic—people search for specific content, find it, and share it widely. Unfortunately, he said, “It’s shocking to be confronted with that, but it’s not necessarily a property of the social media platform.”

Yet DiResta said we can’t underestimate algorithmic influence. “There’s an expression: ‘We shape our systems; thereafter, they shape us,’” DiResta said. “We’re seeing the extent to which the network is shaped by the platform’s incentives.”

Both DiResta and Rausch believe some of the solutions rest with legislation. But Rausch asked at what point laws can supersede algorithms that promote fringe content. “What should we be changing, if anything?” she asked.

“We are very far from knowing what policies should be proposed in terms of changing social media, like platform behavior, and regulating it,” said Rand. “But we really need policy around transparency and making data available, breaking down the walled gardens so people from the outside can learn more about what is going on.”

For Watts, solutions are complex: “We can’t go back to a world where we don’t have the technology to communicate in this way...and it’s not at all clear that you can say, ‘Well, you guys can talk to each other about cargo bikes [online], but you can’t talk to each other about vaccines.’ I don’t deny that it’s a terrible problem, but I feel very conflicted about how we think about solutions.”

FACT CHECK

1% of online content is fake news.
12 sources are responsible for 65% of anti-vax content.
10 million people follow the top anti-vax news feeds.



PANEL: TRANSPARENCY

SEEING THROUGH SOCIAL MEDIA ALGORITHMS

The software that drives social media is top secret. But given platforms’ huge impact on society, should social media companies provide greater algorithmic transparency?

Dean Eckles, Associate Professor, MIT Sloan; MIT IDE group leader
Daphne Keller, Director, Program on Platform Regulation, Stanford University
Kartik Hosanagar, Professor, The Wharton School

“Simply gaining access to social media algorithms isn’t the complete answer.”
DEAN ECKLES

“Algorithmic transparency” may not be an everyday phrase, but the idea behind it is simple enough: social media platforms, including Facebook, Twitter, and YouTube, are having such a significant impact on society that researchers should be allowed to study the software programs that drive their recommendation engines, content rankings, feeds, and other processes.

Transparency was a common theme throughout the day, but one session at SMS@MIT 2022 focused entirely on the topic, digging deep into how that goal can be achieved and what the tradeoffs may be. Panelists explained that, to date, many social media companies have treated their software code as a state secret. Access to proprietary algorithms is granted to company insiders only—and that’s a problem in terms of verifying and testing the platforms and their content.

“Platforms have too much control,” said Kartik Hosanagar, professor of operations, information, and decisions at The Wharton School, referring to the relationship between algorithmic transparency and user trust. “Exposing that [information] to other researchers,” he added, “is extremely important.”

Complex Interactions

At the same time, simply gaining access to social media algorithms isn’t sufficient, said Dean Eckles, the panel’s moderator and an associate professor of marketing at MIT Sloan School. He said his own research shows “how hard it is to quantify some of the impacts of algorithmic ranking,” such as bias and harm.

Eckles noted that algorithms and consumers are in a feedback loop. There is an interdependence of sorts “because the algorithms are responding to user choices, and then users are making choices based on the algorithm.” Hosanagar added, “It’s a very complex interaction. It isn’t that one particular choice of algorithm always increases or always decreases filter bubbles. It also depends on how users respond.” The narrative that algorithms cause the filter bubble is too simple, he said, adding, “it’s far more nuanced than that.”

Daphne Keller, director of the Program on Platform Regulation at Stanford University’s Cyber Policy Center, would like to see more academic research into how social media platforms moderate and amplify content, and what sorts of content control—such as taking down offensive or false posts—their terms of service permit.

Unfortunately, “data scientists inside the platform have all of this [information],” Keller said. “Lots of people outside platforms have really compelling arguments about the good they can do in the world if they had that access. But we have to navigate those really significant barriers and competing values.” Opening APIs to other data scientists—as they are doing in the EU—would be a helpful start to more transparency. According to panelists, less clear is what data access consumers would want or use.

Political Power

The panel also discussed the difficulty of learning the exact impact of social media on politics. As Hosanagar—and Frances Haugen, formerly of Facebook—pointed out, the public sees only reports the social media companies make public. “We don’t know about the ones that are not approved internally,” he explained.

BREAKING THE CODE

1. Social platform companies continue to treat their algorithms as classified secrets.
2. Greater access to social media algorithms would allow researchers to explore platforms’ impacts and intentions.
3. Most media companies might share their code if offered incentives.

Keller added: “We need to have researchers and algorithm experts try to figure out a ranked list of priorities because we’re not going to get everything.”

One way forward without stifling innovation, Eckles suggested, would be to incentivize social media firms to share their internal data with other researchers. Those incentives could include public pressure and the threat of lawsuits. It’s already happened with Facebook sharing data with Social Science One, an independent research commission formed to study the effects of social media on elections—potentially a sign of good things to come.



PANEL: INFOWARS

UKRAINE’S SECOND BATTLEFIELD: INFORMATION

While Russia’s military invasion of Ukraine involves all-too-real soldiers, guns, and tanks, the two nations are also fighting a war of information. Their main weapon of choice? Social media.

Sinan Aral, Director, MIT IDE
Clint Watts, Distinguished Research Fellow, Foreign Policy Research Institute
Richard Stengel, Political Analyst, CNBC
Natalia Levina, Professor, NYU Stern School of Business

WAR OF WORDS

In Ukraine, social media is being enlisted for grassroots organizing and assistance.
The most widely used social media platforms include Telegram and TikTok.
Video is the most popular format for social media during the war.

There was nothing fake about Russia’s military assault on Ukraine on February 24, 2022. Soldiers attacked the country with guns, tanks, and bombs, and the war still rages. But on a second front—a parallel information war waged via social media—the most dangerous weapons were misinformation and fake news, said IDE director Sinan Aral as he introduced his expert panelists.

Without downplaying the severity of the violence committed by Russia’s military against Ukrainian civilians, panelists considered the implications of the shadow information war in Ukraine, as well as how both sides are using social media to rally global support and spread information and disinformation.

Natalia Levina, professor of information systems at NYU’s Stern School of Business, who grew up in Ukraine’s second-largest city, Kharkiv, said, “My day often starts with looking [online] at what’s going on in Kharkiv. And every day, I hope the bombardment of the city is less.”

Russia’s designs on Ukraine date back to at least 2014, the year Russia annexed Crimea. Panelist Richard Stengel, a CNBC analyst and former U.S. undersecretary of state, said that’s also when Russia launched an early and intense information war. At the time, Russian President Vladimir Putin denied the very existence of the invasion, even after Russian troops had crossed the border. Other Russian propaganda disseminated on social media and elsewhere falsely accused Ukrainians of being antisemitic Nazis.

Eight years later, the situation has shifted. This time, Stengel said, Russia’s propaganda on social media appears “antiquated,” “clunky,” and “uncreative.” That’s surprising, he said, since Russian doctrine holds that four-fifths of war is information, not kinetic action. “They have a sophisticated 30,000-foot view,” Stengel said, but they don’t seem to be executing it.

Levina was less sanguine, saying, “it’s unfortunately premature to say that Ukraine has won the information war.” Russia is disseminating its messages via TikTok and its own state-run media.

Video, Telegram Rule

TikTok and video have emerged as top weapons in this new information war, explained Clint Watts, a research fellow at the Foreign Policy Research Institute. “Video is king now across all platforms,” he said. “Over the last decade, video-enabled social media has become much more available and ubiquitous…everybody has a camera.”

Watts added that President Volodymyr Zelensky, a former actor, is a good video communicator. In addition, Zelensky and his government have used the power of platforms to crowdsource a virtual army. While Ukraine doesn’t have many physical resources, Watts said, it has a “worldwide audience that wants to help,” including people willing to fight, provide materials, donate through cryptocurrencies, and more. “A decade ago, we were talking about Twitter as the distribution platform,” Watts added. “The information battle today is on Telegram.”

At the same time, false videos are proliferating in Russia and in China—the latter, home of TikTok. As Aral noted, the messaging app Telegram, which is widely used in both Ukraine and Russia, is owned by a Russian who opposes moderating or removing disinformation from Russia or elsewhere.

“There is a ton to learn from what’s happening right now that would be instructive for democracies,” Watts said. “All our norms around how state conflicts run will change because this is the first social media-powered state conflict I’ve seen.”

For example, he said, Ukrainian soldiers are essentially conducting psychological operations on their adversary by directly text messaging soldiers on the front lines. “This is remarkable,” Watts added.

Reviving Personal Networks

Social media is also being used for positive ends. For example, Levina uses social media to check in with Ukrainian relatives, some of whom were too old or ill to evacuate.

Levina said that social media is also reviving old-fashioned personal networks within the greater Ukrainian community. “Somebody on Facebook—a friend of a friend—may say, ‘We know that people in this hospital...really need catheters,’” she said. “So everybody in the network is looking for medical catheters of a particular size that would then be shipped.” This kind of strong, grassroots organizing has been “amazing,” Levina said.

Still, Levina called for continued vigilance. “We have to be active skeptics, not cynics,” she said. “We really need to keep checking the information and not be lazy.” In the new information war, that’s a powerful command.



PANEL: RESPONSIBLE AI

WHO’S RESPONSIBLE FOR IRRESPONSIBLE AI?

Software does whatever it’s programmed to do. The primary factor behind AI ethics is the people who design and create it.

Renée Richardson Gosline, Senior Lecturer, MIT Sloan; MIT IDE group leader
Rumman Chowdhury, Director of Machine Learning Ethics, Transparency, and Accountability, Twitter
Chris Gilliard, Professor, Macomb Community College
Suresh Venkatasubramanian, Assistant Director, U.S. Office of Science and Technology Policy

The talk about artificial intelligence (AI) being ethical and responsible can be a bit misleading. Software itself is neither ethical nor responsible; it does what it’s been programmed to do. The greater concern is the people behind the software. Unfortunately, said panelists in this SMS@MIT 2022 session, the ethics of many AI developers and their companies fall short.

Some irresponsible or biased practices are due to a kind of high-tech myopia, said Rumman Chowdhury, Twitter’s director of machine learning ethics, transparency, and accountability. In Silicon Valley, “people fall into the trap of solving the problems they see right in front of their faces,” she said, and those are often problems faced by the privileged. As a result, she added, “we can’t solve, or even put adequate resources behind solving larger issues of imbalanced data sets or algorithmic ethics.”

“The most fascinating part of working in responsible AI and machine learning is that we’re the ones that get to think about these systems, truly as socio-technical systems,” Chowdhury said.

“There isn’t just one single thing that government or industry or academia needs to do to address these broader questions. It’s a whole coalition of efforts that we have to build together.”
SURESH VENKATASUBRAMANIAN

Myopia also can be seen in business-school students, noted Renée Richardson Gosline, the panel’s moderator, a senior lecturer in management science at MIT Sloan School, and a leader at the MIT IDE. MBA students “have all of these wonderful ideas for companies that they’d like to launch,” she said. “And the ethics of the AI conversation oftentimes lags behind other concerns that they have.”

‘Massive Harms’

Panelist Chris Gilliard, Professor of English at Macomb Community College and an outspoken social media critic, took a more direct stance. “We should do more than just wait for AI developers to become more ethical,” he insisted. Instead, Gilliard advocates for stringent government intervention. The tradeoff for having sophisticated technology should not be surveillance and sacrificing privacy, in his view: “If we look at how other industries work…there are mechanisms so that you are typically not allowed to just release something, do massive amounts of damage, and then perhaps address those damages later on.”

Gilliard acknowledged that his pro-regulation stance is opposed in Silicon Valley, where unfettered innovation is coveted. “Using that as an excuse for companies to perpetuate all manner of harms has been a disastrous formulation,” Gilliard said, “not just for individuals, but for countries and society and democracy.”

Chowdhury acknowledged the responsibility corporations bear. “In industry, doing responsible AI means that you are ensuring that what you are building is, at the very least, not harming people at scale, and you are doing your best to help identify and mitigate those harms,” she said. Beyond that, she added, “responsible AI is also about enabling humans to flourish and thrive.” Chowdhury sees many startups building on these ideas as they develop their companies, and ethical AI may actually “drive the next wave of unicorns,” she said.

Working Together

Suresh Venkatasubramanian, assistant director of the U.S. Office of Science and Technology Policy, a branch of the White House, has a pragmatic perspective. He maintained that “there isn’t a single thing that government or industry or academia needs to do to address these broader questions. It’s a whole coalition of efforts that we have to build together.”

Those efforts, he added, could include “guardrails” and best practices for software development, making sure that new products are tested on the same populations that will ultimately use them. More rigorous testing is also needed to protect people from what he called “discriminatory impacts.”

Chowdhury summed it up by saying that “responsible AI is not this thing you do on the side after you build your tech. It is actually a core part of ensuring your tech is durable.” She urged companies to “carve out meaningful room for responsible AI practices, not as a feel-good function, but as a core business value.”

Venkatasubramanian agreed that articulating ethical values and rights is important. But once that’s done, he added, it’s time to “allow our technologists and our creative folks to build technologies that can help us respect those rights.”

3 GROUPS WORKING FOR MORE ETHICAL TECH

Startups & Society Initiative promotes the adoption of more ethical and socially responsible practices in technology firms.

Parity Responsible Innovation Fund invests in innovation that protects privacy and security rights and the ethical use of technology.

National AI Research Resource Task Force is a joint venture of the U.S. National Science Foundation and the Office of Science and Technology Policy.



FINAL THOUGHTS

The rise of social media has created new and complex challenges, and there’s no silver bullet to solve them all. That’s why the MIT Initiative on the Digital Economy will continue to be a nexus for ongoing social media research. It’s also why we intend to keep our annual Social Media Summit@MIT event free and open to the public.

The IDE serves as an important hub for academia, industry, public policy, the economy and society. Understanding the consequences of social media, both intended and unintended, is a mission we take seriously. The sometimes contradictory goals are to protect privacy while enabling democracy, and to prevent abuse while providing a platform where everyone can share their views.

We hope you’ll help keep the IDE at the forefront of these issues with your support and participation. To learn more about the IDE, and to provide support to our important research and events, please reach out to Devin Cook or David Verrill.

“The goal is to protect privacy while enabling democracy.”
SINAN ARAL

MANY THANKS

Thank you to our SMS@MIT channel partners. The MIT Social Media Summit was made possible by a collaboration with the MIT Office of External Relations.

CONTENT: Peter Krass, Paula Klein. SESSION ILLUSTRATIONS: DPICT.

