Senators Charge Addiction Drives Social Media Platforms’ Business Models
WASHINGTON – Even after a call to arms by the Kenosha Guard militia group had been flagged on Facebook 450 times, it was not taken down because it did not “meet the standards” for removal, Sen. Dick Durbin, D-Ill., said at a Senate hearing on Tuesday. Two protesters were later shot and killed with an AR-15 by Kyle Rittenhouse, who drove from Illinois to Wisconsin last August to answer the call.
According to Durbin, chairman of the Senate Judiciary Committee, the “consequences” – from Kenosha to January’s Capitol storming – of the algorithms embedded into social media platforms “have never been clearer.” These algorithms have not only changed the way people engage and what they watch, buy and read, but “can drive people to a self-reinforcing echo chamber of extremism,” he said at a subcommittee hearing on the impact the platforms have in shaping the choices, opinions and actions of everyday Americans.
Even though witnesses from social media platforms Facebook, Twitter and Google’s YouTube listed their efforts to mitigate the spread of disinformation or harmful content, the lawmakers slammed their business models, which they charged are primarily based on addiction.
“The business model is addiction. Money is directly correlated to the amount of time that people spend on the site,” said Sen. Ben Sasse, R-Neb.
The “power” of these algorithms “to manipulate social media addiction,” especially of teenagers, is “something that should terrify each of us,” said Sen. Marsha Blackburn, R-Tenn.
This is not the first time the platforms have testified before Congress on how the algorithms embedded in their sites affect their users, with lawmakers questioning whether the platforms can properly police themselves absent federal regulation.
Yet, as the world transitions into a digital economy, “existential threats” are growing exponentially faster than the country’s ability to “mitigate or respond” to them, said Tristan Harris, former design ethicist at Google and now president of the Center for Humane Technology. For every 200 billion daily posts on Facebook’s WhatsApp, he noted, fact-checkers review only 100.
One threat he singled out was China’s rise as the world transitions into a “digital society.” China, he explained, operates a “digital closed society” of surveillance, censorship, thought control, behavior modification and the like.
The U.S., by contrast, has a digital “open” society, he said, one that has left Americans “constantly immersed in distractions” and “unable to focus on our real problems,” such as rising suicide rates from cyberbullying, pandemic mis- and disinformation, civil rights and national security. If China flew a fighter jet over the U.S., the Defense Department would shoot it down. “But if they try to fly an information bomb, they are met with a white-gloved algorithm” from one of the platforms asking which ZIP code they’d like to target, he claimed.
The U.S. prizes free speech above most other values, he said, but that comes at a cost when free speech becomes “a Frankenstein monster that spins out blocks of attention virally,” personalized to different people to “outrage” them. That personalization, Harris explained, leads each user into their own “rabbit hole of reality.”
The problem, he said, is not just the engagement-based business model but the design of the platforms themselves, which have turned users into “yellow journalists” generating “attention production” by letting loose their “five minutes of moral outrage,” which users then share with one another. By using algorithms rather than human editors to sort content, he added, the platforms have created a “values-blind process” that allows harms to pop up “in all the blindspots.”
And “value blindness destroys our democracy faster than people are essentially raising the alarms,” he said.
This is not to say that the platforms have not been trying their best to curtail these issues, he said, but they are “trapped” by their algorithms and end up favoring content that is “addicted, outraged, polarized, narcissistic and disinformed.”
The more “extreme” the content, the more “likes” and “follows” a user will get, and the more the user will continue to engage in this “attention treadmill,” he explained, because “if it bleeds, it leads.”