Fake News, Truth, and AI

The phenomenon has undermined our trust in electoral systems, in vaccines, and in what happened at the U.S. Capitol on January 6th. “False statements, misdirection, half-truths and outright lies: When promoted and repeated in the echo chambers of social media, they can shape attitudes, influence policy and erode democracy,” wrote Richard Stengel, the former U.S. Under Secretary of State for Public Diplomacy and Public Affairs. In “The Origins of Totalitarianism,” Hannah Arendt wrote that people “believe everything and nothing, think that everything was possible and nothing was true.” If this sounds familiar, it should.

“Social media is rewiring the central nervous system of humanity in real-time,” said MIT Sloan professor Sinan Aral. “We’re now at a crossroads between its promise and its peril.” False news spreads quickly online, aided by social media algorithms that amplify popular, and often incendiary, content. And social media companies and their advertisers often benefit from it, Aral said.

The Center for Sustainable Media believes that trust in information, and clarity around its provenance, is critical for information and media to act as the underpinning for creative freedom and democracy.

The CSM believes that the information explosion has created an urgent need to limit bad actors, to create cross-platform standards for attribution, and to teach students a more nuanced understanding of ‘truth’ so that digital data is viewed with sophistication and healthy skepticism.

CSM proposes research, project exploration, and amplification of the following six steps:

– Focused, cross-platform prosecution and limitation of known bad actors, spammers, stalkers, and creators of bad bots.
– Federal legislation giving users control and authority over their data, with limitations on surveillance marketing.
– Understanding and cross-platform review of how algorithms contribute to bias, racism, and polarization, along with a platform-wide standard to label and limit these behaviors.
– Encouragement of subscription alternatives that give consumers a choice between free (ad-supported) and paid (subscription) service plans.
– Public clarity around the danger of speech amplified by algorithms; consider industry-wide labeling of organic vs. algorithmically amplified content.
– As AI continues to automate formerly human-controlled behaviors, early study and consideration of the dangers of AI-determined ‘truth’.

From MIT Sloan:

Clint Watts, a research fellow with the Foreign Policy Research Institute, says one solution is to crack down on the most prolific known offenders.

The difficult balance between user privacy and platform transparency
Social media poses what Aral calls a “transparency paradox.” Researchers and the public have the right to know how social media platforms are accessing and using consumer data. But there’s also a need to protect user privacy and security.

Algorithmic transparency that lets researchers examine peer-to-peer information sharing without sharing personal information would lead to greater understanding about malicious use and how to prevent it, said Kate Starbird, an associate professor at the University of Washington. Some platforms are already more transparent than others. “We’re able to review data patterns on Twitter because their data is public,” she said. “Facebook and YouTube do not readily share data and we can’t study them very well.”

Lack of regulation for social media companies
Nick Clegg, vice president of global affairs at Facebook, said he agreed that independent oversight is a necessity. “We’re way beyond the stale debate of whether we need new rules of the road,” Clegg said during a discussion with Aral. Clegg also noted that if different areas of the world regulate social media differently, it could balkanize the internet. The U.S. and European Union need to work together, he said, and bring India into the fold.

Lack of competition
Competition is a big incentive for companies to change behavior, Aral noted, but the social economy is concentrated among a few players: Facebook, Twitter, and Google.

“We’re dealing with an array of issues, including concentration that is choking off innovation, harming advertisers and small businesses, and leading to less competition for quality and privacy,” said Zephyr Teachout, an assistant professor of law at Fordham Law School.

The European Union is considering the Digital Markets Act, which would address anti-competitive practices and dictate corporate responsibility for non-compliance. This might be a model for other areas.

Algorithms contribute to bias, racism, and polarization
Social media and search engines have become the main way people organize and access information, said Safiya Noble, co-founder of the Center for Critical Internet Inquiry at UCLA. But the companies that run them are guided by profit rather than by democracy or human rights, she noted, and sometimes the most popular, profitable speech promotes racism, misinformation, and polarization.

Part of the problem is frictionless systems that allow users to easily retweet and share this kind of information, said Renée Richardson Gosline, a principal research scientist at MIT Sloan. Introducing friction by slowing online interactions and giving users the chance to think before sharing information is one solution, she said.

Social media business models don’t always serve users
Social media business models are built on the attention economy, in which platforms sell users’ attention for advertising. But what gets attention isn’t always good for users, or society. Revising business models away from the attention economy could help.

Subscription-based models, which aren’t tied to advertising, are an alternative, said Scott Galloway, an adjunct professor of marketing at New York University, though he noted that there is a danger if the best, fact-checked information is available only behind a paywall.

The line between free speech and harmful speech is sometimes unclear
Section 230 of the Communications Decency Act provides websites with immunity from liability for third-party content. It needs to be reformed to make platforms more liable for the content they publish, said Richard Stengel, a former Under Secretary of State for Public Diplomacy and Public Affairs and former managing editor of Time magazine. “Regulations have to incentivize platforms to take responsibility for illegal content just as Time magazine was,” he said, noting that platforms are currently in a gray area when it comes to regulating content.

Renée DiResta, research manager at the Stanford Internet Observatory, said policy should also differentiate between free speech and free reach. The right to free speech doesn’t extend to a right to have that speech amplified by algorithms.

“There’s always been this division between your right to speak and your right to have a megaphone that reaches hundreds of millions of people,” she said.

Sources:

https://mitsloan.mit.edu/ideas-made-to-matter/social-media-broken-a-new-report-offers-25-ways-to-fix-it