
Social Media (continued)

Remember the good ol’ days, when the Internet was just a simple, innocent place to go for useful information like movie times and weather? Just a simple place where you could track down your old high school boyfriend or girlfriend to see if their life had indeed gone on without you and to make sure their significant other was not as pretty or fun as you are?

How times have changed in such a short time. Over the past few years we have, unfortunately, seen the Internet’s dark side. We have seen intimate photos of women posted without their consent for revenge or just a cheap thrill; fake social media accounts created to harass and embarrass ex-boyfriends and girlfriends; and Americans accused of murder and other horrible crimes, with zero evidence – openly defamed, maligned and slandered with little recourse.

We have seen terrorist propaganda accounts enabled and empowered and foreign countries maliciously attack our sacred elections. We have seen social media firms shamelessly sell us out by not only failing to protect our personal information but actively pimping it out, and we have seen truth and productive discourse replaced by disinformation and hate.

Like a slow-moving car crash, we have seen the Internet mutate from an innocent, cuddly calico kitten into an irresponsible, out-of-control Bengal tiger – weaponized for the destruction of almost everything we hold dear, from our personal privacy to our hallowed democracy.

We must get a handle on this, and fast... especially in the age of Artificial Intelligence (AI), which is just going to make all this worse.


There are tons of issues that need to be addressed involving the Internet – from cybersecurity to online influence operations to cyberbullying – but, in my mind, the accountability of social networks is at the top of the list… and that accountability needs to extend beyond misinformation on their platforms to how we are treated as consumers.

Why should social networks be accountable to consumers, you may ask? Our Facebook, Instagram, TikTok and X accounts are free anyway, right? What do we care? No! No! No! Many services in the digital economy appear to be free, but you pay dearly for them – not with money, but with your personal data. In fact, your personal information is a currency far more valuable to social media companies than any large monthly fee you could pay them.

We are the product being sold here. Our likes and dislikes, our desires and preferences, our vulnerabilities and insecurities. What we eat, when we sleep, why we vote, what we buy, where we shop. Who we worship, who our friends are, who our enemies are… all sold to the highest bidder.

Social media companies not only have access to a mind-boggling amount of our personal data; they also possess an unprecedented “social graph” that lets them know not only the desires and habits of each of their members, but also how each member connects and interacts with every other member. This goldmine is invaluable to advertisers. In truth, you can’t even put a value on it – but if you could, the numbers would be $44 billion, $330 billion and $1.9 trillion, the most recent valuations of X, ByteDance (TikTok) and Meta (Facebook and Instagram).

Until fairly recently, when the public finally became more aware of their behavior, these companies showed little concern about the consequences of their actions, even though they knew exactly how they were manipulating their users and negatively affecting society. Their irresponsible behavior didn’t stop with enabling Russian bots and fake Antifa accounts, or even with the spread of disinformation, conspiracy theories and hate speech. They also punted on basic human decency.

For example, in 2018 Facebook employees created a slide presentation as part of an internal effort to understand how Facebook shapes user behavior and how the company might alleviate its potentially harmful effects. One of the slides said: “Our algorithms exploit the human brain’s attraction to divisiveness. If left unchecked, Facebook would feed users more and more divisive content in an effort to gain user attention and increase time on the platform.”

Facebook founder and chief executive Mark Zuckerberg, along with other senior members of his team, seemingly buried the results of the research. What’s even more disturbing is that, according to The Wall Street Journal, “the concern was that some proposed changes would have disproportionately affected conservative users and publishers, at a time when the company faced accusations from the right of political bias.” In other words, the leaders of Facebook threw us all under the bus because of potential political consequences.


The Wall Street Journal also reported that “a 2016 presentation that names as author a Facebook researcher and sociologist, Monica Lee, found extremist content thriving in more than one-third of large German political groups on the platform.” Her presentation found that, “swamped with racist, conspiracy-minded and pro-Russian content, the groups were disproportionately influenced by a subset of hyperactive users. Most of them were private or secret. The high number of extremist groups was concerning, the presentation says.” The WSJ article continued, “Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation stated that ‘64 percent of all extremist group joins were due to our recommendation tools’ and that most of the activity came from the platform’s Groups You Should Join and Discover algorithms: ‘Our recommendation systems grow the problem.’”

So, Facebook knew for years before the 2020 election and the January 6th Capitol attacks that its own algorithms promoted and even encouraged extremism. That is truly beyond the pale.

In July 2020, Facebook released the results of a long-awaited audit of its civil rights policies. It wasn’t good: “With each success the auditors became more hopeful that Facebook would develop a more coherent and positive plan of action that demonstrated, in word and deed, the company’s commitment to civil rights. Unfortunately, in our view Facebook’s approach to civil rights remains too reactive and piecemeal.”

Perhaps most exasperating to the auditors was Mark Zuckerberg’s stance on political speech. Using the example of Donald Trump’s May 2020 Facebook post that warned protesters “when the looting starts, the shooting starts,” they said: “After the company publicly left up the looting and shooting post, more than five political and merchandise ads have run on Facebook sending the same dangerous message that ‘looters’ and ‘Antifa terrorists’ can or should be shot by armed citizens. The auditors do not believe that Facebook is sufficiently attuned to the depth of concern on the issue of polarization and the way that the algorithms used by Facebook inadvertently fuel extreme and polarizing content… When powerful politicians do not have to abide by the same rules that everyone else does, a hierarchy of speech is created that privileges certain voices over less powerful voices.”

To be fair, social media firms seemed far more disciplined right before, during and after the 2020 election. According to The Economist, Facebook removed ten times as many hate speech posts as it had two years before, and deactivated 17 million fake accounts every single day, double the number from three years prior. Facebook also reinforced its security teams, conducted practice drills to plan for every possible election outcome, blocked new political ads for certain time periods, limited the number of people and/or groups with which a message could be shared, and strengthened transparency rules for advertisers.

Before the 2024 election, Meta (Facebook and Instagram) revealed it had invested over $20 billion in safety and security for elections around the world since 2016. More than 40,000 employees worked on these efforts. Meta also worked with fact-checkers – including PolitiFact, Reuters and USA Today – to add fact-check labels to discredited election content, amplify verified voting resources, and label AI-generated content.

In January 2024, TikTok announced its election integrity plans for the upcoming election. Partnering with the nonprofit Democracy Works, TikTok worked with electoral commissions and fact-checking organizations, and built Election Centers to connect people to trustworthy information about voting. The company also pledged to consistently enforce its community rules to counter misinformation, deter covert influence operations, and address misleading AI-generated content. TikTok shared an update on its election integrity plan in September, adding that it expected to invest over $2 billion in trust and safety in 2024 alone.

So, given these improvements, we were beginning to feel a little better about this, thinking that these companies were finally trying – or at least acting like they were trying – to do better. Then, two things happened.

First, Inauguration Day 2025. There the social media guys were, bending the knee to Donald Trump as shamelessly as any of his Cabinet members, and buying access to him like the oligarchs they have now proven to be. Shou Zi Chew, the CEO of TikTok, was there, as was Mark Zuckerberg and, of course, Elon (pre-bromance breakup). Because federal law prohibits foreign nationals from making donations in connection with U.S. elections, ByteDance (TikTok) did not donate directly to the presidential inauguration committee. However, Pennsylvania billionaire and longtime Republican megadonor Jeff Yass, a major investor in the company, donated $16 million to MAGA Inc. (a super PAC that supports Donald Trump) in the first half of 2025 alone. Even though it couldn’t officially bribe anyone via the actual 2024 election, ByteDance found a way in by lobbying the U.S. government, spending $10.3 million in 2024 and $4.8 million in the first nine months of 2025 alone.

But that is child’s play compared to Meta, which spent $24.4 million on lobbying in 2024 and $13.8 million in the first nine months of 2025. Two weeks after Mark Zuckerberg met privately with President-elect Trump at Mar-a-Lago in November 2024, Meta donated $1 million to his inaugural fund – but the brown-nosing started way before then. After the attempt on Trump’s life in July 2024, Zuckerberg said, “Seeing Donald Trump get up after getting shot in the face and pump his fist in the air with the American flag is one of the most badass things I’ve ever seen in my life.”

And it only got more vomit-inducing from there. At a White House dinner in early September 2025, President Trump asked Zuckerberg – who was sitting, literally, at his right hand – “How much are you spending, would you say, over the next few years?” Mark, obviously unprepared for the question, stammered, “Oh gosh. Um, I mean I think it’s probably going to be something, like, I don’t know, at least $600 billion through ‘28 in the U.S. Yeah.” (This mishmash of an answer came after Mark admitted he hadn’t been listening when an earlier question was directed at him.) But that wasn’t the embarrassing part. That came a few minutes later, when Zuckerberg was caught on a hot mic groveling like a… well… groveling to Trump. “I’m sorry, I wasn’t ready to do our... I wasn’t sure what number you wanted to go with.”

As it turns out, even that wasn’t the most embarrassing part of the evening. The truly humiliating part was the titans of tech, sitting around a huge rectangular table, fawning over The Donald, North Korean-style. (If you haven’t seen video of this circle jerk, we urge you to search for it right this second. It comes off way worse when you actually experience it first-hand.)

OpenAI CEO Sam Altman: “Thank you for being such a pro-business, pro-innovation president. It’s a very refreshing change. I think it’s going to set us up for a long period of leading the world, and that wouldn’t be happening without your leadership.” Apple Chief Executive Tim Cook, after saying Apple expects to invest $600 billion in the U.S.: “I want to thank you for setting the tone such that we can make a major investment in the United States and have some key manufacturing here. I think it says a lot about your leadership and focus on innovation.” (Google CEO Sundar Pichai and IBM Chairman and CEO Arvind Krishna were also there.)


Good grief, guys. Have a little dignity. Seriously. Are you really that willing to trade your integrity and self-respect for a little less regulation? How much money do you need?

The second thing that dashed our newfound hope came on September 8, 2025, when The Washington Post reported on “documents from inside Meta that were recently disclosed to Congress by two current and two former employees who allege that Meta suppressed research that might have illuminated potential safety risks to children and teens on the company’s virtual reality devices and apps.”


The story began like this: “At her home in western Germany, a woman told a team of visiting researchers from Meta that she did not allow her sons to interact with strangers on the social media giant’s virtual reality headsets. Then her teenage son interjected… he frequently encountered strangers, and adults had sexually propositioned his little brother, who was younger than ten, numerous times. ‘I felt this deep sadness watching the mother’s response,’ one of the researchers, Jason Sattizahn, told The Washington Post regarding the April 2023 conversation. ‘Her face in real time displayed her realization that what she thought she knew of Meta’s technology was completely wrong.’”

The Post continued, “Meta had publicly committed to making child safety a top priority across its platforms. But Sattizahn and a second researcher, who specializes in studying youths and technology, said that after the interview, their boss ordered the recording of the teen’s claims deleted, along with all written records of his comments. An internal Meta report on the research said that in general, German parents and teens feared grooming by strangers in virtual reality – but the report did not include the teen’s assertion that his younger sibling actually had been targeted.”

From the looks of it, deniability was part of Meta’s strategy all along. The Washington Post, again: “After leaked Meta studies led to congressional hearings in 2021, the company deployed its legal team to screen, edit and sometimes veto internal research about youth safety in VR (virtual reality), according to a joint statement the current and former employees submitted to Congress in May. They assert Meta’s legal team was seeking to ‘establish plausible deniability’ about negative effects of the company’s products.”


Meta did not directly dispute or confirm the events in Germany, but its spokeswoman Dani Lever said the allegation that Meta curtailed research is based on a few examples “stitched together to fit a predetermined and false narrative.” She added: “We stand by our research team’s excellent work and are dismayed by these mischaracterizations of the team’s efforts.”

This chronic irresponsible behavior brings to mind the famous piece of advice attributed to Maya Angelou: “When someone shows you who they are, believe them.” It’s clear we can’t rely on these guys to police themselves. Because of their size and the scale of their impact on communication, media, and civil society overall, the stakes are just way too high.

For one, cleaning this mess up flies directly in the face of their entire profit model, which is an obvious disincentive. Social media algorithms are designed to attract as much of the user’s attention as possible, then push the user to interact with others. The algorithms don’t distinguish between “good” and “bad” content; they just understand that they need to push the content that gets the most comments, clicks and shares. Given all the examples we just covered, we know that primitive emotion and extreme behavior generate more attention and interest than cats playing pat-a-cake, meaning these companies make more money on the extremes – which is exactly why Facebook executives buried their own research.
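To make that incentive concrete, here is a toy sketch in Python of the kind of engagement-weighted ranking the paragraph above describes. The weights, field names and posts are all invented for illustration; no company's actual ranking system looks this simple. The point is structural: nothing in the objective rewards truth or civility, so whatever drives comments and shares rises to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    """Score a post purely on engagement.

    Note what is NOT here: no notion of truthfulness, civility, or harm.
    Divisive content that drives comments and shares wins automatically.
    """
    # Hypothetical weights; real systems learn these from behavioral data.
    return 1.0 * post.clicks + 3.0 * post.comments + 5.0 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest engagement first: attention is the feed's only objective.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Cats playing pat-a-cake", clicks=900, comments=20, shares=15),
    Post("Outrage-bait conspiracy claim", clicks=400, comments=300, shares=250),
])
print([p.text for p in feed])  # the outrage post ranks first
```

Run it and the outrage post wins (a score of 2,550 against the kitten video's 1,035), even though more people clicked the kittens, because comments and shares are what keep users on the platform.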

Another reason we can’t rely on self-policing alone is that the leaders of these organizations are just humans, with their own political views, proclivities and biases.


Take Elon Musk, for example, the Chairman and Chief Technology Officer of X. According to X, the company spent most of 2024 “meeting with Secretaries of State, State Election Directors, law enforcement, civil society groups, campaigns, and committees” to brief them on its “policies and preparations (for the 2024 election), as well as to get their input on prospective threats and emerging challenges.” X’s safety team “proactively monitored activity on their platform and employed advanced detection methodologies to enforce its rules related to authenticity, such as platform manipulation and spam” and “actively worked to thwart and disrupt campaigns that threaten to degrade the integrity of the platform.”

From June to November 2024, X reported it had actioned over 536,000 accounts under its Platform Manipulation and Spam policy; suspended over 1.87 million accounts related to information operations; and actioned over 3,200 accounts under its Misleading and Deceptive Identities policy for impersonating political candidates and elected officials. X also actioned over 160,000 posts under its Abusive Behavior, Hateful Conduct, Violent Speech, and Sensitive Media policies. These violations included wishes of harm against candidates; abusive language; threats of violence against supporters of the main political parties; and slurs used against minority groups after election day. The company removed 2,000 posts containing content that could suppress participation, mislead people about when, where, or how to participate in a civic process, or lead to offline violence during an election; 3,700 posts that violated the Synthetic and Manipulated Media policy; and 770 accounts under the Violent and Hateful Entities policy. All great stuff!

But… (there’s always a but!) Elon Musk’s political action committee created a group on X called the “Election Integrity Community” that encouraged people to report instances of “voter fraud or irregularities.” The problem is, many of the posts were unsubstantiated, misleading or completely fabricated – and, because of the algorithms, gathering them all in one place only fueled their spread.

Max Read, a senior researcher at the Institute for Strategic Dialogue – a group of independent, nonprofit organizations dedicated to safeguarding human rights and reversing the rising tide of polarization, extremism and disinformation worldwide – told CBS News that this X community could become a “one stop shop” for people looking to amplify election fraud claims: It’s “sort of a consolidation point of a lot of different false, unverified claims about the election process.”

Even before this group, there were major issues with Musk – who at the time was an outspoken supporter and close friend of President Trump, and whose posts have drawn almost 3.3 billion cumulative views – spreading misinformation on X, including the old, tired, discredited claims about Dominion voting machines and widespread election fraud. For example, in one post in July 2024, Musk wrote, “The goal all along has been to import as many illegal voters as possible” – a falsehood that was viewed 45.8 million times. A comment Musk made in response to a thread from Speaker of the House Mike Johnson about legislation requiring proof of citizenship to vote said, “Those who oppose this are traitors. All Caps: TRAITORS. What is the penalty for traitors again?” As a reminder, the penalties for treason in the United States include death. The repost had 56.8 million views.

After reviewing thousands of Musk’s posts, the CBS News Confirmed team found that 55 percent of them contained misleading or false statements, or amplified posts that did. Plus, 40 percent of the accounts Musk had replied to or reposted had been identified as promoters of voter fraud claims. These posts averaged 9.3 million views each.

When Musk took over Twitter in 2022, he laid off most of the department responsible for trust and safety and reversed many of the previously established norms and community guidelines – replacing it all with a model that relies on X users to fact-check one another. However, Community Notes, as it’s known, is not effective in providing a meaningful check on misinformation, according to the nonprofit Center for Countering Digital Hate. Analysis from the group released in October 2024 found that “despite a dedicated group of X users producing accurate, well-sourced notes, a significant portion never reaches public view.”

Their analysis found that “74 percent of accurate Community Notes on false or misleading claims about U.S. elections never got shown to users. This allowed misleading posts about voter fraud, election integrity, and political candidates to spread and be viewed millions of times. Posts without Community Notes promoting false narratives about U.S. politics garnered billions of views, outpacing the reach of their fact-checked counterparts by 13 times.”

One of the gloomiest examples of the failure of Community Notes is the “Haitian migrants are eating residents’ pets in Springfield, Ohio” lie. On September 9, 2024, JD Vance – then a senator and the Republican vice-presidential nominee – posted this on X: “Months ago, I raised the issue of Haitian illegal immigrants draining social services and generally causing chaos all over Springfield, Ohio. Reports now show that people have had their pets abducted and eaten by people who shouldn’t be in this country. Where is our border czar?”

Even after Springfield City Manager Bryan Heck disputed the story and blamed its ability to spread on “misinformation circulating on social media, further amplified by the political rhetoric in the current, highly charged presidential election cycle,” Donald Trump repeated this ridiculousness to the 67 million people watching the September 10th presidential debate. This sparked dozens of bomb threats in Springfield, a Proud Boys march, and plenty of other anti-immigrant hatefulness. It also, according to Mayor Rob Rue, cost the city hundreds of thousands of dollars.

Two days later, Erika Lee, the Springfield resident behind the first Facebook post about Haitian migrants eating pets, told NBC News that she had no firsthand knowledge of the alleged incident and only learned of it through what she called a “game of telephone.” Filled with regret for her part in the ensuing upheaval, she said, “It just exploded into something I didn’t mean to happen… I didn’t think it would ever get past Springfield.”

Even though Ms. Lee removed the Facebook post and disavowed it, The Washington Post found that it shot through conservative groups on other social media sites after being amplified on X on September 5th by End Wokeness, a verified, anonymous account with 3.1 million followers. The post on X went unaddressed by Community Notes contributors for four days, until one contributor finally pointed out that Springfield police and city officials had discredited the claim. Yet even though the contributor cited five articles and posts as proof, the fact-check didn’t get enough votes from other contributors to be publicly attached to the post. By then, the post had been viewed over 5 million times and reshared 20,000 times.
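To see how a well-sourced note can exist yet never appear under a post, here is a toy Python sketch of a threshold-style visibility rule. The vote minimum and ratio below are invented for illustration; X’s actual Community Notes scoring is a more complex “bridging” algorithm that weighs agreement across raters with differing viewpoints. The failure mode, though, is the same: a note with split ratings never clears the bar, and the viral post stays unlabeled.

```python
# Toy model of a Community Notes-style visibility rule.
# Thresholds are invented; X's real scoring is a "bridging" algorithm.
def note_is_shown(helpful: int, not_helpful: int,
                  min_ratings: int = 10,
                  min_helpful_ratio: float = 0.7) -> bool:
    total = helpful + not_helpful
    if total < min_ratings:
        return False  # too few ratings: the note stays hidden
    return helpful / total >= min_helpful_ratio

# A split rating never clears the bar, so the post stays unlabeled...
print(note_is_shown(helpful=25, not_helpful=24))  # False: 0.51 < 0.7
# ...while a near-consensus note does get shown.
print(note_is_shown(helpful=40, not_helpful=5))   # True: 0.89 >= 0.7
```

Under any rule of this shape, a coordinated or merely divided pool of raters can keep an accurate note below the visibility threshold indefinitely, which is exactly the pattern in the Springfield case and the Harris video that follows.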

Perhaps the clearest sign that Community Notes isn’t working is this: On July 26, 2024, Elon shared a video that manipulated Kamala Harris’ voice to make it seem like she was making derogatory comments. “This is amazing,” he wrote, alongside a laughing emoji. Even though 25 Community Notes contributors said the video was not authentic, 24 others found that, since it was clearly satire, no additional context was needed. The latter group prevailed, and the fake video of Kamala Harris – which already had 243,000 reshares and 136.6 million views, and which had been posted by the richest man in the world, who has by far the most X followers – remained untouched.
