Plus, a question about giving Tucker Carlson January 6 footage.
I’m Isaac Saul, and this is Tangle: an independent, nonpartisan, subscriber-supported politics newsletter that summarizes the best arguments from across the political spectrum on the news of the day — then “my take.”
Are you new here? Get free emails to your inbox daily. Would you rather listen? You can find our podcast here.
Today's read: 11 minutes.
From today's advertiser: Let’s face it — in today’s global chaos, it can be impossible to keep up with international affairs. If you don’t have time to read The Economist, NYT, and Washington Post before heading to work, we recommend checking out International Intrigue. Founded by a team of diplomats, scholars, and journalists, this daily newsletter scours 600 publications every day to deliver the most important headlines in global affairs.
- Want to know why Brazil and Argentina are integrating currencies?
- Or why China is lifting an important ban on Australian commodities?
We do too — it’s why we read International Intrigue. Join us, along with 25,000 other transnational thinkers. Sign up for free.
Will we see you tomorrow?
Tomorrow, I am going to be writing about the latest revelations from the Dominion Voting Systems lawsuit against Fox News, which exposed the text messages of several of the network's top personalities. I'll be referring back to my previous writing, and explaining what the now-public messages teach us about media bias, poor incentives in television news, and the way individual pundits can act irresponsibly.
- China's Xi Jinping will visit Russia, according to Vladimir Putin, raising concerns that Beijing may begin providing military support for the invasion of Ukraine. (The visit)
- Sen. Jon Tester (D-MT) says he will seek reelection for a fourth term. (The announcement)
- Jennifer McClellan (D) won a special election in Virginia to fill Rep. Donald McEachin's seat after his death last year. McClellan is the first Black Congresswoman from Virginia. (The victory)
- Israeli forces carried out a raid in the West Bank yesterday, setting off clashes that killed 10 Palestinians, including six militants. Israel was seeking to arrest three Palestinian militants who were allegedly planning attacks. (The gunfight)
- The FDA proposed a new rule, over objections from dairy farmers, that would allow plant-based milk alternatives to continue using the word "milk" in their labeling. (The rule)
Section 230. On Tuesday, the Supreme Court heard arguments in Gonzalez v. Google, a case that could change the liability social media companies have for the content published on their platforms.
What is Section 230? Section 230 is a part of the Communications Decency Act, passed in 1996, that helps govern liability on the internet. Often referred to as "the 26 words that created the internet," Section 230 says:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Practically speaking, it means platforms like YouTube, Facebook and Twitter are not liable for what someone publishes on their platforms in the same way that The New York Times would be liable for what a journalist writes in its paper.
There are some exceptions to Section 230: sex trafficking, violations of federal criminal law, and copyright infringement all expose platforms to a higher degree of liability and an obligation to monitor and remove offending content.
What's happening now? The family of an American killed in a 2015 terrorist attack in Paris is suing Google. The Gonzalez family says that Google, through its subsidiary YouTube, violated federal anti-terrorism law by promoting ISIS's videos with its recommendation algorithm, which helped increase recruitment and led to the death of their family member, Nohemi Gonzalez.
Lower courts have ruled in favor of Google, saying Section 230 protects it from liability for third-party content posted on its service. During oral arguments, nearly all the justices seemed skeptical of changing the law, though for different reasons. Justice Clarence Thomas suggested recommendations were protected by Section 230 so long as the algorithm was treating content similarly. Justices Elena Kagan and Brett Kavanaugh suggested that if Section 230 was not the best way to govern the internet, Congress should make a change — not the Supreme Court. “These are not, like, the nine greatest experts on the internet,” Kagan said in a comment that drew headlines.
Justice Ketanji Brown Jackson was the only justice who appeared interested in tearing down the law, but even her line of questioning suggested she was not convinced by the arguments the Gonzalez family's lawyer was making.
Once again, the arguments breathed new life into the debate about what should happen to Section 230. On Wednesday, the court heard a second case, Twitter v. Taamneh, which touches on related issues but does not turn on Section 230.
Both conservative politicians and progressive activists oppose Section 230 for different reasons, while many people revere it as the law that helped create the internet as we know it.
Since this issue does not fall along traditional partisan lines, today we're going to break up our arguments based on those who support Section 230 as it is and those who want the court to tear it down or Congress to reform it.
Against changing it...
- Many argue that the Supreme Court striking down parts of Section 230 would destroy the internet as we know it, worsening the functionality of the most popular platforms.
- Some praise the majority of justices who seem to understand this is a problem for Congress.
- Others argue that removing Section 230 would lead to more online censorship, not less, which everyone should be wary of.
In Slate, Mark Joseph Stern said Brett Kavanaugh just "made the best argument" for saving the internet.
"The plaintiffs have zero evidence that any of the Paris terrorists saw these suggestions. They simply speculated that users may have been radicalized into joining ISIS because of YouTube’s algorithmic recommendations. At this point, you might ask: If there’s no proof the algorithm played any role in radicalizing or inciting the terrorists, why did the plaintiffs mention it at all? Why not sue over the mere existence of ISIS recruitment videos on YouTube, which is the true gravamen of the complaint, anyway? That’s where Section 230 comes in," Stern said. "The law, passed in 1996, generally bars lawsuits against a website for hosting other people’s expression—even if that expression is harmful and illegal.
"Section 230 expressly protects any 'interactive computer service' that chooses to 'filter, screen,' or 'organize' content. Filtering and organizing content, of course, is precisely what algorithms do. It makes no sense to claim that a website simultaneously gains and loses immunity by organizing speech," Stern said. "As Justice Brett Kavanaugh explained: 'It would mean that the very thing that makes the website an interactive computer service also mean[s] that it loses the protection of [Section] 230. And just as a textual and structural matter, we don’t usually read a statute to, in essence, defeat itself.'... It was also Kavanaugh who delivered a remarkable defense of the law as it’s read today... Why not let Congress 'take a look at this and try to fashion something along the lines of what you’re saying?' Or, as he put it later: 'Isn’t it better to keep it the way it is [and] put the burden on Congress to change that?'"
In National Review, Bobby Miller said reforming the law would have dire consequences for the right.
"Conservatives have long claimed, often rightfully so, that big tech is silencing their voices," Miller wrote. "As a remedy, they have sought an overhaul of Section 230 of the Communications Decency Act of 1996, the foundation of the modern internet. But experts are warning that attempts to persuade the Supreme Court to roll back the liability protections in Section 230 enjoyed by internet platforms are ill-advised… the petitioners in Gonzalez and Taamneh argue that websites can be held liable for the algorithms they use to curate and present content. Shane Tews, a senior fellow at AEI, disagrees. 'Narrowing Section 230 will create an appetite for greater censorship, resulting in more, not less, content moderation. More moderation means curtailing access to information and freedom of expression.'
"Tews is correct. If Section 230 is circumscribed in either Gonzalez or Taamneh, social media companies will censor conservative speech more, not less," Miller said. "If Silicon Valley knows it can be held liable for algorithms that promote speech with even the slightest intonation of incitement or defamation, the tech firms already predisposed to fear conservative views will invariably clamp down. Those pushing for Section 230 reform ought to tread lightly."
In Vox, Ian Millhiser praised the Supreme Court for understanding it could break the internet.
"Gonzalez v. Google, the case heard today, could subject social media websites and even search engines to ruinous liability, potentially forcing these companies to abandon their business models or even shut down," he wrote. "That said, most of the justices appeared sufficiently spooked by the possibility that they could destroy how the modern-day internet operates that they are likely to find a way to prevent that outcome. As Justice Elena Kagan warned at one point during the Gonzalez argument, the justices are ‘not the nine greatest experts on the internet.’ So it makes sense for them to approach a case that could fundamentally change how foundational websites operate with a degree of humility.
"The potential consequences of this legal theory are breathtaking," Millhiser wrote. "If Twitter, YouTube, or Facebook may be held liable for any content that is served to users by one of their algorithms, then these websites may need to dismantle the algorithms that make it possible for users to sort through the billions of videos, tweets, and other content published on these websites. The Gonzalez case itself, for example, claims that Google should be liable because YouTube’s algorithm, which Google owns, sometimes served up ISIS recruitment videos to some users — and thus Google is legally responsible for the ISIS-led attacks that killed American citizens and their relatives. This same theory could hamstring search engines too."
For changing it...
- Many argue that the internet's most destructive elements are proliferating because of Section 230.
- Some make the case that Big Tech companies have almost total immunity for their actions as long as Section 230 exists as it does.
- Others argue that Section 230 should be reformed, but that reform would be best done not by the courts, but by Congress.
In The New York Times, Julia Angwin said "it's time to tear up" Big Tech's get-out-of-jail-free card.
"The law, created when the number of websites could be counted in the thousands, was designed to protect early internet companies from libel lawsuits when their users inevitably slandered one another on online bulletin boards and chat rooms," Angwin said. "But since then, as the technology evolved to billions of websites and services that are essential to our daily lives, courts and corporations have expanded it into an all-purpose legal shield that has acted similarly to the qualified immunity doctrine that often protects police officers from liability even for violence and killing. As a journalist who has been covering the harms inflicted by technology for decades, I have watched how tech companies wield Section 230 to protect themselves against a wide array of allegations, including facilitating deadly drug sales, sexual harassment, illegal arms sales and human trafficking — behavior that they would have likely been held liable for in an offline context.
"There is a way to keep internet content freewheeling while revoking tech’s get-out-of-jail-free card: drawing a distinction between speech and conduct. In this scenario, companies could continue to have immunity for the defamation cases that Congress intended, but they would be liable for illegal conduct that their technology enables," Angwin said. "Courts have already been heading in this direction by rejecting the use of Section 230 in a case where Snapchat was held liable for its design of a speed filter that encouraged three teenage boys to drive incredibly fast in the hopes of receiving a virtual reward. They crashed into a tree and died... Drawing a distinction between speech and conduct seems like a reasonable step toward forcing big tech to do something when algorithms can be proved to be illegally violating civil rights, product safety, antiterrorism and other important laws."
In Newsweek, Theo Wold made the case that the immunity Big Tech enjoys under Section 230 is far too broad.
"Big Social Media has been wielding Section 230 in court this way for years—and often successfully. When parents have sued Meta after their teenage daughters developed eating disorders promoted by Instagram's algorithm, or when parents have sued TikTok after their children died attempting dangerous viral challenges the app's videos promoted, Big Social Media has asserted Section 230 as a purported affirmative liability shield," Wold wrote. "Indeed, Big Social Media invokes Section 230 not merely to rebuff these plaintiffs' claims, but to prevent courts from hearing the cases at all. Because Section 230 conveys immunity, a plaintiff cannot even try to hold Big Social Media accountable for the harm its algorithms cause to consumers.
"This expansive view of Section 230 should alarm all Americans because it represents an industry asking for the unthinkable: complete immunity from civil liability for harms caused by the very core of its profit-generating business," he said. "It is comparable to Exxon seeking legal immunity for all of its drilling-related activities; or Southwest Airlines seeking immunity for all activities related to air travel; or Ford seeking immunity from claims related to the manufacture of any wheeled vehicle. In each of these hypothetical scenarios, such sweeping immunity would perversely incentivize the underlying company to seek profits no matter the human cost. What Big Social Media seeks is no different."
In The Washington Post, Henry Olsen said regardless of the court's ruling, Congress needs to take action to limit Section 230's reach.
"[Section 230] is clearly right in one sense. No one would reasonably suggest that the U.S. Postal Service or a phone company should be held liable for defamatory letters or phone calls. Internet companies should have similar protections when they act in similarly passive manners. The problem, however, is that many tech companies do not act passively," Olsen said. "Instead, they push content to users with algorithms, arguably acting more like a traditional publisher — which is liable for defamatory content it chooses to print — than merely a system operator... they have also exposed children to pornography and online bullies, driving countless teenagers to depression and suicide. Add in the known use of social media by malign state actors and terrorists to spread misinformation and radicalizing content, and it becomes clear that the internet is no bed of roses.
"The case against Google will not definitively solve this problem. Even if the court rules against the company, it would merely begin to weaken the legal protections that shield Big Tech from paying for the damage it facilitates. The real solution must ultimately come from politics," Olsen wrote. "Congress and the president will have to create regulatory frameworks that dramatically reduce the harm created by online companies, much as the Clean Air Act significantly cut down air pollutants. That new framework can take many paths, depending on what harms are considered most unacceptable and which are most expensive to enact. Limiting children’s unsupervised internet access, as Sen. Josh Hawley (R-Mo.) has proposed, seems to be a case of high-impact, low-cost regulation. Efforts to minimize the ability of terrorists to share provocative or inflammatory material will be much costlier to implement."
Reminder: "My take" is a section where I give myself space to share my own personal opinion. If you have feedback, criticism, or compliments, don't unsubscribe. You can reply to this email and write in. If you're a subscriber, you can also leave a comment.
- In this case, it seems clear to me Google should not be liable.
- I broadly support Section 230 and how it is written.
- That being said, Congress does have room to make it easier to hold social media companies liable.
In the Gonzalez case, I think my position is rather straightforward: The Court should stay out of the way. Section 230 is quite clear as it’s written, and it seems obvious to me that it protects what YouTube did in this case, and that the line of causation that follows from its algorithm is specious at best. Most of the justices seem to view it this way, too, and I don't think we're going to see this case upend the law or how Section 230 functions.
In general, it will probably surprise no one that I support Section 230 and the rules it creates for the internet. Platforms like YouTube, Facebook, and Twitter allow for free-flowing information sharing. This free flow, especially in the political spaces I care about, allows for non-institutional voices to have an impact on the world. If platforms were held liable for the things their users published, the platforms would grind this free flow of information to a halt in order to focus more on moderation, and this would fundamentally harm the robust information ecosystem the globe currently enjoys.
That doesn't mean all is well, though. The devil's bargain of this arrangement is that I can use YouTube to look up how to fix a headlight in my 2006 Honda CR-V and I can also use it to look up how to make homemade explosives. Or, more precisely, a tech company’s recommendation engine (like YouTube's algorithm) could send me from a rather innocuous search on combustion engines to more dangerous videos about bomb making. The curational effect of an algorithm and the company’s liability for it feels notably different from the question of what people are posting on the platform and that platform's responsibility for it.
But the line of culpability is incredibly difficult to draw. Ian Millhiser offered a good analogy: If a bookstore organizes its tables by topics, and it stacks all the sports books together, it would essentially be functioning the same way algorithms do for users who clearly like sports. The question of culpability arises when that organization goes from benign to dangerous. If a bookstore had a section on immigration, and a sign insisting customers peruse aisle six for books on how Jews are ushering in minorities to replace white people, the benign nature of organizing suddenly becomes a lot more complicated. But this very complexity is why the court — with zero experts on technology or the core issues at hand — should have no role in drawing that line.
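To make the bookstore analogy concrete, here is a minimal, hypothetical sketch of content-neutral topic grouping — the kind of "treating content similarly" that Justice Thomas suggested Section 230 protects. This is invented for illustration only; it is not how YouTube's actual system works, and all names and data are made up.

```python
from collections import Counter

def recommend(watch_history, catalog, n=3):
    """Suggest unseen videos from the topic the user watches most.

    The rule is content-neutral: it ranks by topic overlap alone,
    with no knowledge of what any individual video actually says --
    the algorithmic equivalent of stacking all the sports books
    on the same table.
    """
    # Count which topics the user has watched so far
    topic_counts = Counter(video["topic"] for video in watch_history)
    favorite = topic_counts.most_common(1)[0][0]
    # Surface unseen videos from that same "shelf"
    seen = {video["id"] for video in watch_history}
    return [v for v in catalog
            if v["topic"] == favorite and v["id"] not in seen][:n]

# Invented example data
catalog = [
    {"id": 1, "topic": "sports"},
    {"id": 2, "topic": "cooking"},
    {"id": 3, "topic": "sports"},
    {"id": 4, "topic": "sports"},
]
history = [{"id": 1, "topic": "sports"}]
print(recommend(history, catalog))  # the two unseen "sports" videos (ids 3 and 4)
```

The legal question is what happens when the "shelf" itself steers users toward harmful material: nothing in logic this neutral can tell benign grouping apart from dangerous grouping, which is exactly the line-drawing problem the justices were reluctant to take on.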
This brings us to Congress. The fact that a Supreme Court ruling here could be disastrous does not mean there is no room to do something about the liability these companies should have. If companies have built systems that, for one reason or another, promote illegal content — terrorism recruitment videos, for example — it’s only reasonable to confer upon them some kind of liability. The same goes for when new products (like a Snapchat filter encouraging people to drive fast) are so negligently conceived that their bad outcomes should have been obvious.
Congress could carefully craft legislation that creates a pathway to sue companies like Google if a plaintiff is able to prove a connection between a criminal act and the promotion of criminal behavior by the platform. In Gonzalez, no such explicit connection exists. But if it did, the Gonzalez family would certainly benefit from there being a means to hold Google accountable, as would society in general.
Ultimately, though, that language is not something the nine justices on the Supreme Court should try to suss out. Kudos to them for a rare bit of humility on this issue. I’m hopeful that if, or when, they rule in favor of Google, members of Congress will continue working to draft legislation that limits the broad immunity these companies have and offers some kind of recourse for when they facilitate horrible outcomes.
Your questions, answered.
Q: What is your feeling on Speaker Kevin McCarthy releasing the video tapes to only Fox News?
— Richard from Shelton, Washington
Tangle: For anyone who missed this story: On Fox News, host Tucker Carlson told viewers that House Speaker Kevin McCarthy had given him thousands of hours of security footage from January 6. Presumably, Carlson is going to do a segment about what the footage actually shows. Democrats have criticized McCarthy for the move, saying the footage contains closely held secrets about how members of Congress protect themselves during an attack.
There is one specific reason I'm intrigued by this move: The counter-narrative. Because this footage was so closely held by the January 6 committee, we only saw clips that were given to us through the filter of the committee's effort to prosecute rioters. I'm sure there are some interesting things in there that never got any news coverage, and knowing Carlson's team, they are sure to find them.
But for basically every other reason I have a bad feeling about it. For one, as we'll discuss in tomorrow's Friday edition (subscribe here!), recent revelations about Carlson and Fox News demonstrate that they have not been honest with their viewers, especially about issues like January 6, which Carlson has already claimed was a "false flag" operation (it wasn't), and about the 2020 election more broadly. So I certainly won't take whatever he releases at face value, nor should anyone take anything the nightly networks do at face value anymore.
Second, I think the security concerns are real. Again, we don't know a lot about what footage was turned over, but it's not hard to imagine why giving 40,000 hours of footage to a bunch of staffers at Fox News is a bad idea. If any of it leaks online, or if it is published in full, it would give anyone who wants it a detailed look at how members of Congress are evacuated and protected, which seems dangerous.
Third, I'll be keeping an eye on what McCarthy does next. He promised the footage to Carlson and is fulfilling that promise. But he also said he would hand it over to news organizations more widely once Carlson gets his exclusive. If he does that, and the footage becomes widely available to more news organizations, I'd feel slightly better about it — though still a little bit uneasy about so much of it existing out there.
Want to have a question answered in the newsletter? You can reply to this email (it goes straight to my inbox) or fill out this form.
Under the radar.
The United States is hoping to create two semiconductor chip "manufacturing clusters" by 2030 in an effort to bring more chip manufacturing back to the U.S. Commerce Secretary Gina Raimondo said the United States will target the $53 billion Chips Act to bring together research labs, fabrication plants, and packaging facilities for the assembly of chips. It’s unclear where the clusters would be located, but The Wall Street Journal reports that Arizona, Ohio, and Texas are at the top of the list. U.S. and foreign manufacturers have already unveiled more than 40 projects for a total investment of close to $200 billion. The Wall Street Journal has the story.
- 2.6 billion. The number of active users on YouTube, as of 2023.
- 720,000. The number of hours of content uploaded to YouTube every single day.
- 1 billion. The hours of video watched by YouTube users every single day.
- 2.96 billion. The number of active users on Facebook, as of 2023.
- 350 million. The number of photos uploaded to Facebook every day.
- 10 billion. The number of Facebook messages sent each day.
- One year ago today, we were covering rising violent crime.
- The most clicked link in yesterday's newsletter: The story about Vivek Ramaswamy running for president.
- 70.1%. The percentage of Tangle readers who said they supported Biden's visit to Ukraine.
- Nothing to do with politics: A frog that doesn't croak, but communicates via touch.
- Take the poll: Should we remove or reform Section 230? Let us know.
Have a nice day.
A solo Atlantic rower has set a new world record. East Yorkshire, England, resident Miriam Payne, 23, rowed from the Canary Islands to Antigua faster than any woman in history, finishing in 59 days, 16 hours and 36 minutes. Payne said she was "absolutely knackered" by the experience and that she "was so tired, I just wanted to get to the end so I could stop rowing." In order to qualify for the record, Payne had to complete the entire trip by herself, making all her own repairs to the boat, Seas the Day. Along the way, she raised money for mental health charities in East Yorkshire. The 3,000-mile voyage is considered one of the hardest rows in the world. BBC News has the story.
In order to spread the word about our work, we rely heavily on readers like you. Here are some ways to help us...
📣 Share Tangle on Twitter here, Facebook here, or LinkedIn here.
💵 If you like our newsletter, drop some love in our tip jar.
🎉 Want to reach 55,000 people? Fill out this form to advertise with us.
😄 Share https://readtangle.com/give and every time someone signs up at that URL, we'll donate $1 to charity.
📫 Forward this to a friend and tell them to subscribe (hint: it's here).
🎧 Rather listen? Check out our podcast here.
🛍 Love clothes, stickers and mugs? Go to our merch store!