For those who follow the daily churnings of Congress, Capitol Hill is a familiar venue for the official political theater of committee hearings — hearings so routine and numerous they can sometimes fade from public consciousness, if they register at all.
But Jan. 31 — the day the U.S. Senate Judiciary Committee held a full committee hearing to review “Big Tech and the Online Child Sexual Exploitation Crisis” — seemed different, with senators vowing to use the occasion to propel stalled bills aimed at curtailing growing online abuse and increasing safety to the floor of both congressional chambers.
Characterized by a remarkable and prevailing spirit of bipartisan urgency, Democratic and Republican senators alike spent over four hours forcefully and often loudly interrogating the tech titans of major social media platforms, including Linda Yaccarino, CEO of X, formerly Twitter; Shou Zi Chew, CEO of TikTok; Evan Spiegel, co-founder and CEO of Snap; Mark Zuckerberg, founder and CEO of Meta, which is the parent company of Facebook, WhatsApp and Instagram, among others; and Jason Citron, CEO of Discord (a virtual hangout space).
The recent online release of deepfake pornographic images of pop megastar Taylor Swift — images that quickly populated X, with one post alone viewed a reported 45 million times before it was taken down — added an exceptionally topical complexion to the proceedings.
But unlike the super-famous singer — whose “The Eras Tour” was credited by the U.S. Federal Reserve Bank of Philadelphia with single-handedly boosting local economies in American cities, an impact the polling and research platform QuestionPro predicts could reach as much as $5 billion worldwide — the everyday victims of online explicit deepfakes are not always equipped with either the influence or the fortune to fight their exploitation.
Will anything finally change for them?
When the Senate Judiciary Committee hearing ended, there was still no clear path forward. Nor — after years of bipartisan congressional solidarity — has any new or meaningful legislation on artificial intelligence, or AI, and social media yet become law.
“We have encouraged lawmakers to better protect children from the scourge of online pornography,” said Chieko Noguchi, spokesperson for the U.S. Conference of Catholic Bishops. “Everyone vulnerable to these emerging threats deserves our best effort at upholding healthy individuals and families. We are supportive of dialogue and attention on the many complex matters that arise with AI.”
Brian Patrick Green, director of technology ethics at Jesuit-founded Santa Clara University’s Markkula Center for Applied Ethics, in Santa Clara, Calif., told OSV News he’s frustrated by congressional posturing.
“There’s a disconnect between talk and action that’s been going on here, and it’s shameful,” Green said. “Every human being has just as much human dignity as Taylor Swift — and every time they’re violated by something like this, then how much attention does that get?”
“But when it’s someone famous, then all of a sudden, it gets attention,” he said. “Legislation would be reasonable for something like this. But the question is, what exactly would the legislation be?”
Green said he’s “not completely convinced that we don’t have the right laws already. We might be able to address this with laws that say that we have control over our likeness, or that are banning slander and libel.”
“We might have the right rules, but not have applied them in this new case in an accurate way,” suggested Green. “It’s a new technology. Every new technology opens up this question of whether we need a new rule or we just need to apply an old rule in a new way.”
“Microsoft built one of the tools that is being used for these deepfake videos — and so their CEO has come out and said, ‘We need to fix this problem immediately.’ So they’re trying to go into the program and fix it,” Green explained. “But there’s always going to be another program that somebody else can use.”
“So it’s good that the tech companies don’t want to be associated with this violation of people’s dignity, but at the same time, we need to do more than that,” he said. “We need to get some sort of legislation — preferably at the federal level — though international agreement would be even more important. Of course, how do you actually get international agreement on this?”
The European Union — which consists of 27 member states — has come closest, with the world’s first comprehensive AI law. On Dec. 8, 2023, the European Parliament and Council reached political agreement on the EU AI Act, which will regulate the use — and abuse — of artificial intelligence in the EU.
Pope Francis himself has not been spared. AI-generated photos — purportedly showing the pontiff sporting a white designer puffer coat that typically retails for thousands of dollars — were readily accepted on social media as proof of Francis’ luxurious and trendy new wardrobe.
“We need but think of the long-standing problem of disinformation in the form of fake news, which today can employ ‘deepfakes,’ namely the creation and diffusion of images that appear perfectly plausible but (are) false. I too have been an object of this,” Pope Francis said in his message for the 58th World Day of Social Communications, released Jan. 24.
Like Green, Pope Francis doesn’t appear to automatically trust Big Tech to regulate itself. In his 57th World Day of Peace message issued Dec. 8, 2023, ahead of its Jan. 1 observance, the pontiff said society cannot “presume a commitment on the part of those who design algorithms and digital technologies to act ethically and responsibly.”
“There is a need to strengthen or, if necessary, to establish bodies charged with examining the ethical issues arising in this field and protecting the rights of those who employ forms of artificial intelligence or are affected by them,” he said.
The pope also encouraged regulation for what he termed AI’s “galaxy of different realities.” “We cannot presume a priori that its development will make a beneficial contribution to the future of humanity and to peace among peoples,” said Pope Francis. “That positive outcome will only be achieved if we show ourselves capable of acting responsibly and respect such fundamental human values as ‘inclusion, transparency, security, equity, privacy and reliability.’”
Asked by OSV News if he felt a tipping point had been reached, Father Philip Larrey, professor of philosophy at Jesuit-run Boston College and author of “Connected World” and “Artificial Humanity,” replied, “I don’t. But that’s just because I’m a little pessimistic.”
“When you have market forces that come into play, a lot of the debate goes out the window — because they want to make money. That’s it,” Father Larrey said. “I tend to be a little bit skeptical when there is an overriding market force which kind of nullifies all these other important distinctions.”
He noted that Florida’s House of Representatives recently passed legislation banning children younger than 16 from using social media. The bill next moves to the Florida Senate.
Father Larrey was once again skeptical.
“I don’t think that’s realistic; I don’t think you can enforce something like that,” Father Larrey said. “But it does tell you how worried they are. If some sort of legislation can come about that would decrease the risk for children of being harmed, that’s good.”
In his Boston College class concerning technology and AI, Father Larrey asked his students for their take on the Taylor Swift incident.
“They were all very disappointed, especially the girls,” Father Larrey shared. “They said, ‘This could happen to you.'”