Episodes
-
The House Energy and Commerce Committee's Subcommittee on Innovation, Data, and Commerce held a hearing: “Legislative Solutions to Protect Kids Online and Ensure Americans’ Data Privacy Rights.” Between the Kids Online Safety Act (KOSA) and the American Privacy Rights Act (APRA), both of which have bipartisan and bicameral support, Congress may be closer to acting on these issues than it has been in recent memory.
One of the witnesses at the hearing was David Brody, managing attorney of the Digital Justice Initiative at the Lawyers' Committee for Civil Rights Under Law. Justin Hendrix caught up with Brody the day after the hearing to discuss the challenges of advancing the American Privacy Rights Act, and why he connects fundamental data privacy rights to so many of the other issues that the Lawyers' Committee cares about, including voting rights and how to counter disinformation that targets communities of color.
-
This episode features two conversations. Both relate to efforts to better understand the impact of technology on society.
In the first, we’ll hear from Sayash Kapoor, a PhD candidate at the Department of Computer Science and the Center for Information Technology Policy at Princeton University, and Rishi Bommasani, the society lead at the Stanford Center for Research on Foundation Models. They are two of the authors of a recent paper titled On the Societal Impact of Open Foundation Models.
And in the second, we’ll hear from Politico Chief Technology Correspondent Mark Scott about the US-EU Trade and Technology Council (TTC) meeting, and what he’s learned about the question of access to social media platform data by interviewing over 50 stakeholders, including regulators, researchers, and platform executives.
-
Last week, a federal judge granted a motion to dismiss and strike a lawsuit brought by X Corp, formerly known as Twitter, against a nonprofit research outfit called The Center for Countering Digital Hate (CCDH). To learn more about why the ruling matters, Justin Hendrix spoke to Alex Abdo, the litigation director at the Knight First Amendment Institute at Columbia University; Imran Ahmed, the CEO and founder of the Center for Countering Digital Hate; and Roberta Kaplan, a partner at the law firm of Kaplan, Hecker, and Fink, which represented CCDH in this matter.
-
On this show, when we talk about technology and democracy, guests are often talking about the relationship between technology and existing democratic systems. Today's guest wants us to think more expansively about what doing democracy means and the role that technology can play in it. Nathan Schneider, an assistant professor of media studies at the University of Colorado Boulder, is the author of Governable Spaces: Democratic Design for Online Life.
-
Last year, researchers at Human Rights Watch wrote about the global backlash against women’s rights. In multiple countries, they say, hard-won progress has been reversed amidst a wave of anti-feminist rhetoric and policies, and it may take decades to reverse the trajectory. It’s against that backdrop that today’s guest pursues concerns at the intersection of tech and digital rights with women’s human rights. Justin Hendrix speaks with Lucy Purdon, the founder of Courage Everywhere and author of a recent report for the Mozilla Foundation titled "Unfinished Business: Incorporating a Gender Perspective into Digital Advertising Reform in the UK and EU."
-
On Monday, March 18, the US Supreme Court heard oral argument in Murthy v Missouri. In this episode, Tech Policy Press reporting fellow Dean Jackson is joined by two experts, St. John's University School of Law associate professor Kate Klonick and UNC Center on Technology Policy director Matt Perault, to digest the oral argument, what it tells us about which way the Court might go, and what more should be done to create good policy on government interactions with social media platforms when it comes to content moderation and speech.
-
On March 18, the US Supreme Court will hear oral argument in Murthy v Missouri, a case that asks the justices to consider whether the government coerced or “significantly encouraged” social media executives to remove disfavored speech in violation of the First Amendment during the COVID-19 pandemic.
Tech Policy Press reporting fellow Dean Jackson speaks to experts including the Knight First Amendment Institute at Columbia University's Mayze Teitler and Jennifer Jones, and the Tech Justice Law Project's Meetali Jain.
-
At INFORMED 2024, a conference hosted by the Knight Foundation in January, one panel focused on the subject of information integrity, race, and US elections. The conversation was compelling, and the panelists agreed to reprise it for this podcast. So today we're turning over the mic to Spencer Overton, a Professor of Law at the George Washington University, and the director of the GW Law School's Multiracial Democracy Project.
He's joined by three other experts, including:
Brandi Collins-Dexter, a media and technology fellow at Harvard's Shorenstein Center, a fellow at the National Center on Race and Digital Justice, and the author of the recent book, Black Skinhead: Reflections on Blackness and Our Political Future. Brandi is developing a podcast of her own with MediaJustice that explores 1980s era media, racialized conspiracism, and politics in Chicago;
Dr. Danielle Brown, a social movement and media researcher who holds the 1855 Community and Urban Journalism professorship at Michigan State and is the founding director of the LIFT project, which is focused on mapping, networking, and resourcing trusted messengers to dismantle mis- and disinformation narratives that circulate in Black communities and about Black communities; and
Kathryn Peters, who was the inaugural executive director of the University of North Carolina's Center for Information, Technology, and Public Life and was the co-founder of Democracy Works, where she built programs to help more Americans navigate how to vote. These days, she's working on a variety of projects to empower voters and address election mis- and disinformation.
-
On Monday, Feb. 26, 2024, the US Supreme Court heard oral arguments for Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton. The cases are on similar but distinct state laws in Florida and Texas that would restrict social media companies’ ability to moderate content on their platforms. Justin Hendrix speaks with Tech Policy Press staff writer Gabby Miller and contributing editor Ben Lennett about key highlights from the discussion.
-
This week, a public consultation period ended for a new Hong Kong national security law, known as Article 23. Article 23 ostensibly targets a wide array of crimes, including treason, theft of state secrets, espionage, sabotage, sedition, and "external interference" from foreign governments. The Hong Kong legislature, dominated by pro-Beijing lawmakers, is expected to approve it, even as its critics argue that the law criminalizes basic human rights, such as the freedom of expression, signaling a further erosion of the liberties once enjoyed by the residents of Hong Kong.
To learn more about what is happening in Hong Kong and what role tech firms and other outside voices could be doing to preserve freedoms for the people of Hong Kong, Justin Hendrix spoke to three experts who are following developments there closely:
Chung Ching Kwong, senior analyst at the Inter-Parliamentary Alliance on China;
Lokman Tsui, a fellow at Citizen Lab at the University of Toronto; and
Michael Caster, the Asia Digital Program Manager with Article 19.
-
If you’ve been listening to this podcast for a while, you know we’ve spent countless hours together talking about the problems of mis- and disinformation, and what to do about them. And, we’ve tried to focus on the science, on empirical research that can inform efforts to design a better media and technology environment that helps rather than hurts democracy and social cohesion.
Today’s guests are Jon Bateman and Dean Jackson. The two have just produced a report for the Carnegie Endowment for International Peace that looks at what is known about a variety of interventions against disinformation, and provides evidence that should guide policy in governments and at technology platforms.
-
A new book out this week from Oxford University Press, titled simply Media and January 6th, assembles a varied collection of experts who aim to shed light on the interplay between the media and the bloody coup attempt that then-President Donald Trump led to try to hang on to power after he lost the 2020 election to Joe Biden. It delves into why January 6th happened and highlights the pivotal role of media in it.
The book is structured to explore three essential inquiries: What is our interpretation of January 6, 2021? How should research evolve post-January 6, 2021? And what measures can be taken to avert a similar incident in the future? Justin Hendrix spoke to three of the book's four editors: Khadijah Costley White, Daniel Kreiss, and Shannon C. McGregor.
-
It's become trite to say there are a lot of elections taking place this year. But of course, technology is playing a role in them all.
At Tech Policy Press, we're lucky to have a group of seven fellows this year who are based on four continents. They are paying close attention to elections in the nations they know best. To learn more about the recent election in Pakistan, its chaotic aftermath, and the unique role of technology in events there, I spoke to one of our fellows last week: Ramsha Jahangir, a Pakistani journalist currently based in the Netherlands.
-
Today's guests are Jonathan Stray, a senior scientist at the Center for Human Compatible AI at the University of California Berkeley, and Ravi Iyer, managing director of the Neely Center at the University of Southern California's Marshall School. Both are keenly interested in what happens when platforms optimize for variables other than engagement, and whether they can in fact optimize for prosocial outcomes. With several coauthors, they recently published a paper based in large part on discussion at an 8-hour working group session featuring representatives from seven major content-ranking platforms and former employees of another major platform, as well as university and independent researchers. The authors say "there is much unrealized potential in using non-engagement signals. These signals can improve outcomes both for platforms and for society as a whole."
-
In May 2022, Alvaro Bedoya was sworn in as a Commissioner of the US Federal Trade Commission following his nomination by President Joe Biden and confirmation in the Senate. In this conversation, Commissioner Bedoya discusses a recent settlement over the commercial use of facial recognition technologies and what it should signal to other businesses, voice cloning and the growing problem of impersonations utilizing AI, and how he thinks about the future.
-
Multiple past episodes of this podcast have focused on the topic of AI governance. But today’s guest, Blair Attard-Frost, has put forward a set of ideas they term "AI countergovernance." These are alternative mechanisms for community-led and worker-led governance that serve as means for resisting or contesting power, particularly as it manifests in AI systems and the companies and governments that advance them.
-
On Wednesday, January 31st, the US Senate Judiciary Committee hosted a hearing titled "Big Tech and the Online Child Sexual Exploitation Crisis." The CEOs of Meta, TikTok, X, Discord and Snap were called to the Capitol to answer questions from lawmakers on their efforts to protect children from sexual exploitation, drug trafficking, dangerous content, and other online harms. Gabby Miller reported on the hearing from New York, and Haajrah Gilani reported from Washington D.C.
-
Last year, the World Privacy Forum, a nonprofit research organization, conducted an international review of AI governance tools. The organization analyzed various documents, frameworks, and technical material related to AI governance from around the world. Importantly, the review found that a significant percentage of the AI governance tools include faulty AI fixes that could ultimately undermine the fairness and explainability of AI systems.
Justin Hendrix talked to Kate Kaye, one of the report’s authors, about a range of issues it covers, from the involvement of large tech companies in shaping AI governance tools, to the role of organizations like the OECD in developing them, to the need to consult people and communities that are often overlooked when making decisions about how to think about AI.
-
In October 2022, a group of researchers published a manifesto establishing a Coalition for Independent Technology Research.
“Society needs trustworthy, independent research to relieve the harms of digital technologies and advance the common good,” they wrote. “Research can help us understand ourselves more clearly, identify problems, hold power accountable, imagine the world we want, and test ideas for change. In a democracy, this knowledge comes from academics, journalists, civil society, and community scientists, among others. Because independent research on digital technologies is a powerful force for the common good, it also faces powerful opposition.”
In the months since that document was published, that opposition has grown. From investigations in Congress to lawsuits aimed at specific researchers, there is a backlash particularly against those who study communications and media, especially where the subjects of that research are often those most interested in advancing false and misleading claims about issues including elections and public health.
Justin Hendrix, who is a member of the coalition, caught up with Brandi Geurkink, who was hired as the coalition's first Executive Director in December 2023, to discuss its priorities.
-
Today’s guest is Robert Weissman, president of the nonprofit consumer advocacy organization Public Citizen. He is the author of a letter addressed to the California Attorney General that raises significant concerns about OpenAI’s 501(c)(3) nonprofit status. The letter questions whether OpenAI has deviated from its nonprofit purposes, alleging that it may be acting under the control of its for-profit subsidiary, potentially violating its nonprofit mission. The letter raises broader issues about the future of AI and how it will be governed.