Episodes
-
Join us for an insightful discussion on the intersection of AI and Green Technology as drivers of global progress and sustainable development. This roundtable features highlights from the Imperial Springs International Forum 2024, hosted by Club de Madrid, where over 130 leaders from 40+ countries gathered to explore the future of international cooperation and multilateralism.
-
Artificial Intelligence has immense potential, but it also carries risks — particularly when it comes to civil liberties. In this episode, I speak with Faiza Patel, Senior Director of the Liberty and National Security Program at the Brennan Center for Justice at NYU Law. Together, we explore how AI can be regulated to ensure fairness, accountability and civil rights, especially in the context of national security and law enforcement.
Key Takeaways:
(01:53) AI in national security, law enforcement and immigration contexts.
(05:00) The dangers of AI in government decisions, from immigration to surveillance.
(09:09) Long-standing issues with AI, including biased training data in facial recognition.
(12:55) The complexities of regulating AI-generated media, such as deepfakes, while protecting free speech.
(17:00) The need for transparency in AI systems and the importance of scrutinizing outputs.
(20:25) How marginalized communities are disproportionately affected by AI.
(23:30) Companies developing AI must embed civil rights principles into their products.
(26:45) Creating unbiased AI systems is a challenge, but necessary to avoid harm.
(29:58) The need for a dedicated regulatory body to oversee AI, especially in national security.
(34:00) AI’s potential impact on jobs and why policymakers need to prepare for labor disruption.
Resources Mentioned:
Faiza Patel -
https://www.linkedin.com/in/faiza-patel-5a042816/
Brennan Center for Justice -
https://www.brennancenter.org/
President Biden’s Executive Order on AI -
https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
AI Bill of Rights -
https://www.whitehouse.gov/ostp/ai-bill-of-rights/
Brennan Center - Faiza Patel -
https://www.brennancenter.org/experts/faiza-patel
National Security Carve-Outs Undermine AI Regulations -
https://www.brennancenter.org/our-work/analysis-opinion/national-security-carve-outs-undermine-ai-regulations
Senate AI Hearings Highlight Increased Need for Regulation -
https://www.brennancenter.org/our-work/analysis-opinion/senate-ai-hearings-highlight-increased-need-regulation
The Perils and Promise of AI Regulation -
https://www.brennancenter.org/our-work/analysis-opinion/perils-and-promise-ai-regulation
Advances in AI Increase Risks of Government Social Media Monitoring -
https://www.brennancenter.org/our-work/analysis-opinion/advances-ai-increase-risks-government-social-media-monitoring
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
-
In this episode of the RegulatingAI podcast, Sanjay Puri hosts an insightful discussion with Mr. Boris Tadić, former President of Serbia, to explore the profound implications of artificial intelligence (AI) on governance, society, and global relations at Imperial Springs International Forum 2024, Madrid, Spain. From its potential to revolutionise education and development to concerns about its effects on democracy and societal values, this conversation delves deep into the opportunities and challenges AI presents.
Resources:
https://x.com/boristadic58
https://clubmadrid.org/who/members/tadic-boris/
https://en.wikipedia.org/wiki/Boris_Tadi%C4%87
-
In this episode, Tunisia’s former Prime Minister, Mehdi Jomaa, shares his vision for the country’s potential to emerge as a leading technology hub in the Arab world and the Global South. With its strategic location bridging Africa, Europe, and the Middle East, Tunisia is positioned to become a key player in the global technological revolution, particularly in artificial intelligence.
Resources:
https://www.linkedin.com/in/mehdi-jomaa-60a8333b/
https://x.com/Mehdi_Jomaa
https://www.facebook.com/M.mehdi.jomaa
https://clubmadrid.org/who/members/mehdi-jomaa/
-
The rapid rise of AI brings both extraordinary potential and profound risks, demanding urgent global collaboration to ensure its safe development. In this episode, I’m joined by Professor S. Alex Yang, Professor of Management Science and Operations at the London Business School, to explore the complexities of regulating AI, the challenges of international collaboration, and the potential existential risks posed by AI development. With his extensive experience in AI and risk management, Professor Yang provides unique insights into the future of AI governance.
Key Takeaways:
(02:12) Professor Yang’s early AI experiences and his value chain research.
(06:57) The biggest risks from AI, including existential risk and job displacement.
(11:42) The debate on AI nationalism and the preservation of cultural heritage.
(16:28) How China’s chip-making capacity could reshape AI competition.
(21:13) Open-source versus closed-source AI models and the risks involved.
(25:58) Why monitoring monopolies in AI is crucial for innovation.
(30:44) How content creators can benefit from AI and how copyright law is evolving.
(35:29) The importance of fair use standards for AI-generated content.
(40:14) Data aggregation and its future role in AI development.
(45:00) Professor Yang’s final thoughts on the need for agile, principle-based AI regulation.
Resources Mentioned:
Professor S. Alex Yang -
https://www.linkedin.com/in/songayang/
London Business School | LinkedIn -
https://www.linkedin.com/school/london-business-school/
London Business School | Website -
https://www.london.edu/
WorldCoin -
https://worldcoin.org/
The Case for Regulating Generative AI Through Common Law -
https://www.project-syndicate.org/commentary/european-union-ai-act-could-impede-innovation-by-s-alex-yang-and-angela-huyue-zhang-2024-02
Generative AI and Copyright: A Dynamic Perspective -
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
-
AI presents endless opportunities, but its implications for privacy and governance are multifaceted. On this episode, I’m joined by Professor Norman Sadeh, a Computer Science Professor at Carnegie Mellon University, and Co-Founder and Co-Director of the Privacy Engineering Program. With years of experience in AI and privacy, he offers valuable insights into the complexities of AI governance, the evolving landscape of data privacy and why a multidisciplinary approach is vital for creating effective and ethical AI policies.
Key Takeaways:
(02:09) How Professor Sadeh’s work in AI and privacy began.
(05:30) The role of privacy engineers in AI governance.
(08:45) Why AI governance must integrate with existing company structures.
(12:10) The challenges of data ownership and consent in AI applications.
(15:20) Privacy implications of foundational models in AI.
(18:30) The limitations of current regulations like GDPR in addressing AI concerns.
(22:00) How user expectations shape the principles of AI governance.
(26:15) The growing debate around the need for specialized AI regulations.
(30:40) The role of transparency in AI governance for building trust.
(35:50) The potential impact of open-source AI models on security and privacy.
Resources Mentioned:
Professor Norman Sadeh -
https://www.linkedin.com/in/normansadeh/
Carnegie Mellon University | LinkedIn -
https://www.linkedin.com/school/carnegie-mellon-university/
Carnegie Mellon University | Website -
https://www.cmu.edu/
EU AI Act -
https://artificialintelligenceact.eu/
General Data Protection Regulation (GDPR) -
https://gdpr-info.eu/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
-
In this inspiring episode, we explore how AI is not only transforming industries but also reshaping education and the future of work. Learn how diversity, AI skills, and youth empowerment are critical in building an ethical, AI-driven world.
Our guest, Elena Sinel, FRSA and Founder of Teens in AI, shares her mission to champion diversity and equip young people with the skills they need to thrive in the AI era. She discusses the importance of empowering youth to lead the way in creating ethical AI solutions for a better future.
-
In this thought-provoking episode, we explore the crucial role governments play in democratizing AI, ensuring its benefits reach all sectors of society. We discuss the ethical and governance challenges involved in shaping AI policy, as well as the philosophical underpinnings that drive this evolving landscape.
Our distinguished guest, Ted Lechterman, Holder of the UNESCO Chair in AI Ethics & Governance at IE University, provides critical perspectives on how governments can lead the way in creating inclusive, ethical AI policies that align with democratic values.
-
In this episode, we dive into the complexities of AI compliance and the challenges organizations face in navigating the evolving regulatory landscape, especially with the European AI Act. Learn how businesses can stay compliant while driving innovation in AI development.
Our guest, Sean Musch, Founder and CEO of AI & Partners, shares his expertise on the European AI Act and other regulatory frameworks shaping the future of AI. Discover practical strategies for navigating compliance while fostering responsible AI practices.
-
In this episode, we explore how geospatial data is being leveraged to improve crisis response efforts through the integration of AI. Learn about the groundbreaking work of the Humanitarian OpenStreetMap Team in mapping vulnerable areas and using AI to support humanitarian missions in real-time.
Our guest, Paul Uithol, Director of Humanitarian Data at the Humanitarian OpenStreetMap Team, shares his insights into how geospatial data and AI are transforming disaster management and crisis response. Discover the innovative strategies that enable faster, more accurate responses to humanitarian challenges.
-
In this episode, we explore the complexities of global AI regulation and enforcement, focusing on how governments and organizations can balance the need for compliance while fostering innovation. We dive into the challenges of supervising AI across different legislative frameworks and how these regulations shape the future of AI technologies.
Our featured guest, Huub Janssen, Manager on AI at the Ministry of Economic Affairs and the Dutch Authority for Digital Infrastructure, The Netherlands, shares his insights on navigating the regulatory landscape and driving responsible AI development.
-
In this insightful episode, we explore the intersection of AI governance and legal innovation. Join us as we discuss the critical challenges and opportunities that arise as organizations strive to implement responsible AI practices in an ever-evolving regulatory landscape.
Our esteemed guest, Hadassah Drukarch, Director of Policy and Delivery at the Responsible AI Institute, shares her expertise on how to navigate the complexities of AI governance, legal frameworks, and the importance of fostering ethical AI practices.
-
In this compelling episode, we explore how artificial intelligence is transforming disaster response efforts, especially for vulnerable communities impacted by crises. Join us as we discuss innovative strategies that leverage AI to enhance humanitarian action and build more resilient systems.
Our special guest, Katya Klinova, Head of AI and Data Insights for Social and Humanitarian Action at the United Nations Secretary-General's Innovation Lab, shares invaluable insights into the role of AI in disaster management and its potential to bridge critical gaps in support for those most in need.
-
In the latest episode of the RegulatingAI Podcast at the World Summit AI on October 9, 2024, the discussion dives deep into the critical AI competencies driving organizational transformation. The episode explores how AI revolutionizes the workforce through augmentation, reskilling, and enhancing human-computer interaction, all while promoting ethical AI hiring practices.
Special guest Dr. Kevin J. Jones, Director at the IU Columbus Center for Teaching and Learning and Associate Professor of Management, shares insights on how leaders can leverage AI to enhance their organizations and stay ahead of the curve.
-
On this episode, I’m joined by Ruslan Salakhutdinov, UPMC Professor of Computer Science at Carnegie Mellon University. Ruslan discusses the pressing need for AI regulation, its potential for societal transformation and the ethical considerations of its future development, including how to safeguard humanity while embracing innovation.
Key Takeaways:
(02:14) The need to regulate AI to prevent monopolization by large corporations.
(06:03) The dangers of AI-driven misinformation and its impact on public opinion.
(10:32) The risks AI poses in job displacement across multiple industries.
(14:22) How deepfake technology is evolving and its potential consequences.
(18:47) The challenge of balancing AI innovation with data privacy concerns.
(22:10) AI’s growing role in military applications and the need for careful oversight.
(26:05) How AI agents could autonomously interact and the risks involved.
(31:30) The potential for AI to surpass human performance in certain professions.
(37:14) Why international collaboration is critical for effective AI regulation.
(42:56) The ethical dilemmas surrounding AI’s influence in healthcare and decision-making.
Resources Mentioned:
Ruslan Salakhutdinov -
https://www.linkedin.com/in/russ-salakhutdinov-53a0b610/
OpenAI’s Sora Technology -
https://openai.com/index/sora/
Geoffrey Hinton and his contributions to AI -
https://www.linkedin.com/pulse/geoffrey-hinton-alan-francis/
Carnegie Mellon University -
https://www.cmu.edu
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
-
The race for AI leadership is not just about technology; it’s a battle of values and national security that will shape our future. In this episode, I’m joined by Todd Young, United States Senator (R-Ind.). He shares insights into AI policy, national security and the steps needed to maintain US leadership in this critical field.
Key Takeaways:
(01:54) The bipartisan effort behind the Senate AI Working Group.
(03:34) How existing laws adapt to an AI-enabled world.
(05:17) Identifying AI risks and regulatory barriers.
(07:41) The role of government expertise in AI-related areas.
(10:12) Understanding the significance of the $32 billion AI public investment.
(13:17) Applying AI innovations across various industries.
(15:27) The impact of China on AI competition and US strategy.
(17:44) Why semiconductors are vital to AI development.
(20:26) Balancing open-source and closed-source AI models.
(22:51) The need for global AI standards and harmonization.
Resources Mentioned:
Senator Todd Young -
https://www.linkedin.com/in/senator-todd-young/
Todd Young -
https://www.young.senate.gov/
United States Senate -
https://www.linkedin.com/company/ussenate/
National AI Research Resource -
https://nairrpilot.org/
CHIPS and Science Act -
https://www.whitehouse.gov/briefing-room/statements-releases/2022/08/09/fact-sheet-chips-and-science-act-will-lower-costs-create-jobs-strengthen-supply-chains-and-counter-china/
Senate AI Policy Roadmap -
https://www.young.senate.gov/wp-content/uploads/One_Pager_Roadmap.pdf
National Security Commission on Artificial Intelligence -
https://reports.nscai.gov/final-report/introduction
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
-
AI and RNA are revolutionizing drug discovery, promising a future where life-saving medications are developed faster and at lower costs.
In this episode, Raphael Townshend, PhD, Founder and CEO of Atomic AI, sits down with me to discuss the intersection of AI and RNA in drug development. We explore how AI technologies can reduce the cost and time required for clinical trials and target previously incurable diseases.
Key Takeaways:
(02:15) Raphael's background in AI and biology, and founding of Atomic AI.
(05:59) Reducing time and failure rate in drug discovery with AI.
(07:16) AlphaFold's breakthrough in understanding molecular shapes using AI.
(09:23) Ensuring transparency and accountability in AI-driven drug discovery.
(12:22) Navigating intellectual property concerns in healthcare AI.
(15:34) Integrating AI with wet lab testing for accurate drug discovery results.
(17:31) Balancing intellectual property and open research in biotech.
(20:02) Addressing data privacy and security in AI algorithms.
(22:30) Educating users and healthcare professionals about AI in drug discovery.
(24:48) Collaborating with global regulators for AI-driven drug discovery innovations.
Resources Mentioned:
Raphael Townshend -
https://www.linkedin.com/in/raphael-townshend-9154962a/
Atomic AI | LinkedIn -
https://www.linkedin.com/company/atomic-ai-rna/
AlphaFold -
https://deepmind.google/technologies/alphafold/
Atomic AI Homepage -
https://atomic.ai/
ATOM-1 Large Language Model -
https://www.biospace.com/atomic-ai-creates-first-large-language-model-using-chemical-mapping-data-to-optimize-rna-therapeutic-development
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
-
On this episode, I’m joined by Senator Mike Rounds, US Senator for South Dakota and Co-Chair of the Senate AI Caucus, to discuss how the US can regulate AI responsibly while fostering innovation. With his extensive experience in both state and federal government, Senator Rounds shares his insights into the Bipartisan Senate AI Working Group and its roadmap for AI policy.
Key Takeaways:
(01:23) The Bipartisan Senate AI Working Group aims to balance AI regulation and innovation.
(05:07) Why intellectual property protections are essential in AI development.
(07:27) National security implications of AI in weapons systems and defense.
(09:19) The potential of AI to revolutionize healthcare through faster drug approvals.
(10:55) How AI can aid in detecting and combating biological threats.
(15:00) The importance of workforce training to mitigate AI-driven job displacement.
(19:05) The role of community colleges in preparing the workforce for an AI-driven future.
(24:00) Insights from international collaboration on AI regulation.
Resources Mentioned:
Senator Mike Rounds Homepage - https://www.rounds.senate.gov/
GUIDE AI Initiative - https://www.rounds.senate.gov/newsroom/press-releases/rounds-introduces-artificial-intelligence-policy-package
Medshield - https://www.linkedin.com/company/medshield-llc
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
-
In this episode, I’m joined by Charity Rae Clark, Vermont Attorney General, and Monique Priestley, Vermont State Representative. They have been instrumental in shaping Vermont’s legislative approach to data privacy and AI. We dive into the challenges of regulating AI to keep citizens safe, the importance of data minimization and the broader implications for society.
Key Takeaways:
(02:10) “Free” apps and websites take payment in the form of your data.
(08:15) The Data Privacy Act includes stringent provisions to protect children online.
(10:05) Protecting consumer privacy and reducing security risks.
(15:29) Vermont’s legislative journey includes educating lawmakers.
(18:45) Innovation and regulation must be balanced for future AI development.
(23:50) Collaboration and education can overcome intense pressure from lobbyists.
(30:02) AI’s potential to exacerbate discrimination demands regulation.
(36:15) Deepfakes present a growing threat.
(42:40) Consumer trust could be lost due to premature releases of AI products.
(50:10) The necessity of a strong foundation in data privacy.
Resources Mentioned:
Charity Rae Clark -
https://www.linkedin.com/in/charityrclark/
Monique Priestley -
https://www.linkedin.com/in/mepriestley/
Vermont -
https://www.linkedin.com/company/state-of-vermont/
“The Age of Surveillance Capitalism” by Shoshana Zuboff -
https://www.amazon.com/Age-Surveillance-Capitalism-Future-Frontier/dp/1610395697
“Why Privacy Matters” by Neil Richards -
https://www.amazon.com/Why-Privacy-Matters-Neil-Richards/dp/0190940553
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
-
Dive into the tangled web of AI and copyright law with Keith Kupferschmid, CEO of the Copyright Alliance, as he reveals how AI companies navigate legal responsibilities and examines what creators can do to safeguard their intellectual property in an AI-driven world.
Key Takeaways:
(02:00) The Copyright Alliance represents over 15,000 organizations and 2 million individual creators.
(05:12) Two stages where copyright infringement can occur: during ingestion of training data and at the output stage.
(06:00) There have been 17 or 18 AI copyright cases filed recently.
(08:00) Fair Use in AI is not categorical and is decided on a case-by-case basis.
(13:32) AI companies often shift liability to prompters, but both can be held liable under existing laws.
(15:00) Creators should clearly state their licensing preferences on their works to protect themselves.
(17:50) Current copyright laws are flexible enough to adapt to AI without needing new legislation.
(20:00) Market-based solutions, such as licensing, are crucial for addressing AI copyright issues.
(27:34) Education and public awareness are vital for understanding copyright issues related to AI.
Resources Mentioned:
Keith Kupferschmid - https://www.linkedin.com/in/keith-kupferschmid-723b19a/
Copyright Alliance - https://copyrightalliance.org
U.S. Copyright Office - https://www.copyright.gov
Getty Images Licensing - https://www.gettyimages.com
National Association of Realtors - https://www.nar.realtor
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard