Episodes

  • In this episode, I welcome Carmel Shachar, Faculty Director of the Health Law and Policy Clinic and Assistant Clinical Professor of Law at the Harvard Law School Center for Health Law and Policy Innovation. We delve into how AI is shaping the future of healthcare, its profound impacts and the vital importance of thoughtful regulation. The interplay between AI and healthcare is increasingly critical, pushing the boundaries of medicine while challenging our regulatory frameworks.

    Key Takeaways:

    (00:00) AI’s challenges in balancing patient data needs.

    (03:09) The revolutionary potential of AI in healthcare innovation.

    (04:30) How AI is driving precision and personalized medicine.

    (06:19) The urgent need for healthcare system evolution.

    (09:00) Potential negative impacts of poorly implemented AI.

    (12:00) The unique challenges posed by AI as a medical device.

    (15:10) Minimizing regulatory handoffs to enhance AI efficacy.

    (18:00) How AI can reduce healthcare disparities.

    (20:00) Ethical considerations and biases in AI deployment.

    (25:00) AI’s growing impact on healthcare operations and management.

    (30:00) Enhancing patient-physician communication with AI tools.

    (39:00) Future directions in AI and healthcare policy.

    Resources Mentioned:

    Carmel Shachar - https://www.linkedin.com/in/carmel-shachar-7b3a8525/

    Harvard Law School Center for Health Law and Policy Innovation - https://www.linkedin.com/company/harvardchlpi/

    Carmel Shachar's Faculty Profile at Harvard Law School - https://hls.harvard.edu/faculty/carmel-shachar/

    Precision Medicine, Artificial Intelligence and the Law Project - https://petrieflom.law.harvard.edu/research/precision-medicine-artificial-intelligence-and-law

    Petrie-Flom Center Blog - https://blog.petrieflom.law.harvard.edu/

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode, I welcome Ari Kaplan, Head Evangelist of Databricks, a leading data and AI company. We discuss the intricacies of AI regulation, how different regions, like the US and EU, are addressing AI’s rapid development, and the importance of industry perspectives in shaping effective legislation.

    Key Takeaways:

    (04:42) Insights on the rapid advancements in AI technology and legislative responses.

    (10:32) The role of tech leaders in shaping AI policy and bridging knowledge gaps.

    (13:57) Open-source versus closed-source AI — Ari Kaplan advocates for transparency.

    (16:56) Ethical concerns in AI across different countries.

    (21:21) The necessity for both industry-specific and overarching AI regulations.

    (25:09) Automation’s potential to improve efficiency also raises employment risk.

    (29:17) A balanced, educational approach in the age of AI is crucial.

    (32:45) Risks associated with generative AI and the importance of intellectual property rights.

    Resources Mentioned:

    Ari Kaplan - https://www.linkedin.com/in/arikaplan/

    Databricks - https://www.linkedin.com/company/databricks/

    Unity Catalog Governance Value Levers - https://www.databricks.com/blog/unity-catalog-governance-value-levers

    President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

    EU AI Act Information - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • In this episode, I welcome Nicolas Kourtellis, Co-Director of Telefónica Research and Head of Systems AI Lab at Telefónica Innovación Digital, a company of the Telefónica Group. Nicolas shares his expert insights on the pivotal role of AI in revolutionizing telecommunications, the challenges of AI regulation and the innovative strides Telefónica is making toward sustainable and ethical AI deployment.

    Imagine a world where every device you own not only connects seamlessly but also intelligently adapts to your needs. This isn’t just a vision for the future; it’s the reality AI is creating today in telecommunications.

    Key Takeaways:

    (00:00) AI research focuses and applications in telecommunications.

    (03:24) AI’s role in optimizing network systems and enhancing user privacy is critical.

    (06:00) How Telefónica uses AI to improve customer service through AI chatbots.

    (12:03) The ethical considerations and sustainability of AI models.

    (16:08) Democratizing AI to make it accessible and beneficial for all users.

    (18:09) Designing AI systems with privacy and security from the start.

    (27:00) The challenges and opportunities AI presents for the workforce.

    (30:25) The potential of 6G and its reliance on AI technologies.

    (32:16) The integral role of AI in future technological advancements and network optimizations.

    (39:35) The societal impacts of AI in telecommunications.

    Resources Mentioned:

    Nicolas Kourtellis - https://www.linkedin.com/in/nicolas-kourtellis-3a154511/

    Telefónica Innovación Digital - https://www.linkedin.com/company/telefonica-innovacion-digital/

    Telefonica Group - https://www.linkedin.com/company/telefonica/

    You can find all of Nicolas’ publications on his Google Scholar page: http://scholar.google.com/citations?user=Q5oWwiQAAAAJ 

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode of the Regulating AI Podcast, I'm joined by Dr. Irina Mirkina, Innovation Manager and AI Lead at UNICEF's Office of Innovation. An AI strategist, speaker, and expert for the European Commission, Dr. Mirkina brings a wealth of experience from academia, the private sector, and now, the humanitarian sector. Today’s discussion focuses on AI for social good.

    Key Takeaways:

    (03:31) The role of international organizations like UNICEF in shaping global AI regulations.

    (07:06) Challenges of democratizing AI across different regions to overcome the digital divide.

    (10:28) The importance of developing AI systems that cater to local contexts.

    (13:23) The transformative potential and limitations of AI in personalized education.

    (16:37) Engaging vulnerable populations directly in AI policy discussions.

    (20:47) UNICEF's use of AI in addressing humanitarian challenges.

    (25:10) The role of civil society in AI regulation and policymaking.

    (33:50) AI's risks and limitations, including issues of open-source management and societal impact.

    (38:57) The critical need for international collaboration and standardization in AI regulations.

    Resources Mentioned:

    Dr. Irina Mirkina - https://www.linkedin.com/in/irinamirkina/

    UNICEF Office of Innovation - https://www.unicef.org/innovation/

    Policy Guidance on AI for Children by UNICEF - https://www.unicef.org/globalinsight/media/2356/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode, I’m joined by Professor Angela Zhang, Associate Professor of Law at the University of Hong Kong and Director of the Philip K. H. Wong Center for Chinese Law. We delve into the complexities of AI regulation in China, exploring how the government’s strategies impact both the global market and internal policies.

    Key Takeaways:

    (02:14) The introduction of China’s approach to AI regulation.

    (06:40) Discussion on the volatile nature of Chinese regulatory processes.

    (10:26) How China’s AI strategy impacts international relations and global standards.

    (13:32) Angela explains the strategic use of law as an enabler in China’s AI development.

    (18:53) High-level talks between the US and China on AI risk have not led to substantive actions.

    (22:04) The US’s short-term gains from AI chip restrictions on China may lead to long-term disadvantages as China becomes self-sufficient and less cooperative.

    (24:13) Unintended consequences of the Chinese regulatory system.

    (29:19) Angela advocates for a slower development of AI technology to better assess and manage risks before they become unmanageable.

    Resources Mentioned:

    Professor Angela Zhang - http://www.angelazhang.net

    High Wire by Angela Zhang - https://global.oup.com/academic/product/high-wire-9780197682258

    Research: Generative AI and Copyright: A Dynamic Perspective - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233

    Research: The Promise and Perils of China's Regulation of Artificial Intelligence - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676

    High Wire Book Trailer - https://www.youtube.com/watch?v=u6OPSit6k6s

    Purchase High Wire by Angela Zhang - https://www.amazon.com/High-Wire-Regulates-Governs-Economy/dp/0197682251/ref=sr_1_1?crid=2A7D070KIAGT&keywords=high+wire+angela+zhang&qid=1706441967&sprefix=high+wire+angela+zha,aps,333&sr=8-1

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode, I am thrilled to sit down with Congressman Joseph Morelle, who represents New York's 25th Congressional District and serves on the House Appropriations Committee. As an influential voice in the dialogue on artificial intelligence, Congressman Morelle shares his deep insights into AI's potential and challenges, particularly concerning legislation and societal impacts.

    Key Takeaways:

    (02:13) Congressman Morelle's extensive experience in AI legislation and its implications.

    (04:27) Deep fakes and their growing threat to privacy and integrity.

    (07:13) Introducing federal legislation against non-consensual deep fakes.

    (14:00) Urgent need for social media platforms to enforce their guidelines rigorously.

    (19:46) The No AI Fraud Act and protecting individual likeness in AI use.

    (23:06) The importance of adaptable and 'living' statutes in technology regulation.

    (32:59) The critical role of continuous education and skill adaptation in the AI era.

    (37:47) Exploring the use of AI in Congress to ensure unbiased, culturally appropriate policymaking and data privacy.

    Resources Mentioned:

    Congressman Joseph Morelle - https://www.linkedin.com/in/joe-morelle-8246099/

    No AI Fraud Act - https://www.congress.gov/bill/118th-congress/house-bill/6943/text?s=1&r=9

    Preventing Deep Fakes of Intimate Images Act - https://www.congress.gov/bill/118th-congress/house-bill/3106

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode, I welcome Dr. Sethuraman Panchanathan, Director of the U.S. National Science Foundation and a professor at Arizona State University. Sethuraman shares personal insights on the transformative power of artificial intelligence and the importance of democratizing this technology to ensure it benefits humanity as a whole.

    Key Takeaways:

    (00:21) AI’s pivotal role in enhancing speech-language services.

    (01:28) Introduction to Sethuraman’s visionary leadership at NSF.

    (02:36) NSF’s significant AI investment of over $820 million.

    (06:19) The shift toward interdisciplinary AI research at NSF.

    (10:26) NSF’s initiative to launch 25 AI institutes for innovation.

    (18:26) Emphasis on AI democratization through education and training.

    (25:11) The NSF ExpandAI program boosts AI in minority-serving institutions.

    (30:21) Focus on ethical AI development to build public trust.

    (40:10) AI’s transformative applications in healthcare, agriculture and more.

    (42:45) The importance of ethical guardrails in AI’s development.

    (43:08) Advancing AI through international collaborations.

    (44:53) Lessons from a career in AI and advice for the next generation.

    (50:19) Motivating young researchers and entrepreneurs in AI.

    (52:24) Advocating for AI innovation and accessibility for everyone.

    Resources Mentioned:

    Dr. Sethuraman Panchanathan - https://www.linkedin.com/in/drpanch/

    U.S. National Science Foundation | LinkedIn - https://www.linkedin.com/company/national-science-foundation/

    U.S. National Science Foundation | Website - https://www.nsf.gov/

    Arizona State University - https://www.linkedin.com/school/arizona-state-university/

    ExpandAI Program - https://new.nsf.gov/funding/opportunities/expanding-ai-innovation-through-capacity-building

    Dr. Sethuraman Panchanathan’s NSF Profile - https://www.nsf.gov/staff/staff_bio.jsp?lan=spanchan

    NSF Regional Innovation Engines - https://new.nsf.gov/funding/initiatives/regional-innovation-engines

    National AI Research Resource (NAIRR) - https://new.nsf.gov/focus-areas/artificial-intelligence/nairr

    NSF Focus on Artificial Intelligence - https://new.nsf.gov/focus-areas/artificial-intelligence

    NSF AI Research Funding - https://new.nsf.gov/funding/opportunities/national-artificial-intelligence-research

    GRANTED Initiative for Broadening Participation in STEM - https://new.nsf.gov/funding/initiatives/broadening-participation/granted

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • The rapid evolution of artificial intelligence in cybersecurity presents both significant opportunities and daunting challenges. On this episode, I'm joined by Bruce Schneier, renowned globally for his cybersecurity expertise and dubbed a “security guru” by The Economist. Bruce, a best-selling author and lecturer at Harvard Kennedy School, discusses the fast-paced world of AI and cybersecurity, exploring how these technologies intersect with national security and what that means for future regulations.

    Key Takeaways:

    (00:00) I discuss with Bruce the challenges of regulating AI in the US.

    (02:28) Bruce explains the role and future potential of AI in cybersecurity.

    (05:05) The benefits of AI in defense, enhancing capabilities at computer speeds.

    (07:22) The need for robust regulations akin to those in the EU.

    (12:56) Bruce draws analogies between AI regulation and pharmaceutical controls.

    (19:56) The critical role of knowledgeable staff in supporting legislators.

    (22:24) The challenges of effectively regulating AI.

    (26:15) The potential of AI to transform enforcement across various sectors.

    (30:58) Reflections on the future of AI governance and ethical considerations.

    Resources Mentioned:

    Bruce Schneier Website - https://www.schneier.com/

    EU AI Strategy - https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode, I’m joined by Trooper Sanders, CEO of Benefits Data Trust and a member of the White House National Artificial Intelligence Advisory Committee. Trooper’s expertise in leveraging AI to enhance the efficiency and humanity of America’s social safety net offers unique insights into the potential and challenges of AI in public services.

    Key Takeaways:

    (02:27) The role of Benefits Data Trust in connecting people to essential benefits using AI.

    (04:54) The components of trustworthy AI: reliability, public interest alignment, security, transparency, explainability, privacy and harm mitigation.

    (09:38) The ‘tortoise and hare’ challenge in aligning AI advancements with legislative processes.

    (16:17) The significance of voluntary industry commitments in shaping AI’s ethical use.

    (20:32) Ethical considerations in deploying AI, focusing on its societal impact and the readiness of systems for AI integration.

    (22:53) Addressing biases in AI to ensure fairness and equitable benefits across all socioeconomic groups.

    (27:52) Amplifying diverse voices in the AI discussion to encompass a wide range of societal perspectives.

    (34:22) The potential workforce disruption by AI and the necessity of supportive measures for affected individuals.

    (37:26) Considering the potentially massive impact of AI-driven career changes across various professions.

    Resources Mentioned:

    Trooper Sanders - https://www.linkedin.com/in/troopersanders/

    Benefits Data Trust | LinkedIn - https://www.linkedin.com/company/benefits-data-trust/

    Benefits Data Trust | Website - https://bdtrust.org/

    White House National Artificial Intelligence Advisory Committee - https://www.whitehouse.gov/ostp/ostps-teams/nstc/select-committee-on-artificial-intelligence/

    BDT Launches AI and Human Services Learning Hub - https://bdtrust.org/bdt-launches-ai-learning-lab/

    Our Vision for an Intelligent Human Services and Benefits Access System - https://bdtrust.org/our-vision-for-an-intelligent-human-services-and-benefits-access-system

    Humans Must Control Human-Serving AI - https://bdtrust.org/media-coverage-humans-must-control-human-serving-ai/

    Trooper Sanders’ Bio - https://bdtrust.org/trooper-sanders/

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • I'm thrilled to be joined by Dr. Paul Lushenko, a Lieutenant Colonel in the U.S. Army and Director of Special Operations at the U.S. Army War College. Dr. Lushenko brings a wealth of knowledge from the front line of AI implementation in military strategy. He joins me to share his insights into the delicate balance between innovation and regulation.

    Key Takeaways:

    (02:28) The necessity of addressing AI’s impact on warfare and crisis escalation.

    (06:37) The gaps in global governance regarding AI and autonomous weapon systems.

    (08:30) U.S. policies on the responsible use of AI in military operations.

    (16:29) The importance of cutting-edge research in informing legislative actions on AI.

    (18:49) The risk of biases in AI systems used in national security.

    (20:09) Discussion on automation bias and its consequences in military operations.

    (24:44) Dr. Lushenko argues for the adoption of a strategic framework to guide AI development in military contexts.

    (32:49) Emphasis on the importance of careful management and extensive testing to build trust in AI systems within the military.

    (39:51) The critical need for data-driven decision-making in high-stakes environments, advocating for leveraging expert insights.

    Resources Mentioned:

    Dr. Paul Lushenko - https://www.linkedin.com/in/paul-lushenko-phd-5b805113/

    U.S. Army War College - https://www.linkedin.com/school/united-states-army-war-college/

    Political Declaration on Responsible Use of AI in Military Technologies - https://www.state.gov/wp-content/uploads/2023/10/Latest-Version-Political-Declaration-on-Responsible-Military-Use-of-AI-and-Autonomy.pdf

    Memorandum on Ethical Use of AI - White House 2023 - https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode, I welcome Randi Weingarten, President of the American Federation of Teachers (AFT). She discusses why implementing AI in education requires a collaborative effort. Join us as we explore the challenges and opportunities of AI in shaping equitable and effective educational environments.

    Key Takeaways:

    (01:08) Introduction of Randi Weingarten and her role in the AFT.

    (05:00) The critical issue of ensuring equitable access to AI technologies in education.

    (08:06) Addressing bias and discrimination within AI-driven educational systems.

    (11:53) The importance of inclusive participation in the implementation of educational technologies.

    (13:09) The evolving necessity for educators to acquire new skills in response to AI advancements.

    (17:26) The role of personalized teaching as a complement, not a replacement, for traditional educational methods.

    (18:08) Concerns surrounding data privacy and security within AI-driven platforms.

    (20:25) The need for regulation and oversight in the application of AI in educational settings.

    (25:22) The potential for productive industry collaboration in developing AI tools for education.

    (30:28) Advocating for a just transition fund to support workers displaced by AI and technological advancements.

    Resources Mentioned:

    Randi Weingarten - https://www.linkedin.com/in/randi-weingarten-05896224/

    American Federation of Teachers - https://www.aft.org/

    Testimony to Senator Schumer by Randi Weingarten on equity in AI - https://www.aft.org/press-release/afts-weingarten-calls-ai-guardrails-smart-regulation-ensure-new-technology-benefits

    Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • AI regulation is not a simple field, particularly in the realm of national security, and it requires a nuanced approach. In this episode, I welcome Anja Manuel, the Executive Director of the Aspen Strategy Group and the Aspen Security Forum, as well as Co-Founder and Partner at Rice, Hadley, Gates & Manuel, LLC. Anja’s insights make the path forward clearer, framing effective AI legislation and emphasizing the need for global cooperation and ethical considerations. Her perspective, deeply rooted in national security expertise, underscores the critical balance between innovation and safeguarding against misuse.

    Key Takeaways:

    (00:17) The functionality of intelligence committees across party lines.

    (00:59) AI in warfare reflects a shift from World War I tactics to modern tech battles.

    (03:10) The rapid innovation in military technology and the US’s efforts to adapt.

    (03:53) Risks of unregulated AI, including in cyber, autonomous weapons and bio-tech.

    (07:09) AI regulation is needed both globally and nationally.

    (11:21) International collaboration plays a vital role in AI regulation.

    (13:39) Ethical considerations unique to AI applications in national security.

    (14:31) National security agencies’ openness to regulatory frameworks.

    (15:35) Public-private collaboration in addressing national security considerations.

    (17:08) Establishing standards in AI technology for national security is necessary.

    (18:28) Regulation of autonomous weapons and international agreements.

    (19:32) Balancing secrecy in national security operations with public scrutiny of AI use.

    (20:17) AI’s role and risks in intelligence and privacy.

    (21:13) Regulating AI in cybersecurity and other areas is a challenge.

    Resources Mentioned:

    Anja Manuel - https://www.linkedin.com/in/anja-manuel-26805023/

    Aspen Strategy Group - https://www.aspeninstitute.org/programs/aspen-strategy-group/

    Aspen Security Forum - https://www.aspensecurityforum.org/

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode, I’m joined by Dr. Gunter Beitinger, Senior Vice President of Manufacturing and Head of Factory Digitalization and Product Carbon Footprint at Siemens. Dr. Beitinger lends a comprehensive view on AI’s role in transforming manufacturing, emphasizing its potential to enhance productivity, ensure workforce well-being and drive sustainable practices without displacing human labor.

    Key Takeaways:

    (02:17) Dr. Beitinger’s extensive background and role at Siemens.

    (05:13) Specific examples of AI-driven improvements in Siemens’ operations.

    (07:52) The measurable productivity gains attributed to AI in manufacturing.

    (10:02) The impact of AI on employment and the importance of re-skilling.

    (13:06) The necessity for a collaborative approach between governments and the private sector in workforce development.

    (16:24) The role of AI in improving the working conditions of industrial workers.

    (26:53) The potential for smaller companies to leverage AI and compete with industry giants.

    (36:49) AI’s future role in creating digital twins and the industrial metaverse.

    Resources Mentioned:

    Dr. Gunter Beitinger - https://www.linkedin.com/in/gunter-dr-beitinger/

    Siemens | LinkedIn - https://www.linkedin.com/showcase/siemens-industry-/?trk=public_post-text

    Siemens | Website - https://www.siemens.com/

    https://blog.siemens.com/space/artificial-intelligence-in-industry/

    https://blog.siemens.com/2023/07/the-need-to-rethink-production/

    https://www.siemens.com/global/en/products/automation/topic-areas/industrial-operations-x.html#GetyourfreeticketforHannoverMesse2023

    https://www.siemens.com/global/en/company/innovation/research-development/next-gen-industrial-ai.html

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode, I’m joined by Sarah Kreps, the John L. Wetherell Professor in the Department of Government, Adjunct Professor of Law, and Director of the Tech Policy Institute at the Cornell Brooks School of Public Policy. Her expertise in international politics, technology and national security offers a valuable perspective on shaping AI legislation.

    Key Takeaways:

    (00:20) The significant impact of industry and NGOs on AI regulation and congressional awareness.

    (03:27) AI's multifaceted applications and its national security implications.

    (05:07) Advanced efficiency of AI in misinformation campaigns and the importance of legislative responses.

    (10:58) Proactive measures by AI firms like OpenAI for electoral fidelity and misinformation control.

    (14:23) The challenge of balancing AI innovation with security and economic considerations in legislation.

    (20:30) Concerns about potential AI monopolies and the economic consequences.

    (28:16) Ethical and practical aspects of AI assistance in legislative processes.

    (30:13) The critical need for human involvement in AI-augmented military decisions.

    (35:32) National security agencies' approach to AI regulatory frameworks.

    (39:13) The imperative of Congress's engagement with diverse sectors for comprehensive AI legislation.

    Resources Mentioned:

    Sarah Kreps - https://www.linkedin.com/in/sarah-kreps-51a3b7257/

    Cornell - https://www.linkedin.com/school/cornell-university/

    Sarah Kreps’ paper for the Brookings Institution - https://www.brookings.edu/articles/democratizing-harm-artificial-intelligence-in-the-hands-of-non-state-actors/

    President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

    Discussions on AI Global Governance - https://www.american.edu/sis/news/20230523-four-questions-on-ai-global-governance-following-the-g7-hiroshima-summit.cfm

    Sarah Kreps - Cornell University - https://government.cornell.edu/sarah-kreps

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode, I’m joined by Professor Ronald Arkin, a renowned expert in robotics and roboethics from the Georgia Institute of Technology. Our discussion focuses on AI and robotics. We explore the ethical implications and the necessity for regulatory frameworks that ensure responsible development and deployment.

    Key Takeaways:

    (02:40) Ethical guidelines for AI and robotics.

    (03:19) IEEE’s role in creating soft law guidelines.

    (06:56) How robotics is being overshadowed by large language models.

    (10:13) The necessity of oversight and compliance in AI development.

    (15:30) Ethical considerations for emotionally expressive robots.

    (23:41) Liability frameworks for ethical lapses in robotics.

    (27:43) The debate on open-sourcing robotics software.

    (29:52) The impact of robotics on workforce and employment.

    (33:37) Human rights implications in robotic deployment.

    (42:55) Final insights on cautious advancement in AI regulation.

    Resources Mentioned:

    Ronald Arkin - https://sites.cc.gatech.edu/aimosaic/faculty/arkin/

    Ronald Arkin | LinkedIn - https://www.linkedin.com/in/ronald-arkin-a3a9206/

    Georgia Tech Mobile Robot Lab - https://sites.cc.gatech.edu/ai/robot-lab/

    Georgia Institute of Technology - https://www.linkedin.com/school/georgia-institute-of-technology/

    IEEE Standards Association - https://standards.ieee.org/

    United Nations Convention on Certain Conventional Weapons - https://treaties.un.org/pages/ViewDetails.aspx?chapter=26&clang=_en&mtdsg_no=XXVI-2&src=TREATY

  • On this episode, I welcome Steve Mills, Global Chief AI Ethics Officer for Boston Consulting Group and Global AI Lead for the Public Sector. Steve shares insights into the intersection of AI innovation and ethical responsibility, guiding us through the often-confusing topic of AI regulation and ethics.

    Key Takeaways:

    (00:26) The role clear regulations play in fostering innovation.

    (02:43) The importance of consultation with industry to set achievable regulations.

    (04:07) Addressing the uncertainty surrounding AI regulation.

    (06:19) The necessity of sector-specific AI regulations.

    (07:33) The debate over establishing a separate AI regulatory body.

    (09:22) Adapting AI policy to keep pace with technological advancements.

    (11:40) Enhancing AI literacy and upskilling the workforce.

    (13:06) Ethical considerations in AI deployment, focusing on trustworthiness and harmlessness.

    (15:01) Strategies for ensuring AI systems are fair and equitable.

    (20:10) The discussion on open-source AI and combating monopolies.

    (22:00) The importance of transparency in AI usage by companies.

    Resources Mentioned:

    Steve Mills - https://www.linkedin.com/in/stevndmills/

    Boston Consulting Group - https://www.linkedin.com/company/boston-consulting-group/

    Responsible AI Ethics - https://www.bcg.com/capabilities/artificial-intelligence/responsible-ai

    Study on the impact of AI in the workforce - https://www.bcg.com/publications/2022/a-responsible-ai-leader-does-more-than-just-avoiding-risk

  • On this episode, I welcome Kai Zenner, Head of Office and Digital Policy Advisor at the European Parliament. We discuss the complexities and challenges of Artificial Intelligence, especially focusing on the legislative efforts within the EU to regulate AI technologies.

    Key Takeaways:

    (01:36) Diverse perspectives in AI legislation play a significant role.

    (02:34) The EU AI Act’s status and its risk-based, innovation-friendly approach.

    (07:11) The recommendation for a vertical, industry-specific approach to AI legislation.

    (08:32) Measures in the AI Act to prevent AI power concentration and ensure transparency.

    (11:50) The global approach of the EU AI Act and its focus on international alignment.

    (14:28) Ethical considerations in AI development addressed by the AI Act.

    (16:21) Implementation and enforcement mechanisms of the EU AI Act.

    (23:31) The involvement of industry experts, researchers and civil society in developing the AI Act.

    (29:51) The importance of educating the public on AI issues.

    (33:12) Concerns about deepfake technology and election interference.

    Resources Mentioned:

    Kai Zenner - https://www.linkedin.com/in/kzenner/?originalSubdomain=be

    European Parliament - https://www.linkedin.com/company/european-parliament/

    EU AI Act - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

  • On this episode, I’m joined by Lexy Kassan, Lead Data and AI Strategist at Databricks and Founder and Host of the Data Science Ethics Podcast. Lexy brings a wealth of knowledge from her dual role as an AI ethicist and industry insider, providing an in-depth perspective on how legislation can shape the future of AI without curbing its potential.

    Key Takeaways:

    (02:44) The global impact of the EU AI Act.

    (03:46) The necessity for risk-based AI model assessments.

    (08:20) Ethical challenges hidden within AI applications.

    (11:45) Strategies for inclusive AI benefiting marginalized communities.

    (13:29) Core ethical principles for AI systems.

    (19:50) The complexity of creating unbiased AI data sets.

    (21:58) Categories of unacceptable risks in AI according to the EU Act.

    (27:18) Accountability in AI deployment.

    (30:53) The role of open-source models in AI development.

    (36:24) Why businesses seek clear regulatory guidelines.

    Resources Mentioned:

    Lexy Kassan - https://www.linkedin.com/in/lexykassan/?originalSubdomain=uk

    Data Science Ethics Podcast - https://www.linkedin.com/company/dsethics/

    EU AI Act - https://artificialintelligenceact.eu/

    Databricks - https://www.databricks.com/

  • In a world racing toward the development of Artificial General Intelligence (AGI), the balance between innovation and existential risk becomes a pivotal conversation. In this episode, I’m joined by Otto Barten, Founder of the Existential Risk Observatory, to focus on the critical issue of AGI and its potential to pose existential risks to humanity. Otto shares valuable insights into the necessity of global policy innovation and raising public awareness to navigate these uncharted waters responsibly.

    Key Takeaways:

    (00:18) Public awareness of AI risks is rising rapidly.

    (01:39) The Existential Risk Observatory’s mission is to mitigate human extinction risks.

    (02:51) The European Union’s political consensus on the EU AI Act.

    (04:11) Otto explains multiple AI threat models leading to existential risks.

    (07:01) Why distinguish between AGI and current AI capabilities?

    (09:18) Recent statements from Sam Altman and Mark Zuckerberg on AGI.

    (12:15) The potential dangers of open-sourcing AGI.

    (14:17) The current regulatory landscapes and potential improvements.

    (17:01) The concept of a “pause button” for AI development.

    (20:13) Balancing AI development with ethical considerations and existential risks.

    (23:51) Increasing public and legislative awareness of AI risks.

    (29:01) The significance of transparency and accountability in AI development.

    Resources Mentioned:

    Otto Barten - https://www.linkedin.com/in/ottobarten/?originalSubdomain=nl

    Existential Risk Observatory - https://www.linkedin.com/company/existential-risk-observatory/

    European Union AI Act -

    The Bletchley Process for global AI safety summits -

  • On this episode, I’m joined by Daniel Jeffries, Managing Director of the AI Infrastructure Alliance and CEO of Kentauros AI, to explore the complexities of AI’s potential and the critical need for balanced, forward-thinking legislation.

    Key Takeaways:

    (02:05) Recent executive orders on AI, watermarking and model size regulation.

    (03:54) Autonomous weapons and the need for regulation in areas exempted by governments.

    (07:01) Liability in AI-induced harm and the challenge of assigning responsibility.

    (07:52) The rapid evolution of AI and the legislative challenge to keep pace.

    (10:37) The risk of regulatory capture and the importance of preventing AI monopolies.

    (13:29) The role of open source in fostering innovation.

    (16:32) Skepticism towards the feasibility of a global consensus on AI regulation.

    (18:21) Advocacy for industry-specific regulations, emphasizing use-case and industry nuances.

    (22:33) Recommendations for policymakers to focus on real-world problems.

    Resources Mentioned:

    Daniel Jeffries - https://www.linkedin.com/in/danjeffries/

    AI Infrastructure Alliance - https://www.linkedin.com/company/ai-infrastructure-alliance/

    Kentauros AI - https://www.linkedin.com/company/kentauros-ai/
