Episodes

  • In this episode of People + AI, host Karthik Ramakrishnan engages with Var Shankar, Executive Director of the Responsible AI Institute. Together, they dissect the complexities and necessities of advocating for ethical artificial intelligence – a critical conversation in today's advancing tech world. Var brings knowledge from diverse arenas, including international policymaking and legal academia, to illuminate the operational challenges and international standards shaping responsible AI. Listen in as they delve into the intersection of law and AI governance, AI implementation in enterprises, and decode policy instruments like the G7 Code of Conduct and ISO/IEC 42001. This episode is a must-listen for anyone passionate about the implications and evolution of AI ethics and governance.

  • In this episode of People + AI, join host Karthik Ramakrishnan and his guest, Patricia Thaine, CEO and co-founder of Private AI, as they unravel the complexities of AI in a privacy-centric world. Patricia illuminates the risks of creating embeddings with personal information and shares essential advice on selling technical products to large enterprises. The duo discusses the nuances of deploying AI models, the intricacies of data privacy regulations, and the evolving business landscape in the post-GPT era. With a rich background in linguistics and computer science, Patricia offers unique insights into the importance of privacy in innovation, revealing how Private AI's technology is pioneering data anonymization. Do not miss this insightful conversation filled with expert analysis, personal anecdotes, and a shared admiration for the intersection of AI and privacy.


  • On this episode of People + AI, host Karthik Ramakrishnan welcomes Marta Janczarski and Phil Dawson for an in-depth discussion on ISO standards, regulatory compliance, and AI systems. Marta, an expert in AI governance, delves into the intricate world of ISO standards she has helped shape, such as ISO 9001 and ISO 27001, and their adaptability for organizations of all sizes, alongside the newly introduced ISO/IEC 42005 for impact assessment. Phil queries the practicality of these standards in the face of evolving regulations, sparking a comprehensive conversation on how ISO certifications serve as a foundation for risk management and play a pivotal role in the burgeoning relationship between AI standards and the insurance industry. The trio also explores the development of industry-specific handbooks and the necessity for standards to evolve with AI technology, reinforcing the importance of a common language across stakeholders in AI governance and risk mitigation.

  • After a long break, People + AI is back! Host Karthik Ramakrishnan welcomes Armilla AI co-founders Phil Dawson and Dan Adamson for a deep dive into the pivotal advancements and regulatory landscapes surrounding AI in the coming year. The trio dissects the transformative effects of generative AI in enterprise environments, particularly in light of the IMF's reports on labor impacts and the surging interest from companies like Anthropic. They navigate the complex discussions of AI governance with a lens on the EU's forthcoming AI Act and the active regulatory debates in North America. The episode showcases the evolution of AI's capabilities, notably in generative AI models, and their predictions for open-source model disruption in the enterprise sector. Join the conversation as they address AI's precarious intersection with copyright, privacy, and standards and ponder the industry's strides toward equitable and safe AI deployment. From high-stakes regulation to in-house AI development, People + AI promises a comprehensive discourse on the state and future of artificial intelligence.


  • The use of AI in the context of enterprise is commonly misunderstood, but today’s guest is an expert in this field, and she joins us to dispel some pervasive myths. Sheetal Patole is currently the CIO for customer engagement, data, insights, and operations at Barclays UK. Her prior experience spans from healthcare to mining, but she has always been focused on introducing new technology to transform organizations. In this episode, Sheetal shares some of the use cases that she has worked on during her career, and how she uses a combination of data, hardware, and software to solve real problems that businesses are experiencing. She also explains what is needed to implement AI at scale, why you shouldn’t wed yourself to particular models and techniques, what can be done about the talent acquisition problem facing the AI industry, the importance of real-time AI governance, and why, despite what you may have heard, AI really does need people!

    Key Points From This Episode:

    • The breadth of experience and field of expertise of today’s guest, Sheetal Patole.
    • Sheetal explains the purpose of AI in the context of an enterprise, which is commonly misunderstood.
    • Examples of the different ways that AI is used in different industries.
    • Benefits of the autonomous trucks that are being used on mines.
    • Sheetal shares how she and her team used AI to solve a transportation challenge.
    • How to determine whether a business challenge can be solved using AI.
    • Dos and don’ts when it comes to putting together a strategy for solving a business problem.
    • The importance of having a coherent and mature strategy across an entire organization before implementing AI at scale.
    • How to cultivate a level of maturity in an organization.
    • Sheetal’s recommendation for navigating inter-team dynamics in an organization.
    • A challenge that the AI industry is currently facing, and Sheetal’s thoughts on how it can be combated.
    • AI governance in real-time; Sheetal shares her thoughts on why this is necessary, and how it should be implemented.
    • What Sheetal wishes she could do differently, advice to her younger self, and an element of the AI realm that she overestimated.

    Tweetables:

    “Using data intelligently and moving away from a system of record to a system of intelligence that allows your organization to learn from this information and data sets to drive change.” — Sheetal Patole [0:03:18]

    “[AI] has never been something just purely to make money, it’s been to transform the organization from where it is, and it’s always gone to somewhere better.” — Sheetal Patole [0:08:36]

    “If you want to do data analytics and AI at scale, you need to build a level of maturity in your organization to be able to do it. The way you build that maturity is first to centralize the team.” — Sheetal Patole [0:27:57]

    “AI is not a mythical creature or black box, it requires human beings.” — Sheetal Patole [0:38:35]

    Links Mentioned in Today’s Episode:

    Sheetal Patole on LinkedIn

    Barclays UK

  • Philippe Beaudoin, CEO and Co-Founder at Waverly, shares his thoughts on how to bridge the gap between academics and industry professionals, the immense challenge of balancing company optimization strategies with the best interests of the users and why empathy should be a much bigger part of the AI conversation.

    Key Points From This Episode:

    • Phil shares the exciting career journey that led him to where he is currently.
    • The changes that have taken place in the AI world since Phil entered it.
    • Why a large percentage of AI in enterprise still fails.
    • A major difference between the academics researching AI and those applying that research.
    • Phil’s thoughts on how to bridge the gap between academia and enterprise.
    • An explanation of one of the lesser-known threats of AI.
    • The inspiration behind Phil’s app, Waverly, and how it works.
    • Why companies are hesitant to change the way their recommendation engines are built.
    • Challenges of balancing company optimization with the best interests of the user.
    • Problems that Phil sees with the implementation of policies and regulations around AI technology.
    • The empathy component of AI technology that Phil thinks we should be focusing on.
    • Advice for researchers and practitioners for dealing with the unintended consequences of working in the AI field.

    Tweetables:

    “[Waverly] is my take on how to build better AI systems, and for me, better means AI systems that care more about the user they are trying to help than we are used to seeing.” — @PhilBeaudoin [0:02:43]

    “Vision has improved a lot, natural language understanding, speech recognition, our ability to find patterns, all of that has improved a lot. It’s not AGI but it has improved quite a bit.” — @PhilBeaudoin [0:04:46]

    “Bring an open mind, bring a lot of respect, and try to see the importance of the other party.” — @PhilBeaudoin [0:13:05]

    “Most people have aspirations, most people have a direction they want to grow into, and when they go about their everyday life they are super happy to have assistance, but this assistance should be at the service of our aspirations.” — @PhilBeaudoin [0:18:50]

    Links Mentioned in Today’s Episode:

    Phil Beaudoin

    Element AI

    Waverly

  • A broad thinker from an unusual background, Dr. Gillian Hadfield shares a take on building AI models that departs from the general norm, explains how to incorporate transparency into justifiable systems, and offers a hypothesis for building systems in which decisions can be traced back to a responsible person. We also talk about the need for safe, consistent, and up-to-date regulatory structures, and the effects of not having them, before closing with some powerful advice on the work still to be done in this sector! We hope you can join us for this hugely insightful conversation.

    Key Points From This Episode:

    • Introducing Dr. Gillian Hadfield and what drew her to the space of law and globalization.
    • How the challenges of AI align with the challenges of economics.
    • The need for people in social sciences and humanities to engage in design and building.
    • The objective of the Schwartz Reisman Institute for Technology and Society.
    • Defining AI governance to address the alignment problem.
    • Comparing AI with conventional programming and the difficulties with test sets.
    • The difference between AI explainability and justifiability.
    • Talking about the GDPR and what regulators are really looking for.
    • A legal analogy on incorporating transparency into justifiable systems.
    • Discussing the chicken-and-egg confusion that regulators are feeling.
    • Why we haven't seen the growth of AI we would expect.
    • How regulatory regimes haven't kept up with the speed of globalization and digitization.
    • The balance of having the right kind of regulation.
    • A walk through the current landscape of AI regulation.
    • What regulatory technologies look like.
    • The focus on fairness and algorithmic bias and AI's capacity in all domains.
    • Dr. Hadfield's advice for people who are looking for AI integration in their practices.

    Tweetables:

    “There's no one solution to how you align AI.” — @ghadfield [0:10:19]

    “We have the alignment problem everywhere. How do you get a corporation to do what you want it to do, how do you get governments to do what you want them to do?” — @ghadfield [0:23:52]

    “AI is a general-purpose technology, it's a way of solving problems, it's a way of coming up with new ideas. It's going to be everywhere. I prefer to think of the regulatory challenge as, how is AI changing your capacity to achieve your regulatory goals, in any domain?” — @ghadfield [0:37:41]

    “We need way more people who are not engineers, deeply engaged in the process of building our systems.” — @ghadfield [0:42:13]

    Links Mentioned in Today’s Episode:

    Gillian Hadfield on LinkedIn

    Gillian Hadfield on Twitter

    The Vector Institute

    Schwartz Reisman Institute for Technology and Society

    GDPR

    Mila

  • Today’s esteemed guest is one of the world’s best-recognized AI experts, and most cited computer scientists. Dr. Yoshua Bengio began his AI journey in the field of neural networks, following which he spent many years focusing on deep learning. He is currently working towards bridging the gap between human intelligence and state-of-the-art machine learning technologies. In this episode, we discuss system one versus system two thinking, and how understanding these systems can contribute to building more moral machines. Dr. Bengio also explains the positive impacts that AI can have on people and the planet, and how the risks of AI can be mitigated through a variety of approaches.

    Key Points From This Episode:

    • An introduction to today’s esteemed guest, Yoshua Bengio.
    • Yoshua briefly shares his thoughts on the crisis that is currently taking place in Ukraine.
    • Origins of Yoshua’s journey in the AI world.
    • The research field that Yoshua is most excited about at the moment.
    • Yoshua explains the concept of the Consciousness Prior.
    • System one versus system two thinking.
    • How neural networks solved the problem of ‘search’ in AI.
    • Ways of mitigating the risks of AI, and some of the organizations that are working on this.
    • How our inductive biases can be problematic.
    • Lack of regulation in the computing industry, and why this needs to change.
    • The global threats that AI has the potential to solve.
    • The importance of knowledge sharing.
    • Advice from Yoshua for all scientific researchers.
    • Yoshua’s favorite AI movie.

    Tweetables:

    “The idea that there would be general principles that could explain intelligence, both ours, the intelligence of animals, and would allow us to build intelligent machines, I found that so exciting, and I’ve been riding that wave since then.” — Yoshua Bengio

    “I really believe in the importance of a diversity of research paths and research directions.” — Yoshua Bengio

    “The system one, system two division is a path towards making more moral machines.” — Yoshua Bengio

    “The area of healthcare is one where AI has the greatest potential of touching human beings positively in the coming years, and really saving a lot of lives.” — Yoshua Bengio

    Links Mentioned in Today’s Episode:

    Yoshua Bengio

    Yoshua Bengio on LinkedIn

    The Consciousness Prior

    Mila

    University of Montreal

    A.M. Turing Award

    2001: A Space Odyssey

  • People + AI gives you access to great minds in the field of artificial intelligence. We're exploring the key design principles, ethical quandaries, and full development cycles responsible for inventing the technologies of the future, today. Powered by Armilla AI.