Episodes

  • Srujana is Vice President and Group Director at Walmart’s Machine Learning Center of Excellence and an experienced, respected AI, machine learning, and data science professional. She has a strong background in developing AI and machine learning models, with expertise in natural language processing, deep learning, and data-driven decision-making, and she has worked in various capacities across the tech industry, contributing to the advancement of AI technologies and their application to complex problems. In our conversation, we unpack the trends shaping AI governance, the importance of consumer data protection, and the role of human-centered AI. We explore why upskilling the workforce is vital, the potential impact AI could have on white-collar jobs, and which roles AI cannot replace. We also discuss the interplay between bias and transparency, the role of governments in creating guardrails for AI development, and how the regulatory framework has evolved. Join us to learn about the essential considerations of deploying algorithms at scale, striking a balance between latency and accuracy, the pros and cons of generative AI, and more.

    Key Points From This Episode:

    Srujana breaks down the top concerns surrounding technology and data.
    Learn how AI can be utilized to drive innovation and economic growth.
    Navigating the adoption of AI with upskilling and workforce retention.
    The AI gaps that upskilling should focus on to avoid workforce displacement.
    Common misconceptions about biases in AI and how they can be mitigated.
    Why establishing regulations, laws, and policies is vital for ethical AI development.
    Outline of the nuances of creating an effective worldwide regulatory framework.
    She explains the challenges and opportunities of deploying algorithms at scale.
    Hear about the strategies for building architecture that can adapt to future changes.
    She shares her perspective on generative AI and what its best use cases are.
    Find out what area of AI Srujana is most excited about.

    Quotes:

    “By deploying [biased] algorithms we may be going ahead and causing some unintended consequences.” — @Srujanadev [0:03:11]

    “I think it is extremely important to have the right regulations and guardrails in place.” — @Srujanadev [0:11:32]

    “Just using generative AI for the sake of it is not necessarily a great idea.” — @Srujanadev [0:25:27]

    “I think there are a lot of applications in terms of how generative AI can be used but not everybody is seeing the return on investment.” — @Srujanadev [0:27:12]

    Links Mentioned in Today’s Episode:

    Srujana Kaddevarmuth

    Srujana Kaddevarmuth on X

    Srujana Kaddevarmuth on LinkedIn

    United Nations Association (UNA) San Francisco

    The World in 2050

    American INSIGHT

    How AI Happens

    Sama

  • Our guest goes on to share the different kinds of research he and his team use for machine learning development before explaining why he is more conservative when it comes to driving generative AI use cases. He even shares some examples of generative use cases he feels are worthwhile. We hear about how these changes will benefit all UPS customers and how they avoid sharing private and non-compliant information with chatbots. Finally, Sunzay shares some advice for anyone wanting to become a leader in the tech world.

    Key Points From This Episode:

    Introducing Sunzay Passari to the show and how he landed his current role at UPS.
    Why Sunzay believes that this huge operation he’s part of will drive transformational change.
    How AI and machine learning have made their way into UPS over the past few years.
    The way Sunzay and his team have decided where AI will be most disruptive within UPS.
    Qualitative and quantitative research and what that looks like for this project.
    Why Sunzay is conservative when it comes to driving generative AI use cases.
    Sunzay shares some of the generative use cases that he thinks are worthwhile.
    The way these new technologies will benefit everyday UPS customers.
    How they are preventing people from accessing non-compliant data through chatbots.
    Sunzay passes on some advice for anyone looking to forge their career as a leader in tech.

    Quotes:

    “There’s a lot of complexities in the kind of global operations we are running on a day-to-day basis [at UPS].” — Sunzay Passari [0:04:35]

    “There is no magic wand – so it becomes very important for us to better our resources at the right time in the right initiative.” — Sunzay Passari [0:09:15]

    “Keep learning on a daily basis, keep experimenting and learning, and don’t be afraid of the failures.” — Sunzay Passari [0:22:48]

    Links Mentioned in Today’s Episode:

    Sunzay Passari on LinkedIn

    UPS

    How AI Happens

    Sama

  • Martin shares what reinforcement learning does differently in executing complex tasks, overcoming feedback loops in reinforcement learning, the pitfalls of typical agent-based learning methods, and how being a robotic soccer champion exposed the value of deep learning. We unpack the advantages of deep learning over modeling agent approaches, how finding a solution can inspire a solution in an unrelated field, and why he is currently focusing on data efficiency. Gain insights into the trade-offs between exploration and exploitation, how Google DeepMind is leveraging large language models for data efficiency, the potential risk of using large language models, and much more.
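
    The exploration-exploitation trade-off that Martin describes can be sketched with a toy example. The snippet below is not DeepMind code; it is a minimal epsilon-greedy bandit in Python, with invented reward probabilities, showing how an agent spends a fraction of its trials probing unfamiliar actions and the rest exploiting the best action found so far.

    ```python
    import random

    # Invented reward probabilities for three actions (toy data, not from the episode).
    TRUE_REWARD_PROB = [0.2, 0.5, 0.8]

    def pull(action: int) -> float:
        """Simulate the environment: pay out 1 with the action's reward probability."""
        return 1.0 if random.random() < TRUE_REWARD_PROB[action] else 0.0

    def epsilon_greedy(epsilon: float = 0.1, steps: int = 1000) -> list[float]:
        """Estimate each action's value while trading exploration for exploitation."""
        estimates = [0.0] * len(TRUE_REWARD_PROB)
        counts = [0] * len(TRUE_REWARD_PROB)
        for _ in range(steps):
            if random.random() < epsilon:      # explore: try a random action
                action = random.randrange(len(estimates))
            else:                              # exploit: pick the best estimate so far
                action = max(range(len(estimates)), key=lambda a: estimates[a])
            reward = pull(action)
            counts[action] += 1
            # Incremental mean update of the action-value estimate.
            estimates[action] += (reward - estimates[action]) / counts[action]
        return estimates

    if __name__ == "__main__":
        print(epsilon_greedy())  # estimates should drift toward [0.2, 0.5, 0.8]
    ```

    Raising epsilon makes the agent explore more aggressively; lowering it makes it commit sooner to whatever currently looks best, which is the same tension reinforcement learning faces at much larger scale.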

    Key Points From This Episode:

    What it is like being a five-time world robotic soccer champion.
    The process behind training a winning robotic soccer team.
    Why standard machine learning tools could not train his team effectively.
    Discover the challenges AI and machine learning are currently facing.
    Explore the various exciting use cases of reinforcement learning.
    Details about Google DeepMind and the role he and his team play.
    Learn about Google DeepMind’s overall mission and its current focus.
    Hear about the advantages of being a scientist in the AI industry.
    Martin explains the benefits of exploration to reinforcement learning.
    How data mining using large language models for training is implemented.
    Ways reinforcement learning will impact people in the tech industry.
    Unpack how AI will continue to disrupt industries and drive innovation.

    Quotes:

    “You really want to go all the way down to learn the direct connections to actions only via learning [for training AI].” — Martin Riedmiller [0:07:55]

    “I think engineers often work with analogies or things that they have learned from different [projects].” — Martin Riedmiller [0:11:16]

    “[With reinforcement learning], you are spending the precious real robots time only on things that you don’t know and not on the things you probably already know.” — Martin Riedmiller [0:17:04]

    “We have not achieved AGI (Artificial General Intelligence) until we have removed the human completely out of the loop.” — Martin Riedmiller [0:21:42]

    Links Mentioned in Today’s Episode:

    Martin Riedmiller

    Martin Riedmiller on LinkedIn

    Google DeepMind

    RoboCup

    How AI Happens

    Sama

  • Jia shares the kinds of AI courses she teaches at Stanford, how students are receiving machine learning education, and the impact of AI agents, as well as understanding technical boundaries, being realistic about the limitations of AI agents, and the importance of interdisciplinary collaboration. We also delve into how Jia prioritizes latency at LiveX before finding out how machine learning has changed the way people interact with agents, both human and AI.

    Key Points From This Episode:

    The AI courses that Jia teaches at Stanford.
    Jia’s perspective on the future of AI.
    What the potential impact of AI agents is.
    The importance of understanding technical boundaries.
    Why interdisciplinary collaboration is imperative.
    How Jia is empowering other businesses through LiveX AI.
    Why she prioritizes latency and believes that it’s crucial.
    How AI has changed people’s expectations and level of courtesy.
    A glimpse into Jia’s vision for the future of AI agents.
    Why she is not satisfied with the multimodal AI models out there.
    Challenges associated with data in multimodal machine learning.

    Quotes:

    “[The field of AI] is advancing so fast every day.” — Jia Li [0:03:05]

    “It is very important to have more sharing and collaboration within the [AI field].” — Jia Li [0:12:40]

    “Having an efficient algorithm [and] having efficient hardware and software optimization is really valuable.” — Jia Li [0:14:42]

    Links Mentioned in Today’s Episode:

    Jia Li on LinkedIn

    LiveX AI

    How AI Happens

    Sama

  • Key Points From This Episode:

    Reid Robinson's professional background, and how he ended up at Zapier.
    What he learned during his year as an NFT founder, and how it serves him in his work today.
    How he gained his diverse array of professional skills.
    Whether one can differentiate between AI and mere automation.
    How Reid knew that partnering with OpenAI and ChatGPT would be the perfect fit.
    The way the Zapier team understands and approaches ML accuracy and generative data.
    Why real-world data is better as it stands, and whether generative data will one day catch up.
    How Zapier uses generative data with its clients.
    Why AI is still mostly beneficial for those with a technical background.
    Reid Robinson's next big idea, and his parting words of advice.

    Quotes:

    “Sometimes, people are very bad at asking for what they want. If you do any stint in, particularly, the more hardcore sales jobs out there, it's one of the things you're going to have to learn how to do to survive. You have to be uncomfortable and learn how to ask for things.” — @Reidoutloud_ [0:05:07]

    “In order to really start to drive the accuracy of [our AI models], we needed to understand, what were users trying to do with this?” — @Reidoutloud_ [0:15:34]

    “The people who [are] being enabled the most with AI in the current stage are the technical tinkerers. I think a lot of these tools are too technical for average-knowledge workers.” — @Reidoutloud_ [0:28:32]

    “Quick advice for anyone listening to this, do not start a company when you have your first kid! Horrible idea.” — @Reidoutloud_ [0:29:28]

    Links Mentioned in Today’s Episode:

    Reid Robinson on LinkedIn

    Reid Robinson on X

    Zapier

    CocoNFT

    How AI Happens

    Sama

  • In this episode of How AI Happens, Justin explains how his project, Wondr Search, injects creativity into AI in a way that doesn’t alienate creators. You’ll learn how this new form of AI uses evolutionary algorithms (EAs) and differential evolution (DE) to generate music without learning from or imitating existing creative work. We also touch on the success of the six songs created by Wondr Search, why AI will never fully replace artists, and so much more. For a fascinating conversation at the intersection of art and AI, be sure to tune in today!
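
    As a rough illustration of the evolutionary idea described above (this is not the Wondr Search implementation, and the fitness function below is an invented placeholder), the following Python sketch uses differential evolution to evolve a short melody toward a hand-written melodic smoothness score, so no existing music is ever used as training data.

    ```python
    import random

    SCALE = [60, 62, 64, 65, 67, 69, 71, 72]   # C major MIDI notes, one octave
    LENGTH = 8                                  # notes per candidate melody
    F, CR = 0.8, 0.9                            # DE mutation factor and crossover rate

    def fitness(melody: list[float]) -> float:
        """Toy score: prefer small steps between notes and an ending near the tonic."""
        notes = [min(SCALE, key=lambda s: abs(s - m)) for m in melody]
        smoothness = -sum(abs(a - b) for a, b in zip(notes, notes[1:]))
        cadence = -abs(notes[-1] - SCALE[0])
        return smoothness + 2 * cadence

    def differential_evolution(pop_size: int = 30, generations: int = 200) -> list[int]:
        pop = [[random.uniform(min(SCALE), max(SCALE)) for _ in range(LENGTH)]
               for _ in range(pop_size)]
        for _ in range(generations):
            for i, target in enumerate(pop):
                a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
                mutant = [a[k] + F * (b[k] - c[k]) for k in range(LENGTH)]
                trial = [mutant[k] if random.random() < CR else target[k]
                         for k in range(LENGTH)]
                if fitness(trial) > fitness(target):   # greedy selection step of DE
                    pop[i] = trial
        best = max(pop, key=fitness)
        return [min(SCALE, key=lambda s: abs(s - m)) for m in best]  # snap to the scale

    if __name__ == "__main__":
        print(differential_evolution())  # e.g. a smooth 8-note phrase ending near C
    ```

    The point of the toy fitness function is that what counts as a “good” melody is written down by a person rather than learned from other artists’ work, which is the property the episode highlights.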

    Key Points From This Episode:

    How genetic algorithms can preserve human creativity in the age of AI.
    Ways that Wondr Search differs from current generative AI models.
    Why the songs produced by Wondr Search were so well-received by record labels.
    Justin’s motivations for creating an AI model that doesn’t learn from existing music.
    Differentiating between AI-generated content and creative work made by humans.
    Insight into Justin’s PhD topic focused on mathematical optimization.
    Key differences between operations research and data science.
    An understanding of the relationship between machine learning and physics.
    Our guest’s take on “big data” and why more data isn’t always better.
    Problems Justin focuses on as a technical advisor to Fortune 500 companies.
    What he is most excited (and most concerned) about for the future of AI.

    Quotes:

    “[Wondr Search] is definitely not an effort to stand up against generative AI that uses traditional ML methods. I use those a lot and there’s going to be a lot of good that comes from those – but I also think there’s going to be a market for more human-centric generative methods.” — Justin Kilb [0:06:12]

    “The definition of intelligence continues to change as [humans and artificial systems] progress.” — Justin Kilb [0:24:29]

    “As we make progress, people can access [AI] everywhere as long as they have an internet connection. That's exciting because you see a lot of people doing a lot of great things.” — Justin Kilb [0:26:06]

    Links Mentioned in Today’s Episode:

    Justin Kilb on LinkedIn

    Wondr Search

    ‘Conserving Human Creativity with Evolutionary Generative Algorithms: A Case Study in Music Generation’

    How AI Happens

    Sama

  • Jacob shares how Gong uses AI, how it empowers its customers to build their own models, and how this ease of access for users holds the promise of a brighter future. We also learn more about the inner workings of Gong and how it trains its own models, why it’s not too interested in tracking soft skills right now, what we need to be doing more of to build more trust in chatbots, and our guest’s summation of why technology is advancing like a runaway train.

    Key Points From This Episode:

    Jacob Eckel walks us through his professional background and how he ended up at Gong.
    The ins and outs of Gong, and where AI fits in.
    How Gong empowers its customers to build their own models, and the results thereof.
    Understanding the data ramifications when customers build their own models on Gong.
    How Gong trains its own models, and the way the platform assists users in real time.
    Why its models aren’t tracking softer skills like rapport-building, yet.
    Everything that needs to be solved before we can fully trust chatbots.
    Jacob’s summation of why technology is growing at an increasingly rapid rate.

    Quotes:

    “We don’t expect our customers to suddenly become data scientists and learn about modeling and everything, so we give them a very intuitive, relatively simple environment in which they can define their own models.” — @eckely [0:07:03]

    “[Data] is not a huge obstacle to adopting smart trackers.” — @eckely [0:12:13]

    “Our current vibe is there’s a limit to this technology. We are still unevolved apes.” — @eckely [0:16:27]

    Links Mentioned in Today’s Episode:

    Jacob Eckel on LinkedIn

    Jacob Eckel on X

    Gong

    How AI Happens

    Sama

  • Bobak further opines on the pros and cons of Perplexity and GPT 4.0, the differences between them, and why the technology uses both models. Finally, our guest tells us why Brilliant Labs is open-source and reminds us why public participation is so important.

    Key Points From This Episode:

    Introducing Bobak Tavangar to today’s episode of How AI Happens.
    Bobak tells us about his background and what led him to start his company, Brilliant Labs.
    Our guest shares his interesting Lord of the Rings analogy and how it relates to his business.
    How wearable technology is creeping more and more into our lives.
    The hurdles they face with generative AI glasses and how they’re overcoming them.
    How Bobak chose the most important factors to incorporate into the glasses.
    What the glasses can do at this stage of development.
    Bobak explains how the glasses know whether to query GPT 4.0 or Perplexity AI.
    GPT 4.0 versus Perplexity and why Bobak prefers to use them both.
    The importance of gauging public reaction and why Brilliant Labs is open-source.

    Quotes:

    “To have a second pair of eyes that can connect everything we see with all the information on the web and everything we’ve seen previously – is an incredible thing.” — @btavangar [0:13:12]

    “For live web search, Perplexity – is the most precise [and] it gives the most meaningful answers from the live web.” — @btavangar [0:26:40]

    “The [AI] space is changing so fast. It’s exciting [and] it’s good for all of us but we don’t believe you should ever be locked to one model or another.” — @btavangar [0:28:45]

    Links Mentioned in Today’s Episode:

    Bobak Tavangar on LinkedIn

    Bobak Tavangar on X

    Bobak Tavangar on Instagram

    Brilliant Labs

    Perplexity AI

    GPT 4.0

    How AI Happens

    Sama

  • Andrew shares how generative AI is used by academic institutions, why employers and educators need to curb their fear of AI, what we need to consider for using AI responsibly, and the ins and outs of Andrew’s podcast, Insight x Design.

    Key Points From This Episode:

    Andrew Madson explains what a tech evangelist is and what his role at Dremio entails.
    The ins and outs of Dremio.
    Understanding the pain points that Andrew wanted to alleviate by joining Dremio.
    How Andrew became a tech evangelist, and why he values this role.
    Why all tech roles now require one to upskill and branch out into other areas of expertise.
    The problems that Andrew most commonly faces at work, and how he overcomes them.
    How Dremio uses generative AI, and how the technology is used in academia.
    Why employers and educators need to do more to encourage the use of AI.
    The provenance of training data, and other considerations for the responsible use of AI.
    Learning more about Andrew’s new podcast, Insight x Design.

    Quotes:

    “Once I learned about lakehouses and Apache Iceberg and how you can just do all of your work on top of the data lake itself, it really made my life a lot easier with doing real-time analytics.” — @insightsxdesign [0:04:24]

    “Data analysts have always been expected to be technical, but now, given the rise of the amount of data that we’re dealing with and the limitations of data engineering teams and their capacity, data analysts are expected to do a lot more data engineering.” — @insightsxdesign [0:07:49]

    “Keeping it simple and short is ideal when dealing with AI.” — @insightsxdesign [0:12:58]

    “The purpose of higher education isn’t to get a piece of paper, it’s to learn something and to gain new skills.” — @insightsxdesign [0:17:35]

    Links Mentioned in Today’s Episode:

    Andrew Madson

    Andrew Madson on LinkedIn

    Andrew Madson on X

    Andrew Madson on Instagram

    Dremio

    Insights x Design

    Apache Iceberg

    ChatGPT

    Perplexity AI

    Gemini

    Anaconda

    Peter Wang on LinkedIn

    How AI Happens

    Sama

  • Tom shares further thoughts on financing in the AI tech venture capital space, whether or not data centers pose a threat to the relevance of the Cloud, his predictions for the future of GPUs, and much more.

    Key Points From This Episode:

    Introducing Tomasz Tunguz, General Partner at Theory Ventures.
    What he is currently working on, including AI research and growing the team at Theory.
    How he goes about researching the present to predict the future.
    Why professionals often work in both academia and the field of AI.
    What stands out to Tom when he is looking for companies to invest in.
    Varying applications where an 80% answer has differing relevance.
    The importance of being at the forefront of AI developments as a leader.
    Why the metrics of risk and success used in the past are no longer relevant.
    Tom’s thoughts on whether or not Generative AI will replace search.
    Financing in the AI tech venture capital space.
    Differentiating between the Cloud and data centers.
    Predictions for the future of GPUs.
    Why ‘hello’ is the best opener for a cold email.

    Quotes:

    “Innovation is happening at such a deep technological level and that is at the core of machine learning models.” — @tomastungusz [0:03:37]

    “Right now, we’re looking at where [is] there rote work or human toil that can be repeated with AI? That’s one big question where there’s not a really big incumbent.” — @tomastungusz [0:05:51]

    “If you are the leader of a team or a department or a business unit or a company, you can not be in a position where you are caught off guard by AI. You need to be on the forefront.” — @tomastungusz [0:08:30]

    “The dominant dynamic within consumer products is the least friction in a user experience always wins.” — @tomastungusz [0:14:05]

    Links Mentioned in Today’s Episode:

    Tomasz Tunguz

    Tomasz Tunguz on LinkedIn

    Tomasz Tunguz on X

    Theory Ventures

    How AI Happens

    Sama

  • Kordel is the CTO and Founder of Theta Diagnostics, and today he joins us to discuss the work he is doing to develop a sense of smell in AI. We discuss the current and future use cases they’ve been working on, the advancements they’ve made, and how to answer the question “What is smell?” in the context of AI. Kordel also provides a breakdown of their software program Alchemy, their approach to collecting and interpreting data on scents, and how he plans to help machines recognize the context for different smells. To learn all about the fascinating work that Kordel is doing in AI and the science of smell, be sure to tune in!
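
    One hypothetical way to picture “smell as a pattern machines can recognize” (this is not Theta Diagnostics’ Alchemy, and the channel values below are invented) is to treat each smell as a vector of gas-sensor channel responses and label a new reading by its cosine similarity to known signatures.

    ```python
    import math

    # Invented signatures: relative response of four sensor channels per smell.
    SIGNATURES = {
        "coffee":  [0.9, 0.3, 0.1, 0.2],
        "smoke":   [0.2, 0.8, 0.7, 0.1],
        "ethanol": [0.1, 0.2, 0.3, 0.9],
    }

    def cosine(u: list[float], v: list[float]) -> float:
        """Cosine similarity between two sensor-response vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    def classify(reading: list[float]) -> tuple[str, float]:
        """Return the best-matching smell label and its similarity score."""
        return max(((name, cosine(reading, sig)) for name, sig in SIGNATURES.items()),
                   key=lambda item: item[1])

    if __name__ == "__main__":
        print(classify([0.85, 0.25, 0.15, 0.25]))  # -> ('coffee', ~0.99)
    ```

    A real system would also have to handle calibration, sensor drift, and context, which is where the added modalities Kordel mentions come in.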

    Key Points From This Episode:

    Introducing today’s guest, Kordel France.
    How growing up on a farm encouraged his interest in AI.
    An overview of Kordel’s education and the subjects he focused on.
    His work today and how he is teaching machines to smell.
    Existing use cases for smell detection, like the breathalyzer test and smoke detectors.
    The fascinating ways that the ability to pick up certain smells differs between people.
    Unpacking the elusive question “What is smell?”
    How to apply this question to AI development.
    Conceptualizing smell as a pattern that machines can recognize.
    Examples of current and future use cases that Kordel is working on.
    How he trains his devices to recognize smells and compounds.
    A breakdown of their autonomous gas system (AGS).
    How their software program, Alchemy, helps them make sense of their data.
    Kordel’s aspiration to add modalities to his sensors that will create context for smells.

    Quotes:

    “I became interested in machine smell because I didn't see a lot of work being done on that.” — @kordelkfrance [0:08:25]

    “There's a lot of people that argue we can't actually achieve human-level intelligence until we've incorporated all five senses into an artificial being.” — @kordelkfrance [0:08:36]

    “To me, a smell is a collection of compounds that represent something that we can recognize. A pattern that we can recognize.” — @kordelkfrance [0:17:28]

    “Right now we have about three dozen to four dozen compounds that we can with confidence detect.” — @kordelkfrance [0:19:04]

    “[Our autonomous gas system] is really this interesting system that's hooked up to a bunch of machine learning, that helps calibrate and detect and determine what a smell looks like for a specific use case and breaking that down into its constituent compounds.” — @kordelkfrance [0:23:20]

    “The success of our device is not just the sensing technology, but also the ability of Alchemy [our software program] to go in and make sense of all of these noise patterns and just make sense of the signals themselves.” — @kordelkfrance [0:25:41]

    Links Mentioned in Today’s Episode:

    Kordel France

    Kordel France on LinkedIn

    Kordel France on X

    Theta Diagnostics

    Alchemy by Theta Diagnostics

    How AI Happens

    Sama

  • After describing the work done at StoneX and her role at the organization, Elettra explains what drew her to neural networks, defines data science and how she overcame the challenges of learning something new on the job, breaks down what a data scientist needs to succeed, and shares her thoughts on why many still don’t fully understand the industry. Our guest also tells us how she identifies an inadequate data set, the recent innovations that are under construction at StoneX, how to ensure that your AI and ML models are compliant, and the importance of understanding AI as a mere tool to help you solve a problem.

    Key Points From This Episode:

    Elettra Damaggio explains what StoneX Group does and how she ended up there.
    Her professional journey and how she acquired her skills.
    The state of neural networks while she was studying them, why she was drawn to the subject, and how it’s changed.
    StoneX’s data science and ML capabilities when she arrived, and Elettra’s role in the system.
    Her first experience of being thrown into the deep end of data science, and how she swam.
    A data scientist’s tools for success.
    The multidisciplinary leaders and departments that she sought to learn from when she entered data science.
    Defining data science, and why many do not fully understand the industry.
    How Elettra knows when her data set is inadequate.
    The recent projects and ML models that she’s been working on.
    Exploring the types of guardrails that are needed when training chatbots to be compliant.
    Elettra’s advice to those following a career path similar to hers.

    Quotes:

    “The best thing that you can have as a data scientist to be set up for success is to have a decent data warehouse.” — Elettra Damaggio [0:09:17]

    “I am very much an introverted person. With age, I learned how to talk to people, but that wasn’t [always] the case.” — Elettra Damaggio [0:12:38]

    “In reality, the hard part is to get to the data set – and the way you get to that data set is by being curious about the business you’re working with.” — Elettra Damaggio [0:13:58]

    “[First], you need to have an idea of what is doable, what is not doable, [and] more importantly, what might solve the problem that [the client may] have, and then you can have a conversation with them.” — Elettra Damaggio [0:19:58]

    “AI and ML is not the goal; it’s the tool. The goal is solving the problem.” — Elettra Damaggio [0:28:28]

    Links Mentioned in Today’s Episode:

    Elettra Damaggio on LinkedIn

    StoneX Group

    How AI Happens

    Sama

  • Mike Miller is the Director of Project Management at AWS, and he joins us today to share about the inspirational AI-powered products and services that are making waves at Amazon, particularly those with generative prompt engineering capabilities. We discuss how Mike and his team choose which products to bring to market, the ins and outs of PartyRock including the challenges of developing it, AWS’s strategy for generative AI, and how the company aims to serve everyone, even those with very little technical knowledge. Mike also explains how customers are using his products and what he’s learned from their behaviors, and we discuss what may lie ahead in the future of generative prompt engineering.

    Key Points From This Episode:

    Mike Miller’s professional background, and how he got into AI and AWS.
    How Mike and his team decide on the products to bring to market for developers.
    Where PartyRock came from and how it fits into AWS’s strategy.
    How AWS decided on the timing to make PartyRock accessible to all.
    What AWS’s products mean for those with zero coding experience.
    The level of oversight that is required to service clients who have no technical background.
    Taking a closer look at AWS’s strategy for generative AI.
    How customers are using PartyRock, and what Mike has learned from these observations.
    The challenges that the team faced whilst developing PartyRock, and how they persevered.
    Trying to understand the future of generative prompt engineering.
    A reminder that PartyRock is free, so go try it out!

    Quotes:

    “We were working on AI and ML [at Amazon] and discovered that developers learned best when they found relevant, interesting, [and] hands-on projects that they could work on. So, we built DeepLens as a way to provide a fun opportunity to get hands-on with some of these new technologies.” — Mike Miller [0:02:20]

    “When we look at AIML and generative AI, these things are transformative technologies that really require almost a new set of intuition for developers who want to build on these things.” — Mike Miller [0:05:19]

    “In the long run, innovations are going to come from everywhere; from all walks of life, from all skill levels, [and] from different backgrounds. The more of those people that we can provide the tools and the intuition and the power to create innovations, the better off we all are.” — Mike Miller [0:13:58]

    “Given a paintbrush and a blank canvas, most people don’t wind up with The Sistine Chapel. [But] I think it’s important to give people an idea of what is possible.” — Mike Miller [0:25:34]

    Links Mentioned in Today’s Episode:

    Mike Miller on LinkedIn

    Amazon Web Services

    AWS DeepLens

    AWS DeepRacer

    AWS DeepComposer

    PartyRock

    Amazon Bedrock

    How AI Happens

    Sama

  • Key Points From This Episode:

    Welcoming Seth Walker to the podcast.
    The importance of being agile in AI.
    All about Seth’s company, Carrier, and what they do.
    Seth tells us about his background and how he ended up at Carrier.
    How Seth goes about unlocking the power of AI.
    The different levels of success when it comes to AI creation and how to measure them.
    Seth breaks down the different things Carrier focuses on.
    The importance of prompt engineering.
    What makes him excited about the new iterations of machine learning.

    Quotes:

    “In many ways, Carrier is going to be a necessary condition in order for AI to exist.” — Seth Walker [0:04:08]

    “What’s hard about generating value with AI is doing it in a way that is actually actionable toward a specific business problem.” — Seth Walker [0:09:49]

    “One of the things that we’ve found through experimentation with generative AI models is that they’re very sensitive to your content. I mean, there’s a reason that prompt engineering has become such an important skill to have.” — Seth Walker [0:25:56]

    Links Mentioned in Today’s Episode:

    Seth Walker on LinkedIn

    Carrier

    How AI Happens

    Sama

  • Philip recently had the opportunity to speak with 371 customers from 15 different countries to hear their thoughts, fears, and hopes for AI. Tuning in you’ll hear Philip share his biggest takeaways from these conversations, his opinion on the current state of AI, and his hopes and predictions for the future. Our conversation explores key topics, like government and company attitudes toward AI, why adversarial datasets will need to be audited, and much more. To hear the full scope of our conversation with Philip – and to find out how 2024 resembles 1997 – be sure to tune in today!

    Key Points From This Episode:

    Some background on Philip Moyer and his role as part of Google’s AI engineering team.
    What he learned from speaking with 371 customers from 15 different countries about AI.
    Philip shares his insights on how governments and companies are approaching AI.
    Recognizing the risks and requirements of models and how to manage them.
    Adversarial datasets: what they are and why they need to be audited.
    Understanding how adversarial datasets can vary between industries.
    A breakdown of Google’s approach to adversarial datasets in different languages.
    The most relevant takeaways from Philip’s cross-continental survey.
    How 2024 resembles the technological and competitive business landscape of 1997.
    Google’s partnership with Nvidia and how they are providing technologies at every layer.
    The new class of applications that come with generative AI.
    Using a company’s proprietary data to train generative AI models.
    The collective challenges we are all facing when it comes to creating generative AI at scale.
    Understanding the vectorization of knowledge and why it will need to be auditable.
    Philip shares what he is most excited about when it comes to AI.

    Quotes:

    “What's been so incredible to me is how forward-thinking – a lot of governments are on this topic [of AI] and their understanding of – the need to be able to make sure that both their citizens as well as their businesses make the best use of artificial intelligence.” — Philip Moyer [0:02:52]

    “Nobody's ahead and nobody's behind. Every single company that I'm speaking to, has about one to five use cases live. And they have hundreds that are on the docket.” — Philip Moyer [0:15:36]

    “All of us are facing the exact same challenges right now of doing [generative AI] at scale.” — Philip Moyer [0:17:03]


    “You should just make an assumption that you're going to be somewhere on the order of about 10 to 15% more productive with AI.” — Philip Moyer [0:25:22]

    “[With AI] I get excited around proficiency and job satisfaction because I really do think – we have an opportunity to make work fun again.” — Philip Moyer [0:27:10]

    Links Mentioned in Today’s Episode:

    Philip Moyer on LinkedIn

    How AI Happens

    Sama

  • Joelle further discusses the relationship between her work, AI, and the end users of her products, as well as her summation of information modalities, world models versus word models, and the role of responsibility in the current high-stakes environment of technology development.

    Key Points From This Episode:

    Joelle Pineau's professional background and how she ended up at Meta.
    The aspects of AI robotics that fascinate her the most.
    Why elegance is an important element in Joelle's machine learning systems.
    How asking the right question is the most vital part of research and how to get better at it.
    FRESCO: how Joelle chooses which projects to work on.
    The relationship between her work, AI, and the end users of her final products.
    What success looks like for her and her team at Meta.
    World models versus word models and her summation of information modalities.
    What Joelle thinks about responsibility in the current high-stakes environment of technology development.

    Quotes:

    “Perhaps, the most important thing in research is asking the right question.” — @jpineau1 [0:05:10]

    “My role isn't to set the problems for [the research team], it's to set the conditions for them to be successful.” — @jpineau1 [0:07:29]

    “If we're going to push for state-of-the-art on the scientific and engineering aspects, we must push for state-of-the-art in terms of social responsibility.” — @jpineau1 [0:20:26]

    Links Mentioned in Today’s Episode:

    Joelle Pineau on LinkedIn

    Joelle Pineau on X

    Meta

    How AI Happens

    Sama

  • Key Points From This Episode:

    Amii’s machine learning project management tool: MLPL.
    Amii’s ultimate goal of building capacity and how it differs from an agency model.
    Asking the right questions to ascertain the appropriate use for AI.
    Instances where AI is not a relevant solution.
    Common challenges people face when adopting AI strategies.
    Mara’s perspective on the education necessary to excel in a career in machine learning.

    Quotes:

    “Amii is all about capacity building, so we’re not a traditional agent in that sense. We are trying to educate and inform industry on how to do this work, with Amii at first, but then without Amii at the end.” — Mara Cairo [0:06:20]

    “We need to ask the right questions. That’s one of the first things we need to do, is to explore where the problems are.” — Mara Cairo [0:07:46]

    “We certainly are comfortable turning certain business problems away if we don’t feel it’s an ethical match or if we truly feel it isn’t a problem that will benefit much from machine learning.” — Mara Cairo [0:11:52]

    Links Mentioned in Today’s Episode:

    Mara Cairo

    Mara Cairo on LinkedIn

    Alberta Machine Intelligence Institute (Amii)

    How AI Happens

    Sama

  • Jerome discusses Meta’s Segment Anything Model, Ego-Exo4D, the nature of self-supervised learning, and what it would mean to have a non-language-based approach to machine teaching.

    For more, including quotes from Meta Researchers, check out the Sama Blog

  • Bryan discusses what constitutes industrial AI, its applications, and how it differs from standard AI processes. We explore the innovative process of deep reinforcement learning (DRL), replicating human expertise with machines, and the types of AI approaches available. Gain insights into the current trends and the future of generative AI, the existing gaps and opportunities, why DRL is a game-changer and much more! Join us as we unpack the nuances of industrial AI, its vast potential, and how it is shaping the industries of tomorrow. Tune in now!
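
    Bryan's point that deep reinforcement learning learns by doing rather than from a fixed dataset can be shown with its much smaller cousin, tabular Q-learning. The sketch below is a generic illustration with an invented toy “process” environment, not RoviSys code: the control policy improves purely through interaction, with no training dataset involved.

    ```python
    import random

    LEVELS, TARGET = 5, 2          # toy process: keep an integer "level" at the setpoint
    ACTIONS = [-1, 0, +1]          # nudge the level down, hold, or nudge it up
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

    def step(level: int, action: int) -> tuple[int, float]:
        """Invented dynamics: apply the nudge, reward closeness to the setpoint."""
        new_level = max(0, min(LEVELS - 1, level + action))
        return new_level, -abs(new_level - TARGET)

    def train(episodes: int = 500) -> dict[int, list[float]]:
        q = {s: [0.0] * len(ACTIONS) for s in range(LEVELS)}
        for _ in range(episodes):
            level = random.randrange(LEVELS)
            for _ in range(20):                # 20 interaction steps per episode
                if random.random() < EPSILON:
                    a = random.randrange(len(ACTIONS))
                else:
                    a = max(range(len(ACTIONS)), key=lambda i: q[level][i])
                new_level, reward = step(level, ACTIONS[a])
                # Q-learning update: learn from the experienced transition, no dataset.
                q[level][a] += ALPHA * (reward + GAMMA * max(q[new_level]) - q[level][a])
                level = new_level
        return q

    if __name__ == "__main__":
        q = train()
        policy = {s: ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[s][i])]
                  for s in range(LEVELS)}
        print(policy)  # expected: push up below the setpoint, down above it, hold at it
    ```

    Deep reinforcement learning replaces the lookup table with a neural network so the same learn-by-interaction loop can scale to the large state spaces found in industrial settings.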

    Key Points From This Episode:

    Bryan’s professional background and his role in the company.
    Unpack the concept of “industrial AI” and its various applications.
    The current state and trends of AI in the industrial landscape.
    Deep reinforcement learning (DRL) and how it applies to industrial AI.
    Why deep RL is a game-changer for solving industrial problems.
    Learn about autonomous AI, machine teaching, and explainable AI.
    Discover the approach for replicating human expertise with machines.
    Opportunities and challenges of using machine teaching techniques.
    Differences between monolithic deep learning and standard deep learning.
    His perspective on current trends and the future of generative AI.

    Quotes:

    “We typically look at industrial [AI] as you are either making something or you are moving something.” — Bryan DeBois [0:04:36]

    “One of the key distinctions with deep reinforcement learning is that it learns by doing and not by data.” — Bryan DeBois [0:10:22]

    “Autonomous AI is more of a technique than a technology.” — Bryan DeBois [0:16:00]

    “We have to have [AI] systems that we can count on, that work within constraints, and give right answers every time.” — Bryan DeBois [0:29:04]

    Links Mentioned in Today’s Episode:

    Bryan DeBois on LinkedIn

    Bryan DeBois Email

    RoviSys

    RoviSys AI

    Designing Autonomous AI

    How AI Happens

    Sama

  • 2023 ML Pulse Report

    Joining us today are our panelists, Duncan Curtis, SVP of AI products and technology at Sama, and Jason Corso, a professor of robotics, electrical engineering, and computer science at the University of Michigan. Jason is also the chief science officer at Voxel51, an AI software company specializing in developer tools for machine learning. We use today’s conversation to discuss the findings of the latest Machine Learning (ML) Pulse report, published each year by our friends at Sama. This year’s report focused on the role of generative AI by surveying thousands of practitioners in this space. Its findings include feedback on how respondents are measuring their model’s effectiveness, how confident they feel that their models will survive production, and whether they believe generative AI is worth the hype. Tuning in you’ll hear our panelists’ thoughts on key questions in the report and its findings, along with their suggested solutions for some of the biggest challenges faced by professionals in the AI space today. We also get into a bunch of fascinating topics like the opportunities presented by synthetic data, the latent space in language processing approaches, the iterative nature of model development, and much more. Be sure to tune in for all the latest insights on the ML Pulse Report!
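
    For readers unfamiliar with the “latent space” idea the panel touches on, the sketch below is a toy latent semantic analysis example in Python using numpy; the four-sentence corpus is invented and is not from the report. A rank-2 SVD of the term-document counts gives each sentence coordinates in a small latent space, where related sentences land close together.

    ```python
    import numpy as np

    # Invented corpus; the "latent space" comes from a rank-2 SVD (classic LSA).
    docs = [
        "the model predicts the label",
        "the network predicts the class",
        "the chef seasons the soup",
        "the cook tastes the soup",
    ]
    vocab = sorted({w for d in docs for w in d.split()})
    counts = np.array([[d.split().count(w) for w in vocab] for d in docs], dtype=float)

    # Project each document onto the top-2 latent dimensions.
    U, S, Vt = np.linalg.svd(counts, full_matrices=False)
    latent = U[:, :2] * S[:2]          # document coordinates in the 2-D latent space

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(latent[0], latent[1]))  # the two ML sentences: close in latent space
    print(cosine(latent[0], latent[2]))  # ML vs. cooking: noticeably farther apart
    ```

    Modern language models learn far richer latent representations than this, but the intuition of comparing items by proximity in a learned space is the same.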

    Key Points From This Episode:

    Introducing today’s panelists, Duncan Curtis and Jason Corso.
    An overview of what the Machine Learning (ML) Pulse report focuses on.
    Breaking down what the term generative means in AI.
    Our thoughts on key findings from the ML Pulse Report.
    What respondents, and our panelists, think of hype around generative AI.
    Unpacking one of the biggest advances in generative AI: accessibility.
    Insights on cloud versus local in an AI context.
    Generative AI use cases in the field of computer vision.
    The powerful opportunities presented by synthetic data.
    Why the role of human feedback in synthetic data is so important.
    Finding a middle ground between human language and machine understanding.
    Unpacking the notion of latent space in language processing approaches.
    How confident respondents feel that their models will survive production.
    The challenges of predicting how well a model will perform.
    An overview of the biggest challenges reported by respondents.
    Suggested solutions from panelists on key challenges from the report.
    How respondents are measuring the effectiveness of their models.
    What Duncan and Jason focus on to measure success.
    Career advice from our panelists on making meaningful contributions to this space.

    Quotes:

    “It's really hard to know how well your model is going to do.” — Jason Corso [0:27:10]

    “With debugging and detecting errors in your data, I would definitely say look at some of the tooling that can enable you to move more quickly and understand your data better.” — Duncan Curtis [0:33:55]

    “Work with experts – there's no replacement for good experience when it comes to actually boxing in a problem, especially in AI.” — Jason Corso [0:35:37]

    “It's not just about how your model performs. It's how your model performs when it's interacting with the end user.” — Duncan Curtis [0:41:11]

    “Remember, what we do in this field, and in all fields really, is by humans, for humans, and with humans. And I think if you miss that idea [then] you will not achieve – either your own potential, the group you're working with, or the tool.” — Jason Corso [0:48:20]

    Links Mentioned in Today’s Episode:


    Duncan Curtis on LinkedIn

    Jason Corso

    Jason Corso on LinkedIn

    Voxel51

    2023 ML Pulse Report

    ChatGPT

    Bard

    DALL·E 3

    How AI Happens

    Sama