Episodes
-
Today we are joined by Siddhika Nevrekar, an experienced product leader passionate about solving complex problems in ML by bringing people and products together in an environment of trust. We unpack the state of edge computing, the challenges of training AI models for the edge, what Siddhika hopes to achieve in her role at Qualcomm, and her methods for solving common industry problems that developers face.
Key Points From This Episode:
Siddhika Nevrekar walks us through her career pivot from cloud to edge computing.
Why she’s passionate about overcoming her fears and achieving the impossible.
Increasing compute on edge devices versus developing more efficient AI models.
Siddhika explains what makes Apple a truly unique company.
The original inspirations for edge computing and how the conversation has evolved.
Unpacking the current state of edge computing and what may happen in the near future.
The challenges of training AI models for the edge.
Exploring Siddhika’s role at Qualcomm and what she hopes to achieve.
Diving deeper into her process for achieving her goals.
Common industry challenges that developers are facing and her methods for solving them.
Quotes:
“Ultimately, we are constrained with the size of the device. It’s all physics. How much can you compress a small little chip to do what hundreds and thousands of chips can do which you can stack up in a cloud? Can you actually replicate that experience on the device?” — @siddhika_
“By the time I left Apple, we had 1000-plus [AI] models running on devices and 10,000 applications that were powered by AI on the device, exclusively on the device. Which means the model is entirely on the device and is not going into the cloud. To me, that was the realization that now the moment has arrived where something magical is going to start happening with AI and ML.” — @siddhika_
Links Mentioned in Today’s Episode:
Siddhika Nevrekar on LinkedIn
Siddhika Nevrekar on X
Qualcomm AI Hub
How AI Happens
Sama
-
Today we are joined by Developer Advocate at Block, Rizel Scarlett, who is here to explain how to bridge the gap between the technical and non-technical aspects of a business. We also learn about AI hallucinations and how Rizel and Block approach this particular pain point, the burdens of responsibility of AI users, why it’s important to make AI tools accessible to all, and the ins and outs of G{Code} House – a learning community for Indigenous and women of color in tech. To end, Rizel explains what needs to be done to break down barriers to entry for the G{Code} population in tech, and she describes the ideal relationship between a developer advocate and the technical arm of a business.
Key Points From This Episode:
Rizel Scarlett describes the role and responsibilities of a developer advocate.
Her role in getting others to understand how GitHub Copilot should be used.
Exploring her ongoing projects and current duties at Block.
How the conversation around AI copilot tools has shifted in the last 18 months.
The importance of objection handling and why companies must pay more attention to it.
AI hallucinations and Rizel’s advice for approaching this particular pain point.
Why “I don’t know” should be encouraged as a response from AI companions, not shunned.
Taking a closer look at how Block addresses AI hallucinations.
The burdens of responsibility of users of AI, and the need to democratize access to AI tools.
Unpacking G{Code} House and Rizel’s working relationship with this learning community.
Understanding what prevents Indigenous and women of color from having careers in tech.
The ideal relationship between a developer advocate and the technical arm of a business.
Quotes:
“Every company is embedding AI into their product someway somehow, so it’s being more embraced.” — @blackgirlbytes [0:11:37]
“I always respect someone that’s like, ‘I don’t know, but this is the closest I can get to it.’” — @blackgirlbytes [0:15:25]
“With AI tools, when you’re more specific, the results are more refined.” — @blackgirlbytes [0:16:29]
Links Mentioned in Today’s Episode:
Rizel Scarlett
Rizel Scarlett on LinkedIn
Rizel Scarlett on Instagram
Rizel Scarlett on X
Block
Goose
GitHub
GitHub Copilot
G{Code} House
How AI Happens
Sama
-
Missing episodes?
-
Key Points From This Episode:
Drew and his co-founders’ background working together at RJ Metrics.
The lack of existing data solutions for Amazon Redshift and how they started dbt Labs.
Initial adoption of dbt Labs and why it was so well-received from the very beginning.
The concept of a semantic layer and how dbt Labs uses it in conjunction with LLMs.
Drew’s insights on a recent paper by Apple on the limitations of LLMs’ reasoning.
Unpacking examples where LLMs struggle with specific questions, like math problems.
The importance of thoughtful prompt engineering and application design with LLMs.
What is needed to maximize the utility of LLMs in enterprise settings.
How understanding the specific use case can help you get better results from LLMs.
What developers can do to constrain the search space and provide better output.
Why Drew believes prompt engineering will become less important for the average user.
The exciting potential of vector embeddings and the ongoing evolution of LLMs.
Quotes:
“Our observation was [that] there needs to be some sort of way to prepare and curate data sets inside of a cloud data warehouse. And there was nothing out there that could do that on [Amazon] Redshift, so we set out to build it.” — Drew Banin [0:02:18]
“One of the things we're thinking a ton about today is how AI and the semantic layer intersect.” — Drew Banin [0:08:49]
“I don't fundamentally think that LLMs are reasoning in the way that human beings reason.” — Drew Banin [0:15:36]
“My belief is that prompt engineering will – become less important – over time for most use cases. I just think that there are enough people that are not well versed in this skill that the people building LLMs will work really hard to solve that problem.” — Drew Banin [0:23:06]
Links Mentioned in Today’s Episode:
Understanding the Limitations of Mathematical Reasoning in Large Language Models
Drew Banin on LinkedIn
dbt Labs
How AI Happens
Sama
-
In this episode, you’ll hear about Meeri's incredible career, insights from the recent AI Pact conference she attended, her company's involvement, and how we can articulate the reality of holding companies accountable to AI governance practices. We discuss how to know if you have an AI problem, what makes third-party generative AI more risky, and so much more! Meeri even shares how she thinks the EU AI Act will impact AI companies and what companies can do to take stock of their risk factors and ensure that they are building responsibly. You don’t want to miss this one, so be sure to tune in now!
Key Points From This Episode:
Insights from the AI Pact conference.
The reality of holding AI companies accountable.
What inspired her to start Saidot to offer solutions for AI transparency and accountability.
How Meeri assesses companies and their organizational culture.
What makes generative AI more risky than other forms of machine learning.
Reasons that use-related risks are the most common sources of AI risks.
Meeri’s thoughts on the impact of the EU AI Act.
Quotes:
“It’s best to work with companies who know that they already have a problem.” — @meerihaataja [0:09:58]
“Third-party risks are way bigger in the context of [generative AI].” — @meerihaataja [0:14:22]
“Use and use-context-related risks are the major source of risks.” — @meerihaataja [0:17:56]
“Risk is fine if it’s on an acceptable level. That’s what governance seeks to do.” — @meerihaataja [0:21:17]
Links Mentioned in Today’s Episode:
Saidot
Meeri Haataja on LinkedIn
Meeri Haataja on Instagram
Meeri Haataja on X
How AI Happens
Sama
-
In this episode, Dr. Zoldi offers insight into the transformative potential of blockchain for ensuring transparency in AI development, the critical need for explainability over mere predictive power, and how FICO maintains trust in its AI systems through rigorous model development standards. We also delve into the essential integration of data science and software engineering teams, emphasizing that collaboration from the outset is key to operationalizing AI effectively.
Key Points From This Episode:
How Scott integrates his role as an inventor with his duties as FICO CAO.
Why he believes that mindshare is an essential leadership quality.
What sparked his interest in responsible AI as a physicist.
The shifting demographics of those who develop machine learning models.
Insight into the use of blockchain to advance responsible AI.
How FICO uses blockchain to ensure auditable ML decision-making.
Operationalizing AI and the typical mistakes companies make in the process.
The value of integrating data science and software engineering teams from the start.
A fear-free perspective on what Scott finds so uniquely exciting about AI.
Quotes:
“I have to stay ahead of where the industry is moving and plot out the directions for FICO in terms of where AI and machine learning is going – [Being an inventor is critical for] being effective as a chief analytics officer.” — @ScottZoldi [0:01:53]
“[AI and machine learning] is software like any other type of software. It's just software that learns by itself and, therefore, we need [stricter] levels of control.” — @ScottZoldi [0:23:59]
“Data scientists and AI scientists need to have partners in software engineering. That's probably the number one reason why [companies fail during the operationalization process].” — @ScottZoldi [0:29:02]
Links Mentioned in Today’s Episode:
FICO
Dr. Scott Zoldi
Dr. Scott Zoldi on LinkedIn
Dr. Scott Zoldi on X
FICO Falcon Fraud Manager
How AI Happens
Sama
-
Jay breaks down the critical role of software optimizations and how they drive performance gains in AI, highlighting the importance of reducing inefficiencies in hardware. He also discusses the long-term vision for Lemurian Labs and the broader future of AI, pointing to the potential breakthroughs that could redefine industries and accelerate innovation, plus a whole lot more.
Key Points From This Episode:
Jay’s diverse professional background and his attraction to solving unsolvable problems.
How his unfinished business in robotics led him to his current work at Lemurian Labs.
What he has learned from being CEO and the biggest obstacles he has had to overcome.
Why he believes engineers with a problem-solving mindset can be effective CEOs.
Lemurian Labs: making AI computing more efficient, affordable, and environmentally friendly.
The critical role of software in increasing AI efficiency.
Some of the biggest challenges in programming GPUs.
Why better software is needed to optimize the use of hardware.
Common inefficiencies in AI development and how to solve them.
Reflections on the future of Lemurian Labs and AI more broadly.
Quotes:
“Every single problem I've tried to pick up has been one that – most people have considered as being almost impossible. There’s something appealing about that.” — Jay Dawani [0:02:58]
“No matter how good of an idea you put out into the world, most people don't have the motivation to go and solve it. You have to have an insane amount of belief and optimism that this problem is solvable, regardless of how much time it's going to take.” — Jay Dawani [0:07:14]
“If the world's just betting on one company, then the amount of compute you can have available is pretty limited. But if there's a lot of different kinds of compute that are slightly optimized with different resources, making them accessible allows us to get there faster.” — Jay Dawani [0:19:36]
“Basically what we're trying to do [at Lemurian Labs] is make it easy for programmers to get [the best] performance out of any hardware.” — Jay Dawani [0:20:57]
Links Mentioned in Today’s Episode:
Jay Dawani on LinkedIn
Lemurian Labs
How AI Happens
Sama
-
Melissa explains the importance of giving developers the choice of working with open source or proprietary options, experimenting with flexible application models, and choosing the size of your model according to the use case you have in mind. Discussing the democratization of technology, we explore common challenges in the context of AI including the potential of generative AI versus the challenge of its implementation, where true innovation lies, and what Melissa is most excited about seeing in the future.
Key Points From This Episode:
An introduction to Melissa Evers, Vice President and General Manager of Strategy and Execution at Intel Corporation.
More on the communities she has played a leadership role in.
Why open source governance is not an oxymoron and why it is critical.
The hard work that goes on behind the scenes at open source.
What to strive for when building a healthy open source community.
Intel’s perspective on the importance of open source and open AI.
Enabling developer choices about open source or proprietary options.
Growing awareness around building architecture around the freedom of choice.
Identifying that a model is a bad choice or lacking in accuracy.
Thinking critically about future-proofing yourself with regard to model choice.
Opportunities for large and smaller models.
Finding the perfect intersection between value delivery, value creation, and cost.
Common challenges in the context of AI, including the potential of generative AI and its implementation.
Why there is such a commonality of use cases in the realm of generative AI.
Where true innovation and value lie even though there may be commonality in use cases.
Examples of creative uses of generative AI: retail, compound AI systems, manufacturing, and more.
Understanding that innovation in this area is still in its early development stages.
How Wardley Mapping can support an understanding of scale.
What she is most excited about for the future of AI: rapid learning in healthcare.
Quotes:
“One of the things that is true about software in general is that the role that open source plays within the ecosystem has dramatically shifted and accelerated technology development at large.” — @melisevers [0:03:02]
“It’s important for all citizens of the open source community, corporate or not, to understand and own their responsibilities with regard to the hard work of driving the technology forward.” — @melisevers [0:05:18]
“We believe that innovation is best served when folks have the tools at their disposal on which to innovate.” — @melisevers [0:09:38]
“I think the focus for open source broadly should be on the elements that are going to be commodified.” — @melisevers [0:25:04]
Links Mentioned in Today’s Episode:
Melissa Evers on LinkedIn
Melissa Evers on X
Intel Corporation
-
VP of AI and ML at Synopsys, Thomas Andersen joins us to discuss designing AI chips. Tuning in, you’ll hear all about our guest’s illustrious career, how he became interested in technology, tech in East Germany, what it was like growing up there, and so much more! We delve into his company, Synopsys, and the chips they build before discussing his role in building algorithms.
Key Points From This Episode:
A warm welcome to today’s guest, Thomas Andersen.
How he got into the tech world and his experience growing up in East Germany.
The cost of AI compute coming down at the same time the demand is going up.
Thomas tells us about Synopsys and what goes into building their chips.
Other traditional software companies that are now designing their own AI chips.
What Thomas’ role looks like in machine learning and building AI algorithms.
How the constantly changing rules of AI chip design continue to create new obstacles.
Thomas tells us how they use reinforcement learning in their processes.
The different applications for generative AI and why it needs good input data.
Thomas’ advice for anyone wanting to get into the world of AI.
Quotes:
“It’s not really the technology that makes life great, it’s how you use it, and what you make of it.” — Thomas Andersen [0:07:31]
“There is, of course, a lot of opportunities to use AI in chip design.” — Thomas Andersen [0:25:39]
“Be bold, try as many new things [as you can, and] make sure you use the right approach for the right tasks.” — Thomas Andersen [0:40:09]
Links Mentioned in Today’s Episode:
Thomas Andersen on LinkedIn
Synopsys
How AI Happens
Sama
-
Developing AI and generative AI initiatives demands significant investment, and without delivering on customer satisfaction, these costs can be tough to justify. Today, SVP of Engineering and General Manager of Xactly India, Kandarp Desai joins us to discuss Xactly's AI initiatives and why customer satisfaction remains their top priority.
Key Points From This Episode:
An introduction to Kandarp and his transition from hardware to software.
How he became SVP of Engineering and General Manager of Xactly India.
His move to Bangalore and the expansion of Xactly’s presence in India.
The rapid modernization of India as a key factor in Xactly’s growth strategy.
An overview of Xactly’s AI and generative AI initiatives.
Insight into the development of Xactly’s AI Copilot.
Four key stakeholders served by the Xactly AI Copilot.
Xactly Extend, an enterprise platform for building custom apps.
Challenges in justifying the ROI of AI initiatives.
Why customer satisfaction and business outcomes are essential.
How AI is overhyped in the short term and underhyped in the long term.
The difficulties in quantifying the value of AI.
Kandarp’s career advice to AI practitioners, from taking risks to networking.
Quotes:
“[Generative AI] is only useful if it drives higher customer satisfaction. Otherwise, it doesn't matter.” — Kandarp Desai [0:11:36]
“Justifying the ROI of anything is hard – If you can tie any new invention back to its ROI in customer satisfaction, that can drive an easy sell across an organization.” — Kandarp Desai [0:15:35]
“The whole AI trend is overhyped in the short term and underhyped long term. [It’s experienced an] oversell recently, and people are still trying to figure it out.” — Kandarp Desai [0:20:48]
Links Mentioned in Today’s Episode:
Kandarp Desai on LinkedIn
Xactly
How AI Happens
Sama
-
Srujana is Vice President and Group Director at Walmart’s Machine Learning Center of Excellence and is an experienced and respected AI, machine learning, and data science professional. She has a strong background in developing AI and machine learning models, with expertise in natural language processing, deep learning, and data-driven decision-making. Srujana has worked in various capacities in the tech industry, contributing to advancing AI technologies and their applications in solving complex problems. In our conversation, we unpack the trends shaping AI governance, the importance of consumer data protection, and the role of human-centered AI. Explore why upskilling the workforce is vital, the potential impact AI could have on white-collar jobs, and which roles AI cannot replace. We discuss the interplay between bias and transparency, the role of governments in creating AI development guardrails, and how the regulatory framework has evolved. Join us to learn about the essential considerations of deploying algorithms at scale, striking a balance between latency and accuracy, the pros and cons of generative AI, and more.
Key Points From This Episode:
Srujana breaks down the top concerns surrounding technology and data.
Learn how AI can be utilized to drive innovation and economic growth.
Navigating the adoption of AI with upskilling and workforce retention.
The AI gaps that upskilling should focus on to avoid workforce displacement.
Common misconceptions about biases in AI and how they can be mitigated.
Why establishing regulations, laws, and policies is vital for ethical AI development.
Outline of the nuances of creating an effective worldwide regulatory framework.
She explains the challenges and opportunities of deploying algorithms at scale.
Hear about the strategies for building architecture that can adapt to future changes.
She shares her perspective on generative AI and what its best use cases are.
Find out what area of AI Srujana is most excited about.
Quotes:
“By deploying [bias] algorithms we may be going ahead and causing some unintended consequences.” — @Srujanadev [0:03:11]
“I think it is extremely important to have the right regulations and guardrails in place.” — @Srujanadev [0:11:32]
“Just using generative AI for the sake of it is not necessarily a great idea.” — @Srujanadev [0:25:27]
“I think there are a lot of applications in terms of how generative AI can be used but not everybody is seeing the return on investment.” — @Srujanadev [0:27:12]
Links Mentioned in Today’s Episode:
Srujana Kaddevarmuth
Srujana Kaddevarmuth on X
Srujana Kaddevarmuth on LinkedIn
United Nations Association (UNA) San Francisco
The World in 2050
American INSIGHT
How AI Happens
Sama
-
Our guest goes on to share the different kinds of research they use for machine learning development before explaining why he is more conservative when it comes to driving generative AI use cases. He even shares some examples of generative use cases he feels are worthwhile. We hear about how these changes will benefit all UPS customers and how they avoid sharing private and non-compliant information with chatbots. Finally, Sunzay shares some advice for anyone wanting to become a leader in the tech world.
Key Points From This Episode:
Introducing Sunzay Passari to the show and how he landed his current role at UPS.
Why Sunzay believes that this huge operation he’s part of will drive transformational change.
How AI and machine learning have made their way into UPS over the past few years.
The way Sunzay and his team have decided where AI will be most disruptive within UPS.
Quantitative and qualitative research and what that looks like for this project.
Why Sunzay is conservative when it comes to driving generative AI use cases.
Sunzay shares some of the generative use cases that he thinks are worthwhile.
The way these new technologies will benefit everyday UPS customers.
How they are preventing people from accessing non-compliant data through chatbots.
Sunzay passes on some advice for anyone looking to forge their career as a leader in tech.
Quotes:
“There’s a lot of complexities in the kind of global operations we are running on a day-to-day basis [at UPS].” — Sunzay Passari [0:04:35]
“There is no magic wand – so it becomes very important for us to better our resources at the right time in the right initiative.” — Sunzay Passari [0:09:15]
“Keep learning on a daily basis, keep experimenting and learning, and don’t be afraid of the failures.” — Sunzay Passari [0:22:48]
Links Mentioned in Today’s Episode:
Sunzay Passari on LinkedIn
UPS
How AI Happens
Sama
-
Martin shares what reinforcement learning does differently in executing complex tasks, overcoming feedback loops in reinforcement learning, the pitfalls of typical agent-based learning methods, and how being a robotic soccer champion exposed the value of deep learning. We unpack the advantages of deep learning over modeling agent approaches, how finding a solution can inspire a solution in an unrelated field, and why he is currently focusing on data efficiency. Gain insights into the trade-offs between exploration and exploitation, how Google DeepMind is leveraging large language models for data efficiency, the potential risk of using large language models, and much more.
Key Points From This Episode:
What it is like being a five-time world robotic soccer champion.
The process behind training a winning robotic soccer team.
Why standard machine learning tools could not train his team effectively.
Discover the challenges AI and machine learning are currently facing.
Explore the various exciting use cases of reinforcement learning.
Details about Google DeepMind and the role he and his team play there.
Learn about Google DeepMind’s overall mission and its current focus.
Hear about the advantages of being a scientist in the AI industry.
Martin explains the benefits of exploration to reinforcement learning.
How data mining using large language models for training is implemented.
Ways reinforcement learning will impact people in the tech industry.
Unpack how AI will continue to disrupt industries and drive innovation.
Quotes:
“You really want to go all the way down to learn the direct connections to actions only via learning [for training AI].” — Martin Riedmiller [0:07:55]
“I think engineers often work with analogies or things that they have learned from different [projects].” — Martin Riedmiller [0:11:16]
“[With reinforcement learning], you are spending the precious real robots time only on things that you don’t know and not on the things you probably already know.” — Martin Riedmiller [0:17:04]
“We have not achieved AGI (Artificial General Intelligence) until we have removed the human completely out of the loop.” — Martin Riedmiller [0:21:42]
Links Mentioned in Today’s Episode:
Martin Riedmiller
Martin Riedmiller on LinkedIn
Google DeepMind
RoboCup
How AI Happens
Sama
-
Jia shares the kinds of AI courses she teaches at Stanford, how students are receiving machine learning education, and the impact of AI agents, as well as understanding technical boundaries, being realistic about the limitations of AI agents, and the importance of interdisciplinary collaboration. We also delve into how Jia prioritizes latency at LiveX before finding out how machine learning has changed the way people interact with agents; both human and AI.
Key Points From This Episode:
The AI courses that Jia teaches at Stanford.
Jia’s perspective on the future of AI.
What the potential impact of AI agents is.
The importance of understanding technical boundaries.
Why interdisciplinary collaboration is imperative.
How Jia is empowering other businesses through LiveX AI.
Why she prioritizes latency and believes that it’s crucial.
How AI has changed people’s expectations and level of courtesy.
A glimpse into Jia’s vision for the future of AI agents.
Why she is not satisfied with the multimodal AI models out there.
Challenges associated with data in multimodal machine learning.
Quotes:
“[The field of AI] is advancing so fast every day.” — Jia Li [0:03:05]
“It is very important to have more sharing and collaboration within the [AI field].” — Jia Li [0:12:40]
“Having an efficient algorithm [and] having efficient hardware and software optimization is really valuable.” — Jia Li [0:14:42]
Links Mentioned in Today’s Episode:
Jia Li on LinkedIn
LiveX AI
How AI Happens
Sama
-
Key Points From This Episode:
Reid Robinson's professional background, and how he ended up at Zapier.
What he learned during his year as an NFT founder, and how it serves him in his work today.
How he gained his diverse array of professional skills.
Whether one can differentiate between AI and mere automation.
How Reid knew that partnering with OpenAI and ChatGPT would be the perfect fit.
The way the Zapier team understands and approaches ML accuracy and generative data.
Why real-world data is better as it stands, and whether generative data will one day catch up.
How Zapier uses generative data with its clients.
Why AI is still mostly beneficial for those with a technical background.
Reid Robinson's next big idea, and his parting words of advice.
Quotes:
“Sometimes, people are very bad at asking for what they want. If you do any stint in, particularly, the more hardcore sales jobs out there, it's one of the things you're going to have to learn how to do to survive. You have to be uncomfortable and learn how to ask for things.” — @Reidoutloud_ [0:05:07]
“In order to really start to drive the accuracy of [our AI models], we needed to understand, what were users trying to do with this?” — @Reidoutloud_ [0:15:34]
“The people who [are] being enabled the most with AI in the current stage are the technical tinkerers. I think a lot of these tools are too technical for average knowledge workers.” — @Reidoutloud_ [0:28:32]
“Quick advice for anyone listening to this, do not start a company when you have your first kid! Horrible idea.” — @Reidoutloud_ [0:29:28]
Links Mentioned in Today’s Episode:
Reid Robinson on LinkedIn
Reid Robinson on X
Zapier
CocoNFT
How AI Happens
Sama
-
In this episode of How AI Happens, Justin explains how his project, Wondr Search, injects creativity into AI in a way that doesn’t alienate creators. You’ll learn how this new form of AI uses evolutionary algorithms (EAs) and differential evolution (DE) to generate music without learning from or imitating existing creative work. We also touch on the success of the six songs created by Wondr Search, why AI will never fully replace artists, and so much more. For a fascinating conversation at the intersection of art and AI, be sure to tune in today!
Key Points From This Episode:
How genetic algorithms can preserve human creativity in the age of AI.
Ways that Wondr Search differs from current generative AI models.
Why the songs produced by Wondr Search were so well-received by record labels.
Justin’s motivations for creating an AI model that doesn’t learn from existing music.
Differentiating between AI-generated content and creative work made by humans.
Insight into Justin’s PhD topic focused on mathematical optimization.
Key differences between operations research and data science.
An understanding of the relationship between machine learning and physics.
Our guest’s take on “big data” and why more data isn’t always better.
Problems Justin focuses on as a technical advisor to Fortune 500 companies.
What he is most excited (and most concerned) about for the future of AI.
Quotes:
“[Wondr Search] is definitely not an effort to stand up against generative AI that uses traditional ML methods. I use those a lot and there’s going to be a lot of good that comes from those – but I also think there’s going to be a market for more human-centric generative methods.” — Justin Kilb [0:06:12]
“The definition of intelligence continues to change as [humans and artificial systems] progress.” — Justin Kilb [0:24:29]
“As we make progress, people can access [AI] everywhere as long as they have an internet connection. That's exciting because you see a lot of people doing a lot of great things.” — Justin Kilb [0:26:06]
Links Mentioned in Today’s Episode:
Justin Kilb on LinkedIn
Wondr Search
‘Conserving Human Creativity with Evolutionary Generative Algorithms: A Case Study in Music Generation’
How AI Happens
Sama
-
Jacob shares how Gong uses AI, how it empowers its customers to build their own models, and how this ease of access for users holds the promise of a brighter future. We also learn more about the inner workings of Gong and how it trains its own models, why it’s not too interested in tracking soft skills right now, what we need to be doing more of to build more trust in chatbots, and our guest’s summation of why technology is advancing like a runaway train.
Key Points From This Episode:
Jacob Eckel walks us through his professional background and how he ended up at Gong.
The ins and outs of Gong, and where AI fits in.
How Gong empowers its customers to build their own models, and the results thereof.
Understanding the data ramifications when customers build their own models on Gong.
How Gong trains its own models, and the way the platform assists users in real time.
Why its models aren’t tracking softer skills like rapport-building, yet.
Everything that needs to be solved before we can fully trust chatbots.
Jacob’s summation of why technology is growing at an increasingly rapid rate.
Quotes:
“We don’t expect our customers to suddenly become data scientists and learn about modeling and everything, so we give them a very intuitive, relatively simple environment in which they can define their own models.” — @eckely [0:07:03]
“[Data] is not a huge obstacle to adopting smart trackers.” — @eckely [0:12:13]
“Our current vibe is there’s a limit to this technology. We are still unevolved apes.” — @eckely [0:16:27]
Links Mentioned in Today’s Episode:
Jacob Eckel on LinkedIn
Jacob Eckel on X
Gong
How AI Happens
Sama
-
Bobak further opines on the differences between Perplexity and GPT 4.0, why the technology uses both models, and the pros and cons of each. Finally, our guest tells us why Brilliant Labs is open-source and reminds us why public participation is so important.
Key Points From This Episode:
Introducing Bobak Tavangar to today’s episode of How AI Happens. Bobak tells us about his background and what led him to start his company, Brilliant Labs. Our guest shares his interesting Lord of the Rings analogy and how it relates to his business. How wearable technology is creeping more and more into our lives. The hurdles they face with generative AI glasses and how they’re overcoming them. How Bobak chose the most important factors to incorporate into the glasses. What the glasses can do at this stage of development. Bobak explains how the glasses know whether to query GPT 4.0 or Perplexity AI. GPT 4.0 versus Perplexity and why Bobak prefers to use them both. The importance of gauging public reaction and why Brilliant Labs is open-source.
Quotes:
“To have a second pair of eyes that can connect everything we see with all the information on the web and everything we’ve seen previously – is an incredible thing.” — @btavangar [0:13:12]
“For live web search, Perplexity is the most precise [and] it gives the most meaningful answers from the live web.” — @btavangar [0:26:40]
“The [AI] space is changing so fast. It’s exciting [and] it’s good for all of us but we don’t believe you should ever be locked to one model or another.” — @btavangar [0:28:45]
Links Mentioned in Today’s Episode:
Bobak Tavangar on LinkedIn
Bobak Tavangar on X
Bobak Tavangar on Instagram
Brilliant Labs
Perplexity AI
GPT 4.0
How AI Happens
Sama
-
Andrew shares how generative AI is used by academic institutions, why employers and educators need to curb their fear of AI, what we need to consider for using AI responsibly, and the ins and outs of Andrew’s podcast, Insight x Design.
Key Points From This Episode:
Andrew Madson explains what a tech evangelist is and what his role at Dremio entails. The ins and outs of Dremio. Understanding the pain points that Andrew wanted to alleviate by joining Dremio. How Andrew became a tech evangelist, and why he values this role. Why all tech roles now require one to upskill and branch out into other areas of expertise. The problems that Andrew most commonly faces at work, and how he overcomes them. How Dremio uses generative AI, and how the technology is used in academia. Why employers and educators need to do more to encourage the use of AI. The provenance of training data, and other considerations for the responsible use of AI. Learning more about Andrew’s new podcast, Insight x Design.
Quotes:
“Once I learned about lakehouses and Apache Iceberg and how you can just do all of your work on top of the data lake itself, it really made my life a lot easier with doing real-time analytics.” — @insightsxdesign [0:04:24]
“Data analysts have always been expected to be technical, but now, given the rise of the amount of data that we’re dealing with and the limitations of data engineering teams and their capacity, data analysts are expected to do a lot more data engineering.” — @insightsxdesign [0:07:49]
“Keeping it simple and short is ideal when dealing with AI.” — @insightsxdesign [0:12:58]
“The purpose of higher education isn’t to get a piece of paper, it’s to learn something and to gain new skills.” — @insightsxdesign [0:17:35]
Links Mentioned in Today’s Episode:
Andrew Madson
Andrew Madson on LinkedIn
Andrew Madson on X
Andrew Madson on Instagram
Dremio
Insights x Design
Apache Iceberg
ChatGPT
Perplexity AI
Gemini
Anaconda
Peter Wang on LinkedIn
How AI Happens
Sama
-
Tom shares further thoughts on financing in the AI tech venture capital space, whether or not data centers pose a threat to the relevance of the Cloud, his predictions for the future of GPUs, and much more.
Key Points From This Episode:
Introducing Tomasz Tunguz, General Partner at Theory Ventures. What he is currently working on, including AI research and growing the team at Theory. How he goes about researching the present to predict the future. Why professionals often work in both academia and the field of AI. What stands out to Tom when he is looking for companies to invest in. Varying applications where an 80% answer has differing relevance. The importance of being at the forefront of AI developments as a leader. Why the metrics of risk and success used in the past are no longer relevant. Tom’s thoughts on whether or not generative AI will replace search. Financing in the AI tech venture capital space. Differentiating between the Cloud and data centers. Predictions for the future of GPUs. Why ‘hello’ is the best opener for a cold email.
Quotes:
“Innovation is happening at such a deep technological level and that is at the core of machine learning models.” — @tomastungusz [0:03:37]
“Right now, we’re looking at where [is] there rote work or human toil that can be repeated with AI? That’s one big question where there’s not a really big incumbent.” — @tomastungusz [0:05:51]
“If you are the leader of a team or a department or a business unit or a company, you can not be in a position where you are caught off guard by AI. You need to be on the forefront.” — @tomastungusz [0:08:30]
“The dominant dynamic within consumer products is the least friction in a user experience always wins.” — @tomastungusz [0:14:05]
Links Mentioned in Today’s Episode:
Tomasz Tunguz
Tomasz Tunguz on LinkedIn
Tomasz Tunguz on X
Theory Ventures
How AI Happens
Sama
-
Kordel is the CTO and Founder of Theta Diagnostics, and today he joins us to discuss the work he is doing to develop a sense of smell in AI. We discuss the current and future use cases they’ve been working on, the advancements they’ve made, and how to answer the question “What is smell?” in the context of AI. Kordel also provides a breakdown of their software program Alchemy, their approach to collecting and interpreting data on scents, and how he plans to help machines recognize the context for different smells. To learn all about the fascinating work that Kordel is doing in AI and the science of smell, be sure to tune in!
Key Points From This Episode:
Introducing today’s guest, Kordel France. How growing up on a farm encouraged his interest in AI. An overview of Kordel’s education and the subjects he focused on. His work today and how he is teaching machines to smell. Existing use cases for smell detection, like the breathalyzer test and smoke detectors. The fascinating ways that the ability to pick up certain smells differs between people. Unpacking the elusive question “What is smell?” How to apply this question to AI development. Conceptualizing smell as a pattern that machines can recognize. Examples of current and future use cases that Kordel is working on. How he trains his devices to recognize smells and compounds. A breakdown of their autonomous gas system (AGS). How their software program, Alchemy, helps them make sense of their data. Kordel’s aspiration to add modalities to his sensors that will create context for smells.
Quotes:
“I became interested in machine smell because I didn't see a lot of work being done on that.” — @kordelkfrance [0:08:25]
“There's a lot of people that argue we can't actually achieve human-level intelligence until we've incorporated all five senses into an artificial being.” — @kordelkfrance [0:08:36]
“To me, a smell is a collection of compounds that represent something that we can recognize. A pattern that we can recognize.” — @kordelkfrance [0:17:28]
“Right now we have about three dozen to four dozen compounds that we can with confidence detect.” — @kordelkfrance [0:19:04]
“[Our autonomous gas system] is really this interesting system that's hooked up to a bunch of machine learning, that helps calibrate and detect and determine what a smell looks like for a specific use case and breaking that down into its constituent compounds.” — @kordelkfrance [0:23:20]
“The success of our device is not just the sensing technology, but also the ability of Alchemy [our software program] to go in and make sense of all of these noise patterns and just make sense of the signals themselves.” — @kordelkfrance [0:25:41]
Links Mentioned in Today’s Episode:
Kordel France
Kordel France on LinkedIn
Kordel France on X
Theta Diagnostics
Alchemy by Theta Diagnostics
How AI Happens
Sama
-