Episodes
-
Longtime machine-learning researcher and University of Washington Professor Emeritus Pedro Domingos joins a16z General Partner Martin Casado to discuss the state of artificial intelligence, whether we're really on a path toward AGI, and the value of expressing unpopular opinions. It's an insightful discussion as we head into an era of mainstream AI adoption and ask big questions about how to ramp up progress and diversify research directions.
Here's an excerpt of Pedro sharing his thoughts on the increasing cost of frontier models and whether that's the right direction:
"If you believe the scaling laws hold and the scaling laws will take us to human-level intelligence, then, hey, it's worth a lot of investment. That's one part, but that may be wrong. The other part, however, is that to do that, we need exploding amounts of compute.
"If I had to predict what's going to happen, it's that we do not need a trillion dollars to reach AGI at all. So if you spend a trillion dollars reaching AGI, this is a very bad investment."
Learn more:
The Master Algorithm
2040: A Silicon Valley Satire
The Economic Case for Generative AI and Foundation Models
Follow everyone on X:
Pedro Domingos
Martin Casado
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this episode of AI + a16z, General Partner Anjney Midha shares his perspective on the recent collection of Nobel Prizes awarded to AI researchers in both Physics and Chemistry. He talks through how early work on neural networks in the 1980s spurred continuous advancement in the field — even through the "AI winter" — which resulted in today's extremely useful AI technologies.
Here's a sample of the discussion, in response to a question about whether we will see more high-quality research emerge from sources beyond large universities and commercial labs:
"It can be easy to conclude that the most impactful AI research still requires resources beyond the reach of most individuals or small teams. And that open source contributions, while valuable, are unlikely to match the breakthroughs from well-funded labs. I've even heard some dismissive folks call it cute, and undermine the value of those [contributions].
"But on the other hand, I think that you could argue that open source and individual contributions are becoming increasingly more important in AI development. I think that the democratization of AI will lead probably to more diverse and innovative applications. And I think, in particular, the reason we should expect an explosion in home scientists — folks who aren't necessarily affiliated with a top-tier academic, or for that matter, industry lab — is that as open source models get more and more accessible, the rate limiter really is on the creativity of somebody who's willing to apply the power of that model's computational ability to a novel domain. And there are just a ton of domains and combinatorial intersections of different disciplines.
"Our blind spot for traditional academia [is that] it's not particularly rewarding to veer off the publish-or-perish conference circuit. And if you're at a large industry lab and you're not contributing directly to the next model release, it's not that clear how you get rewarded. And so being an independent actually frees you up from the incentive misstructure, I think, of some of the larger labs. And if you get to leverage the millions of dollars that the Llama team spent on pre-training, applying it to data sets that nobody else has perused before, it results in pretty big breakthroughs."
Learn more:
They trained artificial neural networks using physics
They cracked the code for proteins’ amazing structures
Notable AI models by year
Follow on X:
Anjney Midha
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this episode of AI + a16z, General Partner Anjney Midha explains the forces that lead to GPU shortages and price spikes, and how the firm mitigates these concerns for portfolio companies by supplying them with the GPUs they need through a program called Oxygen. The TL;DR version of the problem is that competition for GPU access favors large incumbents who can afford to outbid startups and commit to long contracts; when startups do buy or rent in bulk, they can be stuck with lots of GPUs and — absent training runs or ample customer demand for inference workloads — nothing to do with them.
Here is an excerpt of Anjney explaining how training versus inference workloads affect what level of resources a company needs at any given time:
"It comes down to whether the customer that's using them . . . has a use that can really optimize the efficiency of those chips. As an example, if you happen to be an image model company or a video model company and you put a long-term contract on H100s this year, and you trained and put out a really good model and a product that a lot of people want to use, even though you're not training on the best and latest cluster next year, that's OK. Because you can essentially swap out your training workloads for your inference workloads on those H100s.
"The H100s are actually incredibly powerful chips that you can run really good inference workloads on. So as long as you have customers who want to run inference of your model on your infrastructure, then you can just redirect that capacity to them and then buy new [Nvidia] Blackwells for your training runs.
"Who it becomes really tricky for is people who bought a bunch, don't have demand from their customers for inference, and therefore are stuck doing training runs on that last-generation hardware. That's a tough place to be."
Learn more:
Navigating the High Cost of GPU Compute
Chasing Silicon: The Race for GPUs
Remaking the UI for AI
Follow on X:
Anjney Midha
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this episode of AI + a16z, Bowen Peng and Jeffrey Quesnelle of Nous Research join a16z General Partner Anjney Midha to discuss their mission to keep open source AI research alive and activate the community of independent builders. The focus is on a recent project called DisTrO, which demonstrates that it's possible to train AI models across the public internet much faster than previously thought possible. Nous is also behind a number of other successful open source AI projects, including the popular Hermes family of "neutral" and guardrail-free language models.
Here's an excerpt of Jeffrey explaining how DisTrO was inspired by the possibility that major open source AI providers could turn their efforts back inward:
"What if we don't get Llama 4? That's like an actual existential threat because the closed providers will continue to get better and we would be dead in the water, in a sense.
"So we asked, 'Is there any real reason we can't make Llama 4 ourselves?' And there is a real reason, which is that we don't have 20,000 H100s. . . . God willing and the creek don't rise, maybe we will one day, but we don't have that right now.
"So we said, 'But what do we have?' We have a giant activated community who's passionate about wanting to do this and would be willing to contribute their GPUs, their power, to it, if only they could . . . but we don't have the ability to activate that willingness into actual action. . . . The only way people are connected is over the internet, and so anything that isn't sharing over the internet is not gonna work.
"And so that was the initial premise: What if we don't get Llama 4? And then, what do we have that we could use to create Llama 4? And, if we can't, what are the technical problems that, if only we slayed that one technical problem, the dam of our community can now flow and actually solve the problem?"
Learn more:
DisTrO paper
Nous Research
Nous Research GitHub
Follow everyone on X:
Bowen Peng
Jeffrey Quesnelle
Anjney Midha
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this episode of AI + a16z, Ambience cofounder and chief scientist Nikhil Buduma joins Derrick Harris to discuss the nuances of using AI models to build vertical applications (including in his space, health care), and why industry acumen is at least as important as technical expertise. Nikhil also shares his experience of having a front-row seat to key advances in AI — including the transformer architecture — which not only allowed his company to be an early adopter, but also gave him insight into the types of problems that AI could solve in the future.
Here's an excerpt of Nikhil explaining the importance of understanding your buyer:
"If you believe that the most valuable companies are going to fall out of some level of vertical integration between the app layer and the model layer, [that] this next generation of incredibly valuable companies is going to be built by founders who've spent years just obsessively becoming experts in an industry, I would recommend that someone actually know how to map out the most valuable use cases and have a clear story for how those use cases have synergistic, compounding value when you solve those problems increasingly in concert together.
"I think the founding team is going to have to have the right ML chops to actually build out the right live learning loops, build out the ML ops loops to measure and to close the gap on model quality for those use cases. [But] the model is actually just one part of solving the problem.
"You actually need to be thoughtful about the product, the design, the delivery competencies to make sure that what you build is integrated with the right sources of the enterprise data that fits into the right workflows in the right way. And you're going to have to invest heavily in the change management to make sure that customers realize the full value of what they're buying from you. That's all actually way more important than people realize."
Learn more:
Fundamentals of Deep Learning
Follow everyone on X:
Nikhil Buduma
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this episode of AI + a16z, a16z General Partner Jennifer Li joins MotherDuck Cofounder and CEO Jordan Tigani to discuss DuckDB's spiking popularity as the era of big data wanes, as well as the applicability of SQL-based systems for AI workloads and the prospect of text-to-SQL for analyzing data.
Here's an excerpt of Jordan discussing an early win when it comes to applying generative AI to data analysis:
"Everybody forgets syntax for various SQL calls. And it's just like in coding. So there's some people that memorize . . . all of the code base, and so they don't need auto-complete. They don't need any copilot. . . . They don't need an IDE; they can just type in Notepad. But for the rest of us, I think these tools are super useful. And I think we have seen that these tools have already changed how people are interacting with their data, how they're writing their SQL queries.
"One of the things that we've done . . . is we focused on improving the experience of writing queries. Something we found is actually really useful is when somebody runs a query and there's an error, we basically feed the line of the error into GPT-4 and ask it to fix it. And it turns out to be really good.
". . . It's a great way of letting you stay in the flow of writing your queries and having true interactivity."
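The repair loop Jordan describes (catch the error, hand the offending line to the model, ask for a corrected query) can be sketched roughly as follows. The function name, prompt wording, and Binder Error example are hypothetical illustrations, not MotherDuck's actual implementation, and the LLM call itself is omitted:

```python
def build_fix_prompt(query: str, error_message: str, error_line: int) -> str:
    """Assemble a repair prompt for a failed SQL query.

    Hypothetical sketch: when a query errors, pull out the offending
    line and package it with the error text so an LLM can be asked
    for a corrected query.
    """
    lines = query.splitlines()
    # Fall back to the whole query if the reported line number is out of range.
    offending = lines[error_line - 1] if 0 < error_line <= len(lines) else query
    return (
        "The following SQL query failed.\n"
        f"Query:\n{query}\n"
        f"Error (line {error_line}): {error_message}\n"
        f"Offending line: {offending}\n"
        "Return a corrected version of the query."
    )

# Example: a typo'd column name surfaces in the prompt for the model to fix.
prompt = build_fix_prompt(
    "SELECT nmae FROM users\nORDER BY created_at",
    'Binder Error: column "nmae" not found',
    1,
)
```

In practice the returned string would be sent to a chat-completion endpoint and the model's corrected query offered back to the user, keeping them in the flow of writing queries.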
Learn more:
Small Data SF conference
DuckDB
Follow everyone on X:
Jordan Tigani
Jennifer Li
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this episode of the AI + a16z podcast, Black Forest Labs founders Robin Rombach, Andreas Blattmann, and Patrick Esser sit down with a16z General Partner Anjney Midha to discuss their journey from PhD researchers to Stability AI, and now to launching their own company building state-of-the-art image and video models. They also delve into the topic of openness in AI, explaining the benefits of releasing open models and sharing research findings with the field.
Learn more:
Flux
Keep the code to AI open, say two entrepreneurs
Follow everyone on X:
Robin Rombach
Andreas Blattmann
Patrick Esser
Anjney Midha
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this episode, a16z General Partner Vijay Pande walks us through the past two decades of applying software engineering to the life sciences — from the Folding@Home project that he launched, through AlphaFold and more. He also discusses the major opportunities for AI to transform medicine and health care, as well as some pitfalls that founders in that space need to watch out for.
Here's an excerpt of Vijay discussing how AlphaFold and other projects revolutionized biology research not just because of their algorithms, but because of how they introduced software engineering into the field:
"I think the key thing about AlphaFold that really got people excited was not just the AI part, because people have been using machine learning. And so that part was there. I think it was how fast, at least to me, an engineering approach could make a big jump in this field. Because this was a field largely addressed by academics, and academics would have a lab of maybe 20 [or] 30 people — some of the bigger ones, maybe slightly bigger. And of that, these are graduate students working on their PhDs. It's very different than having a team of professional programmers and engineers going after the problem.
"And so that jump in team ability, plus the technology, I think was very critical for the jump in results. And also, finally, I think having a company like Google say, 'You know, this is a problem we're excited about and we're interested in,' and that AI and biology is something that is an area of great interest to them . . . was a huge flag to plant."
Learn more:
a16z Bio + Health
Folding@Home
AlphaFold
Raising Health podcast
Follow everyone on X:
Vijay Pande
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this episode of the AI + a16z podcast, a16z General Partner Anjney Midha speaks with PromptFoo founder and CEO Ian Webster about the importance of red-teaming for AI safety and security, and how bringing those capabilities to more organizations will lead to safer, more predictable generative AI applications. They also delve into lessons they learned about this during their time together as early large language model adopters at Discord, and why attempts to regulate AI should focus on applications and use cases rather than models themselves.
Here's an excerpt of Ian laying out his take on AI governance:
"The reason why I think that the future of AI safety is open source is that I think there's been a lot of high-level discussion about what AI safety is, and some of the existential threats, and all of these scenarios. But what I'm really hoping to do is focus the conversation on the here and now. Like, what are the harms and the safety and security issues that we see in the wild right now with AI? And the reality is that there's a very large set of practical security considerations that we should be thinking about.
"And the reason why I think that open source is really important here is because you have the large AI labs, which have the resources to employ specialized red teams and start to find these problems, but there are only, let's say, five big AI labs that are doing this. And the rest of us are left in the dark. So I think that it's not acceptable to just have safety in the domain of the foundation model labs, because I don't think that's an effective way to solve the real problems that we see today.
"So my stance here is that we really need open source solutions that are available to all developers and all companies and enterprises to identify and eliminate a lot of these real safety issues."
Learn more:
Securing the Black Box: OpenAI, Anthropic, and GDM Discuss
Security Founders Talk Shop About Generative AI
California's Senate Bill 1047: What You Need to Know
Follow everybody on X:
Ian Webster
Anjney Midha
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this episode of the AI + a16z podcast, Command Zero cofounder and CTO Dean de Beer joins a16z's Joel de la Garza and Derrick Harris to discuss the benefits of training large language models on security data, as well as the myriad factors product teams need to consider when building on LLMs.
Here's an excerpt of Dean discussing the challenges and concerns around scaling up LLMs:
"Scaling out infrastructure has a lot of limitations: the APIs you're using, tokens, inbound and outbound, the cost associated with that — the nuances of the models, if you will. And not all models are created equal, and they oftentimes are very good for specific use cases and they might not be appropriate for your use case, which is why we tend to use a lot of different models for our use cases . . .
"So your use cases will heavily determine the models that you're going to use. Very quickly, you'll find that you'll be spending more time on the adjacent technologies or infrastructure. So, memory management for models. How do you go beyond the context window for a model? How do you maintain the context of the data, when given back to the model? How do you do entity extraction so that the model understands that there are certain entities that it needs to prioritize when looking at new data? How do you leverage semantic search as something to augment the capabilities of the model and the data that you're ingesting?
"That's where we have found that we spend a lot more of our time today than on the models themselves. We have found a good combination of models that run our use cases; we augment them with those adjacent technologies."
Learn more:
The Cuckoo's Egg
1995 Citigroup hack
Follow everyone on social media:
Dean de Beer
Joel de la Garza
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this episode of the AI + a16z podcast, Anyscale cofounder and CEO Robert Nishihara joins a16z's Jennifer Li and Derrick Harris to discuss the challenges of training and running AI models at scale; how a focus on video models — and the huge amount of data involved — will change generative AI models and infrastructure; and the unique experience of launching a company out of the UC-Berkeley Sky Computing Lab (the successor to RISElab and AMPLab).
Here's a sample of the discussion, where Robert explains how generative AI has turbocharged the appetite for AI capabilities within enterprise customers:
"Two years ago, we would talk to companies, prospective customers, and AI just wasn't a priority. It certainly wasn't a company-level priority in the way that it is today. And generative AI is the reason a lot of companies now reach out to us . . . because they know that succeeding with AI is essential for their businesses, it's essential for their competitive advantage.
"And time to market matters for them. They don't want to spend a year hiring an AI infrastructure team, building up a 20-person team to build all of the internal infrastructure, just to be able to start to use generative AI. That's something they want to do today."
At another point in the discussion, he notes on this same topic:
"One dimension where we try to go really deep is on the developer experience and just enabling developers to be more productive. This is a complaint we hear all the time with machine learning teams or infrastructure teams: They'll say that they hired all these machine learning people, but then the machine learning people are spending all of their time managing clusters or working on the infrastructure. Or they'll say that it takes 6 weeks or 12 weeks to get a model to transition from development to production . . . Or moving from a laptop to the cloud, and to go from single machine to scaling — these are expensive handoffs that often involve rewriting a bunch of code."
Learn more:
Anyscale
Sky Computing Lab
Ray
Follow everyone on X:
Robert Nishihara
Jennifer Li
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this archive episode from 2015, a16z's Sonal Chokshi, Frank Chen, and Steven Sinofsky discuss DeepMind's breakthrough AlphaGo system, which mastered the ancient Chinese game Go and introduced the public to reinforcement learning.
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this episode of the AI + a16z podcast, Luma Chief Scientist Jiaming Song joins a16z General Partner Anjney Midha to discuss Jiaming's esteemed career in video models, culminating thus far in Luma's recently released Dream Machine 3D model, which shows an ability to reason about the world across a variety of aspects. Jiaming covers the history of image and video models, shares his vision for the future of multimodal models, and explains why he thinks Dream Machine demonstrates emergent reasoning capabilities. In short: because it was trained on a volume of high-quality video data that, if measured in relation to language data, would amount to hundreds of trillions of tokens.
Here's a sample of the discussion, where Jiaming explains the "bitter lesson" as applied to training generative models, and in the process sums up a big component of why Dream Machine can do what it does by using context-rich video data:
"For a lot of the problems related to artificial intelligence, it is often more productive in the long run to use methods that are simpler but use more compute, [rather] than trying to develop priors, and then trying to leverage the priors so that you can use less compute.
"Cases in this question first happened in language, where people were initially working on language understanding, trying to use grammar or semantic parsing, these kinds of techniques. But eventually these tasks began to be replaced by large language models. And a similar case is happening in the vision domain, as well . . . and now people have been using deep learning features for almost all the tasks. This is a clear demonstration of how using more compute and having less priors is good.
"But how does it work with language? Language by itself is also a human construct. Of course, it is a very good and highly compressed kind of knowledge, but it's definitely a lot less data than what humans take in day to day from the real world . . .
"[And] it is a vastly smaller data set size than visual signals. And we are already almost exhausting the . . . high-quality language sources that we have in the world. The speed at which humans can produce language is definitely not enough to keep up with the demands of the scaling laws. So even if we have a world where we can scale up the compute infrastructure for that, we don't really have the infrastructure to scale up the data efforts . . .
"Even though people would argue that the emergence of large language models is already evidence of the scaling law . . . against the rule-based methods in language understanding, we are arguing that language by itself is also a prior in the face of more of the richer data signal that is happening in the physical world."
Learn more:
Dream Machine
Jiaming's personal site
Luma careers
The bitter lesson
Follow everyone on X:
Jiaming Song
Anjney Midha
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this episode, design engineer Alasdair Monk joins a16z's Yoko Li and Derrick Harris to discuss how generative AI is changing how developers — and those building for developers — interact with the tools of their trade. Alasdair's journey includes stints at dev-centric companies such as Heroku/Salesforce, and he's presently designing the user experience for Poolside, an AI programming startup.
Here's a sample of Alasdair discussing the future of the prompt bar in generative coding tools:
"When interacting with machine learning models, we've almost thrown away 30 years of human-computer interaction knowledge and kind of reverted to using a terminal circa 1980 to interact with the computer, or the prompt bar. This very plain-text way to interact with AI is really interesting.
"I think it's very different when you can't predict what a user interface is going to look like. What an LLM can spit out is basically unpredictable or non-deterministic, and so how do you design for that or how do you design around the guardrails for that are the really interesting things that I think everyone who works in the industry right now is trying to figure out. And I think it's pretty clear to a lot of people that sometimes you want to chat to the computer as if it's like the rubber duck.
"I think a lot of where AI is going to really help us, particularly with engineering, is going to be in the interactions that aren't that at all, and will actually probably look much more like interacting with traditional software today, where I interact with it via windows and buttons and all sorts of GUI elements."
Follow everyone on X:
Alasdair Monk
Yoko Li
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this episode, Inngest cofounder and CEO Tony Holdstock-Brown joins a16z partner Yoko Li, as well as Derrick Harris, to discuss the reality and complexity of running AI agents and other multistep AI workflows in production. Tony also discusses why developer tools for generative AI — and their founders — might look very similar to previous generations of these products, and where there are opportunities for improvement.
Here's a sample of the discussion, where Tony shares some advice for engineers looking to build for AI:
"We almost have two parallel tracks right now as engineers. We've got the CPU track in which we're all like, 'Oh yeah, CPU-bound, big O notation. What are we doing on the application-level side?' And then we've got the GPU side, in which people are doing like crazy things in order to make numbers faster, in order to make differentiation better and smoother, in order to do gradient descent in a nicer and more powerful way. The two disciplines right now are working together, but are also very, very, very different from an engineering point of view.
"This is one interesting part to think about for like new engineers, people that are just thinking about what to do if they want to go into the engineering field overall. Do you want to be on the side using AI, in which you take all of these models, do all of this stuff, build the application-level stuff, and chain things together to build products? Or do you want to be on the math side of things, in which you do really low-level things in order to make compilers work better, so that your AI things can run faster and more efficiently? Both are engineering, just completely different applications of it."
Learn more:
The Modern Transactional Stack
The LLM App Stack
Follow everyone on X:
Tony Holdstock-Brown
Yoko Li
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this episode, Ideogram CEO Mohammad Norouzi joins a16z General Partner Jennifer Li, as well as Derrick Harris, to share his story of growing up in Iran, helping build influential text-to-image models at Google, and ultimately cofounding and running Ideogram. He also breaks down the differences between transformer models and diffusion models, as well as the transition from researcher to startup CEO.
Here's an excerpt where Mohammad discusses the reaction to the original transformer architecture paper, "Attention Is All You Need," within Google's AI team:
"I think [lead author Ashish Vaswani] knew right after the paper was submitted that this is a very important piece of the technology. And he was telling me in the hallway how it works and how much improvement it gives to translation. Translation was a testbed for the transformer paper at the time, and it helped in two ways. One is the speed of training and the other is the quality of translation.
"To be fair, I don't think anybody had a very crystal clear idea of how big this would become. And I guess the interesting thing is, now, it's the founding architecture for computer vision, too, not only for language. And then we also went far beyond language translation as a task, and we are talking about general-purpose assistants and the idea of building general-purpose intelligent machines. And it's really humbling to see how big of a role the transformer is playing into this."
Learn more:
Investing in Ideogram
Imagen
Denoising Diffusion Probabilistic Models
Follow everyone on X:
Mohammad Norouzi
Jennifer Li
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
For this holiday weekend (in the United States) episode, we've stitched together two archived episodes from the a16z Podcast, both featuring General Partner Anjney Midha. In the first half, from December, he speaks with Mistral cofounder and CEO Arthur Mensch about the importance of open foundation models, as well as Mistral's approach to building them. In the second half (at 34:40), from February, he speaks with Stanford's Stefano Ermon about the state of the art in video models, including how OpenAI's Sora might work under the hood.
Here's a sample of what Arthur had to say about the debate over how to regulate AI models:
"I think the battle is for the neutrality of the technology. Like a technology, in a sense, is something neutral. You can use it for bad purposes. You can use it for good purposes. If you look at what an LLM does, it's not really different from a programming language. . . .
"So we should regulate the function, the mathematics behind it. But, really, you never use a large language model itself. You always use it in an application, in a way, with a user interface. And so, that's the one thing you want to regulate. And what it means is that companies like us, like foundational model companies, will obviously make the model as controllable as possible so that the applications on top of it can be compliant, can be safe. We'll also build the tools that allow you to measure the compliance and the safety of the application, because that's super useful for the application makers. It's actually needed.
"But there's no point in regulating something that is neutral in itself, that is just a mathematical tool. I think that's the one thing that we've been hammering a lot, which is good, but there's still a lot of effort in making this strong distinction, which is super important to understand what's going on."
Follow everyone on X:
Anjney Midha
Arthur Mensch
Stefano Ermon
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
a16z partners Guido Appenzeller and Matt Bornstein join Derrick Harris to discuss the state of the generative AI market, about 18 months after it really kicked into high gear with the release of ChatGPT — everything from the emergence of powerful open source LLMs to the excitement around AI-generated music.
If there's one major lesson to learn, it's that although we've made some very impressive technological strides and companies are generating meaningful revenue, this is still a very fluid space. As Matt puts it during the discussion:
"For nearly all AI applications and most model providers, growth is kind of a sawtooth pattern, meaning when there's a big new amazing thing announced, you see very fast growth. And when it's been a while since the last release, growth kind of can flatten off. And you can imagine retention can be all over the place, too . . .
"I think every time we're in a flat period, people start to think, 'Oh, it's mature now, the gold rush is over. What happens next?' But then a new spike almost always comes, or at least has over the last 18 months or so. So a lot of this depends on your time horizon, and I think we're still in this period of, like, if you think growth has slowed, wait a month and see it change."
Follow everyone on X:
Guido Appenzeller
Matt Bornstein
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this bonus episode, recorded live at our San Francisco office, security-startup founders Dean De Beer (Command Zero), Kevin Tian (Doppel), and Travis McPeak (Resourcely) share their thoughts on generative AI, as well as their experiences building with LLMs and dealing with LLM-based threats.
Here's a sample of what Dean had to say about the myriad considerations when choosing, and operating, a large language model:
"The more advanced your use case is, the more requirements you have, the more data you attach to it, the more complex your prompts — all this is going to change your inference time.
"I liken this to perceived waiting time for an elevator. There's data scientists at places like Otis that actually work on that problem. You know, no one wants to wait 45 seconds for an elevator, but taking the stairs will take them half an hour if they're going to the top floor of . . . something. Same thing here: If I can generate an outcome in 90 seconds, it's still too long from the user's perspective, even if building out and figuring out the data and building that report [would have] taken them four hours . . . two days."
Follow everyone:
Dean De Beer
Kevin Tian
Travis McPeak
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-
In this episode of the AI + a16z podcast, a16z General Partner Zane Lackey and a16z Partner Joel de la Garza sit down with Derrick Harris to discuss how generative AI — LLMs, in particular — and foundation models could effect profound change in cybersecurity. After years of AI-washing by security vendors, they explain why the hype is legitimate this time as AI provides a real opportunity to help security teams cut through the noise and automate away the types of drudgery that lead to mistakes.
"Often when you're running a security team, you're not only drowning in noise, but you're drowning in just the volume of things going on," Zane explains. "And so I think a lot of security teams are excited about, 'Can we utilize AI and LLMs to really take at least some of that off of our plate?'
"I think it's still very much an open question of how far they go in helping us, but even taking some meaningful percentage off of our plate in terms of overall work is going to really help security teams overall."
Follow everyone:
Zane Lackey
Joel de la Garza
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
-