Naveen Rao has been building artificial intelligence technologies and companies for more than a decade. He founded Nervana Systems (acquired by Intel) and MosaicML (acquired by Databricks), and now serves as vice president of generative AI at Databricks. From chips to models, there are few people with a better pulse on how enterprises are using AI.
In this inaugural episode of the AI + a16z podcast, Rao joins a16z partner Matt Bornstein and a16z enterprise editor Derrick Harris to discuss where we’re at in terms of large language model (LLM) adoption, as well as how LLMs will influence chip design and software refresh cycles. He also shares some of his personal story of watching AI technology — and awareness — grow from fringe movement to mainstream phenomenon.
Here are just a few of the many highlights from Rao:
[6:21] “I think the transformer being a standard paradigm is a good thing for hardware vendors, for sure, because it gives them an opportunity to actually come into the game. And that’s what we’re going to see this year. And that’s why I think this year is where we’ll actually see some competition [for Nvidia].
“Is it [good] for the industry? I think it’s a bit of an over-rotation on the architecture — for now — but that’s just how these things go. We’ve got something that works [and] we keep chasing it. I think whatever [is next], it’ll have to be some sort of a modification of that paradigm to move forward.”
[8:26] “At Nervana, for instance, we were very focused on a particular set of primitives, like multilayer perceptrons and convolutional nets. But then we had things like ResNets, which are convolutional nets, but have a different flow of information. That presented some issues. We changed the way we potentially will do convolutions: Can we do them in the frequency domain instead of the time domain? That actually changes the motifs again.
“A lot of that kind of worked in favor of something like a GPU that was a little bit more flexible. But now that we have something that’s more stereotyped, like with the transformer, it gives us an opportunity to go build something a little bit less flexible, but more performant.”
[18:35] “Now, everywhere, we’re seeing undergrads who come out of schools like Stanford or Berkeley or whatever, who understand a lot about an LLM and how to tune it and make it do what they want. They know IFT, SFT, RLHF — they know all this stuff now, at least conceptually. So I think the talent is getting to a point where it’s proliferating into many enterprises. It’s just, you’re not going to see the density [as inside a large AI research lab]. You’re not going to see a hundred-person infra team in these lines of business; you’re going to see a five-person infra team. So they need tools that abstract stuff.”
[26:37] “[T]hat’s the paradigm that shifted in my mind . . . pure supervised learning required you to go and build a very high-quality data set that was completely supervised. That was hard and expensive, and you had to do a bunch of ML engineering. So that didn’t quite take off; it was just too hard.
“But now we can get this sort of smooth gradation of performance, where I say, ‘Well, I have this base model that’s pretty good — understands language, understands concepts. Then I can start to layer in the things that I do know. . . And if I don’t have a ton of information, that’s OK. I can still get to something which is useful.'”
[39:08] “[I] went back to get a Ph.D. in neuroscience for the reason of: Can we actually bring intelligence to machines and do it in a way that’s economically feasible? And that last part is actually very important, because if something is not economically feasible, it won’t take off. It won’t proliferate. It won’t change the world.
“So I love building technologies into products, because when someone pays you for something, it means something very important. They saw something that adds value to them. They are solving a problem that they care about. They’re improving their business meaningfully — something. They’re willing to part ways with their money and give it to you for your product. That means something.”
If you liked this episode, you can also listen to the other episode we published this week: Making the Most of Open Source AI. It was recorded during a panel discussion with Jim Zemlin (Linux Foundation), Mitchell Baker (Mozilla), and Percy Liang (Stanford; Together AI), and was moderated by a16z General Partner Anjney Midha.
Sign up for our a16z newsletter to get analysis and news covering the latest trends reshaping AI and infrastructure.
Naveen Rao is the vice president of generative AI at Databricks.
Matt Bornstein is a partner at Andreessen Horowitz focused on AI, data systems, and infrastructure.
Derrick Harris is an editor at a16z, managing the content workflow across the Infra and American Dynamism teams.
The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. In addition, this content may include third-party advertisements; a16z has not reviewed such advertisements and does not endorse any advertising content contained therein.
This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments for which the issuer has not provided permission for a16z to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://a16z.com/investments/.
Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.
Artificial intelligence is changing everything from art to enterprise IT, and a16z is watching all of it with a close eye. This podcast features discussions with leading AI engineers, founders, and experts, as well as our general partners, about where the technology and industry are heading.