Here’s How Nvidia Will Power Automotive Giants
Nvidia (NASDAQ:NVDA) is providing automotive companies with advanced solutions for hardware, software, and infrastructure so they can compete alongside Tesla when it comes to autonomous driving technology. In this Backstage Pass clip from “The AI/ML Show” recorded on Jan. 12, Motley Fool contributor Jose Najarro explains Nvidia’s role in supporting automobile giants as they develop self-driving vehicles.
Jose Najarro: Those other automobile companies that don’t have the in-house capability that Tesla has look to Nvidia for solutions. That’s where Nvidia is working: they’re creating solutions for the other automobile companies or automobile giants that may not have the type of talent or engineers that Tesla has. I’m going to share, and we’ll look at all the different solutions they offer, and obviously, we’re going to talk about some of the chips.
First, they have solutions for the hardware, software, and infrastructure. Let’s start with the hardware. They have NVIDIA DRIVE Orin, and this is pretty much the brain of the vehicle’s computer. Nvidia’s chips used to be inside Teslas, not this generation but the previous one, called NVIDIA DRIVE Pegasus and Xavier; things improve over time. The newest generation is the Orin, and the upcoming generation is going to be the Atlan. This is their system-on-chip, which delivers over 250 trillion operations per second. It’s based on their current Ampere architecture, their latest GPU architecture, and it allows for Level 2+ up to Level 5 autonomous driving. Like I mentioned, this is the central computer, pretty much the brain. It lets developers build, scale, and leverage one development investment across an entire fleet, and it improves over time. I believe with the Pegasus and Xavier, each vehicle needed two chips. Nowadays, if one is using the Orin, you only need one chip per vehicle. This is their main hardware.
The second piece of hardware they have is what they call the NVIDIA DRIVE Hyperion 8. This is a full setup, a production-ready platform for autonomous vehicles. Let’s say you’re some form of car company and you want to enter the autonomous market, you can buy this kit. It comes with, if you can see right here, pretty much that Orin chip we just took a look at, but it also comes with 12 exterior cameras, three interior cameras, nine radars, 12 ultrasonic sensors, and one front-facing lidar, plus one lidar for ground-truth data collection.
I’m pretty sure Trevor showed the sensors that Tesla is using. One of the obvious benefits of using fewer sensors is lower cost; the overall cost decreases. Here, using a lot more sensors, you’re most likely going to pay more for something like this. This is a huge improvement from their previous generation, the Hyperion 7.1, which I mentioned previously needed two system-on-chips. The NVIDIA DRIVE Hyperion 8 only needs one per vehicle. We’ll just take a quick look. Here, we have all the sensors. You have the lidar, the cameras, the radars, their overall fields of view, and the different sensors and different companies that supply this Hyperion 8.1 sensor suite.
Those are the main hardware systems that Nvidia provides. Then they also provide infrastructure. It’s important to collect all this data. All this data that each vehicle collects is important, but you need to train models on that data. To be able to do that, you need some high-performance computers, some crazy servers, some amazing workstations. That’s why we saw Trevor mention that Tesla is working on that, I forget, what was the chip name? Trevor, if you can.
Trevor Jennewine: The datacenter one is the D1.
Jose Najarro: The D1, yes. Tesla has their own servers. Companies need their own servers. Nvidia has something for this called the Nvidia DGX. Let’s say I’m an automobile company and I need to have some servers; I can buy pre-built servers from Nvidia. This can get pretty expensive. Building servers for machine learning and artificial intelligence can cost $10 million plus, and some can go into the hundreds of millions of dollars.
Nvidia also rents out servers. You can rent servers, some starting as cheap as $90,000 a month. Obviously, the more power you need, the more expensive it gets. But there are certain companies that just don’t want to deal with the overall maintenance and upgrading of their servers, so they would prefer to rent, more like a subscription, pretty much like leasing. Every time you lease, you can get a new vehicle; with this infrastructure, you get new servers to make sure you’re always up to date.
This article represents the opinion of the writer, who may disagree with the “official” recommendation position of a Motley Fool premium advisory service. We’re motley! Questioning an investing thesis — even one of our own — helps us all think critically about investing and make decisions that help us become smarter, happier, and richer.