Accelerated versions of Llama 3 optimized for NVIDIA GPUs

Update: April 20, 2024

Meta engineers trained Llama 3 on a computer cluster packing 24,576 NVIDIA H100 Tensor Core GPUs, linked with an NVIDIA Quantum-2 InfiniBand network. With support from NVIDIA, Meta tuned its network, software and model architectures for its flagship LLM.

To further advance the state of the art in generative AI, Meta recently described plans to scale its infrastructure to 350,000 H100 GPUs.

Versions of Llama 3, accelerated on NVIDIA GPUs, are available today for use in the cloud, data center, edge and PC.

From a browser, developers can try Llama 3 at ai.nvidia.com. It’s packaged as an NVIDIA NIM microservice with a standard application programming interface that can be deployed anywhere.
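
Because the NIM microservice exposes a standard API, calling it looks much like calling any OpenAI-compatible chat endpoint. The sketch below is illustrative only: the base URL, the NVIDIA_API_KEY environment variable and the meta/llama3-70b-instruct model identifier are assumptions to check against ai.nvidia.com for your own deployment.

```python
# Hypothetical sketch: querying a Llama 3 NIM endpoint through an
# OpenAI-compatible chat completions API. Endpoint URL, env var and
# model name are assumptions, not guaranteed values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed hosted NIM endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed env var holding your key
)

completion = client.chat.completions.create(
    model="meta/llama3-70b-instruct",                # assumed model identifier
    messages=[{"role": "user", "content": "Summarize what an NVIDIA NIM microservice is."}],
    max_tokens=256,
    stream=True,
)

# Stream tokens back as they are generated.
for chunk in completion:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```

Because the API follows the familiar chat-completions shape, the same client code can point at a locally deployed NIM container simply by changing the base URL.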

Businesses can fine-tune Llama 3 with their data using NVIDIA NeMo, an open-source framework for LLMs that’s part of the secure, supported NVIDIA AI Enterprise platform. Custom models can be optimized for inference with NVIDIA TensorRT-LLM and deployed with NVIDIA Triton Inference Server.
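
Once a custom model has been optimized with TensorRT-LLM and loaded into Triton Inference Server, client applications can reach it over Triton's HTTP generate endpoint. The snippet below is a minimal sketch under common TensorRT-LLM backend conventions; the server address, the "ensemble" model name and the text_input/text_output field names are assumptions that may differ in your deployment.

```python
# Hypothetical sketch: sending a prompt to a TensorRT-LLM-optimized Llama 3
# model served by Triton Inference Server. Server URL, model name and
# input/output field names are assumed, not confirmed by this article.
import requests

TRITON_URL = "http://localhost:8000"   # assumed local Triton instance
MODEL_NAME = "ensemble"                # assumed TensorRT-LLM ensemble model name

payload = {
    "text_input": "Explain the difference between fine-tuning and inference.",
    "max_tokens": 128,
    "temperature": 0.7,
}

response = requests.post(
    f"{TRITON_URL}/v2/models/{MODEL_NAME}/generate",
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["text_output"])
```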

Llama 3 also runs on NVIDIA Jetson Orin for robotics and edge computing devices, creating interactive agents like those in the Jetson AI Lab.

What’s more, NVIDIA RTX and GeForce RTX GPUs for workstations and PCs speed inference on Llama 3. These systems give developers a target of more than 100 million NVIDIA-accelerated systems.

Best practice in deploying an LLM for a chatbot involves balancing low latency, good reading speed and efficient GPU utilization to reduce costs.

Such a service needs to deliver tokens (the rough equivalent of words to an LLM) at about twice a user’s reading speed, or roughly 10 tokens/second per user.

Applying these metrics, a single NVIDIA H200 Tensor Core GPU generated about 3,000 tokens/second — enough to serve about 300 simultaneous users — in an initial test using the version of Llama 3 with 70 billion parameters.

That means a single NVIDIA HGX server with eight H200 GPUs could deliver 24,000 tokens/second, further optimizing costs by supporting more than 2,400 users at the same time.
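
The arithmetic behind those figures is straightforward. The Python sketch below simply restates it, treating the roughly 3,000 tokens/second per H200 and the roughly 10 tokens/second per-user target cited above as given.

```python
# Back-of-the-envelope sizing using the figures cited above.
TOKENS_PER_SECOND_PER_GPU = 3_000   # measured Llama 3 70B throughput on one H200
TOKENS_PER_SECOND_PER_USER = 10     # per-user delivery target (about 2x reading speed)
GPUS_PER_HGX_SERVER = 8

users_per_gpu = TOKENS_PER_SECOND_PER_GPU // TOKENS_PER_SECOND_PER_USER
server_throughput = TOKENS_PER_SECOND_PER_GPU * GPUS_PER_HGX_SERVER
users_per_server = server_throughput // TOKENS_PER_SECOND_PER_USER

print(users_per_gpu)       # 300 concurrent users per GPU
print(server_throughput)   # 24,000 tokens/second per HGX server
print(users_per_server)    # 2,400 concurrent users per server
```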

For edge devices, the version of Llama 3 with eight billion parameters generated up to 40 tokens/second on Jetson AGX Orin and 15 tokens/second on Jetson Orin Nano.
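
To put those edge figures in context, a quick estimate of response time follows from dividing reply length by generation rate. The 200-token reply length below is an illustrative assumption, not a benchmark from the article.

```python
# Rough response-time estimate at the edge, using the generation rates above.
REPLY_TOKENS = 200  # assumed reply length for illustration

for device, tokens_per_second in [("Jetson AGX Orin", 40), ("Jetson Orin Nano", 15)]:
    seconds = REPLY_TOKENS / tokens_per_second
    print(f"{device}: ~{seconds:.0f} s for a {REPLY_TOKENS}-token reply")
# Jetson AGX Orin: ~5 s; Jetson Orin Nano: ~13 s
```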