How to Use Llama 3.1 405B AI Model Right Now
Meta has introduced its largest Llama 3.1 405B AI model after many months of waiting. The frontier model has been open-sourced, and it rivals proprietary models from OpenAI, Anthropic, and Google. We finally have an open AI model that is as good as the closed ones. So if you want to check out the model's capabilities, follow this article and learn how to use the Llama 3.1 405B model right away.
Use Llama 3.1 405B on Meta AI
Users in the US can chat with the Llama 3.1 405B model on Meta AI and in WhatsApp. Meta is initially rolling out the larger model to US users only. Here is how to access it.
Use Llama 3.1 405B on HuggingChat
If you are not in the US, don't fret. You can still use the Llama 3.1 405B model on HuggingChat, which hosts the FP8-quantized Instruct model, and the platform is completely free to use.
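HuggingChat itself is a web interface, but if you prefer to query a hosted model from code, Hugging Face's serverless Inference API exposes models over plain HTTP. Below is a minimal sketch that only builds the request; the endpoint pattern is Hugging Face's standard one, while the exact repo id (`meta-llama/Meta-Llama-3.1-405B-Instruct-FP8`) is an assumption you should verify on the Hub before sending anything.

```python
import json
import urllib.request

# Assumed repo id -- double-check the exact name on the Hugging Face Hub.
MODEL = "meta-llama/Meta-Llama-3.1-405B-Instruct-FP8"
API_URL = f"https://api-inference.huggingface.co/models/{MODEL}"

def build_request(token: str, prompt: str) -> urllib.request.Request:
    """Return a ready-to-send POST request for the Inference API."""
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": 256}}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("hf_YOUR_TOKEN", "Explain FP8 quantization in one sentence.")
# Actually sending it requires a valid Hugging Face token:
# urllib.request.urlopen(req)
```

Keeping the request-building step separate from the network call makes the sketch easy to adapt to other models: only `MODEL` and the payload change.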
Use Llama 3.1 405B on Groq
Groq is also hosting the Llama 3.1 family of models, including the 70B and 8B models. Earlier, it was serving the largest 405B model too, but due to high traffic and server issues, Groq seems to have removed it for the moment. Meanwhile, Llama 3.1 70B and 8B are available, and these models generate responses at a blazing speed of 250 tokens per second.
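Beyond the chat interface, Groq serves these models through an OpenAI-compatible API. Here is a minimal sketch that constructs a single-turn chat request; the endpoint URL and the model id (`llama-3.1-70b-versatile`) are assumptions based on Groq's public docs at the time of writing, so confirm them against the current documentation.

```python
import json
import urllib.request

# Assumed endpoint and model id -- verify against Groq's current API docs.
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_chat_request(api_key: str, prompt: str,
                       model: str = "llama-3.1-70b-versatile") -> urllib.request.Request:
    """Return a POST request for Groq's OpenAI-compatible chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        GROQ_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("YOUR_GROQ_API_KEY", "Hello, Llama!")
# Sending the request needs a real API key:
# urllib.request.urlopen(req)
```

Because the endpoint follows the OpenAI chat-completions shape, official OpenAI client libraries can usually be pointed at it by swapping the base URL and key.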
So these are the three ways to use the largest Llama 3.1 405B model. As Meta rolls out the new model to more regions, you will be able to use it in your country without relying on third-party services. That is all from us. If you have any questions, let us know in the comments below.
Arjun Sha
Passionate about Windows, ChromeOS, Android, and security and privacy issues. Has a penchant for solving everyday computing problems.