Using Groq in LobeChat
Groq's LPU Inference Engine has excelled in recent independent Large Language Model (LLM) benchmarks, redefining the standard for AI solutions with its remarkable speed and efficiency. By integrating LobeChat with GroqCloud, you can easily leverage Groq's technology to accelerate the operation of large language models in LobeChat.
Groq's LPU Inference Engine achieved a sustained speed of 300 tokens per second in internal benchmark tests. According to benchmark tests by ArtificialAnalysis.ai, Groq outperformed other providers in both throughput (241 tokens per second) and total time to receive 100 output tokens (0.8 seconds).
This document will guide you on how to use Groq in LobeChat:
Obtaining GroqCloud API Keys
First, you need to obtain an API Key from the GroqCloud Console.
Create an API Key in the API Keys menu of the console.
Safely store the key from the pop-up as it will only appear once. If you accidentally lose it, you will need to create a new key.
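If you want to confirm the key works before configuring LobeChat, you can query GroqCloud's OpenAI-compatible REST endpoint directly. The sketch below is a minimal example using only the Python standard library; it assumes your key is stored in a `GROQ_API_KEY` environment variable (the variable name is our choice, not required by Groq).

```python
import json
import os
import urllib.request

# GroqCloud exposes an OpenAI-compatible API under this base URL.
GROQ_API_BASE = "https://api.groq.com/openai/v1"

def build_models_request(api_key: str) -> urllib.request.Request:
    """Build an authenticated GET request for the /models endpoint."""
    return urllib.request.Request(
        f"{GROQ_API_BASE}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

if __name__ == "__main__":
    # Only hits the network when GROQ_API_KEY is actually set.
    key = os.environ.get("GROQ_API_KEY")
    if key:
        with urllib.request.urlopen(build_models_request(key)) as resp:
            # A valid key returns a JSON list of available models.
            print(json.dumps(json.load(resp), indent=2))
```

A successful response lists the models your key can access; an invalid or lost key returns an HTTP 401 error, in which case you will need to create a new key in the console.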
Configure Groq in LobeChat
You can find the Groq configuration option in Settings -> Language Model, where you can enter the API Key you just obtained.
Next, select a Groq-supported model in the assistant's model options, and you can experience the powerful performance of Groq in LobeChat.
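Under the hood, requests to a Groq-supported model go through the same OpenAI-compatible chat completions endpoint, which you can also call directly. The sketch below uses only the Python standard library; the model ID `llama3-8b-8192` is just an illustrative example, so check the GroqCloud console for the models currently available to your account.

```python
import json
import os
import urllib.request

# GroqCloud's OpenAI-compatible API base URL.
GROQ_API_BASE = "https://api.groq.com/openai/v1"

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an authenticated POST request for /chat/completions."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{GROQ_API_BASE}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    # Only hits the network when GROQ_API_KEY is actually set.
    key = os.environ.get("GROQ_API_KEY")
    if key:
        req = build_chat_request(key, "llama3-8b-8192", "Hello!")
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
```

LobeChat handles all of this for you once the API Key is configured; the sketch is only meant to show what a Groq chat request looks like.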