
Meta Llama 3.1 is a collection of multilingual large language models (LLMs) available in 8B, 70B and 405B parameter sizes. These models are designed for text-based tasks, including chat and content generation. The instruction-tuned versions available here are optimized for multilingual dialogue use cases and have demonstrated superior performance compared to many open-source and commercial chat models on common industry benchmarks.
- **Assistant-like chat**: Instruction-tuned, text-only models are optimized for multilingual dialogue, making them well suited to building conversational AI assistants.
- **Natural language generation**: Pretrained models can be adapted for a variety of text-based applications, such as content creation, summarization, and translation.
- **Synthetic data generation**: Use Llama 3.1 outputs to create synthetic datasets that help train and improve other models.
- **Model distillation**: Transfer knowledge from Llama 3.1 into smaller models to produce more efficient, specialized systems, or use it as a base model and fine-tune it on the outputs of larger models (see `deepseek-r1-distill-llama` for an example).
- **Research**: Use Llama 3.1 in academic and scientific research to explore advances in natural language processing and artificial intelligence.
| Attribute | Details |
|---|---|
| Provider | Meta |
| Architecture | llama |
| Cutoff date | December 2023 |
| Languages | English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. |
| Tool calling | ✅ |
| Input modalities | Text |
| Output modalities | Text and Code |
| License | Llama 3.1 Community license |
| Model variant | Parameters | Quantization | Context window | VRAM¹ | Size |
|---|---|---|---|---|---|
| `ai/llama3.1:latest` | 8B | IQ2_XXS/Q4_K_M | 131K tokens | 5.33 GiB | 4.58 GB |
| `ai/llama3.1:8B-Q4_K_M` | 8B | IQ2_XXS/Q4_K_M | 131K tokens | 5.33 GiB | 4.58 GB |
| `ai/llama3.1:8B-F16` | 8B | F16 | 131K tokens | 15.01 GiB | 14.96 GB |
¹: VRAM estimated based on model characteristics.
The `latest` tag points to the `8B-Q4_K_M` variant.
First, pull the model:
```bash
docker model pull ai/llama3.1
```
Then run the model:
```bash
docker model run ai/llama3.1
```
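If your Docker Model Runner version supports passing a prompt directly, you can also run a one-off completion from the command line, which is handy as a quick smoke test. A minimal sketch; the prompt is illustrative:

```bash
# One-shot prompt: prints the model's reply and exits
docker model run ai/llama3.1 "Summarize what Llama 3.1 is in one sentence."
```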
For more information on Docker Model Runner, explore the documentation.
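Docker Model Runner also exposes an OpenAI-compatible API. The sketch below assumes host-side TCP access is enabled on the default port 12434; the exact host, port, and path depend on your setup, so check the Docker Model Runner documentation for the address that applies to you:

```bash
# Chat completion against the OpenAI-compatible endpoint
# (host/port/path are assumptions; adjust to your Docker Model Runner configuration)
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/llama3.1",
        "messages": [
          {"role": "system", "content": "You are a concise assistant."},
          {"role": "user", "content": "Explain what a context window is in one sentence."}
        ]
      }'
```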
| Category | Benchmark | Llama 3.1 8B |
|---|---|---|
| General | MMLU | 69.4 |
| | MMLU (CoT) | 73.0 |
| | MMLU-Pro (CoT) | 48.3 |
| | IFEval | 80.4 |
| Reasoning | ARC-C | 83.4 |
| | GPQA | 30.4 |
| Code | HumanEval | 72.6 |
| | MBPP++ (base) | 72.8 |
| | MultiPL-E HumanEval | 50.8 |
| | MultiPL-E MBPP | 52.4 |
| Math | GSM-8K (CoT) | 84.5 |
| | MATH (CoT) | 51.9 |
| Tool Use | API-Bank | 82.6 |
| | BFCL | 76.1 |
| | Gorilla Benchmark API Bench | 8.2 |
| | Nexus (0-shot) | 38.5 |
| Multilingual | Multilingual MGSM (CoT) | 68.9 |
| | MMLU (5-shot) - Portuguese | 62.12 |
| | MMLU (5-shot) - Spanish | 62.45 |
| | MMLU (5-shot) - Italian | 61.63 |
| | MMLU (5-shot) - German | 60.59 |
| | MMLU (5-shot) - French | 62.34 |
| | MMLU (5-shot) - Hindi | 50.88 |
| | MMLU (5-shot) - Thai | 50.32 |
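The Tool Use scores above reflect the instruction-tuned model's function-calling ability, which is also flagged in the attributes table. A minimal sketch of a tool-calling request over the same assumed OpenAI-compatible endpoint; the `get_weather` function schema is purely illustrative:

```bash
# Tool-calling request: the model should respond with a tool call rather than plain text
# (endpoint and function schema are illustrative assumptions)
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/llama3.1",
        "messages": [{"role": "user", "content": "What is the weather like in Paris right now?"}],
        "tools": [{
          "type": "function",
          "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
              "type": "object",
              "properties": {"city": {"type": "string"}},
              "required": ["city"]
            }
          }
        }]
      }'
```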