Compare commits


No commits in common. "34661fc9596fdc8e5a639384f8c7e427d009dbec" and "ed5f01928eb89bfe8842bf75341515e5245f7f22" have entirely different histories.

18 changed files with 1 addition and 304420 deletions

.gitattributes (vendored)

@@ -33,7 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
-<<<<<<< HEAD
-=======
 *.EncryptBy4pd filter=lfs diff=lfs merge=lfs -text
->>>>>>> ed5f01928eb89bfe8842bf75341515e5245f7f22

LICENSE

@@ -1,21 +0,0 @@
MIT License
Copyright (c) 2023 DeepSeek
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md

@@ -1,239 +0,0 @@
---
license: mit
library_name: transformers
---
# DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendations](#usage-recommendations) section.**
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open-source DeepSeek-R1, as well as its API, will help the research community distill better small models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to the [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
|DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We have slightly changed their configs and tokenizers. Please use our settings to run these models.
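Since the distilled checkpoints use standard Qwen2/Llama architectures, they also load directly with Transformers. A minimal loading sketch (ours, not part of the official instructions; it assumes `transformers` and `accelerate` are installed and enough GPU memory is available for the chosen checkpoint):

```python
# Load a distill checkpoint with its own (adjusted) config and tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # any distill checkpoint
tokenizer = AutoTokenizer.from_pretrained(repo)   # ships the adjusted tokenizer
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype="auto", device_map="auto"   # device_map needs accelerate
)
```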
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
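As an illustration (our sketch, not the paper's evaluation harness), pass@1 and the cons@64 majority-vote metric reported in the tables below can be estimated from the k = 64 sampled answers for one query; `answers` and `reference` are hypothetical placeholders for parsed completions and the ground truth:

```python
from collections import Counter

def pass_at_1(answers, reference):
    # Under temperature sampling, pass@1 is estimated as the
    # fraction of the k sampled answers that are correct.
    return sum(a == reference for a in answers) / len(answers)

def cons_at_k(answers, reference):
    # Consensus (majority-vote) accuracy over the k samples.
    majority, _ = Counter(answers).most_common(1)[0]
    return float(majority == reference)

answers = ["42", "42", "41", "42"]  # toy data; the evaluation uses k = 64
print(pass_at_1(answers, "42"), cons_at_k(answers, "42"))
```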
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website, [chat.deepseek.com](https://chat.deepseek.com), by switching on the "DeepThink" button.
We also provide an OpenAI-compatible API on the DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit the [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
**NOTE: Hugging Face's Transformers is not yet directly supported.**
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang):
```bash
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
```
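Both servers expose an OpenAI-compatible endpoint, so once one of them is running you can query it with a standard client. A minimal sketch (assuming vLLM's default port 8000; SGLang defaults to port 30000):

```python
# Minimal OpenAI-compatible client for a locally served distill model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    temperature=0.6,
    top_p=0.95,
)
print(response.choices[0].message.content)
```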
### Usage Recommendations
**We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
Additionally, we have observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance.
**To ensure that the model engages in thorough reasoning, we recommend enforcing the model to initiate its response with "\<think\>\n" at the beginning of every output**, as in the sketch below.
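One way to apply these recommendations against a locally served distill model is sketched here (our illustration, not an official recipe): the chat template is rendered manually, "\<think\>\n" is appended to force the reasoning block, and the raw completions endpoint is used with the recommended sampling settings. The server URL and `max_tokens` value are assumptions.

```python
# Sketch: enforce the "<think>\n" prefix via the raw completions endpoint.
# Assumes a vLLM server like the one above is running on localhost:8000.
from openai import OpenAI
from transformers import AutoTokenizer

model = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
tokenizer = AutoTokenizer.from_pretrained(model)

question = (
    "Please reason step by step, and put your final answer within \\boxed{}. "
    "What is 17 * 24?"
)
# No system prompt: all instructions go into the user turn.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": question}],
    tokenize=False,
    add_generation_prompt=True,
) + "<think>\n"  # force the reasoning block (newer tokenizer configs may already add this)

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
out = client.completions.create(
    model=model,
    prompt=prompt,
    temperature=0.6,  # recommended range: 0.5-0.7
    top_p=0.95,
    max_tokens=8192,
)
print("<think>\n" + out.choices[0].text)
```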
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from the [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and are now fine-tuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [service@deepseek.com](mailto:service@deepseek.com).

config.json

@@ -1,27 +0,0 @@
{
"architectures": [
"Qwen2ForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151643,
"hidden_act": "silu",
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 27648,
"max_position_embeddings": 131072,
"max_window_layers": 64,
"model_type": "qwen2",
"num_attention_heads": 40,
"num_hidden_layers": 64,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 1000000.0,
"sliding_window": 131072,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.43.1",
"use_cache": true,
"use_sliding_window": false,
"vocab_size": 152064
}

configuration.json

@@ -1 +0,0 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}

BIN
image file (759 KiB)

Binary file not shown.

generation_config.json

@@ -1,9 +0,0 @@
{
"_from_model_config": true,
"bos_token_id": 151646,
"eos_token_id": 151643,
"do_sample": true,
"temperature": 0.6,
"top_p": 0.95,
"transformers_version": "4.39.3"
}

BIN
model-00001-of-000008.safetensors (Stored with Git LFS)

Binary file not shown.

BIN
model-00002-of-000008.safetensors (Stored with Git LFS)

Binary file not shown.

BIN
model-00003-of-000008.safetensors (Stored with Git LFS)

Binary file not shown.

BIN
model-00004-of-000008.safetensors (Stored with Git LFS)

Binary file not shown.

BIN
model-00005-of-000008.safetensors (Stored with Git LFS)

Binary file not shown.

BIN
model-00006-of-000008.safetensors (Stored with Git LFS)

Binary file not shown.

BIN
model-00007-of-000008.safetensors (Stored with Git LFS)

Binary file not shown.

BIN
model-00008-of-000008.safetensors (Stored with Git LFS)

Binary file not shown.

model.safetensors.index.json

@@ -1,778 +0,0 @@
{
"metadata": {
"total_size": 65527752704
},
"weight_map": {
"model.embed_tokens.weight": "model-00001-of-000008.safetensors",
"model.layers.0.self_attn.q_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.0.self_attn.k_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.0.self_attn.v_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-000008.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-000008.safetensors",
"model.layers.1.self_attn.q_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.1.self_attn.k_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.1.self_attn.v_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-000008.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-000008.safetensors",
"model.layers.2.self_attn.q_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.2.self_attn.k_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.2.self_attn.v_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-000008.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-000008.safetensors",
"model.layers.3.self_attn.q_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.3.self_attn.k_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.3.self_attn.v_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.3.input_layernorm.weight": "model-00001-of-000008.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-000008.safetensors",
"model.layers.4.self_attn.q_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.4.self_attn.k_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.4.self_attn.v_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.4.input_layernorm.weight": "model-00001-of-000008.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-000008.safetensors",
"model.layers.5.self_attn.q_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.5.self_attn.k_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.5.self_attn.v_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.5.input_layernorm.weight": "model-00001-of-000008.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-000008.safetensors",
"model.layers.6.self_attn.q_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.6.self_attn.k_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.6.self_attn.v_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.6.input_layernorm.weight": "model-00001-of-000008.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00001-of-000008.safetensors",
"model.layers.7.self_attn.q_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.7.self_attn.k_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.7.self_attn.v_proj.bias": "model-00001-of-000008.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00001-of-000008.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.7.input_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.8.self_attn.q_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.8.self_attn.k_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.8.self_attn.v_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.8.input_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.9.self_attn.q_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.9.self_attn.k_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.9.self_attn.v_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.9.input_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.10.self_attn.q_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.10.self_attn.k_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.10.self_attn.v_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.10.input_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.11.self_attn.q_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.11.self_attn.k_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.11.self_attn.v_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.11.input_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.12.self_attn.q_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.12.self_attn.k_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.12.self_attn.v_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.12.input_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.13.self_attn.q_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.13.self_attn.k_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.13.self_attn.v_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.13.input_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.14.self_attn.q_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.14.self_attn.k_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.14.self_attn.v_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.14.input_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.15.self_attn.q_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.15.self_attn.k_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.15.self_attn.v_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.15.input_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00002-of-000008.safetensors",
"model.layers.16.self_attn.q_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.16.self_attn.k_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.16.self_attn.v_proj.bias": "model-00002-of-000008.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00002-of-000008.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.16.input_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.17.self_attn.q_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.17.self_attn.k_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.17.self_attn.v_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.17.input_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.18.self_attn.q_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.18.self_attn.k_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.18.self_attn.v_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.18.input_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.19.self_attn.q_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.19.self_attn.k_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.19.self_attn.v_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.19.input_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.20.self_attn.q_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.20.self_attn.k_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.20.self_attn.v_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.20.input_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.21.self_attn.q_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.21.self_attn.k_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.21.self_attn.v_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.21.input_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.22.self_attn.q_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.22.self_attn.k_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.22.self_attn.v_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.22.input_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.23.self_attn.q_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.23.self_attn.k_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.23.self_attn.v_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.23.input_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.24.self_attn.q_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.24.self_attn.k_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.24.self_attn.v_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.24.input_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00003-of-000008.safetensors",
"model.layers.25.self_attn.q_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.25.self_attn.k_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.25.self_attn.v_proj.bias": "model-00003-of-000008.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00003-of-000008.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.25.input_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.26.self_attn.q_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.26.self_attn.k_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.26.self_attn.v_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.26.input_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.27.self_attn.q_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.27.self_attn.k_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.27.self_attn.v_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.27.input_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.28.self_attn.q_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.28.self_attn.k_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.28.self_attn.v_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.28.input_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.29.self_attn.q_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.29.self_attn.k_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.29.self_attn.v_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.29.input_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.30.self_attn.q_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.30.self_attn.k_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.30.self_attn.v_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.30.input_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.31.self_attn.q_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.31.self_attn.k_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.31.self_attn.v_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.31.input_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.32.self_attn.q_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.32.self_attn.k_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.32.self_attn.v_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.32.self_attn.q_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.32.self_attn.k_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.32.self_attn.v_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.32.self_attn.o_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.32.mlp.gate_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.32.mlp.up_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.32.mlp.down_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.32.input_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.32.post_attention_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.33.self_attn.q_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.33.self_attn.k_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.33.self_attn.v_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.33.self_attn.q_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.33.self_attn.k_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.33.self_attn.v_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.33.self_attn.o_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.33.mlp.gate_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.33.mlp.up_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.33.mlp.down_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.33.input_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.33.post_attention_layernorm.weight": "model-00004-of-000008.safetensors",
"model.layers.34.self_attn.q_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.34.self_attn.k_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.34.self_attn.v_proj.bias": "model-00004-of-000008.safetensors",
"model.layers.34.self_attn.q_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.34.self_attn.k_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.34.self_attn.v_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.34.self_attn.o_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.34.mlp.gate_proj.weight": "model-00004-of-000008.safetensors",
"model.layers.34.mlp.up_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.34.mlp.down_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.34.input_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.34.post_attention_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.35.self_attn.q_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.35.self_attn.k_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.35.self_attn.v_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.35.self_attn.q_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.35.self_attn.k_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.35.self_attn.v_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.35.self_attn.o_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.35.mlp.gate_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.35.mlp.up_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.35.mlp.down_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.35.input_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.35.post_attention_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.36.self_attn.q_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.36.self_attn.k_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.36.self_attn.v_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.36.self_attn.q_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.36.self_attn.k_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.36.self_attn.v_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.36.self_attn.o_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.36.mlp.gate_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.36.mlp.up_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.36.mlp.down_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.36.input_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.36.post_attention_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.37.self_attn.q_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.37.self_attn.k_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.37.self_attn.v_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.37.self_attn.q_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.37.self_attn.k_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.37.self_attn.v_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.37.self_attn.o_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.37.mlp.gate_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.37.mlp.up_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.37.mlp.down_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.37.input_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.37.post_attention_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.38.self_attn.q_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.38.self_attn.k_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.38.self_attn.v_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.38.self_attn.q_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.38.self_attn.k_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.38.self_attn.v_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.38.self_attn.o_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.38.mlp.gate_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.38.mlp.up_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.38.mlp.down_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.38.input_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.38.post_attention_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.39.self_attn.q_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.39.self_attn.k_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.39.self_attn.v_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.39.self_attn.q_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.39.self_attn.k_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.39.self_attn.v_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.39.self_attn.o_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.39.mlp.gate_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.39.mlp.up_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.39.mlp.down_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.39.input_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.39.post_attention_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.40.self_attn.q_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.40.self_attn.k_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.40.self_attn.v_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.40.self_attn.q_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.40.self_attn.k_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.40.self_attn.v_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.40.self_attn.o_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.40.mlp.gate_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.40.mlp.up_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.40.mlp.down_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.40.input_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.40.post_attention_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.41.self_attn.q_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.41.self_attn.k_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.41.self_attn.v_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.41.self_attn.q_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.41.self_attn.k_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.41.self_attn.v_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.41.self_attn.o_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.41.mlp.gate_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.41.mlp.up_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.41.mlp.down_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.41.input_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.41.post_attention_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.42.self_attn.q_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.42.self_attn.k_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.42.self_attn.v_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.42.self_attn.q_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.42.self_attn.k_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.42.self_attn.v_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.42.self_attn.o_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.42.mlp.gate_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.42.mlp.up_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.42.mlp.down_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.42.input_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.42.post_attention_layernorm.weight": "model-00005-of-000008.safetensors",
"model.layers.43.self_attn.q_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.43.self_attn.k_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.43.self_attn.v_proj.bias": "model-00005-of-000008.safetensors",
"model.layers.43.self_attn.q_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.43.self_attn.k_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.43.self_attn.v_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.43.self_attn.o_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.43.mlp.gate_proj.weight": "model-00005-of-000008.safetensors",
"model.layers.43.mlp.up_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.43.mlp.down_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.43.input_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.43.post_attention_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.44.self_attn.q_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.44.self_attn.k_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.44.self_attn.v_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.44.self_attn.q_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.44.self_attn.k_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.44.self_attn.v_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.44.self_attn.o_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.44.mlp.gate_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.44.mlp.up_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.44.mlp.down_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.44.input_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.44.post_attention_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.45.self_attn.q_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.45.self_attn.k_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.45.self_attn.v_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.45.self_attn.q_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.45.self_attn.k_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.45.self_attn.v_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.45.self_attn.o_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.45.mlp.gate_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.45.mlp.up_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.45.mlp.down_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.45.input_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.45.post_attention_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.46.self_attn.q_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.46.self_attn.k_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.46.self_attn.v_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.46.self_attn.q_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.46.self_attn.k_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.46.self_attn.v_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.46.self_attn.o_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.46.mlp.gate_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.46.mlp.up_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.46.mlp.down_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.46.input_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.46.post_attention_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.47.self_attn.q_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.47.self_attn.k_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.47.self_attn.v_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.47.self_attn.q_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.47.self_attn.k_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.47.self_attn.v_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.47.self_attn.o_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.47.mlp.gate_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.47.mlp.up_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.47.mlp.down_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.47.input_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.47.post_attention_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.48.self_attn.q_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.48.self_attn.k_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.48.self_attn.v_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.48.self_attn.q_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.48.self_attn.k_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.48.self_attn.v_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.48.self_attn.o_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.48.mlp.gate_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.48.mlp.up_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.48.mlp.down_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.48.input_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.48.post_attention_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.49.self_attn.q_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.49.self_attn.k_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.49.self_attn.v_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.49.self_attn.q_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.49.self_attn.k_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.49.self_attn.v_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.49.self_attn.o_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.49.mlp.gate_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.49.mlp.up_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.49.mlp.down_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.49.input_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.49.post_attention_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.50.self_attn.q_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.50.self_attn.k_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.50.self_attn.v_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.50.self_attn.q_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.50.self_attn.k_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.50.self_attn.v_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.50.self_attn.o_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.50.mlp.gate_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.50.mlp.up_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.50.mlp.down_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.50.input_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.50.post_attention_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.51.self_attn.q_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.51.self_attn.k_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.51.self_attn.v_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.51.self_attn.q_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.51.self_attn.k_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.51.self_attn.v_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.51.self_attn.o_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.51.mlp.gate_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.51.mlp.up_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.51.mlp.down_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.51.input_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.51.post_attention_layernorm.weight": "model-00006-of-000008.safetensors",
"model.layers.52.self_attn.q_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.52.self_attn.k_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.52.self_attn.v_proj.bias": "model-00006-of-000008.safetensors",
"model.layers.52.self_attn.q_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.52.self_attn.k_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.52.self_attn.v_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.52.self_attn.o_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.52.mlp.gate_proj.weight": "model-00006-of-000008.safetensors",
"model.layers.52.mlp.up_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.52.mlp.down_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.52.input_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.52.post_attention_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.53.self_attn.q_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.53.self_attn.k_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.53.self_attn.v_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.53.self_attn.q_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.53.self_attn.k_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.53.self_attn.v_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.53.self_attn.o_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.53.mlp.gate_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.53.mlp.up_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.53.mlp.down_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.53.input_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.53.post_attention_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.54.self_attn.q_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.54.self_attn.k_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.54.self_attn.v_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.54.self_attn.q_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.54.self_attn.k_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.54.self_attn.v_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.54.self_attn.o_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.54.mlp.gate_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.54.mlp.up_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.54.mlp.down_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.54.input_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.54.post_attention_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.55.self_attn.q_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.55.self_attn.k_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.55.self_attn.v_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.55.self_attn.q_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.55.self_attn.k_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.55.self_attn.v_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.55.self_attn.o_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.55.mlp.gate_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.55.mlp.up_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.55.mlp.down_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.55.input_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.55.post_attention_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.56.self_attn.q_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.56.self_attn.k_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.56.self_attn.v_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.56.self_attn.q_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.56.self_attn.k_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.56.self_attn.v_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.56.self_attn.o_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.56.mlp.gate_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.56.mlp.up_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.56.mlp.down_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.56.input_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.56.post_attention_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.57.self_attn.q_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.57.self_attn.k_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.57.self_attn.v_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.57.self_attn.q_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.57.self_attn.k_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.57.self_attn.v_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.57.self_attn.o_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.57.mlp.gate_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.57.mlp.up_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.57.mlp.down_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.57.input_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.57.post_attention_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.58.self_attn.q_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.58.self_attn.k_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.58.self_attn.v_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.58.self_attn.q_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.58.self_attn.k_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.58.self_attn.v_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.58.self_attn.o_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.58.mlp.gate_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.58.mlp.up_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.58.mlp.down_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.58.input_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.58.post_attention_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.59.self_attn.q_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.59.self_attn.k_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.59.self_attn.v_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.59.self_attn.q_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.59.self_attn.k_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.59.self_attn.v_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.59.self_attn.o_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.59.mlp.gate_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.59.mlp.up_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.59.mlp.down_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.59.input_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.59.post_attention_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.60.self_attn.q_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.60.self_attn.k_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.60.self_attn.v_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.60.self_attn.q_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.60.self_attn.k_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.60.self_attn.v_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.60.self_attn.o_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.60.mlp.gate_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.60.mlp.up_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.60.mlp.down_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.60.input_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.60.post_attention_layernorm.weight": "model-00007-of-000008.safetensors",
"model.layers.61.self_attn.q_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.61.self_attn.k_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.61.self_attn.v_proj.bias": "model-00007-of-000008.safetensors",
"model.layers.61.self_attn.q_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.61.self_attn.k_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.61.self_attn.v_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.61.self_attn.o_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.61.mlp.gate_proj.weight": "model-00007-of-000008.safetensors",
"model.layers.61.mlp.up_proj.weight": "model-00008-of-000008.safetensors",
"model.layers.61.mlp.down_proj.weight": "model-00008-of-000008.safetensors",
"model.layers.61.input_layernorm.weight": "model-00008-of-000008.safetensors",
"model.layers.61.post_attention_layernorm.weight": "model-00008-of-000008.safetensors",
"model.layers.62.self_attn.q_proj.bias": "model-00008-of-000008.safetensors",
"model.layers.62.self_attn.k_proj.bias": "model-00008-of-000008.safetensors",
"model.layers.62.self_attn.v_proj.bias": "model-00008-of-000008.safetensors",
"model.layers.62.self_attn.q_proj.weight": "model-00008-of-000008.safetensors",
"model.layers.62.self_attn.k_proj.weight": "model-00008-of-000008.safetensors",
"model.layers.62.self_attn.v_proj.weight": "model-00008-of-000008.safetensors",
"model.layers.62.self_attn.o_proj.weight": "model-00008-of-000008.safetensors",
"model.layers.62.mlp.gate_proj.weight": "model-00008-of-000008.safetensors",
"model.layers.62.mlp.up_proj.weight": "model-00008-of-000008.safetensors",
"model.layers.62.mlp.down_proj.weight": "model-00008-of-000008.safetensors",
"model.layers.62.input_layernorm.weight": "model-00008-of-000008.safetensors",
"model.layers.62.post_attention_layernorm.weight": "model-00008-of-000008.safetensors",
"model.layers.63.self_attn.q_proj.bias": "model-00008-of-000008.safetensors",
"model.layers.63.self_attn.k_proj.bias": "model-00008-of-000008.safetensors",
"model.layers.63.self_attn.v_proj.bias": "model-00008-of-000008.safetensors",
"model.layers.63.self_attn.q_proj.weight": "model-00008-of-000008.safetensors",
"model.layers.63.self_attn.k_proj.weight": "model-00008-of-000008.safetensors",
"model.layers.63.self_attn.v_proj.weight": "model-00008-of-000008.safetensors",
"model.layers.63.self_attn.o_proj.weight": "model-00008-of-000008.safetensors",
"model.layers.63.mlp.gate_proj.weight": "model-00008-of-000008.safetensors",
"model.layers.63.mlp.up_proj.weight": "model-00008-of-000008.safetensors",
"model.layers.63.mlp.down_proj.weight": "model-00008-of-000008.safetensors",
"model.layers.63.input_layernorm.weight": "model-00008-of-000008.safetensors",
"model.layers.63.post_attention_layernorm.weight": "model-00008-of-000008.safetensors",
"model.norm.weight": "model-00008-of-000008.safetensors",
"lm_head.weight": "model-00008-of-000008.safetensors"
}
}
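
The `weight_map` above is the routing table a loader uses to resolve each tensor name to its shard; note how a layer can straddle a boundary (layer 43's `mlp.gate_proj.weight` is in shard 5, while its `mlp.up_proj.weight` opens shard 6). Below is a minimal sketch of reading one tensor through this index, assuming a hypothetical local directory `./checkpoint` holding the index and shards, and using the standard `safetensors` Python API (PyTorch installed for `framework="pt"`):

```python
import json
from pathlib import Path

from safetensors import safe_open

CKPT = Path("./checkpoint")  # hypothetical local directory with the shards and index

def load_tensor(name: str):
    """Look the tensor up in the index, then open only the shard that holds it."""
    index = json.loads((CKPT / "model.safetensors.index.json").read_text())
    shard = index["weight_map"][name]  # e.g. "model-00006-of-000008.safetensors"
    with safe_open(str(CKPT / shard), framework="pt", device="cpu") as f:
        return f.get_tensor(name)  # reads just this tensor, not the whole shard

# Per the map above, this weight is the first entry stored in shard 6:
w = load_tensor("model.layers.43.mlp.up_proj.weight")
print(w.shape, w.dtype)
```

Re-parsing the index on every call is wasteful; a real loader reads it once and groups tensor names by shard so each file is opened a single time.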

File diff suppressed because it is too large

35 tokenizer_config.json
View File

@@ -1,35 +0,0 @@
{
"add_bos_token": true,
"add_eos_token": false,
"bos_token": {
"__type": "AddedToken",
"content": "<begin▁of▁sentence>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"clean_up_tokenization_spaces": false,
"eos_token": {
"__type": "AddedToken",
"content": "<end▁of▁sentence>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"legacy": true,
"model_max_length": 16384,
"pad_token": {
"__type": "AddedToken",
"content": "<end▁of▁sentence>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"sp_model_kwargs": {},
"unk_token": null,
"tokenizer_class": "LlamaTokenizerFast",
"chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='') %}{%- for message in messages %}{%- if message['role'] == 'system' %}{% set ns.system_prompt = message['content'] %}{%- endif %}{%- endfor %}{{bos_token}}{{ns.system_prompt}}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<User>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is none %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls']%}{%- if not ns.is_first %}{{'<Assistant><tool▁calls▁begin><tool▁call▁begin>' + tool['type'] + '<tool▁sep>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<tool▁call▁end>'}}{%- set ns.is_first = true -%}{%- else %}{{'\\n' + '<tool▁call▁begin>' + tool['type'] + '<tool▁sep>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<tool▁call▁end>'}}{{'<tool▁calls▁end><end▁of▁sentence>'}}{%- endif %}{%- endfor %}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is not none %}{%- if ns.is_tool %}{{'<tool▁outputs▁end>' + message['content'] + '<end▁of▁sentence>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<Assistant>' + content + '<end▁of▁sentence>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<tool▁outputs▁begin><tool▁output▁begin>' + message['content'] + '<tool▁output▁end>'}}{%- set ns.is_output_first = false %}{%- else %}{{'\\n<tool▁output▁begin>' + message['content'] + '<tool▁output▁end>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<tool▁outputs▁end>'}}{% endif %}{% if add_generation_prompt and not ns.is_tool %}{{'<Assistant><think>\\n'}}{% endif %}"
}
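
The `chat_template` field above is a Jinja template that `transformers` renders via `tokenizer.apply_chat_template`; it produces the `<|User|>` / `<|Assistant|>` framing and, when a generation prompt is requested, the trailing `<|Assistant|><think>\n`. A minimal sketch of rendering a prompt from it, assuming a hypothetical checkpoint path that ships this exact tokenizer config:

```python
from transformers import AutoTokenizer

# Hypothetical path; any checkpoint carrying the tokenizer_config.json above works.
tokenizer = AutoTokenizer.from_pretrained("./checkpoint")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 7 * 6?"},
]

# tokenize=False returns the rendered string; add_generation_prompt triggers the
# template's final branch, appending '<|Assistant|><think>\n'.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
# Per the template: '<|begin▁of▁sentence|>You are a helpful assistant.<|User|>What is 7 * 6?<|Assistant|><think>\n'
```

Note the `content.split('</think>')[-1]` branch: when earlier assistant turns are re-fed into the template, everything before `</think>` is dropped, so reasoning traces from previous turns are not carried back into the context.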