Update readme, chat_template (#16)

- Update readme, chat_template (6d7581d7d7307c06eaac2b911634b15797a7bbb3)


Co-authored-by: yong <yo37@users.noreply.huggingface.co>
Cherrytest 2025-08-26 07:42:26 +00:00
parent 2131a3c091
commit 8822c688ba
2 changed files with 22 additions and 21 deletions

View File

@@ -28,7 +28,7 @@ tags:
<img src="https://img.shields.io/badge/Seed-Project Page-yellow"></a> <img src="https://img.shields.io/badge/Seed-Project Page-yellow"></a>
<a href="https://github.com/ByteDance-Seed/seed-oss"> <a href="https://github.com/ByteDance-Seed/seed-oss">
<img src="https://img.shields.io/badge/Seed-Tech Report Coming Soon-red"></a> <img src="https://img.shields.io/badge/Seed-Tech Report Coming Soon-red"></a>
<a href="https://huggingface.co/ByteDance-Seed"> <a href="https://huggingface.co/collections/ByteDance-Seed/seed-oss-68a609f4201e788db05b5dcd">
<img src="https://img.shields.io/badge/Seed-Hugging Face-orange"></a> <img src="https://img.shields.io/badge/Seed-Hugging Face-orange"></a>
<br> <br>
<a href="./LICENSE"> <a href="./LICENSE">
@@ -36,7 +36,7 @@ tags:
</p> </p>
> [!NOTE] > [!NOTE]
> This model card is dedicated to the `Seed-OSS-36B-Instruct` model. > This model card is dedicated to the `Seed-OSS-36B-Base-Instruct` model.
## News ## News
- [2025/08/20]🔥We release `Seed-OSS-36B-Base` (both with and without synthetic data versions) and `Seed-OSS-36B-Instruct`. - [2025/08/20]🔥We release `Seed-OSS-36B-Base` (both with and without synthetic data versions) and `Seed-OSS-36B-Instruct`.
@@ -312,12 +312,12 @@ Incorporating synthetic instruction data into pretraining leads to improved perf
</tr> </tr>
<tr> <tr>
<td align="center">ArcAGI V2</td> <td align="center">ArcAGI V2</td>
<td align="center">50.3</td> <td align="center">1.16</td>
<td align="center"><b>41.7</b></td> <td align="center"><b>1.74</b></td>
<td align="center">37.8</td> <td align="center">0.87</td>
<td align="center">14.4</td> <td align="center">0</td>
<td align="center">-</td> <td align="center">-</td>
<td align="center"><ins>40.6</ins></td> <td align="center"><ins>1.45</ins></td>
</tr> </tr>
<tr> <tr>
<td align="center">KORBench</td> <td align="center">KORBench</td>
@@ -463,6 +463,8 @@ Incorporating synthetic instruction data into pretraining leads to improved perf
</sup><br/><sup> </sup><br/><sup>
- The results of Gemma3-27B are sourced directly from its technical report. - The results of Gemma3-27B are sourced directly from its technical report.
</sup><br/><sup> </sup><br/><sup>
- The results of ArcAGI-V2 were measured on the official evaluation set, which was not involved in the training process.
</sup><br/><sup>
- Generation configs for Seed-OSS-36B-Instruct: temperature=1.1, top_p=0.95. Specifically, for Taubench, temperature=1, top_p=0.7. - Generation configs for Seed-OSS-36B-Instruct: temperature=1.1, top_p=0.95. Specifically, for Taubench, temperature=1, top_p=0.7.
</sup><br/><sup> </sup><br/><sup>
</sup> </sup>
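The sampling settings quoted in the notes above can be written out as `transformers` generation configs. This is a hedged sketch rather than anything from the repository; `do_sample=True` is an assumption, since the notes only state temperature and top_p.

```python
# Hedged sketch: the reported evaluation sampling settings as GenerationConfig
# objects. Only temperature/top_p come from the notes; do_sample=True is assumed.
from transformers import GenerationConfig

default_eval_cfg = GenerationConfig(do_sample=True, temperature=1.1, top_p=0.95)
taubench_eval_cfg = GenerationConfig(do_sample=True, temperature=1.0, top_p=0.7)
```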
@@ -474,7 +476,7 @@ Incorporating synthetic instruction data into pretraining leads to improved perf
Users can flexibly specify the model's thinking budget. The figure below shows the performance curves across different tasks as the thinking budget varies. For simpler tasks (such as IFEval), the model's chain of thought (CoT) is shorter, and the score exhibits fluctuations as the thinking budget increases. For more challenging tasks (such as AIME and LiveCodeBench), the model's CoT is longer, and the score improves with an increase in the thinking budget. Users can flexibly specify the model's thinking budget. The figure below shows the performance curves across different tasks as the thinking budget varies. For simpler tasks (such as IFEval), the model's chain of thought (CoT) is shorter, and the score exhibits fluctuations as the thinking budget increases. For more challenging tasks (such as AIME and LiveCodeBench), the model's CoT is longer, and the score improves with an increase in the thinking budget.
![thinking_budget](./thinking_budget.png) ![thinking_budget](./figures/thinking_budget.png)
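The budget itself is a template variable, so it can be passed straight through `tokenizer.apply_chat_template`, which forwards extra keyword arguments to the chat template (the `thinking_budget` variable appears in the chat_template diff below). The snippet is a hedged sketch: the prompt and budget value are illustrative only.

```python
# Hedged sketch: request a 512-token thinking budget via the chat template.
# apply_chat_template forwards extra kwargs (here thinking_budget) to the template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ByteDance-Seed/Seed-OSS-36B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    thinking_budget=512,  # made available to the Jinja template
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=False))
```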
Here is an example with a thinking budget set to 512: during the reasoning process, the model periodically triggers self-reflection to estimate the consumed and remaining budget, and delivers the final response once the budget is exhausted or the reasoning concludes. Here is an example with a thinking budget set to 512: during the reasoning process, the model periodically triggers self-reflection to estimate the consumed and remaining budget, and delivers the final response once the budget is exhausted or the reasoning concludes.
``` ```
@@ -495,8 +497,7 @@ If no thinking budget is set (default mode), Seed-OSS will initiate thinking wit
## Quick Start ## Quick Start
```shell ```shell
pip3 install -r requirements.txt pip install git+https://github.com/huggingface/transformers.git@56d68c6706ee052b445e1e476056ed92ac5eb383
pip install git+ssh://git@github.com/Fazziekey/transformers.git@seed-oss
``` ```
```python ```python
@@ -568,7 +569,7 @@ Use vllm >= 0.10.0 or higher for inference.
- First install vLLM with Seed-OSS support version: - First install vLLM with Seed-OSS support version:
```shell ```shell
VLLM_USE_PRECOMPILED=1 VLLM_TEST_USE_PRECOMPILED_NIGHTLY_WHEEL=1 pip install git+ssh://git@github.com/FoolPlayer/vllm.git@seed-oss VLLM_USE_PRECOMPILED=1 VLLM_TEST_USE_PRECOMPILED_NIGHTLY_WHEEL=1 pip install git+https://github.com/vllm-project/vllm.git
``` ```
- Start vLLM API server: - Start vLLM API server:
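Once the server is up, it exposes an OpenAI-compatible endpoint. The following is a hedged sketch of querying it; the host, port, and served model name are assumptions that depend on how the server was actually started.

```python
# Hedged sketch: query the vLLM OpenAI-compatible server started in the step above.
# Host, port, and served model name are assumptions; adjust to match the server command.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="ByteDance-Seed/Seed-OSS-36B-Instruct",
    messages=[{"role": "user", "content": "Give me a one-line summary of Seed-OSS."}],
)
print(response.choices[0].message.content)
```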

View File

@@ -21,25 +21,25 @@
8192: 1024, 8192: 1024,
16384: 1024 16384: 1024
} -%} } -%}
{# 找到 “大于等于 thinking_budget” 的第一个档位 #} {# Find the first gear that is greater than or equal to the thinking_budget. #}
{%- set ns = namespace(interval = None) -%} {%- set ns = namespace(interval = None) -%}
{%- for k, v in budget_reflections_v05 | dictsort -%} {%- for k, v in budget_reflections_v05 | dictsort -%}
{%- if ns.interval is none and thinking_budget <= k -%} {%- if ns.interval is none and thinking_budget <= k -%}
{%- set ns.interval = v -%} {%- set ns.interval = v -%}
{%- endif -%} {%- endif -%}
{%- endfor -%} {%- endfor -%}
{# 若超过最大档位,则用最后一个档位的值 #} {# If it exceeds the maximum gear, use the value of the last gear #}
{%- if ns.interval is none -%} {%- if ns.interval is none -%}
{%- set ns.interval = budget_reflections_v05[16384] -%} {%- set ns.interval = budget_reflections_v05[16384] -%}
{%- endif -%} {%- endif -%}
{# ---------- 预处理 system 消息 ---------- #} {# ---------- Preprocess the system message ---------- #}
{%- if messages[0]["role"] == "system" %} {%- if messages[0]["role"] == "system" %}
{%- set system_message = messages[0]["content"] %} {%- set system_message = messages[0]["content"] %}
{%- set loop_messages = messages[1:] %} {%- set loop_messages = messages[1:] %}
{%- else %} {%- else %}
{%- set loop_messages = messages %} {%- set loop_messages = messages %}
{%- endif %} {%- endif %}
{# ---------- 确保 tools 存在 ---------- #} {# ---------- Ensure tools exist ---------- #}
{%- if not tools is defined or tools is none %} {%- if not tools is defined or tools is none %}
{%- set tools = [] %} {%- set tools = [] %}
{%- endif %} {%- endif %}
@@ -51,7 +51,7 @@
{%- elif t == "array" -%}list {%- elif t == "array" -%}list
{%- else -%}Any{%- endif -%} {%- else -%}Any{%- endif -%}
{%- endmacro -%} {%- endmacro -%}
{# ---------- 输出 system 块 ---------- #} {# ---------- Output the system block ---------- #}
{%- if system_message is defined %} {%- if system_message is defined %}
{{ bos_token + "system\n" + system_message }} {{ bos_token + "system\n" + system_message }}
{%- else %} {%- else %}
@@ -105,7 +105,7 @@ def {{ item.function.name }}(
{{"工具调用请遵循如下格式:\n<seed:tool_call>\n<function=example_function_name>\n<parameter=example_parameter_1>value_1</parameter>\n<parameter=example_parameter_2>This is the value for the second parameter\nthat can span\nmultiple lines</parameter>\n</function>\n</seed:tool_call>\n"}} {{"工具调用请遵循如下格式:\n<seed:tool_call>\n<function=example_function_name>\n<parameter=example_parameter_1>value_1</parameter>\n<parameter=example_parameter_2>This is the value for the second parameter\nthat can span\nmultiple lines</parameter>\n</function>\n</seed:tool_call>\n"}}
{%- endif %} {%- endif %}
{# 结束 system 块行尾 #} {# End the system block line #}
{%- if system_message is defined or tools is iterable and tools | length > 0 %} {%- if system_message is defined or tools is iterable and tools | length > 0 %}
{{ eos_token }} {{ eos_token }}
{%- endif %} {%- endif %}
@@ -121,7 +121,7 @@ def {{ item.function.name }}(
{{ eos_token }} {{ eos_token }}
{%- endif %} {%- endif %}
{%- endif %} {%- endif %}
{# ---------- 逐条写出历史消息 ---------- #} {# ---------- List the historical messages one by one ---------- #}
{%- for message in loop_messages %} {%- for message in loop_messages %}
{%- if message.role == "assistant" {%- if message.role == "assistant"
and message.tool_calls is defined and message.tool_calls is defined
@@ -157,15 +157,15 @@ def {{ item.function.name }}(
{%- if message.content is defined and message.content is string and message.content | trim | length > 0 %} {%- if message.content is defined and message.content is string and message.content | trim | length > 0 %}
{{ "\n" + message.content | trim + eos_token }} {{ "\n" + message.content | trim + eos_token }}
{%- endif %} {%- endif %}
{# 包括 tool 角色,在这个逻辑 #} {# Include the tool role #}
{%- else %} {%- else %}
{{ bos_token + message.role + "\n" + message.content + eos_token }} {{ bos_token + message.role + "\n" + message.content + eos_token }}
{%- endif %} {%- endif %}
{%- endfor %} {%- endfor %}
{# ---------- 控制模型开始续写 ---------- #} {# ---------- Control the model to start continuation ---------- #}
{%- if add_generation_prompt %} {%- if add_generation_prompt %}
{{ bos_token+"assistant\n" }} {{ bos_token+"assistant\n" }}
{%- if thinking_budget == 0 %} {%- if thinking_budget == 0 %}
{{ think_begin_token+budget_begin_token }} {{ think_begin_token + "\n" + budget_begin_token + "The current thinking budget is 0, so I will directly start answering the question." + budget_end_token + "\n" + think_end_token }}
{%- endif %} {%- endif %}
{%- endif %} {%- endif %}
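For readability, the gear lookup at the top of this template behaves like the following Python sketch. It is illustrative only: the budget-table entries below 8192 sit outside this hunk, so they are represented here by a placeholder comment.

```python
# Hedged sketch of the chat template's reflection-interval ("gear") lookup.
# Only the 8192 and 16384 entries are visible in this diff; smaller gears exist
# in the full template but are not reproduced here.
BUDGET_REFLECTIONS = {
    # ... smaller budget gears elided ...
    8192: 1024,
    16384: 1024,
}

def reflection_interval(thinking_budget: int) -> int:
    """Pick the first gear >= the requested budget, else fall back to the largest."""
    for gear in sorted(BUDGET_REFLECTIONS):
        if thinking_budget <= gear:
            return BUDGET_REFLECTIONS[gear]
    return BUDGET_REFLECTIONS[16384]
```

Note also the behavior at the end of the template: with `thinking_budget == 0`, the generation prompt now opens and immediately closes a think block stating that the budget is 0, so the model proceeds directly to its answer instead of starting an open-ended thinking span.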