Update README.md

This commit is contained in:
parent ba1f828c09
commit 30b8421510

1 changed file: README.md (10 lines changed)
@@ -90,7 +90,7 @@ print("thinking content:", thinking_content)
 print("content:", content)
 ```
 
-For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` to create an OpenAI-compatible API endpoint:
+For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
 - SGLang:
 ```shell
 python -m sglang.launch_server --model-path Qwen/Qwen3-32B --reasoning-parser qwen3
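
Once either server is up, the endpoint can be exercised with any OpenAI-compatible client. A minimal sketch, assuming SGLang's default port of 30000 (vLLM defaults to 8000) and the `openai` Python package:

```python
# Minimal sketch: query the OpenAI-compatible endpoint launched above.
# Assumes SGLang's default port 30000; for vLLM, use port 8000 instead.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-32B",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
)
print(response.choices[0].message.content)
```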
@@ -100,7 +100,7 @@ For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.4` to create
 vllm serve Qwen/Qwen3-32B --enable-reasoning --reasoning-parser deepseek_r1
 ```
 
-For local use, applications such as llama.cpp, Ollama, LMStudio, and MLX-LM also support Qwen3.
+For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
 
 ## Switching Between Thinking and Non-Thinking Mode
 
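
With a reasoning parser enabled (`--reasoning-parser qwen3` for SGLang, `deepseek_r1` for vLLM, as in the commands above), the thinking trace comes back separately from the final answer. A sketch, assuming vLLM's `reasoning_content` field on the returned message:

```python
# Sketch: read the separated thinking trace from a server launched with a
# reasoning parser. The reasoning_content field is an assumption based on
# vLLM's reasoning-output convention; fall back to None if it is absent.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen/Qwen3-32B",
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
)
message = resp.choices[0].message
print("thinking content:", getattr(message, "reasoning_content", None))
print("content:", message.content)
```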
@@ -274,7 +274,7 @@ YaRN is currently supported by several inference frameworks, e.g., `transformers`
 {
     ...,
     "rope_scaling": {
-        "type": "yarn",
+        "rope_type": "yarn",
         "factor": 4.0,
         "original_max_position_embeddings": 32768
     }
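
The same key rename applies when overriding `rope_scaling` at load time rather than editing `config.json`: recent `transformers` releases read `rope_type`, keeping `type` only as a legacy alias. A sketch, assuming extra config kwargs are forwarded through `from_pretrained`:

```python
# Sketch: apply the YaRN override at load time instead of editing config.json.
# Assumes extra config kwargs are forwarded to the model config by
# from_pretrained, as in recent transformers releases.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-32B",
    rope_scaling={
        "rope_type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    },
)
```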
@@ -286,12 +286,12 @@ YaRN is currently supported by several inference frameworks, e.g., `transformers`
 
 For `vllm`, you can use
 ```shell
-vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
+vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
 ```
 
 For `sglang`, you can use
 ```shell
-python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
+python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
 ```
 
 For `llama-server` from `llama.cpp`, you can use
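
A side note on the two commands in this hunk: the `--max-model-len 131072` passed to `vllm` is simply the YaRN factor applied to the original context window, as the quick check below confirms.

```python
# Quick check: the YaRN factor scales the original context window,
# which is where vllm's --max-model-len value comes from.
factor = 4.0
original_max_position_embeddings = 32768
print(int(factor * original_max_position_embeddings))  # 131072
```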