update guides

kelseye 2025-10-29 16:17:29 +00:00
parent 246d867506
commit 11b493caf9
6 changed files with 95 additions and 2 deletions


@@ -175,6 +175,11 @@ We recommend using [SGLang](https://docs.sglang.ai/) to serve MiniMax-M2. SGLang
We recommend using [vLLM](https://docs.vllm.ai/en/stable/) to serve MiniMax-M2. vLLM provides efficient day-0 support for the MiniMax-M2 model; see https://docs.vllm.ai/projects/recipes/en/latest/MiniMax/MiniMax-M2.html for the latest deployment guide. We also provide our [vLLM Deployment Guide](https://huggingface.co/MiniMaxAI/MiniMax-M2/blob/main/docs/vllm_deploy_guide.md).
### MLX
We recommend using [MLX-LM](https://github.com/ml-explore/mlx-lm) to serve MiniMax-M2. Please refer to our [MLX Deployment Guide](https://huggingface.co/MiniMaxAI/MiniMax-M2/blob/main/docs/mlx_deploy_guide.md) for more details.
### Inference Parameters
We recommend the following parameters for best performance: `temperature=1.0`, `top_p=0.95`, `top_k=40`.
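If you serve the model behind an OpenAI-compatible endpoint (e.g. via vLLM or SGLang), these defaults can be set directly in the request body. A minimal sketch; the endpoint URL and model name are illustrative:

```python
import json

# Recommended sampling defaults for MiniMax-M2
sampling = {"temperature": 1.0, "top_p": 0.95, "top_k": 40}

payload = {
    "model": "MiniMaxAI/MiniMax-M2",
    "messages": [{"role": "user", "content": "How tall is Mount Everest?"}],
    **sampling,
}

# POST this to an OpenAI-compatible endpoint, e.g.:
#   requests.post("http://localhost:8000/v1/chat/completions", json=payload)
print(json.dumps(payload, indent=2))
```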
@@ -196,4 +201,4 @@ Please refer to our [Tool Calling Guide](https://huggingface.co/MiniMaxAI/MiniMa
# Contact Us
Contact us at [model@minimax.io](mailto:model@minimax.io) | [WeChat](https://github.com/MiniMax-AI/MiniMax-AI.github.io/blob/main/images/wechat-qrcode.jpeg).

docs/mlx_deploy_guide.md (new file, 70 lines)

@@ -0,0 +1,70 @@
## MLX Deployment Guide
Run, serve, and fine-tune [**MiniMax-M2**](https://huggingface.co/MiniMaxAI/MiniMax-M2) locally on your Mac using the **MLX** framework. This guide gets you up and running quickly.
> **Requirements**
> - Apple Silicon Mac (M3 Ultra or later)
> - **At least 256GB of unified memory (RAM)**
**Installation**
Install the `mlx-lm` package via pip:
```bash
pip install -U mlx-lm
```
**CLI**
Generate text directly from the terminal:
```bash
mlx_lm.generate \
--model mlx-community/MiniMax-M2-4bit \
--prompt "How tall is Mount Everest?"
```
> Add `--max-tokens 256` to control response length, or `--temp 0.7` for creativity.
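To serve the model over an OpenAI-compatible HTTP API instead of one-off generation, a minimal sketch; the port and request body are illustrative, and flags may vary across `mlx-lm` versions:

```bash
# Start an OpenAI-compatible server (chat endpoint: /v1/chat/completions)
mlx_lm.server --model mlx-community/MiniMax-M2-4bit --port 8080

# Query it from another terminal:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}], "max_tokens": 64}'
```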
**Python Script Example**
Use `mlx-lm` in your own Python scripts:
```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

# Load the quantized model
model, tokenizer = load("mlx-community/MiniMax-M2-4bit")

prompt = "Hello, how are you?"

# Apply chat template if available (recommended for chat models)
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
    )

# Generate a response; recent mlx-lm releases take sampling
# settings through a sampler rather than a `temp` argument
response = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=256,
    sampler=make_sampler(temp=0.7),
    verbose=True,
)
print(response)
```
**Tips**
- **Model variants**: Check this [MLX community collection on Hugging Face](https://huggingface.co/collections/mlx-community/minimax-m2) for `MiniMax-M2-4bit`, `6bit`, `8bit`, or `bfloat16` versions.
- **Fine-tuning**: Use `mlx_lm.lora` for parameter-efficient fine-tuning (PEFT) with LoRA.
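The fine-tuning tip above can be sketched as a command line. A hypothetical invocation: the `--data` path is a placeholder for a directory containing `train.jsonl`/`valid.jsonl`, and flags may vary across `mlx-lm` versions:

```bash
# LoRA fine-tuning on a local dataset (placeholder path)
mlx_lm.lora \
  --model mlx-community/MiniMax-M2-4bit \
  --train \
  --data ./my_dataset \
  --batch-size 1 \
  --iters 600
```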
**Resources**
- GitHub: [https://github.com/ml-explore/mlx-lm](https://github.com/ml-explore/mlx-lm)
- Models: [https://huggingface.co/mlx-community](https://huggingface.co/mlx-community)


@@ -112,4 +112,7 @@ export HF_ENDPOINT=https://hf-mirror.com
- Contact our technical support team through official channels such as email at [model@minimax.io](mailto:model@minimax.io)
- Submit an issue in our [GitHub](https://github.com/MiniMax-AI) repository
- Send feedback via our [official WeChat Work group](https://github.com/MiniMax-AI/MiniMax-AI.github.io/blob/main/images/wechat-qrcode.jpeg)

We will keep improving the model deployment experience and welcome your feedback!


@@ -483,3 +483,15 @@ def execute_function_call(function_name: str, arguments: dict):
- [vLLM project homepage](https://github.com/vllm-project/vllm)
- [SGLang project homepage](https://github.com/sgl-project/sglang)
- [OpenAI Python SDK](https://github.com/openai/openai-python)

## Getting Support

If you run into any issues:

- Contact our technical support team through official channels such as email at [model@minimax.io](mailto:model@minimax.io)
- Submit an issue in our repository
- Send feedback via our [official WeChat Work group](https://github.com/MiniMax-AI/MiniMax-AI.github.io/blob/main/images/wechat-qrcode.jpeg)

We will keep improving the model usage experience and welcome your feedback!


@@ -110,4 +110,7 @@ SAFETENSORS_FAST_GPU=1 vllm serve \
- Contact our technical support team through official channels such as email at [model@minimax.io](mailto:model@minimax.io)
- Submit an issue in our [GitHub](https://github.com/MiniMax-AI) repository
- Send feedback via our [official WeChat Work group](https://github.com/MiniMax-AI/MiniMax-AI.github.io/blob/main/images/wechat-qrcode.jpeg)

We will keep improving the model deployment experience and welcome your feedback!

figures/wechat.jpeg (new binary file, 74 KiB)