diff --git a/README.md b/README.md
index 9289a7a..2998b6c 100644
--- a/README.md
+++ b/README.md
@@ -175,6 +175,11 @@ We recommend using [SGLang](https://docs.sglang.ai/) to serve MiniMax-M2. SGLang
 
 We recommend using [vLLM](https://docs.vllm.ai/en/stable/) to serve MiniMax-M2. vLLM provides efficient day-0 support of MiniMax-M2 model, check https://docs.vllm.ai/projects/recipes/en/latest/MiniMax/MiniMax-M2.html for latest deployment guide. We also provide our [vLLM Deployment Guide](https://huggingface.co/MiniMaxAI/MiniMax-M2/blob/main/docs/vllm_deploy_guide.md).
 
+### MLX
+
+We recommend using [MLX-LM](https://github.com/ml-explore/mlx-lm) to serve MiniMax-M2. Please refer to our [MLX Deployment Guide](https://huggingface.co/MiniMaxAI/MiniMax-M2/blob/main/docs/mlx_deploy_guide.md) for more details.
+
+
 ### Inference Parameters
 
 We recommend using the following parameters for best performance: `temperature=1.0`, `top_p = 0.95`, `top_k = 40`.
@@ -196,4 +201,4 @@ Please refer to our [Tool Calling Guide](https://huggingface.co/MiniMaxAI/MiniMa
 
 # Contact Us
 
-Contact us at [model@minimax.io](mailto:model@minimax.io).
\ No newline at end of file
+Contact us at [model@minimax.io](mailto:model@minimax.io) | [WeChat](https://github.com/MiniMax-AI/MiniMax-AI.github.io/blob/main/images/wechat-qrcode.jpeg).
\ No newline at end of file
diff --git a/docs/mlx_deploy_guide.md b/docs/mlx_deploy_guide.md
new file mode 100644
index 0000000..aba573d
--- /dev/null
+++ b/docs/mlx_deploy_guide.md
@@ -0,0 +1,70 @@
+## MLX deployment guide
+
+Run, serve, and fine-tune [**MiniMax-M2**](https://huggingface.co/MiniMaxAI/MiniMax-M2) locally on your Mac using the **MLX** framework. This guide gets you up and running quickly.
+
+> **Requirements**
+> - Apple Silicon Mac (M3 Ultra or later)
+> - **At least 256GB of unified memory (RAM)**
+
+
+**Installation**
+
+Install the `mlx-lm` package via pip:
+
+```bash
+pip install -U mlx-lm
+```
+
+**CLI**
+
+Generate text directly from the terminal:
+
+```bash
+mlx_lm.generate \
+  --model mlx-community/MiniMax-M2-4bit \
+  --prompt "How tall is Mount Everest?"
+```
+
+> Add `--max-tokens 256` to control response length, or `--temp 0.7` for creativity.
+
+**Python Script Example**
+
+Use `mlx-lm` in your own Python scripts:
+
+```python
+from mlx_lm import load, generate
+from mlx_lm.sample_utils import make_sampler
+
+# Load the quantized model
+model, tokenizer = load("mlx-community/MiniMax-M2-4bit")
+
+prompt = "Hello, how are you?"
+
+# Apply chat template if available (recommended for chat models)
+if tokenizer.chat_template is not None:
+    messages = [{"role": "user", "content": prompt}]
+    prompt = tokenizer.apply_chat_template(
+        messages,
+        tokenize=False,
+        add_generation_prompt=True
+    )
+
+# Generate response (sampling options such as temperature are passed via a sampler)
+response = generate(
+    model,
+    tokenizer,
+    prompt=prompt,
+    max_tokens=256,
+    sampler=make_sampler(temp=0.7),
+    verbose=True
+)
+
+print(response)
+```
+
+**Tips**
+- **Model variants**: Check this [MLX community collection on Hugging Face](https://huggingface.co/collections/mlx-community/minimax-m2) for `MiniMax-M2-4bit`, `6bit`, `8bit`, or `bfloat16` versions.
+- **Fine-tuning**: Use `mlx_lm.lora` for parameter-efficient fine-tuning (PEFT).
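To decide which of the quantized variants above fits in unified memory, a useful back-of-the-envelope rule is that the weights alone occupy roughly `parameters × bits ÷ 8` bytes, with extra headroom needed for the KV cache and activations. A minimal sketch of that arithmetic; the ~230B total parameter count used below is an assumption for illustration, not taken from this guide:

```python
def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the model weights alone, in GB (1 GB = 1e9 bytes)."""
    return params_billion * bits_per_weight / 8

# Assumed total parameter count for a MiniMax-M2-class MoE model.
PARAMS_B = 230

for bits in (4, 6, 8, 16):
    print(f"{bits:>2}-bit weights: ~{weight_memory_gb(PARAMS_B, bits):.1f} GB")
```

Under this assumption, the 4-bit variant (~115 GB) fits comfortably within 256 GB of unified memory, 8-bit (~230 GB) is tight, and `bfloat16` (~460 GB) would not fit.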
+
+**Resources**
+- GitHub: [https://github.com/ml-explore/mlx-lm](https://github.com/ml-explore/mlx-lm)
+- Models: [https://huggingface.co/mlx-community](https://huggingface.co/mlx-community)
\ No newline at end of file
diff --git a/docs/sglang_deploy_guide_cn.md b/docs/sglang_deploy_guide_cn.md
index 013d9b1..2208ceb 100644
--- a/docs/sglang_deploy_guide_cn.md
+++ b/docs/sglang_deploy_guide_cn.md
@@ -112,4 +112,7 @@ export HF_ENDPOINT=https://hf-mirror.com
 - Contact our technical support team through official channels such as email ([model@minimax.io](mailto:model@minimax.io))
 - Open an issue in our [GitHub](https://github.com/MiniMax-AI) repository
+
+- Send feedback through our [official WeCom group](https://github.com/MiniMax-AI/MiniMax-AI.github.io/blob/main/images/wechat-qrcode.jpeg)
+
 
 We will keep improving the model deployment experience. Feedback is welcome!
diff --git a/docs/tool_calling_guide_cn.md b/docs/tool_calling_guide_cn.md
index c70660f..90f2bb2 100644
--- a/docs/tool_calling_guide_cn.md
+++ b/docs/tool_calling_guide_cn.md
@@ -482,4 +482,16 @@ def execute_function_call(function_name: str, arguments: dict):
 - [MiniMax-M2 model repository](https://github.com/MiniMax-AI/MiniMax-M2)
 - [vLLM project homepage](https://github.com/vllm-project/vllm)
 - [SGLang project homepage](https://github.com/sgl-project/sglang)
-- [OpenAI Python SDK](https://github.com/openai/openai-python)
\ No newline at end of file
+- [OpenAI Python SDK](https://github.com/openai/openai-python)
+
+## Getting Support
+
+If you run into any issues:
+
+- Contact our technical support team through official channels such as email ([model@minimax.io](mailto:model@minimax.io))
+
+- Open an issue in our repository
+
+- Send feedback through our [official WeCom group](https://github.com/MiniMax-AI/MiniMax-AI.github.io/blob/main/images/wechat-qrcode.jpeg)
+
+We will keep improving the model usage experience. Feedback is welcome!
\ No newline at end of file
diff --git a/docs/vllm_deploy_guide_cn.md b/docs/vllm_deploy_guide_cn.md
index db76ed7..b757874 100644
--- a/docs/vllm_deploy_guide_cn.md
+++ b/docs/vllm_deploy_guide_cn.md
@@ -110,4 +110,7 @@ SAFETENSORS_FAST_GPU=1 vllm serve \
 - Contact our technical support team through official channels such as email ([model@minimax.io](mailto:model@minimax.io))
 - Open an issue in our [GitHub](https://github.com/MiniMax-AI) repository
+
+- Send feedback through our [official WeCom group](https://github.com/MiniMax-AI/MiniMax-AI.github.io/blob/main/images/wechat-qrcode.jpeg)
+
 
 We will keep improving the model deployment experience. Feedback is welcome!
diff --git a/figures/wechat.jpeg b/figures/wechat.jpeg
new file mode 100644
index 0000000..a30a649
Binary files /dev/null and b/figures/wechat.jpeg differ
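Beyond one-off generation, `mlx-lm` also ships `mlx_lm.server`, which exposes an OpenAI-compatible HTTP API. A minimal sketch of building a chat-completions request for it; the host, port, and a server started with `mlx_lm.server --model mlx-community/MiniMax-M2-4bit --port 8080` are assumptions, and nothing is actually sent here:

```python
import json

def build_chat_request(model: str, prompt: str, max_tokens: int = 256,
                       temperature: float = 0.7) -> tuple[str, bytes]:
    """Build an OpenAI-style chat-completions request for a local mlx_lm.server."""
    url = "http://localhost:8080/v1/chat/completions"  # assumed host/port
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }).encode()
    return url, body

url, body = build_chat_request("mlx-community/MiniMax-M2-4bit",
                               "How tall is Mount Everest?")
print(url)
```

The resulting body can be POSTed with `curl` or any HTTP client, or you can point the OpenAI Python SDK's `base_url` at the local server instead of constructing requests by hand.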