diff --git a/README.md b/README.md
index 3252398..9c02f54 100644
--- a/README.md
+++ b/README.md
@@ -50,8 +50,8 @@ Also check out our [GPTQ documentation](https://qwen.readthedocs.io/en/latest/qu
 Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents.
 
 ```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-model_name = "Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4"
+from modelscope import AutoModelForCausalLM, AutoTokenizer
+model_name = "qwen/Qwen2.5-72B-Instruct-GPTQ-Int4"
 model = AutoModelForCausalLM.from_pretrained(
     model_name,
     torch_dtype="auto",