From 30393e325de485464bac4cd94945ac66820b0884 Mon Sep 17 00:00:00 2001
From: Cherrytest
Date: Wed, 18 Sep 2024 16:13:31 +0000
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 3252398..9c02f54 100644
--- a/README.md
+++ b/README.md
@@ -50,8 +50,8 @@ Also check out our [GPTQ documentation](https://qwen.readthedocs.io/en/latest/qu
 Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents.
 
 ```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-model_name = "Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4"
+from modelscope import AutoModelForCausalLM, AutoTokenizer
+model_name = "qwen/Qwen2.5-72B-Instruct-GPTQ-Int4"
 model = AutoModelForCausalLM.from_pretrained(
     model_name,
     torch_dtype="auto",