From 4cd7de69f88735b91cb0a53435d00f4a6cfc2188 Mon Sep 17 00:00:00 2001
From: ai-modelscope
Date: Wed, 26 Feb 2025 23:23:35 +0800
Subject: [PATCH] Update metadata with huggingface_hub (#1)

- Update metadata with huggingface_hub (e0fa19a8b51ebda760b65af526d983085852b52d)

Co-authored-by: Vaibhav Srivastav
---
 README.md | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index 9c02f54..54eb37a 100644
--- a/README.md
+++ b/README.md
@@ -1,11 +1,12 @@
 ---
+base_model: Qwen/Qwen2.5-72B-Instruct
+language:
+- en
+library_name: transformers
 license: other
 license_name: qwen
 license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4/blob/main/LICENSE
-language:
-- en
 pipeline_tag: text-generation
-base_model: Qwen/Qwen2.5-72B-Instruct
 tags:
 - chat
 ---
@@ -50,8 +51,8 @@ Also check out our [GPTQ documentation](https://qwen.readthedocs.io/en/latest/qu
 Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents.
 
 ```python
-from modelscope import AutoModelForCausalLM, AutoTokenizer
-model_name = "qwen/Qwen2.5-72B-Instruct-GPTQ-Int4"
+from transformers import AutoModelForCausalLM, AutoTokenizer
+model_name = "Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4"
 model = AutoModelForCausalLM.from_pretrained(
     model_name,
     torch_dtype="auto",
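The snippet the second hunk edits is cut off above. For context, a minimal sketch of the full `apply_chat_template` flow after this patch, assuming the standard `transformers` generation API, might look like the following; the system prompt, user prompt, and `max_new_tokens=512` are illustrative assumptions, not part of the patch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4"

# Load the GPTQ-quantized checkpoint; device_map="auto" places it on available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a chat-formatted prompt (the message contents here are placeholders).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens before decoding the reply.
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

Switching the import from `modelscope` to `transformers` also keeps the snippet consistent with the `library_name: transformers` field the same patch adds to the YAML front matter.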