Update metadata with huggingface_hub (#1)
- Update metadata with huggingface_hub (e0fa19a8b51ebda760b65af526d983085852b52d)

Co-authored-by: Vaibhav Srivastav <reach-vb@users.noreply.huggingface.co>
parent c69030206b
commit 4cd7de69f8
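The commit title says the metadata was updated with huggingface_hub. As a minimal sketch of one way such a front-matter update can be pushed with the library's `metadata_update` helper — the repo id, field, and token handling here are illustrative, not recovered from this commit:

```python
# Sketch: push a metadata-only commit to a model repo's README front matter.
# Assumes huggingface_hub is installed and a token with write access is configured.
from huggingface_hub import metadata_update

metadata_update(
    repo_id="Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4",   # illustrative target repo
    metadata={"library_name": "transformers"},        # the field this diff adds
    overwrite=False,
    commit_message="Update metadata with huggingface_hub",
)
```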
README.md
@@ -1,11 +1,12 @@
 ---
+base_model: Qwen/Qwen2.5-72B-Instruct
+language:
+- en
+library_name: transformers
 license: other
 license_name: qwen
 license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4/blob/main/LICENSE
-language:
-- en
 pipeline_tag: text-generation
-base_model: Qwen/Qwen2.5-72B-Instruct
 tags:
 - chat
 ---
@@ -50,8 +51,8 @@ Also check out our [GPTQ documentation](https://qwen.readthedocs.io/en/latest/qu
 Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents.
 
 ```python
-from modelscope import AutoModelForCausalLM, AutoTokenizer
-model_name = "qwen/Qwen2.5-72B-Instruct-GPTQ-Int4"
+from transformers import AutoModelForCausalLM, AutoTokenizer
+model_name = "Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4"
 model = AutoModelForCausalLM.from_pretrained(
     model_name,
     torch_dtype="auto",
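The hunk cuts off inside the `from_pretrained` call. For reference, a self-contained sketch of how the snippet plausibly continues, following the load-and-generate pattern common to Qwen2.5 README snippets; the `device_map` argument, prompt text, and `max_new_tokens` value are assumptions, not part of the diff above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4"

# Load the GPTQ-quantized model and its tokenizer.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",  # assumed; the diff is truncated before this argument
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a chat prompt with the model's chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens before decoding the reply.
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = [
    output[len(prompt):]
    for prompt, output in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```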