Update README.md
This commit is contained in:
parent 7af7adc739
commit 9a475b07c3
README.md: 11 additions, 11 deletions
@@ -104,7 +104,7 @@ We have three models with 3, 7 and 72 billion parameters. This repo contains the
 ## Requirements
 
 The code of Qwen2.5-VL has been merged into the latest Hugging Face transformers, and we advise you to build from source with the following command:
 ```
-pip install git+https://github.com/huggingface/transformer accelerate
+pip install git+https://github.com/huggingface/transformers accelerate
 ```
 or you might encounter the following error:
 ```
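Not part of the commit, but a quick way to confirm the source build took effect before downloading any weights; a minimal sketch assuming only that the dev install succeeded — on a release that predates Qwen2.5-VL, the import below fails, which is the same root cause as the (truncated) error above:

```python
# Minimal sketch (not from this commit): check that the installed
# transformers build actually ships the Qwen2.5-VL classes.
import transformers

print(transformers.__version__)

# On a build that predates Qwen2.5-VL support this import raises
# ImportError, the same root cause as the error shown in the README.
from transformers import Qwen2_5_VLForConditionalGeneration
```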
@@ -118,7 +118,7 @@ Below, we provide simple examples to show how to use Qwen2.5-VL with 🤖 ModelScope
 
 The code of Qwen2.5-VL has been merged into the latest Hugging Face transformers, and we advise you to build from source with the following command:
 ```
-pip install git+https://github.com/huggingface/transformer accelerate
+pip install git+https://github.com/huggingface/transformers accelerate
 ```
 or you might encounter the following error:
 ```
@@ -142,10 +142,13 @@ Here we show a code snippet to show you how to use the chat model with `transformers`
 ```python
 from transformers import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
 from qwen_vl_utils import process_vision_info
+from modelscope import snapshot_download
+
+model_dir = snapshot_download("Qwen/Qwen2.5-VL-7B-Instruct")
 
 # default: Load the model on the available device(s)
 model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
-    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
+    model_dir, torch_dtype="auto", device_map="auto"
 )
 
 # We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
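The commented-out flash-attention variant is split across the hunk boundary here; as a sketch, assuming the flash-attn package and a CUDA device are available, enabling it with the `model_dir` from this commit would look like:

```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration

# Sketch (assumes flash-attn is installed and a CUDA GPU is present);
# model_dir comes from the snapshot_download call above.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_dir,
    torch_dtype=torch.bfloat16,               # flash attention requires fp16/bf16
    attn_implementation="flash_attention_2",
    device_map="auto",
)
```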
@@ -157,7 +160,7 @@ model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
 # )
 
 # default processor
-processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
+processor = AutoProcessor.from_pretrained(model_dir)
 
 # The default range for the number of visual tokens per image in the model is 4-16384.
 # You can set min_pixels and max_pixels according to your needs, such as a token range of 256-1280, to balance performance and cost.
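A sketch of the tuning those two comments describe, assuming each visual token corresponds to a 28×28-pixel patch (as in Qwen2-VL), so the 256-1280 token range maps directly to pixel bounds; `min_pixels` and `max_pixels` are the processor arguments named above:

```python
# Sketch: bound the visual-token budget per image via pixel limits.
# Assumes one visual token per 28x28-pixel patch, so 256-1280 tokens
# correspond to the pixel counts below.
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
    model_dir, min_pixels=min_pixels, max_pixels=max_pixels
)
```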
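The visible hunks stop before the snippet's inference section, so for completeness, a sketch of how the imported pieces usually fit together; the image path is a placeholder, and the flow follows the standard processor / `process_vision_info` / `generate` pattern:

```python
# Sketch (not from this commit): one single-image chat turn with the
# model and processor constructed above. The image path is a placeholder.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/your/image.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Render the chat template and pull out the vision inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generate, then strip the prompt tokens before decoding.
generated_ids = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```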