eval-qwen2-5-vl-7b-instruct.../logs/infer/public/qwen2-5-vl-7b-instruct-awq@main/lambada_2.out
2025-07-25 09:25:48 +00:00

[4pdvGPU Msg(372:140311500876800:libvgpu.c:873)]: Initializing.....
[4pdvGPU Msg(372:140311500876800:multiprocess_memory_limit.c:144)]: uuid GPU-9a16bbfd-e4c2-d946-8cf6-81879301e66c validated
[4pdvGPU Msg(372:140311500876800:multiprocess_memory_limit.c:144)]: uuid GPU-845ac5d5-6827-1c5e-8cd0-275d4ba08b97 validated
[4pdvGPU ERROR (pid:372 thread=140311500876800 libvgpu.c:924)]: cuInit failed:100
07/25 17:25:33 - OpenCompass - INFO - Task [public/qwen2-5-vl-7b-instruct-awq@main/lambada_2]
07/25 17:25:35 - OpenCompass - INFO - Start inferencing [public/qwen2-5-vl-7b-instruct-awq@main/lambada_2]
/opt/conda/lib/python3.8/site-packages/fuzzywuzzy/fuzz.py:11: UserWarning: Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning
  warnings.warn('Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning')
[2025-07-25 17:25:35,506] [opencompass.openicl.icl_inferencer.icl_gen_inferencer] [INFO] Starting inference process...
0%| | 0/1717 [00:00<?, ?it/s]
0%| | 0/1717 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "/models/opencompass/opencompass/tasks/openicl_infer.py", line 147, in <module>
    inferencer.run()
  File "/models/opencompass/opencompass/tasks/openicl_infer.py", line 76, in run
    self._inference()
  File "/models/opencompass/opencompass/tasks/openicl_infer.py", line 119, in _inference
    inferencer.inference(retriever,
  File "/models/opencompass/opencompass/openicl/icl_inferencer/icl_gen_inferencer.py", line 122, in inference
    results = self.model.generate_from_template(
  File "/models/opencompass/opencompass/models/base.py", line 117, in generate_from_template
    return self.generate(inputs, max_out_len=max_out_len, **kwargs)
  File "/models/opencompass/opencompass/models/openai_api.py", line 123, in generate
    results = list(
  File "/opt/conda/lib/python3.8/concurrent/futures/_base.py", line 619, in result_iterator
    yield fs.pop().result()
  File "/opt/conda/lib/python3.8/concurrent/futures/_base.py", line 444, in result
    return self.__get_result()
  File "/opt/conda/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/opt/conda/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/models/opencompass/opencompass/models/openai_api.py", line 250, in _generate
    raise RuntimeError('Calling OpenAI failed after retrying for '
RuntimeError: Calling OpenAI failed after retrying for 2 times. Check the logs for details.
[4pdvGPU Msg(372:140311500876800:multiprocess_memory_limit.c:543)]: Calling exit handler 372