eval-qwen2-5-vl-7b-instruct.../logs/infer/public/qwen2-5-vl-7b-instruct-awq@main/lambada_0.out
2025-07-25 10:26:52 +00:00

[4pdvGPU Msg(368:139999047420928:libvgpu.c:873)]: Initializing.....
[4pdvGPU Msg(368:139999047420928:multiprocess_memory_limit.c:144)]: uuid GPU-9a16bbfd-e4c2-d946-8cf6-81879301e66c validated
[4pdvGPU Msg(368:139999047420928:multiprocess_memory_limit.c:144)]: uuid GPU-845ac5d5-6827-1c5e-8cd0-275d4ba08b97 validated
[4pdvGPU ERROR (pid:368 thread=139999047420928 libvgpu.c:924)]: cuInit failed:100
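
The cuInit failure above comes from the 4pdvGPU shim: driver error code 100 is CUDA_ERROR_NO_DEVICE, i.e. no usable GPU was visible to this process. The task itself fails further down for an unrelated reason (a malformed endpoint URL), but GPU visibility can be checked in isolation. A minimal sketch, assuming Python with libcuda.so.1 on the loader path; this check is illustrative and not part of OpenCompass:

    import ctypes

    # Talk to the CUDA driver directly, bypassing any framework or shim config.
    libcuda = ctypes.CDLL("libcuda.so.1")

    # CUresult cuInit(unsigned int Flags); a non-zero return is a driver error code.
    rc = libcuda.cuInit(0)
    if rc != 0:
        # 100 == CUDA_ERROR_NO_DEVICE: the driver sees no usable GPU in this container.
        print(f"cuInit failed with code {rc}")
    else:
        count = ctypes.c_int(0)
        libcuda.cuDeviceGetCount(ctypes.byref(count))
        print(f"cuInit OK, {count.value} device(s) visible")
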
07/25 18:26:33 - OpenCompass - INFO - Task [public/qwen2-5-vl-7b-instruct-awq@main/lambada_0]
07/25 18:26:35 - OpenCompass - INFO - Start inferencing [public/qwen2-5-vl-7b-instruct-awq@main/lambada_0]
/opt/conda/lib/python3.8/site-packages/fuzzywuzzy/fuzz.py:11: UserWarning: Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning
  warnings.warn('Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning')
[2025-07-25 18:26:35,690] [opencompass.openicl.icl_inferencer.icl_gen_inferencer] [INFO] Starting inference process...
0%| | 0/1718 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/models/opencompass/opencompass/tasks/openicl_infer.py", line 147, in <module>
    inferencer.run()
  File "/models/opencompass/opencompass/tasks/openicl_infer.py", line 76, in run
    self._inference()
  File "/models/opencompass/opencompass/tasks/openicl_infer.py", line 119, in _inference
    inferencer.inference(retriever,
  File "/models/opencompass/opencompass/openicl/icl_inferencer/icl_gen_inferencer.py", line 122, in inference
    results = self.model.generate_from_template(
  File "/models/opencompass/opencompass/models/base.py", line 117, in generate_from_template
    return self.generate(inputs, max_out_len=max_out_len, **kwargs)
  File "/models/opencompass/opencompass/models/openai_api.py", line 123, in generate
    results = list(
  File "/opt/conda/lib/python3.8/concurrent/futures/_base.py", line 619, in result_iterator
    yield fs.pop().result()
  File "/opt/conda/lib/python3.8/concurrent/futures/_base.py", line 444, in result
    return self.__get_result()
  File "/opt/conda/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/opt/conda/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/models/opencompass/opencompass/models/openai_api.py", line 222, in _generate
    raw_response = requests.post(self.url,
  File "/opt/conda/lib/python3.8/site-packages/requests/api.py", line 115, in post
    return request("post", url, data=data, json=json, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/requests/sessions.py", line 575, in request
    prep = self.prepare_request(req)
  File "/opt/conda/lib/python3.8/site-packages/requests/sessions.py", line 486, in prepare_request
    p.prepare(
  File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 368, in prepare
    self.prepare_url(url, params)
  File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 445, in prepare_url
    raise InvalidURL(f"Invalid URL {url!r}: No host supplied")
requests.exceptions.InvalidURL: Invalid URL 'http:///learnware/models/openai/4pd/api/v1/chat/completions': No host supplied
[4pdvGPU Msg(368:139999047420928:multiprocess_memory_limit.c:543)]: Calling exit handler 368
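
The root cause is the final exception: the request URL 'http:///learnware/models/openai/4pd/api/v1/chat/completions' has an empty host component, so requests rejects it in prepare_url before any HTTP traffic is sent. That usually means the host portion of the endpoint configured for the OpenCompass openai_api model resolved to an empty string when self.url was assembled. A minimal reproduction sketch in Python; the empty host value below stands in for whatever setting this deployment failed to supply:

    import requests

    # With an empty host the URL collapses to 'http:///...', which requests'
    # prepare_url rejects before any network call, exactly as in the traceback.
    host = ""  # stand-in for the unset endpoint host in this run
    url = f"http://{host}/learnware/models/openai/4pd/api/v1/chat/completions"

    try:
        requests.post(url, json={"messages": []}, timeout=5)
    except requests.exceptions.InvalidURL as exc:
        print(exc)  # Invalid URL 'http:///...': No host supplied

A practical guard is to check that the host is non-empty (and that the full URL parses with both scheme and netloc, e.g. via urllib.parse.urlsplit) before launching the task, so the misconfiguration surfaces before the benchmark is scheduled rather than inside the inference worker threads.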