
The instructions are really just too… #11

Open
wycstc353 opened this issue Oct 6, 2024 · 1 comment

Comments

@wycstc353

D:\asdasd\AI\GPT-SoVITS-Server-main\GPT-SoVITS-Server-main>python server.py
DirectML is available; DirectML will be used to accelerate inference.
Device name: NVIDIA GeForce GTX 1650
Traceback (most recent call last):
File "D:\asdasd\AI\GPT-SoVITS-Server-main\GPT-SoVITS-Server-main\server.py", line 71, in
tokenizer = AutoTokenizer.from_pretrained(bert_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\wyc\AppData\Local\Programs\Python\Python312\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 926, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\wyc\AppData\Local\Programs\Python\Python312\Lib\site-packages\transformers\tokenization_utils_base.py", line 2200, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for './pretrained/chinese-roberta-wwm-ext-large'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure './pretrained/chinese-roberta-wwm-ext-large' is the correct path to a directory containing all relevant files for a RobertaTokenizerFast tokenizer.
I finally got all the dependencies installed, and it still throws an error. This is how my files are laid out right now:
(screenshots: directory layout of the project and the pretrained model folder)

@ben0oil1
Owner

ben0oil1 commented Oct 6, 2024

Open the script and hard-code the path where the pretrained models are stored, and give that a try.
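For example, a minimal sketch of that change in server.py (the variable name bert_path and the from_pretrained call come from your traceback; the absolute path below is only an illustration based on the directory shown there, so adjust it to wherever the model folder actually is):

```python
# server.py -- replace the relative path with an absolute one
# NOTE: example path only; point it at your actual chinese-roberta-wwm-ext-large folder
bert_path = r"D:\asdasd\AI\GPT-SoVITS-Server-main\GPT-SoVITS-Server-main\pretrained\chinese-roberta-wwm-ext-large"

tokenizer = AutoTokenizer.from_pretrained(bert_path)
```

Also make sure that folder actually contains the tokenizer files downloaded for chinese-roberta-wwm-ext-large; if the directory is empty or only partially downloaded, you will get the same OSError.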
