
How can I use the GPU? #36

Open
xuechaofei opened this issue Aug 23, 2023 · 6 comments

Comments

@xuechaofei

api_test.py runs, but it uses the CPU. How do I change it to run on the GPU?

@xuechaofei
Author

When I execute run_extractive_unified_ie.sh, the GPU is used, but HugIE/api_test.py always runs in CPU mode and I cannot find where to enable GPU mode. I also ran export CUDA_VISIBLE_DEVICES=0 before running it. What could be the reason?

@wjn1996
Contributor

wjn1996 commented Mar 12, 2024

Hello, api_test.sh is only used to check whether a submitted program has bugs; it is run automatically by ci/circleci after a PR is submitted, which is why it only uses the CPU. In actual use you can ignore api_test.sh.

@xuechaofei
Author

If I want to run the model on a GPU, where in the code should I make changes? Nowhere in the code does there seem to be a call that passes a GPU device argument.

@xuechaofei
Author

We trained some models on the HugIE task, but when using them we cannot find where in the code to pass a GPU device argument, so inference always runs on the CPU and performance is very poor. Could you tell us how to pass the GPU parameter?

@wjn1996
Contributor

wjn1996 commented Mar 12, 2024

Using the GPU is simple: in the code, just append `.cuda()` (or equivalently `.to("cuda:0")`) to the model and to the input tensors.

self.model = SPAN_EXTRACTION_MODEL_CLASSES["global_pointer"][
            self.model_type].from_pretrained(hugie_model_name_or_path)
self.model = self.model.cuda()  # move model parameters to the GPU
batch_input = {
    "input_ids": inputs["input_ids"].cuda(),
    "token_type_ids": inputs["token_type_ids"].cuda(),
    "attention_mask": inputs["attention_mask"].cuda(),
}

outputs = self.model(**batch_input)
probs, indices = outputs["topk_probs"], outputs["topk_indices"]
# move results back to the CPU before post-processing
predictions, topk_predictions = self.get_predict_result(
    probs.detach().cpu(), indices.detach().cpu(), examples=examples)

Could you try something like this?
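A device-agnostic variant of the pattern above may be more convenient, since it falls back to the CPU when no GPU is visible. This is a minimal sketch, not the repository's actual code: the `torch.nn.Linear` model and random inputs are stand-ins for the HugIE model and its tokenized batch.

```python
import torch

# Pick the GPU when one is visible, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(8, 2)   # stand-in for the HugIE model
model = model.to(device)        # move the model's parameters to the chosen device

inputs = {
    "features": torch.randn(4, 8),  # stand-in for a tokenized input batch
}
# Move every input tensor to the same device as the model.
batch_input = {k: v.to(device) for k, v in inputs.items()}

outputs = model(batch_input["features"])
# Bring results back to the CPU before NumPy conversion or post-processing.
result = outputs.detach().cpu()
```

The `{k: v.to(device) for k, v in inputs.items()}` dictionary comprehension avoids listing each tensor key by hand, which also covers batches whose keys vary.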

@wjn1996
Contributor

wjn1996 commented Mar 12, 2024

Thank you very much for the recognition and support. If the chance arises, we can keep in touch and learn from each other 🤝
