Fix words #11448

Merged · 1 commit · Jan 2, 2024

2 changes: 1 addition & 1 deletion deploy/slim/prune/sensitivity_anal.py
@@ -127,7 +127,7 @@ def eval_fn():
 run_sensitive_analysis=True:
     Automatically compute the sensitivities of convolutions in a model.
     The sensitivity of a convolution is the losses of accuracy on test dataset in
-    differenct pruned ratios. The sensitivities can be used to get a group of best
+    different pruned ratios. The sensitivities can be used to get a group of best
     ratios with some condition.

 run_sensitive_analysis=False:
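
The docstring patched above describes PaddleSlim-style sensitivity analysis: prune each convolution at several ratios, measure the accuracy drop on a test set, and use those drops to pick per-layer pruning ratios. A minimal sketch of the idea, with a hypothetical `prune_and_eval` helper and toy numbers rather than PaddleSlim's real API:

```python
# Sketch of the sensitivity analysis the docstring above describes.
# `prune_and_eval` and the toy accuracies are hypothetical placeholders.
def sensitivity_analysis(layers, baseline_acc, prune_and_eval,
                         ratios=(0.1, 0.2, 0.3)):
    """Map each layer to {pruned_ratio: accuracy loss vs. the unpruned model}."""
    return {
        layer: {r: baseline_acc - prune_and_eval(layer, r) for r in ratios}
        for layer in layers
    }

# Toy stand-in: conv1 degrades five times faster than conv2.
fake_eval = lambda layer, r: 0.90 - r * (0.5 if layer == "conv1" else 0.1)
print(sensitivity_analysis(["conv1", "conv2"], 0.90, fake_eval))
```

A pruner can then pick, per layer, the largest ratio whose accuracy loss stays under a threshold, which is the "group of best ratios with some condition" the docstring mentions.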
2 changes: 1 addition & 1 deletion deploy/slim/quantization/quant_kl.py
@@ -139,7 +139,7 @@ def main(config, device, logger, vdl_writer):
 if not (os.path.exists(os.path.join(inference_model_dir, "inference.pdmodel")) and \
         os.path.exists(os.path.join(inference_model_dir, "inference.pdiparams")) ):
     raise ValueError(
-        "Please set inference model dir in Global.inference_model or Global.pretrained_model for post-quantazition"
+        "Please set inference model dir in Global.inference_model or Global.pretrained_model for post-quantization"
     )

 if is_layoutxlm_ser:
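
The corrected message belongs to a fail-fast guard: post-training quantization needs an exported inference model, so the script verifies both files exist before doing any work. The same guard could be factored into a helper, sketched here (the helper name is ours, not part of quant_kl.py):

```python
import os

def require_inference_model(model_dir):
    # Hypothetical helper mirroring the guard above; not PaddleOCR code.
    required = ("inference.pdmodel", "inference.pdiparams")
    missing = [f for f in required
               if not os.path.exists(os.path.join(model_dir, f))]
    if missing:
        raise ValueError(
            f"Missing {missing} in {model_dir}: please set "
            "Global.inference_model or Global.pretrained_model "
            "for post-quantization")
```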
2 changes: 1 addition & 1 deletion tools/program.py
@@ -127,7 +127,7 @@ def check_device(use_gpu, use_xpu=False, use_npu=False, use_mlu=False):

 try:
     if use_gpu and use_xpu:
-        print("use_xpu and use_gpu can not both be ture.")
+        print("use_xpu and use_gpu can not both be true.")
     if use_gpu and not paddle.is_compiled_with_cuda():
         print(err.format("use_gpu", "cuda", "gpu", "use_gpu"))
         sys.exit(1)
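
For context, the fixed string lives in check_device, which validates device flags before training starts. A simplified sketch of the gpu/xpu branch (the real function also covers npu and mlu; exiting on the conflicting-flag case is our assumption, since the shown lines only print):

```python
import sys

import paddle

def check_gpu_xpu(use_gpu, use_xpu=False):
    # Simplified sketch of the flag checks above; not the full check_device.
    if use_gpu and use_xpu:
        print("use_xpu and use_gpu can not both be true.")
        sys.exit(1)  # assumption: treat conflicting flags as fatal
    if use_gpu and not paddle.is_compiled_with_cuda():
        print("use_gpu is true but this Paddle build lacks CUDA support; "
              "set use_gpu to false or install paddlepaddle-gpu.")
        sys.exit(1)
```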