std::runtime_error exceptions not caught as std::string by Visual C++? #1589

Closed
mgroeber9110 opened this issue May 24, 2023 · 8 comments · Fixed by #1599

Comments

@mgroeber9110
Contributor

Platform: Windows x64
Commit: 7e4ea5b

I noticed that main.exe fails for me when I run it without any parameters and no model is found. The only output I got was:

C:\Develop\llama.cpp>bin\Release\main.exe
main: build = 583 (7e4ea5b)
main: seed  = 1684960511

In the debugger, I found that this line had triggered an unhandled exception:

throw std::runtime_error(format("failed to open %s: %s", fname, strerror(errno)));

When I change the catch statement like this:

-    } catch (const std::string & err) {
-        fprintf(stderr, "error loading model: %s\n", err.c_str());
+    } catch (const std::exception & err) {
+        fprintf(stderr, "error loading model: %s\n", err.what());

then I get a proper error message again, as in the past:

C:\Develop\llama.cpp\build>bin\Release\main.exe
main: build = 583 (7e4ea5b)
main: seed  = 1684961912
error loading model: failed to open models/7B/ggml-model.bin: No such file or directory
llama_init_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'models/7B/ggml-model.bin'
main: error: unable to load model

This appears to be related to the changes made in #1316, which explicitly changed the thrown exception type to std::exception, even though the exceptions are still caught as std::string. Is this behavior specific to certain compilers, and does it make sense for me to submit an MR that catches exceptions as std::exception, as above, which appears to be the more common C++ practice?

My compiler version details (from CMake):

-- Building for: Visual Studio 17 2022
-- Selecting Windows SDK version 10.0.22000.0 to target Windows 10.0.19045.
-- The C compiler identification is MSVC 19.35.32216.1
-- The CXX compiler identification is MSVC 1
@divinity76
Contributor

divinity76 commented May 26, 2023

does it make sense for me to submit an MR that catches exceptions as std::exception as above

absolutely. also, if a compiler actually catches

try {
    throw std::runtime_error("foo");
} catch(std::string &err){
     ...
}

... i'd recommend sending a bug report to the compiler developers. that shouldn't be caught.
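
for illustration, here is a minimal standalone program (my own sketch, not code from this repository) showing why that handler should never match: std::runtime_error derives from std::exception, not from std::string.

#include <cstdio>
#include <stdexcept>
#include <string>

int main() {
    try {
        // same situation as in llama.cpp: a std::runtime_error is thrown
        throw std::runtime_error("failed to open models/7B/ggml-model.bin");
    } catch (const std::string & err) {
        // should never match: std::runtime_error does not derive from std::string
        fprintf(stderr, "caught as string: %s\n", err.c_str());
    } catch (const std::exception & err) {
        // matches: std::runtime_error publicly derives from std::exception
        fprintf(stderr, "caught as exception: %s\n", err.what());
    }
    return 0;
}

a conforming compiler selects the second handler and prints the what() message.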

@mgroeber9110
Contributor Author

It turns out that the current official release on GitHub for x64 shows the same behaviour: apparently the code mixes std::runtime_error and std::string exceptions in different parts. I will create an MR that makes this consistent.
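
The consistent shape would be to throw a std::exception subclass at every error site and to catch the common base class at the top level. A minimal sketch of that pattern (hypothetical helper name, not the actual patch):

#include <cstdio>
#include <stdexcept>
#include <string>

// error sites all throw a std::exception subclass
static void load_model(const std::string & fname) {
    throw std::runtime_error("failed to open " + fname + ": No such file or directory");
}

int main() {
    try {
        load_model("models/7B/ggml-model.bin");
    } catch (const std::exception & err) {
        // one handler covers std::runtime_error and every other std::exception subclass
        fprintf(stderr, "error loading model: %s\n", err.what());
        return 1;
    }
    return 0;
}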

@rajivmehtaflex

I'm facing the same issue on Ubuntu 22.04.2 LTS x86_64.

./main -m ./models/7B/ggml-model-q4_0.bin --color -f ./prompts/alpaca.txt -ins --temp 0.8 --top_k 40 -n 5000 --repeat_penalty 1.3 --top_p 0.0
main: build = 590 (66874d4)
main: seed = 1685119680
llama.cpp: loading model from ./models/7B/ggml-model-q4_0.bin
terminate called after throwing an instance of 'std::runtime_error'
what(): unexpectedly reached end of file
Aborted

@divinity76
Contributor

divinity76 commented May 27, 2023

@rajivmehtaflex that's a different issue. Also, you probably need to regenerate your q4_0.bin; it's most likely in an old format (the format changed recently), which causes the exact error you posted.
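
(for reference, the usual way to regenerate the file at the time of this thread was roughly the following; the exact commands may have changed since, so check the repository README, and the paths here are just examples:

python3 convert.py models/7B/
./quantize models/7B/ggml-model-f16.bin models/7B/ggml-model-q4_0.bin q4_0

the first step converts the original weights to the current ggml format, the second re-quantizes the f16 output to q4_0.)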

@Nyandaro

I'm having the same issue.
When I run main.exe, it stops and tries to send info to Microsoft.

main: build = 0 (unknown)
main: seed = 1685214659
terminate called after throwing an instance of 'std::runtime_error'
what(): failed to open models/7B/ggml-model.bin: No such file or directory

I hope this thread will solve it.
Windows 10 Pro 64-bit
Ryzen 7

@divinity76
Contributor

divinity76 commented May 27, 2023

@Nyandaro that's also a different issue, and this thread is probably not the right place to debug it,
but open a cmd terminal and run

cd "C:\path\to\llama"
dir /s /b | curl share.loltek.net -d @-

what do you get?

@Nyandaro

@divinity76

What I did was download this repository as a ZIP and build main.exe from the unzipped folder with w64devkit. When I double-clicked it, the output shown above appeared.

Sorry, I don't know what that command means.

@divinity76
Contributor

divinity76 commented May 28, 2023

@Nyandaro cd "PATH" changes the terminal's "working directory" to the specified path (documentation: https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/cd ),
dir /s /b creates a recursive list of all files in the working directory (documentation: https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/dir ),
and | curl share.loltek.net -d @- uses curl (documentation: https://curl.se/docs/manpage.html ) to upload that list to https://share.loltek.net and return a shareable link (documentation: https://share.loltek.net/ ).
That list would allow us to debug why you are getting the error:

what(): failed to open models/7B/ggml-model.bin: No such file or directory

But this is not the right place to debug it. Maybe the guys over at https://SuperUser.com are willing to help you :)
