I was preparing to use text-generation-webui with CUBLAS enabled, so I tried to build libllama.so myself. Then I got this error:
```
[root@A12-213P llama.cpp]# LLAMA_CUBLAS=1 make libllama.so
I llama.cpp build info:
I UNAME_S:  Linux
I UNAME_P:  x86_64
I UNAME_M:  x86_64
I CFLAGS:   -I. -O3 -DNDEBUG -std=c11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native -DGGML_USE_CUBLAS -I/usr/local/cuda/include
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native
I LDFLAGS:  -lcublas_static -lculibos -lcudart_static -lcublasLt_static -lpthread -ldl -L/usr/local/cuda/lib64 -lrt
I CC:       cc (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9)
I CXX:      g++ (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9)
g++ -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -shared -fPIC -o libllama.so llama.o ggml.o ggml-cuda.o -lcublas_static -lculibos -lcudart_static -lcublasLt_static -lpthread -ldl -L/usr/local/cuda/lib64 -lrt
/opt/rh/devtoolset-11/root/usr/libexec/gcc/x86_64-redhat-linux/11/ld: ggml-cuda.o: relocation R_X86_64_32 against `.bss' can not be used when making a shared object; recompile with -fPIC
collect2: error: ld returned 1 exit status
make: *** [Makefile:184: libllama.so] Error 1
[root@A12-213P llama.cpp]#
```
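The failure itself is a generic ELF rule, not CUDA-specific: on x86-64, an object compiled as position-dependent code cannot be linked into a shared object. Here is a minimal sketch that reproduces the same linker message with plain `cc` (file and symbol names are made up; assumes GCC on x86-64):

```sh
# Hypothetical reproduction of the same error, not taken from this issue.
cat > pic_demo.c <<'EOF'
static char buf[64];                /* uninitialized -> lands in .bss */
char *get_buf(void) { return buf; } /* taking the address needs a relocation */
EOF

cc -O2 -fno-pie -c pic_demo.c -o pic_demo.o   # position-dependent, like the nvcc output was
cc -shared -o libpic_demo.so pic_demo.o       # fails: relocation R_X86_64_32 against `.bss'

cc -O2 -fPIC -c pic_demo.c -o pic_demo.o      # recompile position-independent
cc -shared -o libpic_demo.so pic_demo.o       # links cleanly
```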
Does it help if you add `-fPIC` to the nvcc line in the Makefile? https://github.com/ggerganov/llama.cpp/blob/5addcb120cf2682c7ede0b1c520592700d74c87c/Makefile#L107-L108
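Concretely, that suggestion is a one-flag change to the `ggml-cuda.o` rule. This is reconstructed from the fixed rule quoted later in the thread, so treat it as a sketch rather than the exact diff:

```diff
 ggml-cuda.o: ggml-cuda.cu ggml-cuda.h
-	nvcc -arch=native -c -o $@ $<
+	nvcc -arch=native --compiler-options -fPIC -c -o $@ $<
```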
Thanks! Adding `--compiler-options -fPIC` at line 108 of the Makefile resolved the problem:
```makefile
ifdef LLAMA_CUBLAS
	CFLAGS  += -DGGML_USE_CUBLAS -I/usr/local/cuda/include
	LDFLAGS += -lcublas_static -lculibos -lcudart_static -lcublasLt_static -lpthread -ldl -L/usr/local/cuda/lib64 -lrt
	OBJS    += ggml-cuda.o

ggml-cuda.o: ggml-cuda.cu ggml-cuda.h
	nvcc -arch=native --compiler-options -fPIC -c -o $@ $<
endif
```
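For what it's worth, nvcc's `--compiler-options` (alias `-Xcompiler`) simply forwards the flag to the host C++ compiler, so the host-side parts of ggml-cuda.o are now built position-independent as well. A quick way to confirm the rebuild (these commands are my assumption, not from the issue):

```sh
# Assumed verification steps after editing the Makefile:
make clean
LLAMA_CUBLAS=1 make libllama.so

# The library should now link; check that it exports the llama API:
nm -D libllama.so | grep ' T llama_'
```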
This will be fixed in #1094