
[User] latest update fails to compile #1176


Closed
Electrofried opened this issue Apr 25, 2023 · 3 comments

Comments

@Electrofried

Electrofried commented Apr 25, 2023

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [yes] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • [that too] I carefully followed the README.md.
  • [mhhum] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [yup] I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

Running make after pulling from GitHub should... work?

Current Behavior

make fails with the following:

I llama.cpp build info: 
I UNAME_S:  Linux
I UNAME_P:  x86_64
I UNAME_M:  x86_64
I CFLAGS:   -I.              -O3 -DNDEBUG -std=c11   -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native
I LDFLAGS:  
I CC:       cc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
I CXX:      g++ (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0

cc  -I.              -O3 -DNDEBUG -std=c11   -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -pthread -march=native -mtune=native   -c ggml.c -o ggml.o
ggml.c: In function 'ggml_vec_dot_q4_2_q8_0':
ggml.c:2885:40: warning: implicit declaration of function '_mm256_set_m128'; did you mean '_mm256_set_epi8'? [-Wimplicit-function-declaration]
         const __m256 d = _mm256_mul_ps(_mm256_set_m128(d1, d0), _mm256_broadcast_ss(&y[i].d));
                                        ^~~~~~~~~~~~~~~
                                        _mm256_set_epi8
ggml.c:2885:40: error: incompatible type for argument 1 of '_mm256_mul_ps'
In file included from /usr/lib/gcc/x86_64-linux-gnu/7/include/immintrin.h:41:0,
                 from ggml.c:189:
/usr/lib/gcc/x86_64-linux-gnu/7/include/avxintrin.h:318:1: note: expected '__m256 {aka __vector(8) float}' but argument is of type 'int'
 _mm256_mul_ps (__m256 __A, __m256 __B)
 ^~~~~~~~~~~~~
ggml.c:2889:22: warning: implicit declaration of function '_mm256_set_m128i'; did you mean '_mm256_set_epi8'? [-Wimplicit-function-declaration]
         __m256i bx = _mm256_set_m128i(bx1, bx0);
                      ^~~~~~~~~~~~~~~~
                      _mm256_set_epi8
ggml.c:2889:22: error: incompatible types when initializing type '__m256i {aka __vector(4) long long int}' using type 'int'
ggml.c: In function 'ggml_vec_dot_q4_3_q8_0':
ggml.c:3015:27: error: incompatible types when initializing type '__m256 {aka const __vector(8) float}' using type 'int'
         const __m256 dx = _mm256_set_m128(d1, d0);
                           ^~~~~~~~~~~~~~~
ggml.c:3022:28: error: incompatible types when initializing type '__m256i {aka const __vector(4) long long int}' using type 'int'
         const __m256i bx = _mm256_set_m128i(bx1, bx0);
                            ^~~~~~~~~~~~~~~~
ggml.c: In function 'ggml_compute_forward_sum_f32':
ggml.c:6850:21: warning: implicit conversion from 'float' to 'ggml_float {aka double}' to match other operand of binary expression [-Wdouble-promotion]
                 sum += row_sum;
                     ^~
At top level:
ggml.c:1286:13: warning: 'quantize_row_q4_2_rmse' defined but not used [-Wunused-function]
 static void quantize_row_q4_2_rmse(const float * restrict x, block_q4_2 * restrict y, int k) {
             ^~~~~~~~~~~~~~~~~~~~~~
Makefile:161: recipe for target 'ggml.o' failed
make: *** [ggml.o] Error 1
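
The two intrinsics the compiler flags, _mm256_set_m128 and _mm256_set_m128i, appear to be missing from GCC 7's headers (they were only added in GCC 8), so GCC 7.5 treats them as implicitly declared functions returning int, which produces the type errors above. As a sketch (the helper names below are illustrative, not what ggml.c itself uses), the same 256-bit values can be built from intrinsics that GCC 7 does provide:

// Compatibility sketch for compilers without _mm256_set_m128 / _mm256_set_m128i.
// Requires AVX (e.g. -mavx or -march=native, as in the build flags above).
#include <immintrin.h>

static inline __m256 set_m128_compat(__m128 hi, __m128 lo) {
    // lo fills bits 0..127, hi fills bits 128..255, matching _mm256_set_m128(hi, lo)
    return _mm256_insertf128_ps(_mm256_castps128_ps256(lo), hi, 1);
}

static inline __m256i set_m128i_compat(__m128i hi, __m128i lo) {
    // same layout for the integer variant, matching _mm256_set_m128i(hi, lo)
    return _mm256_insertf128_si256(_mm256_castsi128_si256(lo), hi, 1);
}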

Environment and Context

  • Physical (or virtual) hardware you are using:

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 113
Model name: AMD Ryzen 9 3900X 12-Core Processor
Stepping: 0
CPU MHz: 2178.064
CPU max MHz: 4672.0698
CPU min MHz: 2200.0000
BogoMIPS: 7599.40
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-23
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es

  • Operating System:

Ubuntu 22.04

  • SDK version:

Python 3.10.8
GNU Make 4.1
g++ (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0

@slaren
Member

slaren commented Apr 25, 2023

Updating gcc may help, that version is ancient.
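
A quick way to confirm whether a given compiler ships the intrinsic is a small stand-alone test built with the same toolchain; the file name and the -mavx flag here are illustrative:

/* check_intrinsics.c -- compile with: cc -mavx check_intrinsics.c
   Fails to compile on GCC 7.x because _mm256_set_m128 is missing;
   builds and prints "1.000000 2.000000" on GCC 8 or newer. */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128 lo = _mm_set1_ps(1.0f);
    __m128 hi = _mm_set1_ps(2.0f);
    __m256 v  = _mm256_set_m128(hi, lo);  /* the intrinsic the build error points at */
    float out[8];
    _mm256_storeu_ps(out, v);
    printf("%f %f\n", out[0], out[7]);
    return 0;
}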

@Electrofried
Author

Thanks, that resolved it

@ATianmm

ATianmm commented Jun 1, 2023

> Thanks, that resolved it

May I ask how you solved it? Which specific version of gcc did you upgrade to?
