Use AVX2 to speedup matmulQ40 #54

Merged
b4rtaz merged 1 commit into b4rtaz:main from DifferentialityDevelopment:patch-2 on May 15, 2024

Conversation

DifferentialityDevelopment
Contributor

Hi @b4rtaz

I managed to get a significant speed-up on my machine with the following changes.

I added AVX2 instructions to speed up the matmulQ40 function in funcs.cpp.
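
The core idea is the usual AVX2 quantized dot product: unpack the 32 packed 4-bit weights of each block, widen them to floats, scale them by the block's delta, and multiply-accumulate against the input vector. The sketch below only illustrates that technique; the block layout, the float (rather than fp16) scale, and the function name are assumptions made for the example, not the exact code in this PR.

```cpp
#include <immintrin.h>
#include <cstdint>

// Hypothetical Q4_0-style block: 32 weights per block, stored as a scale plus 16 bytes
// of packed 4-bit values offset by 8. (A real format would likely store the scale as
// fp16; a plain float keeps the sketch self-contained.)
struct BlockQ40 {
    float d;         // block scale
    uint8_t qs[16];  // 32 x 4-bit quantized weights
};

// Dot product of one quantized row against a float input vector using AVX2.
// Assumes the input has nBlocks * 32 elements, and that the low nibbles hold
// weights 0..15 of a block while the high nibbles hold weights 16..31.
static float dotQ40F32(const BlockQ40* row, const float* x, int nBlocks) {
    __m256 acc = _mm256_setzero_ps();
    const __m128i mask = _mm_set1_epi8(0x0F);
    const __m128i offs = _mm_set1_epi8(8);
    for (int b = 0; b < nBlocks; b++) {
        const __m256 d = _mm256_set1_ps(row[b].d);
        const __m128i packed = _mm_loadu_si128((const __m128i*)row[b].qs);
        // Split the 16 bytes into low and high nibbles, then remove the +8 offset.
        const __m128i halves[2] = {
            _mm_sub_epi8(_mm_and_si128(packed, mask), offs),
            _mm_sub_epi8(_mm_and_si128(_mm_srli_epi16(packed, 4), mask), offs),
        };
        for (int h = 0; h < 2; h++) {
            // Widen 16 signed 8-bit weights to 32-bit, convert to float, scale, accumulate.
            const __m256i w16 = _mm256_cvtepi8_epi16(halves[h]);
            const __m256 w0 = _mm256_cvtepi32_ps(_mm256_cvtepi16_epi32(_mm256_castsi256_si128(w16)));
            const __m256 w1 = _mm256_cvtepi32_ps(_mm256_cvtepi16_epi32(_mm256_extracti128_si256(w16, 1)));
            const float* xb = x + b * 32 + h * 16;
            acc = _mm256_fmadd_ps(_mm256_mul_ps(w0, d), _mm256_loadu_ps(xb), acc);      // FMA assumed available
            acc = _mm256_fmadd_ps(_mm256_mul_ps(w1, d), _mm256_loadu_ps(xb + 8), acc);
        }
    }
    // Horizontal sum of the 8 accumulator lanes.
    __m128 s = _mm_add_ps(_mm256_castps256_ps128(acc), _mm256_extractf128_ps(acc, 1));
    s = _mm_hadd_ps(s, s);
    s = _mm_hadd_ps(s, s);
    return _mm_cvtss_f32(s);
}
```

Keeping a single __m256 accumulator per output element and doing one horizontal sum at the end keeps the inner loop tight, which is the usual design choice for this kind of kernel.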

From my initial testing, it definitely appears to be faster.

With 1 worker:
sudo nice -n -20 ./main inference --steps 20 --prompt "Hello World! " --model ~/Meta-Llama-3-8B-Instruct-Distributed/dllama_original_q40.bin --tokenizer ~/Meta-Llama-3-8B-Instruct-Distributed/dllama-llama3-tokenizer.t --weights-float-type q40 --buffer-float-type q80 --nthreads 8 --workers 192.168.1.3:9990
[sudo] password for azamorn:
Using AVX2 instructions
💡 arch: llama2
💡 dim: 4096
💡 hiddenDim: 14336
💡 nLayers: 32
💡 nHeads: 32
💡 nKvHeads: 8
💡 vocabSize: 128256
💡 seqLen: 2048
💡 nSlices: 2
💡 ropeTheta: 500000.0
📄 bosId: 128000
📄 eosId: 128001
🕒 ropeCache: 16384 kB
⏩ Loaded 6175568 kB
🔶 G 358 ms I 147 ms T 211 ms S 1917438 kB R 442 kB Hello
🔶 G 352 ms I 133 ms T 219 ms S 510 kB R 442 kB World
🔶 G 344 ms I 143 ms T 200 ms S 510 kB R 442 kB !
🔶 G 369 ms I 145 ms T 224 ms S 510 kB R 442 kB
🔶 G 339 ms I 140 ms T 198 ms S 510 kB R 442 kB I
🔶 G 347 ms I 148 ms T 198 ms S 510 kB R 442 kB 'm
🔶 G 368 ms I 150 ms T 218 ms S 510 kB R 442 kB a
🔶 G 361 ms I 137 ms T 223 ms S 510 kB R 442 kB bot
🔶 G 380 ms I 137 ms T 242 ms S 510 kB R 442 kB .
🔶 G 365 ms I 143 ms T 221 ms S 510 kB R 442 kB
🔶 G 356 ms I 139 ms T 217 ms S 510 kB R 442 kB I
🔶 G 356 ms I 145 ms T 211 ms S 510 kB R 442 kB 'm
🔶 G 364 ms I 143 ms T 221 ms S 510 kB R 442 kB here
🔶 G 375 ms I 136 ms T 239 ms S 510 kB R 442 kB to
🔶 G 345 ms I 132 ms T 212 ms S 510 kB R 442 kB help
🔶 G 367 ms I 140 ms T 227 ms S 510 kB R 442 kB you
🔶 G 343 ms I 134 ms T 208 ms S 510 kB R 442 kB with
🔶 G 352 ms I 144 ms T 208 ms S 510 kB R 442 kB any
🔶 G 362 ms I 145 ms T 217 ms S 510 kB R 442 kB questions
🔶 G 344 ms I 143 ms T 200 ms S 510 kB R 442 kB you
Generated tokens: 20
Avg tokens / second: 2.80
Avg generation time: 357.35 ms
Avg inference time: 141.20 ms
Avg transfer time: 215.70 ms

Without a worker:
sudo nice -n -20 ./main inference --steps 20 --prompt "Hello World! " --model ~/Meta-Llama-3-8B-Instruct-Distributed/dllama_original_q40.bin --tokenizer ~/Meta-Llama-3-8B-Instruct-Distributed/dllama-llama3-tokenizer.t --weights-float-type q40 --buffer-float-type q80 --nthreads 8
Using AVX2 instructions
💡 arch: llama2
💡 dim: 4096
💡 hiddenDim: 14336
💡 nLayers: 32
💡 nHeads: 32
💡 nKvHeads: 8
💡 vocabSize: 128256
💡 seqLen: 2048
💡 nSlices: 1
💡 ropeTheta: 500000.0
📄 bosId: 128000
📄 eosId: 128001
🕒 ropeCache: 32768 kB
⏩ Loaded 6175568 kB
🔶 G 232 ms I 232 ms T 0 ms S 0 kB R 0 kB Hello
🔶 G 256 ms I 255 ms T 1 ms S 0 kB R 0 kB World
🔶 G 235 ms I 234 ms T 1 ms S 0 kB R 0 kB !
🔶 G 223 ms I 222 ms T 1 ms S 0 kB R 0 kB
🔶 G 230 ms I 229 ms T 0 ms S 0 kB R 0 kB I
🔶 G 244 ms I 243 ms T 0 ms S 0 kB R 0 kB am
🔶 G 235 ms I 233 ms T 1 ms S 0 kB R 0 kB an
🔶 G 232 ms I 231 ms T 0 ms S 0 kB R 0 kB AI
🔶 G 228 ms I 227 ms T 1 ms S 0 kB R 0 kB designed
🔶 G 227 ms I 225 ms T 1 ms S 0 kB R 0 kB to
🔶 G 232 ms I 230 ms T 1 ms S 0 kB R 0 kB generate
🔶 G 227 ms I 225 ms T 1 ms S 0 kB R 0 kB text
🔶 G 225 ms I 224 ms T 0 ms S 0 kB R 0 kB based
🔶 G 229 ms I 228 ms T 0 ms S 0 kB R 0 kB on
🔶 G 232 ms I 230 ms T 1 ms S 0 kB R 0 kB the
🔶 G 227 ms I 225 ms T 1 ms S 0 kB R 0 kB input
🔶 G 228 ms I 227 ms T 0 ms S 0 kB R 0 kB I
🔶 G 228 ms I 226 ms T 1 ms S 0 kB R 0 kB receive
🔶 G 228 ms I 228 ms T 0 ms S 0 kB R 0 kB .
🔶 G 226 ms I 224 ms T 1 ms S 0 kB R 0 kB I
Generated tokens: 20
Avg tokens / second: 4.33
Avg generation time: 231.20 ms
Avg inference time: 229.90 ms
Avg transfer time: 0.60 ms

So it does seem to be working correctly at least, and it's definitely much faster than without it.

For reference, previously I was getting:
With worker:
Avg tokens / second: 2.60
Avg generation time: 384.90 ms
Avg inference time: 184.65 ms
Avg transfer time: 199.60 ms

Without worker:
Avg tokens / second: 3.69
Avg generation time: 271.15 ms
Avg inference time: 269.80 ms
Avg transfer time: 0.90 ms

So with a worker it went up to 2.8 from 2.6 t/s (~8% faster).
Without a worker it went up to 4.33 from 3.69 t/s (~17% faster).
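
As an aside on the "Using AVX2 instructions" line in the logs: a vectorized path like this is normally selected at compile time. Here is a minimal sketch of that kind of guard, assuming the predefined __AVX2__ macro is used; the PR's actual detection code may differ.

```cpp
#include <cstdio>

// Minimal sketch of compile-time AVX2 detection. GCC and Clang define __AVX2__ when
// building with -mavx2 (or -march=native on an AVX2-capable CPU); the exact mechanism
// in funcs.cpp may differ.
int main() {
#if defined(__AVX2__)
    std::printf("Using AVX2 instructions\n");  // the line visible in the logs above
#else
    std::printf("AVX2 not available, falling back to the scalar matmul\n");
#endif
    return 0;
}
```

Building with -march=native on an AVX2-capable machine is the usual way to get the compiler to define this macro.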
@DifferentialityDevelopment
Contributor Author

This project has taught me that I definitely need a faster networking setup. I'm looking at connecting my machines using SFP+, via a switch with 4 or more SFP+ ports.

b4rtaz merged commit d1304c8 into b4rtaz:main on May 15, 2024
@b4rtaz
Owner

b4rtaz commented May 15, 2024

Merged. Great job!

DifferentialityDevelopment deleted the patch-2 branch on May 15, 2024 at 07:55
@b4rtaz
Owner

b4rtaz commented May 15, 2024

Confirmed the speed up.

Setup: GitHub Codespaces, 4 cores of an AMD EPYC 7763 64-Core Processor, 16 GB RAM

0.5.0:

@b4rtaz ➜ /workspaces/distributed-llama (main) $ ./main inference --model ./dllama_meta-llama-3-8b_q40.bin --tokenizer dllama_meta-llama3-tokenizer.t --weights-float-type q40 --buffer-float-type q80 --prompt "Hello world" --steps 16 --nthreads 4
...
⏩ Loaded 6175568 kB
🔶 G  755 ms I  755 ms T    0 ms S      0 kB R      0 kB Hello
🔶 G  730 ms I  730 ms T    0 ms S      0 kB R      0 kB  world
🔶 G  759 ms I  758 ms T    0 ms S      0 kB R      0 kB !
🔶 G  819 ms I  811 ms T    7 ms S      0 kB R      0 kB  <
🔶 G  717 ms I  715 ms T    1 ms S      0 kB R      0 kB br
🔶 G  874 ms I  862 ms T   11 ms S      0 kB R      0 kB >

🔶 G  710 ms I  708 ms T    0 ms S      0 kB R      0 kB  <
🔶 G  833 ms I  827 ms T    5 ms S      0 kB R      0 kB h
🔶 G  764 ms I  762 ms T    1 ms S      0 kB R      0 kB 1
🔶 G  726 ms I  725 ms T    0 ms S      0 kB R      0 kB  align
🔶 G  864 ms I  857 ms T    6 ms S      0 kB R      0 kB ="
🔶 G  813 ms I  808 ms T    4 ms S      0 kB R      0 kB center
🔶 G  698 ms I  697 ms T    0 ms S      0 kB R      0 kB ">
🔶 G  746 ms I  739 ms T    6 ms S      0 kB R      0 kB Hi
🔶 G  710 ms I  709 ms T    0 ms S      0 kB R      0 kB  �
🔶 G  717 ms I  714 ms T    2 ms S      0 kB R      0 kB 
Generated tokens:    16
Avg tokens / second: 1.31
Avg generation time: 764.69 ms
Avg inference time:  761.06 ms
Avg transfer time:   2.69 ms

Your PR:

@b4rtaz ➜ /workspaces/distributed-llama (main) $ ./main inference --model ./dllama_meta-llama-3-8b_q40.bin --tokenizer dllama_meta-llama3-tokenizer.t --weights-float-type q40 --buffer-float-type q80 --prompt "Hello world" --steps 16 --nthreads 4
...
⏩ Loaded 6175568 kB
🔶 G  568 ms I  567 ms T    1 ms S      0 kB R      0 kB Hello
🔶 G  642 ms I  642 ms T    0 ms S      0 kB R      0 kB  world
🔶 G  579 ms I  578 ms T    0 ms S      0 kB R      0 kB !
🔶 G  566 ms I  565 ms T    0 ms S      0 kB R      0 kB  


🔶 G  646 ms I  643 ms T    1 ms S      0 kB R      0 kB I
🔶 G  563 ms I  562 ms T    0 ms S      0 kB R      0 kB  am
🔶 G  818 ms I  785 ms T   32 ms S      0 kB R      0 kB  a
🔶 G  593 ms I  585 ms T    7 ms S      0 kB R      0 kB  computer
🔶 G  761 ms I  737 ms T   23 ms S      0 kB R      0 kB  science
🔶 G  579 ms I  579 ms T    0 ms S      0 kB R      0 kB  student
🔶 G  566 ms I  564 ms T    1 ms S      0 kB R      0 kB  in
🔶 G  625 ms I  623 ms T    0 ms S      0 kB R      0 kB  China
🔶 G  618 ms I  616 ms T    1 ms S      0 kB R      0 kB .
🔶 G  573 ms I  573 ms T    0 ms S      0 kB R      0 kB  I
🔶 G  690 ms I  657 ms T   32 ms S      0 kB R      0 kB  have
🔶 G  668 ms I  646 ms T   22 ms S      0 kB R      0 kB  been
Generated tokens:    16
Avg tokens / second: 1.59
Avg generation time: 628.44 ms
Avg inference time:  620.12 ms
Avg transfer time:   7.50 ms
