Cross compilation fails due to gozstd #55
Why not use the native Go gzip?
gzip is many times slower by design. And https://github.com/klauspost/compress/tree/master/zstd was at least 2 times slower when I last tried it... The only way forward I see at the moment is to hide zstd behind some build tag.
Is speed that important when we're talking about async tasks? I'd say that cross-platform compatibility is probably better to have than faster compression 🤔 Especially if the diff between gzip and gozstd is on the order of microseconds. But I guess that build tags might also work if you want to give people the option to pick one compression lib over the other.
It has, to my knowledge, never been 2 times slower, so I am interested in seeing your actual benchmarks. Also, small blocks got a nice speedup recently.
@klauspost thanks for looking into this. Your work on compression is truly amazing 👍 I have never spent much time on benchmarks, but I quickly re-created something simple and got this result:

```
BenchmarkZstd-12     11724     103757 ns/op
BenchmarkGoZstd-12     168    6675344 ns/op
```

Probably I will get better results by reusing the encoder & decoder...
PS This is a variant that reuses the encoder and decoder - the timing is not much better...
https://gist.github.com/klauspost/7f15d32dd6c78d5ce7d67f0353958e8a
Writing the output from your encoder and decompressing with zstd doesn't work, so I cannot tell what you are storing, whether it has a CRC, etc. Obviously, not adding a CRC is faster.
Never draw conclusions from a single sample, and never from a bad benchmark. Just as an example, benchmark 'compressing' random data:
And you should probably look into why your encoder isn't producing zstandard-compatible output.
Hm, there is nothing of mine in the benchmark I posted... One benchmark uses the C zstd encoder and the other uses the Encoder from your package...
Again, what do you mean by my encoder? I am using github.com/valyala/gozstd. PS: github.com/valyala/gozstd is a C wrapper around the original zstd...
Well, the output it returns is not zstandard compatible. You can add this to your encoder once you have flushed:
If you use the zstd command-line tool to decompress, you get:
Your last block is not marked as the last block:
Okay, now "Writing the output from your encoder and decompressing with zstd doesn't work" makes sense. TBH this stuff is beyond me - I just picked the first C wrapper available. I am not sure what the right thing is here (maybe omitting the CRC is fine?), but I see that it is significantly faster than the pure Go variant in some cases. No doubt that handling random data quickly is important (skipping already-compressed bytes), but it would be nice to have comparable speed in other cases too. Anyway, all of this is beyond me - I just wanted to share my naive benchmark. Hopefully it is not altogether wrong.
Just pointing out that what your wrapper is writing is not zstandard compatible.
Sure, but you do pay a price for memory safety, debuggability and cross compilation. Setting that as a bar will make it impossible to reach. If speed is important, use
Well, I would call it misleading, and a sample set of one data type is not something you can use for conclusions.
Closed by 07ac935
When I try to cross-compile a project that includes this lib I get the following errors:
I build using the following:
I do this on macOS 10.15.2.
If I build on the mac (no cross-compilation) it works without errors.
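One way forward, per the build-tag suggestion in the discussion above, is to gate the cgo-backed gozstd code behind build constraints so that cross builds with `CGO_ENABLED=0` (or an explicit tag) select a pure-Go fallback. A hypothetical two-file sketch (the `codec` package name, file names, and `nozstd` tag are illustrative and not this repo's actual layout; the `gozstd.Compress` and `zstd.Encoder.EncodeAll` calls are real APIs of those libraries):

```go
// File: compress_cgo.go (hypothetical) - chosen when cgo is enabled
// and the "nozstd" tag is not set.
//go:build cgo && !nozstd

package codec

import "github.com/valyala/gozstd"

func Compress(b []byte) []byte { return gozstd.Compress(nil, b) }

// File: compress_nocgo.go (hypothetical) - pure-Go fallback, chosen when
// cross-compiling with CGO_ENABLED=0 or building with `go build -tags nozstd`.
//go:build !cgo || nozstd

package codec

import "github.com/klauspost/compress/zstd"

// Reused long-lived encoder, per the reuse advice earlier in the thread.
var enc, _ = zstd.NewWriter(nil)

func Compress(b []byte) []byte { return enc.EncodeAll(b, nil) }
```

With a layout like this, both variants emit zstd-format output, so data written by one build can still be read by the other.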