Lucene 9.9: Benchmark HNSW improvements #2292
I’ll try to find time to run this.
Reporting on experiments with the code on #2302 at commit. MS MARCO v1 passage, dev queries, cosDPR-distil model. This is the model reported here:
**Experimental Setup**

Comparing HNSW fp32 vs. int8. In both cases, I'm forcing no merge to reduce variance in indexing times (which vary from trial to trial depending on which segments get selected for merging). In both conditions, we end up with 16 segments. Experiments run on my Mac Studio; stats:
Four trials for each condition. Commands:
**Results**

Indexing:
Retrieval:
Finally, effectiveness remains the same: no difference.
🎉
On indexing and initial flush, this isn't surprising: we build the graph with float32 and then incur the small additional overhead of calculating quantiles and storing the quantized representation of everything. But I have noticed that merging is faster (about 30-40%).
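For readers unfamiliar with the quantization step described above, here is a minimal, self-contained Python sketch of quantile-based int8 scalar quantization. This is my own simplification for illustration, not Lucene's actual implementation (which differs in detail, e.g., per-segment confidence intervals); all function names and parameters here are made up:

```python
import numpy as np

def quantize_int8(vectors, lower_q=0.001, upper_q=0.999):
    """Quantile-based scalar quantization: clip float32 values to the
    [lower_q, upper_q] quantile range, then map linearly onto 0..127.
    (Hypothetical sketch, not Lucene's code.)"""
    lo = float(np.quantile(vectors, lower_q))
    hi = float(np.quantile(vectors, upper_q))
    scale = 127.0 / (hi - lo)
    q = np.clip(np.round((vectors - lo) * scale), 0, 127).astype(np.int8)
    return q, lo, scale

def dequantize(q, lo, scale):
    """Approximate reconstruction of the original floats."""
    return q.astype(np.float32) / scale + lo

# Toy usage: quantize random "embeddings" and check reconstruction error.
rng = np.random.default_rng(0)
vecs = rng.normal(size=(1000, 64)).astype(np.float32)
q, lo, scale = quantize_int8(vecs)
mean_abs_err = float(np.mean(np.abs(dequantize(q, lo, scale) - vecs)))
```

The extra work at index time is just the two quantile passes plus writing the int8 array, which matches the observation that flush overhead is small; the payoff comes later, when distance computations and merges operate on 4x-smaller vectors.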
@benwtrent merging is difficult to benchmark because I get high variance in running times... my diagnosis is that running time is idiosyncratically dependent on which segments get selected for merging... is this a convincing explanation?
@lintool I understand :) I am happy to see that effectiveness is still, well, effective.
Hrm... update: trying the same experiments, but with OpenAI embeddings. Getting errors:
@benwtrent any ideas? cc/ @ChrisHegarty @jpountz How should I start debugging?
Do these warnings result in indexing errors, or are they "just" polluting the output logs?
We're still having issues with
Ref: #2314

Ref: #2318

No further follow-up; closing.
Follow-up to this: #2288 (comment)
It'd be great to benchmark HNSW, reproducing the experiments here: Vector Search with OpenAI Embeddings: Lucene Is All You Need. arXiv:2308.14963.
It's actually quite simple; we've gotten it down to a single command:
See: https://github.com/castorini/anserini/blob/master/docs/regressions/regressions-msmarco-passage-openai-ada2.md
@jpountz @benwtrent @ChrisHegarty Do any of you have cycles to try it out?