
Why is the throughput of the same algorithm an order of magnitude higher with raft-ann-bench than with ann-benchmarks? #532

Open
zjx1230 opened this issue Jun 7, 2024 · 0 comments

zjx1230 commented Jun 7, 2024

Hello, everyone! I recently benchmarked hnswlib with both raft-ann-bench and ann-benchmarks, and found that at the same recall, the QPS reported by raft-ann-bench is about an order of magnitude higher than that reported by ann-benchmarks. Is this due to the overhead of Python calling into the C++ library when ann-benchmarks measures query time?

Environment: x86_64, CentOS 7, 96 cores

raft-ann-bench command:
python -m raft-ann-bench.run --force --dataset glove-100-inner --algorithms hnswlib -m throughput

ann-benchmarks command:
python run.py --local --algorithm hnswlib --batch
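
As a point of comparison, below is a minimal standalone sketch that times hnswlib's own Python bindings directly on glove-100-inner, once as a multithreaded batch query and once as a single-threaded per-query loop. The file name, index parameters (M, ef_construction, ef), and thread count are placeholders and are not the exact settings used by either benchmark; it only illustrates how much of the gap can come from batch parallelism versus Python call overhead.

# Sketch: time hnswlib directly, outside both benchmark harnesses.
# Assumes glove-100-inner.hdf5 in the ann-benchmarks HDF5 layout
# ("train" / "test" datasets, 100-dim vectors, inner-product metric).
import time

import h5py
import hnswlib
import numpy as np

with h5py.File("glove-100-inner.hdf5", "r") as f:
    train = np.asarray(f["train"], dtype=np.float32)
    test = np.asarray(f["test"], dtype=np.float32)

dim = train.shape[1]
index = hnswlib.Index(space="ip", dim=dim)  # inner product, matching glove-100-inner
index.init_index(max_elements=train.shape[0], ef_construction=200, M=16)  # placeholder build params
index.add_items(train)
index.set_ef(100)  # placeholder query-time ef

# Batch query: hnswlib parallelizes internally across threads,
# similar in spirit to the benchmarks' batch / throughput modes.
index.set_num_threads(96)
start = time.perf_counter()
index.knn_query(test, k=10)
elapsed = time.perf_counter() - start
print(f"batch QPS: {test.shape[0] / elapsed:.1f}")

# Single-threaded, one-query-at-a-time loop: closer to a per-query
# latency measurement, and includes one Python->C++ call per query.
index.set_num_threads(1)
start = time.perf_counter()
for q in test:
    index.knn_query(q, k=10)
elapsed = time.perf_counter() - start
print(f"sequential QPS: {test.shape[0] / elapsed:.1f}")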
