Algorithms are currently not grouped by their underlying approach, which makes comparisons between fundamentally different methods unreliable. For example, comparing hnswlib with Glass is misleading: hnswlib is purely graph-based, while Glass combines graph search with quantization.
Possible Solution:
- Require each author to specify the underlying algorithm type when submitting a method (e.g., graph-based, quantization-based, tree-based, hashing, neural, hybrid).
- Add this classification as metadata to the benchmark configuration (a minimal sketch follows this list).
- Extend the website visualization with a selector to filter and compare results by approach type.
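As a rough illustration, the classification could be a small enum attached to each algorithm's metadata and consumed by the results page when filtering. Everything below is hypothetical: the names `Approach`, `ALGORITHMS`, and `filter_by_approach` are not part of the existing benchmark code, the category assignments are only examples, and in practice the field would likely live in the existing YAML configuration rather than in Python.

```python
from enum import Enum

# Hypothetical taxonomy; the actual category names would be decided by maintainers.
class Approach(str, Enum):
    GRAPH = "graph"
    QUANTIZATION = "quantization"
    TREE = "tree"
    HASHING = "hashing"
    NEURAL = "neural"
    HYBRID = "hybrid"

# Example metadata as it might appear in the benchmark configuration;
# the `approach` field is the proposed addition, the assignments are illustrative.
ALGORITHMS = {
    "hnswlib": {"approach": Approach.GRAPH},
    "glass": {"approach": Approach.HYBRID},
    "annoy": {"approach": Approach.TREE},
}

def filter_by_approach(results, approach):
    """Keep only results whose algorithm belongs to the selected approach,
    mirroring what the proposed website selector would do."""
    return [
        r for r in results
        if ALGORITHMS.get(r["algorithm"], {}).get("approach") == approach
    ]

# Usage: restrict a comparison to graph-based methods only.
results = [
    {"algorithm": "hnswlib", "recall": 0.95, "qps": 12000},
    {"algorithm": "glass", "recall": 0.97, "qps": 15000},
]
print(filter_by_approach(results, Approach.GRAPH))  # only the hnswlib entry remains
```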
This would support more transparent evaluations, fairer comparisons, and clearer interpretation of results.