Abstract
Learning-to-rank (LTR) models are a central component of modern information access systems. Along core design dimensions (listwise context modeling, non-compensatory scoring, permutation invariance, and intrinsic interpretability), existing LTR architectures typically address only a subset of these properties. In contrast, classical decision-theoretic models such as Tversky’s elimination-by-aspects exhibit inductive biases that naturally align with all four dimensions, but are not directly applicable in modern, trainable ranking settings. This thesis investigates whether ideas from elimination-by-aspects can be reformulated into a differentiable learning-to-rank model. The result is d-EBA, a listwise ranking architecture that translates the stepwise logic of elimination-by-aspects into an iterative, differentiable scoring process over continuous feature representations. By design, d-EBA is permutation-invariant, models listwise context, exhibits non-compensatory behavior across scoring steps, and remains intrinsically interpretable. An empirical evaluation on a music ranking task shows that d-EBA achieves competitive ranking performance while exposing model-internal quantities that support analysis of ranking dynamics. Overall, this work demonstrates that competitive ranking performance, intrinsic interpretability, listwise context modeling, and non-compensatory behavior need not be mutually exclusive, and provides a foundation for future research at the intersection of human decision theory and machine learning-based ranking.
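To make the core idea concrete, the abstract's "iterative, differentiable scoring process" inspired by elimination-by-aspects can be illustrated with a minimal sketch. This is not the thesis's actual d-EBA architecture; all function names, shapes, and the specific soft-selection and thresholding choices below are hypothetical illustrations of how one might make EBA's stepwise, non-compensatory elimination differentiable.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def soft_eba_scores(items, aspect_logits, thresholds, temp=1.0):
    """Hypothetical soft elimination-by-aspects scoring sketch.

    items:         (n, d) feature matrix, one row per candidate item.
    aspect_logits: (steps, d) learnable logits; at each step a softmax
                   over features acts as a *soft* choice of one aspect.
    thresholds:    (steps,) learnable cutoffs for each elimination step.

    Each step multiplies every item's survival probability by a sigmoid
    of how far it exceeds the step's threshold on the chosen aspect.
    The multiplicative update is non-compensatory: a near-zero pass
    probability at one step cannot be fully offset later. Scoring each
    item independently also makes the output permutation-invariant.
    """
    n = items.shape[0]
    survival = np.ones(n)
    for t in range(aspect_logits.shape[0]):
        aspect = softmax(aspect_logits[t])       # soft one-hot over features
        value = items @ aspect                   # each item's value on that aspect
        pass_prob = 1.0 / (1.0 + np.exp(-(value - thresholds[t]) / temp))
        survival = survival * pass_prob          # soft elimination
    return survival                              # higher = ranked earlier

# Toy example: item 0 dominates item 1 on both features and survives more.
items = np.array([[2.0, 2.0],
                  [0.0, 0.0]])
aspect_logits = np.zeros((2, 2))                 # uniform aspect weights
thresholds = np.zeros(2)
scores = soft_eba_scores(items, aspect_logits, thresholds)
```

Note that this sketch scores items independently and therefore does not model listwise context, one of the properties the full d-EBA model additionally provides; it is only meant to show how hard, sequential elimination can be relaxed into differentiable multiplicative survival probabilities.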
Citation
Sara Steiner
Learning-to-rank with differentiable elimination-by-aspects
Master's thesis, Johannes Kepler University Linz, 2026.
BibTeX
@mastersthesis{SaraSteiner2026master-thesis,
title = {Learning-to-rank with differentiable elimination-by-aspects},
author = {Sara Steiner},
school = {Johannes Kepler University Linz},
year = {2026}
}