This paper proposes a new approach for estimating choice models in which the systematic part of the utility specification is divided into a knowledge-driven part and a data-driven part that learns a new representation from the available explanatory variables.
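A minimal sketch of the idea above: the systematic utility combines an interpretable linear term with a small neural-network term learned from additional explanatory variables, and the result feeds a multinomial-logit choice probability. All function and variable names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def systematic_utility(x_known, x_extra, beta, W1, W2):
    """Hypothetical hybrid utility: a linear, knowledge-driven term
    plus a small neural-network term learned from extra variables."""
    knowledge = x_known @ beta          # interpretable, theory-driven part
    hidden = np.tanh(x_extra @ W1)      # learned representation
    data_driven = hidden @ W2           # data-driven correction
    return knowledge + data_driven

def choice_probabilities(V):
    """Multinomial-logit probabilities over alternatives."""
    e = np.exp(V - V.max())             # subtract max for numerical stability
    return e / e.sum()

# Three alternatives, two knowledge-driven attributes (e.g. cost, time)
# and three extra explanatory variables feeding the learned part.
x_known = rng.normal(size=(3, 2))
x_extra = rng.normal(size=(3, 3))
beta = np.array([-1.0, -0.5])           # signs fixed by domain knowledge
W1 = rng.normal(scale=0.1, size=(3, 4))
W2 = rng.normal(scale=0.1, size=4)

V = systematic_utility(x_known, x_extra, beta, W1, W2)
P = choice_probabilities(V)
print(P)
```

The knowledge-driven coefficients `beta` stay directly interpretable, while the learned term absorbs non-linear effects of the remaining variables.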
This work uses discrete choice modeling to develop an optimization framework for interventions in several group-influence problems, namely maximizing agreement or disagreement and promoting a particular choice, all of which are NP-hard in general.
TasteNet-MNL can recover the underlying non-linear utility function and provide predictions and interpretations as accurate as the true model, whereas logit or random-coefficient logit models with misspecified utility functions yield large parameter bias and poor predictive accuracy.
Outliers in discrete choice response data may result from misclassification and misreporting of the response variable and from choice behaviour that is inconsistent with modelling assumptions (e.g. random utility maximisation). In the presence of outliers, standard discrete choice models produce biased estimates and suffer from compromised predictive accuracy. Robust statistical models are less sensitive to outliers than standard non-robust models. This paper analyses two robust alternatives to the multinomial probit (MNP) model. The two models are robit models whose kernel error distributions are heavy-tailed t-distributions to moderate the influence of outliers. The first model is the multinomial robit (MNR) model, in which a generic degrees of freedom parameter controls the heavy-tailedness of the kernel error distribution. The second model, the generalised multinomial robit (Gen-MNR) model, is more flexible than MNR, as it allows for distinct heavy-tailedness in each dimension of the kernel error distribution. For both models, we derive Gibbs samplers for posterior inference. In a simulation study, we illustrate the finite sample properties of the proposed Bayes estimators and show that MNR and Gen-MNR produce more accurate estimates than the non-robust MNP model if the choice data contain outliers. In a case study on transport mode choice behaviour, MNR and Gen-MNR outperform MNP by substantial margins in terms of in-sample fit and out-of-sample predictive accuracy. The case study also highlights differences in elasticity estimates across models.
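The probit-versus-robit distinction above can be sketched by simulating latent utilities with Gaussian kernel errors versus heavy-tailed t errors; this is only an illustration of the error assumption, not the paper's Gibbs sampler or estimators. The function name and parameter values are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_choices(V, nu=None, n=10_000):
    """Simulate choices from latent utilities U_j = V_j + eps_j.
    nu=None -> Gaussian kernel errors (probit-style);
    finite nu -> heavy-tailed t errors (robit-style, as in MNR)."""
    J = len(V)
    if nu is None:
        eps = rng.standard_normal((n, J))
    else:
        eps = rng.standard_t(nu, size=(n, J))
    U = V + eps                      # latent utilities per simulated agent
    return U.argmax(axis=1)          # each agent picks the max-utility option

V = np.array([0.5, 0.0, -0.5])       # systematic utilities of 3 alternatives
probit_like = simulate_choices(V)            # Gaussian kernel errors
robit_like = simulate_choices(V, nu=3)       # heavy-tailed kernel errors

share_probit = np.bincount(probit_like, minlength=3) / len(probit_like)
share_robit = np.bincount(robit_like, minlength=3) / len(robit_like)
print(share_probit, share_robit)
```

With heavy-tailed errors, extreme draws occur more often, so individual "outlier" choices against the systematic utilities are more plausible under the robit kernel than under the Gaussian one.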
It is proved that computing the Wasserstein distance between a discrete probability measure supported on two points and the Lebesgue measure on the standard hypercube is already #P-hard, and it is shown that smoothing the dual objective function is equivalent to regularizing the primal objective function.
The effectiveness of these methods on real-world choice data is demonstrated, showing, for example, that accounting for choice set confounding makes choices observed in hotel booking and commute transportation more consistent with rational utility maximization.
We provide a sharp identification region for discrete choice models where consumers' preferences are not necessarily complete even if only aggregate choice data is available. Behavior is modeled using an upper and a lower utility for each alternative so that non-comparability can arise. The identification region places intuitive bounds on the probability distribution of upper and lower utilities. We show that the existence of an instrumental variable can be used to reject the hypothesis that the preferences of all consumers are complete. We apply our methods to data from the 2018 mid-term elections in Ohio.
This study uses continuous vector representations, called embeddings, for encoding categorical or discrete explanatory variables with a special focus on interpretability and model transparency, and delivers state-of-the-art predictive performance, outperforming existing ANN-based models while drastically reducing the number of required network parameters.
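A minimal sketch of the embedding idea above: a categorical explanatory variable is mapped to a small dense vector via a learned lookup table instead of a one-hot encoding, shrinking the parameter count. The variable names, dimensions, and weights below are illustrative assumptions, not the study's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical example: a categorical variable (e.g. trip purpose with
# 5 levels) represented by learned 2-d embeddings instead of 5 one-hot
# columns. In practice the table E is trained jointly with the model.
n_levels, emb_dim = 5, 2
E = rng.normal(scale=0.1, size=(n_levels, emb_dim))   # embedding table

purpose = np.array([0, 3, 3, 1])     # observed category per observation
emb = E[purpose]                      # lookup -> dense (4, 2) representation

# The embedding feeds a linear utility alongside numeric attributes,
# so each utility weight vector needs 2 parameters instead of 5.
w = np.array([0.7, -0.4])
utility_contrib = emb @ w
print(emb.shape, utility_contrib.shape)
```

Because the embedding dimensions are continuous, they can also be inspected directly (e.g. plotted), which is what makes this encoding more transparent than an opaque hidden layer.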
This study first operationalizes computational fairness by equality of opportunity, then distinguishes between the bias inherent in data and the bias introduced by modeling, and introduces an absolute correlation regularization method, which is evaluated with synthetic and real-world data.
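An absolute correlation regularizer of the kind described above can be sketched as a penalty on the Pearson correlation between model outputs and a protected attribute, added to the training loss. The function name and its exact form are assumptions for illustration, not the study's implementation.

```python
import numpy as np

def abs_corr_penalty(pred, protected):
    """Hypothetical absolute-correlation regularizer: the magnitude of
    the Pearson correlation between model outputs and a protected
    attribute, to be added (scaled by some weight) to the loss."""
    p = pred - pred.mean()
    z = protected - protected.mean()
    denom = np.sqrt((p ** 2).sum() * (z ** 2).sum())
    if denom == 0:
        return 0.0                    # constant input: no correlation defined
    return abs(float((p * z).sum() / denom))

# Perfectly correlated outputs are maximally penalized...
high = abs_corr_penalty(np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0]))
# ...while outputs uncorrelated with the attribute incur no penalty.
low = abs_corr_penalty(np.array([1.0, -1.0, 1.0, -1.0]),
                       np.array([1.0, 1.0, -1.0, -1.0]))
print(high, low)
```

Taking the absolute value penalizes positive and negative dependence alike, which is what distinguishes this from simply discouraging one direction of bias.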
A scalable method for approximating the MNL likelihood of general partial rankings in polynomial time is developed, and the proposed methods achieve more accurate parameter estimation and a better fit to the data than conventional methods.
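For reference, the exact MNL (Plackett-Luce) ranking likelihood that the method above approximates can be written as a product of sequential choices: at each stage the top remaining item is chosen among those left. This sketch covers only the exact full-ranking case; the paper's contribution concerns partial rankings, where exact evaluation becomes intractable. Names and utilities are illustrative.

```python
import numpy as np

def plackett_luce_loglik(utilities, ranking):
    """Exact Plackett-Luce (sequential MNL) log-likelihood of a full
    ranking: at each stage, the top remaining item is chosen from the
    set of items not yet ranked."""
    ll = 0.0
    remaining = list(ranking)
    while len(remaining) > 1:
        v = np.array([utilities[i] for i in remaining])
        lse = v.max() + np.log(np.exp(v - v.max()).sum())  # stable logsumexp
        ll += v[0] - lse          # log-prob of picking the next-ranked item
        remaining.pop(0)
    return ll

u = {"a": 1.0, "b": 0.0, "c": -1.0}
best = plackett_luce_loglik(u, ["a", "b", "c"])   # ranking follows utilities
worst = plackett_luce_loglik(u, ["c", "b", "a"])  # ranking reverses them
print(best, worst)
```

With equal utilities, the log-likelihood of any ranking of 3 items reduces to log(1/3) + log(1/2) = -log 6, a handy sanity check; a partial ranking instead requires summing this product over all full rankings consistent with it, which is what motivates the polynomial-time approximation.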