Preference Elicitation via Theory Refinement
Peter Haddawy, Vu Ha, Angelo Restificar, Benjamin Geisler, John Miyamoto; 4(Jul):317-337, 2003.
Abstract
We present an approach to elicitation of user preference models in which assumptions can be used
to guide but not constrain the elicitation process. We demonstrate that when domain knowledge is
available, even in the form of weak and somewhat inaccurate assumptions, significantly less data
is required to build an accurate model of user preferences than when no domain knowledge is
provided. This approach is based on the KBANN (Knowledge-Based Artificial Neural Network)
algorithm pioneered by Shavlik and Towell (1989). We demonstrate this
approach through two examples: one involving preferences under certainty, and the other involving
preferences under uncertainty. In the case of certainty, we show how to encode assumptions
concerning preferential independence and monotonicity in a KBANN network, which can be trained
using a variety of preferential information including simple binary classification. In the case of
uncertainty, we show how to construct a KBANN network that encodes certain types of dominance
relations and attitude toward risk. The resulting network can be trained using answers to standard
gamble questions and can be used as an approximate representation of a person's preferences. We
empirically evaluate our claims by comparing the KBANN networks with simple backpropagation
artificial neural networks in terms of learning rate and accuracy. For the case of uncertainty,
the answers to standard gamble questions used in the experiment are taken from an actual medical
data set first used by Miyamoto and Eraker (1988). In the case of
certainty, we define a measure of the degree to which a set of preferences violates a domain theory, and examine
the robustness of the KBANN network as this measure of domain theory violation varies.
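To make the KBANN idea concrete, the following is a minimal, hypothetical sketch and not the construction used in the paper: a weak monotonicity assumption ("more of every attribute is better") is compiled into the initial weights of a simple preference-scoring model, which is then refined by gradient descent on pairwise preference data. The attribute count, the synthetic "true" utility, and the learning rate are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_attrs = 3

# Domain-theory initialization: equal positive weights encode a crude
# monotonicity rule; small noise lets training break ties between attributes.
w = np.ones(n_attrs) + 0.01 * rng.standard_normal(n_attrs)

# Synthetic pairwise preference data: (preferred option, rejected option).
# The "true" utility weights attributes unequally, so the initial rule is
# only approximately correct -- a weak, somewhat inaccurate assumption.
true_w = np.array([2.0, 1.0, 0.2])
pairs = []
for _ in range(200):
    a, b = rng.random(n_attrs), rng.random(n_attrs)
    pairs.append((a, b) if a @ true_w > b @ true_w else (b, a))

# Refinement: gradient descent on a logistic pairwise-preference loss,
# starting from the knowledge-initialized weights (the KBANN recipe of
# compiling rules into a network and then training it on data).
lr = 0.5
for _ in range(200):
    grad = np.zeros(n_attrs)
    for a, b in pairs:
        p = 1.0 / (1.0 + np.exp(-(a - b) @ w))  # P(a preferred over b)
        grad += (p - 1.0) * (a - b)             # gradient of -log p
    w -= lr * grad / len(pairs)

print("refined attribute weights:", w)  # moves toward the true trade-offs

Starting from the rule-based initialization rather than random weights is what allows the model to reach a given accuracy with fewer training preferences, which is the effect the abstract reports empirically.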