Janardhan Rao Doppa, Alan Fern, Prasad Tadepalli.
Year: 2014, Volume: 15, Issue: 38, Pages: 1317–1350
We consider a framework for structured prediction based on search in the space of complete structured outputs. Given a structured input, an output is produced by running a time-bounded search procedure guided by a learned cost function, and then returning the least-cost output uncovered during the search. This framework can be instantiated for a wide range of search spaces and search procedures, and easily incorporates arbitrary structured-prediction loss functions. In this paper, we make two main technical contributions. First, we describe a novel approach to automatically defining an effective search space over structured outputs, which is able to leverage the availability of powerful classification learning algorithms. In particular, we define the limited-discrepancy search space and relate the quality of that space to the quality of learned classifiers. We also define a sparse version of the search space to improve the efficiency of our overall approach. Second, we give a generic cost function learning approach that is applicable to a wide range of search procedures. The key idea is to learn a cost function that attempts to mimic the behavior of conducting searches guided by the true loss function. Our experiments on six benchmark domains show that a small amount of search in the limited-discrepancy search space is often sufficient for significantly improving on state-of-the-art structured-prediction performance. We also demonstrate significant speed improvements for our approach using sparse search spaces with little or no loss in accuracy.
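To make the framework's outer loop concrete, the following is a minimal sketch of time-bounded search over complete outputs guided by a cost function, returning the least-cost output uncovered. All names (`search_best_output`, `initial_output`, `successors`) and the breadth-first strategy are illustrative assumptions, not the paper's specific instantiation; in the paper the successor function would be defined by the search space (e.g., the limited-discrepancy space built from a learned classifier) and the cost function would be learned.

```python
from collections import deque

def search_best_output(x, initial_output, successors, cost, budget):
    """Hypothetical outer loop: expand outputs breadth-first for at most
    `budget` expansions and return the least-cost complete output seen.

    x              -- the structured input
    initial_output -- function mapping x to a starting complete output
                      (e.g., a greedy classifier prediction)
    successors     -- function yielding neighboring complete outputs
                      (e.g., outputs that flip one label, a "discrepancy")
    cost           -- learned cost function over (input, output) pairs
    budget         -- bound on the number of node expansions (time bound)
    """
    best = initial_output(x)
    best_cost = cost(x, best)
    frontier = deque([best])
    expanded = 0
    while frontier and expanded < budget:
        y = frontier.popleft()
        expanded += 1
        for y_next in successors(x, y):
            c = cost(x, y_next)
            # Track the least-cost output uncovered so far.
            if c < best_cost:
                best, best_cost = y_next, c
            frontier.append(y_next)
    return best

# Toy usage: binary labelings of length 3; the "cost" is Hamming
# distance to a hidden target, standing in for a learned cost function.
if __name__ == "__main__":
    target = (1, 0, 1)
    def init(x):
        return (0, 0, 0)
    def succ(x, y):
        # Flip one position at a time (single-discrepancy neighbors).
        return [tuple(1 - b if j == i else b for j, b in enumerate(y))
                for i in range(len(y))]
    def cost(x, y):
        return sum(a != b for a, b in zip(y, target))
    print(search_best_output(None, init, succ, cost, budget=10))
```

A small search budget suffices here because each successor differs from its parent by a single flip, mirroring the abstract's observation that a small amount of search in a well-designed space can go a long way.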