Dissertation Defense: Jiyun Luo

Candidate Name: Jiyun Luo

Major: Computer Science

Advisor: Grace Hui Yang, Ph.D.

Title: Dynamic Search Models and Applications

Dynamic search is an information retrieval task that involves a sequence of queries serving a complex information need (e.g., searching for a one-week tour plan in Italy). It is characterized by rich user-system interactions and by temporal dependencies among queries and consecutive user behaviors. Dynamic search is a trial-and-error process, which matches well with the family of Reinforcement Learning (RL) algorithms: an RL agent learns from repeated, varied attempts that continue until success, drawing on its dynamic interactions with the world. These similarities inspire me to model dynamic search using RL frameworks.

In particular, I model dynamic search as a dual-agent stochastic game, one of the standard variants of the Partially Observable Markov Decision Process (POMDP), in which the user agent and the search engine agent work together to jointly maximize their long-term rewards. In this framework, users' search statuses, such as exploitation, exploration, and struggle, are modeled as hidden states, which can only be estimated from user interaction data. In each search iteration, one search algorithm is picked from a set of candidates to maximize a reward function. My work provides a general framework for modeling dynamic search and enables the use of Reinforcement Learning algorithms to optimize retrieval results. Experiments on the Text REtrieval Conference (TREC) 2011–2014 Session Track datasets show statistically significant improvements over state-of-the-art dynamic search algorithms.
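As a rough illustration of this decision loop, the sketch below maintains a belief over hidden search statuses, updates it from observed user behaviors, and picks the candidate search algorithm with the highest expected reward. All state names, observation probabilities, and reward values here are hypothetical placeholders, not the actual models from the thesis.

```python
# A minimal, illustrative sketch of a POMDP-style dynamic search loop.
# Hidden states, candidate algorithms, and probability tables are made up.
import numpy as np

HIDDEN_STATES = ["exploitation", "exploration", "struggle"]
ACTIONS = ["bm25", "relevance_feedback", "query_expansion"]  # candidate algorithms

# P(observation | hidden state): rows = states, columns = observed behaviors
# (0 = reformulated query, 1 = clicked a result). Values are illustrative.
OBS_MODEL = np.array([[0.2, 0.8],
                      [0.6, 0.4],
                      [0.9, 0.1]])

# Expected immediate reward R(state, action); also illustrative.
REWARD = np.array([[0.7, 0.5, 0.2],
                   [0.3, 0.6, 0.7],
                   [0.1, 0.4, 0.8]])

def update_belief(belief, observation):
    """Bayesian belief update: b'(s) is proportional to P(o | s) * b(s)."""
    posterior = OBS_MODEL[:, observation] * belief
    return posterior / posterior.sum()

def choose_action(belief):
    """Pick the search algorithm maximizing expected reward under the belief."""
    expected = belief @ REWARD            # shape: (num_actions,)
    return ACTIONS[int(np.argmax(expected))]

belief = np.full(len(HIDDEN_STATES), 1 / len(HIDDEN_STATES))  # uniform prior
for obs in [0, 0, 1]:                     # a toy interaction trace
    belief = update_belief(belief, obs)
    print(choose_action(belief), belief.round(2))
```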

The design of states, actions, and rewards is quite flexible when applying POMDPs to dynamic search. In the thesis, I also examine the design choices available in related work and compare their retrieval accuracy and efficiency. My analysis reveals the effects of these choices and enables me to recommend practical designs. The findings again confirm that modeling dynamic search using POMDPs is promising; however, they also show that this approach is computationally demanding.

To improve the efficiency of the above dynamic search models, I propose another RL framework, direct policy learning, which learns policies for the best search engine actions directly from the observed user and search engine interactions via gradient descent. The proposed framework greatly reduces model complexity compared with the POMDP framework. Its design is also flexible, accommodating a wide range of features that describe the rich interactions in dynamic search. The framework is shown to be highly effective and efficient on the TREC Session Tracks.
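One common way to realize direct policy learning is a REINFORCE-style policy gradient. The sketch below, a minimal example with hypothetical features, actions, and reward signal rather than the exact method from the thesis, trains a softmax policy over search engine actions by gradient ascent on observed rewards.

```python
# A REINFORCE-style sketch: a softmax policy over search engine actions,
# parameterized by interaction features and updated by gradient ascent on
# observed rewards. Features, actions, and the reward are made up here.
import numpy as np

rng = np.random.default_rng(0)
NUM_FEATURES, NUM_ACTIONS = 4, 3              # hypothetical dimensions
theta = np.zeros((NUM_FEATURES, NUM_ACTIONS))

def policy(features):
    """Softmax distribution over candidate actions given interaction features."""
    logits = features @ theta
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def reinforce_step(features, action, reward, lr=0.1):
    """One policy-gradient update: theta += lr * reward * grad log pi(action)."""
    global theta
    probs = policy(features)
    grad_log = -np.outer(features, probs)     # gradient of log softmax, all actions
    grad_log[:, action] += features           # extra term for the action taken
    theta += lr * reward * grad_log

# Toy training loop over simulated search iterations.
for _ in range(100):
    x = rng.normal(size=NUM_FEATURES)         # stand-in for observed interactions
    a = rng.choice(NUM_ACTIONS, p=policy(x))  # sample a search engine action
    r = 1.0 if a == int(x[0] > 0) else 0.0    # toy reward signal
    reinforce_step(x, a, r)
```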

In addition to the dynamic search frameworks, I propose predictive models to detect users' struggling states in search. Most prior work uses effort-based features to detect user struggle; however, recent studies suggest that there might be a gap between user effort and user struggle. In this work, I take a psychological perspective to bridge this gap, demonstrating that after removing the biases introduced by different searcher motivations, user struggle can be tightly connected to user effort.
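A minimal sketch of this idea, assuming hypothetical effort features and motivation groups: normalize effort features within each motivation group to remove the motivation bias, then fit a standard classifier to predict struggle from the de-biased features.

```python
# An illustrative de-biasing pipeline: z-score effort features within each
# searcher-motivation group, then classify struggle from the de-biased
# features. All features, groups, and labels below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
# Hypothetical effort features, e.g. query count, dwell time, click count.
effort = rng.gamma(shape=2.0, scale=1.0, size=(n, 3))
motivation = rng.integers(0, 2, size=n)            # 0 = casual, 1 = motivated
effort[motivation == 1] *= 1.8                     # motivated users spend more effort
struggle = (effort[:, 0] / (1 + motivation) > 2.0).astype(int)  # toy labels

# Remove motivation bias: normalize effort within each motivation group.
debiased = effort.copy()
for g in (0, 1):
    mask = motivation == g
    debiased[mask] = (effort[mask] - effort[mask].mean(axis=0)) / effort[mask].std(axis=0)

clf = LogisticRegression().fit(debiased, struggle)
print(f"training accuracy: {clf.score(debiased, struggle):.2f}")
```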

Finally, I implement a dynamic search tool to support the annotation task for the TREC Dynamic Domain Tracks. This serves as a first attempt at putting my dynamic search algorithms and struggle detection into practice, and the results show that the algorithms are also effective in real-world settings.

The research in this thesis is among the first in this field and serves as one step towards solving dynamic search and developing real-world applications.

Tuesday, April 17, 11:00am to 2:00pm

Regents Hall, 551
3700 O St. NW

Event Type

Academic Events, Dissertation Defense

Departments

Georgetown College, Computer Science, Graduate School of Arts and Sciences

Cost

Free

Open to the public and the press?

Yes

Event Contact Name

Jeremy Fineman
