How can we more systematically assess whether an information retrieval system (e.g., a search engine) delivers an engaging user experience? This is the question driving a research project by PhD student Mengdie Zhuang and her supervisors, Professor Elaine Toms and Dr. Gianluca Demartini.
To date, search systems have been evaluated using a range of isolated measures and metrics, mostly drawn from computer log files that record keystrokes and mouse clicks. Some systems are instead assessed after use with a questionnaire or interview, but when the evaluation happens only at the end of a session, a system that delivers a negative user experience has no opportunity to correct course. The research team is examining how patterns in those logged actions might be used to predict whether a user is likely to express a positive or negative assessment, combining both types of evaluation used to date. The potential impact of this research is a shorter and simpler evaluation process.
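To make the idea concrete, the sketch below shows one way log-derived session features could be used to predict a post-session satisfaction label. This is not the authors' method; the feature names (query count, click count, mean dwell time), the synthetic data, and the logistic regression model are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' method): predicting a
# session-level satisfaction label from simple log-derived features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical per-session features: [num_queries, num_clicks, mean_dwell_seconds]
n_sessions = 200
X = np.column_stack([
    rng.poisson(4, n_sessions),          # queries issued in the session
    rng.poisson(6, n_sessions),          # result clicks
    rng.gamma(2.0, 15.0, n_sessions),    # mean dwell time on clicked results
])

# Synthetic "questionnaire" label: longer dwell and fewer query reformulations
# stand in for a positive post-session assessment.
y = ((X[:, 2] > 25) & (X[:, 0] < 6)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In practice such a predictor would run on behavioural signals collected during the session, so a likely negative assessment could be flagged before the user walks away, rather than discovered in a post-session questionnaire.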
The first output from this research is the paper "The Relationship Between User Perception and User Behaviour in Interactive Information Retrieval Evaluation", which won the Best Paper Award at the European Conference on Information Retrieval (ECIR), held in Padova, Italy, March 20-23, 2016. ECIR, in its 38th year, is the premier European conference for new research in information retrieval.