INF6060: Information Retrieval
Coursework overview
• Assignment 1: Tasks and queries (10%).
• Assignment 2: Portfolio of evaluations with
recommendations (90%).
IMPORTANT: you MUST also read the Coursework Brief
and FAQ.
Assignment 2: Portfolio of evaluations with
recommendations (90%).
Three sections to assignment 2
1. Tasks and queries
2. Activities 1 - 4
3. Recommendations
Gratuitous photo of a cat!
Assignment 2: Tasks and queries (Section 1)
Tasks and queries
• What we are looking for: All tasks (simulated work tasks and search tasks) and queries used in the evaluations (guideline up to 100 words per simulated work task).
• Marks: 1 mark deducted from presentation if not included.
• Supported in: -
Reflection
• What we are looking for: A brief explanation of any changes made to the assessment 1 tasks and queries (guideline 100 words).
• Marks: 1 mark deducted from presentation if not included.
• Supported in: -
What we are looking for - tasks and queries
• Are the tasks and queries realistic? Can they be used to conduct a fair
test of the system?
• Have you taken on board assignment 1 feedback?
• Have you revised your tasks and queries as you have become more
knowledgeable about IR and the Sheffield University Website Search
System? HINT!
Assignment 2: Activities 1 - 4 (Section 2)
Activity 1: heuristic evaluation
• What we are looking for (minimum requirements): A table that lists the 10 Nielsen heuristics and provides a brief description of whether the search interface deals with the heuristic effectively for 1 SWT. A brief summary of the findings.
• Marks: 15
• Supported in: Week 7
Activity 2: text tokenisation and processing strategy
• What we are looking for (minimum requirements): A logical strategy for applying tokenisation to text documents to be indexed by the search system. You should justify any assumptions you make when generating this strategy.
• Marks: 15
• Supported in: Weeks 4 & 5
Activity 3: measuring retrieval system effectiveness
• What we are looking for (minimum requirements): You should submit 3 different queries for each of your SWTs (i.e. 6 in total) to the University search system and assess the objective performance of the system based on your chosen queries. This will require you to create relevance judgements for the results of each of your queries and to assess the results using appropriate performance metrics up to rank 10. You should state any assumptions you made when assessing retrieved documents for relevance.
• Marks: 15
• Supported in: Week 6
Activity 4: supporting the users’ search process
• What we are looking for (minimum requirements): A table that identifies features of the search system. Features should be categorised using Wilson (2011) and the purpose of each feature should be succinctly described. Brief description of how the user may carry out the two search tasks utilising the categorised features. You should structure this description around a search process model such as Sutcliffe & Ennis (1998). Short conclusion on how well the system supports the two task types.
• Marks: 15
• Supported in: Week 8
Activity 1: Heuristic Evaluation
Coursework brief
• A table that lists the 10 Nielsen heuristics and provides a brief
description of whether the search interface deals with the heuristic
effectively for 1 SWT.
• A brief summary of the findings.
Lecture week 7
• We practiced conducting a heuristic evaluation in class using the
Birmingham website search system (remember to use Sheffield for
your coursework).
Conducting your heuristic evaluation (I)
• You will be the evaluator
• Evaluate the search interface against Nielsen’s heuristics
• Use your SWT when conducting the evaluation
• Evaluate the search interface iteratively
• Pass 1: get a feel for the system and interaction
• Pass 2+: focus on specific elements (and how they fit with the overall
interaction)
Conducting a heuristic evaluation (II)
We ask you for: A table that lists the 10 Nielsen heuristics and provides
a brief description of whether the search interface deals with the
heuristic effectively for one SWT
• Identify positive and negative aspects of the design
• You can use screenshots to illustrate your findings (if you think it
helps)
Example table layout (Heuristic / Positives / Negatives):
1. Visibility of system status
• Positives: System indicates if additional / replacement terms were searched for (figure 1); more examples …
• Negatives: …
2. Match between system and real world
• Positives: …
• Negatives: …
Conducting a heuristic evaluation (III)
• We ask you for “a brief summary of the findings”.
• Think about the severity of the problem. You could refer to one of
these schemes:
• Frequency, impact and persistence of positives and negatives.
www.nngroup.com/articles/how-to-rate-the-severity-of-usability-problems
• Showstopper, Major issue, Irritant (Kirmani, 2008)
What we are looking for – heuristic evaluation
• How the search system performs for each heuristic (e.g. strengths
and weaknesses).
• Correct application of each heuristic
• Clear and well written summary. Could also include screenshots.
You may also wish to extend this evaluation so that you can make more
evidence-based recommendations in Section 3:
• More than one SWT, more than one device, or a comparison with another
university website
Activity 2: text tokenisation and processing
strategy (I)
Coursework brief
• A logical strategy for applying tokenisation to text documents to be indexed by the search system.
Lecture 5 RECAP - you practiced this already!
1. Enter a query into the Sheffield search engine based on one of your SWTs
2. Choose a single relevant document and copy and paste its first paragraph/first few sentences
3. Define tokenisation rules and apply them
4. Choose a set of 5 stopwords and apply them
5. Create 2 stemming rules and apply them
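To make this recap concrete, here is a minimal Python sketch of the kind of pipeline you might describe. The sample sentence, separator characters, five stopwords and two stemming rules are all illustrative assumptions, not a prescribed strategy; substitute your own rules and a paragraph from a document you actually retrieved.

```python
import re

# Illustrative assumptions only: choose and justify your own rules.
STOPWORDS = {"the", "of", "and", "to", "a"}        # step 4: a chosen set of 5 stopwords
STEMMING_RULES = [("ies", "y"), ("s", "")]         # step 5: 2 simple suffix-stripping rules

def tokenise(text):
    """Step 3: lowercase, split on any non-alphanumeric characters, drop empty tokens."""
    return [tok for tok in re.split(r"[^a-z0-9]+", text.lower()) if tok]

def remove_stopwords(tokens):
    """Step 4: remove the chosen stopwords."""
    return [tok for tok in tokens if tok not in STOPWORDS]

def stem(token):
    """Step 5: apply the first matching suffix rule, if any."""
    for suffix, replacement in STEMMING_RULES:
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)] + replacement
    return token

# Step 2: a made-up sentence standing in for text copied from a relevant document.
raw = "The University of Sheffield offers courses and scholarships to students."
tokens = [stem(t) for t in remove_stopwords(tokenise(raw))]
print(tokens)  # ['university', 'sheffield', 'offer', 'course', 'scholarship', 'student']
```

Showing the text before and after each step, as the print above does for the final output, is one way to give the concrete examples asked for below.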
What we are looking for
• Accurate application of rules
• Description of logical tokenisation rules (including term separation characters; treatment of case and
punctuation) with justification.
• Description/summary of additional processing steps (e.g., stopword removal)
• Concrete examples of how the strategy would transform a raw text document.
Activity 2: text tokenisation and processing
strategy (II)
Coursework brief:
• You should justify any assumptions you make when generating this
strategy
What we are looking for:
• A discussion of assumptions and their positive/negative implications.
Activity 3: measuring retrieval system
effectiveness
You should submit 3 different queries for each of your SWTs (i.e. 6 in total) to the University search
system and assess the objective performance of the system based on your chosen queries. This will
require you to create relevance judgements for the results of each of your queries and to assess the
results using appropriate performance metrics up to rank 10. You should state any assumptions you
made when assessing retrieved documents for relevance.
What we are looking for
• Correct use of metric(s) according to search task (e.g. a known-item query should be assessed with
Reciprocal Rank rather than P@10); Recall is not really appropriate here.
• Clear and reproducible methodology including engagement with the appropriate literature.
• Clear presentation of results in tabular form.
• Some commentary on how relevance judgements were made. Typically binary, but could be
multi-graded (highly relevant, partially relevant and not relevant); note that multi-graded
judgements do not work directly with P@10.
• Some analysis of different query types.
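As a worked illustration of the metrics, the Python sketch below computes Precision@10 and Reciprocal Rank from a list of binary relevance judgements for one query. The judgement list is invented purely for illustration; you would substitute the judgements you make for each of your six queries.

```python
def precision_at_k(judgements, k=10):
    """Precision@k: fraction of the top-k results judged relevant (1) rather than not (0)."""
    return sum(judgements[:k]) / k

def reciprocal_rank(judgements):
    """Reciprocal Rank: 1 / rank of the first relevant result, or 0 if none is relevant."""
    for rank, rel in enumerate(judgements, start=1):
        if rel:
            return 1 / rank
    return 0.0

# Invented binary judgements for the top 10 results of one query (1 = relevant, 0 = not relevant).
judgements = [0, 1, 1, 0, 0, 1, 0, 0, 0, 0]
print(precision_at_k(judgements))   # 0.3
print(reciprocal_rank(judgements))  # 0.5 (first relevant result at rank 2)
```

Reporting these values per query in a table, alongside the queries and the assumptions behind your relevance judgements, keeps the methodology reproducible.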
Activity 4: supporting the users’ search
process (I)
Coursework brief:
• A table that identifies features of the search system. Features should
be categorised using Wilson (2011) and the purpose of each feature
should be succinctly described.
Lecture week 8
• You made a start on this
Activity 4: supporting the users’ search
process (II)
Coursework brief:
• Brief description of how the user may carry out the two search tasks
utilising the categorised features. You should structure this description
around a search process model such as Sutcliffe & Ennis (1998).
• Short conclusion on how well the system supports the two task types.
Questions to think about:
• Did you use different search features for the two search tasks?
• Did the system perform better for different tasks?
• Were there features that you thought were missing?
• Were some stages of the search process better supported than others?
What we are looking for
• Table showing a good number of features based on Wilson’s
categories.