CS6200 Information Retrieval, Homework 1: Retrieval Models


Objective

  • Introduction to elasticsearch: one of the many available commercial-grade indexes. These instructions will generally not spell out how to accomplish various tasks in elasticsearch; instead, you are encouraged to read the online documentation to familiarize yourself with this tool. Feel free to ask for help on Piazza or during office hours if you encounter any difficulties.
  • Implement and compare various retrieval systems using vector space models and language models. Explain how and why their performance differs.

This assignment involves writing two programs:

  1. A program to parse the corpus and index it with elasticsearch
  2. A query processor, which runs queries from an input file using a selected retrieval model

Task 1: Elasticsearch

  • Download and install elasticsearch, and the kibana plugin
  • Download IR_data/AP89_DATA.zip from Github.

Task 2: Document Indexing

Create an index of the downloaded corpus. The documents are found within the ap89_collection folder in the data .zip file. You will need to write a program to parse the documents and send them to your elasticsearch instance.

The corpus files are in a standard format used by TREC. Each file contains multiple documents. The format is similar to XML, but standard XML and HTML parsers will not work correctly. Instead, read the file one line at a time with the following rules:

  1. Each document begins with a line containing <DOC> and ends with a line containing </DOC>.
  2. The first several lines of a document’s record contain various metadata. You should read the <DOCNO> field and use it as the ID of the document.
  3. The document contents are between lines containing <TEXT> and </TEXT>.
  4. All other file contents can be ignored.
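
As a rough illustration of these rules, here is a minimal parsing sketch in Python; the file encoding, folder path, and helper names are assumptions, and your own parser may be structured differently.

import glob

def parse_trec_file(path):
    # Yield (docno, text) pairs from one TREC-style file containing many documents.
    docno, text_lines, in_text = None, [], False
    with open(path, encoding="latin-1") as f:  # AP89 files are not UTF-8; latin-1 is an assumption
        for line in f:
            line = line.strip()
            if line.startswith("<DOCNO>"):
                # e.g. "<DOCNO> AP890101-0001 </DOCNO>"
                docno = line.replace("<DOCNO>", "").replace("</DOCNO>", "").strip()
            elif line == "<TEXT>":
                in_text = True
            elif line == "</TEXT>":
                in_text = False
            elif line == "</DOC>":
                yield docno, " ".join(text_lines)
                docno, text_lines, in_text = None, [], False
            elif in_text:
                text_lines.append(line)

# Example usage; the folder path is hypothetical.
for path in glob.glob("ap89_collection/*"):
    for docno, text in parse_trec_file(path):
        pass  # send (docno, text) to your elasticsearch instance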

Make sure to index term positions: you will need them later. You are also free to add any other fields to your index for later use. This might be the easiest way to get particular values used in the scoring functions. However, if a value is provided by elasticsearch then we encourage you to retrieve it using the elasticsearch API rather than calculating and storing it yourself.
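
One possible way to satisfy the term-position requirement is to store term vectors with positions in the mapping, as in the sketch below. The index name "ap89", the field name "text", and the analyzer choice are assumptions, and exact parameter names vary a little across elasticsearch-py client versions.

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # adjust host/port for your setup

# "term_vector": "with_positions" keeps per-document positions retrievable
# later through the termvectors API.
es.indices.create(
    index="ap89",
    body={
        "settings": {"number_of_shards": 1, "number_of_replicas": 0},
        "mappings": {
            "properties": {
                "text": {
                    "type": "text",
                    "analyzer": "english",
                    "term_vector": "with_positions",
                }
            }
        },
    },
)

# Indexing one parsed document would then look roughly like:
# es.index(index="ap89", id=docno, body={"text": text})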

Task 3: Query execution

Write a program to run the queries in the file query_desc.51-100.short.txt, included in the data .zip file. You should run all queries (omitting the leading number) using each of the retrieval models listed below, and output the top 1000 results for each query to an output file. If a particular query has fewer than 1000 documents with a nonzero matching score, then just list whichever documents have nonzero scores.

You should write precisely one output file per retrieval model. Each line of an output file should specify one retrieved document, in the following format:

<query-number> Q0 <docno> <rank> <score> Exp

Where:

  • <query-number> is the number preceding the query in the query list
  • <docno> is the document number, from the <DOCNO> field (which we asked you to index)
  • <rank> is the document rank: an integer from 1-1000
  • <score> is the retrieval model’s matching score for the document
  • Q0 and Exp are entered literally
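
For concreteness, a minimal sketch of writing one results file in this format (the variable names are illustrative):

def write_trec_run(path, results_by_query):
    # results_by_query: {query_number: [(docno, score), ...]} already sorted by descending score.
    with open(path, "w") as out:
        for qno, ranked in results_by_query.items():
            for rank, (docno, score) in enumerate(ranked[:1000], start=1):
                out.write(f"{qno} Q0 {docno} {rank} {score} Exp\n")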

Your program will run queries against elasticsearch. Instead of using its built-in query engine, we will retrieve information such as TF and DF scores from elasticsearch and implement our own document ranking. It will be helpful to write a method which takes a term as a parameter and retrieves the postings for that term from elasticsearch. You can then easily reuse this method to implement the retrieval models.
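
One convenient building block, assuming the elasticsearch-py client and the mapping sketched in Task 2, is the termvectors API: it returns term frequencies and positions for one document, plus document frequencies when term_statistics is enabled, and its field_statistics section (sum_ttf, doc_count) is handy for computing the average document length. A postings list for a single term can then be assembled by running a match query for that term and reading these statistics for each hit. A sketch:

def term_stats(es, index, docno, field="text"):
    # Return {term: {"tf": ..., "df": ..., "positions": [...]}} for one document.
    resp = es.termvectors(
        index=index, id=docno, fields=[field],
        term_statistics=True, positions=True,
    )
    terms = resp["term_vectors"][field]["terms"]
    return {
        t: {
            "tf": info["term_freq"],
            "df": info.get("doc_freq"),  # present because term_statistics=True
            "positions": [tok["position"] for tok in info.get("tokens", [])],
        }
        for t, info in terms.items()
    }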

Implement the following retrieval models, using TF and DF scores from your elasticsearch index, as needed:

ES built-in

Use an ES "match" query on your indexed text field, e.g. {"match": {"body_text": "query keywords"}}. Its scores should be somewhat similar to BM25 scoring.
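
A sketch of issuing that query through the Python client; the index and field names are whatever you chose in Task 2, and es is an elasticsearch-py client instance.

def es_builtin_search(es, index, field, query_text, size=1000):
    # Let elasticsearch do the ranking; return (docno, score) pairs for the top hits.
    resp = es.search(
        index=index,
        body={"query": {"match": {field: query_text}}, "size": size},
    )
    return [(hit["_id"], hit["_score"]) for hit in resp["hits"]["hits"]]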

Okapi TF

This is a vector space model using a slightly modified version of TF to score documents. The Okapi TF score for term w in document d is as follows.

\text{okapi\_tf}(w,d) = \frac{tf_{w,d}}{tf_{w,d} + 0.5 + 1.5 \cdot \left( len(d) / avg(len(d)) \right)}

Where:

  • tf_{w,d} is the term frequency of term w in document d
  • len(d) is the length of document d
  • avg(len(d)) is the average document length for the entire corpus

The matching score for document d and query q is as follows.

tf(d,q) = \sum_{w \in q} \text{okapi\_tf}(w,d)
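
A direct translation of these two formulas into Python, assuming you have already collected per-document term frequencies (a sketch, not a prescribed implementation):

def okapi_tf_score(query_terms, doc_tfs, doc_len, avg_doc_len):
    # doc_tfs: {term: term frequency in this document}
    score = 0.0
    for w in query_terms:
        tf_wd = doc_tfs.get(w, 0)
        score += tf_wd / (tf_wd + 0.5 + 1.5 * (doc_len / avg_doc_len))
    return score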

TF-IDF

This is the second vector space model. The scoring function is as follows.

tfidf(d,q) = \sum_{w \in q} \text{okapi\_tf}(w,d) \cdot \log \frac{D}{df_w}

Where:

  • D is the total number of documents in the corpus
  • df_w is the number of documents which contain term w
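
A corresponding sketch, reusing the Okapi TF expression and adding the IDF factor (argument names are illustrative):

import math

def tfidf_score(query_terms, doc_tfs, doc_len, avg_doc_len, df, num_docs):
    # df: {term: document frequency}; num_docs is D, the corpus size.
    score = 0.0
    for w in query_terms:
        tf_wd = doc_tfs.get(w, 0)
        if tf_wd > 0 and df.get(w, 0) > 0:
            okapi = tf_wd / (tf_wd + 0.5 + 1.5 * (doc_len / avg_doc_len))
            score += okapi * math.log(num_docs / df[w])
    return score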

Okapi BM25

BM25 is a probabilistic retrieval model based on the binary independence model. Its matching score is as follows.

bm25(d,q) = \sum_{w \in q} \left[ \log\!\left( \frac{D + 0.5}{df_w + 0.5} \right) \cdot \frac{tf_{w,d} + k_1 \cdot tf_{w,d}}{tf_{w,d} + k_1 \left( (1-b) + b \cdot \frac{len(d)}{avg(len(d))} \right)} \cdot \frac{tf_{w,q} + k_2 \cdot tf_{w,q}}{tf_{w,q} + k_2} \right]

Where:

  • tf_{w,q} is the term frequency of term w in query q
  • k_1, k_2, and b are constants. You can use the values from the slides, or try your own.
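
A sketch of the summation above; the constant values shown are common defaults used for illustration, not values prescribed by the assignment:

import math

def bm25_score(query_tfs, doc_tfs, doc_len, avg_doc_len, df, num_docs,
               k1=1.2, k2=100.0, b=0.75):
    # query_tfs: {term: tf in the query}; doc_tfs: {term: tf in this document}.
    score = 0.0
    for w, tf_wq in query_tfs.items():
        tf_wd = doc_tfs.get(w, 0)
        if tf_wd == 0:
            continue
        idf = math.log((num_docs + 0.5) / (df[w] + 0.5))
        doc_part = (tf_wd + k1 * tf_wd) / (tf_wd + k1 * ((1 - b) + b * doc_len / avg_doc_len))
        query_part = (tf_wq + k2 * tf_wq) / (tf_wq + k2)
        score += idf * doc_part * query_part
    return score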

Unigram LM with Laplace smoothing

This is a language model with Laplace (“add-one”) smoothing. We will use maximum likelihood estimates of the query based on a multinomial model “trained” on the document. The matching score is as follows.

\text{lm\_laplace}(d,q) = \sum_{w \in q} \log p_{\text{laplace}}(w \mid d)

p_{\text{laplace}}(w \mid d) = \frac{tf_{w,d} + 1}{len(d) + V}

Where:

  • V is the vocabulary size – the total number of unique terms in the collection.
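
A sketch of this score; note that query terms absent from the document still contribute log(1 / (len(d) + V)):

import math

def lm_laplace_score(query_terms, doc_tfs, doc_len, vocab_size):
    # vocab_size: number of unique terms in the whole collection.
    return sum(
        math.log((doc_tfs.get(w, 0) + 1) / (doc_len + vocab_size))
        for w in query_terms
    )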

Unigram LM with Jelinek-Mercer smoothing

This is a similar language model, except that here we smooth a foreground document language model with a background model from the entire corpus.

\text{lm\_jm}(d,q) = \sum_{w \in q} \log p_{\text{jm}}(w \mid d)

p_{\text{jm}}(w \mid d) = \lambda \frac{tf_{w,d}}{len(d)} + (1 - \lambda) \frac{\sum_{d'} tf_{w,d'}}{\sum_{d'} len(d')}

Where:

  • \lambda \in (0,1) is a smoothing parameter which specifies the mixture of the foreground and background distributions.

Think carefully about how to efficiently obtain the background model here. If you wish, you can instead estimate the corpus probability using cf_w / V, where cf_w is the total frequency of term w across the corpus.
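
A sketch of this score using totals over the whole corpus for the background model; the lambda value is an arbitrary example, and terms with zero probability under both models are simply skipped here:

import math

def lm_jm_score(query_terms, doc_tfs, doc_len, corpus_tf, corpus_len, lam=0.7):
    # corpus_tf: {term: total frequency across all documents}; corpus_len: sum of all document lengths.
    score = 0.0
    for w in query_terms:
        p_fg = doc_tfs.get(w, 0) / doc_len if doc_len else 0.0
        p_bg = corpus_tf.get(w, 0) / corpus_len
        p = lam * p_fg + (1 - lam) * p_bg
        if p > 0:
            score += math.log(p)
    return score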

Task 4: Evaluation

A) Manually compare the top 10 documents returned by ES Built-In, TF-IDF, BM25, and LM Laplace for 5 queries specified by the TAs. Explain or speculate on the reasons for differences in the rankings.

B) Download trec_eval and use it to evaluate your results for each retrieval model:

To perform an evaluation, run:

trec_eval [-q] qrel_file results_file

With the -q option, trec_eval prints evaluation results for each individual query in addition to the summary averaged over all queries; without -q, you will see only the summary average. The trec_eval program provides a wealth of statistics about how well your results file did for those queries, including average precision, precision at various recall cut-offs, and so on.

You should evaluate using the QREL file named qrels.adhoc.51-100.AP89.txt, included in the data .zip file.

Create a document showing the following metrics:

  • The uninterpolated mean average precision numbers for all six retrieval models.
  • Precision at 10 and precision at 30 documents for all six retrieval models.

Be prepared to answer questions during your demo as to how model performance compares, why the best model outperformed the others, and so on.

Task 5: Pseudo-relevance Feedback (MS students only)

Pseudo-relevance Feedback

Implement pseudo-relevance feedback. The general algorithm is:

  1. Retrieve the top k documents using one of the above retrieval models.
  2. Identify terms in these documents which are distinctive to the documents.
  3. Add the terms to the query, and re-run the retrieval process. Return the final results.

It is up to you to devise a reasonable way to choose terms to add to the query. It doesn’t have to be complicated, but you should be prepared to explain and justify your approach.
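
As one possible shape for this, here is a sketch; the helpers run_query and term_weights are hypothetical callables you would supply, and summing per-document term weights over the top k documents is just one reasonable selection heuristic.

def pseudo_relevance_feedback(run_query, term_weights, query_terms, k=10, n_new=5):
    # run_query(terms) -> ranked [(docno, score), ...]
    # term_weights(docno) -> {term: weight in that document}, e.g. a tf-idf weight
    top_docs = [d for d, _ in run_query(query_terms)[:k]]
    totals = {}
    for d in top_docs:
        for term, w in term_weights(d).items():
            totals[term] = totals.get(term, 0.0) + w
    expansion = [t for t, _ in sorted(totals.items(), key=lambda kv: -kv[1])
                 if t not in query_terms][:n_new]
    return run_query(query_terms + expansion)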

Evaluate your results using trec_eval and include similar metrics with your submission.

Pseudo-relevance Feedback using ElasticSearch aggs “significant terms”

Use the ES "significant terms" aggregation separately on each query term (stem root) to get a list of related words. The words you want to add to the query are:
– related to more than one query term
– not stopwords
– high IDF
– other tricks you might need in order to only get interesting words
Add a few of these words to the query and rerun your models.

Below is an example of this API in Sense for query term “atom”:
GET /ap_dataset/document/_search
{
  "query": {
    "terms": { "TEXT": [ "atom" ] }
  },
  "aggregations": {
    "significantCrimeTypes": {
      "significant_terms": {
        "field": "TEXT"
      }
    }
  },
  "size": 0
}
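
A rough Python equivalent of the request above, plus one simple way to keep only words related to more than one query term. Whether significant_terms works directly on your text field depends on your mapping and elasticsearch version (you may need fielddata enabled, or the significant_text aggregation instead), so treat this as an assumption-laden sketch.

from collections import Counter

def significant_terms_for(es, index, field, term, size=20):
    # Mirrors the Sense example: documents matching one query term, aggregated.
    resp = es.search(
        index=index,
        body={
            "query": {"terms": {field: [term]}},
            "aggregations": {"sig": {"significant_terms": {"field": field, "size": size}}},
            "size": 0,
        },
    )
    return [bucket["key"] for bucket in resp["aggregations"]["sig"]["buckets"]]

def expansion_candidates(es, index, field, query_terms, stopwords):
    # Keep words suggested for more than one query term and not already in the query or stopword list.
    counts = Counter()
    for qt in query_terms:
        counts.update(set(significant_terms_for(es, index, field, qt)))
    return [w for w, c in counts.items()
            if c > 1 and w not in stopwords and w not in query_terms]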

Extra Credit

These extra problems are provided for students who wish to dig deeper into this project. Extra credit is meant to be significantly harder and more open-ended than the standard problems. We strongly recommend completing all of the above before attempting any of these problems.

No extra points will be awarded for EC assignments; they exist only to satisfy students motivated to do more than the regular classwork.

EC1: Bigram Language Models

Perform retrieval using a bigram language model. You should match bigrams from the query against bigrams in the document. You may smooth with a unigram language model, if you wish.

You should devise a matching score function for this task. Your matching score should assign smaller scores to documents when the two bigram terms appear at a greater distance from each other. Use term position information from elasticsearch to accomplish this. Be prepared to explain and justify your matching score function to the graders.
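
One illustrative way to combine bigram matching with a distance penalty, assuming you have per-document term positions (for example from the termvectors sketch in Task 3). The 1/gap penalty and the length normalization are arbitrary choices you would need to justify, not part of the assignment; positions_of is a hypothetical helper.

import math

def bigram_pair_score(pos1, pos2, doc_len):
    # pos1/pos2: positions of the two bigram terms in this document.
    if not pos1 or not pos2 or doc_len == 0:
        return 0.0
    total = 0.0
    for p1 in pos1:
        gap = min(abs(p2 - p1) for p2 in pos2)  # distance to the nearest occurrence of the second term
        total += 1.0 / max(gap, 1)              # adjacent terms get the full weight
    return math.log(1.0 + total / doc_len)

def bigram_lm_score(query_terms, positions_of, doc_len):
    # positions_of(term) -> list of positions in this document (hypothetical helper).
    return sum(
        bigram_pair_score(positions_of(t1), positions_of(t2), doc_len)
        for t1, t2 in zip(query_terms, query_terms[1:])
    )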

Evaluate your results using trec_eval and include similar metrics with your submission.

Submission checklist

  • Your source code for the indexer
  • Your source code for the query processor
  • A short report describing your implementation and comparing your scoring results, presented in tabular form. The evaluation results should also be included in the report.

Rubric (for undergraduate students)

  • 10 points: Documents indexed correctly
  • 10 points: Able to retrieve term and document statistics from elasticsearch
  • 50 points: Retrieval models implemented correctly
  • 15 points: Your evaluation results are reasonable, and you can adequately explain your results during the demo
  • 10 points: Pseudo-relevance feedback implemented correctly
  • 5 points: Your submitted report