Marks: Worth 50 marks, and 25% of all marks for the unit
Due Date: Week 7 – lecture date, at 23:55
Extension: An extension may be granted under some circumstances. A special consideration application form must be submitted. Please refer to the university webpage on special consideration.
Lateness: For all assessment items handed in after the official due date, and without an agreed extension, a 10% penalty applies to the student's mark for each day after the due date (including weekends), for up to 10 days. Assessment items handed in more than 10 days late without special consideration will not be accepted.
Authorship: This is an individual assessment. All work must be your own. All submissions will be put through Turnitin, which makes plagiarism easy for us to identify.
Submission: Three files: one PDF discussion report, one Jupyter notebook, and a PDF print of the notebook. The three files must be submitted via Moodle. All files will go through Turnitin for plagiarism detection.
Programming language: Python in Jupyter
Part 1: Text Classification
The content has been gathered from the popular academic website arXiv.org for articles tagged as computer science content (though some of these also fall under mathematics or physics categories), spanning 2016-2024. You are given 3 CSV files: train/dev/test sets. The fields in the CSV files are:
- Title: the full title
- Abstract: the full abstract
- InformationTheory: a "1" if it is classified as an Information Theory article, otherwise "0"
- ComputerVision: a "1" if it is classified as a Computer Vision article, otherwise "0"
- ComputationalLinguistics: a "1" if it is classified as a Computational Linguistics article, otherwise "0"
The three classes are ComputationalLinguistics, InformationTheory and ComputerVision. These can occur in any combination, so an article could be all three at once, two, one or none. Your job is to build a text classifier that predicts the class ComputationalLinguistics using the Abstract field. Then repeat the same experiment using only the Titles. You should train different text classifiers using different configurations for this binary prediction task. The variations we would like to consider are:
1. Task: 1 binary classification task (ComputationalLinguistics vs. Other two classes)
2. Input: use Abstract, and Titles alone (separate configurations)
3. Algorithm: use 2 different algorithms from the tutorials: the RNN, plus one of the statistical classifiers (logistic regression, SVM, etc.)
4. Data size: train on the first 1000 cases in the training set, and then on the full training set.
5. Pre-processing: choose a data pre-processing procedure (e.g., lemmatization, stemming, removing stop words) and stick with it in all your experiments.
So this makes 2 (abstract vs. title) by 2 (algorithms) by 2 (training sizes) = 8 different configurations.
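As a minimal sketch of one such configuration (TF-IDF features with a logistic regression classifier on the Abstract field), the pipeline could look as follows. The CSV file name and the toy stand-in data are assumptions, not given in the brief; the column names come from the field list above.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# In the real experiment, load the provided CSVs, e.g.:
#   train = pd.read_csv("train.csv")   # file name is an assumption
# A tiny toy frame stands in here so the sketch runs end to end.
train = pd.DataFrame({
    "Abstract": [
        "We propose a neural model for machine translation.",
        "Channel capacity bounds for noisy coding schemes.",
        "A transformer approach to syntactic parsing.",
        "Rate-distortion analysis of lossy source coding.",
    ],
    "ComputationalLinguistics": [1, 0, 1, 0],
})

X_text = train["Abstract"]            # swap in train["Title"] for the title runs
y = train["ComputationalLinguistics"]

# One fixed pre-processing choice (here: lowercasing + stop-word removal)
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(X_text)

clf = LogisticRegression(max_iter=1000)
clf.fit(X, y)
preds = clf.predict(X)
```

For the data-size variation, slice the frame (e.g. `train.iloc[:1000]`) before fitting; for the RNN configurations, substitute the model from the tutorial while keeping the same pre-processing.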
For each configuration, test the algorithm on the provided test set (note: a model trained on the Abstracts should be tested using only the Abstracts of the test set; similarly, a model trained on the Titles should be tested using only the Titles of the test set) and report the following results in your notebook:
- F1, precision, recall, accuracy
- precision-recall curve
Be creative about how you assemble the different values and plot the curves. The discussion of these results should be in its own 2-page discussion section in the PDF report. How well did the two algorithms work under the different data-size conditions, when and why? How did the model trained on titles compare with the one trained on abstracts? What insights do the various metrics and plots give you?
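A sketch of computing these metrics with scikit-learn; the label and score arrays below are toy placeholders standing in for a trained model's test-set output.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, precision_recall_curve)

# Toy stand-ins: in practice these come from the model and the test set.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.8, 0.4, 0.6, 0.1, 0.7, 0.3])  # predicted probabilities
y_pred = (y_score >= 0.5).astype(int)  # hard labels at a 0.5 threshold

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))

# Points for the precision-recall curve (plot with matplotlib in the notebook)
precision, recall, thresholds = precision_recall_curve(y_true, y_score)
```

Note that `precision_recall_curve` needs continuous scores (probabilities or decision-function values), not hard 0/1 predictions, so keep the model's raw outputs for the plot.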
Part 2: Topic Modelling
The data used is the training data from Part 1. Your job is to perform appropriate text pre-processing and preparation, and then design two different variations for running LDA using the gensim.models.LdaModel() function call and pre-processing steps such as those given in the tutorial. Choose pre-processing and parameters that produce informative model outputs. Choices you might make in differentiating the two variations are:
- different pre-processing of text or vocabulary
- use of bi-grams or not
- different numbers of topics (e.g., K=10, K=40)
Now run these two variations on the first 1000 and the first 20,000 articles in the training data set. This means there are 2 by 2 = 4 different configurations for the LDA runs. Then make visualisations of some kind in the notebook; these should allow you to analyse and interpret the output of the topic models.
The actual discussion (analysis and interpretation) of the results should not appear in the notebook but in the separate PDF discussion report: a 2-page discussion giving your analysis of the findings presented in the notebook output. What sorts of topics do you see?