You are expected to submit a collection of source code, documentation and test results related to the design of a chatbot. The chatbot must be modular and demonstrate the use of several major AI techniques, as well as their integration into one user-friendly system with a common topic.
Submissions and Deadlines: The coursework is to be submitted on NOW DropBox in two stages over the academic year as detailed in the following table.
1 TBC (in term 1) Tasks a and b: Chatbot with rule-based and similarity-based conversation and logical reasoning components
2 TBC (in term 2) Tasks c and d: Chatbot extended with image classification component (local and cloud-based)
Chatbot topic: Make sure that the chatbot topic, as well as the individual functionalities, are reasonably specific, so it’s unlikely someone else in the class creates very similar functionalities.
Source code and supplementary files: The source code is to be programmed in Python and submitted as .py or .ipynb files. If you have multiple source code files, or any supplementary files (e.g., AIML and CSV), you may compress and submit them together with your source code as a single .zip file. Add in-line comments and indent your source code for maximum readability. You can use other Python packages if they are on pypi.org and you find them useful for your chosen domain of application, but you must not use them as a replacement for any of the specified parts of the assignment (for example, you must use AIML, not any other method, to program chatbot responses).
Documentation: The submission of each part of the coursework must have a documentation file included. The document you submit in stage 2 is the design description of stage 1, updated with explanations of the new functionalities added in stage 2. It must be submitted as a single Word or PDF file. This document should include:
Design notes: A general explanation of the system and its goals; the system requirements, i.e., a list of what the system should do/have from a user's perspective; an explanation of the employed AI techniques; and an explanation of your programs, i.e., details of what parts of your program do what. You do not need to do research or reference your work. There is no word count, but the details should be sufficient for a reader to understand your design.
Conversation log: The submission of each part of the coursework must include a conversation log (showing at least 20 conversation pairs for each stage). For each conversation log, record an actual conversation between you and the chatbot that demonstrates the implemented features. Where appropriate, annotate it with brief comments explaining which feature/component generated any particularly remarkable output, and how. If the conversation included non-textual input (e.g., images), this should also be included to the degree possible.
Recorded Demo URL: Each part of the coursework must be demonstrated to the tutor. These demonstrations would normally take place during lab sessions, but due to the current situation, a recorded demo is required. You must record the video and clearly include a sharing link within your document. The demo video for each stage should be between 10 and 15 minutes. In the recording, first go through the code and talk about the employed AI techniques, then run your system and demonstrate all the implemented functionalities.
Task a – Rule-based and similarity-based conversation features: This submission should consist of at least one Python file that implements the chatbot, one AIML file that implements the rules, and one file of pre-defined Q/A pairs. For the rule-based conversation, you may use the Python and AIML files provided on NOW as a starting point, but you should extend and customise them towards your design specification. The similarity-based conversation should be added based on the bag-of-words model, tf/idf, and cosine similarity. If the user inputs a sentence that is matched to an AIML entry, the answer is provided accordingly. Otherwise, the Q/A pairs should be searched for the closest match, and the relevant answer returned to the user. See the notes in the Week 3 lab sheet for more information.
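The similarity-matching step can be sketched with the standard library alone, as below. This is a minimal illustration of bag-of-words, tf/idf and cosine similarity (in practice you may well use a library vectoriser such as scikit-learn's TfidfVectorizer); the example questions are made up for demonstration.

```python
import math
from collections import Counter

def tfidf_vectors(documents):
    """Build tf-idf vectors (as sparse dicts) for a list of tokenised documents."""
    n = len(documents)
    bags = [Counter(doc) for doc in documents]          # bag-of-words per document
    df = Counter()                                      # document frequency per term
    for bag in bags:
        df.update(bag.keys())
    idf = {term: math.log(n / df[term]) for term in df}
    return [{t: count / sum(bag.values()) * idf[t] for t, count in bag.items()}
            for bag in bags]

def cosine_similarity(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return 0.0 if norm_u == 0 or norm_v == 0 else dot / (norm_u * norm_v)

def best_match(query, questions):
    """Return (index, score) of the stored question most similar to the query."""
    docs = [q.lower().split() for q in questions] + [query.lower().split()]
    vectors = tfidf_vectors(docs)
    q_vec = vectors[-1]
    scores = [cosine_similarity(q_vec, v) for v in vectors[:-1]]
    idx = max(range(len(scores)), key=scores.__getitem__)
    return idx, scores[idx]
```

In the chatbot loop, you would call `best_match` only after the AIML kernel fails to match, and return the answer paired with the best-scoring question (possibly with a minimum-score threshold to avoid nonsense answers).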
Task b – Add logical reasoning extension: The aim of this task is to build a simplistic first-order logic (FOL) knowledgebase and inference engine using the NLTK library, which can be updated or queried by the user. This component could ideally be implemented using full grammatical analysis and NLP of the user input, but that is beyond the scope of this coursework stage. Therefore, the user inputs for this component are limited to two simple patterns, "I know that … is …" and "Check that … is …", or similar. For example, the user types "I know that Tim is British" or "Check that Tim is European". You must create a KB file with a number of initial statements (at least 10) about your chosen chatbot topic. Each fact is written in first-order logic using NLTK's FOL syntax, such as "British(Tim)" and "British(x) -> European(x)". In your program, first import the initial knowledgebase file and check it for any contradiction. Then, if the user inputs "I know that … is …", first check that the new expression does not contradict the knowledgebase. If it does not, add it to the knowledgebase (in memory, not in the file) and respond with something like "OK, I will remember that … is …". If the user inputs "Check that … is …", respond with "Correct", "Incorrect" or "Sorry, I don't know" by applying NLTK's resolution algorithm. See the head-start program, sample files and more details in the Week 6 lab sheet.
Submission Stage-1: Includes tasks a and b.
Files to submit: A zip file containing the Python program (py or ipynb), AIML file (xml), Q/A pairs (csv) and KB file (csv). Separately submitted documentation (doc, docx or pdf) that also includes the demo video URL.
Task c – Train and add a local image classification model and evaluate it: This submission should include the updated versions of the stage 1 files/documentation as well as your training program, a saved model and sample image files. Your updated Python program provides this new feature: if the user inputs a question such as "What is img01.jpg?", the chatbot should invoke a pre-trained convolutional neural network to provide the answer. You decide yourself which network architecture and training dataset to use. If you wish, you can use a different source of images instead of files, e.g., camera input, and also trigger this feature in other ways. You will need to train your neural network using an image dataset with class labels. There are websites that serve as repositories for many good datasets, such as Kaggle.com or the UCI Machine Learning Repository. Don't worry too much if the dataset doesn't fit that well with the topic of your chatbot; a weak connection between the image topics and your chatbot topic is acceptable. Once trained, you should save the model (e.g., as a .h5 file) and include it in your submission. In your main chatbot program, you will import the pre-trained model and use it to serve the user's image classification queries. Report and explain the model's performance using the n-fold cross-validation method and a measurement of your choice. If the classification does not perform well, discuss the causes and/or limitations.
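The query-handling side of this feature might look like the sketch below. It assumes a Keras model saved as model.h5 and loaded once at startup with `tensorflow.keras.models.load_model("model.h5")`; the trigger pattern, image size and class names are illustrative choices, not requirements.

```python
import re

def extract_image_query(text):
    """Detect a question like 'What is img01.jpg?' and return the file name.

    The trigger phrase is a design choice of this sketch; adapt it to
    however you decide to invoke the classifier.
    """
    m = re.match(r"what is (\S+\.(?:jpg|jpeg|png))\??$",
                 text.strip(), re.IGNORECASE)
    return m.group(1) if m else None

def classify_image(path, model, class_names, size=(64, 64)):
    """Classify one image with a loaded Keras CNN.

    The preprocessing here (resize + scale to [0, 1]) must mirror whatever
    was used when the network was trained.
    """
    import numpy as np
    from PIL import Image
    img = Image.open(path).convert("RGB").resize(size)
    x = np.asarray(img, dtype="float32")[None, ...] / 255.0  # shape (1, h, w, 3)
    probs = model.predict(x)[0]
    return class_names[int(probs.argmax())]
```

In the main chat loop, a turn that matches `extract_image_query` would be routed to `classify_image` instead of the AIML/similarity components, and the predicted class name woven into a natural-language reply.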
Task d – Add a cloud-based image classification service: The idea is that, in addition to the locally trained model, your chatbot can also analyse images using a model that you deploy as a web service on the Azure cloud. You can use either the Image Analysis feature (pre-trained), Image Classification with Custom Vision (user-trained), or any relevant service of your choice in Azure's Computer Vision group of services. You will provide the user with the result of this image classification alongside the results of task c. You will need to make sure that your Azure web service is up and running by the time your system is tested by your tutor.
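As one possible shape for this component, the pre-trained Image Analysis feature can be called through its REST endpoint (version 3.2 of the Analyze Image operation). The sketch below only builds and parses the request; the endpoint URL and key shown are placeholders for your own Azure resource, and the actual HTTP call (e.g., with the requests package) is left as a comment.

```python
def build_analyze_request(endpoint, key, features=("Tags",)):
    """Assemble the URL, headers and params for a Computer Vision Analyze call.

    endpoint and key come from your own Azure resource; the values used in
    any example are placeholders.
    """
    url = endpoint.rstrip("/") + "/vision/v3.2/analyze"
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/octet-stream",  # raw image bytes in the body
    }
    params = {"visualFeatures": ",".join(features)}
    # To actually send the request, e.g.:
    # resp = requests.post(url, headers=headers, params=params, data=image_bytes)
    # analysis = resp.json()
    return url, headers, params

def top_tags(analysis_json, threshold=0.5):
    """Pull confident tag names out of the JSON returned by the Analyze API."""
    return [t["name"] for t in analysis_json.get("tags", [])
            if t.get("confidence", 0.0) >= threshold]
```

Presenting `top_tags(...)` next to the local CNN's prediction lets the user compare both answers for the same image, which is exactly the side-by-side output this task asks for.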
Submission Stage-2: Includes tasks c and d.
Files to submit: A zip file containing the Python program for model training (py or ipynb), the updated Python program of the chatbot (py or ipynb), sample image files for testing, AIML file (xml), Q/A pairs (csv) and KB file (csv). Separately submitted documentation (doc, docx or pdf) covering all four tasks, which also includes the demo video URLs for both submissions.
Assessment Criteria (for each submission)
The design of this coursework is based on recognising your creativity in applying AI techniques. As such, the required tasks explained in section II are to be considered the minimum requirements. In addition to the required tasks, extra AI functionality is to be implemented for both submissions.
What is an “Extra Functionality”?
Extra functionality for a task means implementing AI techniques related to the basic task, showing that you have explored some new techniques on your own and can program them to extend the minimum required functionality. Remember that extra work is considered only if all the basic tasks are implemented successfully. There is, of course, no definitive list of the extra functionalities you may go for, but here are some suggestions:
Basic requirements, and extra functionalities to think of:

Task a: A chatbot that uses AIML for pattern-based conversation and tf/idf for similarity matching.
- Other methods of communication rather than text-based
- Other NLP techniques that make your chatbot smarter in answering questions
- Other web services (than those given) that you may use to extend the chatbot's answering capabilities, etc.

Task b: Adding logical reasoning based on an FOL knowledgebase, the NLTK library and resolution inference.
- Other libraries or reasoning techniques
- Other logics than FOL, e.g., fuzzy or multi-valued logics
- Other NLP techniques to support extended forms of logical conversation beyond the basic ones, e.g., support for multi-valued predicates, or designing a logical game, etc.

Task c: Adding and evaluating a CNN-based image classification model.
- Multi-object detection in images
- Other ML techniques to support extended classification tasks such as video, voice or face recognition
- Applying hyper-parameter optimisation and discussing the results, etc.

Task d: Adding a cloud-based image classification service. The range of relevant extra functionalities for this task is deliberately wider than just image classification, so that you can demonstrate the breadth of what can be done with Azure AI services (or alternatives such as AWS if you like), including:
- Other Azure AI services in the Computer Vision group, such as face analysis, video detection and OCR
- Other Azure AI services in the NLP group, such as translation (i.e., implementing a multi-lingual chatbot), text/speech analysis or language understanding
- Upgrading your chatbot to be a web service on the cloud, e.g., using Azure's conversational-AI techniques, etc.