Details of the KR third lab in 2020
Our goal is to write a simple Watson-like system which takes a question in natural language (English) and tries to answer it. The domain of the questions is assumed to be geography, in particular the fact and rule database you have created.
What should the system do
The system should be able to answer two types of questions in the geography domain:
- Yes-no questions. Say, given an input question "Is Tallinn in Estonia" or "Is Peipsi a lake" or "Is Peipsi a body of water" or "Is Tallinn in Europe" or "Are Võrtsjärv and Peipsi connected", it should answer "yes", whereas for "Is Tallinn in Sweden" or "Is Peipsi a river" or "Is Peipsi a country" or "Is Tallinn in Australia" it should answer "no". Some of these questions could be answerable directly from the database, while others require reasoning with rules.
- Questions for finding a particular data item or a tuple of data items. Say, given a question "What is the capital of Estonia" it should answer "Tallinn"; for "Into what does Emajõgi flow" it should answer "Peipsi". If there are many possible answers, like for "Where is Tallinn" or "Which rivers flow into Peipsi", it is OK to give just one answer. Attempting to give several answers or preferring better answers is a very interesting but complex problem: better not to tackle it at all, or at least not until everything else already works nicely.
Note that you are not required to be able to answer very complex or vague questions. The examples above are just examples, not requirements.
You must, however, give a number of concrete examples in your final presentation where you do succeed in answering (and explain briefly how the answer is obtained) as well as a number of questions where you do not succeed (and again, explain briefly why).
In case your system cannot understand the question (i.e. cannot sensibly parse it), it is advisable to answer "I do not understand", ideally with some additional details about what is not understood. In case it seems to understand/parse OK but does not know the answer, it should say "I do not have the answer".
Since your reasoning component may potentially run forever if it does not find an answer, it is recommended to limit the search time to about one second. If the answer is not found during this time, the system should say "I did not find the answer" or something similar.
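The time limit can be enforced from the wrapper program rather than inside the reasoner. A minimal sketch in Python, assuming the reasoner is an external command-line tool (the exact command is up to your chosen prover):

```python
import subprocess

def run_with_limit(cmd, timeout_sec=1.0):
    """Run an external command (e.g. your reasoner) with a hard time limit.

    Returns the command's stdout as a string, or None when the time
    limit is exceeded (in which case print "I did not find the answer").
    """
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True,
                              timeout=timeout_sec, check=False)
        return proc.stdout
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child process on timeout
        return None
```

Here `cmd` would be something like `["myreasoner", "input_file.txt"]` for your concrete prover; `timeout_sec` is the recommended one-second budget.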
How to build the system
Build one concrete program which takes a question string as input (preferably from the command line) and prints the resulting English answer as output. Use the programming language you like or want to experiment with. Python is probably the safest choice.
The program has the following three main parts. Start by building a simple version of the first part, then the second and third. When this works, start improving the first part.
- First, you have to build a parser from English question sentences to a logic form, possibly containing an answer predicate. It is a good idea to also print this as a debug output.
- Second, you should run the reasoner on the data/ruleset and the logic form of the question. Take the data/ruleset, append the logic form created in the previous step, store it all in a file, run the reasoner with output to a file, and finally read the output file and extract the result. Print the extracted result as debug output.
- Third, you should convert the output (true or a concrete value where the result is true or a lack of answer) to a suitable English answer. Finally, print the answer.
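The three parts above can be sketched as one tiny end-to-end pipeline. This is only an illustration: the predicate names and the tuple-based "logic form" are made up, and the `reason` function is an in-memory stand-in for the file-based external reasoner described in the second step:

```python
import re

# Toy fact base standing in for your real data/ruleset (predicate names invented).
FACTS = {("capital_of", "tallinn", "estonia"), ("is_lake", "peipsi")}

def parse(question):
    """Part 1: map an English question to a logic form (here a simple tuple)."""
    q = question.lower().rstrip("?")
    m = re.match(r"what is the capital of (\w+)", q)
    if m:
        return ("capital_of", "?X", m.group(1))   # ?X marks the answer position
    m = re.match(r"is (\w+) a lake", q)
    if m:
        return ("is_lake", m.group(1))
    return None                                   # cannot parse

def reason(query):
    """Part 2: stand-in for the external reasoner: look the query up in FACTS."""
    if query is None:
        return None
    if "?X" not in query:
        return query in FACTS                     # yes-no question
    for fact in FACTS:                            # question with an answer variable
        if len(fact) == len(query) and all(a == b or a == "?X"
                                           for a, b in zip(query, fact)):
            return fact[query.index("?X")]
    return False

def verbalize(result):
    """Part 3: convert the reasoner output back to a suitable English answer."""
    if result is None:
        return "I do not understand"
    if result is True:
        return "yes"
    if result is False:
        return "I do not have the answer"
    return result.capitalize()

def answer(question):
    logic = parse(question)
    print("debug: logic form =", logic)           # debug output, as suggested above
    return verbalize(reason(logic))
```

For example, `answer("What is the capital of Estonia?")` returns "Tallinn", while an unparseable question yields "I do not understand". In the real system, `reason` would instead write the combined data/ruleset plus query to a file and call the external reasoner.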
What is a question sentence?
You have fairly free hands in determining what counts as a question sentence. Sentences starting with "Is ..." or "What is ..." or "Where does ..." etc. are clearly good candidates for being treated as questions. However, you could also treat statements such as "Tallinn is the capital of Estonia" as questions to which you can answer "yes" or "no" or "I did not find an answer".
How to parse the English sentence?
This is the main part of the practice work. You could either build a simple rigid parser or a powerful flexible parser: program a simple matcher of words in the sentence, or use a deep-learning-based NLP component for building a powerful parser. We have no strict requirements on the "correct" way of building a parser. A better overall system will give you more points, though.
It is strongly advisable to start with very simple sentences of one kind and build a system which can handle these. Once it works, extend your parser to a bit more complex questions, experiment with fancy parsers, etc.
Whatever way you build your parser, a crucial part of this is mapping English words to the predicates used in your system. Say, you have a word "river" in a question and you have a predicate "river" or "is_river" in your database. Then the mapping is fairly obvious. However, if you have a "creek" in your sentence and "is_river" in the database but no "creek", then it is not so obvious. Consider four options to do the mapping, from simple and rigid to complex, but maximally flexible:
- you may convert "creek" to "is_river" in code.
- you may add a new rule creek(X) => is_river(X) to the rule base you already have, thus allowing the new word to be handled without converting it in your code first.
- you may integrate the relevant parts of WordNet (say, in its TPTP form) as such rules for word taxonomies: in this way you get a huge number of words mapped for you.
- you may integrate a pre-trained machine-learned model of word similarities and use it to map very similar words to the ones you already have in the rule/database or WordNet.
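The first two options above can be sketched in a few lines. The synonym table and the rule syntax here are purely illustrative (adapt the rule format to whatever your reasoner actually accepts):

```python
# Hypothetical synonym table: maps words appearing in questions to the
# predicate names used in your database.
SYNONYMS = {"creek": "is_river", "stream": "is_river", "river": "is_river",
            "lake": "is_lake", "sea": "is_sea"}

def map_word(word):
    """Option 1: convert the word to a known predicate directly in code.

    Returns None when the word is unknown (a candidate for
    "I do not understand")."""
    return SYNONYMS.get(word.lower())

def synonym_rule(word, predicate):
    """Option 2: instead of mapping in code, emit a bridging rule to be
    appended to the rule base; the syntax is illustrative."""
    return f"{word}(X) => {predicate}(X)."
```

For example, `map_word("creek")` yields "is_river", while `synonym_rule("creek", "is_river")` produces the rule text "creek(X) => is_river(X)." to append to the rule base. Options 3 and 4 generalize option 2 by generating such rules automatically from WordNet or a word-similarity model.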
Check out the Lecture 9 materials: these are all devoted to semantic parsing.
A really trivial parser you may use as inspiration for building the most simplistic version: Nlp.py
SippyCup on GitHub: a hands-on tutorial for building a semantic parser, along with supporting code (not trivial!).