Teadmiste formaliseerimine 2020


Name: Knowledge representation
Code: ITI8700
Link: http://lambda.ee/Teadmiste_formaliseerimine
Lecturer: Tanel Tammet
Practice work: Priit Järv
Contact: tanel.tammet@ttu.ee, 6203457, TTÜ ICT-426
Archives of previous years: 2019.


NB! This is the 2020 archive, not the current course material!

Exam times and organization

The exams this year will take place remotely only: at the starting time of the exam the lecturer will post the questions/exercises on this wiki page, and the answers must be sent by email within three hours. There is no need to register for the exam.

The exam has two parts:

  • Written: the exercises are given on the web, and their answers and solutions must be sent in writing by email within 3 hours.
  • Oral: on the same day, immediately after the written exam, each student has a quick Skype conversation with the lecturer as a short oral exam.


To help with studying for the exam and to indicate the selection of exam questions, some of the materials below are marked with a red tag, one of two:

  • E1: core material, read it thoroughly
  • E2: second-level material, look it through and understand the main points

There will be no direct questions/exercises about materials without an E1 or E2 tag. Tagged lambda subpages may themselves contain tagged links.


Studying the course under restrictions

Lectures

  • By each lecture time, the lecturer will post more detailed recommendations than before about what to read independently until the next lecture time.
  • In addition, the lecturer will make either a short introductory video or a write-up about the things assigned for reading; this video/text goes into the corresponding lecture chapter in the wiki.
  • The latest reading assignment, for 8 May, is under "Lecture 12" in the lecture list below.

Labs

Priit gave the following guidelines for the labs:

For submitting homework, we have opened a page in Moodle:

https://ained.ttu.ee/course/view.php?id=320

All students who have already submitted something have been enrolled in the course and placed into groups. If someone is accidentally in the wrong group (or, as far as they know, is not taking this course at all :) then please let us know.

Submit your homework in writing; the necessary information is in Moodle. The study materials, examples, etc. are still in the lambda wiki: http://lambda.ee/wiki/Teadmiste_formaliseerimine

For now, discussion about the homework should take place here:

https://ained.ttu.ee/mod/forum/view.php?id=11102

We are testing how well this format works. It will work above all if you ask questions and comment boldly.

Time, place, result

Lectures: Friday 10:15-11:45 room ICT-315
Practical work: Friday 12:00-13:30 room ICT-315
Practical work gives 40% and the exam 60% of the points underlying the final grade. The exam will consist of several small exercises.

The first practical work time, on 31 January at 12:00, will be used for introducing the practice work. Some of the last lecture times at the end of the course may be used as additional seminars/labs.

Focus

The main focus of the course is on knowledge representation and reasoning (KR), a part of symbolic AI, on a spectrum from simple (representing and using knowledge in databases) to complex (meaning of sentences in natural language and commonsense knowledge).

The overall goal of the labwork is to understand the issues facing the task of building a natural language question answering system a la Watson, and to see and experiment with existing parts of the solution.

The course contains the following blocks:

  • Representing knowledge in SQL and document databases and hybrids.
  • Representing and using simple facts and rules in symbolic AI and logic.
  • General-knowledge databases: wikidata, wordnet, yago, conceptnet, nell, cyc.
  • Knowledge conveyed by natural sentences: both statistical methods like word2vec and logic-based methods.
  • More about reasoners and question answering systems.
  • Confidence and relevance.
  • Context, time, location, events, causality.

Check out this description of the whole KR area.

Books to use


Note that a noticeable part of the course content is not covered by these books: use the course materials and the provided links to papers, standards and tutorials.

Practical work

There are three labs. They are all steps in a single project: build a simple natural-language question-answering system about geography.

The labs have to be presented to the course teachers and all students present at labwork time.

The labs can be prepared alone or by teams of two or three people.


Each lab is evaluated independently and gives the same number of points. NB! If you miss the lab deadline, you will be awarded only half of the points for that lab.

First lab

Please read about the task for the first lab. Deadline: 28 February. It is better to present it earlier, to leave more time for the following labs.

Second lab

Deadline: 10 April (shifted one week from the original 3 April).

The task in the second lab is to

  • write a small set of additional, nontrivial rules for geography not present in Yago or Wordnet,
  • actually answer simple questions using logic and rules, with the help of a reasoner (see the illustration right after this list).
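
As an illustration (the predicate names here are hypothetical, not a required vocabulary), such additional rules might look like the following, written in the same style as the rule examples further below on this page:

  locatedin(X,Y) & locatedin(Y,Z) => locatedin(X,Z)   (location is transitive)
  capital(X,Y) => locatedin(X,Y)                      (a capital X is located in its country Y)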

See Details of the KR second lab in 2020


Third lab

Deadline: 8 May.

Add a simple NLP component to the second lab to answer questions posed in natural language. I.e., the NLP component should convert a natural language query into logic usable by your system, and the full system should be able to answer the questions. A minimal sketch of such a conversion step is given below.
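
For concreteness, here is a minimal, hedged sketch of one possible rule-based conversion step in Python; the question patterns and the predicate names (capital, neighbor) are made-up illustrations, not the required lab vocabulary:

  import re
  from typing import Optional

  # Each pattern maps one question shape to a logic query string.
  PATTERNS = [
      (re.compile(r"what is the capital of (\w+)\s*\??", re.I),
       lambda m: "capital(X," + m.group(1).lower() + ")"),
      (re.compile(r"does (\w+) border (\w+)\s*\??", re.I),
       lambda m: "neighbor(" + m.group(1).lower() + "," + m.group(2).lower() + ")"),
  ]

  def to_logic(question: str) -> Optional[str]:
      """Convert a natural language question to a logic query string."""
      for pattern, build in PATTERNS:
          match = pattern.fullmatch(question.strip())
          if match:
              return build(match)
      return None  # question shape not recognized

  print(to_logic("What is the capital of Estonia?"))  # capital(X,estonia)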

See Details of the KR third lab in 2020


Block 1: knowledge in SQL and document databases and hybrids.

Lecture: Intro and background, 31 Jan

Lecture materials:

Lecture: NoSQL, JSON and RDF in databases. 7 Feb


Block 2: simple facts and rules in symbolic AI and logic

Lecture: simple rules in RDFS, logic and OWL. 14 Feb

Have a brief look at these; no need to read thoroughly:

More resources:

Lecture: rules in logic with provers to answer questions, 21 Feb

These should be looked at in parallel:

Lecture: what the reasoner does, 28 Feb

We will look in detail at the same presentation:

and will explain the details of the second practice work.

Block 3: General-knowledge databases

Lecture: looking into the main large knowledge bases. 6 March

E2: you should be able to investigate what kind of data is available and, with some work, find actual data in these databases by browsing the web pages of the systems.

We will have a look at the goals, main content and differences between:

Lecture 6: Big annotation systems. 13 March.

First, big annotation systems:

Importantly, see also:

Block 4: natural language and question answering

Lecture 7: Intro to NLP. 20 March and the following self-study week

A lot of reading this week. The main material is in the following chapters 25 and 26 of the Jurafsky and Martin book; the rest consists of relevant overviews/background. Lecture materials for the week:

Not obligatory, but highly recommended:

Lecture 8: vector representation of words. 27 March and the following self-study week

Now, you may have had a course on NLP and machine learning, in which case you should already know a lot about vector semantics. If so, consider yourself lucky :) and just skim through the following materials to check whether there are any new things or old friends for you. If not, consider yourself extra lucky: you have come to the core of machine-learning-based NLP, and this is both pretty cool and quite understandable at the same time! Most of machine-learning-based NLP is built on top of the ideas of vector representation of words. A tiny numeric illustration follows right below.
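
As a toy Python sketch of the core idea: words become vectors, and similarity of meaning becomes cosine similarity of vectors. The numbers below are made up for illustration; real word2vec/GloVe vectors have hundreds of dimensions and are learned from corpus statistics:

  import numpy as np

  # Toy 3-dimensional "word vectors" with invented values.
  vectors = {
      "king":  np.array([0.9, 0.8, 0.1]),
      "queen": np.array([0.9, 0.9, 0.2]),
      "apple": np.array([0.1, 0.2, 0.9]),
  }

  def cosine(a, b):
      """Cosine similarity: close to 1 for vectors pointing the same way."""
      return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

  print(cosine(vectors["king"], vectors["queen"]))  # high: related words
  print(cosine(vectors["king"], vectors["apple"]))  # low: unrelated words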

Obligatory study materials:

Additional supporting non-obligatory materials from (roughly) easier to more complex:


Also interesting to look at and read:

Block 5: more about semantic parsing, symbolic question answering and reasoners

Lecture 9: supporting materials for the third practical work: 3 April and the following week.

Please first listen and read in this order:

You may want to have a look at the AllenAI tutorial

Additionally you may want to read some papers about semantic parsing:

  • Read a paper about building a semantic parser in a relatively simple way.
  • Another paper about semantic parsing

There is an interesting specific method for semantic parsing, called AMR, worth having a brief look into:

There is a very good, fairly deep series of NLP-and-commonsense podcasts and other interesting material from the Allen Institute for AI:


Lecture 10: planning, frame problem, open/closed world and the blocks world: 17 April and the following week

This week is about planning and the frame problem. The standard example for these is the blocks world: some blocks are on the table and the robot arm can lift single blocks and put them on other blocks or on the table. The arm cannot lift several blocks at once or a tower of blocks. Then, given a position of blocks in the initial situation, can the robot arm create a required new position of blocks? Can we get the steps required? For getting the steps we use the $ans predicate. A small search-based sketch below makes the planning question concrete.
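
The following is a minimal breadth-first-search sketch of a two-block version of this problem in Python, only to make the planning question concrete; the course labs encode the problem in logic and use the gkc reasoner instead:

  from collections import deque

  BLOCKS = ("a", "b")
  PLACES = BLOCKS + ("table",)

  def clear(state, x):
      """A block is clear if nothing is on it; the table is always clear."""
      return x == "table" or all(not (f[0] == "on" and f[2] == x) for f in state)

  def moves(state):
      """Yield (action, next_state) pairs: move one clear block elsewhere."""
      for x in BLOCKS:
          if not clear(state, x):
              continue
          src = next(f[2] for f in state if f[0] == "on" and f[1] == x)
          for dst in PLACES:
              if dst not in (x, src) and clear(state, dst):
                  nxt = (state - {("on", x, src)}) | {("on", x, dst)}
                  yield ("move", x, src, dst), nxt

  def plan(initial, goal):
      """Breadth-first search to any state containing all goal facts."""
      start = frozenset(initial)
      queue, seen = deque([(start, [])]), {start}
      while queue:
          state, steps = queue.popleft()
          if frozenset(goal) <= state:
              return steps
          for action, nxt in moves(state):
              if nxt not in seen:
                  seen.add(nxt)
                  queue.append((nxt, steps + [action]))
      return None

  # Initially b is stacked on a; the goal is a stacked on b.
  print(plan({("on", "a", "table"), ("on", "b", "a")}, {("on", "a", "b")}))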

One hard issue that arises is: how do we know that performing some action like lifting a block does not create side effects like toppling existing towers, moving other blocks, etc.? This issue is called the frame problem and it has no easy solutions.

Importantly, the frame problem arises because normal first-order logic operates under

  • the open world assumption E1 (must read): if we have (and can derive) neither a positive fact like P(a) nor a negative fact like -P(a), we assume we simply do not know which of them holds.

Prolog and databases operate under the

  • closed world assumption E1 (must read): if we do not have (and cannot derive) a positive fact like P(a), we automatically assume that -P(a) must be true. For example, Prolog "negation" really means "cannot prove". One consequence of this assumption is that we cannot represent real negation and disjunction properly (Prolog does not contain these) and cannot satisfactorily distinguish between "true", "false" and "unknown". See also negation as failure in Prolog. A small sketch contrasting the two assumptions follows right after this list.
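
Here is a minimal Python sketch contrasting the two assumptions on a tiny fact base; the predicate and constants are made up for illustration:

  # Known positive facts and explicitly known negative facts (-P(a)).
  positive = {("bird", "tweety")}
  negative = {("bird", "stone1")}

  def closed_world(query):
      """Closed world: whatever is not derivable as true is false."""
      return query in positive

  def open_world(query):
      """Open world: three answers are possible."""
      if query in positive:
          return "true"
      if query in negative:
          return "false"
      return "unknown"

  print(closed_world(("bird", "polly")))  # False: absence implies negation
  print(open_world(("bird", "polly")))    # "unknown": absence implies nothing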


Please look at and read these materials about the frame problem:

These two readings are optional:

  • the frame problem at the Stanford Encyclopedia of Philosophy: not obligatory, but a good read to get a better understanding.
  • another classic about the frame problem: read this only if you have become really, really interested in the whole issue: it goes quite deep, although it is not especially technical.

Next, importantly, you should experiment yourself with gkc and both of the following files. Please copy and read the files and understand the encoding. At the end of the files are several queries to experiment with: instructions are also there.

In both of them the predicate holds(X,Y) is used to state that some fact X about the blocks world holds (e.g. one block is on another, or the robot holds some block) in the state Y, where Y describes the state we have arrived at via robot actions from the original state (e.g. picked up a block in the initial state and then put the block down on top of another). A small illustration of such nested state terms is given below.
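
As a hedged illustration, nested state terms of this kind can be mimicked with nested Python tuples; the exact function and predicate names in the course files may differ:

  S0 = "s0"  # the initial state

  def do(action, state):
      """The state reached by performing an action in a previous state."""
      return ("do", action, state)

  # The state after picking up block a and then putting it on block b:
  s2 = do(("puton", "a", "b"), do(("pickup", "a"), S0))

  # The fact holds(on(a,b), s2) as a nested term:
  print(("holds", ("on", "a", "b"), s2))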

To get a simple visualization of the blocks world, have a look at


NB! Please email me (tanel.tammet at taltech.ee) a short description of (a) which of the queries in the files you succeeded with and roughly how long they took, and (b) which queries you invented yourself and what happened with these. Please put the word "formaliseerimine" into the email subject so I can find the emails easily.


Block 6: confidence and relevance

Lecture 11: reasoning with uncertainty: discrete reasoning options: 24 April and the following week

Almost all the knowledge we have is uncertain: there are many exceptions to a rule, or a fact/rule holds only with some vague probability. Notice that, in contrast, typical databases in companies contain facts with very high certainty, but they do not contain rules or commonsense knowledge: the latter is built into the brittle software code of the systems using these databases.

We will consider two different ways to tackle uncertainty:

  • Discrete methods: no numbers used. Instead, uncertainty is described as exceptions or beliefs of people.
  • Numeric methods: uncertainty is estimated with probability-like numbers of various kinds.

NB! Both of these approaches are hard to actually implement, and neither is understood very well, despite an immense amount of research and papers. For example, typical machine learning methods operate with extremely vague probability-like numbers and do not attempt to place these numbers into a proper, theoretically correct probability framework.

This week is devoted to discrete methods: we will focus on numeric methods next week.

Roughly speaking, there are three kinds of discrete methods:

  • Exceptions to rules. The main method is called default logic. E1 Read carefully!. Another, less popular method is circumscription (skim through, careful reading not needed). The main classical example for default logic is "birds normally fly": bird(X) & [we fail to prove -flies(X)] => flies(X). In other words, a rule is blocked if we manage to prove the negation of the fact the rule would derive. Say we have
    • bird(tweety)
    • penguin(pengu)
    • (bird(X) & [we fail to prove -flies(X)]) => flies(X)
    • penguin(X) => -flies(X)
Then we can derive -flies(pengu) and flies(tweety), but we cannot derive flies(pengu), since it is blocked by -flies(pengu), which we could derive. In the Wikipedia page above, the "justification P" really means "we fail to prove -P". A minimal executable sketch of this blocking behaviour is given after this list.
To understand these things better, see additionally a somewhat more thorough tutorial on reasoning with exceptions E2
In first-order logic it is, in the general case, impossible to compute that something is not derivable. In propositional logic it is possible. Hence there is a nice field called answer set programming, E2 where a formula with first-order variables is translated into propositional logic, turning the rules with variables into a large number of rules containing constants. The computation of blockers then becomes possible. However, this translation into propositional logic is not possible in nontrivial cases. One of the better systems there is DLV (DLV in Wikipedia); see the tutorial and the DLV manual, Chapter 4. Queries: there you can see different variants of default reasoning.
Another important question for rules with exceptions is their priority: if two rules block each other, should one of them have a higher priority? For example: "material things generally do not fly", "birds are material things", "birds generally fly", "penguins are birds", "penguins generally do not fly". Here it appears that rules describing things lower in the taxonomy should have a higher priority than rules describing things higher up. On the other hand, in the "Nixon diamond" example "Republicans are not pacifists", "Quakers are pacifists", "Nixon is both a Republican and a Quaker", it would be natural to think that the first two default rules simply block each other for Nixon, and we can derive neither that Nixon is a pacifist nor that he is not. On the topic of priorities you can additionally read this article or this similar article.
  • Describing the beliefs of various actors. This builds upon modal logic (see the intro presentation E2 to understand the basics) and the main method is (auto)epistemic logic: the logic for encoding who knows or believes what. See the long tutorial as an easily readable text E1 and additionally the tutorial slides Ijcai93.pdf by the same authors that accompany the long text. Read carefully, starting with the long text and also looking at the slides alongside.
  • Multi-valued logics with belief degrees (a la "very unlikely", "very likely") on different intervals. We will not look into these: we'll investigate the numeric methods instead. Have a very brief look at Wikipedia.
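
Here is a minimal Python sketch of the blocking behaviour in the tweety/pengu example above. It naively derives all strict consequences first and then fires a default only if its negated conclusion was not derived; this two-step strategy only approximates default logic, and the rule penguin(X) => bird(X) is an assumption added here so that the default is even a candidate for pengu:

  facts = {("bird", "tweety"), ("penguin", "pengu")}

  # Strict rules: penguins are birds (assumed), penguins do not fly.
  for name in [n for p, n in list(facts) if p == "penguin"]:
      facts.add(("bird", name))
      facts.add(("-flies", name))

  # Default rule: bird(X) & [we fail to prove -flies(X)] => flies(X).
  for pred, name in list(facts):
      if pred == "bird" and ("-flies", name) not in facts:
          facts.add(("flies", name))

  print(("flies", "tweety") in facts)   # True
  print(("flies", "pengu") in facts)    # False: blocked by -flies(pengu)
  print(("-flies", "pengu") in facts)   # True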

NB! None of these methods is really used in practical systems of any kind: they are hard to implement efficiently and they are not understood well enough in practical settings.

Lecture 12: numeric reasoning with uncertainty: 8 May and the following week

The main reading material is an overview, followed by more in-depth material, written by the lecturer: Numeric uncertainty (E1 up to and including chapter 4; see also the E2 tags inside).

  • Read carefully up to and including chapter 4, "Different ways to encode confidences in logic", and have a quick look at all the Wikipedia links.
  • Skim through the later parts and look only into the subchapters you find interesting; a toy sketch of combining confidences follows after this list.
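
As a toy Python illustration of one common way to combine confidences (a simple scheme assumed for this sketch, not necessarily the one used in the course material): conjunction takes the minimum, a rule multiplies in its own confidence, and two independent derivations of the same fact are combined noisy-or style:

  def conjunction(*confidences):
      """Confidence of a conjunction: the weakest premise dominates."""
      return min(confidences)

  def apply_rule(rule_conf, *premise_confs):
      """A rule multiplies its own confidence into its premises'."""
      return rule_conf * conjunction(*premise_confs)

  def combine_independent(c1, c2):
      """Two independent derivations of the same fact, noisy-or style."""
      return 1 - (1 - c1) * (1 - c2)

  # flies(tweety) via "birds fly" [0.9] from bird(tweety) [0.99],
  # plus an independent direct observation with confidence 0.5:
  derived = apply_rule(0.9, 0.99)
  print(combine_independent(derived, 0.5))  # about 0.95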

Next, skim through the arXiv preprint of the recent paper:

Next, have a brief look at a good example system doing some specific kinds of probability reasoning:

Finally, some additional, possibly helpful material covering some parts of the main Numeric uncertainty material.



Exam exercises for Teadmiste formaliseerimine, 20 May 2020
Exam exercises for Teadmiste formaliseerimine, 27 May 2020