Seminars in Artificial Intelligence and Robotics

A.A. 2015/2016

Master in Artificial Intelligence and Robotics
(Laurea Magistrale in Intelligenza Artificiale e Robotica)

Prof. Daniele Nardi


General

6 Credits (ECTS), Fall and Spring Semesters. This year the course includes two sections: the first took place in the fall semester (see details below) and the second will take place in the spring semester.

Schedule (spring semester): Classes are on Fridays 12:00-15:30

Classes start February 26th. Class attendance is mandatory. Unless otherwise stated, classes are in classroom A3, Via Ariosto 25.


Section: Human Robot Interaction
Prof. Marc Hanheide (Univ. Lincoln)


In this seminar we will look at various sub-areas of human-robot interaction, ranging from general challenges and methodological foundations in HRI research, through different interaction modalities and patterns, to enabling technologies for short- and long-term interaction with autonomous robots. The seminar will discuss state-of-the-art algorithms and approaches, as well as provide a "bigger picture" of the interlinking concepts of HRI.

For the complete course schedule and material go to: Seminars in AI and Robotics

  1. February 26th, Introduction & Background
  2. March 11th, History and Overview of HRI
  3. April 1st, Studying and Measuring HRI
  4. April 15th, Human-Robot Spatial Interaction / Human-aware Navigation
  5. April 22nd, Human-Robot Collaboration
  6. April 29th, Social Signals
  7. May 13th, Long-term interaction


Exams

Students should register for one reading class in each section of the course. For each reading class in which they registered, students should prepare a report, due one week after the class and prepared according to these guidelines. Each student will be assigned the role of presenter in one of the chosen reading classes and the role of discussant in the other. Each class will have two presenters and two or more discussants. The presentation should summarize the content of the paper; the discussant should contribute to the discussion by providing his/her personal comments and views on the paper. In preparing the presentation, students are invited to coordinate with their fellow presenters and with the teacher.
Additional Notes:
The reading classes with the students' presentations will take place on the dates of the corresponding seminars by the teacher: the teacher's seminar will be given in the first time slot, and the reading class will be held in the second.

Students can register for the exam after both reports have been accepted.
For registration, please book through Infostud. The exam dates are:
January/February 2016
June/July 2016
September 2016


Section: Machine Reading
Prof. David Israel (SRI International)
This section was held in the Fall Semester 2015

Schedule (fall semester): Classes are on Tuesdays 15:45-19:00

Classes start September 29th.

Class calendar (preliminary)

  1. October 6th, Introduction & Background: Some Thoughts on Methodology in Artificial Intelligence with a Special Focus on Machine Reading
     I happen to think that Artificial Intelligence, and indeed all of Computer Science, are rather strange and atypical sciences, and it’s only right and proper that you should have some sense of why I think this. Toward the end of the discussion, we will move to the specific instance of AI approaches to Machine Reading (= Text Understanding).

    Recommended readings:
    1. Simon & Newell. Human Problem Solving: The State of the Theory in 1970. 1971. American Psychologist 26(2): 145-159.
    2. Ferrucci et al. Building Watson: An Overview of the DeepQA Project. 2010. AI Magazine 31(3): 59-79.
    Additional readings:
    1. Ng and Zelle. Corpus-based approaches to semantic interpretation in natural language processing. 1997. AI Magazine 18(4): 45-64.

  2. October 20th, Words: Word Senses, Lexical Relations, Word Sense Disambiguation
     Languages are not simply sets of words, but words are where we shall begin. What kind of thing is the meaning of a word? Could there be a single kind of thing that is the meaning of a word – any word? Even if not, is there a small set of ways that we can represent the meanings of words, a set of ways related in the right ways to represent the relations that we sense between the meanings of words? (A toy sketch of one classic disambiguation approach follows the readings below.)

    Recommended readings:
    1. Jurafsky & Martin. Speech and Language Processing, 2nd Ed., 2009. Chapter 19, pp. 1-9; Chapter 20.
    2. Lin. Automatic retrieval and clustering of similar words. 1998. In Proceedings of COLING-ACL, 768-774. Montreal, Canada: ACL.
    3. Mihalcea. Using Wikipedia for automatic word sense disambiguation. 2007. In Proceedings of NAACL, 196-203. Rochester, NY: ACL.
    4. McCarthy, Koeling, Weeds and Carroll. Finding predominant word senses in untagged text. 2004. In Proceedings of ACL, 279-286. Barcelona, Spain: ACL.
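
    Illustrative sketch (not from the course readings): the simplified Lesk algorithm picks the sense whose dictionary gloss shares the most words with the mention's context. The toy sense inventory below is invented for illustration; real systems use resources such as WordNet. In Python:

      # Simplified Lesk-style word sense disambiguation (toy sketch).
      # TOY_SENSES is an invented stand-in for a real sense inventory.
      TOY_SENSES = {
          "bank": {
              "bank#finance": "an institution that accepts deposits and lends money",
              "bank#river": "sloping land beside a body of water such as a river",
          }
      }

      def simplified_lesk(word, context):
          # Choose the sense whose gloss overlaps most with the context words.
          context_words = set(context.lower().split())
          def overlap(sense):
              return len(context_words & set(TOY_SENSES[word][sense].lower().split()))
          return max(TOY_SENSES[word], key=overlap)

      print(simplified_lesk("bank", "they moored the boat on the bank of the river"))
      # -> bank#river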

  3. November 3rd, Vector-space models of meaning and semantic composition with vectors (Part I).
     Overview: Ng & Zelle, from above, and Turney and Pantel. From frequency to meaning: vector space models of semantics. 2010. JAIR 37: 141-188.
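
     Illustrative sketch (assumed toy corpus, not from the readings): in a vector-space model, each word is represented by its co-occurrence counts with nearby words, and similarity of meaning is approximated by the cosine between vectors. In Python:

       import math
       from collections import Counter, defaultdict

       corpus = ["the cat drinks milk",
                 "the dog drinks water",
                 "the cat chases the dog"]

       # Count co-occurrences within a +/-2 word window.
       vectors = defaultdict(Counter)
       for sentence in corpus:
           tokens = sentence.split()
           for i, w in enumerate(tokens):
               for j in range(max(0, i - 2), min(len(tokens), i + 3)):
                   if i != j:
                       vectors[w][tokens[j]] += 1

       def cosine(u, v):
           dot = sum(u[k] * v[k] for k in u)
           nu = math.sqrt(sum(x * x for x in u.values()))
           nv = math.sqrt(sum(x * x for x in v.values()))
           return dot / (nu * nv) if nu and nv else 0.0

       # "cat" and "dog" occur in similar contexts, so their vectors are close.
       print(cosine(vectors["cat"], vectors["dog"]))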

  4. November 10th, Vector-space models of meaning and semantic composition with vectors (Part II).
     This material is fairly technical and we shall have to figure out how much of it to cover. (A toy sketch of vector composition follows the readings below.)

    Recommended readings:
    1. Brown, deSouza, Mercer, Della Pietra, Lai. Class-Based n-gram Models of Natural Language. 1992. Computational Linguistics 18(4): 467-479.
    2. Dumais, Letsche, Littman, Landauer. Automatic Cross-Language Retrieval Using Latent Semantic Indexing. 1997. AAAI Spring Symposium.
    3. Blei, Ng, Jordan. Latent Dirichlet Allocation. 2003. Journal of Machine Learning Research (3): 993-1022.
    4. Bengio, Ducharme, Vincent, Jauvin. A Neural Probabilistic Language Model. 2003. JMLR (3): 1137-1155.
    5. Mitchell & Lapata. Composition in distributional models of semantics. 2010. Cognitive Science 34(8): 1388-1429.
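
    Illustrative sketch (toy vectors, not from the readings): Mitchell & Lapata compare simple functions for composing word vectors into phrase vectors, including the additive model p = u + v and the component-wise multiplicative model p_i = u_i * v_i. In Python:

      import numpy as np

      # Assumed toy distributional vectors for two words.
      red = np.array([0.8, 0.1, 0.3])
      car = np.array([0.2, 0.9, 0.4])

      additive = red + car        # additive model: p = u + v
      multiplicative = red * car  # multiplicative model: p_i = u_i * v_i

      print(additive, multiplicative)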

  5. November 17th, Phrase Structure and Dependency Parses
     So far we have largely ignored the fact that, just as languages are not simply sets of words, so too sentences are not simply strings or sequences of words. While we will not be focusing (much) on syntax in the seminar, something needs to be said to allow us to distinguish between the proper treatment of (i) Marcello kissed Sophia and (ii) Sophia kissed Marcello. And here I am going to be highly partisan and present the world of parsing according to the Stanford NLP group. Further readings on parsing may be recommended; again, fair warning will be given. (A toy illustration of typed dependencies follows the readings below.)

    Recommended readings:
    1. de Marneffe, MacCartney, Manning. Generating typed dependency parses from phrase structure parses. 2006. Proceedings of 5th Int'l Conf. on Language Resources and Evaluation (LREC 2006): 449-454.
    2. de Marneffe & Manning. The Stanford typed dependencies representation. 2008. Proceedings of COLING 2008 Workshop on Cross-Framework and Cross-Domain Parser Evaluation, 1-8.
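
    Illustrative sketch (hand-written triples, not parser output): the two example sentences contain exactly the same words, but their typed dependencies differ, which is what lets a system recover who kissed whom. Relation names follow the Stanford typed dependencies (nsubj = nominal subject, dobj = direct object). In Python:

      # Typed dependency triples: (head, relation, dependent).
      sent_1 = [("kissed", "nsubj", "Marcello"),  # Marcello kissed Sophia
                ("kissed", "dobj", "Sophia")]
      sent_2 = [("kissed", "nsubj", "Sophia"),    # Sophia kissed Marcello
                ("kissed", "dobj", "Marcello")]

      def agent(deps):
          # The nominal subject of the main verb is the agent here.
          return next(dep for head, rel, dep in deps if rel == "nsubj")

      print(agent(sent_1), agent(sent_2))  # Marcello Sophia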

  6. December 1st, Information Extraction (I)
     Language is used for many, many purposes; but in this seminar we will focus on only one (or really on one small family of purposes): to communicate information. Given that focus, it is not surprising that we shall take as a main purpose of Machine Reading systems that they “extract” information from texts. (I admit to never having been happy with this terminology, as it rather suggests – to an American ear – enhanced interrogation techniques; but it has become fairly standard.) (A toy sketch of pattern-based extraction follows the readings below.)

    Recommended readings:
    1. Jurafsky & Martin. Speech and Language Processing, 2nd edition. Chapter 22, Information Extraction, pp. 725-743.
    2. Hobbs, Appelt, Bear, Israel, Kameyama, Stickel, Tyson. FASTUS: Extracting Information from Natural-Language Texts. 1991. SRI International Technical Note.
    3. Appelt et al. FASTUS: A finite-state processor for information extraction from real-world text. 1993. Proceedings of IJCAI-93, 1172-1178.
    4. Mintz, Bills, Snow, Jurafsky. Distant Supervision for relation extraction without labeled data. 2009. Proceedings of ACL-IJCNLP, 1003-1011.
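
    Illustrative sketch (invented pattern and text): hand-crafted systems in the FASTUS tradition recognize relations with cascades of finite-state patterns; a single regular expression can stand in for one such pattern. In Python:

      import re

      # A tiny finite-state pattern for one relation (toy example).
      ACQUISITION = re.compile(
          r"(?P<buyer>[A-Z]\w+) (?:acquired|bought) (?P<target>[A-Z]\w+)")

      text = "Yesterday Acme acquired Widgetco for an undisclosed sum."
      for m in ACQUISITION.finditer(text):
          print(("ACQUISITION", m.group("buyer"), m.group("target")))
      # -> ('ACQUISITION', 'Acme', 'Widgetco')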

  7. December 15th, Information Extraction (II)
     In the previous session, we discussed old-fashioned, hand-crafted Information Extraction systems, with a bridge to the newer approaches coming by way of the Mintz et al. paper. In this session, we discuss more recent Machine Learning-based approaches.

    Recommended readings:
    1. Banko, Cafarella, Soderland, Broadhead, Etzioni. Open information extraction from the Web. 2007. IJCAI. 2670-2676.
    2. Wu & Weld. Open information extraction using Wikipedia. 2010. Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pp. 118-127.
    3. Hoffmann, Zhang, Weld. Learning 5000 Relational Extractors. 2010. Proceedings of the 48th Annual Meeting of the ACL, pp. 286-295.
    4. Etzioni, Fader, Christensen, Soderland, Mausam. Open Information Extraction: the Second Generation. 2011. IJCAI.
Section Machine Reading: Additional Seminar (A3, Via Ariosto 25, 16:00)
December 22nd, Pragmatic Named Entity Disambiguation
In this talk we explore the fundamental and effective ideas behind Named Entity Disambiguation: the problem of automatically assigning a specific identity to the named entities encountered in text. We will introduce the problem, its common solutions, the current state of the art, and the outstanding open problems. Speaker: Stefano Pacifico, Founder, Ai4Good. (A toy sketch of a common disambiguation recipe follows below.)
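
Illustrative sketch (toy knowledge base, invented for this page): a common Named Entity Disambiguation recipe generates candidate entities for a mention and scores each candidate by the overlap between its description and the mention's context. In Python:

  # KB is an invented stand-in for a real knowledge base such as Wikipedia.
  KB = {
      "Paris": {
          "Paris_(France)": "capital city of France on the Seine",
          "Paris_(Texas)": "city in Lamar County Texas United States",
      }
  }

  def disambiguate(mention, context):
      # Score each candidate entity by word overlap with the context.
      context_words = set(context.lower().split())
      def score(cand):
          return len(context_words & set(KB[mention][cand].lower().split()))
      return max(KB[mention], key=score)

  print(disambiguate("Paris", "he flew to Paris the capital of France"))
  # -> Paris_(France)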