StarAI 2016

Sixth International Workshop on Statistical Relational AI

The purpose of the Statistical Relational AI (StarAI) workshop is to bring together researchers and practitioners from two fields: logical (or relational) AI and probabilistic (or statistical) AI. These fields share many key features and often solve similar problems and tasks. Until recently, however, research in them has progressed independently with little or no interaction. The fields often use different terminology for the same concepts and, as a result, keeping up with and understanding the results of the other field is cumbersome, which slows down research. Our long-term goal is to change this by achieving a synergy between logical and statistical AI. As a stepping stone towards realizing this big-picture view of AI, we are organizing the Sixth International Workshop on Statistical Relational AI at the 25th International Joint Conference on Artificial Intelligence (IJCAI) in New York City on July 11, 2016.



StarAI will be a one day workshop with around 50 attendees, short paper presentations, a poster session, and three invited speakers:

  • William Cohen (CMU) 
  • Percy Liang (Stanford) 
  • Daniel Lowd (University of Oregon) 


Authors should submit either a full paper reporting on novel technical contributions or work in progress (AAAI style, up to 6 pages excluding references), a short position paper (AAAI style, up to 2 pages excluding references), or an already published work (verbatim, no page limit, citing the original work) in PDF format via EasyChair. All submitted papers will be carefully peer-reviewed by multiple reviewers, and low-quality or off-topic papers will be rejected. Accepted papers will be presented as a short talk and a poster.

Important Dates

  • Paper Submission: May 8 (extended)
  • Notification of Acceptance: May 20
  • Camera-Ready Papers: July 1
  • Date of Workshop: July 11



Schedule

  •  8:55 a.m.: Welcome and introduction
  •  9:00 a.m.: Invited talk by William Cohen
    Title: TensorLog: A Differentiable Deductive Database [pdf]

    Large knowledge bases (KBs) are useful in many tasks, but it is unclear how to integrate this sort of knowledge into "deep" gradient-based learning systems. To address this problem, we describe a probabilistic deductive database, called TensorLog, in which reasoning uses a differentiable process. In TensorLog, each clause in a logical theory is first converted into a certain type of factor graph. Then, for each type of query to the factor graph, the message-passing steps required to perform belief propagation (BP) are "unrolled" into a function, which is differentiable. We show that these functions can be composed recursively to perform inference in non-trivial logical theories containing multiple interrelated clauses and predicates. Both compilation and inference in TensorLog are efficient: compilation is linear in theory size and proof depth, and inference is linear in database size and the number of message-passing steps used in BP. We also present experimental results with TensorLog and discuss its relationship to other first-order probabilistic logics.
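    To illustrate the general flavor of this style of reasoning, the following is a minimal, hypothetical sketch (not the authors' code; all names and weights are invented): a binary predicate is encoded as a weighted matrix over entities, so a clause such as uncle(X,Y) :- brother(X,Z), parent(Z,Y) can be answered by a chain of matrix-vector products that is differentiable with respect to the fact weights.

    ```python
    # Hypothetical sketch of differentiable deductive reasoning.
    # Entities are indexed; each predicate r is an n x n matrix with
    # M[y, x] = weight of the fact r(x, y).
    import numpy as np

    entities = ["alice", "bob", "carol", "dave"]
    idx = {e: i for i, e in enumerate(entities)}
    n = len(entities)

    def relation(facts):
        """Encode weighted facts r(x, y) as an n x n matrix."""
        M = np.zeros((n, n))
        for x, y, w in facts:
            M[idx[y], idx[x]] = w
        return M

    brother = relation([("bob", "dave", 0.9)])    # brother(bob, dave)
    parent  = relation([("dave", "alice", 1.0)])  # parent(dave, alice)

    # Query uncle(bob, ?) under the clause above: start from a one-hot
    # vector for bob and propagate through the body of the clause.
    x = np.zeros(n)
    x[idx["bob"]] = 1.0
    scores = parent @ (brother @ x)  # differentiable in the fact weights
    print(scores[idx["alice"]])      # → 0.9
    ```

    The key point of the sketch is that query answering reduces to linear-algebra operations, so gradients of a downstream loss flow back to the weights of individual facts.
    
    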

  • 10:00 a.m.: Poster spotlights for papers 1 to 9
  • 10:30 a.m.: Coffee break
  • 11:00 a.m.: Invited talk by Daniel Lowd
    Title: Adversarial Statistical Relational AI [pdf]

    Statistical relational AI has worked to unify probabilistic and logical approaches to AI, but mostly ignores game-theoretic approaches. In a growing number of domains, we need to combine all three. Social network spam, online auction fraud, fake reviews, and terrorism are inherently adversarial as well as statistical and relational. Such settings will be increasingly common as AI is deployed in systems that interact with humans, both competitively and cooperatively.

    In this talk, I will make the case for developing AI methods that are game-theoretic, as well as statistical and relational, and discuss our initial work towards developing such methods. Specifically, I will present two methods for learning adversarially-robust Markov logic networks. The first method learns robust collective classification models by incorporating a model of the adversary directly into the learning objective; the second generalizes to any structured prediction problem by representing robustness as an equivalent regularizer. I will conclude by discussing future directions and open questions.

  • 12:00 p.m.: Poster spotlights for papers 10 to 18
  • 12:30 p.m.: Lunch break


  • 2:00 p.m.: Invited talk by Percy Liang
    Title: Querying Unnormalized and Incomplete Knowledge Bases [pdf]

    In an ideal world, one might construct a perfect knowledge base and use it to answer compositional queries. However, real-world knowledge bases are far from perfect---they can be inaccurate and incomplete. In this talk, I show two ways that we can cope with these imperfections by directly learning to answer queries on the imperfect knowledge base. First, we treat semi-structured web tables as an unnormalized knowledge base and perform semantic parsing on it to answer compositional questions. Second, we show how to embed an incomplete knowledge base to support compositional queries directly in vector space. Finally, we discuss some ideas for combining the best of both worlds.
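    A toy, hypothetical sketch of the second idea (assumed details, not the speaker's code): if entities are vectors and relations are matrices, a compositional path query such as profession(spouse(·)) can be answered by composing relation matrices in vector space, then ranking candidate entities by similarity to the resulting query vector.

    ```python
    # Hypothetical sketch: compositional path queries in vector space.
    # Embeddings here are random placeholders; in practice they would be
    # learned from the (incomplete) knowledge base.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 8  # embedding dimension

    entity_vec = {e: rng.normal(size=d)
                  for e in ["obama", "michelle", "lawyer"]}
    rel_mat = {r: rng.normal(size=(d, d))
               for r in ["spouse", "profession"]}

    def path_query(start, relations):
        """Traverse a relation path entirely in vector space."""
        v = entity_vec[start]
        for r in relations:
            v = rel_mat[r] @ v
        return v

    q = path_query("obama", ["spouse", "profession"])
    # Rank candidate answers by similarity to the query vector.
    ranked = sorted(entity_vec, key=lambda e: -entity_vec[e] @ q)
    ```

    Because the query never consults explicit facts, it can return plausible answers even for facts missing from the knowledge base, which is the appeal of embedding-based querying.
    
    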

  • 3:00 p.m.: Poster spotlights for papers 19 to 25 and late-breaking posters
  • 3:30 p.m.: Poster session (with coffee)
  • 5:30 p.m.: End

Accepted Papers


Organizing Committee

For comments, queries and suggestions, please contact:
  • Guy Van den Broeck (UCLA) 
  • Mathias Niepert (NEC Labs) 
  • Sebastian Riedel (University College London) 
  • David Poole (University of British Columbia) 

Program Committee

  • Hendrik Blockeel (KU Leuven) 
  • Guillaume Bouchard (University College London)
  • Hung Bui (NLU Lab, Nuance) 
  • Arthur Choi (University of California, Los Angeles)
  • Jaesik Choi (UNIST) 
  • James Cussens (University of York) 
  • Adnan Darwiche (University of California, Los Angeles)
  • Jesse Davis (KU Leuven)
  • Martine De Cock (University of Washington Tacoma) 
  • Rodrigo de Salvo Braz (SRI International) 
  • Pedro Domingos (University of Washington, Seattle)
  • Stefano Ermon (Stanford)
  • Paolo Frasconi (University of Florence)
  • David Jensen (University of Massachusetts Amherst) 
  • Henry Kautz (University of Rochester)
  • Kristian Kersting (TU Dortmund) 
  • Angelika Kimmig (KU Leuven)
  • Daniel Lowd (University of Oregon) 
  • Sriraam Natarajan (Indiana University) 
  • Jennifer Neville (Purdue University)
  • Dan Olteanu (Oxford)
  • Jay Pujara (University of Maryland) 
  • Tim Rocktäschel (University College London)
  • Scott Sanner (Oregon State University) 
  • Jude Shavlik (University of Wisconsin, Madison)
  • Daniel Sheldon (University of Massachusetts, Amherst) 
  • Sameer Singh (University of Washington, Seattle) 
  • Heiner Stuckenschmidt (University of Mannheim) 


StarAI is currently provoking a lot of new research and has tremendous theoretical and practical implications. Theoretically, combining logic and probability in a unified representation and building general-purpose reasoning tools for it has been a dream of AI dating back to the late 1980s. Practically, successful StarAI tools will enable new applications in several large, complex real-world domains, including those involving big data, social networks, natural language processing, bioinformatics, the web, robotics, and computer vision. Such domains are often characterized by rich relational structure and large amounts of uncertainty. Logic helps to effectively handle the former, while probability helps to effectively manage the latter. We invite researchers in all subfields of AI to attend the workshop and to explore together how to reach the goals imagined by the early AI pioneers.

The focus of the workshop will be on general-purpose representation, reasoning and learning tools for StarAI as well as practical applications. Specifically, the workshop will encourage active participation from researchers in the following communities: satisfiability (SAT), knowledge representation (KR), constraint satisfaction and programming (CP), (inductive) logic programming (LP and ILP), graphical models and probabilistic reasoning (UAI), statistical learning (NIPS, ICML, and AISTATS), graph mining (KDD and ECML PKDD) and probabilistic databases (VLDB and SIGMOD). It will also actively involve researchers from more applied communities, such as natural language processing (ACL and EMNLP), information retrieval (SIGIR, WWW and WSDM), vision (CVPR and ICCV), semantic web (ISWC and ESWC) and robotics (RSS and ICRA).



Previous Workshops

Previous StarAI workshops were held in conjunction with AAAI 2010, UAI 2012, AAAI 2013, AAAI 2014, and UAI 2015, and were among the most popular workshops at the conferences.