Report: Joint Conference on Artificial Intelligence in Stockholm

By Maximilian Kroker

Älvsjö is a suburban district of Stockholm with a population of around 22,000. The area consists of houses, playgrounds and small grocery stores, and on a typical day one would only meet Swedish locals. Not so in the middle of July: between the 13th and the 19th of July, the aisles of the little grocery stores were crowded with artificial intelligence researchers from around the world. The reason for this phenomenon is the annual International Joint Conference on Artificial Intelligence, held at the Stockholmsmässan convention center in Älvsjö and attended by more than 1200 AI scholars. The sudden increase of artificial intelligence researchers in the local population was hard to miss: if a store didn't exist on Google Maps, it didn't exist at all.

Most researchers attend IJCAI (pronounced "idge-kai") to present their accepted research papers. With AI being a very broad field, the topics of these papers range from natural language processing to algorithmic game theory. IJCAI is known for being very selective with the papers it receives, so an accepted IJCAI paper is a respectable achievement. Another reason to attend IJCAI are the tutorials, in which experts present the basics of their research area in two to four hours. One of the first tutorials, "Game Theory to Data Science: Eliciting Truthful Information", was held by Boi Faltings, founder and director of the Artificial Intelligence Laboratory at EPFL. His research tackles a problem faced by platforms like Amazon: they have to deal with users who provide reviews containing wrong information. The problem can be overcome by incentivizing users to write only truthful reviews, using methods from mathematics, statistics and game theory. The results of his research are not only relevant for the quality of the content presented on these platforms; the findings could also be used to comply efficiently with the correctness requirement recently introduced by the GDPR.
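
As a concrete illustration of the game-theoretic idea, the sketch below implements a simple "output agreement" mechanism in Python: a reviewer is rewarded only if their report matches that of a randomly chosen peer reviewing the same item. This is a minimal, hypothetical example, not Faltings' actual mechanism; the function name and payoff values are invented for illustration.

```python
import random

# Illustrative sketch of an "output agreement" mechanism, one of the
# simplest game-theoretic ideas for rewarding truthful reports.
# Hypothetical example only -- not the mechanism presented in the tutorial.

def output_agreement_rewards(reports, reward=1.0):
    """reports: dict mapping reviewer id -> reported rating for one item.
    Each reviewer is paid `reward` if their report matches the report of a
    randomly chosen peer, and nothing otherwise."""
    rewards = {}
    reviewers = list(reports)
    for reviewer in reviewers:
        peers = [r for r in reviewers if r != reviewer]
        peer = random.choice(peers)
        rewards[reviewer] = reward if reports[reviewer] == reports[peer] else 0.0
    return rewards

# Example: three reviewers rate the same product.
print(output_agreement_rewards({"alice": 5, "bob": 5, "carol": 2}))
```

Because a reviewer cannot see the peer's report in advance, agreeing with it is easiest when both simply report what they observed, which is the intuition behind rewarding agreement.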

Vincent Conitzer, a computer scientist at Duke University, presented a different perspective on artificial intelligence. Part of his tutorial focused on the moral behavior of artificially intelligent agents. There are two approaches to making artificial agents behave morally. The first is to ask humans about their moral opinions; the gathered data can then be used to build a basic moral framework for the agent. The Moral Machine project, for instance, does this for the dilemmas faced by autonomous vehicles. The second, much harder approach is to extend game theory to incorporate moral reasoning. This approach is still very theoretical and practical applications may take some time. However, the first approach has its own set of problems: using machine learning or letting the majority decide could perpetuate existing biases and discrimination against minorities. At the same time, the technology could provide insights that help overcome these problems. Overall, moral artificial intelligence promises to support human actors in their moral decision making.
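
A minimal sketch of the first approach, assuming crowd-sourced votes on a couple of invented dilemma scenarios: the agent's "moral framework" is simply the majority decision per scenario. Real projects such as Moral Machine work with far richer data, so this is purely illustrative.

```python
from collections import Counter

# Invented dilemma scenarios and votes; a real survey would be far larger
# and describe each scenario through many features.
votes = {
    "swerve_to_save_five_pedestrians": ["swerve", "swerve", "stay", "swerve"],
    "swerve_onto_single_bystander":    ["stay", "stay", "swerve", "stay"],
}

def majority_policy(votes_per_scenario):
    """Return the majority decision for each dilemma scenario."""
    return {scenario: Counter(v).most_common(1)[0][0]
            for scenario, v in votes_per_scenario.items()}

print(majority_policy(votes))
```

Such a simple majority rule also makes the bias problem concrete: the resulting policy reproduces whatever preferences dominate the surveyed population.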

Moral reasoning is not the only field outside computer science where AI will have an impact. Artificial intelligence and the law, which was also the title of the tutorial given by the linguist and computer scientist Adam Wyner, has gained significant traction in recent years. Predicting legal outcomes has become more accurate: using machine learning, the decisions of the U.S. Supreme Court can be predicted with an accuracy of 70% (Katz, Bommarito, and Blackman, 2017). This is achieved primarily by using non-substantive metadata about the case, such as the month in which the decision was made. Despite the promising accuracy of such a prediction system, a purely non-substantive analysis is not desirable. The result is not explainable, and a conviction based solely on the fact that the case was tried in November is not very helpful, even if the prediction aligns with 70% of what the Supreme Court justices would decide. On the other hand, a purely substantive analysis would require a machine learning method that works at the semantic level, and the state of the art still has a hard time analyzing the semantics of a text. Legal texts are even more difficult to process due to their unique structure and greater complexity. This is why the Stanford Parser, a tool used to process natural language, simply breaks down when applied to a legal text.
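
To make the metadata-based approach concrete, the sketch below trains a random forest on a handful of invented, non-substantive case features. It is only a rough illustration in the spirit of Katz, Bommarito, and Blackman (2017); their actual feature set, data and model differ.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Invented toy data: only non-substantive metadata, no case content.
cases = pd.DataFrame({
    "decision_month": [11, 3, 6, 11, 1, 6, 10, 4],
    "lower_court_disposition": [1, 0, 1, 1, 0, 0, 1, 0],  # 1 = petitioner lost below
    "issue_area": [2, 5, 2, 8, 5, 2, 8, 5],               # coded issue category
    "affirmed": [0, 1, 0, 0, 1, 1, 0, 1],                 # label: did the Court affirm?
})

X, y = cases.drop(columns="affirmed"), cases["affirmed"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

The model never sees the legal reasoning itself, which is exactly why its predictions, however accurate, cannot be explained in legal terms.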

But machine learning and deep learning are not the only game changers in AI and law. Legal work can also be accelerated by using graph theory and methods from logic to improve case-based reasoning, elicit new arguments or structure known arguments (a toy example of such a formal argument representation is sketched below). Before these formal tools become practical, there is one hurdle to overcome: currently, the data for these legal tools has to be entered by hand, because the key information in a legal source, be it a statute or a court decision, cannot be extracted in an automated way. This hurdle can be overcome if the format of legal texts adapts to the requirements of the formal tools. Although demanding a machine-processable legal format from the legislator may seem utopian, the Scottish government has already started experimenting with XML-based rule languages known as RuleML and LegalRuleML. The field of law and artificial intelligence has thus become very exciting, and the legal landscape will certainly change in the next decade.
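
To give an idea of what such a formal tool looks like, here is a toy Dung-style abstract argumentation framework in Python: a few invented legal arguments, an "attacks" relation between them, and a computation of the grounded extension (the arguments that can be accepted without controversy). The argument names are hypothetical, and in practice the encoding would come from a machine-readable source such as LegalRuleML rather than being typed in by hand.

```python
# Toy abstract argumentation framework: arguments, an "attacks" relation,
# and the grounded extension as the least fixpoint of the characteristic
# function. Argument names are invented for illustration.

arguments = {"damages_are_owed", "contract_is_invalid",
             "contract_was_signed_in_writing"}

attacks = {
    ("contract_is_invalid", "damages_are_owed"),            # attacker, attacked
    ("contract_was_signed_in_writing", "contract_is_invalid"),
}

def attackers(arg):
    """All arguments that attack `arg`."""
    return {a for (a, b) in attacks if b == arg}

def grounded_extension(args):
    """Iterate F(S) = {a : every attacker of a is itself attacked by some
    member of S}, starting from the empty set, until a fixpoint is reached."""
    accepted = set()
    while True:
        defended = {a for a in args
                    if all(attackers(att) & accepted for att in attackers(a))}
        if defended == accepted:
            return accepted
        accepted = defended

# The damages claim survives because its only attacker is itself defeated.
print(grounded_extension(arguments))
```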