Artificial Intelligence
Natural language processing
Semantic web

Improvement of the precision of relation extraction system using a machine learning-based filter

Student: Kevin Lange Di Cesare

Supervisor: Michel Gagnon

Co-supervisor(s): Amal Zouaq, Ludovic Jean-Louis

In the current state of the Semantic Web, the quantity of available data and the multiplicity of its uses call for continuous evaluation of the quality of this data across the various Linked Open Data (LOD) datasets. These datasets are based on RDF syntax, i.e. <subject, relation, object> triples such as <Montréal, is a city of, Québec>. The LOD cloud can therefore be represented as a huge graph, where every triple links two nodes, “subject” and “object”, by an edge labelled “relation”. In this representation, each dataset is a sub-graph. DBpedia, one of the major datasets, is commonly considered the central hub of this cloud. Indeed, the ultimate purpose of DBpedia is to provide all the information present in Wikipedia, “translated” into RDF; it therefore covers a wide range of domains, allowing links with every other LOD dataset, including the most specialized ones.

From this wide coverage arises one of the fundamental concepts of this project: the notion of “domain”. Informally, a domain is a set of subjects with a common theme. For instance, the domain Mathematics contains subjects such as algebra, function or addition. More formally, a domain is a sub-graph of DBpedia whose nodes represent domain-related concepts. Currently, the automatic extraction methods behind DBpedia are usually far less effective when the target subject is a concept than when it is a named entity (such as a person, city or company). Hence our first hypothesis: the domain-related information available in DBpedia is often poor, since domains are constituted of concepts. In the first part of this research project, we confirm this hypothesis by evaluating the quality of domain-related knowledge in DBpedia for 17 semi-randomly chosen domains.
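As a small illustration of the graph view described above, the following Python sketch stores RDF triples as labelled edges of a directed graph and extracts a domain as an induced sub-graph. The triples and the Mathematics domain set here are hypothetical examples for illustration, not actual DBpedia data.

```python
from collections import defaultdict

# Hypothetical <subject, relation, object> triples; real data would come
# from a DBpedia dump or a SPARQL endpoint.
triples = [
    ("Montréal", "is a city of", "Québec"),
    ("Addition", "type", "Mathematical operation"),
    ("Algebra", "subfield of", "Mathematics"),
]

# The LOD cloud fragment as a directed labelled graph: each subject node
# maps to a list of (relation, object) edges.
graph = defaultdict(list)
for subject, relation, obj in triples:
    graph[subject].append((relation, obj))

# A "domain" is then the sub-graph induced by a set of concept nodes.
math_domain = {"Addition", "Algebra", "Mathematics", "Mathematical operation"}
domain_edges = [
    (s, r, o)
    for s, edges in graph.items() if s in math_domain
    for r, o in edges if o in math_domain
]
print(domain_edges)
```

Only the two edges whose endpoints both belong to the domain survive; the <Montréal, is a city of, Québec> triple is excluded from the Mathematics sub-graph.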
This evaluation is based on three numerical aspects of the “quality” of a domain: 1 – the number of inbound and outbound links for each concept; 2 – the number of links between two domain concepts compared to the number of links between the domain and the rest of DBpedia; 3 – the number of typed concepts, i.e. concepts declared as instances of a class (for example, Addition is an instance of the class Mathematical operation: the concept Addition is typed if the triple <Addition, type, Mathematical operation> appears in DBpedia). We reach the conclusion that the domain-related, conceptual information present in DBpedia is indeed poor along all three axes.

In the second half of this work, we propose two solutions to the quality problem highlighted in the first half. The first proposes potential classes that could be added to DBpedia, addressing the third quality aspect: the number of typed concepts. The second uses an Open Relation Extraction (ORE) system that detects relations in text. By running this system on the abstract (i.e. the first paragraph of the Wikipedia page) of each concept, and classifying the extracted relations according to their semantic meaning, we can 1) propose novel relations between domain concepts, and 2) propose additional potential classes. These two methods currently represent only a first step, but the preliminary results we obtain are very encouraging and suggest that they are highly relevant for correcting the issues highlighted in the first part.
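The three quality aspects above can be sketched as simple counts over a triple set. The function and toy data below are illustrative assumptions of mine, not the project’s actual evaluation code:

```python
def domain_quality(triples, domain):
    """Toy computation of the three domain-quality aspects.

    triples: iterable of (subject, relation, object) tuples
    domain:  set of concept names forming the domain
    """
    in_out_links = {c: 0 for c in domain}  # aspect 1: in/out links per concept
    internal = external = 0                # aspect 2: intra- vs. extra-domain links
    typed = set()                          # aspect 3: concepts with a 'type' triple
    for s, r, o in triples:
        s_in, o_in = s in domain, o in domain
        if s_in:
            in_out_links[s] += 1
        if o_in:
            in_out_links[o] += 1
        if s_in and o_in:
            internal += 1
        elif s_in or o_in:
            external += 1
        if s_in and r == "type":
            typed.add(s)
    return in_out_links, internal, external, len(typed)

# Hypothetical triples, not actual DBpedia data.
triples = [
    ("Addition", "type", "Mathematical operation"),
    ("Algebra", "subfield of", "Mathematics"),
    ("Euler", "field", "Mathematics"),
    ("Montréal", "is a city of", "Québec"),
]
domain = {"Addition", "Algebra", "Mathematics", "Mathematical operation"}

links, internal, external, n_typed = domain_quality(triples, domain)
print(links, internal, external, n_typed)
```

On this toy set, two links stay inside the domain, one link (Euler → Mathematics) crosses its boundary, and only Addition is typed, which is the kind of imbalance the evaluation in the first part quantifies per domain.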




A Machine Learning Filter for Relation Extraction

Kevin Lange Di Cesare, Amal Zouaq, Ludovic Jean-Louis, Michel Gagnon

25th World Wide Web Conference (WWW 2016), Montréal