Early use of knowledge graphs, before the turn of this century, involved building a knowledge graph manually or semi-automatically and applying it to semantic applications such as search, browsing, personalization, and advertisement. Taalee/Semagix Semantic Search in 2000 had a KG that covered many domains and supported search with an equivalent of today’s infobox.
Recently, a number of enterprises have been employing semantic technologies to improve the means by which their content is structured, classified, made available to content consumers, and measured for performance. A knowledge base of this resulting interlinked content is increasingly referred to as a content graph.
Big Data has made many data processing tasks significantly easier over the past couple of years. We now have the capability to perform data processing like never before. However, Big Data comes with a Big Assumption: that we can bring lots of data together in one homogeneous dataset. But what if we can’t?
Data integration has been an active area of computer science research for over two decades. A modern manifestation of data integration is the knowledge graph, which integrates not just data but also knowledge at scale. Tasks such as conceptual modeling, schema/ontology matching, entity matching, and data quality assessment, among others, are fundamental to the data integration process.
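One of the named tasks, entity matching, can be illustrated with a minimal sketch: deciding whether two records from different sources refer to the same real-world entity. The function names, thresholds, and example company names below are illustrative assumptions, not part of the abstract; real systems use far richer blocking and learned similarity models.

```python
from difflib import SequenceMatcher


def normalize(name: str) -> str:
    """Lowercase and strip punctuation so surface variants compare equal."""
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace()).strip()


def match_score(a: str, b: str) -> float:
    """Similarity in [0, 1] between two normalized entity labels."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()


def match_entities(source, target, threshold=0.8):
    """Pair each source record with its best-scoring target above a threshold."""
    matches = []
    for s in source:
        best = max(target, key=lambda t: match_score(s, t))
        score = match_score(s, best)
        if score >= threshold:
            matches.append((s, best, round(score, 2)))
    return matches


# Toy example: two labels for the same company should link across sources.
pairs = match_entities(
    ["Intl. Business Machines"],
    ["International Business Machines", "Acme Corporation"],
)
```

The 0.8 threshold is an arbitrary cut-off for the sketch; in practice it would be tuned on labeled pairs, and candidate generation (blocking) would avoid comparing every source record against every target.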
Many enterprises just work at the edges of the toughest data problems and may not even realize how to tackle them. But some have managed to transform their organizations with the help of knowledge graphs. How did they succeed, and what can we learn from them?
In this presentation, I will show through a number of examples how Linked Open Data, and especially DBpedia, have contributed to AI by making it possible to create intelligent, open-domain applications, i.e. applications which do not have a fixed domain, or for which the domain is not known in advance. This was made evident through a number of high-profile applications (e.g.
“Code is Law” – three famous words of Professor Lawrence Lessig back in 1999, when the Internet emerged as the first important “cyberspace”. This raised fundamental questions about how code would impact our legal environment. Since then, IT has moved further into our lives and is now finally reaching the legal profession. The major questions raised then are still valid.
How one can link structured to unstructured data to get a holistic view and generate more insights.
Specifically trained bots - driven by Semantic Analytics and Artificial Intelligence - can identify substantial contradictions and other inconsistencies within tons of structured and unstructured data.
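The idea of checking structured records against free text can be sketched in a few lines. Everything here is an illustrative assumption - the toy `records` schema, the `check_contradictions` helper, and the naive regex pattern - real contradiction detection would rely on proper entity linking and relation extraction rather than string matching.

```python
import re

# Hypothetical structured records: entity name -> attested fact.
records = {
    "Acme Corp": {"headquarters": "Berlin"},
    "Globex": {"headquarters": "Paris"},
}


def find_mentions(text, entities):
    """Return entities whose name appears verbatim in the text."""
    return [e for e in entities if re.search(re.escape(e), text)]


def check_contradictions(text, records):
    """Flag mentions where the text asserts a headquarters city that
    differs from the structured record (a deliberately simple pattern)."""
    issues = []
    for entity in find_mentions(text, records):
        m = re.search(re.escape(entity) + r".{0,40}?headquartered in (\w+)", text)
        if m and m.group(1) != records[entity]["headquarters"]:
            issues.append((entity, records[entity]["headquarters"], m.group(1)))
    return issues


report = check_contradictions(
    "Acme Corp is headquartered in Munich; Globex remains in Paris.", records
)
```

Here the text claims Munich while the record says Berlin, so the sketch flags one inconsistency; statements that the pattern cannot parse are simply skipped.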
Javier D. Fernández
Ten years into Linked Data there are still many unresolved challenges towards arriving at a truly machine-readable and decentralized stage that would make the promised vision of a Web of Data come true. In this talk we will review the current state of affairs and highlight the key technical and non-technical challenges to the success of LOD.