Industry

Andreas Blumauer

While the term 'semantic search' has become a buzzword in the search market, the concepts behind it remain unclear to most end users. We present a rating system that helps to better understand and classify the different forms of semantic search.

Raffaele Palmieri, Vincenzo Orabona

As is well known, Semantic Web technologies provide a set of facilities for enabling interoperability among software agents on the Web, offering a common framework that allows data to be shared and reused across applications. On the other hand, the related data formats (such as XML and RDF) constitute a suitable means to represent, in a machine-understandable way, the knowledge contained in the great amount of semi-structured and unstructured documents accessible on the Web itself. Following the Semantic Web vision, the latest generation of Content Management Systems (CMS) focuses on data (the information embedded in a document) rather than content (the document itself), thus shifting from a “content-centric” approach to a “data-centric” one. To this end, such systems incorporate semantic annotation modules in order to derive useful information from the managed content and deal with its semantics, leveraging the Linked Data paradigm to relate extracted concepts to available external knowledge (often encoded as vocabularies, taxonomies or ontologies), depending on the application scenario at hand. In this presentation we describe the design and development of a novel Semantic Content Management System, our solution to the content management problem. In particular, we provide a CMS combined with a fully featured semantic metadata repository with reasoning capabilities, built by reusing several open-source solutions (Apache Stanbol, Apache Solr, OpenLink Virtuoso, among others).
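The "data-centric" shift described above can be illustrated with a minimal sketch: instead of storing only documents, the system keeps the facts (triples) extracted from them and links entities to external knowledge via Linked Data URIs. All document IDs, predicates and DBpedia-style URIs below are invented for illustration; they are not part of the system presented in the talk.

```python
# Triples a hypothetical annotation module might extract from documents.
# Each triple is (subject, predicate, object), in the spirit of RDF.
triples = [
    ("doc:1", "mentions", "dbpedia:Vienna"),
    ("doc:1", "mentions", "dbpedia:Apache_Stanbol"),
    ("doc:2", "mentions", "dbpedia:Vienna"),
]

def objects(store, subject, predicate):
    """All objects for a given subject/predicate pair."""
    return [o for s, p, o in store if s == subject and p == predicate]

def subjects(store, predicate, obj):
    """Reverse lookup: which documents mention a given concept?"""
    return [s for s, p, o in store if p == predicate and o == obj]

print(objects(triples, "doc:1", "mentions"))
print(subjects(triples, "mentions", "dbpedia:Vienna"))
```

In a real deployment this role is played by an RDF triple store such as OpenLink Virtuoso, queried with SPARQL rather than list comprehensions.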

Vera Meister, Jonas Jetschni

The implementation of IT service catalogs at public organizations can be considered an effective first step towards IT service management. The latter is becoming increasingly inevitable due to the growing financial, business and security threats faced by public organizations. Traditional IT service catalog implementations are mostly based on common Content Management Systems. A small number of public organizations use document-based catalogs or stick to Configuration Management Databases, which provide a rather technical type of service catalog. None of these implementation types meets all of the valid requirements for an IT service catalog. That is why the development and implementation of a semantic catalog was initiated. A vertical prototype has now been implemented and tested and will be presented at the conference.

Tudor B. Ionescu

In the mobility industry, collaborative processes are often described in natural language and stored in Word and PDF handbooks and logbooks. This unstructured information is complemented by emails and meeting minutes resulting from the communication between project stakeholders (customers, managers, engineers). Execution logs of past processes also contribute to this unstructured repository of process information. In the railway domain, non-functional requirements, such as safety, reliability, certifiability, and standard compliance of both the systems and the business processes used to create them, are key to the success of products and projects. As fulfilling these non-functional requirements is extremely costly and time-consuming, large business organizations constantly seek to automate and optimize the business processes for developing railway systems.

To enable automation and optimization of a business process for configuring railway interlocking systems, a BPMN (Business Process Model and Notation) workflow was implemented using the Camunda Suite, which supports visual semantics and executable code generation from BPMN models. In the proposed solution, semantic technologies are used to infer semantic process models, which refine existing models at runtime. The proposed solution helps reduce the process execution time and costs through process automation and optimization. This is facilitated by semantic technologies and a strict separation of concerns using a 3-process approach: (1) a productive process monitored by (2) a mining process and dynamically refined by (3) an adaptation process.
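The 3-process separation of concerns can be sketched as a toy illustration (plain Python, not Camunda or BPMN code): a productive process emits an execution log, a "mining" step derives statistics from it, and an "adaptation" step refines the model, e.g. by flagging the costliest manual task as the next automation candidate. Task names and durations are invented examples.

```python
# Execution log produced by a hypothetical productive process (process 1).
execution_log = [
    {"task": "collect_requirements", "minutes": 120, "manual": True},
    {"task": "configure_interlocking", "minutes": 45, "manual": True},
    {"task": "generate_report", "minutes": 5, "manual": False},
    {"task": "collect_requirements", "minutes": 90, "manual": True},
]

def mine(log):
    """Mining process (2): aggregate total duration per manual task."""
    totals = {}
    for entry in log:
        if entry["manual"]:
            totals[entry["task"]] = totals.get(entry["task"], 0) + entry["minutes"]
    return totals

def adapt(totals):
    """Adaptation process (3): pick the costliest manual task to automate."""
    return max(totals, key=totals.get)

print(adapt(mine(execution_log)))  # collect_requirements
```

The point of the separation is that the productive process keeps running unmodified while mining and adaptation observe and refine it at runtime.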

Tomas Knap

In my talk, I would like to introduce two pilot projects we ran as part of the COMSODE EU FP7 project, with the Slovak Environment Agency (SEA) and the Czech Trade Inspection Authority (CTIA). The goal of these pilot projects was to help these organisations transform and publish selected datasets as (linked) open data. I will also demonstrate UnifiedViews, an ETL tool for RDF data, and detail its role in Open Data Node, the publication platform prepared in the COMSODE project.

Roland Fleischhacker, Dr. Sonja Kabicher-Fuchs

As one of the largest property management companies in Europe, Stadt Wien - Wiener Wohnen (WW) manages approximately 220,000 community-owned apartments, 47,000 parking spaces and 5,500 shops. More than half a million tenants, and thus about a quarter of Vienna's population, generate 1.5 million customer inquiries to the contact center per year. The reported customer issues are manifold, ranging from technical defects, suggestions, information requests and complaints to commercial issues about rent and operating costs. This large variety of topics, and the proper selection of the associated procedures for handling each concern, remains a major challenge for the employees of the contact center, particularly considering that some of the business processes initiated by the call center incur very high costs.

To increase the quality and speed of concern identification, WW implemented the cognitive decision system DEEP.assist, which went live in June 2014. With DEEP.assist, the call center agent now only has to type in the caller's statements in the form of normal German sentences. In doing so, the agent documents the business case; at the same time, the system analyzes the meaning of the text in real time, so that the agent receives solution proposals while still writing. A key challenge in problem solving was the fact that callers often do not describe the specific problem but instead articulate the symptoms of their concern. With the help of chains of associations, DEEP.assist is able to identify the concern even from very unusual descriptions.
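The "chains of associations" idea can be sketched as a graph search (an illustrative assumption about the mechanism, not DEEP.assist itself): terms from the caller's text are followed through an association graph until a known concern category is reached. The graph, terms and categories below are invented examples.

```python
from collections import deque

# Toy association graph: symptom terms lead, via intermediate
# associations, to a known concern category.
associations = {
    "puddle": ["water"],
    "dripping": ["water"],
    "water": ["pipe"],
    "pipe": ["plumbing defect"],
}
concerns = {"plumbing defect"}  # known concern categories

def identify_concern(term):
    """Breadth-first search along the chain of associations."""
    queue, seen = deque([term]), {term}
    while queue:
        node = queue.popleft()
        if node in concerns:
            return node
        for nxt in associations.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None  # no concern reachable from this term

print(identify_concern("puddle"))  # plumbing defect
```

Even an unusual description ("puddle") resolves to the same concern as a direct one, which is the behavior the abstract describes.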

Miroslav Líška, Marek Šurek

At present it is very difficult to work with government data effectively. A lot of effort is spent just integrating various datasets. The data are often of low quality: inconsistent, incomplete, and published in different formats, all of which limits their integration and use for various purposes. The linked data approach to government data integration currently seems the most promising in this field. Data are annotated with ontologies, so they can easily be linked with semantics and processed with reasoners to infer additional content. When government data are both linked and open, great business value can be produced: on the one hand the data integration process becomes more effective and precise, and on the other hand any software project can benefit from including linked open data in its solution.

This presentation aims to provide information about the Semantic Web adoption process for Slovak government data. First, an initial formal proposal of semantic standards for Slovak government data [SK-SEM2013] is presented. Second, we show how the URI became the key element of the Slovak semantic standards. Third, a new approach to the semantic standards is presented, covering their base properties, i.e. the approved ontologies and a method for URI creation. Finally, a concrete example of government linked data is given: Slovpedia, a Slovak linked open data database, and Pharmanet, a Slovpedia client that provides NLP-based extraction of drug interactions extended with inferencing.
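A method for URI creation typically means minting stable, dereferenceable identifiers from a dataset name and a local record ID. The sketch below is a hedged illustration; the base URI and path pattern are invented for this example and are not the scheme defined by the Slovak standards.

```python
import re

BASE = "https://data.gov.sk/id"  # hypothetical base URI for illustration

def mint_uri(dataset, local_id):
    """Build a deterministic identifier: base / dataset-slug / local id."""
    slug = re.sub(r"[^a-z0-9-]+", "-", dataset.lower()).strip("-")
    return f"{BASE}/{slug}/{local_id}"

print(mint_uri("Register of Legal Entities", "12345678"))
```

Determinism matters here: two publishers processing the same record must arrive at the same URI, otherwise the resulting datasets cannot be linked.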

Michiel Hildebrand

CultuurLINK is a Web application for linking vocabularies, developed to support the cultural heritage community with vocabulary alignment. While several tools already exist to support fully automatic alignment of vocabularies, they are difficult to apply effectively in practice. With CultuurLINK, the user guides the system step by step through the alignment process. In the graphical strategy editor, the user builds a case-specific link strategy out of building blocks that provide operations such as filters and string comparison. The output of each step is directly available for the user to inspect and manually evaluate before deciding which step to take next. Links are exported as SKOS triples, while the definition of the link strategy provides the provenance of these links. CultuurLINK is a new, free service for the Dutch cultural heritage community, part of the national roadmap for digital heritage, which aims to establish a digital infrastructure connecting collections from all over the Netherlands to each other and to the rest of the world.
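One such building block, a string comparison step that emits SKOS mapping triples, can be sketched as follows. The vocabulary contents and the normalization rule are invented examples, not CultuurLINK's actual implementation.

```python
# Two tiny vocabularies: concept ID -> preferred label (Dutch examples).
vocab_a = {"a:1": "Schilderij", "a:2": "Beeldhouwwerk"}
vocab_b = {"b:7": "schilderij", "b:8": "tekening"}

def normalize(label):
    """Case-folding normalization before comparison."""
    return label.strip().lower()

def exact_match(src, tgt):
    """Link concepts whose normalized labels are identical,
    emitting skos:exactMatch triples."""
    index = {normalize(l): c for c, l in tgt.items()}
    return [(c, "skos:exactMatch", index[normalize(l)])
            for c, l in src.items() if normalize(l) in index]

print(exact_match(vocab_a, vocab_b))  # [('a:1', 'skos:exactMatch', 'b:7')]
```

In the step-by-step workflow the abstract describes, the user would inspect this output, keep or reject individual links, and then feed the unmatched remainder into the next building block (e.g. a fuzzier comparison).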

Interactive session: We look forward to discussing your own use cases for searching and linking vocabularies, and to showing how this technology can be used in other domains.

Lieke Verhelst

Organisations that develop semantic models have a lot to think about. The envisioned benefits from semantic solutions have to compete with many factors of uncertainty that threaten project results. Challenges lie not only in the evident scarcity of knowledge, skills and tools but also in other factors such as business objectives and requirements. 

In this talk Lieke Verhelst shares her long experience as a semantic modeller. Side by side with subject matter experts, she has constructed semantic models and infrastructures for the environment, construction and education sectors. She will point out the common pitfalls she has seen during the ontology development process and, while illustrating these, answer the question of why SKOS is the key to success in semantic solution projects.

Julien Gonçalves

The development of Big Data technologies offers new perspectives in building powerful disambiguation systems. New approaches can be imagined to discover and normalize non-controlled vocabularies such as named entities.

In this presentation, I will explain how Reportlinker.com, an award-winning market research solution, developed an inference engine based on supervised analysis to disambiguate the names of companies found in a corpus of unstructured documents.

Through several examples, I will explain the main steps of our approach:
- The discovery of non-verified facts (hypotheses) using a large volume of data
- The transformation of hypotheses into verified facts, using an iterative graph processing system
- The construction of a relational graph to attach new context to each normalized concept.
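The middle step above can be sketched with a union-find pass (an assumed mechanism for illustration, not Reportlinker's actual engine): pairwise hypotheses of the form "name X and name Y denote the same company" are either verified or rejected, and the verified ones are merged into normalized groups. The company names are invented examples.

```python
def normalize_companies(hypotheses):
    """Merge verified same-company hypotheses via union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b, verified in hypotheses:
        if verified:                       # only verified facts are merged
            parent[find(a)] = find(b)

    groups = {}
    for name in parent:
        groups.setdefault(find(name), set()).add(name)
    return sorted(sorted(g) for g in groups.values())

hypotheses = [
    ("IBM", "International Business Machines", True),
    ("IBM Corp.", "IBM", True),
    ("IBM", "Lenovo", False),              # rejected hypothesis: not merged
]
print(normalize_companies(hypotheses))
```

Run iteratively, each pass can generate new hypotheses from the groups produced by the previous one, which is what the graph processing system in the talk scales up to a large corpus.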
