
E-Tutorials

Seven tutorials on various topics, including how to build a knowledge graph, schema.org, and more


5 Matching Videos

May 5th 2020, 9am EST

Knowledge Graphs are fulfilling the vision of creating intelligent systems that integrate knowledge and data at large scale. We see Knowledge Graphs being adopted by the Googles of the world, but not everybody is a Google. Enterprises still struggle to understand their relational databases, which consist of thousands of tables and tens of thousands of attributes, and to work out how all the data fits together. How can enterprises successfully adopt Knowledge Graphs to integrate their data, without boiling the ocean?

May 5th 2020, 9am EST

A hands-on tutorial introducing logic-based knowledge graphs via TerminusDB, aimed at those beginning their knowledge graph journey or looking to develop it further.

May 5th 2020, 1:30pm EST

Modeling your data as a graph has a significant advantage: the schema does not need to be explicitly defined or specified ahead of time, so you can add data to your graph without being constrained by any schema. A less recognized problem with adding data to a graph, however, is the potential loss of backward compatibility for queries designed before the data changed. Using RDF Quads (W3C RDF 1.1 Recommendation, 25 February 2014) as your graph data model would allow the schema evolution caused by adding data to your graph to preserve the backward compatibility of pre-existing queries.
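
As an illustration of the idea (not part of the tutorial materials), here is a minimal Python sketch using rdflib; the graph IRIs and sample data are invented for the example. Original triples live in one named graph, later additions go into another, and a query pinned to the original graph keeps returning the same results after the data changes:

```python
from rdflib import Dataset, Literal, Namespace, URIRef

EX = Namespace("http://example.org/")
ds = Dataset()

# Original data lands in a named graph; the graph IRI is the fourth element of each quad.
v1 = ds.graph(URIRef("http://example.org/graph/v1"))
v1.add((EX.alice, EX.worksFor, EX.acme))

# Later additions go into a separate named graph, leaving v1 untouched,
# even if the new data drifts from the original shape (a literal instead of an IRI).
v2 = ds.graph(URIRef("http://example.org/graph/v2"))
v2.add((EX.alice, EX.worksFor, Literal("ACME Corp.")))

# A pre-existing query pinned to the v1 graph returns the same results as before the addition.
results = ds.query("""
    SELECT ?who ?org WHERE {
        GRAPH <http://example.org/graph/v1> { ?who <http://example.org/worksFor> ?org }
    }
""")
for row in results:
    print(row.who, row.org)  # http://example.org/alice http://example.org/acme
```

Scoping each version of the data to its own named graph is what the quad (as opposed to triple) model buys you: old queries can keep addressing the graph they were written against.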

May 5th 2020, 1:30pm EST

Enterprise knowledge graphs help modern organizations preserve the semantic context of the abundant information available to them. They are becoming the backbone of enterprise knowledge management and AI technologies, thanks to their ability to differentiate things from strings. Still, beyond the hype of repackaging Semantic Web standards for the enterprise, few practical tutorials demonstrate how to build and maintain an enterprise knowledge graph. This tutorial teaches you how to build an enterprise knowledge graph that goes beyond an RDF database and SPARQL by adding a GraphQL protocol. You will learn to overcome critical challenges: exposing a simple-to-use interface for data consumption to users who may be unfamiliar with the information schemas, controlling information access by implementing robust security, and opening the graph for updates while preserving its consistency and quality. You will follow a step-by-step process to (1) start a knowledge graph from a public RDF dataset, (2) generate a GraphQL API that abstracts the RDF database, (3) take a quick GraphQL crash course with examples, and (4) develop a sample web application. Finally, we will discuss other possible directions, such as extending the knowledge graph with machine learning components, extending the graph with additional services, adding monitoring dashboards, and integrating external systems. The tutorial is based on the Ontotext GraphDB and Ontotext Platform products and requires basic RDF and SPARQL knowledge.
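
To give a flavor of steps (2) and (3), the following sketch shows what consuming such a generated GraphQL API might look like from Python. The endpoint URL and the person/name/birthDate fields are hypothetical stand-ins; the actual schema depends on the dataset and on what the platform generates from it.

```python
import requests

# Hypothetical endpoint; the real URL depends on where your GraphQL service runs.
GRAPHQL_ENDPOINT = "http://localhost:8080/graphql"

# Consumers write GraphQL instead of SPARQL; the field names below (person, name,
# birthDate) are illustrative only, since the actual schema is generated from your data.
query = """
{
  person {
    name
    birthDate
  }
}
"""

response = requests.post(GRAPHQL_ENDPOINT, json={"query": query})
response.raise_for_status()
print(response.json()["data"])
```

The point of the abstraction is visible even in this toy: the consumer never sees IRIs, prefixes, or triple patterns, only a typed object shape.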

May 4th 2020, 1:30pm EST

Knowledge graphs have proven to be a highly useful technology for connecting data of various kinds into complex, logic-based models that are easily understood by both humans and machines. Their descriptive power rests in their ability to logically describe data as sets of connected assertions (triples) at the metadata level. However, knowledge graphs have suffered from problems of scale when used against large data sets with lots of instance data, which has by and large hampered their adoption at enterprise scale. In the meantime, big data systems (using statistics) have matured to handle instance data at massive scale, but these systems often lack expressive power: they rely on indexing, which is often incomplete for solving advanced analytical problems. LeapAnalysis is a new product that marries these two worlds by using graph technologies for metadata while leaving all instance data in its native source. This allows the knowledge graph to stay small and computationally tractable, even at high scale in environments with billions of pieces of instance-level data. LeapAnalysis uses API connectors that translate graph-based queries (from the knowledge graph) into other data formats (e.g., CSV, relational, tabular) to fetch the corresponding instance data from source systems, without the expensive step of migrating or transforming the data into the graph. Data stays truly federated, and the knowledge graph is virtualized across those sources. Machine learning algorithms read the schema of each data source and allow users to quickly align those schemas with their reference model in the knowledge graph. Using this technique, graph-based SPARQL queries can run natively against a wide range of data sources and produce extremely fast query response times, with instance data coming in as fragments from multiple sources all in one go.
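
The following Python sketch illustrates the general federation idea in miniature. It is not LeapAnalysis's connector API (which the abstract does not describe at the code level), just a toy that resolves a graph-level triple pattern against a tabular source on demand, so the instance data never has to be loaded into the graph. All names, IRIs, and data in it are invented for the example.

```python
import csv
import io

# Conceptual sketch of federation only, not LeapAnalysis's actual connectors.
# The knowledge graph keeps the schema mapping, while instance data stays in its
# native source (here an in-memory CSV standing in for a real system) and is
# fetched on demand when a graph-level query needs it.

# Stand-in instance data that would normally live in the source system, not the graph.
CSV_SOURCE = io.StringIO(
    "id,name,dept\n"
    "e1,Ada Lovelace,Research\n"
    "e2,Grace Hopper,Engineering\n"
)

# Hypothetical alignment of graph-level properties to source columns, the kind of
# mapping a user produces after matching a source schema to the reference model.
PROPERTY_TO_COLUMN = {
    "http://example.org/schema/employeeName": "name",
    "http://example.org/schema/department": "dept",
}

def fetch_instances(source, predicate):
    """Resolve a (?s, <predicate>, ?o) triple pattern against a tabular source."""
    column = PROPERTY_TO_COLUMN[predicate]
    source.seek(0)
    for row in csv.DictReader(source):
        yield row["id"], row[column]  # the 'id' column plays the role of the subject

for subject, value in fetch_instances(CSV_SOURCE, "http://example.org/schema/employeeName"):
    print(subject, value)
```

Because only the property-to-column mapping lives at the graph level, the graph stays small regardless of how many rows sit in the underlying sources, which is the scaling argument the abstract makes.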