Knowledge Graph Conference 2020

May 4-7, 2020
Virtual
Knowledge graphs are organized, curated collections of facts that support models for understanding the world. This conference gathers technology leaders, researchers, academics, vendors, and, most importantly, practitioners who know the discipline. For KGC 2020, attendees can participate from anywhere in the world, from the comfort of their homes. We will stream the content, provide access to our speakers, support chat and networking, and make all of the content available live and on-demand after the event.
Twitter:
#kgconf

48 Matching Videos

May 7th 2020, 2:20pm EST

As part of the Linked Data For Production: Pathway to Implementation (LD4P2) project’s larger goal of building a pathway to the use of linked data in the description of library resources, we are exploring the integration of linked data sources in library discovery interfaces. Through a series of focused experiments and prototype building, aided by user studies and feedback, we have explored linking and displaying connections between library catalog data and linked data sources such as Wikidata and DBpedia, as well as library authorities such as FAST, Library of Congress Subject Headings, and the Library of Congress Name Authority File. Examples of areas we investigated for the integration of linked data include: knowledge panels bringing in contextual information and relationships from knowledge graphs like Wikidata to describe people and subjects related to library resources in the catalog; suggested searches based on user-entered queries using results from Wikidata and DBpedia; and browsing experiences for subjects and authors bringing in relationships and data from Wikidata and library authorities. In this presentation, we will review the challenges and opportunities of bringing information from external knowledge graphs into a library catalog to support discovery. LD4P2 (http://ld4p.org) is a multi-institution collaborative effort funded by the Andrew W. Mellon Foundation.

May 6th 2020, 12pm EST

Predictive analytics in inventory management has not been the traditional domain of knowledge graphs and semantics; however, it is a surprisingly natural fit. This talk will review knowledge graphs in the supply chain and look into the details of implementation. In our central case, using a semantic model, we build a ‘digital twin’ of a complex inventory management supply chain. Data from heterogeneous sources, including warehouse management systems, point-of-sale systems and weather data, are then imported into the knowledge graph. Using the graph, we carry out analytics, optimization, scheduling and Monte Carlo simulations: a complex set of operations built around the central supply chain knowledge graph. The net result is a predictive analytics system that delivers real value to the enterprise (up to a 50% reduction in inventory). The knowledge graph can be extended to include product information and other central commercial data use cases. This presentation will draw on the production delivery of TerminusDB to the largest retailer in Ireland.
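
To make the simulation side concrete, here is a minimal Monte Carlo sketch of a single-SKU inventory policy. The demand parameters, reorder point and order quantity are invented for illustration; this shows the general technique, not TerminusDB's implementation.

```python
import random

def simulate_stockouts(reorder_point, order_qty, lead_time_days=3,
                       mean_daily_demand=40, horizon_days=365, runs=1000):
    """Monte Carlo estimate of the stockout rate for one SKU."""
    stockout_days = 0
    for _ in range(runs):
        on_hand, pipeline = order_qty, []  # pipeline holds (arrival_day, qty)
        for day in range(horizon_days):
            # Receive any replenishment orders arriving today.
            on_hand += sum(q for d, q in pipeline if d == day)
            pipeline = [(d, q) for d, q in pipeline if d != day]
            # Random daily demand with the chosen mean.
            demand = round(random.expovariate(1 / mean_daily_demand))
            if demand > on_hand:
                stockout_days += 1
            on_hand = max(0, on_hand - demand)
            # Reorder when the inventory position falls below the reorder point.
            position = on_hand + sum(q for _, q in pipeline)
            if position <= reorder_point:
                pipeline.append((day + lead_time_days, order_qty))
    return stockout_days / (runs * horizon_days)

print(f"stockout rate: {simulate_stockouts(150, 300):.3%}")
```

Sweeping the reorder point and order quantity in such a simulation is one way a digital twin can quantify inventory-reduction opportunities before changing the physical supply chain.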

May 7th 2020, 3pm EST

Pinterest is a popular Web application that has over 250 million active users. It is a visual discovery engine for finding ideas for recipes, fashion, weddings, home decoration, and much more. In the last year, the company decided to create a knowledge graph that aims to represent the vast amount of content and users on Pinterest, to help both content recommendation and ads targeting. In this talk, we present the engineering of an ontology, the Pinterest Taxonomy, that forms the core of Pinterest's knowledge graph, the Pinterest Taste Graph. We describe modeling choices and enhancements to the cloud-based WebProtégé tool that we used for the creation of the ontology. In two months, eight Pinterest engineers, without prior experience of ontologies, knowledge graphs, and WebProtégé, revamped an existing taxonomy of noisy terms into an OWL ontology, which they then combined with additional structured information and machinery to form the Pinterest Taste Graph. We share our experience and present the key aspects of our work that we believe will be useful for others working in this area.

May 7th 2020, 12pm

In this presentation, we'll cover why a data catalog is the first knowledge graph that an organization should build. Most organizations don't have a clear picture of their data assets and how they're being used - a data catalog helps provide that picture and unlock the value of data. We'll explore why a data catalog is a knowledge graph, and how a data catalog can be used as a foundational asset to build more granular domain and application knowledge graphs.

May 6th 2020, 2:40pm EST

Expressing opinions and interacting with others on the Web has led to the production of an abundance of online discourse data, such as claims and viewpoints on controversial topics, their sources and contexts (e.g., events, entities). These data constitute a valuable source of insights for studies into misinformation spread, bias reinforcement, echo chambers or political agenda setting. While today's knowledge graphs enable data reuse and federation, thus improving information retrieval and facilitating research and knowledge discovery in various fields, they do not store information about claims and related online discourse data, making it difficult to access, query and reuse this wealth of information. In my talk, I will present recent work in collaboration with the Leibniz Institute of Social Sciences GESIS (Germany) on the construction of ClaimsKG - a knowledge graph of fact-checked controversial claims, which facilitates structured queries about their truth values, authors, dates, journalistic reviews and other kinds of metadata, and provides ground truth data for a number of tasks relevant to the analysis of societal debates on the web. I will discuss perspectives on modelling claims in a generalized and contextualized manner, as well as related challenges such as claim disambiguation and the assessment of claim relatedness. I will present preliminary results on learning claim vector representations (embeddings) from ClaimsKG and their application to the task of automatic fact-checking.
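
As a rough illustration of the embedding application, the sketch below scores claim relatedness by cosine similarity over toy vectors standing in for embeddings learned from ClaimsKG; the claims and numbers are invented.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two claim embeddings."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional embeddings standing in for learned claim vectors.
claims = {
    "Vaccine X causes condition Y":       np.array([0.9, 0.1, 0.0, 0.2]),
    "Condition Y is linked to vaccine X": np.array([0.8, 0.2, 0.1, 0.3]),
    "City Z banned plastic bags in 2019": np.array([0.0, 0.9, 0.7, 0.1]),
}

query = "Vaccine X causes condition Y"
for text, vec in claims.items():
    if text != query:
        print(f"{cosine(claims[query], vec):.2f}  {text}")
# The paraphrased claim scores ~0.98; the unrelated one ~0.10.
```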

May 6th 2020, 10:20am

The proposed meta-graph solution for recommender systems is a living process for semi-automatically resolving recommendations using guided queries upon a knowledge graph. In addition, this solution is explainable; it can provide comprehensible recommendations which show the reason for each result along with a statistical measure. A detailed use case for this application, an issue tracking system for Oil & Gas companies, is presented in this work. In the introduced use case, the interlinks between failure events and their associated actions are captured in a knowledge graph. To overcome data sparsity, scarcity, and non-trivial relationships, a relatively new concept called ‘meta-graphs’ is employed. Using meta-graph patterns defined on the knowledge graph schema, we can recommend actions based on the semantic relatedness of the various failure events and their corrective actions. Moreover, the recommendation results selected by the end user and the meta-graphs evaluated by the domain experts are automatically graded during the entire lifecycle of the system for transfer learning, reliability evaluation and reproducibility.
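
A minimal sketch of the meta-graph idea, using a hypothetical failure/action schema: recommendations follow a meta-path over typed edges, and the path counts double as the statistical measure mentioned above.

```python
from collections import Counter, defaultdict

# Toy typed edges (subject, predicate, object) standing in for the
# issue-tracking knowledge graph; the schema and data are invented.
edges = [
    ("pump_failure_17", "similar_to", "pump_failure_09"),
    ("pump_failure_17", "similar_to", "pump_failure_12"),
    ("pump_failure_09", "resolved_by", "replace_seal"),
    ("pump_failure_12", "resolved_by", "replace_seal"),
    ("pump_failure_12", "resolved_by", "flush_line"),
]

index = defaultdict(list)
for s, p, o in edges:
    index[(s, p)].append(o)

def recommend(event, meta_path=("similar_to", "resolved_by")):
    """Score actions by counting instances of the meta-path
    event -similar_to-> event -resolved_by-> action."""
    frontier = [event]
    for predicate in meta_path:
        frontier = [o for node in frontier for o in index[(node, predicate)]]
    counts = Counter(frontier)
    total = sum(counts.values())
    # The count ratio serves as a simple statistical confidence measure,
    # and the meta-path itself is the explanation for the result.
    return [(action, n / total) for action, n in counts.most_common()]

print(recommend("pump_failure_17"))
# [('replace_seal', 0.67), ('flush_line', 0.33)] (approx.)
```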

May 7th 2020, 1:40pm EST

Yes. Why? Because building a knowledge graph means you are consolidating your important knowledge assets, and that forces you to think of the “bigger picture” of all the data you have in your organization, including shared, common domain models. You are not merely shoving data into a warehouse, but actually thinking about how different things connect and fit together. Once built, the graph can serve as the source of data for many applications, including Machine and Deep Learning use cases. In this talk, we will walk through the typical tasks involved in building a knowledge graph, options for data modeling and querying (e.g., RDF and Property Graph), as well as tools and integrations. We will discuss and share insights from customers using knowledge graphs to power innovation and putting them into production.

May 6th 2020, 4:40pm

The Information Management team at Morgan Stanley has built an RDF graph and a semantic knowledge base to help answer domain specific questions, formulate classification recommendations and deliver quality search to our internal users. In doing so over the past 4 years, we also helped other departments across the firm discover and embrace semantic data modeling for their own use cases. In the first part of our presentation, we would like to briefly describe the Semantic Modeling and Ontology consortium we created within the Firm before diving deeper into our knowledge base creation effort. We think that a knowledge base reflecting concepts specifically relevant to our company is extremely valuable to develop semantic technology applications. However, populating knowledge bases can be time consuming, costly and error prone, with an end product that is difficult to maintain. In the second part of this presentation, we would like to discuss a framework for automatic generation of a Simple Knowledge Organization System (SKOS) knowledge base from unstructured text. Our Natural Language Processing (NLP) engine parses the input text to create a semantic knowledge graph, which is then mapped to a SKOS knowledge model. During the linguistic understanding of the text, relevant domain concepts are identified and connected by semantic links -- abstractions of underlying relations between concepts that capture the overall meaning of text based on a corpus of roughly 6,500 policies and procedures published at Morgan Stanley.

May 6th 2020, 3pm

In today’s world, understanding the full context of an event is not easy, especially when countless sources of low-quality, one-dimensional content are fighting for our attention. This leaves most readers of the news uninformed, or spending time piecing together incomplete information. At Dow Jones we are taking advantage of the wealth of data and information we have access to, and modeling a more complete view of the world so that this additional context can be provided through our knowledge graph. As a result of this effort, we will be providing readers with this missing context through The Wall Street Journal. In this talk we will share what Dow Jones is working towards to better inform our readers through the application of graph technologies, NLP, and a lot of data extraction.

May 7th 2020, 2pm EST

Boost your Graph with Semantic NLP: Nicole Moldovan from Lymba demonstrates the Lymba platform and how knowledge graphs help produce better results.
Demo Causality Link

May 6th 2020, 2pm EST

Leveraging a graph of causal forces acting on financial markets

May 6th 2020, 11:20am EST

Vassil Momtchev announces new features of GraphDB and the Ontotext Platform.
Demo Timbr.AI

May 6th 2020, 11:20am EST

timbr – the SQL Knowledge Graph.

May 5th 2020, 9am EST

Knowledge Graphs are fulfilling the vision of creating intelligent systems that integrate knowledge and data at large scale. We observe the adoption of Knowledge Graphs by the Googles of the world. However, not everybody is a Google. Enterprises still struggle to understand their relational databases which consist of thousands of tables, tens of thousands of attributes and how the data all works together. How can enterprises adopt Knowledge Graphs successfully to integrate data, without boiling the ocean?

May 6th 2020, 12:20pm

In the realm of enterprise applications such as cybersecurity and anti-money laundering (AML), data and system engineers team up to deal with interconnected data of great scale and richness. Regulatory needs add requirements for instant traceability and explainability of data and analytic models, to aid and reduce human workloads. Moreover, the teams have to deal with the reliability and timeliness of data and events that are of dubious precision and cleanliness. In this work, we will share our real-world experiences of integrating streaming graph operations with automatically tuned analytics, including Graph Behavior Learning, Machine Learning and Bayesian Reasoning. We will also describe the use of imperfect learning, which is critical to real-time enterprise applications.

May 6th 2020, 2:20pm EST

The development and availability of Web knowledge repositories, in particular Wikipedia, as the largest general-knowledge encyclopedic collection, have changed remarkably not only the way in which individual users fulfill their informational needs but also the way in which information providers organize data by employing large knowledge graphs derived from these repositories. In this talk, we will take a closer look at the task of linking to entities from knowledge graphs both as an enabler for new user experiences on the Web, and in particular in Web news, and as a key instrument for keeping the information from knowledge graphs fresh and accurate. One part of the talk will cover news processing in near-real time for fact extraction and fact verification. Another part will cover news consumption experiences enabled by entity linking from concept-based video navigation to social-media guided news presentation. Finally, the talk will follow a journey from the idea of enabling users to check facts in their Office documents to integrating the Microsoft Knowledge Graph with Excel through the new Data Types feature.

May 7th 2020, 11:20am EST

From Card Sorting to Data Automation: PoolParty—A Complete Semantic Middleware: Andreas Blumauer CEO of Semantic Web Company shows us how to use semantics to drive the business value of your data using the PoolParty Semantic Suite

May 7th 2020, 2:40pm EST

Law firms are starting to build knowledge graphs to power next-generation marketing and business development applications that efficiently integrate data and help deliver the most important intelligence insights to the right people sooner. These systems help firms spot opportunities sooner, author higher-value client alerts, and foster cross-selling and RFP responses. They also enhance the ability of law firms to analyze the relationships between markets, clients, matters, lawyers, and practices to spot growth opportunities. This talk will explain the challenges and benefits of bringing knowledge graphs to the legal services market.

May 6th 2020, 10am

KGC organizer François Scharffe delivers the opening address.

May 6th 2020, 10:40am

Knowledge graphs are increasingly built using complex, multifaceted, machine-learning-based systems relying on a wide range of different data sources. To be effective, these must constantly evolve and thus be maintained. I present work on combining knowledge graph construction (e.g. information extraction) and refinement (e.g. link prediction) in end-to-end systems. I then discuss the challenges of ongoing system maintenance, knowledge graph quality and traceability.

May 4th 2020, 12pm EST

The United Nations (UN) supports 17 Sustainable Development Goals (SDGs) covering topics from poverty to healthcare, education, and beyond. This workshop will gather a community to discuss ongoing industry work and academic research efforts on these goals, and will include a collaborative ideation exercise.

May 6th 2020, 1:20pm EST

The Yahoo Knowledge (YK) graph crawls, reconciles and blends information (around 10B fact triples) from 200M entities across 30 semi-structured source graphs (crawlable sites like Wikipedia, IMDB and LonelyPlanet, as well as licensed feeds) into a merged graph of 75M entities and 5B facts distributed across 140 entity types and 300 attributes. From classifying the entity type of source entities, to reconciling entities across sources (e.g. Brad Pitt from Wikipedia vs. Brad Pitt from IMDB), to blending conflicting and complementary facts for each entity from different sources, the YK graph encapsulates production-scale machine learning solutions: multi-label classification (e.g. predicted entity types for Arnold Schwarzenegger could be Actor, Politician, BusinessPerson, etc.); large-scale, high-precision binary classifiers, along with an array of distributed hashing techniques, to scale a potential billion edge comparisons (de-duplication of entities across sources requires high-precision classifiers, for which we develop active learning and precision-clamped training strategies); and lastly hubs-and-authorities-based fact blending from competing sources. To support product initiatives like surfacing knowledge-augmented results on web and sponsored searches, we build a variety of "knowledge discovery" services: 1. question answering over knowledge triples, as well as reading-comprehension-style question answering, utilizing our blended/merged knowledge graph; 2. related entities for a given entity, beyond direct ontological relations, to generate browsing interest to other sites/properties in Yahoo. In contrast to broad cross-domain knowledge, we delve into deep domain-specific information extraction from news text and videos to power unique experiences for brands like Yahoo! Sports. Specifically, for US sports (NBA/NFL/NHL/MLB/soccer), our text information extraction sits at the crossroads of fact finding in articles, fine-grained entity typing, and topical extractive summarization of temporal topics like trades, contracts, injuries and performances, connecting players and potential teams to provide 360-degree browsing of daily fantasy news and sports rumors. Through our video deep-linking capabilities, we link moments in highlight videos to points in time of a game, so that we can power within-video search and browse experiences: for example, the query "LeBron James's dunks from yesterday" would seek to the exact moments in a highlight video where LeBron dunked, and "Lakers' top scorers tonight" would find the stats of the top Lakers scorers, then seek to the exact moments of their plays in highlight videos.
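
As a sketch of how distributed hashing sidesteps billions of pairwise comparisons, the toy MinHash/LSH example below buckets entity names so that only likely duplicates are ever compared. The entities and parameters are illustrative, not Yahoo's production pipeline.

```python
import hashlib
from collections import defaultdict

def shingles(name, k=3):
    """Character k-grams of a normalized entity name."""
    s = name.lower().replace(" ", "")
    return {s[i:i + k] for i in range(len(s) - k + 1)}

def minhash(tokens, num_hashes=32):
    """MinHash signature: for each seeded hash function, keep the min hashed token."""
    return tuple(
        min(int(hashlib.md5(f"{seed}:{t}".encode()).hexdigest(), 16) for t in tokens)
        for seed in range(num_hashes)
    )

entities = {
    "wiki/Brad_Pitt": "Brad Pitt",
    "imdb/nm0000093": "Brad Pitt (I)",
    "wiki/Arnold_Schwarzenegger": "Arnold Schwarzenegger",
}

# Band the signatures into buckets; only entities sharing a bucket are
# compared, avoiding quadratic all-pairs comparison across sources.
buckets = defaultdict(list)
for eid, name in entities.items():
    sig = minhash(shingles(name))
    for band in range(0, len(sig), 4):  # 8 bands of 4 hash values each
        buckets[(band, sig[band:band + 4])].append(eid)

candidates = {tuple(sorted(ids)) for ids in buckets.values() if len(ids) > 1}
print(candidates)  # likely {('imdb/nm0000093', 'wiki/Brad_Pitt')}
```

In a real system the candidate pairs would then be passed to the high-precision binary classifier the abstract describes; the hashing step only prunes the comparison space.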

May 7th 2020, 5:40pm

An extremely powerful and efficient use of knowledge graphs is to unite well-understood domains of knowledge with novel and specific business/scientific questions. Using small ontologies that embed large reference taxonomies, we are able to move from tactical scientific questions (bottom-up) through common taxonomies and reference datasets (middle-out) to model scientific questions, aligning with enterprise master data strategies on the way up. If the building blocks follow FAIR (Findable, Accessible, Interoperable, Reusable) data principles, reusing these processes becomes more and more efficient over time as the middle layer grows. Examples in the translational medicine space will be highlighted.

May 7th 2020, 5:20pm EST

2020 Talk: Knowledge Graph for Drug Discovery: A critical barrier in current drug discovery is the inability to utilize public datasets in an integrated fashion to fully understand the actions of drugs and chemical compounds on biological systems. There is a need to intelligently integrate the heterogeneous datasets pertaining to compounds, drugs, targets, genes, diseases, and drug side effects that are now available, to enable effective network data mining algorithms to extract important biological relationships. In this talk, we demonstrate the semantic integration of 25 different databases and develop various mining and prediction methods to identify hidden associations that could provide valuable directions for further exploration at the experimental level. (Conflict of Interest: I am the co-founder of Data2Discovery - https://www.d2discovery.com/)

May 6th 2020, 11am

Cherre’s knowledge graph is a model of the entire US real estate ecosystem. The graph incorporates hundreds of millions of entities such as properties, addresses, individual and commercial owners, lenders, brokers, estate managers, lawyers, etc., as nodes, while the edges are various types of connections between the entities. A wealth of attributes is associated with each entity. Cherre’s knowledge graph is a closed-world graph: it allows inferring an absence of connection between two entities if there is no edge between them in the graph. Furthermore, Cherre’s graph is temporal: edges and nodes are added and deleted on an ongoing basis. Some of the main challenges in constructing a closed-world graph from noisy data sources are entity resolution and disambiguation. In this talk, we will present parallel algorithms for entity resolution and disambiguation in Cherre’s knowledge graph, and outline our current work on assessing entity similarities using (temporal) node embedding.
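
A minimal sketch of the merge step in entity resolution, on invented records: once a similarity model emits match pairs, union-find (disjoint sets) collapses them into resolved entities. This illustrates the core technique rather than Cherre's parallel implementation.

```python
from collections import defaultdict

parent = {}

def find(x):
    """Find the cluster representative, with path compression."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    """Merge the clusters containing records a and b."""
    parent[find(a)] = find(b)

# Match pairs emitted by, e.g., name/address similarity classifiers (toy data).
matches = [("deed_owner_1", "llc_42"), ("llc_42", "broker_rec_7")]
for a, b in matches:
    union(a, b)

clusters = defaultdict(list)
for record in ["deed_owner_1", "llc_42", "broker_rec_7", "lender_3"]:
    clusters[find(record)].append(record)
print(list(clusters.values()))
# [['deed_owner_1', 'llc_42', 'broker_rec_7'], ['lender_3']]
```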

In today’s data-driven economy, the company with the most data wins. This means that companies go to great lengths to collect all of our data, sometimes even crossing ethical boundaries. But in the end, we all lose: having the best data process is not an indication of how well a company innovates or what value its services bring. As a result, innovation has come to a standstill, and people are stuck with mediocre data experiences. If data truly is the new oil, then what we’re currently doing is pouring all that oil into barrels—often without the ability to open the lid ourselves. In order for data to fuel the engine, it has to flow much better than it does today. In this talk, I will explain how the Solid project gives people back control over their personal knowledge graphs. In doing so, we enable small and large companies alike to innovate.

May 6th 2020, 2pm EST

Q&A of the 3rd session of the day with Paco Nathan from Derwen AI, Topojoy Biswas from Yahoo and Melliyal Annamalai from Oracle.

May 6th 2020, 11:20am

Q&A of the first session of day 1. Here our own François Scharffe, Neda Abolhassani from Accenture Labs, Paul Groth from the University of Amsterdam and Ron Bekkerman from Cherre answer some questions from the audience.

May 6th 2020, 3:20pm EST

Q&A of the 4th session with Silviu Cucerzan of Microsoft Research, Konstantin Todorov of the University of Montpellier, and Dylan Roy from Dow Jones.

May 6th 2020, 5pm EST

Q&A of Session 5 with Bethany Sehon and Brian Donohue from Capital One, Radu Marian from Bank of America and Nicolas Seyot from Morgan Stanley.

May 6th 2020, 1pm EST

The Rich Context project at NYU Wagner is the knowledge graph complement to the ADRF platform for cross-agency social science research using sensitive data, currently used by 50+ agencies. Rich Context represents metadata about datasets and their use in research, which in turn influences public policy, with a goal of producing recommender systems for analysts and policymakers. Almost all of the code is open source. This talk introduces the background for the project, our team process for collaboration, and several areas where machine learning is used to infer or clean metadata obtained from scholarly infrastructure and for semi-automated graph construction, along with human-in-the-loop feedback mechanisms for domain experts to help improve our graph.

May 6th 2020, 6pm

Akash Magoon presents Nayya, the winner of the startup-investor pitch event. Nayya is a digital health platform that integrates with a company's benefits to help employees find, book, and access high-value care.

May 6th 2020, 12:20pm

The Curious Case of the Semantic Data Catalog: Knowledge graphs have been on the rise and organizations have found a variety of use cases for the technology. One specific type of use case is the implementation of a knowledge graph as a semantic data catalog. With the inherent power of the technology to integrate structured and unstructured information, the application of knowledge graphs as data catalogs seems to be a foregone conclusion. Capturing the semantic context of data in a smart data catalog application provides much more than a simple description of the organization’s data. In this presentation, we will discuss key considerations and business value of semantic data catalogs. We will also review a specific use case where we implemented a semantic data catalog for a government organization to help them track, discover, and govern a large number of data sets.

May 7th 2020, 1pm EST

Enterprises that are building knowledge graphs are rapidly getting a grip on unstructured data with current advances in Natural Language Processing (NLP) techniques. But there is still a large mass of untapped unstructured data: spoken conversations with customers. Speech-to-text for general-purpose conversations (e.g. Google, Alexa, Siri) has proven itself in the market to be highly accurate. However, speech recognition technology for domain-specific industries with lots of product names, industry lingo, and acronyms often creates a challenge for the accuracy and usefulness of the content. In this presentation we will demonstrate how taxonomy-driven speech recognition helps solve these industry-specific terminology challenges for real-time voice capture, and how this process augments an enterprise knowledge graph for customer insights, enabling real-time decision support.
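
One simple way to picture taxonomy-driven correction is fuzzy post-matching of transcript tokens against taxonomy terms, as in this sketch with an invented mini-taxonomy. Real systems typically bias the recognizer itself rather than post-correcting, but the role the taxonomy plays is similar.

```python
import difflib

# A tiny invented taxonomy; in practice these terms (product names, industry
# lingo, acronyms) would come from the enterprise knowledge graph.
taxonomy = ["TerminusDB", "GraphQL", "SPARQL", "Ontotext"]
by_lower = {t.lower(): t for t in taxonomy}

def correct(transcript, cutoff=0.7):
    """Snap near-miss tokens from a speech-to-text transcript onto taxonomy terms."""
    fixed = []
    for token in transcript.split():
        match = difflib.get_close_matches(token.lower(), list(by_lower),
                                          n=1, cutoff=cutoff)
        fixed.append(by_lower[match[0]] if match else token)
    return " ".join(fixed)

print(correct("we ran sparkle queries against ontotext"))
# -> "we ran SPARQL queries against Ontotext"
```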

May 7th 2020, 12:40pm EST

The rise of no-code knowledge graphs

May 5th 2020, 9am EST

A hands-on tutorial that will introduce logic knowledge graphs via TerminusDB to those beginning or looking to develop their knowledge graph journey.

May 5th 2020, 1:30pm EST

Modeling your data as a graph has a significant advantage: the schema does not need to be explicitly defined or specified ahead of time. Thus, you can add data to your graph without being constrained by any schema. One of the less recognized problems with adding data to a graph, however, is the potential loss of backward compatibility for queries designed before the changes are made to the data. Using RDF quads (W3C RDF 1.1 Recommendation, 25 February 2014) as your graph data model allows the schema evolution caused by adding data to your graph to preserve backward compatibility of pre-existing queries.
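
A minimal sketch of the idea with rdflib: because each batch of data lands in its own named graph (the fourth element of the quad), a query scoped to the original graph keeps returning the same answers after new data is added. The IRIs and figures are invented.

```python
from rdflib import Dataset, Literal, Namespace, URIRef

EX = Namespace("http://example.org/")
ds = Dataset()  # a quad store: every triple lives in a named graph

# Original data lands in one named graph...
g2019 = ds.graph(URIRef("http://example.org/graph/2019"))
g2019.add((EX.acme, EX.revenue, Literal(1000)))

# ...and a pre-existing query is scoped to the graphs it was written against.
query = """
SELECT ?rev WHERE {
  GRAPH <http://example.org/graph/2019> {
    ?company <http://example.org/revenue> ?rev
  }
}"""
print([int(row.rev) for row in ds.query(query)])  # [1000]

# Later additions go into a *new* named graph, so the old query's
# answers are unchanged: backward compatibility is preserved.
g2020 = ds.graph(URIRef("http://example.org/graph/2020"))
g2020.add((EX.acme, EX.revenue, Literal(1200)))
print([int(row.rev) for row in ds.query(query)])  # still [1000]
```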

May 5th 2020, 1:30pm EST

Enterprise knowledge graphs help modern organizations preserve the semantic context of abundant accessible information. They are becoming the backbone of enterprise knowledge management and AI technologies, with the ability to differentiate things from strings. Still, beyond the hype of repackaging the semantic web standards for the enterprise, few practical tutorials demonstrate how to build and maintain an enterprise knowledge graph. This tutorial teaches you how to build an enterprise knowledge graph beyond the RDF database and SPARQL, using the GraphQL protocol: overcome critical challenges like exposing a simple-to-use interface for data consumption to users who may be unfamiliar with information schemas; control information access by implementing robust security; and open the graph for updates while preserving its consistency and quality. You will go step by step through the process to (1) start a knowledge graph from a public RDF dataset, (2) generate a GraphQL API to abstract the RDF database, (3) take a quick GraphQL crash course with examples, and (4) develop a sample web application. Finally, we will discuss other possible directions, like extending the knowledge graph with machine learning components, extending the graph with additional services, adding monitoring dashboards, and integrating external systems. The tutorial is based on Ontotext GraphDB and Platform products and requires basic RDF and SPARQL knowledge.
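
As a tiny illustration of the spirit of step (2), the sketch below hides a SPARQL query behind a GraphQL field, here with the graphene library and an in-memory rdflib graph rather than the Ontotext Platform's generated API; the data and field names are invented.

```python
import graphene
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.alice, EX.name, Literal("Alice")))
g.add((EX.bob, EX.name, Literal("Bob")))

class Query(graphene.ObjectType):
    person_names = graphene.List(graphene.String)

    def resolve_person_names(root, info):
        # The resolver hides SPARQL behind a plain GraphQL field, so
        # consumers never need to know the RDF schema.
        rows = g.query("SELECT ?n WHERE { ?p <http://example.org/name> ?n }")
        return sorted(str(row.n) for row in rows)

schema = graphene.Schema(query=Query)
print(schema.execute("{ personNames }").data)
# {'personNames': ['Alice', 'Bob']}
```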

May 4th 2020, 1:30pm EST

Knowledge graphs have proven to be a highly useful technology for connecting data of various kinds into complex, logic-based models that are easily understood by both humans and machines. Their descriptive power rests in their ability to logically describe data as sets of connected assertions (triples) at the metadata level. However, knowledge graphs have suffered from problems of scale when used against large data sets with lots of instance data. This has by and large hampered their adoption at enterprise scale. In the meantime, big data systems (using statistics) have matured which can handle instance data at massive scale, but these systems often lack expressive power: they rely on indexing which is often incomplete for solving advanced analytical problems. LeapAnalysis is a new product that marries these two worlds by utilizing graph technologies for metadata while leaving all instance data in its native source. This allows the knowledge graph to stay small in size and computationally tractable, even at high scale in environments with billions of pieces of instance-level data. LeapAnalysis utilizes API connectors that can translate graph-based queries (from the knowledge graph) into other data formats (e.g., CSV, relational, tabular, etc.) to fetch the corresponding instance data from source systems without the expensive step of migrating or transforming the data into the graph. Data stays truly federated and the knowledge graph is virtualized across those sources. Machine learning algorithms read the schema of the data source and allow users to quickly align those schemas to their reference model (in the knowledge graph). Using this technique, graph-based SPARQL queries can be run natively against a wide range of data sources and produce extremely fast query response times, with instance data coming in as fragments from multiple sources all in one go.
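
A toy sketch of the virtualization idea: a connector answers a graph pattern directly from a tabular source, so instance data never migrates into the graph store. The mapping and data are invented, not LeapAnalysis's actual connector API.

```python
import pandas as pd

# Instance data stays in its native tabular source; only the mapping from
# graph predicates to columns lives at the metadata level.
sales = pd.DataFrame({"company": ["acme", "acme", "globex"],
                      "year":    [2019,   2020,   2020],
                      "revenue": [1000,   1200,   800]})

predicate_to_column = {"ex:revenue": "revenue", "ex:year": "year"}

def fetch(subject, predicate):
    """Answer the graph pattern (subject, predicate, ?o) from the
    CSV-backed source, without materializing any triples."""
    col = predicate_to_column[predicate]
    return sales.loc[sales["company"] == subject, col].tolist()

print(fetch("acme", "ex:revenue"))  # [1000, 1200]
```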

May 6th 2020, 5:20pm EST

The Financial Industry Business Ontology (FIBO) has matured in significant areas over the last few years, but until recently there were few examples to help users understand how to interpret it and build it into their own knowledge graphs. Since early 2019, the FIBO development team has been focused primarily on use cases to address the steep learning curve:
• How to represent businesses and incorporate data about them that is publicly available, such as identifiers, locations, ownership and control information and other details from the Global Legal Entity Identifier Foundation (GLEIF), Open Corporates, state and government registries, etc.
• How to use that information to investigate counterparty relationships
• How to build on that baseline to represent securities (stocks, bonds, etc.) issued by these organizations
• How to leverage those securities to represent the components of an index and track index performance, such as the Dow Jones Industrial Average
• How to use this information to represent more complex derivative instruments
In this talk, we will take a quick cook’s tour through some of the more basic use cases and related ontologies. The approach that FIBO has taken to build a use case stack that can be used to demonstrate the value of knowledge graphs translates well to most domain-specific projects. The use cases, ontologies, and reference and example data are all publicly available and open source. While some of the work is still underway, the basic building blocks are in place. Conference participants can download and try them, and the cook’s tour is intended to provide enough of an introduction to help make that feasible.

May 6th 2020, 1:40pm EST

Graphs provide a new dimension to managing and analyzing data, and enterprises are keen to explore and adopt this technology. There have been some barriers to adoption, including a lack of familiarity with graph query languages and tools and challenges in integrating graph analytics into existing workflows without using specialized silos. We will illustrate customer use cases from three different industries and see how they overcome some of these challenges to successfully deploy solutions based on graphs, enabling significant impact on their businesses. The use cases are the use of RDF for a semantic terminology server in Pharma, use of RDF for linking public data sets (Department of National Statistics in Japan), and use of Property Graphs for fraud detection (Paysafe, an online payments solutions company).

One of the challenges in protecting consumer privacy and managing data risk is the ability to validate that privacy-related data from across our data ecosystem has been identified and categorized accurately and consistently. This challenge has become especially salient in light of recent legislation like GDPR and CCPA. At Capital One, the 2nd line is using standardized semantic models (ontologies) and technology to validate whether data is privacy-related, as well as to recommend privacy categories to data producers.

May 6th 2020, 11:40am

Grasping large Knowledge Graphs is challenging. We present SemSpect, an innovative tool that brings together overview and detail view in one perspective by visually aggregating graph nodes and relationships for efficient exploration and data-driven querying of graphs. It comes either with an OWL RL reasoning back-end for RDF or as a brand-new Graph App for Neo4j. In our talk we will report on experiences with business-critical, real-world Knowledge Graphs from various domains such as the engineering industry, life sciences and intelligence.

May 5th 2020, 9am EST

Electronic health records (EHRs) have become a popular source of observational health data for learning insights that could inform the treatment of acute medical conditions. Their utility for learning insights for informing preventive care and management of chronic conditions however, has remained limited. For this reason, the addition of social determinants of health (SDoH) [1] and ‘observations of daily living’ (ODL) [2] to the EHR have been proposed. This combination of medical, social, behavioral and lifestyle information about the patient is essential for allowing medical events to be understood in the context of one’s life and conversely, allowing lifestyle choices to be considered jointly with one’s medical context; it would be generated by both patients and their providers and potentially useful to both for decision-making. We propose that the personal health knowledge graph is a semantic representation of a patient’s combined medical records, SDoH and ODLs. While there are some initial efforts to clarify what personal knowledge graphs are [3] and how they may be made specific for health [4, 5], there is still much to be determined with respect to how to operationalize and apply such a knowledge graph in life and in clinical practice. There are challenges in collecting, managing, integrating, and analyzing the data required to populate the knowledge graph, and subsequently in maintaining, reasoning over, and sharing aspects of the knowledge graph. Importantly, we recognize that it would not be fruitful to design a universal personal health knowledge graph, but rather, to be use-case driven. In this workshop, we aim to gather health practitioners, health informaticists, knowledge engineers, and computer scientists working on defining, building, consuming, and integrating personal health knowledge graphs to discuss the challenges and opportunities in this nascent space.

May 5th 2020, 12:30pm EST

Second session of the Personal Health Knowledge Graph workshop; see the 9am entry above for the full abstract.

May 7th 2020, 5pm EST

Lambert Hogenhout offers a summary of the workshop Knowledge Graph for Social Good

May 7th 2020, 5pm EST

Ching-Hua Chen summarizes and talks about the workshop on Personal Health Knowledge Graph.