= The Use Cases =

This work is motivated by three use cases:
# Scholarly publishing
# QLever SPARQL engine development
# Wikidata SPARQL examples, tutorials and usage

We will introduce the use cases and the challenges relevant for this work.
== Scholarly publishing ==

Scholarly publishing via scientific events revolves around the entities scholar, paper, institution, proceedings, event and event series.

An example set (using Wikidata Q-Identifiers) would be the scholar [https://scholia.toolforge.org/author/Q80 Tim Berners-Lee (Q80)], the paper Tabulator Redux: Browsing and Writing Linked Data, the institution [https://scholia.toolforge.org/organization/Q42944 CERN (Q42944)], the proceedings LDOW 2008, the event [https://www.wikidata.org/wiki/Q113672821 LDOW 2008 (Q113672821)] and the event series [https://scholia.toolforge.org/event-series/Q105491258 LDOW (Q105491258)].

Scholia is a project that has created a portal at https://scholia.toolforge.org/ that allows users to search, browse and analyse scholarly publishing related data that has been curated in the Wikidata knowledge graph. You can verify the example Q-Identifiers by performing a plain text search.

Note that https://ceur-ws.org/Vol-369/paper11.pdf is missing since it has not yet been curated by the CEUR-WS Semantification project [[CiteRef::Fahl2023Semantification]] and has no persistent identifier other than its URL. The LDOW 2008 proceedings are found at https://dblp.org/db/conf/www/ldow2008.html but were not in the list of events and were therefore added manually by us.

The frustration of incomplete search results is the effect of a combination of factors. We would hope that the SPARQL query [https://github.com/WDscholia/scholia/blob/master/scholia/app/templates/event-series_events.sparql event-series_events.sparql] would work as designed and answer with the year, ordinal, short name, event id and label, and proceedings id and label when given the identifier of an event series.
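To make the expected behaviour concrete, the following minimal sketch shows how such an event-series query could be run against the Wikidata Query Service from Python. It is not the actual Scholia template; it assumes the Wikidata properties P179 (part of the series) and P585 (point in time) and takes the event-series Q-Identifier as a parameter.

<syntaxhighlight lang="python">
# Minimal sketch (not the actual Scholia template): list the events of an
# event series via the Wikidata Query Service.
# Assumed properties: P179 = "part of the series", P585 = "point in time".
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?event ?eventLabel ?year WHERE {
  ?event wdt:P179 wd:%(series)s .   # event is part of the given series
  OPTIONAL { ?event wdt:P585 ?when . BIND(YEAR(?when) AS ?year) }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
ORDER BY ?year
"""

def series_events(series_qid: str) -> list[dict]:
    """Run the sketch query for the given event-series Q-Identifier."""
    response = requests.get(
        ENDPOINT,
        params={"query": QUERY % {"series": series_qid}, "format": "json"},
        headers={"User-Agent": "snapquery-example/0.1"},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["results"]["bindings"]

if __name__ == "__main__":
    # LDOW event series from the example above
    for row in series_events("Q105491258"):
        print(row.get("year", {}).get("value"), row["eventLabel"]["value"])
</syntaxhighlight>

Run for Q105491258 this should list the LDOW workshops together with their years.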
The authors' favorite query for which it would be nice to get the result is: at what conferences did scholars affiliated with an institution q at the time of publishing successfully submit papers in the time frame start_date to end_date? It is quite straightforward to envision a (federated) knowledge graph that provides the data and a SPARQL query that retrieves the results for any major institution. Unfortunately, the reality is much bleaker and there are still obstacles and challenges to overcome. It is promising that natural language input is quite feasible these days; asking the question without any SPARQL knowledge and getting a result has been demonstrated lately with a good success rate (ORKG demonstration by Sören Auer at ESWC 2024).
Scholia currently faces the challenge that the Blazegraph SPARQL engine backing the Wikidata knowledge graph can hold at most 4 terabytes of data. Currently there are claims for more than 110 million entities and more than 15 billion triples in the Wikidata KG. Over 20 million humans and as many scholarly papers are referenced. Unfortunately this is only a small portion of the scholarly publishing data that could be collected from all the digital traces left by scholarly publishing in the past centuries. The [https://en.wikipedia.org/wiki/List_of_academic_databases_and_search_engines list of academic databases and search engines] in Wikipedia includes databases with more than 300 million items. So scholarly publishing data alone would break the 4 TB limit.

The Wikimedia Foundation, which is responsible for running the Wikidata infrastructure, has decided to go the route of a graph split. The scholarly data shall get its own KG and the data will be migrated. Unfortunately this potentially invalidates all current 373 (check #) Scholia queries.

Scholia queries are named parameterized queries. Scholia uses Python and JavaScript as programming languages, SPARQL as a query language and Jinja templates for query parameterization.
See [https://cr.bitplan.com/index.php/Scholia_Parameter_Handling Scholia Parameter Handling] for an analysis.
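The following minimal sketch illustrates the pattern; the template text and the placeholder name q are illustrative and not copied from an actual Scholia template.

<syntaxhighlight lang="python">
# Minimal sketch of Scholia-style query parameterization with Jinja templates.
# The template text and the placeholder name "q" are illustrative only.
from jinja2 import Template

TEMPLATE = Template("""
SELECT ?work ?workLabel WHERE {
  ?work wdt:P50 wd:{{ q }} .   # works authored by the given scholar (P50 = author)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
""")

# Render the named, parameterized query for Tim Berners-Lee (Q80)
sparql = TEMPLATE.render(q="Q80")
print(sparql)
</syntaxhighlight>

Keeping the template separate from the parameter values is what makes the query a named, parameterized query that can be stored, shared and re-executed with different inputs.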
The CEUR-WS semantification project is using Wikidata and Scholia as a target Knowledge Graph to introduce "Metadata-First Publishing" (⚠️ Ref to Papers and paper under review ...). As part of the project [https://cr.bitplan.com/index.php/List_of_Queries a set of SPARQL queries] has been created that is based on the Semantic Publishing Challenges [[CiteRef::lange2014se]]. Some of these queries have already been manually refactored to run successfully on both the Wikidata Query Service and QLever.
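Checking that a refactored query behaves the same on both engines can be automated. The sketch below runs one query against both endpoints and compares the answers; the QLever endpoint URL is an assumption based on the public QLever Wikidata instance and may need to be adjusted.

<syntaxhighlight lang="python">
# Sketch: run the same query against two SPARQL endpoints and compare the answers.
# The QLever endpoint URL is assumed here; adjust it to the instance you use.
import requests

ENDPOINTS = {
    "wikidata-query-service": "https://query.wikidata.org/sparql",
    "qlever-wikidata": "https://qlever.cs.uni-freiburg.de/api/wikidata",
}

# The prefix is declared explicitly since not every endpoint predefines the Wikidata prefixes.
QUERY = """PREFIX wd: <http://www.wikidata.org/entity/>
SELECT (COUNT(*) AS ?triples) WHERE { wd:Q80 ?p ?o }"""

results = {}
for name, url in ENDPOINTS.items():
    response = requests.get(
        url,
        params={"query": QUERY},
        headers={"Accept": "application/sparql-results+json",
                 "User-Agent": "snapquery-example/0.1"},
        timeout=60,
    )
    response.raise_for_status()
    bindings = response.json()["results"]["bindings"]
    results[name] = bindings[0]["triples"]["value"]

# Ideally both endpoints report a similar triple count for Q80;
# differences may stem from different dump dates rather than query rot.
print(results)
</syntaxhighlight>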
The refactoring activities of the CEUR-WS semantification project have shown challenges that are a strong motivation for this work.
== QLever SPARQL Engine development ==

QLever is a SPARQL engine developed at the Computer Science Department of the University of Freiburg by Hannah Bast and her team. Written in C++ and aiming for high performance, QLever is a candidate for replacing Blazegraph as the main SPARQL engine for the Wikidata knowledge graph. QLever is not feature complete. It is developed as an open source project using the GitHub infrastructure (⚠️ Ref. https://github.com/ad-freiburg/qlever).

We have proposed that QLever should use queries from Scholia as a test suite, see https://github.com/ad-freiburg/qlever/issues/859. In this work we go a step further and show how a test suite can be constructed from SPARQL queries which are extractable from the GitHub issues. A typical issue such as https://github.com/ad-freiburg/qlever/issues/896 (CONCAT not implemented) will start with a standard situation/action/expected result setup in which the action is executing a given query and the result is an error or misbehavior. When the issue is closed, the query will run or an alternative query is presented that shows that the issue has been fixed.
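The following sketch shows how such a test suite could be harvested from the issue tracker. It assumes that queries are posted in fenced code blocks marked as sparql, which is a heuristic rather than a guarantee, and it uses the public GitHub REST API.

<syntaxhighlight lang="python">
# Sketch: collect SPARQL snippets from QLever GitHub issues as test-suite candidates.
# Assumes queries are posted inside fenced blocks marked "sparql"; real issues may
# use plain fences or inline text, so this extraction is only a heuristic.
import re
import requests

ISSUES_API = "https://api.github.com/repos/ad-freiburg/qlever/issues"
FENCE = re.compile(r"```sparql\s*(.*?)```", re.DOTALL | re.IGNORECASE)

def candidate_queries(state="closed", per_page=20):
    """Return (issue number, query text) pairs extracted from issue bodies."""
    issues = requests.get(
        ISSUES_API,
        params={"state": state, "per_page": per_page},
        headers={"Accept": "application/vnd.github+json"},
        timeout=60,
    ).json()
    pairs = []
    for issue in issues:
        if "pull_request" in issue:  # the issues endpoint also lists pull requests
            continue
        body = issue.get("body") or ""
        for query in FENCE.findall(body):
            pairs.append((issue["number"], query.strip()))
    return pairs

if __name__ == "__main__":
    for number, query in candidate_queries():
        print(f"# issue {number}\n{query}\n")
</syntaxhighlight>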
== Wikidata SPARQL examples, tutorials and usage ==

The Wikidata SPARQL query service examples wiki page https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/queries/examples was created in 2016 and has had more than 1500 edits since then. As of the creation of this work in July 2024, more than 300 queries have been added to the page to illustrate the usage of Wikidata. The assumption is that all these queries should actually work against the Wikidata service endpoint (hopefully at all times). This work shows that this is not the case and introduces tools and methods to analyze the reasons systematically.
The Wikidata query service at https://query.wikidata.org/ offers an option to create Short-URLs for queries that have been entered using the service. We have included a random set of 100 such queries as a query set for investigation.
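The query text behind each short URL can be recovered by following the redirect. The sketch below assumes that the short URL (for example a w.wiki link) redirects to query.wikidata.org with the percent-encoded query in the URL fragment.

<syntaxhighlight lang="python">
# Sketch: recover the SPARQL text behind a Wikidata Query Service short URL.
# Assumes the short URL redirects to https://query.wikidata.org/#<percent-encoded query>.
from urllib.parse import unquote, urlsplit

import requests

def query_from_short_url(short_url: str) -> str:
    """Follow the redirect once and decode the query from the URL fragment."""
    response = requests.get(short_url, allow_redirects=False, timeout=30)
    location = response.headers["Location"]
    fragment = urlsplit(location).fragment
    return unquote(fragment)

# Example call with a hypothetical short URL identifier:
# print(query_from_short_url("https://w.wiki/4xyz"))
</syntaxhighlight>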
= Structure =

== Introduction ==
# ★★★★★ Query rot versus link rot
# ★★★★☆ Transparency vs. complexity of SPARQL queries
# ★★★★☆ Use cases for named queries
# ★★★★☆ Persistent identifiers
# ★★★☆☆ Query hashes and short_urls

== Mitigating Query Rot using snapquery ==
# ★★★★★ Parameterized queries
# ★★★☆☆ https://web.archive.org/web/20150512231123/http://answers.semanticweb.com:80/questions/12147/whats-the-best-way-to-parameterize-sparql-queries
# ★★★☆☆ https://jena.apache.org/documentation/query/parameterized-sparql-strings.html
# ★★★★☆ Scholia Jinja templates
# ★★★★☆ Technical debt and accidental complexity
# ★★★☆☆ How to deal with aspects that do not (usually) influence the execution of a SPARQL query, like whitespace, comments, capitalization and variable names? (see the normalization sketch below)
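One possible answer, sketched below, is to normalize a query before hashing it: strip comments and collapse whitespace, while leaving keyword capitalization and variable names untouched, since handling those safely would require a real SPARQL parser rather than string operations.

<syntaxhighlight lang="python">
# Sketch: a query hash that ignores aspects which usually do not influence execution,
# such as comments and whitespace. Note that the naive comment stripper would also
# mangle a "#" inside an IRI or string literal; a real implementation needs a tokenizer.
import hashlib
import re

COMMENT = re.compile(r"#[^\n]*")
WHITESPACE = re.compile(r"\s+")

def query_hash(sparql: str) -> str:
    """Return a hex digest of the query with comments stripped and whitespace collapsed."""
    normalized = COMMENT.sub("", sparql)
    normalized = WHITESPACE.sub(" ", normalized).strip()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The two variants below differ only in comments and whitespace and hash identically.
a = "SELECT * WHERE { ?s ?p ?o } LIMIT 1  # demo"
b = """
SELECT *
WHERE { ?s ?p ?o }
LIMIT 1
"""
assert query_hash(a) == query_hash(b)
</syntaxhighlight>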
== SnapQuery Implementation ==
# ★★★☆☆ Natural Language input
# ★★★☆☆ Automatic syntax repairs
# ★★★☆☆ Automatic conversion of SQL input, SPARQL output

== Evaluation ==
# ★★★★☆ Wikidata example queries
# ★★★★★ Scholia and Wikidata graph split
# ★★★☆☆ Other knowledge graphs, e.g., DBLP, OpenStreetMap
# ★★☆☆☆ Perhaps also some NFDI examples or some custom knowledge graphs like FAIRJupyter
# ★★★★★ Quality criteria https://github.com/WolfgangFahl/snapquery/issues/26
# ★★★★☆ List of standard refactoring activities and the support by this approach
# ★★★★☆ Getting your own copy of Wikidata; the infrastructure effort needs to be mentioned
# ★★★☆☆ Usability evaluation https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/
# ★★★★☆ https://github.com/ad-freiburg/qlever/wiki/QLever-performance-evaluation-and-comparison-to-other-SPARQL-engines
# ★★★★☆ A closed issue should have at least one example that runs

== Conclusion and Future Work ==
# ★★★★☆ SPARQL standard changes
# ★★★★★ Hypothesis by Stefan Decker: Query rot is more prominent in KG environments than with relational databases
# ★★★☆☆ Ambiguity of names
# ★★★☆☆ Sensitivity Analysis

== Additional Resources ==
# ★★☆☆☆ https://stackoverflow.com/questions/tagged/sparql
# ★★★☆☆ https://www.semantic-web-journal.net/system/files/swj3076.pdf
# ★★☆☆☆ https://arxiv.org/pdf/cs/0605124
# ★★★☆☆ https://arxiv.org/pdf/1402.0576 optimizing queries
# ★★☆☆☆ https://www.w3.org/TR/REC-rdf-syntax/
# ★★☆☆☆ https://biblio.ugent.be/publication/8632551/file/8653456 Towards supporting multiple semantics of named graphs using N3 rules
# ★★★☆☆ ESWC 2019 proceedings (978-3-030-21348-0.pdf)
# ★★☆☆☆ Linked Data Fragments https://linkeddatafragments.org/ e.g. https://ldfclient.wmflabs.org/ (404 error)

== Related Work ==
# ★★★☆☆ Link rot
# ★★★★☆ Information Hiding and Dependency Inversion Principles
# ★★★☆☆ Federated Queries
# ★★★☆☆ grlc
# ★★☆☆☆ querypulator

=== Use cases ===
==== Scholarly Publishing ====
===== Semantic Publishing Challenge – Assessing the Quality of Scientific Output =====
[[CiteRef::lange2014se]]
{{#scite:
|reference=lange2014se
|type=journal-article
|title=Semantic Publishing Challenge – Assessing the Quality of Scientific Output
|authors=Christoph Lange;Angelo Di Iorio
|journal=Communications in Computer and Information Science
|publisher=Springer International Publishing
|issn=1865-0929;1865-0937|+sep=;
|pages=61-76
|doi=10.1007/978-3-319-12024-9_8
|year=2014
|retrieved-from=https://doi.org/
|retrieved-on=2024-07-10
}}
===== Semantification of CEUR-WS with Wikidata as a Target Knowledge Graph =====
[[CiteRef::Fahl2023Semantification]]
{{#scite:
|reference=Fahl2023Semantification
|type=article
|title=Semantification of CEUR-WS with Wikidata as a Target Knowledge Graph
|authors=Wolfgang Fahl;Tim Holzheim;Christoph Lange;Stefan Decker
|booktitle=Joint Proceedings of TEXT2KG 2023 and BiKE 2023
|year=2023
|url=https://ceur-ws.org/Vol-3447/Text2KG_Paper_13.pdf
|language=English
|publisher=CEUR Workshop Proceedings
|topics=Wikidata, CEUR Workshop Proceedings, knowledge graph
|uses=Wikidata, Semantic MediaWiki, CEUR Workshop Proceedings, Scholia, BeautifulSoup4, QLever
|retrieved-from=https://ceur-ws.org/Vol-3447/Text2KG_Paper_13.pdf
|retrieved-on=2024-07-10
}}
=== Misc ===

==== A Comparison of the Cognitive Difficulties Posed by SPARQL Query Constructs ====
[[CiteRef::warren2020a]]
{{#scite:
|reference=warren2020a
|type=journal-article
|title=A Comparison of the Cognitive Difficulties Posed by SPARQL Query Constructs
|authors=Paul Warren;Paul Mulholland
|journal=Lecture Notes in Computer Science
|publisher=Springer International Publishing
|issn=0302-9743;1611-3349|+sep=;
|pages=3-19
|doi=10.1007/978-3-030-61244-3_1
|year=2020
|retrieved-from=https://doi.org/
|retrieved-on=2024-06-22
|acronym=EKAW 2020
}}
==== Using SPARQL – The Practitioners’ Viewpoint ====
[[CiteRef::warren2018us]]
{{#scite:
|reference=warren2018us
|type=journal-article
|title=Using SPARQL – The Practitioners’ Viewpoint
|authors=Paul Warren;Paul Mulholland
|journal=Lecture Notes in Computer Science
|publisher=Springer International Publishing
|issn=0302-9743;1611-3349|+sep=;
|pages=485-500
|doi=10.1007/978-3-030-03667-6_31
|year=2018
|retrieved-from=https://doi.org/
|retrieved-on=2024-07-09
|acronym=EKAW 2018
}}
==== LSQ: The Linked SPARQL Queries Dataset ====
[[CiteRef::saleem2015ls]]
{{#scite:
|reference=saleem2015ls
|type=journal-article
|title=LSQ: The Linked SPARQL Queries Dataset
|authors=Muhammad Saleem;Muhammad Intizar Ali;Aidan Hogan;Qaiser Mehmood;Axel-Cyrille Ngonga Ngomo
|journal=The Semantic Web - ISWC 2015
|publisher=Springer International Publishing
|issn=0302-9743;1611-3349|+sep=;
|pages=261-269
|doi=10.1007/978-3-319-25010-6_15
|year=2015
|retrieved-from=https://doi.org/
|retrieved-on=2024-07-09
}}
==== Detecting SPARQL Query Templates for Data Prefetching ====
[[CiteRef::lorey2013de]]
{{#scite:
|reference=lorey2013de
|type=journal-article
|title=Detecting SPARQL Query Templates for Data Prefetching
|authors=Johannes Lorey;Felix Naumann
|journal=The Semantic Web: Semantics and Big Data
|publisher=Springer Berlin Heidelberg
|issn=0302-9743;1611-3349|+sep=;
|pages=124-139
|doi=10.1007/978-3-642-38288-8_9
|year=2013
|retrieved-from=https://doi.org/
|retrieved-on=2024-07-09
}}
==== An analytical study of large SPARQL query logs ====
[[CiteRef::bonifati2020an]]
{{#scite:
|reference=bonifati2020an
|type=journal-article
|title=An analytical study of large SPARQL query logs
|authors=Angela Bonifati;Wim Martens;Thomas Timm
|journal=The VLDB Journal
|publisher=Springer Science and Business Media LLC
|issn=1066-8888;0949-877X|+sep=;
|volume=29
|pages=655-679
|doi=10.1007/s00778-019-00558-9
|year=2020
|retrieved-from=https://doi.org/
|retrieved-on=2024-07-09
}}
=== Testsuites ===
==== ★★★☆☆ W3C SPARQL 1.1 Test Suite ====
W3C test set - why did we not use that as an example?
* Official test suite developed by the W3C SPARQL Working Group
* Designed to test conformance to the SPARQL 1.1 specification
* Covers a wide range of SPARQL features and edge cases
* Primarily focused on correctness rather than performance
See https://wikitech.wikimedia.org/wiki/User:AndreaWest/WDQS_Testing/Running_TFT

=== Benchmarks ===
==== ★★★★☆ An Ngoc Lam et al.'s ESWC 2023 paper "Evaluation of a Representative Selection of SPARQL Query Engines Using Wikidata" ====
* Evaluates the performance of 5 RDF triplestores and 1 experimental SPARQL engine
* Uses the complete version of the Wikidata knowledge graph
* Compares importing time, loading time, exporting time, and query performance
* Evaluates 328 queries defined by Wikidata users
* Also uses the SP2Bench synthetic benchmark for comparison
* Provides detailed analysis of query execution plans and profiling information
* Offers insights on triplestore performance with large-scale real-world data

==== Wikidata Graph Pattern Benchmark (WGPB) for RDF/SPARQL by Aidan Hogan et al., 2020 ====
DOI: 10.5281/zenodo.4035223
* Focuses on evaluating the performance of graph pattern matching in SPARQL engines
* Uses a subset of Wikidata as the dataset
* Provides a large set of SPARQL basic graph patterns
* Designed to test the benefits of worst-case optimal join algorithms
* Exhibits a variety of increasingly complex join patterns
* Allows for systematic testing of query optimization techniques
* Offers insights into the performance characteristics of different SPARQL engines on complex graph patterns

[[Category:Snapquery]]
