Snapquery EKAW 2024 paper

Key Points

  1. Query rot versus link rot
  2. Transparency vs. complexity of SPARQL queries
  3. Use cases for named queries
  4. Ambiguity of names
  5. Persistent identifiers
  6. Query hashes
  7. How to deal with aspects that (usually) do not influence the execution of a SPARQL query, such as whitespace, comments, capitalization, and variable names? (see the normalization and hashing sketch after this list)
  8. Parameterized queries (see the parameterization sketch after this list)
  9. Technical debt and accidental complexity
  10. Wikidata example queries
  11. Scholia and Wikidata graph split
  12. Other knowledge graphs, e.g., DBLP, OpenStreetMap
  13. Perhaps also some NFDI examples or some custom knowledge graphs like FAIRJupyter
  14. An Ngo Lam's ESWC 2023 paper as a pointer to the style of comparison
  15. Quality criteria https://github.com/WolfgangFahl/snapquery/issues/26
  16. List of standard refactoring activities and how this approach supports them
  17. SPARQL standard changes
  18. Getting your own copy of Wikidata; the infrastructure effort needs to be mentioned
  19. W3C test set: why we did not use it as an example
  20. Usability evaluation https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/
  21. https://github.com/ad-freiburg/qlever/wiki/QLever-performance-evaluation-and-comparison-to-other-SPARQL-engines
  22. Automatic syntax repairs, e.g., automatic conversion of SQL input to SPARQL output.
  23. A closed issue should have at least one example that runs
  24. Hypothesis by Stefan Decker: Query rot is more prominent in KG environments than with relational databases.
  25. https://stackoverflow.com/questions/tagged/sparql
  26. https://web.archive.org/web/20150512231123/http://answers.semanticweb.com:80/questions/12147/whats-the-best-way-to-parameterize-sparql-queries
  27. https://jena.apache.org/documentation/query/parameterized-sparql-strings.html
  28. https://www.semantic-web-journal.net/system/files/swj3076.pdf
  29. https://arxiv.org/pdf/cs/0605124
  30. https://arxiv.org/pdf/1402.0576 optimizing queries
  31. https://www.w3.org/TR/REC-rdf-syntax/
  32. https://biblio.ugent.be/publication/8632551/file/8653456 Towards supporting multiple semantics of named graphs using N3 rules
  33. Wikidata Graph Pattern Benchmark (WGPB) for RDF/SPARQL by Aidan Hogan et al., 2020 (DOI: 10.5281/zenodo.4035223)
  34. ESWC 2019 proceedings (978-3-030-21348-0.pdf)
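
Regarding points 6 and 7, the following minimal Python sketch (function names and normalization steps are illustrative assumptions, not the snapquery implementation) shows how whitespace and comments could be stripped before hashing, so that cosmetically different copies of the same query receive the same identifier. Keyword capitalization and variable renaming would require a real SPARQL parser and are deliberately left out.

<syntaxhighlight lang="python">
# Minimal sketch: strip '#' comments and collapse whitespace before hashing,
# so that cosmetically different copies of a query map to the same identifier.
# Assumptions: plain string handling and SHA-256; keyword capitalization and
# variable renaming would need a real SPARQL parser and are not handled here.
import hashlib
import re

def normalize_sparql(query: str) -> str:
    """crude normalization: drop comments, collapse whitespace"""
    without_comments = []
    for line in query.splitlines():
        # naive: a '#' inside an IRI or string literal would also be cut off
        without_comments.append(re.sub(r"#.*$", "", line))
    return re.sub(r"\s+", " ", " ".join(without_comments)).strip()

def query_hash(query: str) -> str:
    """stable identifier for a normalized query"""
    return hashlib.sha256(normalize_sparql(query).encode("utf-8")).hexdigest()

q1 = "SELECT ?item WHERE { ?item wdt:P31 wd:Q5 }  # all humans"
q2 = """SELECT ?item
WHERE {
  ?item wdt:P31 wd:Q5
}"""
assert query_hash(q1) == query_hash(q2)  # same query, different layout
</syntaxhighlight>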
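
For point 8, a hedged sketch of how a named, parameterized query could be instantiated; the dictionary, the {{...}} placeholder syntax and the function are hypothetical and only illustrate the idea. Jena's ParameterizedSparqlString, linked under point 27, offers a comparable mechanism for Java.

<syntaxhighlight lang="python">
# Illustrative sketch of a named, parameterized query; the placeholder syntax
# ({{...}}) and all names here are hypothetical, not the snapquery API.
NAMED_QUERIES = {
    "instances_of": """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:{{item_class}} .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
""",
}

def instantiate(name: str, **params: str) -> str:
    """fill the {{...}} placeholders of a named query with concrete values"""
    query = NAMED_QUERIES[name]
    for key, value in params.items():
        # naive textual substitution; a real implementation would validate
        # the values to prevent SPARQL injection
        query = query.replace("{{" + key + "}}", value)
    return query

# "instances of human": Q5 as the concrete parameter value
print(instantiate("instances_of", item_class="Q5"))
</syntaxhighlight>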

References

  1. Christoph Lange; Angelo Di Iorio (2014). "Semantic Publishing Challenge – Assessing the Quality of Scientific Output", pp. 61-76. doi: 10.1007/978-3-319-12024-9_8
  2. Wolfgang Fahl; Tim Holzheim; Christoph Lange; Stefan Decker (2023). "Semantification of CEUR-WS with Wikidata as a Target Knowledge Graph". url: https://ceur-ws.org/Vol-3447/Text2KG_Paper_13.pdf
  3. Paul Warren; Paul Mulholland (2020). "A Comparison of the Cognitive Difficulties Posed by SPARQL Query Constructs", pp. 3-19, at EKAW 2020. doi: 10.1007/978-3-030-61244-3_1
  4. Paul Warren; Paul Mulholland (2018). "Using SPARQL – The Practitioners’ Viewpoint", pp. 485-500. doi: 10.1007/978-3-030-03667-6_31
  5. Muhammad Saleem; Muhammad Intizar Ali; Aidan Hogan; Qaiser Mehmood; Axel-Cyrille Ngonga Ngomo (2015). "LSQ: The Linked SPARQL Queries Dataset", pp. 261-269. doi: 10.1007/978-3-319-25010-6_15
  6. Johannes Lorey; Felix Naumann (2013). "Detecting SPARQL Query Templates for Data Prefetching", pp. 124-139. doi: 10.1007/978-3-642-38288-8_9
  7. Angela Bonifati; Wim Martens; Thomas Timm (2020). "An analytical study of large SPARQL query logs", pp. 655-679. doi: 10.1007/s00778-019-00558-9