Search by property


This page provides a simple browsing interface for finding entities described by a property and a named value. Other available search interfaces include the page property search and the ask query builder.


A list of all pages that have property "Task objective" with value "Task 2 was designed to test the ability to extract data from the full text of the papers. It follows last year’s Task 2, which focused on extracting information about citations. The rationale was that the network of citations of a paper – including papers citing it or cited by that paper – is an important dimension to assess its relevance and to contextualise it within a research area. This year we included further contextual information. Scientific papers are not isolated units. Factors that directly or indirectly contribute to the origin and development of a paper include citations, the institutions the authors are affiliated to, funding agencies, and the venue where a paper was presented. Participants had to make such information explicit and exploit it to answer queries providing a deeper understanding of the context in which papers were written. The dataset’s format is another difference from 2014. Instead of XML sources, we used PDF this year, taken from CEUR-WS.org. PDF is still the predominant format for publishing scientific papers, despite being designed for printing. The internal structure of a PDF paper does not correspond to the logical structure of its content, but rather to a sequence of layout and formatting commands. The challenge for participants was to recover the logical structure, to extract contextual information, and to represent it as semantic assertions.". Since only a few results were found, nearby values are also displayed.

Showing below up to 2 results starting with #1.


List of results

    • Task2  + (Task 2 was designed to test the ability to extract data from the full text of the papers. It follows last year’s Task 2, which focused on extracting information about citations. The rationale was that the network of citations of a paper – including papers citing it or cited by that paper – is an important dimension to assess its relevance and to contextualise it within a research area. This year we included further contextual information. Scientific papers are not isolated units. Factors that directly or indirectly contribute to the origin and development of a paper include citations, the institutions the authors are affiliated to, funding agencies, and the venue where a paper was presented. Participants had to make such information explicit and exploit it to answer queries providing a deeper understanding of the context in which papers were written. The dataset’s format is another difference from 2014. Instead of XML sources, we used PDF this year, taken from CEUR-WS.org. PDF is still the predominant format for publishing scientific papers, despite being designed for printing. The internal structure of a PDF paper does not correspond to the logical structure of its content, but rather to a sequence of layout and formatting commands. The challenge for participants was to recover the logical structure, to extract contextual information, and to represent it as semantic assertions.)
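As an illustration only (not part of the challenge materials), the "semantic assertions" the task description refers to can be sketched as subject–predicate–object triples, which can then be queried to answer contextual questions about a paper. All entity and property names below are invented for this sketch:

```python
# Hypothetical sketch: contextual facts about a paper expressed as
# subject-predicate-object triples, in the spirit of the task description.
# Entity and property names are invented for illustration.

triples = [
    ("paper:1", "cites", "paper:2"),
    ("paper:1", "authoredBy", "author:smith"),
    ("author:smith", "affiliatedWith", "org:univ-x"),
    ("paper:1", "fundedBy", "agency:erc"),
    ("paper:1", "presentedAt", "venue:sempub2015"),
]

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Example contextual question: which institutions are linked to paper:1
# through its authors' affiliations?
authors = [o for _, _, o in query(s="paper:1", p="authoredBy")]
institutions = [o for a in authors
                for _, _, o in query(s=a, p="affiliatedWith")]
print(institutions)  # ['org:univ-x']
```

In a real submission such assertions would typically be serialised as RDF and queried with SPARQL; the triple-pattern matching above is only a minimal stand-in for that idea.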