Specification by Example
Latest revision as of 19:09, 26 November 2025
⚠️ LLM-generated content notice: Parts of this page may have been created or edited with the assistance of a large language model (LLM). The prompts that were used may appear on the page itself or on the discussion page; in straightforward cases the prompt was simply "Write a mediawiki page on X", with X being the page name. While the content has been reviewed, it may still not be accurate or error-free.
Prompt 4: Specification by Example & Relevance
User: My argument would be specification by example: it does not contradict specification by definition but makes sure definitions can never contradict examples.
Also, I would have the "relevance" idea, which allows for filtering out "irrelevant" cases with respect to a use case. So the global world might be contradictory, but not my use-case-specific subset. Moreover, the use-case-specific subset is always limited to a reasonable size that is within the bounds of what my system can manage. Most organizations today can easily afford hardware that is well capable of fulfilling their needs, but the software and systems supplied frustratingly often do not.
Response 4: Validating Specification by Example
Gemini: Your argument bridges the gap between the theoretical "softness" of the first text (Competency_Questions_versus_Graph_Walks) and the engineering practicality of the second (Unit Tests for Ontologies[1]). Here is how your concepts of **Specification by Example** and **Relevance** transform the ideas found in the papers.
1. Specification by Example vs. Definition
You argue that Examples (Instances) must constrain Definitions (Ontology).
- The Conflict: Gruninger & Fox (Text 1) prioritize Specification by Definition. They define "axioms" first (the definition) and assume valid instances will follow.
- The Solution: Vrandečić & Gangemi (Text 2) support your view. In Section 2, they describe the "Professor/University Member" error. The definition was changed (University Member removed), but the example (the actual Professor instance on the web page) disappeared.
- The Unit Test ([math]T^+[/math]) acts as the Specification by Example. If the ontology definition ([math]O[/math]) generates a state that contradicts the example ([math]T^+[/math]), the definition is rejected.
- This confirms your argument: The definition is allowed to be abstract, provided it never contradicts the concrete examples.
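The acceptance rule above can be sketched in a few lines. This is a minimal illustration, not the paper's machinery: the `Fact` triples, the `member_rule`, and the naive forward-chaining `closure` are all hypothetical stand-ins for a real reasoner.

```python
# Sketch of specification by example: a definition (axiom set plus rules)
# is accepted only if its consequences never contradict the positive
# examples T+. All names here are illustrative, not from the paper.

Fact = tuple  # (subject, predicate, object)

def closure(axioms: set, rules) -> set:
    """Naive forward chaining: apply rules until no new facts appear."""
    facts = set(axioms)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new in rule(facts):
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

def definition_accepted(ontology: set, rules, positive_examples: set) -> bool:
    """The definition O is rejected if its closure fails to cover T+."""
    return positive_examples <= closure(ontology, rules)

# The "every Professor is a UniversityMember" axiom as a rule.
def member_rule(facts):
    return {(s, "type", "UniversityMember")
            for (s, p, o) in facts if p == "type" and o == "Professor"}

O = {("alice", "type", "Professor")}
T_plus = {("alice", "type", "UniversityMember")}
print(definition_accepted(O, [member_rule], T_plus))  # True
print(definition_accepted(O, [], T_plus))             # False: dropping the axiom breaks the example
```

The second call mirrors the Professor/University Member error: once the axiom is removed from the definition, the concrete example no longer follows, and the test rejects the change.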
2. The Relevance Idea (Bounded Contexts)
You argue that global consistency is irrelevant; only the "use-case specific subset" matters.
- The Anti-Pattern (Text 1): Gruninger & Fox aim for a "Common Sense Enterprise Model" (Section 3) meant to be valid for the entire enterprise. This leads to the "Software Frustration" you mentioned: the system tries to validate the whole world and hits complexity walls (NP-complete reasoning).
- The Support (Text 2): Vrandečić & Gangemi (Section 3) implicitly support your relevance filter. They note that the Negative Test Ontology ([math]T^-[/math]) can be locally contradictory.
- "There are two sets of axioms... that contradict each other. This still makes sense... O must be agnostic."
- This validates your method: The "Global World" ([math]T^-[/math]) contains contradictions, but as long as your specific subset ([math]O[/math]) does not derive the wrong specific fact, the system passes. You filter out the irrelevant contradictions of the larger world.
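The negative test can be sketched just as compactly. Again a hypothetical illustration: the point is that `T_minus` is allowed to be internally contradictory, and the check only asks whether O stays agnostic about every forbidden fact.

```python
# Sketch of the negative test ontology T-: it may contradict itself,
# but O passes as long as O's derived facts include none of the
# forbidden statements. Names and facts are illustrative.

def passes_negative_test(derived_facts: set, forbidden: set) -> bool:
    """O must be agnostic: it may not derive any axiom listed in T-."""
    return derived_facts.isdisjoint(forbidden)

# T- is globally contradictory: it holds a statement and its negation.
T_minus = {
    ("bob", "type", "Professor"),
    ("bob", "type", "NotProfessor"),  # contradicts the line above -- and that is fine
}

# The use-case subset only talks about alice; bob's contradiction is irrelevant.
O_derived = {("alice", "type", "Professor"),
             ("alice", "type", "UniversityMember")}

print(passes_negative_test(O_derived, T_minus))  # True
```

If O ever did derive one of the forbidden facts, the disjointness check would fail, which is exactly the "must not derive the wrong specific fact" condition above.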
3. Hardware vs. Software (The Case for Subsetting)
You argue that hardware is sufficient for specific subsets, but software fails to manage the scope.
- Text 2 (Section 5) addresses this via "Expressive Consistency Checks." It suggests that heavy logic (Validation) should be separated from the runtime ontology.
- By creating a "Use-Case Specific Subset" (what the text calls a "Lightweight Ontology"), you ensure the dataset fits within the "bounds of what the system can manage."
- Graph Walks as the Tool: A Graph Walk effectively enforces this subsetting. It only loads the nodes reachable by the specific use case (the walk), ignoring the irrelevant (and potentially contradictory) data in the rest of the graph. It utilizes the hardware efficiently (Linear RAM usage) rather than the software inefficiency of a global logic reasoner.
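The subsetting behavior of a graph walk can be shown with a plain breadth-first traversal. A minimal sketch under assumed names (the `graph` adjacency dict and its node labels are invented for illustration): only nodes reachable from the use-case seeds are ever loaded, so memory grows with the visited subset, not with the global graph.

```python
from collections import deque

def walk(graph: dict, seeds) -> set:
    """Return the nodes reachable from the seeds (the use-case subset)."""
    visited, queue = set(seeds), deque(seeds)
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, ()):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return visited

# A global graph with a region that is irrelevant (and possibly
# contradictory) to the use case; the walk from "order-42" never loads it.
graph = {
    "order-42": ["customer-7"],
    "customer-7": ["address-3"],
    "legacy-hr": ["contradictory-axiom"],  # unreachable from the use case
}
print(walk(graph, ["order-42"]))  # 3 nodes; "legacy-hr" is never visited
```

The contradictory "legacy-hr" region simply never enters the working set, which is the relevance filter realized mechanically rather than logically.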
Synthesis for your Argument
| Concept | Traditional Ontology (Text 1) | Your Approach (Backed by Text 2) |
|---|---|---|
| Truth Source | The Definition (Axioms) | The Example (Instances/Tests) |
| Scope | The Enterprise (Global Consistency) | The Use Case (Relevance Filtering) |
| Performance | Software Bottleneck (Global Deduction) | Hardware Enabled (Subset Walking) |
| Conflict Handling | Must resolve all inconsistencies | Ignores irrelevant inconsistencies |
References
- ^ Denny Vrandečić; Aldo Gangemi (2006). "Unit Tests for Ontologies", pp. 1012–1020. doi:10.1007/11915072_2