
Inference with SPARQL CONSTRUCT

I am modeling an ontology in Protégé 5.1.0. I want to model something so that, once the inference engine is running, it is inferred that, under certain circumstances, an instance belongs to class A.

<owl:Class rdf:about="http://example.org#classA">
    <owl:equivalentClass>
        <owl:Restriction>
            <owl:onProperty rdf:resource="http://example.org#meetsRequirements"/>
            <owl:hasValue rdf:datatype="http://www.w3.org/2001/XMLSchema#boolean">true</owl:hasValue>
        </owl:Restriction>
    </owl:equivalentClass>
</owl:Class>

However, these "restrictions" are not as simple as the one shown in the example. One of the restrictions, for example, is an operator "greater than" which, as far as I know, cannot be modeled in OWL-DL. (Is that correct?)

While searching the Internet, I found the SPARQL CONSTRUCT query form, so I came up with a query such as:

PREFIX rdf:     <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX xsd:     <http://www.w3.org/2001/XMLSchema#>
PREFIX example: <http://example.org#>

CONSTRUCT { ?ins rdf:type <http://example.org#classA> }
FROM <http://example.org/myBase>
WHERE { ?ins example:hasValue ?val
        FILTER (?val > "10"^^xsd:double) }

I think this query should return a graph in which every instance with example:hasValue greater than 10 is typed as classA.

I want this result to be reflected in my graph (where all my triples are). Is there any way to do that? Has anyone dealt with this kind of situation?

I see two options.

Option one: run the CONSTRUCT queries that build the inferred knowledge, save the results to files (e.g., as Turtle), and load them into your triple store (in a separate graph, so that you can deal with updates). This approach makes queries performant, although it creates redundancy in your storage backend and it is static: you typically rerun it periodically, and you cannot afford to repeat it for every small change you make to the explicit knowledge base. Of course, you could also run these CONSTRUCT rules while creating the RDF in the first place (e.g., from SQL/XML/CSV converters), which typically lets you work on small data sets and exploit parallelism.
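
As a variant of this option, if your store also supports SPARQL 1.1 Update, you can skip the intermediate file and materialize the inferred triples directly into a separate named graph. A minimal sketch, reusing the property and threshold from the question and assuming a graph name <http://example.org/inferred> that I picked for illustration:

PREFIX rdf:     <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX xsd:     <http://www.w3.org/2001/XMLSchema#>
PREFIX example: <http://example.org#>

# Materialize the inferred types into a dedicated named graph,
# so they can be dropped and rebuilt whenever the base data changes.
INSERT {
  GRAPH <http://example.org/inferred> { ?ins rdf:type example:classA }
}
WHERE {
  GRAPH <http://example.org/myBase> {
    ?ins example:hasValue ?val
    FILTER (?val > "10"^^xsd:double)
  }
}

Refreshing the inferred knowledge is then just a matter of clearing that named graph and rerunning the INSERT after the base data changes.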

Option two: most triple stores (e.g., Virtuoso, Fuseki/Jena) have rule mechanisms that rewrite SPARQL queries so that you get more results than you would without rules. The problem is that this approach is usually not very performant and, in the case of engines like Fuseki/Jena, does not work very well with large data sets, because their reasoning engines (like most OWL reasoners) need to load the whole base data set into memory before they can apply any inference.
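
To illustrate what that buys you: with such a rule or reasoner enabled on the endpoint, a plain query for classA members would also return the instances whose membership is only inferred, without any separate materialization step. A sketch, assuming inference is switched on for the queried data set:

PREFIX rdf:     <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX example: <http://example.org#>

# With rule-based rewriting or an attached reasoner, this also returns
# instances whose classA membership is inferred rather than asserted.
SELECT ?ins
WHERE { ?ins rdf:type example:classA }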
