
Mount a SPARQL endpoint for use with custom ontologies and RDF triples

I've been trying for a couple of days to figure out how to mount a SPARQL endpoint, but however much I read, I can't make sense of it.

Let me explain my intention: I have an open data server running on CKAN, and my goal is to be able to run SPARQL queries over the data. I know I can't do that directly on the datasets themselves; I would have to define my own OWL ontology and convert the data I want to use from CSV format (the format it is currently in) into RDF triples (to be used as linked data).

The idea was to first test with the repository metadata, which can be generated automatically with the ckanext-dcat extension, but I really can't find where to start. I've searched for information on how to install a Virtuoso server for the SPARQL endpoint, but what I've found leaves a lot to be desired, and nowhere does it explain how I would actually load my own OWL ontologies and RDF data into Virtuoso itself.

Can someone lend me a hand getting started? Thank you.

I'm a little confused. Maybe this is two or more questions?

1. How to convert tabular data, like CSV, into the RDF semantic format?

This can be done with an R2RML approach; Karma is a great GUI for that purpose. Like you say, a conversion like that can really be improved with an underlying OWL ontology, but it can be done without creating a custom ontology, too.

I have elaborated on this in the answer to another question.
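To make the idea concrete, here is a minimal sketch of the kind of output an R2RML-style mapping produces, using only Python's standard library. The base URI, vocabulary namespace, and column names are made up for the example; in a real mapping they would come from your own ontology or an existing vocabulary.

```python
import csv
import io

# Hypothetical namespaces -- replace these with terms from your own OWL
# ontology or a shared vocabulary (e.g. schema.org) in a real conversion.
BASE = "http://example.org/resource/"
VOCAB = "http://example.org/vocab/"

def csv_to_ntriples(csv_text, id_column):
    """Emit one N-Triples statement per (row, column) pair,
    using id_column to build each row's subject URI."""
    triples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        subject = f"<{BASE}{row[id_column]}>"
        for column, value in row.items():
            if column == id_column or not value:
                continue
            predicate = f"<{VOCAB}{column}>"
            # Escape backslashes and quotes as the N-Triples grammar requires.
            literal = value.replace("\\", "\\\\").replace('"', '\\"')
            triples.append(f'{subject} {predicate} "{literal}" .')
    return "\n".join(triples)

sample = "id,name,population\nmadrid,Madrid,3300000\n"
print(csv_to_ntriples(sample, "id"))
```

Tools like Karma automate exactly this subject/predicate/object mapping through a GUI, and let you align the generated predicates with ontology classes and properties instead of inventing them ad hoc.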

2. Now that I have some RDF formatted data, how can I expose it with a SPARQL endpoint?

Virtuoso is a reasonable choice. There are multiple ways to deploy it and multiple ways to load the data, and therefore lots of tutorials on the subject. Here's a good one, from DBpedia.

If you'd like a simpler path to starting an RDF triplestore with a SPARQL endpoint, Stardog and Blazegraph are available as JARs, and RDF4J can easily be deployed within a container like Tomcat.

All of these provide web-based graphical interfaces for loading data and running queries, in addition to SPARQL REST endpoints. At least Stardog also provides command-line tools for bulk loading.
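Because all of these stores speak the standard SPARQL 1.1 Protocol, you can also query them over plain HTTP from any client, independent of their GUIs. A minimal sketch with Python's standard library follows; the endpoint URL is a placeholder (a local Virtuoso instance typically listens at http://localhost:8890/sparql), so substitute whichever triplestore you end up deploying.

```python
import json
import urllib.parse
import urllib.request

def sparql_select(endpoint, query):
    """Run a SELECT query against a SPARQL 1.1 Protocol endpoint
    and return the parsed JSON results document."""
    # The protocol allows GET with the query passed as a URL parameter.
    url = endpoint + "?" + urllib.parse.urlencode({"query": query})
    request = urllib.request.Request(
        url, headers={"Accept": "application/sparql-results+json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

query = """
SELECT ?s ?p ?o
WHERE { ?s ?p ?o }
LIMIT 5
"""

# Placeholder endpoint -- point this at your Virtuoso/Stardog/Blazegraph/RDF4J server:
# results = sparql_select("http://localhost:8890/sparql", query)
# for binding in results["results"]["bindings"]:
#     print(binding["s"]["value"], binding["p"]["value"], binding["o"]["value"])
```

The same request shape works against any conformant endpoint (DBpedia's public one included), which is the point of settling on SPARQL rather than a store-specific API.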

