
Using Rexster and Titan Graph DB for scalable applications

I have a Python application communicating with a Titan graph database backed by Cassandra.

Python App ---------> Rexster Server + Titan Graph DB + Cassandra

The "Rexster Server + Titan Graph DB + Cassandra" is inside a single JVM. “Rexster Server + Titan Graph DB + Cassandra”位于单个JVM中。

My Python application runs on multiple virtual machines, i.e. each virtual machine has an identical copy of my application. The idea is to make the application scalable. Clearly, for the initial implementation I am using a single instance of "Rexster Server + Titan Graph DB + Cassandra". This means that the backend database is a single node. My applications running on different virtual machines talk to the same backend.
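
For context, a minimal sketch of how such a Python application might talk to the Rexster server over its REST API; the host name, port 8182, the graph name "graph", and the use of the requests library are illustrative assumptions, not details from the question:

    import requests

    # Assumed Rexster endpoint and graph name -- adjust to match your rexster.xml.
    REXSTER_URL = "http://rexster-host:8182/graphs/graph"

    def get_vertex(vertex_id):
        # Fetch a single vertex by id through Rexster's REST API.
        resp = requests.get("%s/vertices/%s" % (REXSTER_URL, vertex_id))
        resp.raise_for_status()
        return resp.json()["results"]

    def run_gremlin(script):
        # Run a Gremlin script through Rexster's Gremlin extension endpoint.
        resp = requests.get("%s/tp/gremlin" % REXSTER_URL, params={"script": script})
        resp.raise_for_status()
        return resp.json()["results"]

    if __name__ == "__main__":
        print(run_gremlin("g.V.count()"))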

My questions are as follows.

1) I want to make the backend database scalable as well. How can I do this?

2) Do I need to use the same "Rexster + Titan Graph DB" and configure multiple Cassandra nodes?

3) Is Titan Graph DB the best option for this use case? Or can I substitute Titan Graph DB with Neo4j and Rexster with Neo4j Server? Why or why not?

Titan is a highly scalable graph database, as has been demonstrated in their examples. To answer your questions, I think it's necessary to consider how big your project could be. If you intend to deploy a Hadoop cluster, make sure Rexster is configured to connect to the ZooKeeper address of the backend (if managed by it) and not to a list of addresses of the individual nodes.

1. I want to make the backend database scalable as well. How can I do this?
If you intend to scale beyond the confines of one machine, you could refer to this page for more info: Titan-Cassandra Configuration. As for making the backend database scalable, Cassandra and HBase are very scalable databases, and I suggest you read more about the Hadoop ecosystem to understand how Titan DB fits into it. You could have many HBase/Cassandra nodes that Rexster talks to.
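
As a rough sketch (not the literal contents of the Titan-Cassandra Configuration page), a Titan properties file pointing at a multi-node Cassandra cluster might look like the following; the node names are placeholders and option names can vary slightly between Titan versions:

    # titan-cassandra.properties (illustrative)
    # Point Titan at several Cassandra seed nodes instead of a single local instance.
    storage.backend=cassandra
    storage.hostname=cass-node1,cass-node2,cass-node3

    # Replication factor used when Titan creates its keyspace
    # (check the exact option name against your Titan version's docs).
    storage.replication-factor=3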

2. Do I need to use the same "Rexster + Titan Graph DB" and configure multiple Cassandra nodes?
You could start several Rexster servers on different machines in the cluster, with each one connecting to the same backend. But each graph served by a Rexster instance is independent of the others, so you have to manually partition your graph operations. In this scenario, it is only good for serving a high number of users rather than for deep traversals/queries.
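
A hedged sketch of the graph section each Rexster instance might carry in its rexster.xml when every server is configured against the same Cassandra cluster; the graph name, node addresses, and the Titan graph-type class reflect the Titan/Rexster integration of that era and should be checked against your version:

    <!-- rexster.xml fragment, repeated on each Rexster server (illustrative) -->
    <graph>
      <graph-name>graph</graph-name>
      <graph-type>com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration</graph-type>
      <graph-read-only>false</graph-read-only>
      <properties>
        <storage.backend>cassandra</storage.backend>
        <storage.hostname>cass-node1,cass-node2,cass-node3</storage.hostname>
      </properties>
      <extensions>
        <allows>
          <allow>tp:gremlin</allow>
        </allows>
      </extensions>
    </graph>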

3. Is Titan Graph DB the best option for this use case? Or can I substitute Titan Graph DB with Neo4j and Rexster with Neo4j Server? Why or why not?
Because it seems you're going to deploy a cluster, I think Titan is the better choice, unless you're willing to pay for the Enterprise edition of Neo4j to get clustering support (see Neo4j editions). Another point to consider: Titan vs OrientDB.
