Neo4j performance tuning

I am new to Neo4j, and I am currently trying to set up a dating site as a POC. I have a 4GB input file in the format shown below.

It contains a viewerId (male/female) and a viewedId, which is the list of IDs that person has viewed. Based on this history file, I need to give suggestions whenever a user comes online.

Input file:

viewerId   viewedId 
12345   123456,23456,987653 
23456   23456,123456,234567 
34567   234567,765678,987653 
:

For that I tried the following approach:

USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM "file:/home/hadoopuser/Neo-input " AS row
FIELDTERMINATOR '\t'
WITH row, split(row.viewedId, ",") AS viewedIds
UNWIND viewedIds AS viewedId
MERGE (p2:Persons2 {viewerId: row.viewerId})
MERGE (c2:Companies2 {viewedId: viewedId})
MERGE (p2)-[:Friends]->(c2)
MERGE (c2)-[:Sees]->(p2);

And my Cypher query to get the results is:

MATCH (p2:Persons2)-[r*1..3]->(c2: Companies2)
RETURN p2,r, COLLECT(DISTINCT c2) as friends 

Completing this takes 3 days.

My system configuration:

Ubuntu -14.04  
RAM -24GB

Neo4j configuration:
neo4j.properties:

neostore.nodestore.db.mapped_memory=200M
neostore.propertystore.db.mapped_memory=2300M
neostore.propertystore.db.arrays.mapped_memory=5M
neostore.propertystore.db.strings.mapped_memory=3200M
neostore.relationshipstore.db.mapped_memory=800M

neo4j-wrapper.conf:

wrapper.java.initmemory=12000
wrapper.java.maxmemory=12000

To reduce the time, I searched the internet and came across the idea of a batch importer via this link: https://github.com/jexp/batch-import.

In that link they have node.csv and rels.csv files, which they import into Neo4j. I have no idea how they created the node.csv and rels.csv files, or which scripts they are using.

Can anyone give me a sample script for producing node.csv and rels.csv files from my data?
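For illustration only, here is a minimal, hypothetical sketch of what those two tab-separated files could look like for the sample data above. The numbers in rels.csv refer to the 1-based line position of each node in nodes.csv; the exact header, label and index conventions are version-specific, so the repository's README is the authority.

nodes.csv
id
12345
23456
34567
123456
987653
234567
765678

rels.csv
start	end	type
1	4	Friends
1	2	Friends
1	5	Friends

A generator script only needs to assign each distinct id a line number, write nodes.csv once, and emit one rels.csv row per (viewerId, viewedId) pair; the remaining viewers follow the same pattern.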

Or can you suggest anything else to speed up importing and retrieving the data?

Thanks in advance.

You don't need the inverse relationship, just one direction is enough!

For the import, configure the heap (neo4j-wrapper.conf) with 12G and the page cache (neo4j.properties) with 10G.
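A minimal sketch of those two settings, assuming a Neo4j 2.2+ installation where the page cache is a single property (on earlier 2.x versions the page cache is effectively the sum of the neostore.*.mapped_memory values shown above):

# neo4j-wrapper.conf (heap size, values in MB)
wrapper.java.initmemory=12288
wrapper.java.maxmemory=12288

# neo4j.properties (page cache, 2.2+ only)
dbms.pagecache.memory=10g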

Try this, it should finish within a few minutes.

create constraint on (p:Persons2) assert p.viewerId is unique;
create constraint on (p:Companies2) assert p.viewedId is unique;

USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM "file:/home/hadoopuser/Neo-input " AS row
FIELDTERMINATOR '\t'
MERGE (p2:Persons2 {viewerId: row.viewerId});

USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM "file:/home/hadoopuser/Neo-input " AS row
FIELDTERMINATOR '\t'
FOREACH (viewedId IN split(row.viewedId, ",") |
  MERGE (c2:Companies2 {viewedId: viewedId}));

USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM "file:/home/hadoopuser/Neo-input " AS row
FIELDTERMINATOR '\t'
WITH row, split(row.viewedId, ",") AS viewedIds
MATCH (p2:Persons2 {viewerId: row.viewerId})
UNWIND viewedIds AS viewedId
MATCH (c2:Companies2 {viewedId: viewedId})
MERGE (p2)-[:Friends]->(c2);
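After the three passes, a few plain count queries (using only the labels already defined above) can confirm that the expected numbers of nodes and relationships were actually created:

MATCH (p:Persons2) RETURN count(p);
MATCH (c:Companies2) RETURN count(c);
MATCH (:Persons2)-[r:Friends]->(:Companies2) RETURN count(r);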

For merging the relationships: if you have some companies with hundreds of thousands or even millions of views, you may want to use this approach instead:

USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM "file:/home/hadoopuser/Neo-input " AS row
FIELDTERMINATOR '\t'
WITH row, split(row.viewedId, ",") AS viewedIds
MATCH (p2:Persons2 {viewerId: row.viewerId})
UNWIND viewedIds AS viewedId
MATCH (c2:Companies2 {viewedId: viewedId})
WHERE shortestPath((p2)-[:Friends]->(c2)) IS NULL
CREATE (p2)-[:Friends]->(c2);

Regarding your query:

What do you want to achieve by retrieving the cross product of all people and all companies up to 3 levels deep? That could be trillions of paths.

Usually you want to know this for one particular person or company.

Updated queries

E.g. for 123456, the people who viewed that company are 12345 and 23456. The companies those people viewed are 12345 → 123456,23456,987653 and 23456 → 23456,123456,234567. So the recommendations I need to give for company 123456 are 23456,987653,23456,234567, and the distinct (final) result is 23456,987653,234567.

match (c:Companies2)<-[:Friends]-(p1:Persons2)-[:Friends]->(c2:Companies2)
// viewedId comes from LOAD CSV, so it is stored as a string
where c.viewedId = "123456"
return distinct c2.viewedId;
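If this is run from application code, a parameterized form avoids hard-coding the id; {companyId} is just an illustrative parameter name, using the {param} placeholder syntax of the 2.x series this setup appears to be on:

match (c:Companies2)<-[:Friends]-(p1:Persons2)-[:Friends]->(c2:Companies2)
where c.viewedId = {companyId}
return distinct c2.viewedId;

With the sample data above this should line up with the distinct final result described in the question (23456, 987653, 234567).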

For all companies at once, this might help:

match (c:Companies2)<-[:Friends]-(p1:Persons2)
with p1, collect(c) as companies
match (p1)-[:Friends]->(c2:Companies2)
return c2.viewedId, extract(c in companies | c.viewedId);
