
Neo4j performance tuning


I am new to Neo4j, and I am currently trying to set up a dating site as a POC. I have a 4GB input file which looks like the format below.

It contains a viewerId (male/female) and a viewedId, which is the list of IDs they have viewed. Based on this history file I need to give recommendations when any user comes online.

Input file:

viewerId   viewedId 
12345   123456,23456,987653 
23456   23456,123456,234567 
34567   234567,765678,987653 
:

For this I tried the following approach,

USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM "file:/home/hadoopuser/Neo-input " AS row
FIELDTERMINATOR '\t'
WITH row, split(row.viewedId, ",") AS viewedIds
UNWIND viewedIds AS viewedId
MERGE (p2:Persons2 {viewerId: row.viewerId})
MERGE (c2:Companies2 {viewedId: viewedId})
MERGE (p2)-[:Friends]->(c2)
MERGE (c2)-[:Sees]->(p2);

My Cypher query to get the results is,

MATCH (p2:Persons2)-[r*1..3]->(c2: Companies2)
RETURN p2,r, COLLECT(DISTINCT c2) as friends 

It takes 3 days to complete this task.

My system configuration:

Ubuntu -14.04  
RAM -24GB

Neo4j configuration:
neo4j.properties:

neostore.nodestore.db.mapped_memory=200M
neostore.propertystore.db.mapped_memory=2300M
neostore.propertystore.db.arrays.mapped_memory=5M
neostore.propertystore.db.strings.mapped_memory=3200M
neostore.relationshipstore.db.mapped_memory=800M

neo4j-wrapper.conf:

wrapper.java.initmemory=12000
wrapper.java.maxmemory=12000

To reduce the time, I searched on the Internet and got an idea, the batch importer, via the following link: https://github.com/jexp/batch-import

In that link they have node.csv and rels.csv files which they import into Neo4j. I have no idea how they created the node.csv and rels.csv files and what script they are using.

Can anyone give me sample scripts to create node.csv and rels.csv files for my data?

Or can you give any suggestions to speed up importing and retrieving the data?

Thanks in advance.

You don't need the inverse relationship, just one is enough!

For the import, configure the heap (neo4j-wrapper.conf) with 12G and the page cache (neo4j.properties) with 10G.
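
A minimal sketch of what that could look like, assuming Neo4j 2.2+ where the single dbms.pagecache.memory setting replaces the per-store mapped_memory settings shown above (neo4j-wrapper.conf values are in MB):

# neo4j-wrapper.conf -- 12G heap
wrapper.java.initmemory=12000
wrapper.java.maxmemory=12000

# neo4j.properties -- 10G page cache (assumes Neo4j 2.2+)
dbms.pagecache.memory=10g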

Try this, it should finish in a few minutes.

create constraint on (p:Persons2) assert p.viewerId is unique;
create constraint on (p:Companies2) assert p.viewedId is unique;

USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM "file:/home/hadoopuser/Neo-input " AS row
FIELDTERMINATOR '\t'
MERGE (p2:Persons2 {viewerId: row.viewerId});

USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM "file:/home/hadoopuser/Neo-input " AS row
FIELDTERMINATOR '\t'
FOREACH (viewedId IN split(row.viewedId, ",") |
  MERGE (c2:Companies2 {viewedId: viewedId}));

USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM "file:/home/hadoopuser/Neo-input " AS row
FIELDTERMINATOR '\t'
WITH row, split(row.viewedId, ",") AS viewedIds
MATCH (p2:Persons2 {viewerId: row.viewerId})
UNWIND viewedIds AS viewedId
MATCH (c2:Companies2 {viewedId: viewedId})
MERGE (p2)-[:Friends]->(c2);

For merging the relationships, if you have some companies with hundreds of thousands or even millions of views, you may want to use this approach instead:

USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM "file:/home/hadoopuser/Neo-input " AS row
FIELDTERMINATOR '\t'
WITH row, split(row.viewedId, ",") AS viewedIds
MATCH (p2:Persons2 {viewerId: row.viewerId})
UNWIND viewedIds AS viewedId
MATCH (c2:Companies2 {viewedId: viewedId})
WHERE shortestPath((p2)-[:Friends]->(c2)) IS NULL
CREATE (p2)-[:Friends]->(c2);

Regarding your query:

What do you want to achieve by retrieving the cross product between all people and all companies, up to 3 levels deep? That could be trillions of paths.

Usually you want to know this for one person or one company.
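
For example, a sketch that anchors the variable-length traversal on a single viewer (viewerId 12345 is just taken from the sample data; LOAD CSV stores the IDs as strings, and the pattern is left undirected because the reverse Sees relationship is dropped):

MATCH (p2:Persons2 {viewerId: "12345"})-[:Friends*1..3]-(c2:Companies2)
RETURN p2.viewerId AS viewer, collect(DISTINCT c2.viewedId) AS companies;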

Updated query

E.g. for 123456, all the people who viewed that company are 12345 and 23456. The companies those people viewed are 12345 → 123456,23456,987653 and 23456 → 23456,123456,234567. So for company 123456 I need to give the recommendations 23456,987653,23456,234567, and the distinct of that result (the final result) is 23456,987653,234567.

match (c:Companies2)<-[:Friends]-(p1:Persons2)-[:Friends]->(c2:Companies2)
where c.viewedId = 123456
return distinct c2.viewedId;

For all companies this might help:

match (c:Companies2)<-[:Friends]-(p1:Persons2)
with p1, collect(c) as companies
match (p1)-[:Friends]->(c2:Companies2)
return c2.viewedId, extract(c in companies | c.viewedId);
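
And if the goal is a distinct recommendation list per company, as in the single-company example above, a sketch along the same lines (it excludes the company itself; running it over every company at once can still be heavy):

MATCH (c:Companies2)<-[:Friends]-(p1:Persons2)-[:Friends]->(c2:Companies2)
WHERE c2 <> c
RETURN c.viewedId AS company, collect(DISTINCT c2.viewedId) AS recommendations;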
