Neo4j create nodes and relationships from pandas dataframe with py2neo

Getting the results of a Cypher query on a Neo4j database into a pandas DataFrame with py2neo is really straightforward:

>>> from pandas import DataFrame
>>> DataFrame(graph.data("MATCH (a:Person) RETURN a.name, a.born LIMIT 4"))
   a.born              a.name
0    1964        Keanu Reeves
1    1967    Carrie-Anne Moss
2    1961  Laurence Fishburne
3    1960        Hugo Weaving
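
Here graph is an open py2neo connection; a minimal setup sketch, with placeholder bolt URL and credentials (the exact arguments vary slightly between py2neo versions):

from py2neo import Graph

# Placeholder connection details for a local Neo4j instance.
graph = Graph("bolt://localhost:7687", auth=("neo4j", "password"))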

Now I am trying to create (or better, MERGE) a set of nodes and relationships from a pandas dataframe into a Neo4j database with py2neo. Imagine I have a dataframe like:

LABEL1 LABEL2
p1 n1
p2 n1
p3 n2
p4 n2

where the labels are the column headers and the properties are the values. I would like to reproduce the following Cypher query (shown here for the first row) for every row of my dataframe:

query="""
    MATCH (a:Label1 {property:'p1'})
    MERGE (a)-[r:R_TYPE]->(b:Label2 {property:'n1'})
"""

I know I can tell py2neo just to graph.run(query), or even run a LOAD CSV Cypher script the same way, but I wonder whether I can iterate through the dataframe and apply the above query row by row WITHIN py2neo.
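
For reference, the snippets below assume df holds the sample table, with lowercase column names to match the row['label1'] / row['label2'] lookups used in the answers:

import pandas as pd

# The sample data from the table above, with lowercased column names.
df = pd.DataFrame({
    'label1': ['p1', 'p2', 'p3', 'p4'],
    'label2': ['n1', 'n1', 'n2', 'n2'],
})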

You can use DataFrame.iterrows() to iterate through the DataFrame and execute a query for each row, passing in the values from the row as parameters.

for index, row in df.iterrows():
    graph.run('''
      MATCH (a:Label1 {property:$label1})
      MERGE (a)-[r:R_TYPE]->(b:Label2 {property:$label2})
    ''', parameters={'label1': row['label1'], 'label2': row['label2']})

That will execute one transaction per row. We can batch multiple queries into one transaction for better performance.

tx = graph.begin()
for index, row in df.iterrows():
    tx.evaluate('''
      MATCH (a:Label1 {property:$label1})
      MERGE (a)-[r:R_TYPE]->(b:Label2 {property:$label2})
    ''', parameters={'label1': row['label1'], 'label2': row['label2']})
tx.commit()

Typically we can batch ~20k database operations in a single transaction.
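
A minimal sketch of that idea (not from the answer): commit and reopen the transaction every batch_size rows, so a single transaction never holds more than the suggested ~20k operations. It assumes the DataFrame has a default integer index:

batch_size = 20000
tx = graph.begin()
for index, row in df.iterrows():
    tx.evaluate('''
      MATCH (a:Label1 {property:$label1})
      MERGE (a)-[r:R_TYPE]->(b:Label2 {property:$label2})
    ''', parameters={'label1': row['label1'], 'label2': row['label2']})
    # Commit and start a fresh transaction every batch_size rows;
    # the counter assumes a default RangeIndex (0, 1, 2, ...).
    if (index + 1) % batch_size == 0:
        tx.commit()
        tx = graph.begin()
tx.commit()  # flush the final partial batch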

I found that the proposed solution doesn't work for me: the code above creates new b nodes even when they already exist, because MERGE matches or creates the entire pattern, so a fresh b node is created whenever the full pattern is not found. To make sure you don't create any duplicates, I suggest matching both the a and b nodes before the MERGE:

tx = graph.begin()
for index, row in df.iterrows():
    tx.evaluate('''
       MATCH (a:Label1 {property:$label1}), (b:Label2 {property:$label2})
       MERGE (a)-[r:R_TYPE]->(b)
       ''', parameters = {'label1': row['label1'], 'label2': row['label2']})
tx.commit()
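
Since every row now performs two MATCH lookups, it can also help to index the matched properties first. A hedged sketch, using the Neo4j 3.x index syntax (newer Neo4j versions use CREATE INDEX ... FOR (n:Label1) ON (n.property)):

# Indexes (or uniqueness constraints) on the matched properties
# keep the per-row MATCH lookups fast.
graph.run("CREATE INDEX ON :Label1(property)")
graph.run("CREATE INDEX ON :Label2(property)")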

Also, in my case I had to add relationship properties at the same time (see the code below). Moreover, I had 500k+ relationships to add, so, as expected, I ran into a Java heap memory error. I solved the problem by placing begin() and commit() inside the loop, so a new transaction is created for each relationship:

for index, row in df.iterrows():
    tx = graph.begin()
    tx.evaluate('''
       MATCH (a:Label1 {property:$label1}), (b:Label2 {property:$label2})
       MERGE (a)-[r:R_TYPE{property_name:$p}]->(b)
       ''', parameters = {'label1': row['label1'], 'label2': row['label2'], 'p': row['property']})
    tx.commit()
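
As an alternative sketch (not from the answer), the rows can be sent in chunks as a single parameter and unwound server-side, which bounds both the number of transactions and the size of each one. It assumes the DataFrame columns are named label1, label2 and property:

# Convert the DataFrame into a list of one dict per row.
rows = df.to_dict('records')

chunk_size = 10000  # illustrative value; tune to the available heap
for start in range(0, len(rows), chunk_size):
    graph.run('''
        UNWIND $rows AS row
        MATCH (a:Label1 {property: row.label1}), (b:Label2 {property: row.label2})
        MERGE (a)-[r:R_TYPE {property_name: row.property}]->(b)
    ''', parameters={'rows': rows[start:start + chunk_size]})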
