
How does Apache Spark work with methods inside a class?

I am learning Apache Spark. After carefully reading the Spark tutorials, I understand how to pass a plain Python function to Spark to process an RDD. But I still have no idea how Spark works with methods defined inside a class. For example, here is my code:

import numpy as np
import copy
from pyspark import SparkConf, SparkContext

class A():
    def __init__(self, n):
        self.num = n

class B(A):
    ### Copy the num attribute of an A instance into B.
    def __init__(self, A):
        self.num = copy.deepcopy(A.num)

    ### Print the item held by s and return it.
    def display(self, s):
        print(s.num)
        return s

def main():
    ### Locally run an application "test" using Spark.
    conf = SparkConf().setAppName("test").setMaster("local[2]")

    ### Create the SparkContext from the configuration.
    sc = SparkContext(conf=conf)

    ### "data" is a list of instances of class A.
    data = []
    for i in np.arange(5):
        x = A(i)
        data.append(x)

    ### "lines" is the RDD obtained by distributing "data" with Spark.
    lines = sc.parallelize(data)

    ### Create a list of instances of class B in parallel using
    ### Spark's "map".
    temp = lines.map(B)

    ### The next line raises the error:
    ### NameError: global name 'display' is not defined.
    temp1 = temp.map(display)

if __name__ == "__main__":
    main()

In fact, I used the code above to generate a list of instances of class B in parallel with temp = lines.map(B). After that, I called temp1 = temp.map(display), because I wanted to print each item of that list of B instances in parallel. But now I get the error: NameError: global name 'display' is not defined. How can I fix this error while still using Spark's parallel computing? I would really appreciate any help.
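
As far as I can tell, the NameError appears because display is a method of class B, not a module-level name, so a bare display does not exist where Spark evaluates the mapped function. A minimal workaround, assuming the B instances can be shipped to the workers at all (the answer below handles that by moving the classes into ab.py and passing it via pyFiles), is to call the method through each instance:

### A hypothetical quick fix that keeps the original signature display(self, s):
### passing x for s prints x.num on the worker and returns x unchanged.
temp1 = temp.map(lambda x: x.display(x))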

Structure

.
├── ab.py
└── main.py

main.py

import numpy as np
from pyspark import SparkConf, SparkContext
import os
from ab import A, B

def main():
    ### Locally run an application "test" using Spark.
    conf = SparkConf().setAppName("test").setMaster("local[2]")

    ### Create the SparkContext and ship ab.py to the executors.
    sc = SparkContext(
        conf=conf,
        pyFiles=[os.path.join(os.path.abspath(os.path.dirname(__file__)), 'ab.py')]
    )

    data = []
    for i in np.arange(5):
        x = A(i)
        data.append(x)

    lines = sc.parallelize(data)
    temp = lines.map(B)

    temp.foreach(lambda x: x.display()) 

if __name__ == "__main__":
    main()
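
As a side note, an equivalent way to ship ab.py to the executors (my assumption, not something the answer above uses) is SparkContext.addPyFile, called after the context is created:

import os
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("test").setMaster("local[2]")
sc = SparkContext(conf=conf)

### Ship the module that defines A and B to every executor.
sc.addPyFile(os.path.join(os.path.abspath(os.path.dirname(__file__)), 'ab.py'))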

ab.py

import copy

class A():
    def __init__(self, n):
        self.num = n

class B(A):
    ### Copy the num attribute of an A instance into B.
    def __init__(self, A):
        self.num = copy.deepcopy(A.num)

    ### Print the item of B.
    def display(self):
        print(self.num)

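If the goal is just to look at the values, a simpler pattern (a sketch, not part of the answer above) is to map to plain data and collect it on the driver, so nothing has to print on the executors:

### Bring the numeric payload back to the driver instead of printing on workers.
nums = lines.map(B).map(lambda b: b.num).collect()
print(nums)  ### e.g. [0, 1, 2, 3, 4]
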
Comments:

  • Once again: printing is a bad idea. Even ignoring Spark's architecture, there is a good chance it will become a bottleneck in your program.
  • If you need diagnostic output, consider logging, or collect a sample and inspect it locally: for x in rdd.sample(False, 0.001).collect(): x.display() (see the snippet after this list).
  • For side effects, use foreach instead of map.
  • I modified the display method; I wasn't sure what s was supposed to be in this context.
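
Expanding the sampling suggestion above into a runnable snippet (the 0.001 fraction is the commenter's illustrative value; pick whatever is practical for your data):

### Pull a small random sample back to the driver and inspect it locally.
for x in temp.sample(False, 0.001).collect():
    x.display()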
