
How should I structure this execution flow in Spark?

I have been playing around with Spark but I can't get my head around how to structure this execution flow. Pseudocode follows:

from pyspark import SparkConf, SparkContext, SQLContext

conf = SparkConf()
sc = SparkContext(conf=conf)
sqlSC = SQLContext(sc)

df1 = getBigDataSetFromDb()
ddf1 = sqlSC.createDataFrame(sc.broadcast(df1))

df2 = getOtherBigDataSetFromDb()
ddf2 = sqlSC.createDataFrame(sc.broadcast(df2))

datesList = sc.parallelize(aListOfDates)

def myComplicatedFunc(cobDate):
    filteredDF1 = ddf1.filter(ddf1['BusinessDate'] == cobDate)
    filteredDF2 = ddf2.filter(ddf2['BusinessDate'] == cobDate)
    #some more complicated stuff that uses filteredDF1 & filteredDF2
    return someValue

results = datesList.map(myComplicatedFunc).collect()

However, what I get is something like this:

Traceback (most recent call last):
  File "/net/nas/SysGrid_Users/John.Richardson/Code/HistoricVars/sparkTest2.py", line 76, in <module>
    varResults = distDates.map(varFunc).collect()
  File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/rdd.py", line 771, in collect
  File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/rdd.py", line 2379, in _jrdd
  File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/rdd.py", line 2299, in _prepare_for_python_RDD
  File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py", line 428, in dumps
  File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/cloudpickle.py", line 646, in dumps
  File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/cloudpickle.py", line 107, in dump
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 408, in dump
    self.save(obj)
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 740, in save_tuple
    save(element)
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/cloudpickle.py", line 199, in save_function
  File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/cloudpickle.py", line 236, in save_function_tuple
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 725, in save_tuple
    save(element)
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 770, in save_list
    self._batch_appends(obj)
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 797, in _batch_appends
    save(tmp[0])
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/cloudpickle.py", line 193, in save_function
  File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/cloudpickle.py", line 241, in save_function_tuple
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 810, in save_dict
    self._batch_setitems(obj.items())
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 841, in _batch_setitems
    save(v)
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 520, in save
    self.save_reduce(obj=obj, *rv)
  File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/cloudpickle.py", line 542, in save_reduce
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 475, in save
    f(self, obj) # Call unbound method with explicit self
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 810, in save_dict
    self._batch_setitems(obj.items())
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 836, in _batch_setitems
    save(v)
  File "/net/nas/uxhome/condor_ldrt-s/Python/lib/python3.5/pickle.py", line 495, in save
    rv = reduce(self.proto)
  File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
  File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/sql/utils.py", line 45, in deco
  File "/net/nas/uxhome/condor_ldrt-s/spark-1.6.1-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 312, in get_return_value
py4j.protocol.Py4JError: An error occurred while calling o44.__getstate__. Trace:
py4j.Py4JException: Method __getstate__([]) does not exist
        at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:335)
        at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:344)
        at py4j.Gateway.invoke(Gateway.java:252)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:209)
        at java.lang.Thread.run(Thread.java:745)

I suspect that I am going about this the wrong way. I assumed that the point of using a broadcast variable was that I could use it inside a closure. But perhaps I must do some sort of join instead?
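
For reference, this is the sort of broadcast usage I had in mind. It's only a toy sketch with made-up data, not my real code:

# Toy example: broadcast a small, plain Python object (a dict here) and
# read it back through .value inside the closure that runs on the executors.
rates = sc.broadcast({'2016-05-02': 1.5, '2016-05-03': 2.0})

def scale(record):
    cobDate, value = record
    return value * rates.value.get(cobDate, 1.0)

scaled = sc.parallelize([('2016-05-02', 10.0), ('2016-05-03', 20.0)]).map(scale).collect()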

Although I agree with the comment about the lack of domain context, I don't think this is what you want:

df2 = getOtherBigDataSetFromDb()
ddf2 = sqlSC.createDataFrame(sc.broadcast(df2))

You don't say what the type of df2 is, but let's assume it's an array and not actually a DataFrame already (despite being named df*). If it's an array, what you probably want is:

df2 = getOtherBigDataSetFromDb()
ddf2 = sqlSC.createDataFrame(sc.parallelize(df2))

That being said, getOtherBigDataSetFromDb implies it's actually, well, a big data set. So while this flow could work, if your dataset is really REALLY big, you might want to consume it in chunks. You could write that yourself, but there's probably already a library that reads from your DB of choice (see the sketch below). Regardless, I believe you mean parallelize and not broadcast.
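
For example, Spark's own JDBC data source can do the chunked reading for you and hands you a DataFrame directly, so the full table never has to sit in the driver. A rough sketch, where the URL, table name and partition column are made-up placeholders:

# Sketch only: let Spark partition the read across executors instead of
# pulling everything into the driver first. All connection details are placeholders.
ddf2 = sqlSC.read.format("jdbc").options(
    url="jdbc:postgresql://dbhost:5432/mydb",
    dbtable="other_big_table",
    partitionColumn="id",      # numeric column Spark uses to split the read
    lowerBound="1",
    upperBound="1000000",
    numPartitions="8"
).load()

(You would still need the appropriate JDBC driver jar on the classpath for this to work.)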
