
pyspark: 'RDD' is not callable while creating dataframe

I am trying to create a dataframe from parameters supplied by the end user (via a REST API) so that I can pull predictions from my model. However, I am getting an error while creating the dataframe.

Error during Approach #1 (using a tuple of values and a list of columns)

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/02/10 13:01:13 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/02/10 13:01:14 WARN Utils: Your hostname, pyspark-VirtualBox resolves to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface enp0s3)
18/02/10 13:01:14 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
18/02/10 13:01:17 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
 ### Tuple is  [(800, 0, 0.3048, 71.3, 0.0026634)]
 ### schema --> struct<Freq_Hz:int,displ_thick_m:double,Chord_m:double,V_inf_mps:double,AoA_Deg:int>
 ### session --> <pyspark.sql.conf.RuntimeConfig object at 0x7f1b68086860>
 ### data frame --> MapPartitionsRDD[8] at toJavaRDD at NativeMethodAccessorImpl.java:0
127.0.0.1 - - [10/Feb/2018 13:01:37] "GET /test HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/app.py", line 1997, in __call__
    return self.wsgi_app(environ, start_response)
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/app.py", line 1985, in wsgi_app
    response = self.handle_exception(e)
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/app.py", line 1540, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/_compat.py", line 33, in reraise
    raise value
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/app.py", line 1982, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/app.py", line 1615, in full_dispatch_request
    return self.finalize_request(rv)
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/app.py", line 1630, in finalize_request
    response = self.make_response(rv)
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/app.py", line 1740, in make_response
    rv = self.response_class.force_type(rv, request.environ)
  File "/home/pyspark/.local/lib/python3.5/site-packages/werkzeug/wrappers.py", line 921, in force_type
    response = BaseResponse(*_run_wsgi_app(response, environ))
  File "/home/pyspark/.local/lib/python3.5/site-packages/werkzeug/wrappers.py", line 59, in _run_wsgi_app
    return _run_wsgi_app(*args)
  File "/home/pyspark/.local/lib/python3.5/site-packages/werkzeug/test.py", line 923, in run_wsgi_app
    app_rv = app(environ, start_response)
TypeError: 'RDD' object is not callable

Error during Approach #2 (using a tuple and a schema)

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/02/10 12:56:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/02/10 12:56:48 WARN Utils: Your hostname, pyspark-VirtualBox resolves to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface enp0s3)
18/02/10 12:56:48 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
18/02/10 12:56:51 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
 ### Tuple is  [(800, 0, 0.3048, 71.3, 0.0026634)]
 ### schema --> struct<displ_thick_m:double,Chord_m:double,Freq_Hz:int,AoA_Deg:int,V_inf_mps:double>
 ### session --> <pyspark.sql.conf.RuntimeConfig object at 0x7efd4df9e860>
127.0.0.1 - - [10/Feb/2018 12:56:53] "GET /test HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/app.py", line 1997, in __call__
    return self.wsgi_app(environ, start_response)
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/app.py", line 1985, in wsgi_app
    response = self.handle_exception(e)
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/app.py", line 1540, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/_compat.py", line 33, in reraise
    raise value
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/app.py", line 1982, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/app.py", line 1614, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/app.py", line 1517, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/_compat.py", line 33, in reraise
    raise value
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/app.py", line 1612, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/pyspark/.local/lib/python3.5/site-packages/flask/app.py", line 1598, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/home/pyspark/Desktop/building_py_rec/lin_reg/server.py", line 48, in test
    df = session.createDataFrame(tup, schema)
  File "/home/pyspark/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/session.py", line 522, in createDataFrame
    rdd, schema = self._createFromLocal(map(prepare, data), schema)
  File "/home/pyspark/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/session.py", line 383, in _createFromLocal
    data = list(data)
  File "/home/pyspark/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/session.py", line 505, in prepare
    verify_func(obj, schema)
  File "/home/pyspark/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/types.py", line 1360, in _verify_type
    _verify_type(v, f.dataType, f.nullable)
  File "/home/pyspark/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/types.py", line 1324, in _verify_type
    raise TypeError("%s can not accept object %r in type %s" % (dataType, obj, type(obj)))
TypeError: DoubleType can not accept object 800 in type <class 'int'>

From this I understand that the order of the values supplied to createDataFrame (the values in the tuple) does not match the order of the fields in the schema: the first value, 800, is verified against displ_thick_m, which is a DoubleType. Hence the TypeError.
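To make the mismatch concrete, here is a minimal standalone sketch (assuming only pyspark.sql.types; the exact order a set yields can vary between runs) that builds the same five fields once as a set and once as a list and prints both schemas:

    from pyspark.sql.types import StructType, StructField, IntegerType, DoubleType

    fields = [
        StructField("Freq_Hz", IntegerType(), False),
        StructField("AoA_Deg", IntegerType(), False),
        StructField("Chord_m", DoubleType(), False),
        StructField("V_inf_mps", DoubleType(), False),
        StructField("displ_thick_m", DoubleType(), False),
    ]

    # A set has no defined iteration order, so the field order is arbitrary;
    # one run above printed displ_thick_m, Chord_m, Freq_Hz, AoA_Deg, V_inf_mps.
    print(StructType(set(fields)).simpleString())

    # A list keeps the order it was written in, which matches the tuple
    # (800, 0, 0.3048, 71.3, 0.0026634).
    print(StructType(fields).simpleString())
    # struct<Freq_Hz:int,AoA_Deg:int,Chord_m:double,V_inf_mps:double,displ_thick_m:double>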

Relevant code

@app.route('/test')
def test():
    # create spark Session
    session = SparkSession.builder.appName('lin_reg_api').getOrCreate();

    # Approach#1
    tup = [(800,0,0.3048,71.3,0.0026634)]
    cols = ["Freq_Hz", "AoA_Deg", "Chord_m", "V_inf_mps", "displ_thick_m"];
    print(' ### Tuple is ', tup);

    #Approach#2
    schema = StructType({
        StructField("Freq_Hz", IntegerType(), False),
        StructField("AoA_Deg", IntegerType(), False),
        StructField("Chord_m", DoubleType(), False),
        StructField("V_inf_mps", DoubleType(), False),
        StructField("displ_thick_m", DoubleType(), False),
    });
    print(' ### schema -->', schema.simpleString());
    # session = linReg.getSession(); # returns the spark session
    print(' ### session -->', session.conf);

    # Approach 1
    #df = session.createDataFrame(tup, cols)
    # Approach 2
    df = session.createDataFrame(tup, schema)
    print(' ### data frame -->', df.toJSON())
    return df.toJSON()

I would like to understand how to make both approaches work.

  • The first error does not occur in the code you posted but during application initialization; it cannot be reproduced with the code you posted.

  • The second problem is explicitly indicated by the exception:

    TypeError: DoubleType can not accept object 800 in type <class 'int'>

    It happens because you define the schema using a set ({...}), where the order of fields is undefined. Use a sequence with a defined order, for example a list:

    schema = StructType([
        StructField("Freq_Hz", IntegerType(), False),
        StructField("AoA_Deg", IntegerType(), False),
        StructField("Chord_m", DoubleType(), False),
        StructField("V_inf_mps", DoubleType(), False),
        StructField("displ_thick_m", DoubleType(), False),
    ])
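    With the list-based schema above, each tuple value lines up with the matching field. A minimal usage sketch, reusing tup and session from the question (the printed output is illustrative):

        tup = [(800, 0, 0.3048, 71.3, 0.0026634)]
        df = session.createDataFrame(tup, schema)
        df.show()
        # +-------+-------+-------+---------+-------------+
        # |Freq_Hz|AoA_Deg|Chord_m|V_inf_mps|displ_thick_m|
        # +-------+-------+-------+---------+-------------+
        # |    800|      0| 0.3048|     71.3|    0.0026634|
        # +-------+-------+-------+---------+-------------+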
