
How to use Jupyter, PySpark and Cassandra together on a Google Cloud Dataproc cluster

I'm trying to make these three tools work together on Google Cloud Platform, so I used Dataproc to create a Spark cluster with initialization scripts that install Cassandra and Jupyter.

When I SSH into the cluster and launch "pyspark --packages datastax:spark-cassandra-connector:2.3.0-s_2.11", everything seems to be OK.

Edit: in fact, it works with spark-shell, but not with pyspark.

I can't figure out how to launch Jupyter with the PySpark kernel and the Cassandra connector. Edit: the problem seems to be more linked to pyspark than to Jupyter.

I tried to modify the kernel.json:

    {
     "argv": [
        "bash",
        "-c",
        "PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS='kernel -f {connection_file}' pyspark"],
     "env": {
        "PYSPARK_SUBMIT_ARGS": "--master local[*] pyspark-shell --packages datastax:spark-cassandra-connector:2.3.0-s_2.11"
     },
     "display_name": "PySpark",
     "language": "python"
    }

But this doesn't seem to work. In Jupyter, I can't find anything concerning Cassandra and get exceptions like:

java.lang.ClassNotFoundException: Failed to find data source: pyspark.sql.cassandra.

(I tried other PYSPARK_SUBMIT_ARGS values and also adding --packages to PYSPARK_DRIVER_PYTHON_OPTS, but nothing works.)

Edit: When I launch pyspark, I get some warnings. None of them seems linked to my problem, but maybe I'm wrong, so here are the pyspark startup messages:

    myuserhome@spark-cluster-m:~$ pyspark --packages com.datastax.spark:spark-cassandra-connector_2.11:2.3.0
    Python 2.7.9 (default, Jun 29 2016, 13:08:31) 
    [GCC 4.9.2] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    Ivy Default Cache set to: /home/myuserhome/.ivy2/cache
    The jars for the packages stored in: /home/myuserhome/.ivy2/jars
    :: loading settings :: url = jar:file:/usr/lib/spark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
    com.datastax.spark#spark-cassandra-connector_2.11 added as a dependency
    :: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
            confs: [default]
            found com.datastax.spark#spark-cassandra-connector_2.11;2.3.0 in central
            found com.twitter#jsr166e;1.1.0 in central
            found commons-beanutils#commons-beanutils;1.9.3 in central
            found commons-collections#commons-collections;3.2.2 in central
            found joda-time#joda-time;2.3 in central
            found org.joda#joda-convert;1.2 in central
            found io.netty#netty-all;4.0.33.Final in central
            found org.scala-lang#scala-reflect;2.11.8 in central
    :: resolution report :: resolve 2615ms :: artifacts dl 86ms
            :: modules in use:
            com.datastax.spark#spark-cassandra-connector_2.11;2.3.0 from central in [default]
            com.twitter#jsr166e;1.1.0 from central in [default]
            commons-beanutils#commons-beanutils;1.9.3 from central in [default]
            commons-collections#commons-collections;3.2.2 from central in [default]
            io.netty#netty-all;4.0.33.Final from central in [default]
            joda-time#joda-time;2.3 from central in [default]
            org.joda#joda-convert;1.2 from central in [default]
            org.scala-lang#scala-reflect;2.11.8 from central in [default]
            ---------------------------------------------------------------------
            |                  |            modules            ||   artifacts   |
            |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
            ---------------------------------------------------------------------
            |      default     |   8   |   0   |   0   |   0   ||   8   |   0   |
            ---------------------------------------------------------------------
    :: retrieving :: org.apache.spark#spark-submit-parent
            confs: [default]
            0 artifacts copied, 8 already retrieved (0kB/76ms)
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    18/06/17 11:08:22 WARN org.apache.hadoop.hdfs.DataStreamer: Caught exception
    java.lang.InterruptedException
            at java.lang.Object.wait(Native Method)
            at java.lang.Thread.join(Thread.java:1252)
            at java.lang.Thread.join(Thread.java:1326)
            at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:973)
            at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:624)
            at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:801)
    18/06/17 11:08:23 WARN org.apache.hadoop.hdfs.DataStreamer: Caught exception
    java.lang.InterruptedException
            at java.lang.Object.wait(Native Method)
            at java.lang.Thread.join(Thread.java:1252)
            at java.lang.Thread.join(Thread.java:1326)
            at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:973)
            at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:624)
            at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:801)
    18/06/17 11:08:23 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/myuserhome/.ivy2/jars/com.datastax.spark_spark-cassandra-connector_2.11-2.3.0.jar added multiple times to distributed cache.
    18/06/17 11:08:23 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/myuserhome/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar added multiple times to distributed cache.
    18/06/17 11:08:23 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/myuserhome/.ivy2/jars/commons-beanutils_commons-beanutils-1.9.3.jar added multiple times to distributed cache.
    18/06/17 11:08:23 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/myuserhome/.ivy2/jars/joda-time_joda-time-2.3.jar added multiple times to distributed cache.
    18/06/17 11:08:23 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/myuserhome/.ivy2/jars/org.joda_joda-convert-1.2.jar added multiple times to distributed cache.
    18/06/17 11:08:23 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/myuserhome/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar added multiple times to distributed cache.
    18/06/17 11:08:23 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/myuserhome/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar added multiple times to distributed cache.
    18/06/17 11:08:23 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/myuserhome/.ivy2/jars/commons-collections_commons-collections-3.2.2.jar added multiple times to distributed cache.
    18/06/17 11:08:24 WARN org.apache.hadoop.hdfs.DataStreamer: Caught exception
    java.lang.InterruptedException
            at java.lang.Object.wait(Native Method)
            at java.lang.Thread.join(Thread.java:1252)
            at java.lang.Thread.join(Thread.java:1326)
            at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:973)
            at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:624)
            at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:801)
    ivysettings.xml file not found in HIVE_HOME or HIVE_CONF_DIR,/etc/hive/conf.dist/ivysettings.xml will be used
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /__ / .__/\_,_/_/ /_/\_\   version 2.2.1
          /_/

    Using Python version 2.7.9 (default, Jun 29 2016 13:08:31)
    SparkSession available as 'spark'.
    >>> import org.apache.spark.sql.cassandra
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ImportError: No module named org.apache.spark.sql.cassandra
    >>> import pyspark.sql.cassandra
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ImportError: No module named cassandra

Edit: About trying to import the Java package in pyspark: it is just the simplest command I found that raises the exception I'm facing. Here is another:

    dfout.write.format("pyspark.sql.cassandra") \
        .mode("overwrite") \
        .option("confirm.truncate", "true") \
        .option("spark.cassandra.connection.host", "10.142.0.4") \
        .option("spark.cassandra.connection.port", "9042") \
        .option("keyspace", "uasb03") \
        .option("table", "activite") \
        .save()

    > "An error occurred while calling o113.save.\n: java.lang.ClassNotFoundException: Failed to find data source: pyspark.sql.cassandra.

I think I've tried org.apache.spark.sql.cassandra too, but I'll have to retry it: your answer clarifies many things I tried a little blindly (the --master=local[*] was another of those tries).

About the cluster: it was created the way you suggest (for Jupyter), except for the --properties. And Jupyter works all right, except that I can't use the Cassandra connector.

Edit: following Karthik Palaniappan's advice

Now, when I use pyspark via SSH, it works. But with Jupyter, I still get an error:

    df = spark.read.format("csv") \
        .option("header", "true") \
        .option("inferSchema", "true") \
        .option("nullValue", "NA") \
        .option("timestampFormat", "ddMMMyyyy:HH:mm:ss") \
        .option("quote", "\"") \
        .option("delimiter", ";") \
        .option("mode", "failfast") \
        .load("gs://tidy-centaur-b1/data/myfile.csv")

    import pyspark.sql.functions as F

    dfi = df.withColumn("id", F.monotonically_increasing_id()).withColumnRenamed("NUMANO", "numano")

    dfi.createOrReplaceTempView("pathologie")

    dfi.write.format("org.apache.spark.sql.cassandra") \
        .mode("overwrite") \
        .option("confirm.truncate", "true") \
        .option("spark.cassandra.connection.host", "10.142.0.3") \
        .option("spark.cassandra.connection.port", "9042") \
        .option("keyspace", "mykeyspace") \
        .option("table", "mytable") \
        .save()

    Py4JJavaError: An error occurred while calling o115.save.
    : java.lang.ClassNotFoundException: Failed to find data source: org.apache.spark.sql.cassandra. Please find packages at http://spark.apache.org/third-party-projects.html

I recreated the cluster the way you suggest:

    gcloud dataproc clusters create spark-cluster \
         --async \
         --project=tidy-centaur-205516 \
         --region=us-east1 \
         --zone=us-east1-b \
         --bucket=tidy-centaur-b1 \
         --image-version=1.2 \
         --num-masters=1 \
         --master-boot-disk-size=10GB \
         --master-machine-type=n1-standard-2 \
         --num-workers=2 \
         --worker-boot-disk-size=10GB \
         --worker-machine-type=n1-standard-1 \
         --metadata 'CONDA_PACKAGES="numpy pandas scipy matplotlib",PIP_PACKAGES=pandas-gbq' \
         --properties spark:spark.packages=com.datastax.spark:spark-cassandra-connector_2.11:2.3.0 \
         --initialization-actions=gs://tidy-centaur-b1/init-cluster.sh,gs://dataproc-initialization-actions/jupyter2/jupyter2.sh

The init-cluster.sh script installs Cassandra.
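
The script itself isn't included in the post. For context, here is a minimal sketch of what a Cassandra-installing initialization action could look like; the repository URL, Cassandra version and service handling are assumptions, not taken from the original:

    #!/usr/bin/env bash
    # Hypothetical sketch of init-cluster.sh -- the real script is not shown in the post.
    # Dataproc initialization actions run as root on every node.
    set -euxo pipefail

    # Add the Apache Cassandra 3.11 Debian repository (URL/version assumed).
    echo "deb http://www.apache.org/dist/cassandra/debian 311x main" \
        > /etc/apt/sources.list.d/cassandra.sources.list
    curl -fsSL https://www.apache.org/dist/cassandra/KEYS | apt-key add -

    apt-get update
    apt-get install -y cassandra

    # Make sure the daemon is up so the Spark connector can reach port 9042.
    service cassandra start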

I executed jupyter notebook --generate-config and modified the PySpark kernel.json:

    {
     "argv": [
        "bash",
        "-c",
        "PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS='kernel -f {connection_file}' pyspark"],
     "env": {
        "PYSPARK_SUBMIT_ARGS": "pyspark-shell --packages com.datastax.spark:spark-cassandra-connector_2.11:2.3.0"
     },
     "display_name": "PySpark",
     "language": "python"
    }
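
One way to check whether the --packages argument actually reached the Spark session created by this kernel is to inspect the Spark configuration from a notebook cell. This diagnostic isn't part of the original post; it only prints properties that are already set:

    # Run in a Jupyter cell using the PySpark kernel defined above.
    # The kernel creates the `spark` session; if the connector was picked up,
    # spark.jars.packages should mention spark-cassandra-connector.
    for key, value in spark.sparkContext.getConf().getAll():
        if "jars" in key or "packages" in key:
            print(key, "=", value)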

According to the spark-cassandra-connector docs, you're supposed to use the Data Sources API in PySpark, e.g. spark.read.format("org.apache.spark.sql.cassandra").... Under the hood, this will use the Java/Scala package you added. I'm not sure why you're trying to import the Java package in pyspark.

Please use the Jupyter (Python 3 + Conda) or Jupyter2 (Python 2 + Pip) initialization actions to install Jupyter + PySpark correctly. Importantly, you do not want to use --master=local[*], as that will only utilize the master node.

Also, the --packages flag is the same thing as the Spark property spark.jars.packages. You can set Spark properties when creating a cluster using --properties spark:spark.jars.packages=<package>.

So I think you want something like this:

    gcloud dataproc clusters create <cluster-name> \
        --initialization-actions gs://dataproc-initialization-actions/jupyter/jupyter.sh \
        --properties spark:spark.jars.packages=datastax:spark-cassandra-connector:2.3.0-s_2.11

Then, follow the instructions in the connector's PySpark docs, e.g.:

    spark.read \
        .format("org.apache.spark.sql.cassandra") \
        .options(table="kv", keyspace="test") \
        .load().show()
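
The write side (which is what the question is ultimately doing) looks similar. A short sketch in the same style, assuming df is an existing DataFrame whose schema matches the target table (the kv/test names mirror the read example above):

    # Append df to the Cassandra table test.kv through the same data source.
    df.write \
        .format("org.apache.spark.sql.cassandra") \
        .mode("append") \
        .options(table="kv", keyspace="test") \
        .save()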
