Running spark-nlp DocumentAssembler on EMR
I am trying to run spark-nlp on EMR. I log in to my Zeppelin notebook and run the following:
import sparknlp
from pyspark.sql import SparkSession  # needed for SparkSession.builder below

spark = SparkSession.builder \
    .appName("BBC Text Categorization") \
    .config("spark.driver.memory", "8G") \
    .config("spark.memory.offHeap.enabled", True) \
    .config("spark.memory.offHeap.size", "8G") \
    .config("spark.driver.maxResultSize", "2G") \
    .config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.4.5") \
    .config("spark.kryoserializer.buffer.max", "1000M") \
    .config("spark.network.timeout", "3600s") \
    .getOrCreate()

from sparknlp.base import DocumentAssembler

documentAssembler = DocumentAssembler() \
    .setInputCol("description") \
    .setOutputCol("document")
This resulted in the following error:
Fail to execute line 1: documentAssembler = DocumentAssembler()\
Traceback (most recent call last):
File "/tmp/zeppelin_pyspark-4581426413302524147.py", line 380, in <module>
exec(code, _zcUserQueryNameSpace)
File "<stdin>", line 1, in <module>
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/__init__.py", line 110, in wrapper
return func(self, **kwargs)
File "/usr/local/lib/python3.6/site-packages/sparknlp/base.py", line 148, in __init__
super(DocumentAssembler, self).__init__(classname="com.johnsnowlabs.nlp.DocumentAssembler")
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/__init__.py", line 110, in wrapper
return func(self, **kwargs)
File "/usr/local/lib/python3.6/site-packages/sparknlp/internal.py", line 72, in __init__
self._java_obj = self._new_java_obj(classname, self.uid)
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/ml/wrapper.py", line 67, in _new_java_obj
return java_obj(*java_args)
TypeError: 'JavaPackage' object is not callable
To understand the problem, I tried logging in to the master node and running the commands above in a pyspark console. Everything ran fine, and I do not get the error above if I start the pyspark console with: pyspark --packages JohnSnowLabs:spark-nlp:2.4.5
However, I run into the same error as before when I start it with just pyspark.
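(This is the classic symptom of the spark-nlp JAR not being on the driver's classpath: py4j then resolves `com.johnsnowlabs.nlp.DocumentAssembler` to a bare `JavaPackage` instead of a class, and calling it raises the `TypeError` above. One easy way to go wrong is the package coordinate itself: `JohnSnowLabs:spark-nlp:2.4.5` is the old spark-packages style, while current releases use full Maven coordinates. The helper below is purely illustrative, not part of spark-nlp; it just splits a coordinate so you can check that the JAR version matches the pip-installed `spark-nlp` wheel.)

```python
def parse_coordinate(coord):
    """Split a Spark package coordinate into (group, artifact, version).

    Handles both the old spark-packages style ("JohnSnowLabs:spark-nlp:2.4.5")
    and the Maven style ("com.johnsnowlabs.nlp:spark-nlp_2.12:3.4.4").
    """
    parts = coord.split(":")
    if len(parts) != 3 or not all(parts):
        raise ValueError(f"expected group:artifact:version, got {coord!r}")
    return tuple(parts)


group, artifact, version = parse_coordinate("com.johnsnowlabs.nlp:spark-nlp_2.12:3.4.4")
# The Python wheel should track the JAR version, e.g. pip install spark-nlp==3.4.4
# to go with the coordinate above; mismatched versions are another common cause
# of 'JavaPackage' object is not callable.
```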
How can I make this work in my Zeppelin notebook?
Setup details:
EMR 5.27.0
spark 2.4.4
openjdk version "1.8.0_272"
OpenJDK Runtime Environment (build 1.8.0_272-b10)
OpenJDK 64-Bit Server VM (build 25.272-b10, mixed mode)
Here is my bootstrap script:
#!/bin/bash
sudo yum install -y python36-devel python36-pip python36-setuptools python36-virtualenv
sudo python36 -m pip install --upgrade pip
sudo python36 -m pip install pandas
sudo python36 -m pip install boto3
sudo python36 -m pip install re  # note: 're' is part of the Python standard library; this install fails
sudo python36 -m pip install spark-nlp==2.7.2
Make sure you are using a supported EMR version; see the Spark NLP documentation for the list of supported versions.
Your bootstrap script should contain:
#!/bin/bash
set -x -e
echo -e 'export PYSPARK_PYTHON=/usr/bin/python3
export HADOOP_CONF_DIR=/etc/hadoop/conf
export SPARK_JARS_DIR=/usr/lib/spark/jars
export SPARK_HOME=/usr/lib/spark' >> $HOME/.bashrc && source $HOME/.bashrc
sudo python3 -m pip install awscli boto spark-nlp
set +x
exit 0
And a JSON configuration file (the sparknlp-config.json referenced in the create-cluster command below) containing:
[
  {
    "Classification": "spark-env",
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "PYSPARK_PYTHON": "/usr/bin/python3"
        }
      }
    ]
  },
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.yarn.stagingDir": "hdfs:///tmp",
      "spark.yarn.preserve.staging.files": "true",
      "spark.kryoserializer.buffer.max": "2000M",
      "spark.serializer": "org.apache.spark.serializer.KryoSerializer",
      "spark.driver.maxResultSize": "0",
      "spark.jars.packages": "com.johnsnowlabs.nlp:spark-nlp_2.12:3.4.4"
    }
  }
]
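Because spark.jars.packages is set in spark-defaults, every Spark session on the cluster, including the one Zeppelin's %spark interpreter creates, resolves the spark-nlp JAR automatically. As an alternative (a sketch assuming Zeppelin 0.8+, where the generic configuration interpreter is available), the property can also be set per notebook, in a paragraph that runs before the first Spark paragraph:

```
%spark.conf
spark.jars.packages com.johnsnowlabs.nlp:spark-nlp_2.12:3.4.4
```

If the Spark interpreter has already started, restart it from Zeppelin's interpreter menu so the package is downloaded when the session is recreated.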
Then create the cluster with the AWS CLI:
aws emr create-cluster \
--name "Spark NLP 3.4.4" \
--release-label emr-6.2.0 \
--applications Name=Hadoop Name=Spark Name=Hive \
--instance-type m4.4xlarge \
--instance-count 3 \
--use-default-roles \
--log-uri "s3://<S3_BUCKET>/" \
--bootstrap-actions Path=s3://<S3_BUCKET>/emr-bootstrap.sh,Name=custom \
--configurations "https://<public_access>/sparknlp-config.json" \
--ec2-attributes KeyName=<your_ssh_key>,EmrManagedMasterSecurityGroup=<security_group_with_ssh>,EmrManagedSlaveSecurityGroup=<security_group_with_ssh> \
--profile <aws_profile_credentials>
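Once the cluster is up, a quick smoke test (hypothetical, run over SSH on the master node; it requires a live cluster) is to open a plain pyspark shell, with no --packages flag, and instantiate the annotator that failed before:

```shell
# Run on the EMR master node. With spark.jars.packages in spark-defaults,
# plain `pyspark` should now pull the spark-nlp JAR onto the classpath.
pyspark <<'EOF'
import sparknlp
from sparknlp.base import DocumentAssembler

print("spark-nlp version:", sparknlp.version())
# This line raised TypeError: 'JavaPackage' object is not callable
# when the JAR was missing; it should now succeed.
DocumentAssembler().setInputCol("description").setOutputCol("document")
EOF
```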