
How to create a Spark dataframe to feed to Spark's random forest implementation from a list of np.arrays (generated by RDKit)?

I am trying to generate molecular descriptors using RDKit and then perform machine learning on them using Spark. I have managed to generate the descriptors, and I have found example code for doing Random Forest regression. That code loads the dataframe from a file stored in svmlight format, and I could create such a file using dump_svmlight_file, but going via a file doesn't feel very "Sparky".
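For context, the file-based route I would rather avoid looks roughly like this (untested sketch; "descriptors.libsvm" is just a made-up file name and X/y are toy stand-ins for the real descriptors and targets):

import numpy as np
from sklearn.datasets import dump_svmlight_file
from pyspark.sql import SparkSession

X = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0]])   # toy descriptor matrix
y = np.array([1.2, 3.4])                           # toy target values
dump_svmlight_file(X, y, "descriptors.libsvm")     # write features + labels to disk in svmlight/libsvm format

spark = SparkSession.builder.appName("SimpleApp").getOrCreate()
# Spark reads the file back as a DataFrame with "label" and "features" columns
data = spark.read.format("libsvm").load("descriptors.libsvm")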

I have come this far:

from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem import DataStructs
import numpy as np
from sklearn.datasets import dump_svmlight_file

from pyspark.ml import Pipeline
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.feature import VectorIndexer
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("SimpleApp").getOrCreate()
df = spark.read.option("header","true")\
               .option("delimiter", '\t').csv("acd_logd_100.smiles")
mols = df.select("canonical_smiles").rdd.flatMap(lambda x : x)\
         .map(lambda x: Chem.MolFromSmiles(x))\
         .map(lambda x: AllChem.GetMorganFingerprintAsBitVect(x, 2, nBits=1024))\
         .map(lambda x: np.array(x))
spark.createDataFrame(mols)

But clearly I can't create a DataFrame from my RDD of np.arrays like this. (I get a strange error message: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all().)

I guess I also need to add the y values and somehow tell the Random Forest implementation which column in the dataframe is x and which is y, but I can't yet create a dataframe at all from this data. How do I do this?
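As far as I understand, Spark's RandomForestRegressor wants a DataFrame with a Vector-typed "features" column and a numeric "label" column, so the target shape would be something like this made-up example (using the SparkSession created above):

from pyspark.ml.linalg import Vectors

toy = spark.createDataFrame(
    [(Vectors.dense([0.0, 1.0, 0.0]), 1.2),
     (Vectors.dense([1.0, 0.0, 1.0]), 3.4)],
    ["features", "label"])
toy.show()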


EDIT: I have tried to go via pyspark.ml.linalg.Vectors to create a dataframe, loosely based on Creating Spark dataframe from numpy matrix, but I cannot seem to create a Vector like this:

from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem import DataStructs
import numpy as np
from sklearn.datasets import dump_svmlight_file

from pyspark.ml import Pipeline
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.feature import VectorIndexer
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.sql import SparkSession

from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("SimpleApp").getOrCreate()
df = spark.read.option("header","true")\
               .option("delimiter", '\t').csv("acd_logd_100.smiles")
mols = df.select("canonical_smiles").rdd.flatMap(lambda x : x)\
         .map(lambda x: Chem.MolFromSmiles(x))\
         .map(lambda x: AllChem.GetMorganFingerprintAsBitVect(x, 2, nBits=1024))\
         .map(lambda x: np.array(x))\
         .map(lambda x: Vectors.sparse(x))
print(mols.take(5))         

mydf = spark.createDataFrame(mols,schema=["features"])

I get:

TypeError: only size-1 arrays can be converted to Python scalars

which I don't understand at all.
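(In hindsight, I think the problem is that Vectors.sparse expects a plain Python int as its first argument, the vector size, followed by the indices and values; passing a whole numpy array makes it try to convert the array to a single int. A minimal sketch of the forms that do work:)

from pyspark.ml.linalg import Vectors

v1 = Vectors.dense([0.0, 1.0, 1.0, 0.0])       # dense: just the values
v2 = Vectors.sparse(4, [1, 2], [1.0, 1.0])     # sparse: size, indices, values
v3 = Vectors.sparse(4, {1: 1.0, 2: 1.0})       # sparse: size, {index: value}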

So if you found your way here, I thought I would share what I ended up with. I went with dense vectors in the end because it was easier. The only way I came up with to go from the RDKit bit vector was to first create a numpy.array and then a Spark Vectors.dense from that. I also realised that I need to haul the y values along through the entire transformation; apparently you can't add that column to the dataframe at the end once the x values are sorted out, hence the complicated tuple.

from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem import DataStructs
import numpy as np
from sklearn.datasets import dump_svmlight_file

from pyspark.ml import Pipeline
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.feature import VectorIndexer
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.sql import SparkSession

from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("SimpleApp").getOrCreate()
df = spark.read.option("header","true")\
               .option("delimiter", '\t').csv("acd_logd_100.smiles")

print(df.select("canonical_smiles", "acd_logd").rdd)

# Build (features, label) pairs: SMILES -> RDKit mol -> Morgan fingerprint
# -> numpy array -> Spark dense vector, carrying the y value along each step.
data = df.select("canonical_smiles", "acd_logd").rdd\
         .map(lambda row: (row.canonical_smiles, float(row.acd_logd)))\
         .map(lambda x: (Chem.MolFromSmiles(x[0]), x[1]))\
         .map(lambda x: (AllChem.GetMorganFingerprintAsBitVect(x[0], 2, nBits=1024), x[1]))\
         .map(lambda x: (np.array(x[0]), x[1]))\
         .map(lambda x: (Vectors.dense(x[0].tolist()), x[1]))\
         .toDF(["features", "label"])

# Automatically identify categorical features, and index them.
# Set maxCategories so features with > 4 distinct values are treated as continuous.
featureIndexer =\
    VectorIndexer(inputCol="features", outputCol="indexedFeatures", maxCategories=4).fit(data)

# Split the data into training and test sets (30% held out for testing)
(trainingData, testData) = data.randomSplit([0.7, 0.3])

# Train a RandomForest model.
rf = RandomForestRegressor(featuresCol="indexedFeatures")

# Chain indexer and forest in a Pipeline
pipeline = Pipeline(stages=[featureIndexer, rf])

# Train model.  This also runs the indexer.
model = pipeline.fit(trainingData)

# Make predictions.
predictions = model.transform(testData)

# Select example rows to display.
predictions.select("prediction", "label", "features").show(5)

# Select (prediction, true label) and compute test error
evaluator = RegressionEvaluator(
    labelCol="label", predictionCol="prediction", metricName="rmse")
rmse = evaluator.evaluate(predictions)
print("Root Mean Squared Error (RMSE) on test data = %g" % rmse)

rfModel = model.stages[1]
print(rfModel)  # summary only

spark.stop()
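For completeness, a sparse alternative that should produce the same kind of dataframe (untested sketch, as a drop-in replacement for the dense-vector pipeline above): the RDKit bit vector exposes its set bits via GetOnBits(), so the numpy detour can be skipped:

data = df.select("canonical_smiles", "acd_logd").rdd\
         .map(lambda row: (Chem.MolFromSmiles(row.canonical_smiles), float(row.acd_logd)))\
         .map(lambda x: (AllChem.GetMorganFingerprintAsBitVect(x[0], 2, nBits=1024), x[1]))\
         .map(lambda x: (Vectors.sparse(1024, sorted(x[0].GetOnBits()), [1.0] * x[0].GetNumOnBits()), x[1]))\
         .toDF(["features", "label"])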
