This is my PySpark code:
from pyspark import SparkConf, SparkContext

def parseLinesEcf4(line):  # split out the fields we need
    fields = line.split('\t')
    id1 = fields[0]
    id2 = fields[1]
    ecfp4 = float(fields[2])
    return (id1, id2, ecfp4)  # return three fields

conf = SparkConf().setMaster("local").setAppName("Second")
sc = SparkContext(conf=conf)

fileTwo = sc.textFile("PS21_ECFP4.tsv")  # load the data
dataTwo = fileTwo.map(parseLinesEcf4)
My input looks like this
and the file is around 900 GB. What I need is to extract the rows whose column-1 values correspond to 10% of the unique values in that column, because one compound has more than one entry.
I tried takeSample() and sampleBy(), but neither returns what I am looking for.
Any help??
You can try the pyspark.ml library.
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.regression import LinearRegression
from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit

# Prepare training and test data.
data = spark.read.format("libsvm") \
    .load("data/mllib/sample_linear_regression_data.txt")
train, test = data.randomSplit([0.9, 0.1], seed=12345)
Be aware, though: to use it you need to vectorize your data with VectorAssembler.
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler

dataset = spark.createDataFrame(
    [(0, 18, 1.0, Vectors.dense([0.0, 10.0, 0.5]), 1.0)],
    ["id", "hour", "mobile", "userFeatures", "clicked"])

assembler = VectorAssembler(
    inputCols=["hour", "mobile", "userFeatures"],
    outputCol="features")

output = assembler.transform(dataset)
print("Assembled columns 'hour', 'mobile', 'userFeatures' to vector column 'features'")
output.select("features", "clicked").show(truncate=False)
https://spark.apache.org/docs/latest/ml-features.html#vectorassembler