Specify subset of elements in Spark RDD (Scala)

My dataset is an RDD[Array[String]] with more than 140 columns. How can I select a subset of columns without hard-coding the column numbers (e.g. .map(x => (x(0), x(3), x(6), ...)))?

This is what I've tried so far (with success):

val peopleTups = people.map(x => x.split(",")).map(i => (i(0),i(1)))

However, I need more than a few columns, and would like to avoid hard-coding them.

This is what I've tried so far (which I think would be better, but failed):

// Attempt 1
val colIndices = List(0, 3, 6, 10, 13)
val peopleTups = people.map(x => x.split(",")).map(i => i(colIndices))

// Error output from attempt 1:
<console>:28: error: type mismatch;
 found   : List[Int]
 required: Int
       val peopleTups = people.map(x => x.split(",")).map(i => i(colIndices))

// Attempt 2
colIndices map peopleTups.lift

// Attempt 3
colIndices map peopleTups

// Attempt 4
colIndices.map(index => peopleTups.apply(index))

I found this question and tried its approach, but because I'm working with an RDD instead of an array, it didn't work: How can I select a non-sequential subset elements from an array using Scala and Spark?

You should map over the RDD instead of the indices. An RDD, unlike an Array, has no apply or lift method for positional indexing (which is why attempts 2-4 fail to compile), so the selection has to happen inside a map over each row:

val list = List.fill(2)(Array.range(1, 6))
// List(Array(1, 2, 3, 4, 5), Array(1, 2, 3, 4, 5))

val rdd = sc.parallelize(list) // RDD[Array[Int]]
val indices = Array(0, 2, 3)

val selectedColumns = rdd.map(array => indices.map(array)) // RDD[Array[Int]]; array is used as a function here, i.e. indices.map(array.apply)

selectedColumns.collect() 
// Array[Array[Int]] = Array(Array(1, 3, 4), Array(1, 3, 4))
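
Applied to the people RDD from the question, the same pattern looks like this (a minimal sketch, assuming people is an RDD[String] of comma-separated lines, as in the original peopleTups snippet):

// Select the question's column indices from each row.
val colIndices = List(0, 3, 6, 10, 13)
val peopleSubset = people
  .map(_.split(","))                // RDD[Array[String]]
  .map(row => colIndices.map(row))  // row.apply per index; RDD[List[String]]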

What about this?

val data = sc.parallelize(List("a,b,c,d,e", "f,g,h,i,j"))
val indices = List(0, 3, 4)
data.map(_.split(",")).map(ss => indices.map(ss(_))).collect

This should give

res1: Array[List[String]] = Array(List(a, d, e), List(f, i, j))
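
If some rows might have fewer columns than expected, a defensive variant (a sketch, not part of the original answer) is to index with lift, which is what Attempt 2 in the question was reaching for; out-of-range indices yield None instead of throwing:

// Safe indexing: lift returns an Option instead of throwing on short rows.
data.map(_.split(",")).map(ss => indices.map(ss.lift)).collect
// Array(List(Some(a), Some(d), Some(e)), List(Some(f), Some(i), Some(j)))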
