Spark StringIndexer.fit is very slow on large records

I have large data records formatted as the following sample:

// +---+------+------+
// |cid|itemId|bought|
// +---+------+------+
// |abc|   123|  true|
// |abc|   345|  true|
// |abc|   567|  true|
// |def|   123|  true|
// |def|   345|  true|
// |def|   567|  true|
// |def|   789| false|
// +---+------+------+

cid and itemId are strings.

There are 965,964,223 records.

I am trying to convert cid to an integer using StringIndexer as follows:

dataset.repartition(50)
val cidIndexer = new StringIndexer().setInputCol("cid").setOutputCol("cidIndex")
val cidIndexedMatrix = cidIndexer.fit(dataset).transform(dataset)

But these lines of code are very slow (around 30 minutes). The dataset is so large that I cannot do anything further after this step.

I am using an Amazon EMR cluster of 2 r4.2xlarge nodes (61 GB of memory each).

Is there any performance improvement that I can do further? Any help will be much appreciated.

That is expected behavior when the cardinality of the column is high. As part of the fitting process, StringIndexer collects all the labels and builds a label-to-index mapping (using Spark's org.apache.spark.util.collection.OpenHashMap).

This requires O(N) memory in the worst case and is both computationally and memory intensive.
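Before fitting, it can be worth checking how many distinct labels StringIndexer would actually have to collect, since that is the size of the in-memory mapping it builds. A quick sketch (assuming `dataset` is the DataFrame from the question):

```scala
// Count distinct cid values: this is the number of entries in the
// label -> index map that StringIndexer.fit materializes
val cardinality = dataset.select("cid").distinct().count()
println(s"distinct cid values: $cardinality")
```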

In cases where the cardinality of the column is high and its content is going to be used as a feature, it is better to apply FeatureHasher (Spark 2.3 or later):

import org.apache.spark.ml.feature.FeatureHasher

val hasher = new FeatureHasher()
  .setInputCols("cid")
  .setOutputCol("cid_hash_vec")  // note: setOutputCol (singular)

val hashed = hasher.transform(dataset)

It doesn't guarantee uniqueness and it is not reversible, but it is good enough for many applications and doesn't require a fitting process.
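The size of the hashed feature space is controlled by numFeatures (default 262,144, i.e. 2^18); with many distinct cid values you may want to raise it to reduce hash collisions. A sketch:

```scala
import org.apache.spark.ml.feature.FeatureHasher

// Larger feature space -> lower collision probability, at the cost
// of a wider (but still sparse) output vector
val hasher = new FeatureHasher()
  .setInputCols("cid")
  .setOutputCol("cid_hash_vec")
  .setNumFeatures(1 << 20)  // 1,048,576 buckets
```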

For a column that won't be used as a feature, you can also use the built-in hash function:

import org.apache.spark.sql.functions.hash

dataset.withColumn("cid_hash", hash($"cid"))
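Note that hash produces a 32-bit value, so with a large number of distinct cid values collisions become likely (by the birthday bound, noticeably above roughly tens of thousands of distinct keys). If you are on Spark 3.0 or later, xxhash64 gives a 64-bit hash with far fewer collisions:

```scala
import org.apache.spark.sql.functions.xxhash64

// 64-bit hash: collision probability is negligible even for
// tens of millions of distinct keys (Spark 3.0+)
dataset.withColumn("cid_hash", xxhash64($"cid"))
```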

Assuming that:

  • You plan to use the cid as a feature (after StringIndexer + OneHotEncoderEstimator)
  • Your data sits in S3

Without knowing much more, my first guess is that you should not worry about memory yet and should check your degree of parallelism first. You only have 2 r4.2xlarge instances, each of which gives you:

  • 8 vCPUs
  • 61 GB of memory
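To see how much parallelism you are actually getting, you can inspect the input partition count and the shuffle-partition setting (a sketch; `spark` is the active SparkSession):

```scala
// Number of partitions the input is read into
println(s"input partitions: ${dataset.rdd.getNumPartitions}")

// Partitions used after shuffles such as the one StringIndexer's
// label collection triggers (default is 200)
println(spark.conf.get("spark.sql.shuffle.partitions"))
```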

Personally, I would try to either:

  • Get more instances
  • Swap the r4.2xlarge instances for an instance type with more CPUs

Unfortunately, with the current EMR offering, this can only be achieved by throwing money at the problem.

Finally, what is the need for repartition(50)? That might just introduce further delays...
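Note also that Datasets are immutable, so calling repartition without assigning the result (as in the question's first line) has no effect at all. A sketch of usage that actually takes effect:

```scala
// repartition returns a new Dataset; the original is unchanged,
// so the result must be assigned for the repartitioning to matter
val repartitioned = dataset.repartition(50)
val cidIndexedMatrix = cidIndexer.fit(repartitioned).transform(repartitioned)
```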
