
How to convert RDD[CassandraRow] to List[CassandraRow] in Scala without using collect()

I am converting an RDD[CassandraRow] to a List[CassandraRow] in Scala. With the code below I am running into a memory problem:

val rowKeyRdd: Array[CassandraRow] =
  sc.cassandraTable(keyspace, table).select("customer_id", "uniqueaddress").collect()

val clientPartitionKeys = rowKeyRdd.map(x => ClientPartitionKey(
  x.getString("customer_id"), x.getString("uniqueaddress"))).toList

val clientRdd: RDD[CassandraRow] =
sc.parallelize(clientPartitionKeys).joinWithCassandraTable(keyspace, table)
  .where("eventtime >= ?", startDate)
  .where("eventtime <= ?", endDate)
  .map(x => x._2)

clientRdd.cache()

I have removed the cache() call, but the problem still occurs:

 org.jboss.netty.channel.socket.nio.AbstractNioSelector
 WARNING: Unexpected exception in the selector loop.
 java.lang.OutOfMemoryError: Java heap space
at org.jboss.netty.buffer.HeapChannelBuffer.<init>(HeapChannelBuffer.java:42)
at org.jboss.netty.buffer.BigEndianHeapChannelBuffer.<init>(BigEndianHeapChannelBuffer.java:34)
at org.jboss.netty.buffer.ChannelBuffers.buffer(ChannelBuffers.java:134)
at org.jboss.netty.buffer.HeapChannelBufferFactory.getBuffer(HeapChannelBufferFactory.java:68)
at org.jboss.netty.buffer.AbstractChannelBufferFactory.getBuffer(AbstractChannelBufferFactory.java:48)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:80)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

ERROR 2016-02-12 07:54:48 akka.actor.ActorSystemImpl: Uncaught fatal error from thread [sparkDriver-akka.remote.default-remote-dispatcher-5] shutting down ActorSystem [sparkDriver]

java.lang.OutOfMemoryError: GC overhead limit exceeded

How can I avoid this memory problem? I have tried using 8GB per core, and the table contains millions of records.

On this line, your variable name suggests you have an RDD, but because you call collect() it is not an RDD: as the type annotation shows, it is an Array:

val rowKeyRdd: Array[CassandraRow] =
  sc.cassandraTable(keyspace, table).select("customer_id", "uniqueaddress").collect()

This pulls all of the data from the workers into the driver, so the amount of memory on the workers (8GB per core) is not the issue; the driver does not have enough memory to hold the collected result.

Since all you do with that data is map it and then re-parallelize it back into an RDD, you should map it without ever calling collect(). I haven't tried the code below, since I don't have access to your dataset, but it should be approximately correct:

val rowKeyRdd: RDD[CassandraRow] =
  sc.cassandraTable(keyspace, table).select("customer_id", "uniqueaddress")

val clientPartitionKeysRDD = rowKeyRdd.map(x => ClientPartitionKey(
  x.getString("customer_id"), x.getString("uniqueaddress")))

val clientRdd: RDD[CassandraRow] =
clientPartitionKeysRDD.joinWithCassandraTable(keyspace, table)
  .where("eventtime >= ?", startDate)
  .where("eventtime <= ?", endDate)
  .map(x => x._2)

clientRdd.cache()
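The same principle applies downstream: once the join stays distributed, consume clientRdd with distributed operations rather than collect(), bringing back only aggregates or small samples. A minimal sketch of the idea, assuming the same SparkContext, keyspace, and columns as above (the destination table name `table_summary` is a hypothetical example, not from the original post); this needs a live Spark cluster with the spark-cassandra-connector on the classpath, so it has not been run here:

```scala
import com.datastax.spark.connector._

// 1. Aggregate on the cluster; only the final Long reaches the driver.
val eventCount: Long = clientRdd.count()

// 2. Inspect a small sample instead of materializing everything.
val sample: Array[CassandraRow] = clientRdd.take(10)

// 3. Write results back out in parallel from the workers,
//    e.g. to a hypothetical summary table.
clientRdd
  .map(row => (row.getString("customer_id"), row.getString("uniqueaddress")))
  .saveToCassandra(keyspace, "table_summary",
    SomeColumns("customer_id", "uniqueaddress"))
```

With this approach no full copy of the table ever exists in one JVM, so driver heap size stops being the bottleneck.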


Note: the technical posts on this site are licensed under CC BY-SA 4.0; if you repost, please credit this site or the original source.