Spark job not parallelising locally (using Parquet + Avro from local filesystem)
edit 2: Indirectly solved the problem by repartitioning the RDD into 8 partitions. Hit a roadblock with Avro objects not being "Java serialisable"; found a snippet here to delegate Avro serialisation to Kryo. The original problem still remains.
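For reference, the delegation can look roughly like this. This is a minimal sketch assuming Twitter's chill-avro is on the classpath; AvroKryoRegistrator is just my name for the class, and the snippet you use may pick a different serialiser:

import com.esotericsoftware.kryo.Kryo
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoRegistrator

// Delegate the Avro-generated Topic class to a Kryo serialiser (chill-avro)
class AvroKryoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo) {
    kryo.register(classOf[Topic],
      com.twitter.chill.avro.AvroSerializer.SpecificRecordBinarySerializer[Topic])
  }
}

// Wire it into the context configuration instead of the two-argument constructor
val conf = new SparkConf()
  .setMaster("local[8]")
  .setAppName("forumAddNlp")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", classOf[AvroKryoRegistrator].getName)
val sc = new SparkContext(conf)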
edit 1: Removed local variable reference in map function
I'm writing a driver to run a compute-heavy job on Spark, using Parquet and Avro for IO/schema. I can't seem to get Spark to use all my cores. What am I doing wrong? Is it because I have set the keys to null?
I am just getting my head around how Hadoop organises files. AFAIK, since my file has a gigabyte of raw data, I should expect to see things parallelising with the default block and page sizes.
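For concreteness, here is the back-of-envelope version of that expectation, assuming Parquet's usual defaults (the sizes below are assumptions, not values read from the file):

// Expected parallelism if input splits follow row-group boundaries
val fileBytes    = 1L << 30       // ~1 GB of raw data
val rowGroupSize = 128L << 20     // assumed default Parquet block (row-group) size
println(fileBytes / rowGroupSize) // => 8, i.e. roughly 8 splits / parallel tasks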
The function to ETL my input for processing looks as follows:
def genForum {
  // The writer is shared by the parallel collection below, so writes are synchronised
  class MyWriter extends AvroParquetWriter[Topic](new Path("posts.parq"), Topic.getClassSchema) {
    override def write(t: Topic) {
      synchronized {
        super.write(t)
      }
    }
  }

  def makeTopic(x: ForumTopic): Topic = {
    // Omitted to save space
  }

  val writer = new MyWriter
  val q =
    DBCrawler.db.withSession {
      Query(ForumTopics).filter(x => x.crawlState === TopicCrawlState.Done).list()
    }
  val sz = q.size
  val c = new AtomicInteger(0)
  q.par.foreach { x =>
    writer.write(makeTopic(x))
    val count = c.incrementAndGet()
    print(f"\r${count.toFloat * 100 / sz}%4.2f%%") // progress meter
  }
  writer.close()
}
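One thing worth noting about this writer: the row-group size chosen at write time is what should later decide how many splits readers see. A sketch of pinning it explicitly, assuming the multi-argument AvroParquetWriter constructor from parquet-avro 1.x (the codec and constants are illustrative, not what the code above uses):

import parquet.hadoop.metadata.CompressionCodecName

// Force ~64 MB row groups so a ~1 GB file yields ~16 potential splits
val sizedWriter = new AvroParquetWriter[Topic](
  new Path("posts.parq"),
  Topic.getClassSchema,
  CompressionCodecName.SNAPPY,
  64 * 1024 * 1024, // block (row-group) size
  1 * 1024 * 1024   // page size
)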
And my transformation looks as follows:
def sparkNLPTransformation() {
  val sc = new SparkContext("local[8]", "forumAddNlp")

  // io configuration
  val job = new Job()
  ParquetInputFormat.setReadSupportClass(job, classOf[AvroReadSupport[Topic]])
  ParquetOutputFormat.setWriteSupportClass(job, classOf[AvroWriteSupport])
  AvroParquetOutputFormat.setSchema(job, Topic.getClassSchema)

  // configure annotator
  val props = new Properties()
  props.put("annotators", "tokenize,ssplit,pos,lemma,parse")
  val an = DAnnotator(props)

  // annotator function
  def annotatePosts(ann: DAnnotator, top: Topic): Topic = {
    val new_p = top.getPosts.map { x =>
      val at = new Annotation(x.getPostText.toString)
      ann.annotator.annotate(at)
      val t = at.get(classOf[SentencesAnnotation]).map(_.get(classOf[TreeAnnotation])).toList
      val r = SpecificData.get().deepCopy[Post](x.getSchema, x)
      if (t.nonEmpty) r.setTrees(t)
      r
    }
    val new_t = SpecificData.get().deepCopy[Topic](top.getSchema, top)
    new_t.setPosts(new_p)
    new_t
  }

  // transformation
  val ds = sc.newAPIHadoopFile("forum_dataset.parq", classOf[ParquetInputFormat[Topic]], classOf[Void], classOf[Topic], job.getConfiguration)
  val new_ds = ds.map(x => (null, annotatePosts(an, x._2)))
  new_ds.saveAsNewAPIHadoopFile("annotated_posts.parq",
    classOf[Void],
    classOf[Topic],
    classOf[ParquetOutputFormat[Topic]],
    job.getConfiguration
  )
}
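A quick sanity check I can run right after the newAPIHadoopFile call above, plus the workaround from edit 2 (a sketch; repartition incurs a shuffle):

// How many input splits did Spark actually derive from the file?
println(s"input partitions: ${ds.partitions.length}")

// Workaround from edit 2: force 8 partitions so all local cores get work
val repartitioned = ds.repartition(8)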
Can you confirm that the data is indeed in multiple blocks in HDFS? What is the total block count on the forum_dataset.parq file?
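For checking that, hdfs fsck forum_dataset.parq -files -blocks prints the block layout, or it can be read programmatically; a minimal sketch using the Hadoop FileSystem API:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Report how many blocks back the input file
val fs     = FileSystem.get(new Configuration())
val status = fs.getFileStatus(new Path("forum_dataset.parq"))
val blocks = fs.getFileBlockLocations(status, 0, status.getLen)
println(s"${blocks.length} block(s), ${status.getLen} bytes total")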