
Is there a way to run manipulations on partitioned spark datasets in parallel?

I have a list of datasets that I want to partition by a specific key common to all of them, and then run some joins/aggregations that are identical across all of the partitioned datasets.

I am trying to design the algorithm so that I can use Spark's partitionBy to create the partitions by that specific key.

Right now, one approach is to run the operation on each partition in a loop, but that is not efficient.

I wanted to see whether, having manually partitioned the data, I can run operations on those datasets in parallel.

I have just started learning Spark, so please forgive a naive question.

Consider a dataset of customer IDs and their behavioral data (browses, clicks, etc.) spread across different datasets, say browses in one and clicks in another. First I am thinking of partitioning the data by customer ID, and then for each partition (customer), joining on some attribute like browser or device to see each customer's behavior. So essentially, it is like nested parallelization.

Is this even possible in Spark? Is there something obvious I am missing? Is there some documentation I can refer to?
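Concretely, the setup described above might look like the sketch below. All names (`browseDs`, `clickDs`, `customer_id`) are illustrative assumptions, not code from the question; repartitioning both datasets by the shared key co-partitions the join, and Spark then processes every customer's partition in parallel within a single job, with no explicit per-customer loop needed.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("nested-join").getOrCreate()
import spark.implicits._

// Hypothetical per-customer behavior datasets.
val browseDs = Seq((1, "chrome"), (1, "safari"), (2, "firefox")).toDF("customer_id", "browser")
val clickDs  = Seq((1, "ad_1"), (2, "ad_2")).toDF("customer_id", "ad")

// Repartition both by the common key so the join is co-partitioned;
// Spark handles the per-key parallelism itself.
val behavior = browseDs.repartition($"customer_id")
  .join(clickDs.repartition($"customer_id"), "customer_id")

val n = behavior.count()  // customer 1 matches twice, customer 2 once
```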

Try this -

1. Create a test dataset (total records = 70,000+) to perform a parallel operation on each partition.

scala> ds.count
res137: Long = 70008

scala> ds.columns
res124: Array[String] = Array(awards, country)
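The code that builds this test dataset is not shown in the transcript; a minimal sketch that would produce a Dataset with the same shape (this synthetic data is an assumption, not the answerer's actual dataset) could look like:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("test-ds").getOrCreate()
import spark.implicits._

// Synthetic reconstruction: 70,008 rows over the same two columns.
val countryList = Seq("CANADA", "CHINA", "USA", "EUROPE", "UK", "RUSSIA", "INDIA")
val ds = (1 to 70008)
  .map(i => (s"award_$i", countryList(i % countryList.size)))
  .toDF("awards", "country")
```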

2. Assume the partition column is "country".

scala> ds.select("country").distinct.show(false)
+-------+
|country|
+-------+
|CANADA |
|CHINA  |
|USA    |
|EUROPE |
|UK     |
|RUSSIA |
|INDIA  |
+-------+

3. Get the record count for each country [ **without parallel processing for each partition** ]

scala> val countries = ds.select("country").distinct.collect
countries: Array[org.apache.spark.sql.Row] = Array([CANADA], [CHINA], [USA], [EUROPE], [UK], [RUSSIA], [INDIA])

scala> val startTime = System.currentTimeMillis()
startTime: Long = 1562047887130

scala> countries.foreach(country => ds.filter(ds("country") === country(0)).groupBy("country").count.show(false))
+-------+-----+
|country|count|
+-------+-----+
|CANADA |10001|
+-------+-----+

+-------+-----+
|country|count|
+-------+-----+
|CHINA  |10001|
+-------+-----+

+-------+-----+
|country|count|
+-------+-----+
|USA    |10001|
+-------+-----+

+-------+-----+
|country|count|
+-------+-----+
|EUROPE |10001|
+-------+-----+

+-------+-----+
|country|count|
+-------+-----+
|UK     |10002|
+-------+-----+

+-------+-----+
|country|count|
+-------+-----+
|RUSSIA |10001|
+-------+-----+

+-------+-----+
|country|count|
+-------+-----+
|INDIA  |10001|
+-------+-----+


scala> val endTime = System.currentTimeMillis()
endTime: Long = 1562047896088

scala> println(s"Total Execution Time :  ${(endTime - startTime) / 1000} Seconds")
Total Execution Time :  8 Seconds

4. Get the record count for each country [ **with parallel processing for each partition** ]

scala> val startTime = System.currentTimeMillis()
startTime: Long = 1562048057431

scala> countries.par.foreach(country => ds.filter(ds("country") === country(0)).groupBy("country").count.show(false))

+-------+-----+
|country|count|
+-------+-----+
|INDIA  |10001|
+-------+-----+

+-------+-----+
|country|count|
+-------+-----+
|CANADA |10001|
+-------+-----+

+-------+-----+
|country|count|
+-------+-----+
|RUSSIA |10001|
+-------+-----+

+-------+-----+
|country|count|
+-------+-----+
|USA    |10001|
+-------+-----+

+-------+-----+
|country|count|
+-------+-----+
|UK     |10002|
+-------+-----+

+-------+-----+
|country|count|
+-------+-----+
|CHINA  |10001|
+-------+-----+

+-------+-----+
|country|count|
+-------+-----+
|EUROPE |10001|
+-------+-----+


scala> val endTime = System.currentTimeMillis()
endTime: Long = 1562048060273

scala> println(s"Total Execution Time :  ${(endTime - startTime) / 1000} Seconds")
Total Execution Time :  2 Seconds

Result:

With    parallel process on each partition, it took ~ **2 Seconds**
Without parallel process on each partition, it took ~ **8 Seconds**
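The speedup comes from how the jobs are submitted: each `show` is a blocking action, so the plain loop sends the seven Spark jobs one after another, while `.par` runs the `foreach` body on a driver-side thread pool so the scheduler receives them concurrently. (Note: `.par` is built into Scala 2.12; on Scala 2.13+ it requires the separate `scala-parallel-collections` module.) A minimal Spark-free illustration of the effect:

```scala
// Four blocking 500 ms tasks: run sequentially they would take ~2 s;
// the parallel collection runs them on a thread pool concurrently.
val t0 = System.currentTimeMillis()
(1 to 4).par.foreach(_ => Thread.sleep(500))
val elapsedMs = System.currentTimeMillis() - t0
println(s"Elapsed: $elapsedMs ms")
```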

I tested this by checking record counts per country, but you can run any process for each partition, e.g., writing to a Hive table or HDFS files, etc.
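For writes in particular, a driver-side loop (parallel or not) may not be needed at all: a single job with `DataFrameWriter.partitionBy` lets the executors write all keys in parallel, one subdirectory per country. A sketch, assuming a dataset like the one above (the output path is a placeholder):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("write-demo").getOrCreate()
import spark.implicits._

// Small stand-in dataset with the same columns as above.
val ds = Seq(("a1", "USA"), ("a2", "INDIA"), ("a3", "USA")).toDF("awards", "country")

// One Spark job writes each country's rows to its own subdirectory,
// e.g. /tmp/output/by_country/country=USA/. The path is illustrative.
ds.write.partitionBy("country").mode("overwrite").parquet("/tmp/output/by_country")
```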

Hope this helps.
