
Spark 1.6: drop column in DataFrame with escaped column names

I'm trying to drop a column in a DataFrame, but some of my column names contain dots, so I escaped them.

Before escaping, my schema looked like this:

root
 |-- user_id: long (nullable = true)
 |-- hourOfWeek: string (nullable = true)
 |-- observed: string (nullable = true)
 |-- raw.hourOfDay: long (nullable = true)
 |-- raw.minOfDay: long (nullable = true)
 |-- raw.dayOfWeek: long (nullable = true)
 |-- raw.sensor2: long (nullable = true)
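
(For anyone trying to reproduce this: a minimal, hypothetical way to build a DataFrame of this shape in a Spark 1.6 shell, with made-up row values. toDF merely aliases the columns, so the dotted names are accepted as-is:)

import sqlContext.implicits._

// Hypothetical sample row matching the column names and types above.
val df = sc.parallelize(Seq(
    (1L, "42", "yes", 10L, 615L, 3L, 7L)
  )).toDF("user_id", "hourOfWeek", "observed",
    "raw.hourOfDay", "raw.minOfDay", "raw.dayOfWeek", "raw.sensor2")

df.printSchema()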

If I try to drop a column, I get:

df = df.drop("hourOfWeek")
org.apache.spark.sql.AnalysisException: cannot resolve 'raw.hourOfDay' given input columns raw.dayOfWeek, raw.sensor2, observed, raw.hourOfDay, hourOfWeek, raw.minOfDay, user_id;
        at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:60)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:57)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:319)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:319)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:53)

Note that I'm not even trying to drop the column with the dot in its name. (Presumably drop(String) rebuilds the DataFrame by re-selecting the remaining columns by name, so it is the unescaped dotted names that fail to resolve.) Since it seemed I couldn't do much without escaping the column names, I converted the schema to:

root
 |-- user_id: long (nullable = true)
 |-- hourOfWeek: string (nullable = true)
 |-- observed: string (nullable = true)
 |-- `raw.hourOfDay`: long (nullable = true)
 |-- `raw.minOfDay`: long (nullable = true)
 |-- `raw.dayOfWeek`: long (nullable = true)
 |-- `raw.sensor2`: long (nullable = true)
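
(The question doesn't show how the names were escaped; one hypothetical way to get literal backticks into the names is to re-alias every column, since a backquoted name resolves as a single column even when it contains dots:)

// Hypothetical escaping step: wrap each column name in literal backticks.
val escaped = df.select(df.columns.map(c => df.col(s"`$c`").as(s"`$c`")): _*)
escaped.printSchema()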

But that doesn't seem to help. I still get the same error.

I then tried escaping all the column names and dropping by the escaped name, but that doesn't work either:

root
 |-- `user_id`: long (nullable = true)
 |-- `hourOfWeek`: string (nullable = true)
 |-- `observed`: string (nullable = true)
 |-- `raw.hourOfDay`: long (nullable = true)
 |-- `raw.minOfDay`: long (nullable = true)
 |-- `raw.dayOfWeek`: long (nullable = true)
 |-- `raw.sensor2`: long (nullable = true)

df.drop("`hourOfWeek`")
org.apache.spark.sql.AnalysisException: cannot resolve 'user_id' given input columns `user_id`, `raw.dayOfWeek`, `observed`, `raw.minOfDay`, `raw.hourOfDay`, `raw.sensor2`, `hourOfWeek`;
        at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
        at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:60)

Is there another way to drop a column that won't fail on this kind of data?

Well, I seem to have finally found a solution:

df.drop(df.col("raw.hourOfWeek")) seems to work.
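
(The posted schema has raw.hourOfDay rather than raw.hourOfWeek, so here is a minimal sketch of the same approach against that schema:)

// The Column overload is resolved against df up front instead of re-parsed.
val noHourOfDay = df.drop(df.col("raw.hourOfDay"))
noHourOfDay.printSchema() // raw.hourOfDay is gone, no AnalysisException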

val data = df.drop("Customers")

works for ordinary columns, while

val dropped = df.drop(df.col("old.column"))

is needed when the column name contains a dot.
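
The difference, as far as I can tell, is that the String overload re-parses column names (including the remaining ones it re-selects), while the Column overload is resolved against the DataFrame up front. A hedged sketch that clears every dotted column from the question's schema and then falls back to the plain overload once no dots remain:

// Drop each dotted column via the Column overload, resolving against the
// current frame on every step.
val dotted = df.columns.filter(_.contains("."))
val noDots = dotted.foldLeft(df)((d, c) => d.drop(d.col(c)))

// With no dotted names left, the String overload works again.
val cleaned = noDots.drop("hourOfWeek")
cleaned.printSchema()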
