How to filter spark dataframe entries based on a column value which is a map
I have a dataframe like this:
+---+------------------------+
|key|data                    |
+---+------------------------+
| 61|[a -> b, c -> d, e -> f]|
| 71|[a -> 1, c -> d, e -> f]|
| 81|[c -> d, e -> f]        |
| 91|[x -> b, y -> d, e -> f]|
| 11|[a -> a, c -> b, e -> f]|
| 21|[a -> a, c -> x, e -> f]|
+---+------------------------+
I want to filter the rows whose data column map contains the key 'a' and where the value of key 'a' is 'a'. So the dataframe below is the desired output:
+---+------------------------+
|key|data                    |
+---+------------------------+
| 11|[a -> a, c -> b, e -> f]|
| 21|[a -> a, c -> x, e -> f]|
+---+------------------------+
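For reference, the example can be reproduced with something like the following (a minimal sketch; the SparkSession setup is an assumption, and the column names key and data come from the tables above):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// Build a DataFrame with an integer column and a map-valued column;
// toDF names the columns to match the example above.
val df = Seq(
  (61, Map("a" -> "b", "c" -> "d", "e" -> "f")),
  (71, Map("a" -> "1", "c" -> "d", "e" -> "f")),
  (81, Map("c" -> "d", "e" -> "f")),
  (91, Map("x" -> "b", "y" -> "d", "e" -> "f")),
  (11, Map("a" -> "a", "c" -> "b", "e" -> "f")),
  (21, Map("a" -> "a", "c" -> "x", "e" -> "f"))
).toDF("key", "data")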
I tried to cast the value to a map, but I get this error:
== SQL ==
Map
^^^
at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitPrimitiveDataType$1.apply(AstBuilder.scala:1673)
at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitPrimitiveDataType$1.apply(AstBuilder.scala:1651)
at org.apache.spark.sql.catalyst.parser.ParserUtils$.withOrigin(ParserUtils.scala:108)
at org.apache.spark.sql.catalyst.parser.AstBuilder.visitPrimitiveDataType(AstBuilder.scala:1651)
at org.apache.spark.sql.catalyst.parser.AstBuilder.visitPrimitiveDataType(AstBuilder.scala:49)
at org.apache.spark.sql.catalyst.parser.SqlBaseParser$PrimitiveDataTypeContext.accept(SqlBaseParser.java:13779)
at org.apache.spark.sql.catalyst.parser.AstBuilder.typedVisit(AstBuilder.scala:55)
at org.apache.spark.sql.catalyst.parser.AstBuilder.org$apache$spark$sql$catalyst$parser$AstBuilder$$visitSparkDataType(AstBuilder.scala:1645)
at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitSingleDataType$1.apply(AstBuilder.scala:90)
at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitSingleDataType$1.apply(AstBuilder.scala:90)
at org.apache.spark.sql.catalyst.parser.ParserUtils$.withOrigin(ParserUtils.scala:108)
at org.apache.spark.sql.catalyst.parser.AstBuilder.visitSingleDataType(AstBuilder.scala:89)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parseDataType$1.apply(ParseDriver.scala:40)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parseDataType$1.apply(ParseDriver.scala:39)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:98)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parseDataType(ParseDriver.scala:39)
at org.apache.spark.sql.Column.cast(Column.scala:1017)
... 49 elided
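(As far as I can tell, the parse error above comes from the type string passed to cast: Spark's type parser rejects "Map" but accepts a DDL-style string such as map<string,string>, e.g. col("data").cast("map<string,string>"). In any case no cast should be needed here, since the data column already has a map type.)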
If I only wanted to filter based on the 'key' column, I could do it with df.filter(col("key") === 61). But the problem is that the value is a Map.
Is there something like df.filter(col("data").toMap.contains("a") && col("data").toMap.get("a") === "a")?
You can filter like this: df.filter(col("data.x") === "a"),
where x is a key inside the data map.
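Putting it together, a minimal sketch using the df built above (the dot lookup on a map column returns the value for that key, or null when the key is absent, so rows without 'a' drop out of the filter as well):

import org.apache.spark.sql.functions.col

// Keep rows where the map in "data" has key "a" mapped to value "a".
df.filter(col("data.a") === "a").show(false)

// Equivalent spellings of the same map lookup:
df.filter(col("data")("a") === "a").show(false)
df.filter(col("data").getItem("a") === "a").show(false)

All three return only the rows with key 11 and 21, matching the desired output above.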