Aggregating the standard deviation and counting non-NAs in sparklyr

I have a large data.frame and have been using summarise with across to compute summary statistics for many variables. Because of the size of my data.frame, I have had to start processing my data in sparklyr.

Since sparklyr does not support across, I used summarise_each instead. This works fine, except that summarise_each in sparklyr does not seem to support sd or sum(!is.na(.)).

Below is an example dataset and how I would normally process it with dplyr:

test <- data.frame(ID = c("Group1","Group1",'Group1','Group1','Group1','Group1','Group1',
                          "Group2","Group2","Group2",'Group2','Group2','Group2',"Group2",
                          "Group3","Group3","Group3"),
                      Value1 = c(-100,-10,-5,-5,-5,1,2,1,2,3,4,4,4,4,1,2,3),
                      Value2 = c(50,100,10,-5,3,1,2,2,2,3,4,4,4,4,1,2,3))
test %>% 
  group_by() %>%
  summarise(across((Value1:Value2), ~sum(!is.na(.), na.rm = TRUE), .names = "{col}_count"),
            across((Value1:Value2), ~min(., na.rm = TRUE), .names = "{col}_min"),
            across((Value1:Value2), ~max(., na.rm = TRUE), .names = "{col}_max"),
            across((Value1:Value2), ~mean(., na.rm = TRUE), .names = "{col}_mean"),
            across((Value1:Value2), ~sd(., na.rm = TRUE), .names = "{col}_sd"))

# A tibble: 1 x 10
  Value1_count Value2_count Value1_min Value2_min Value1_max Value2_max Value1_mean Value2_mean Value1_sd Value2_sd
         <int>        <int>      <dbl>      <dbl>      <dbl>      <dbl>       <dbl>       <dbl>     <dbl>     <dbl>
1           17           17       -100         -5          4        100       -5.53        11.2      24.7      25.8

I was also able to get the same kind of summary using summarise_each, as shown below:

test %>% 
  group_by(ID) %>%
  summarise_each(funs(min = min(., na.rm = TRUE),
                      max = max(., na.rm = TRUE),
                      mean = mean(., na.rm = TRUE), 
                      sum = sum(., na.rm = TRUE),
                      sd = sd(., na.rm = TRUE)))

  ID     Value1_min Value2_min Value1_max Value2_max Value1_mean Value2_mean Value1_sum Value2_sum
  <fct>       <dbl>      <dbl>      <dbl>      <dbl>       <dbl>       <dbl>      <dbl>      <dbl>
1 Group1       -100         -5          2        100      -17.4        23          -122        161
2 Group2          1          2          4          4        3.14        3.29         22         23
3 Group3          1          1          3          3        2           2             6          6

Using sparklyr, I have been able to calculate min, max, mean, and sum successfully, as shown below:

sc <- spark_connect(master = "local", version = "2.4.3")
test <- spark_read_csv(sc = sc, path = "C:\\path\\test space.csv")

test %>% 
  group_by(ID) %>%
  summarise_each(funs(min = min(., na.rm = TRUE),
                      max = max(., na.rm = TRUE),
                      mean = mean(., na.rm = TRUE), 
                      sum = sum(., na.rm = TRUE)))
# Source: spark<?> [?? x 9]
  ID     Value1_min Value_2_min Value1_max Value_2_max Value1_mean Value_2_mean Value1_sum Value_2_sum
  <chr>       <int>       <int>      <int>       <int>       <dbl>        <dbl>      <dbl>       <dbl>
1 Group2          1           2          4           4        3.14         3.29         22          23
2 Group3          1           1          3           3        2            2             6           6
3 Group1       -100          -5          2         100      -17.4         23          -122         161

However, when I try to get sd and sum(!is.na(.)), I receive an error. Below is the code and the error message. Is there any workaround that would let me summarise these values?

test %>% 
  group_by(ID) %>%
  summarise_each(funs(min = min(., na.rm = TRUE),
                      max = max(., na.rm = TRUE),
                      mean = mean(., na.rm = TRUE), 
                      sum = sum(., na.rm = TRUE),
                      sd = sd(., na.rm = TRUE)))

Error: org.apache.spark.sql.catalyst.parser.ParseException: 
mismatched input 'AS' expecting ')'(line 1, pos 298)

== SQL ==
SELECT `ID`, MIN(`Value1`) AS `Value1_min`, MIN(`Value_2`) AS `Value_2_min`, MAX(`Value1`) AS `Value1_max`, MAX(`Value_2`) AS `Value_2_max`, AVG(`Value1`) AS `Value1_mean`, AVG(`Value_2`) AS `Value_2_mean`, SUM(`Value1`) AS `Value1_sum`, SUM(`Value_2`) AS `Value_2_sum`, stddev_samp(`Value1`, TRUE AS `na.rm`) AS `Value1_sd`, stddev_samp(`Value_2`, TRUE AS `na.rm`) AS `Value_2_sd`
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^^^
FROM `test_space_30172a44_c0aa_4305_9a5e_d45fa77ba0b9`
GROUP BY `ID`

    at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:241)
    at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:117)
    at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:48)
    at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:69)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
    at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sparklyr.Invoke.invoke(invoke.scala:147)
    at sparklyr.StreamHandler.handleMethodCall(stream.scala:136)
    at sparklyr.StreamHandler.read(stream.scala:61)
    at sparklyr.BackendHandler$$anonfun$channelRead0$1.apply$mcV$sp(handler.scala:58)
    at scala.util.control.Breaks.breakable(Breaks.scala:38)
    at sparklyr.BackendHandler.channelRead0(handler.scala:38)
    at sparklyr.BackendHandler.channelRead0(handler.scala:14)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
In addition: Warning messages:
1: Named arguments ignored for SQL stddev_samp 
2: Named arguments ignored for SQL stddev_samp 

The problem is the na.rm argument. Spark's stddev_samp function has no such argument, and sparklyr does not appear to handle it.

Missing values are always removed in SQL, so you do not need to specify na.rm:

test_spark %>% 
  group_by(ID) %>%
  summarise_each(funs(min = min(.),
                      max = max(.),
                      mean = mean(.), 
                      sum = sum(.),
                      sd = sd(.)))
#> # Source: spark<?> [?? x 11]
#>   ID     Value1_min Value2_min Value1_max Value2_max Value1_mean Value2_mean
#>   <chr>       <dbl>      <dbl>      <dbl>      <dbl>       <dbl>       <dbl>
#> 1 Group2          1          2          4          4        3.14        3.29
#> 2 Group1       -100         -5          2        100      -17.4        23   
#> 3 Group3          1          1          3          3        2           2   
#>   Value1_sum Value2_sum Value1_sd Value2_sd
#>        <dbl>      <dbl>     <dbl>     <dbl>
#> 1         22         23      1.21     0.951
#> 2       -122        161     36.6     38.6  
#> 3          6          6      1        1  

This looks like a bug specific to summarise, since sd with na.rm works fine in mutate:

test_spark %>% 
  group_by(ID) %>%
  mutate_each(funs(sd = sd(., na.rm = TRUE))) 
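If you want those per-group standard deviations as a one-row-per-group result, one hedged workaround sketch (not from the original answer) is to compute them with mutate_each and then deduplicate; it assumes select() with ends_with() and distinct() translate through dbplyr for this Spark table:

test_spark %>%
  group_by(ID) %>%
  mutate_each(funs(sd = sd(., na.rm = TRUE))) %>%  # per-group sd appended as new columns
  select(ID, ends_with("_sd")) %>%                 # keep only the grouping key and the sd columns
  distinct()                                       # collapse to one row per group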

As for sum(!is.na(.)), you can simply write it as sum(ifelse(is.na(.), 0, 1)).
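Putting it together, a minimal sketch (assuming the same Spark table test_spark as above) that reproduces the original dplyr summary, with the non-NA count written in the Spark-friendly form:

test_spark %>%
  group_by(ID) %>%
  summarise_each(funs(count = sum(ifelse(is.na(.), 0, 1)),  # non-NA count, translated to SUM(CASE WHEN ...)
                      min   = min(.),
                      max   = max(.),
                      mean  = mean(.),
                      sum   = sum(.),
                      sd    = sd(.)))                        # no na.rm needed: SQL drops NULLs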
