
User defined aggregate function in SparkR

I have records of mailings like this:

Name MailingID  Timestamp    Event
1 John         1 2014-04-18     Sent
2 John         2 2015-04-21     Sent
3 Mary         1 2015-04-22 Returned
4 Mary         2 2015-04-25     Sent
5 John         1 2015-05-01  Replied

which can be created as a DataFrame:

library(SparkR)

df <- createDataFrame(sqlContext, data.frame(Name = c('John','John','Mary','Mary','John'),
                                             MailingID = c(1,2,1,2,1),
                                             Timestamp = c('2014-04-18','2015-04-21','2015-04-22','2015-04-25','2015-05-01'),
                                             Event = c('Sent','Sent','Returned','Sent','Replied')))

I want to find out who has replied to any of the 2 latest mails sent to them. With a summary helper function and dplyr I can do:

localDf <- collect(df)

library(lubridate)
library(magrittr)
library(dplyr)

# TRUE if any of the Latest_N most recently sent mailings was also replied to;
# assumes rows are sorted by Timestamp in descending order
hasRepliedLatest <- function(MailingID, Timestamp, Event, Latest_N) {
  length(intersect(MailingID[Event == 'Replied'], MailingID[Event == 'Sent'][1:Latest_N])) > 0
}
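
As a quick sanity check (not part of the original post), the helper can be run locally on each person's rows, pre-sorted by Timestamp descending:

# John replied to mailing 1, which is among his 2 most recently sent mailings
hasRepliedLatest(MailingID = c(1, 2, 1),
                 Timestamp = c('2015-05-01', '2015-04-21', '2014-04-18'),
                 Event     = c('Replied', 'Sent', 'Sent'),
                 Latest_N  = 2)
# TRUE

# Mary has no 'Replied' event at all, so the intersection is empty
hasRepliedLatest(MailingID = c(2, 1),
                 Timestamp = c('2015-04-25', '2015-04-22'),
                 Event     = c('Sent', 'Returned'),
                 Latest_N  = 2)
# FALSE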

localDf %>%
  arrange(desc(Timestamp)) %>%
  group_by(Name) %>%
  summarize(RepliedLatest = hasRepliedLatest(MailingID, Timestamp, Event, 2))

detach(package:dplyr) # to avoid function conflicts with SparkR

The outcome is:

  Name RepliedLatest
1 John          TRUE
2 Mary         FALSE

Now I want to do this with SparkR, i.e. on a Spark DataFrame instead of a local data.frame. So I tried:

df %>%
  arrange(desc(df$Timestamp)) %>%
  group_by(df$Name) %>%
  summarize(RepliedLatest = hasRepliedLatest(df$MailingID, df$Timestamp, df$Event, 2))

Then I got an error saying my function won't work with the S4 class DataFrame. How can I do this correctly in SparkR? Solutions using a SQL query with an sqlContext created by sparkRHive.init or sparkRSQL.init are also welcome.

Spark SQL <= 1.4 doesn't support user-defined aggregate functions, and as far as I know SparkR doesn't support UDFs at all, so unless you're using the current development branch or the 1.5 RC, UDFs are not an option.

I am still not sure I understand your data model and logic, but you can try something like this:

# Select last 2 sent events and all other which occurred in this window
tmp <- sql(sqlContext,
   "SELECT *, SUM(CASE WHEN Event = 'Sent' THEN 1 ELSE 0 END) OVER w AS ind
    FROM df WHERE Event IN ('Sent', 'Replied')
    HAVING ind <= 2
    WINDOW w AS (PARTITION BY Name ORDER BY DATE(Timestamp) DESC)")
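
Note that for this query to work, df has to be registered as a temporary table first, and in Spark 1.4 window functions are only available with a Hive-enabled context, so the sqlContext should come from sparkRHive.init. A minimal setup sketch, assuming sc is your already-created SparkContext:

# Window functions in Spark 1.4 require a HiveContext,
# and the query refers to a registered table named "df"
sqlContext <- sparkRHive.init(sc)
registerTempTable(df, "df")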


# Split sent and replied
sent <- tmp %>% filter(tmp$Event == "Sent")
replied <- tmp %>% filter(tmp$Event == "Replied")

registerTempTable(sent, "sent")
registerTempTable(replied, "replied")

# Join and count
sql(sqlContext,
    "SELECT
        sent.name,
        SUM(
            CASE WHEN replied.event IS NOT NULL THEN 1
            ELSE 0 END
        ) > 0 AS repliedlatest 
     FROM sent LEFT JOIN replied ON
        sent.name = replied.name AND
        sent.mailingid = replied.mailingid
     -- Not part of the original logic
     WHERE DATE(sent.timestamp) <= DATE(replied.timestamp) 
     GROUP BY sent.name") %>% head()
