
Using window functions to group into 5-minute timeframes

The CSV file is:

#+----+-----------+-------------------+
#|col1|       col2|          timestamp|
#+----+-----------+-------------------+
#|   0|Town Street|01-02-2017 06:01:00|
#|   0|Town Street|01-02-2017 06:03:00|
#|   0|Town Street|01-02-2017 06:05:00|
#|   0|Town Street|01-02-2017 06:06:00|
#|   0|Town Street|02-02-2017 10:01:00|
#|   0|Town Street|02-02-2017 10:05:00|
#+----+-----------+-------------------+

Compare the times on each date to see if there is a 5-minute difference; if there is, count them.

output:

#+----+-----------+-------------------+
#|col1|       col2|          timestamp|
#+----+-----------+-------------------+
#|   0|Town Street|01-02-2017 06:01:00|
#|   0|Town Street|01-02-2017 06:03:00|
#|   0|Town Street|01-02-2017 06:05:00|
#|   0|Town Street|01-02-2017 06:06:00|
#|   0|Town Street|02-02-2017 10:01:00|
#|   0|Town Street|02-02-2017 10:05:00|
#+----+-----------+-------------------+

Code right now:

from pyspark.sql import SQLContext
import pyspark.sql.functions as F

def my_main(sc, my_dataset_dir):
    sqlContext = SQLContext(sc)
    # Drop the first two rows (index 0 and 1), then name the columns
    df = sqlContext.read.csv(my_dataset_dir, sep=';').rdd.zipWithIndex() \
        .filter(lambda x: x[1] > 1).map(lambda x: x[0]) \
        .toDF(['status', 'title', 'datetime'])

This code just gives a null result for the 5-minute window.
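One thing worth checking: after this read, the datetime column is still a plain string, so time-based windows cannot operate on it directly. A minimal sketch of parsing it first (the 'dd-MM-yyyy' format is an assumption; swap to 'MM-dd-yyyy' if the dates are month-first):

import pyspark.sql.functions as F

# Sketch only: cast the string datetime column to a real timestamp before windowing
# ('dd-MM-yyyy HH:mm:ss' is an assumed format -- use 'MM-dd-yyyy HH:mm:ss' if month-first)
df = df.withColumn("datetime", F.to_timestamp("datetime", "dd-MM-yyyy HH:mm:ss")) \
       .withColumn("date", F.to_date("datetime"))
df.printSchema()  # datetime should now be timestamp, not string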

I'm not sure if this is exactly what you want, but it should push you in the right direction. You could convert your timestamp to TimestampType and DateType, then create a window that partitions by date and uses rangeBetween on the timestamp in seconds (300).

#df.show()
# sample dataframe
#+----+-----------+-------------------+
#|col1|       col2|          timestamp|
#+----+-----------+-------------------+
#|   0|Town Street|01-02-2017 06:01:00|
#|   0|Town Street|01-02-2017 06:03:00|
#|   0|Town Street|01-02-2017 06:05:00|
#|   0|Town Street|01-02-2017 06:06:00|
#|   0|Town Street|02-02-2017 10:01:00|
#|   0|Town Street|02-02-2017 10:05:00|
#+----+-----------+-------------------+

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Window per date, ordered by the timestamp cast to seconds,
# covering the current row and the following 300 seconds (5 minutes)
w = Window().partitionBy("date").orderBy(F.col("timestamp").cast("long")).rangeBetween(Window.currentRow, 60 * 5)

# Count how many timestamps fall within 5 minutes of each row; keep rows with more than one
df.withColumn("timestamp", F.to_timestamp("timestamp", 'MM-dd-yyyy HH:mm:ss'))\
  .withColumn("date", F.to_date("timestamp"))\
  .withColumn('collect', F.size(F.collect_list("timestamp").over(w))).filter("collect>1")\
  .select(F.date_format("date", "yyyy-MM-dd").alias("date"),
          F.array(F.date_format("timestamp", "HH:mm:ss"), F.col("collect")).alias("time"))\
  .orderBy("date").show()

#+----------+-------------+
#|      date|         time|
#+----------+-------------+
#|2017-01-02|[06:01:00, 4]|
#|2017-01-02|[06:05:00, 2]|
#|2017-01-02|[06:03:00, 3]|
#|2017-02-02|[10:01:00, 2]|
#+----------+-------------+
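If fixed 5-minute buckets are acceptable (rather than a per-row look-ahead range), pyspark.sql.functions.window offers a simpler grouping. A minimal sketch under that assumption, reusing the same timestamp parsing as above:

from pyspark.sql import functions as F

# Sketch: tumbling 5-minute windows, counting the rows that fall into each bucket
df.withColumn("timestamp", F.to_timestamp("timestamp", 'MM-dd-yyyy HH:mm:ss'))\
  .groupBy(F.window("timestamp", "5 minutes").alias("win"))\
  .count()\
  .select(F.col("win.start").alias("window_start"), "count")\
  .orderBy("window_start").show()

Note that this gives different numbers than the rolling window above, since each row is counted in exactly one bucket.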
