
PySpark distinct().count() on a csv file

I'm new to Spark and I'm trying to do a distinct().count() based on some fields of a CSV file.

CSV structure (without header):

id,country,type
01,AU,s1
02,AU,s2
03,GR,s2
03,GR,s2

To load the .csv I typed:

lines = sc.textFile("test.txt")

Then a distinct count on the lines returned 3, as expected:

lines.distinct().count()

But I have no idea how to make a distinct count based on, let's say, id and country.

In this case you would select the columns you want to consider, and then count:

sc.textFile("test.txt")\
  .map(lambda line: (line.split(',')[0], line.split(',')[1]))\
  .distinct()\
  .count()

This is written for clarity; you can optimize the lambda to avoid calling line.split twice.
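
For instance, a minimal sketch of one such single-split version (the helper name id_country is my own, not from the answer):

def id_country(line):
    # Split once, then pick out the id and country fields
    fields = line.split(',')
    return (fields[0], fields[1])

sc.textFile("test.txt").map(id_country).distinct().count()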

The line splitting can also be shortened with a slice. Note the tuple() call: split returns a list, which is unhashable, so distinct() would fail without it:

sc.textFile("test.txt").map(lambda line: tuple(line.split(",")[:-1])).distinct().count()
