
Similar to groupByKey() in Spark, but using SQL queries

I am trying to turn

 ID    CATEGORY  VALUE
'AAA'    'X'      123
'AAA'    'Y'      456
'BBB'    'X'      321
'BBB'    'Y'      654

into

 ID     VALUE_X   VALUE_Y
'AAA'     123       456
'BBB'     321       654

using only SQL queries. It is kind of similar to using groupByKey() in pyspark.

Is there a way to do this?

Just use conditional aggregation. One method is:

select id,
       max(case when category = 'X' then value end) as value_x,
       max(case when category = 'Y' then value end) as value_y
from t
group by id;
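
For reference, a minimal setup for trying this against the sample data above (assuming a database such as Postgres that accepts multi-row insert ... values syntax; the table name t and the column names come from the query):

create table t (
    id       varchar(10),
    category varchar(10),
    value    int
);

insert into t (id, category, value) values
    ('AAA', 'X', 123),
    ('AAA', 'Y', 456),
    ('BBB', 'X', 321),
    ('BBB', 'Y', 654);

-- the conditional aggregation query then returns:
-- id  | value_x | value_y
-- AAA |     123 |     456
-- BBB |     321 |     654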

In Postgres, this would be phrased using the standard filter clause:

select id,
       max(value) filter (where category = 'X') as value_x,
       max(value) filter (where category = 'Y') as value_y
from t
group by id;
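
The filter clause is part of standard SQL, but support varies across databases: Postgres and SQLite accept it, while many other engines do not, so the conditional-aggregation version with case is the more portable of the two.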
