
Databricks SQL string_agg

Hopefully a quick one;

Migrating some on-premises SQL views to Databricks and struggling to find conversions for some functions. The main one is the string_agg function.

string_agg(field_name, ', ')

Anyone know how to convert that to Databricks SQL?

Thanks in advance.

The rough equivalent would be to use collect_set and array_join, but note that you lose the order (and collect_set also removes duplicates; use collect_list if you need to keep them):

%sql
SELECT col1, array_join(collect_set(col2), ',') j
FROM tmp
GROUP BY col1
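As a rough Python analogy (a sketch only, not Spark code), this is what collect_set followed by array_join does within each group: gather the distinct values, then concatenate them with a separator, with no order guarantee.

```python
# Python sketch of array_join(collect_set(col2), ',') applied to one group.
# collect_set -> distinct values in arbitrary order; array_join -> join with a separator.

def collect_set_then_join(values, sep=","):
    distinct = set(values)  # duplicates dropped, iteration order arbitrary
    return sep.join(str(v) for v in distinct)

# Example group: duplicates disappear, and the output order
# is not guaranteed to match the input order.
result = collect_set_then_join(["b", "a", "b", "c"])
```

This mirrors why the answer warns about ordering: a set has no defined order, just as collect_set's result depends on row order after a shuffle.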

I do not think STRING_AGG guarantees order either (unless you specify the WITHIN GROUP ... ORDER BY clause), but you should expect the order not to match. Hopefully the order does not matter, but double-check it has no implications for your process. As per the official documentation:

[ collect_list ] is non-deterministic because the order of collected results depends on the order of the rows which may be non-deterministic after a shuffle.

They have recently added the WITHIN GROUP ordering clause for STRING_AGG in Azure SQL DB, Managed Instance and Synapse, but presumably you don't yet have that feature on-premises anyway.

Thanks for the answer @wBob. I am able to guarantee the sort order by modifying your code as follows:

array_join(array_sort(collect_set(col2)),",") j

The array_sort() sorts the items returned by collect_set() and array_join() converts that output into a single string.
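In plain Python terms (again a sketch, not Spark code), the array_sort(collect_set(...)) combination makes the result deterministic: distinct values, sorted, then joined.

```python
# Python sketch of array_join(array_sort(collect_set(col2)), ","):
# distinct values, sorted ascending, joined into one string.

def sorted_distinct_join(values, sep=","):
    return sep.join(sorted(set(values)))

# Because the distinct values are sorted before joining,
# the same input group always yields the same string.
result = sorted_distinct_join(["b", "a", "b", "c"])  # → "a,b,c"
```

The trade-off is that the output is always in sort order, which may differ from the order STRING_AGG ... WITHIN GROUP would have produced if you were ordering by a different column.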

You can use the concat_ws function, as described here: https://spark.apache.org/docs/latest/api/sql/index.html#concat_ws

SELECT concat_ws(' ', 'Spark', 'SQL');
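For illustration, a small Python sketch (not Spark code) of how concat_ws behaves: it joins its arguments with the separator and, unlike plain concat, silently skips NULL values rather than nulling out the whole result.

```python
# Python sketch of Spark SQL's concat_ws(sep, ...):
# join arguments with sep, skipping NULL (None) inputs.

def concat_ws(sep, *args):
    return sep.join(str(a) for a in args if a is not None)

result = concat_ws(" ", "Spark", "SQL")  # → "Spark SQL"
```

Note that concat_ws on its own concatenates values within a single row; to collapse many rows into one string you still need an aggregate such as collect_set/collect_list first.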

Databricks SQL supports basic SQL queries only, so procedure-oriented queries are not supported in the current Databricks SQL version. This would fall under a new feature request. You can handle basic SQL functions only.

Note: Databricks SQL provides a simple experience for SQL users who want to run quick ad-hoc queries on their data lake, create multiple visualization types to explore query results from different perspectives, and build and share dashboards. It is not supposed to replace ETL workloads running in Python/PySpark which we are currently handling.
