Azure Synapse - Incremental Data Load
We load data from on-prem database servers to Azure Data Lake Storage Gen2 using Azure Data Factory, and Databricks stores it as Parquet files. On every run, we get only the data that is new or modified since the last run and UPSERT it into the existing Parquet files using the Databricks MERGE statement.
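For reference, the Databricks step above would look something like the following Delta Lake MERGE. The table and column names (`sales`, `sales_changes`, `id`) are illustrative placeholders, not from the original pipeline:

```sql
-- Delta Lake MERGE (Spark SQL): upsert the staged changes into the target.
-- Rows whose key already exists are updated; new keys are inserted.
MERGE INTO sales AS t
USING sales_changes AS s
  ON t.id = s.id
WHEN MATCHED THEN
  UPDATE SET *
WHEN NOT MATCHED THEN
  INSERT *
```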
Now we are trying to move this data from the Parquet files into Azure Synapse. Ideally, I would like to do this incrementally as well.
The problem is that the MERGE statement is not available in Azure Synapse. Here is the solution Microsoft suggests for incremental load:
CREATE TABLE dbo.[DimProduct_upsert]
WITH
( DISTRIBUTION = HASH([ProductKey])
, CLUSTERED INDEX ([ProductKey])
)
AS
-- New rows and new versions of rows
SELECT s.[ProductKey]
, s.[EnglishProductName]
, s.[Color]
FROM dbo.[stg_DimProduct] AS s
UNION ALL
-- Keep rows that are not being touched
SELECT p.[ProductKey]
, p.[EnglishProductName]
, p.[Color]
FROM dbo.[DimProduct] AS p
WHERE NOT EXISTS
( SELECT *
FROM [dbo].[stg_DimProduct] s
WHERE s.[ProductKey] = p.[ProductKey]
)
;
RENAME OBJECT dbo.[DimProduct] TO [DimProduct_old];
RENAME OBJECT dbo.[DimProduct_upsert] TO [DimProduct];
Basically, this drops and re-creates the production table with CTAS. That will work fine for small dimension tables, but I'm apprehensive about large fact tables with hundreds of millions of rows and indexes. Any suggestions on the best way to do incremental loads for really large fact tables? Thanks!
Until SQL MERGE is officially supported, the recommended way to update target tables is to use T-SQL INSERT/UPDATE commands between the delta records and the target table.
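A minimal sketch of that INSERT/UPDATE pattern is below. The table and column names (`dbo.FactSales`, `dbo.stg_FactSales`, `SalesKey`, `Amount`) are hypothetical placeholders; it also assumes the staging table holds exactly the new and changed rows from the last run:

```sql
-- Step 1: update existing rows from the staged delta.
-- (Implicit join in the FROM/WHERE clause, since ANSI joins in
-- UPDATE statements may not be supported in dedicated SQL pools.)
UPDATE dbo.[FactSales]
SET    [Amount] = s.[Amount]
FROM   dbo.[stg_FactSales] AS s
WHERE  dbo.[FactSales].[SalesKey] = s.[SalesKey];

-- Step 2: insert staged rows whose key does not exist yet.
INSERT INTO dbo.[FactSales] ([SalesKey], [Amount])
SELECT s.[SalesKey], s.[Amount]
FROM   dbo.[stg_FactSales] AS s
WHERE  NOT EXISTS
       ( SELECT 1
         FROM   dbo.[FactSales] AS t
         WHERE  t.[SalesKey] = s.[SalesKey] );
```

Unlike the CTAS approach, this only touches the delta rows rather than rewriting the whole table, which matters for very large fact tables.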
Alternatively, you can also use Mapping Data Flows (in ADF) to emulate SCD transactions for dimensional/fact data loads.