Azure Data Factory copy activity: populate a column in the sink table with @pipeline().TriggerTime
With Data Factory V2 I'm trying to implement a data copy flow from one Azure SQL database to another.
I have mapped all the columns of the source table to the sink table, but the sink table has an extra, empty column in which I would like to record the pipeline run time.
Does anyone know how to populate this column in the sink table when it is not present in the source table?
Below is the code of my copy pipeline:
{
    "name": "FLD_Item_base",
    "properties": {
        "activities": [
            {
                "name": "Copy_Team",
                "description": "copytable",
                "type": "Copy",
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureOutput": false,
                    "secureInput": false
                },
                "typeProperties": {
                    "source": {
                        "type": "SqlSource"
                    },
                    "sink": {
                        "type": "SqlSink",
                        "writeBatchSize": 10000,
                        "preCopyScript": "TRUNCATE TABLE Team_new"
                    },
                    "enableStaging": false,
                    "dataIntegrationUnits": 0,
                    "translator": {
                        "type": "TabularTranslator",
                        "columnMappings": {
                            "Code": "Code",
                            "Name": "Name"
                        }
                    }
                },
                "inputs": [
                    {
                        "referenceName": "Team",
                        "type": "DatasetReference"
                    }
                ],
                "outputs": [
                    {
                        "referenceName": "Team_new",
                        "type": "DatasetReference"
                    }
                ]
            }
        ]
    }
}
In my sink table I already have the column data_load, where I would like to insert the pipeline execution date, but I have not currently mapped it.
Based on your situation, please configure a SQL Server stored procedure in your SQL Server sink as a workaround.
Please follow these steps:
Step 1: Configure your Sink dataset:
Step 2: Configure the Sink section in the copy activity as follows:
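As a sketch, the sink section of the copy activity would then reference the stored procedure and table type instead of writing to the table directly (the names convertCsv and testType come from the example objects defined in the next steps; adapt them to your own database):

```json
"sink": {
    "type": "SqlSink",
    "sqlWriterStoredProcedureName": "convertCsv",
    "sqlWriterTableType": "testType"
}
```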
Step 3: In your database, define a table type with the same name as sqlWriterTableType. Note that the schema of the table type should be the same as the schema returned by your input data.
CREATE TYPE [dbo].[testType] AS TABLE(
[ID] [varchar](256) NOT NULL,
[EXECUTE_TIME] [datetime] NOT NULL
)
GO
Step 4: In your database, define the stored procedure with the same name as SqlWriterStoredProcedureName. It handles the input data from your specified source and merges it into the output table. Note that the parameter name of the stored procedure should be the same as the "tableName" defined in the dataset.
CREATE PROCEDURE convertCsv @ctest [dbo].[testType] READONLY
AS
BEGIN
    MERGE [dbo].[adf] AS target
    USING @ctest AS source
    ON (1=1)
    WHEN NOT MATCHED THEN
        INSERT (id, executeTime)
        VALUES (source.ID, GETDATE());
END
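If you prefer the procedure to receive the pipeline run time explicitly instead of calling GETDATE() at insert time, a minimal variant might look like the following. The procedure name convertCsvWithTime and the parameter @pipelineRunTime are illustrative, not part of the original example:

```sql
-- Illustrative variant: accept the pipeline run time as an explicit parameter
CREATE PROCEDURE convertCsvWithTime
    @ctest [dbo].[testType] READONLY,
    @pipelineRunTime DATETIME
AS
BEGIN
    INSERT INTO [dbo].[adf] (id, executeTime)
    SELECT source.ID, @pipelineRunTime
    FROM @ctest AS source;
END
```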
You can consider using a stored procedure on the sink side to apply the source data to the sink table by specifying the sqlWriterStoredProcedureName of the SqlSink. Pass the pipeline run time to the stored procedure as a parameter and insert it into the sink table.
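A sketch of how the parameter could be supplied, assuming a stored procedure that accepts a datetime parameter named pipelineRunTime (an illustrative name): the SqlSink supports a storedProcedureParameters property, and the value can be the system expression @pipeline().TriggerTime:

```json
"sink": {
    "type": "SqlSink",
    "sqlWriterStoredProcedureName": "convertCsvWithTime",
    "sqlWriterTableType": "testType",
    "storedProcedureParameters": {
        "pipelineRunTime": {
            "value": "@pipeline().TriggerTime",
            "type": "DateTime"
        }
    }
}
```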