
Getting error "exceeded the maximum length of 128" while copying text files from Azure Blob to Synapse

I am getting the following error upon trying to copy text files from Azure Blob to Synapse:

{
    "errorCode": "2200",
    "message": "Failure happened on 'Source' side. 'Type=System.Data.SqlClient.SqlException,Message=Parse Error: Identifier 'ARCHIVED_AT,ID,DISPENSATION_ID,ETL_RUN_TIMESTAMP' exceeded the maximum length of 128.,Source=.Net SqlClient Data Provider,SqlErrorNumber=104307,Class=16,ErrorCode=-2146232060,State=1,Errors=[{Class=16,Number=104307,State=1,Message=Parse Error: Identifier 'ARCHIVED_AT,ID,DISPENSATION_ID,ETL_RUN_TIMESTAMP' exceeded the maximum length of 128.,},],'",
    "failureType": "UserError",
    "target": "Copy Blob to Synapse",
    "details": []
}

While I have been able to successfully copy most text files from Azure Blob to Azure Synapse using an ADF pipeline, I am not sure what is going wrong with this one.

Can anyone help me figure out what I need to do to resolve this?

I was using a txt file with a pipe delimiter, not a csv file. There were double quotes in multiple columns of my table as well as hidden characters, which were causing the issue. It got resolved after I had them replaced at source.
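In case it helps anyone else, here is a minimal Python sketch of the kind of clean-up I mean, run against a local copy of the file before uploading to Blob. The file names and the exact set of "hidden characters" are assumptions; adjust them to your own data.

```python
import re

# Hypothetical file names -- replace with your own export paths.
SRC = "dispensation_export.txt"
DST = "dispensation_export_clean.txt"

# ASCII control characters (except tab/CR/LF) and the BOM are the kinds of
# hidden characters that can break the copy activity's parsing.
HIDDEN = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\ufeff]")

with open(SRC, encoding="utf-8") as fin, \
     open(DST, "w", encoding="utf-8", newline="\n") as fout:
    for line in fin:
        line = HIDDEN.sub("", line)   # strip hidden/control characters
        line = line.replace('"', "")  # strip stray double quotes inside columns
        fout.write(line)
```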

I agree with @Joel Cochran. It seems that the header ARCHIVED_AT,ID,DISPENSATION_ID,ETL_RUN_TIMESTAMP is being recognized as one single column.

I guess your txt file should have four columns, like the format below:

ARCHIVED_AT, ID, DISPENSATION_ID, ETL_RUN_TIMESTAMP
data1,1,100,2020-07-02 05:51:32.910
data2,2,102,2020-07-03 05:51:32.910
···
···

Please also check the delimiter of your source txt file and whether it matches the column delimiter configured in the dataset (comma "," by default).
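A quick way to verify this locally is to parse the header with the delimiter you have configured and see whether it splits into four fields. This is just a sketch; the file name and delimiter ("|" vs ",") are assumptions to be matched to your dataset settings.

```python
import csv

# Hypothetical local copy of the source file -- adjust the path and delimiter
# to whatever the ADF dataset is configured with.
with open("dispensation_export.txt", encoding="utf-8", newline="") as f:
    reader = csv.reader(f, delimiter="|")
    header = next(reader)

# If the delimiter is wrong, the whole header line comes back as a single
# field, which matches the "exceeded the maximum length of 128" error.
print(len(header), header)
```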

Hope this helps.
