Copy nested objects from SQL Server to Azure Cosmos DB using a Data Factory
Let's say I have the following data structure:
public class Account
{
public int AccountID { get; set; }
public string Name { get; set; }
}
public class Person
{
public int PersonID { get; set; }
public string Name { get; set; }
public List<Account> Accounts { get; set; }
}
I want to move my data from a SQL Server database to Azure Cosmos DB using a Data Factory. For each person, I want to create a JSON document containing the accounts as nested objects, like this:
"PersonID": 1,
"Name": "Jim",
"Accounts": [{
"AccountID": 1,
"PersonID": 1,
"Name": "Home"
},
{
"AccountID": 2,
"PersonID": 1,
"Name": "Work"
}]
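For reference, serializing the classes above produces exactly this shape; a minimal sketch using System.Text.Json (note that `Account` needs a `PersonID` property added to match the JSON, which the class definition above doesn't have):

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

public class Account
{
    public int AccountID { get; set; }
    public int PersonID { get; set; }  // added to match the desired JSON
    public string Name { get; set; }
}

public class Person
{
    public int PersonID { get; set; }
    public string Name { get; set; }
    public List<Account> Accounts { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        var person = new Person
        {
            PersonID = 1,
            Name = "Jim",
            Accounts = new List<Account>
            {
                new Account { AccountID = 1, PersonID = 1, Name = "Home" },
                new Account { AccountID = 2, PersonID = 1, Name = "Work" }
            }
        };

        // The accounts come out as a nested array, as in the example above.
        string json = JsonSerializer.Serialize(person);
        Console.WriteLine(json);
    }
}
```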
I wrote a stored procedure to retrieve my data. In order to include the accounts as nested objects, I convert the SQL query's result to JSON:
select (select *
from Person p join Account Accounts on Accounts.PersonID = p.PersonID
for json auto) as JsonResult
Unfortunately, my data gets copied into a single field instead of the proper object structure:
Does anyone know what I should do to fix this?
Edit: There is a similar question here, but I didn't find a good answer: Is there a way to insert a document with a nested array in Azure Data Factory?
For anyone in the same situation: I ended up writing a .NET application that reads the entries from the database and imports them using the SQL API.
https://docs.microsoft.com/en-us/azure/cosmos-db/create-sql-api-dotnet
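In that application, the FOR JSON result deserializes straight back into the classes. A minimal sketch, where the JSON literal stands in for the `JsonResult` column the stored procedure returns (the exact property names depend on the column aliases in the FOR JSON AUTO query):

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

public class Account
{
    public int AccountID { get; set; }
    public int PersonID { get; set; }
    public string Name { get; set; }
}

public class Person
{
    public int PersonID { get; set; }
    public string Name { get; set; }
    public List<Account> Accounts { get; set; }
}

public static class Program
{
    public static void Main()
    {
        // In the real application this string would be read from the
        // stored procedure's JsonResult column via SqlDataReader.
        string jsonResult =
            "[{\"PersonID\":1,\"Name\":\"Jim\",\"Accounts\":[" +
            "{\"AccountID\":1,\"PersonID\":1,\"Name\":\"Home\"}," +
            "{\"AccountID\":2,\"PersonID\":1,\"Name\":\"Work\"}]}]";

        // One Person per element, each with its nested accounts.
        List<Person> people = JsonSerializer.Deserialize<List<Person>>(jsonResult);
        Console.WriteLine(people[0].Accounts.Count); // 2
    }
}
```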
That method is a little slow for large imports because it has to serialize each object and then import them individually. A much faster way I found later is to use the bulk executor library, which lets you import JSON in bulk without serializing it first:
https://github.com/Azure/azure-cosmosdb-bulkexecutor-dotnet-getting-started
https://docs.microsoft.com/en-us/azure/cosmos-db/bulk-executor-overview
Edit
After installing the NuGet package Microsoft.Azure.CosmosDB.BulkExecutor:
// Connect to Cosmos DB and look up the target collection
var documentClient = new DocumentClient(new Uri(connectionConfig.Uri), connectionConfig.Key);
var dataCollection = documentClient.CreateDocumentCollectionQuery(UriFactory.CreateDatabaseUri(database))
    .Where(c => c.Id == collection)
    .AsEnumerable()
    .FirstOrDefault();

// Initialize the bulk executor for that collection
var bulkExecutor = new BulkExecutor(documentClient, dataCollection);
await bulkExecutor.InitializeAsync();
Then import the documents:
var response = await bulkExecutor.BulkImportAsync(documents);
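BulkImportAsync accepts the documents as plain objects or as pre-serialized JSON strings, so the FOR JSON output can be fed in directly without deserializing. A minimal sketch of assembling the documents list (the actual import call is commented out because it needs a live Cosmos DB endpoint, and `enableUpsert: true` is an assumption about the desired behaviour):

```csharp
using System;
using System.Collections.Generic;

public static class Program
{
    public static void Main()
    {
        // Each entry is one person document, already serialized as JSON —
        // e.g. taken straight from the stored procedure's FOR JSON output.
        var documents = new List<string>
        {
            "{\"PersonID\":1,\"Name\":\"Jim\",\"Accounts\":[{\"AccountID\":1,\"PersonID\":1,\"Name\":\"Home\"}]}",
            "{\"PersonID\":2,\"Name\":\"Ann\",\"Accounts\":[]}"
        };

        // With a live bulkExecutor (see the setup above), the import would be:
        // var response = await bulkExecutor.BulkImportAsync(documents, enableUpsert: true);
        // Console.WriteLine(response.NumberOfDocumentsImported);

        Console.WriteLine(documents.Count); // 2
    }
}
```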