Problems with initial refresh of Power BI Data Flow - Too demanding on data source

I am faced with a challenge I would love to get some pointers for.

I have a couple of very large tables housed in a SQL Server instance that is also the primary production database for the application it supports.

I want to load 2 years' worth of historical data, after which I will be implementing incremental refresh. The problem is that when I tried to do the initial load, the end users of the application experienced timeouts and all the other things you do not want to see in production.

I am looking for a way to feed the data flow either in small steps of a month or less, or to provide the historical data separately via a .csv or in some other way.
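For example, one way to do the first load in month-sized slices is to bound the source query by date and move the window forward one month per run. A minimal T-SQL sketch, using the hypothetical names dbo.FactSales and OrderDate as stand-ins for your own table and date column (a dataflow's incremental refresh RangeStart/RangeEnd parameters are meant to fold into an equivalent WHERE clause):

-- Hypothetical names: dbo.FactSales and OrderDate stand in for your own table and column.
DECLARE @RangeStart datetime2 = '2021-01-01';
DECLARE @RangeEnd   datetime2 = DATEADD(MONTH, 1, @RangeStart);

SELECT *
FROM dbo.FactSales
WHERE OrderDate >= @RangeStart
  AND OrderDate <  @RangeEnd;   -- one month per run; advance @RangeStart for the next slice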

Can anybody share some insight on how to go about this? I've tried researching this issue, but I've not found a way so far.

Thank you in advance!

If you don't have the option to mirror your data, I would suggest a dirty read.

-- Allow dirty reads so the extract does not take shared locks and block writers
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

BEGIN TRANSACTION;

-- your sql script

COMMIT TRANSACTION;
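If the extract has to be a single SELECT (for example, a source query pasted into the dataflow) rather than a scripted transaction, the per-table WITH (NOLOCK) hint gives the same read-uncommitted behaviour; the trade-off in both cases is that rows from uncommitted or later rolled-back transactions can show up in the extract. A sketch, reusing the hypothetical dbo.FactSales and OrderDate names from above:

SELECT *
FROM dbo.FactSales WITH (NOLOCK)   -- equivalent to READ UNCOMMITTED for this table
WHERE OrderDate >= '2021-01-01'
  AND OrderDate <  '2021-02-01';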
