In my company we work with Dataflows Gen1 on a large volume of data. Our source is Denodo, which enforces a maximum query time (timeout) of 15 minutes, and some of our loads exceed it.
To work around this, we partitioned the data by semester so that each load handles a smaller volume (see the filter sketch below), but we still hit the timeout occasionally because some partitions take too long to load.
How should we structure these dataflows so that we do not have these problems?
What is the best way to work with large data volumes in Dataflows Gen1?
Clarification: we must stay on Gen1, since we are not yet allowed to use Gen2.
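For reference, here is a minimal sketch in Power Query M of what one partition's query could look like, assuming an ODBC connection to Denodo; the DSN, database, table, and column names (`dsn=Denodo`, `my_virtual_db`, `sales`, `transaction_date`) are placeholders, not our actual setup. If you use the native Denodo connector instead, the source step differs, but the idea is the same: apply the date filter as the first step so it folds into the query Denodo executes, and only one partition's rows are transferred per refresh.

```
let
    // Partition boundaries: one entity per semester (or a smaller window
    // such as a quarter, if a semester still exceeds the 15-minute timeout).
    RangeStart = #datetime(2024, 1, 1, 0, 0, 0),
    RangeEnd   = #datetime(2024, 7, 1, 0, 0, 0),

    // Hypothetical ODBC connection to Denodo; names are placeholders.
    Source   = Odbc.DataSource("dsn=Denodo", [HierarchicalNavigation = true]),
    Database = Source{[Name = "my_virtual_db"]}[Data],
    Sales    = Database{[Name = "sales"]}[Data],

    // Filter immediately after the source step so the predicate folds
    // to Denodo and only this semester's rows cross the wire.
    Filtered = Table.SelectRows(
        Sales,
        each [transaction_date] >= RangeStart and [transaction_date] < RangeEnd
    )
in
    Filtered
```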
Does it need to be a dataflow? Would a bunch of CSV or Parquet files work as well?
Also, this: Known issue - Visuals using the Denodo connector might show connection errors - Microsoft Fabric | M...