Hello,
I would like to understand how I can leverage the new Fabric capabilities in our solution.
We are currently using:
- Snowflake as our data warehouse
- A Power BI dataflow to import the data into Power BI
- A Power BI dataset built on that dataflow
- A Power BI report for visualization
We have been having issues with long refresh times (it can take around 4 hours to get the data from Snowflake to the report). We tried implementing incremental refresh, but ran into issues because we also need to handle deletes; due to how the incremental refresh partitions work, deletes end up creating duplicates, which take too long to remove in Power Query.
Is there a way to rework the architecture using the new Fabric capabilities so it performs better?
We are on premium capacity.
Thank you for any help!
Hi. The answer would be "It depends". It's not necessarily an issue with the tech, but with the decisions about the data transformations. How are you using the dataflow? What kinds of transformations (joins, appends, changing types, etc.) are you doing? How many rows? How many columns? Those answers would help us understand whether it is a practices issue or a tech issue.
I worked with a few million rows three years ago, without dataflows, and it worked fine. I mean Snowflake -> Power BI dataset -> Power BI report.
All transformations were applied in Snowflake with queries creating the dim and fact tables; the dataset would only read those tables.
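For what it's worth, here's a rough sketch of what I mean by "queries creating dim and fact tables". All database, table, and column names are invented for illustration; adapt them to your own schema:

```sql
-- Hypothetical example: build the dim/fact tables in Snowflake so the
-- Power BI dataset only has to do a plain import of finished tables.

CREATE OR REPLACE TABLE analytics.dim_customer AS
SELECT DISTINCT
    customer_id,
    customer_name,
    region
FROM raw.sales_orders;                       -- deduplicate into a dimension table

CREATE OR REPLACE TABLE analytics.fact_sales AS
SELECT
    order_id,
    customer_id,
    order_date,
    quantity,
    quantity * unit_price AS line_amount     -- derived columns computed here, not in Power Query
FROM raw.sales_orders
WHERE is_deleted = FALSE;                    -- handle deletes at the source so the model never sees them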
I hope that helps
Happy to help!