As of two weeks ago, my Premium Per User workspace was handling data refreshes fine; even large datasets didn't take long.
Since the new look and feel (and, I'm guessing, some other technical changes), I find myself struggling with a simple 300 MB dataset refresh in this workspace. The recurring message is that the gateway is unreachable:
Data source error: {"error":{"code":"DM_GWPipeline_Client_GatewayUnreachable","pbi.error":{"code":"DM_GWPipeline_Client_GatewayUnreachable","parameters":{},"details":[],"exceptionCulprit":1}}}
However, I have the latest version of the gateway:
I have 30 other dashboards, and 3 of them have a lot more data than the 2 I'm struggling with. I thought the solution would be to set up another Premium Per User workspace, so I proceeded to do so, with no luck.
I'm also making sure the user is not stuck in the database, and it isn't: I asked the DBA to check whether the query is stuck, and it's not; there is no workload stuck on the database side.
In short, as of right now I've tried:
- Updated the gateway
- Set up a new Premium Per User workspace
- Reduced the dataset size
- Made sure no other refresh is scheduled at the same time
- Enabled the large dataset storage format option on the dataset
All with no luck.
My question is: what changed from a couple of weeks ago? Everything was working fine.
I would really appreciate any help; I cannot find anything documenting the changes made 2 weeks ago.
The only other thing I can think of: could there be something in your steps that is causing the refresh to take so long?
Thanks @GilbertQ , it's actually just two simple steps of joining data from 2 different SQL Databases:
On a good day, the refresh takes a couple of minutes:
When the workspace wants to, it takes 5 hours and fails at the end.
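For reference, the two-step join described above can be sketched in plain Python, using two in-memory SQLite databases as stand-ins for the two SQL Server sources (the table and column names here are made up for illustration):

```python
import sqlite3

# Hypothetical stand-ins for the two separate SQL databases.
db_a = sqlite3.connect(":memory:")
db_a.execute("CREATE TABLE sales (customer_id INTEGER, amount REAL)")
db_a.executemany("INSERT INTO sales VALUES (?, ?)", [(1, 100.0), (2, 250.0)])

db_b = sqlite3.connect(":memory:")
db_b.execute("CREATE TABLE customers (customer_id INTEGER, name TEXT)")
db_b.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Acme"), (2, "Globex")])

# Step 1: pull each table from its own source.
sales = db_a.execute("SELECT customer_id, amount FROM sales").fetchall()
customers = dict(db_b.execute("SELECT customer_id, name FROM customers").fetchall())

# Step 2: join them on customer_id.
joined = [(customers[cid], amount) for cid, amount in sales]
print(joined)  # [('Acme', 100.0), ('Globex', 250.0)]
```

The point is that the transformation itself is trivial; when a join this small takes hours, the bottleneck is somewhere in the refresh pipeline, not the query.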
Hi @diegoalberto,
Load those 2 SQL tables in individual dataflows and join them in a 3rd dataflow, and see whether anything changes.
Thanks,
Sai Teja
Thanks @SaiTejaTalasila , how can I achieve this in Power BI Desktop? I can't recall an option to tell the tables to load in sequence instead of in parallel, as it's done now.
Hi @diegoalberto ,
Currently you are pulling both tables in the same dataflow. Instead, you can pull each table into an independent dataflow and join the output of dataflows 1 and 2 in a 3rd dataflow. It will definitely help reduce the data refresh time, and it also helps you identify which part of the flow is taking the most time.
Thanks,
Sai Teja
Thanks @SaiTejaTalasila , I understand the idea of loading each table one by one and then combining them in a third dataflow.
The question is, how can I achieve this? Where? In Power BI Desktop? In my workspace?
Thanks again.
Hi @diegoalberto ,
Just follow the steps -
1. In your workspace, create a new dataflow, import table 1, and apply all required transformations.
2. Create one more dataflow, import table 2, and apply all your transformations.
3. If your workspace is Premium, create a new dataflow, import the output of the two dataflows above (import from dataflows), and merge them. If your workspace is non-Premium (a Pro workspace), create a report in Power BI Desktop, import the data from the dataflows above, and merge them there.
First refresh the first 2 dataflows; once that refresh is completed, refresh the 3rd dataflow or your report.
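If you'd rather script the "refresh 1 and 2 first, then 3" sequencing than rely on staggered schedules, the Power BI REST API has a "Dataflows - Refresh Dataflow" endpoint. A minimal sketch, with Azure AD token acquisition omitted and hypothetical placeholder IDs:

```python
import json
import urllib.request

API = "https://api.powerbi.com/v1.0/myorg"

def dataflow_refresh_url(group_id: str, dataflow_id: str) -> str:
    # "Dataflows - Refresh Dataflow" endpoint from the Power BI REST API.
    return f"{API}/groups/{group_id}/dataflows/{dataflow_id}/refreshes"

def trigger_refresh(group_id: str, dataflow_id: str, token: str) -> None:
    # Kicks off an asynchronous refresh; poll the dataflow's transactions
    # endpoint to know it has finished before starting the next one.
    req = urllib.request.Request(
        dataflow_refresh_url(group_id, dataflow_id),
        data=json.dumps({"notifyOption": "MailOnFailure"}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

# Refresh the two source dataflows first, then the merge dataflow.
SOURCE_DATAFLOWS = ["<dataflow-1-id>", "<dataflow-2-id>"]  # placeholders
MERGE_DATAFLOW = "<dataflow-3-id>"                         # placeholder
```

This is only a sketch of the sequencing idea; in practice you would wait for each source refresh to report success before triggering the merge dataflow.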
Thanks,
Sai Teja
Thanks @SaiTejaTalasila and sorry for the delay.
However, doing this for more than 30 dashboards and reports is unfeasible. As for the loading performance side of this, I addressed it this way:
- Talked with our DBA and optimized indexes in the whole SQL Server DB
- Reduced datasets in each PBIX
- Merged 5 reports into 1, so that instead of 5 semantic model refreshes there's just one
- Bought a new Power BI Premium per User workspace, thinking the problem was shared resources.
Also for the networking side of things:
- Moved the on-premises gateway to a new server dedicated to it
- Tested an on-premises gateway locally, on my laptop
With all of the above done, and with more frustration in hand, I opened a ticket with Microsoft (TrackingID#2407050040008568). As many gateway logs as I sent, I never got a diagnosis of those logs in any response; they just kept telling me "here's the documentation", "check network connectivity".
Once again, since the frustration is huge (more than 2 months with this problem), I tested one last thing: turning off this option (the gateway's HTTPS mode).
The documentation and the label on the gateway say it's recommended, but it seems something messes things up when this option is on.
It's been 3 days and EVERYTHING, and when I say EVERYTHING I mean EVERYTHING, is working fine, even faster; nothing has failed.
I'm inclined to say this was the thing giving me a headache, but I'm still frustrated that there is no traceability of what each gateway update does; at this link there's NOTHING about what's new in each release:
https://learn.microsoft.com/en-us/data-integration/gateway/service-gateway-monthly-updates
What this basically means is having to watch out for every sudden intermittency that might cause more problems like this, without any kind of visibility from Microsoft.
Also, the documentation that Microsoft posts recently (as far back as May 2024) changed something regarding network connectivity, but nowhere can I find guidance to turn off the HTTPS option if problems arise:
I want to wait until the end of the week before concluding that the problem has been solved. But my God, what a ride.
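For anyone hitting the same wall: with the HTTPS mode toggle off, the gateway communicates with Azure Relay over direct TCP, which per Microsoft's gateway communication-settings documentation needs outbound ports 5671-5672 and 9350-9354 in addition to 443 (HTTPS mode needs only 443). A minimal Python sketch to check whether a given port is reachable from the gateway server:

```python
import socket

# Outbound ports the gateway uses in direct TCP mode, per Microsoft's
# communication-settings documentation; HTTPS mode needs only 443.
DIRECT_TCP_PORTS = [443, 5671, 5672] + list(range(9350, 9355))

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run `port_open(host, p)` for each port against the Azure Relay hosts reported by the gateway app's built-in network ports test; if any of 5671-5672/9350-9354 come back False while HTTPS mode is off, that would explain intermittent GatewayUnreachable errors.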
Hey, has the problem occurred again since you turned off this option?
I am experiencing the same issue, also since April/May.
Hello @TugaySonakalan , the problem has not occurred again since I turned off this option. No errors at all since this change.
I have not read or heard of any issues with regard to semantic model refreshes failing. I would highly recommend chatting with your network admin to see if there were any network changes. I would also highly recommend chatting with your DBA to double-check that the queries are actually reaching the database server and completing successfully.
Thanks @GilbertQ , the network is working fine; every connection from the server hosting the gateway is online. The DBAs are constantly checking the connections to the sources, and the curious thing is that the sessions to the DBs don't last long; it's the PBI workspace that takes that long to refresh.
Also, continuing with the random problems in this matter, today a 3 MB dataset took 5 hours to refresh, only to end in an error:
So, yeah... I just don't know what else to check.