Ever since about 6am Eastern today, my dataflow refreshes have been just spinning - no errors, just spinning like it's trying to load the data.
There is an existing ICM for a similar issue (ICM 297756751). This appears to be a new known issue where dataflow refreshes take longer than usual to complete. An employee is engaged on this issue as well.
Yeah, I'm still seeing some issues. Trying to create a connection to a new dataflow in Power BI Desktop and it just spins.
Could you please let us know when this issue started? I'm working with a customer who is facing it, and we are raising an internal incident.
We have applied a workaround: move the workspace from Premium capacity to shared capacity (Pro or PPU), then move it back to Premium capacity. Try this and let us know if it worked for you.
I'm a Support Engineer, so I don't have full visibility into the issue, but I will let you know as soon as I get any news.
Started Wednesday - it seemed to work OK by mid-afternoon, then was slow but working on Thursday, but since Friday, no dice except randomly. There are typically no errors; it's just as if the dataflow service can't connect successfully, or it's somehow taking too long. Datasets are fine, and non-Premium dataflows are fine too.
Has this been resolved? We seem to be having the same issue. Started this morning 3/28 with our 8:00AM EST refresh. No issues for us on Friday. Dataflows are stuck in refresh. No time out or errors. Tried manually canceling and restarting, and still haven't had any success.
Hey folks, just an FYI: we were able to resolve this by pausing and restarting our Gen2 instance in the Azure portal. After the pause/restart, dataflow refreshes ran and completed normally.
We upgraded our gateways to the newest version last night and things resolved almost immediately. I'm not saying this will solve it but it did for us.
However, we did test an alternative for next time (we believe there will be a next time). Since this seems to affect only our Premium dataflows, we created the same dataflows in a non-Premium workspace and pointed the report to those. Since the dataset stays with the report in the Premium workspace, the user/security setup we have still works, so for us it's a perfect backup plan.
To make it easier, we set up text parameters in the reports for the workspace ID and dataflow IDs (we have two or three dataflows per report). That way, if the Premium dataflows fail, we can easily flip the dataset to the non-Premium dataflows in the service itself, so we don't have to download giant files and sync them back up 🙂
Example of how to set up parameters for dataflows:
Create a parameter for the workspace ID (make sure it's Text) - ours is named WorkspaceID - and enter the IDs for current and default.
Create a parameter for the dataflow ID (make sure it's Text) - ours is named DimensionDataflowID - and enter the IDs for current and default.
Change your datasource code in the Advanced Editor to reference the workspace and dataflow parameters instead of hard-coded IDs.
Once you republish the report, you will be able to change the workspace ID and dataflow ID(s) to point at the workspace and dataflows in a non-Premium workspace, if needed, without downloading and re-uploading the PBIX.
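The steps above can be sketched in Power Query M. This is a minimal sketch assuming the standard PowerPlatform.Dataflows connector; the parameter names match the ones above, while the GUIDs and the "Dimension" entity name are placeholders - substitute your own.

```
// Text parameters (normally created via Manage Parameters in Power BI Desktop)
// WorkspaceID:
"00000000-0000-0000-0000-000000000000" meta [IsParameterQuery=true, Type="Text", IsParameterQueryRequired=true]

// DimensionDataflowID:
"11111111-1111-1111-1111-111111111111" meta [IsParameterQuery=true, Type="Text", IsParameterQueryRequired=true]

// Datasource query in the Advanced Editor, navigating by the parameters
// instead of hard-coded workspace/dataflow IDs:
let
    Source = PowerPlatform.Dataflows(null),
    Workspaces = Source{[Id = "Workspaces"]}[Data],
    Workspace = Workspaces{[workspaceId = WorkspaceID]}[Data],
    Dataflow = Workspace{[dataflowId = DimensionDataflowID]}[Data],
    // "Dimension" is a placeholder entity name
    Entity = Dataflow{[entity = "Dimension", version = ""]}[Data]
in
    Entity
```

With this in place, flipping a report between the Premium and non-Premium dataflows is just a matter of editing the two parameter values in the dataset settings on the service.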
Same issue for us as well... high business impact.
Ticket created, but no response about an active fix.
This is an ongoing struggle we have with dataflows; I hope Microsoft will put more focus on fixing this ASAP.
Support responded to the ticket the following week. They provided a workaround which worked for us: move the workspace out of the Premium capacity, wait five minutes, then add it back. It also requires reconfiguring the gateways on the dataflows in the workspace.
That is simply an unacceptable workaround for a Fortune 100 company with tens of thousands of Premium free users around the world.