Hi,
I updated my on-premises data gateway cluster to 3000.202.13 (December) yesterday. Since then, all Gen2 dataflows have failed to write to the lakehouse, giving this error:
999999 Couldn't refresh the entity because of an issue with the mashup document MashupException.Error: The value is not updatable. Details: Reason = Expression.Error;Microsoft.Data.Mashup.Error.Context = User GatewayObjectId: 2bc...
Reverting one of the gateways to 3000.198.17 (November) and disabling the other gateways in the cluster has fixed the issue. I'd suggest waiting for an update if you're using a gateway to copy data from on-prem sources.
I am writing here hoping that I will receive a notification when a solution becomes available. Good find, Ben, but I'm hoping you keep poking at the issues with the new gateway.
Thanks.
Waiting on our IT to action this, but I think the on-prem gateway may have changed the format of the SQL connection string. We'd previously added firewall rules to allow communication with:
*.datawarehouse.pbidedicated.windows.net on 1433
However all of my lakehouse endpoints now appear to be of the form:
*.datawarehouse.fabric.microsoft.com
Have asked our IT to modify the rules, which I have a hunch will fix the problem. Will update here if successful.
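For anyone checking their own rules: that hostname is the lakehouse SQL analytics endpoint the gateway connects to over port 1433. Roughly what a query against it looks like from Power Query (the hostname, lakehouse and table names below are placeholders, not our real ones):
let
    // Placeholder hostname: lakehouse SQL analytics endpoints now resolve under
    // *.datawarehouse.fabric.microsoft.com and are reached over TCP 1433
    SqlEndpoint = "abc123.datawarehouse.fabric.microsoft.com",
    // Standard SQL connector call, so the gateway needs outbound 1433 to that host
    Source = Sql.Database(SqlEndpoint, "MyLakehouse"),
    // Navigate to a single table
    Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data]
in
    Orders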
Ben
I have the previous endpoint in my firewall rules, I can write to the lakehouse, and I'm running the latest January 2024 PBI data gateway version. So how could this be the issue?
But I have to chain 2 DFg2s in order to ingest the data into a lakehouse. The first DFg2 has no destination. The second DFg2 uses the first DFg2 as a source and specifies a data destination--the lakehouse--and the ingestion proceeds successfully. I then orchestrate these 2 DFg2s with a pipeline set to refresh hourly, and it works fine.
What does not work is using only one DFg2 to ingest from an on-prem DB--the refresh always fails with error 104100 internal error.
EDIT on 2024-02-06: It is not the number of queries that is the problem, it is a single query of this kind:
let
    Source = #shared
in
    Source
or
let
    Source = #sections
in
    Source
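If all you needed #shared for was getting at another query's output, referencing that query directly by name sidesteps the environment enumeration. A minimal sketch, where MyStagingQuery is a made-up placeholder for whichever query you actually want:
let
    // Direct reference to a named query in the same dataflow instead of
    // enumerating the whole environment with #shared or #sections
    Source = MyStagingQuery
in
    Source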
Also, if there are too 'many' queries (20 in this instance) in a DFg2, you get a 'Dynamic datasources not supported' error EVEN THOUGH the sources are 2 lakehouse tables! Go figure.
And this DF works fine in the Service; we published it more than a year ago from Power BI Desktop and it has refreshed daily with no issues.
Did you try to split your DF into 2 DFs, one for ingestion but no destination, and feed that into the second DF to do whatever transformation and publish to the lakehouse destination? Works for me every single time with an on-prem DB and one Power BI data gateway.
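In case it helps anyone set this up: the first (staging) DFg2 is just the raw pull from the on-prem DB with no data destination configured; the second DFg2 then uses it as a source and sets the lakehouse as its data destination. A rough sketch of the staging query, with made-up server, database and table names:
let
    // Raw pull from the on-prem SQL source; no data destination is configured
    // on this staging dataflow
    Source = Sql.Database("onprem-sql01", "SalesDB"),
    // Navigate to the one table we want to stage
    Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data]
in
    Orders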
Hi @bcdobbs
Thanks for using Microsoft Fabric Community.
Apologies for the issue that you are facing here.
As I understand it, you are facing an error message after upgrading your on-premises gateway cluster.
Please try the troubleshooting steps mentioned below, which might help.
For more details, please refer to this Link.
I hope this information helps. Please do let us know if you have any further questions.
Hi @bcdobbs
We haven't heard back from you on the last response and were just checking to see if you have a resolution yet.
If you have found a resolution, please share it with the community, as it can be helpful to others.
Otherwise, we will respond with more details and try to help.
Thanks
Hi @bcdobbs
We haven't heard back from you on the last response and were just checking to see if you have a resolution yet. If you have found a resolution, please share it with the community, as it can be helpful to others.
If you have any questions relating to the current thread, please let us know and we will try our best to help you.
If you have a question about a different issue, we request that you open a new thread.
Thanks
Sorry I didn't reply because I haven't had time and didn't feel my initial message had been understood.
I'm well aware that the cluster should be kept in sync.
1) Installed latest gateway on both servers in the cluster. This is when issues started.
2) The only way I could make Gen2 dataflows write to the lakehouse was by uninstalling the latest version on one of the servers and installing an earlier version (November). To prevent a mismatch I disabled the gateway on the 2nd server, effectively reducing it to a single gateway. I only did it on one because downgrading requires uninstalling, reinstalling and then taking over the gateway.
3) Today I noticed a new version had been released so have tried that and am still getting the issue. Again reverting to November fixes the issue.
Thanks
Ben