nathan-verghis
Regular Visitor

Dataflow refresh job failed: Job instance failed without detail error

Hello,

 

I've been building a series of Dataflows to transform and load data between our Bronze and Silver Lakehouses across a couple of different data providers. In my architecture, we have one dataflow per data provider, which controls the transformations applied to each applicable table before the final load step.

 

First off, the error code 20302 is incredibly useless. The attached link in Fabric takes you to a general error code docs page where the code isn't even listed. As you can see in the subject line, it seems to be used as a catch-all error code.

 

Through experimentation, I've learned a couple of nuanced quirks about Fabric (while hoping to solve 20302), and maybe I'm green, but I haven't found any resources online discussing these issues.

 

For starters, it looks like there is some backend dependency between the lakehouse connection established in the Dataflow and the associated schema in the lakehouse. What I mean is: if personA sets up a Dataflow/Lakehouse and personB gets into the Dataflow, personB will be asked to configure the connection (since, at the time, the connection will have been configured through personA's account). But even if personB doesn't make any other change, when the dataflow is triggered again, it will fail with error 20302. I don't really know what causes this issue. It may have some relation to the dataflow resource ownership, the lakehouse connection authorization, or the schedule ownership (for a pipeline), but for whatever black-box reason, it fails. Currently the band-aid we've been using is to re-establish all the connections, but this still fails occasionally without reason.

 

Which leads me to my next point. One of my data providers is a Snowflake db, with the initial pull into Bronze being done through a Notebook (via API), which leverages Streams to reduce costs. This is the only discernible difference, but the issue isn't with how data is being pulled into Bronze; that happens consistently without fail. It's the dataflow that gives us the 20302 error, even if no one else has touched the dataflow (contrary to my findings above). I don't really know what to change, or where to experiment to make Fabric happy, but right now it will only successfully run if I completely drop and replace the tables/connections in my Lakehouse/Dataflow.

If anyone has any experience with this issue, or insight into what I might be doing wrong, I'd greatly appreciate your help. Apologies if my tone is a little aggressive; I've found this to be incredibly frustrating.

 

Thanks!

1 ACCEPTED SOLUTION

Hi @nathan-verghis ,
Thanks for the follow-up.

Since the issue persists even after trying the available workarounds and there's no clear indicator from the Dataflow logs, the best next step would be to raise a support ticket with Microsoft for deeper investigation. If you've already raised a support request, please consider sharing any insights or resolutions provided by the support team here; it would be helpful for others facing the same issue.

Also, thanks to @miguel and @Ilgar_Zarbali for addressing this and sharing their valuable insights.

 

Best Regards,
Vinay,
Community Support Team.

 


11 REPLIES
Ilgar_Zarbali
Most Valuable Professional

Error 20302 in Dataflow Gen2 usually relates to connection or ownership issues. If a different user edits or opens the dataflow, it may break the connection. To avoid this, use workspace-managed connections and ensure all users have the right permissions. Also, check that the Lakehouse schema hasn’t changed. Sometimes, re-binding the connection or re-saving the dataflow helps.
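
For anyone wanting to audit this, the Fabric REST API exposes a connections listing that can help confirm which connections exist and how they're configured. Below is a minimal Python sketch, assuming an Entra ID access token for the Fabric API (token acquisition omitted); response field names should be verified against the current API docs.

```python
# Minimal sketch: list the connections the caller can see via the Fabric REST API,
# to help confirm that shared (workspace-managed) connections are in place.
# Assumes you already hold an Entra ID access token for https://api.fabric.microsoft.com.
import requests

FABRIC_API = "https://api.fabric.microsoft.com/v1"
TOKEN = "<your-access-token>"  # placeholder; acquire via MSAL or azure-identity

def list_connections():
    resp = requests.get(
        f"{FABRIC_API}/connections",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json().get("value", [])

if __name__ == "__main__":
    for conn in list_connections():
        # Field names follow the public API docs; verify against your tenant's responses.
        print(conn.get("id"), conn.get("displayName"), conn.get("connectivityType"))
```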

I have made sure that all users in the workspace also have access to the connection; however, this issue occurs even when other users don't interact with or update the Dataflow.

v-veshwara-msft
Community Support

Hi @nathan-verghis ,

Following up to see if your query has been resolved. If any of the responses helped, please consider marking the relevant reply as the 'Accepted Solution' to assist others with similar questions.

If you're still facing issues, feel free to reach out.

Thank you.

Unfortunately this is still an issue. I've attempted the solutions posted so far but haven't had any success. So far, manually running the dataflow seems to work better than having it triggered by the pipeline on a schedule, but I don't see why that should cause any issues. For insight, this error tends to occur about a minute into the dataflow run, but I don't see any schema changes that would cause the issue (especially since this is only happening between Bronze and Silver, not on the initial load into Bronze).


Hi @nathan-verghis ,

We’re following up as we haven’t heard from you in a while. May I ask if you were able to raise a support ticket and receive any guidance from the support team?

If you’ve already shared the ticket details via direct message with @miguel, that’s perfectly fine.

If there are any key insights or resolutions you can share here (while keeping any sensitive information private), it would be helpful for others facing similar issues.

If there are no further updates, we may consider closing this thread.

 

For any future questions or assistance, please don’t hesitate to start a new discussion in the Microsoft Fabric Community Forum. We’ll be happy to help.

Thank you for being a valued member of the Microsoft Fabric Community.

Hi!

My name is Miguel Escobar and I'm a product manager for Dataflow Gen2. I'll be happy to help you reach a solution.

 

Could you please share some more information?

  • Are you using a Dataflow Gen2, or is it a Dataflow Gen2 with CI/CD support?
  • Could you please share exactly where you're seeing the 20302 error? Is it within the Data pipelines? If so, could you please head over to the Workspace list, select the actions menu of the Dataflow that failed the refresh, select the option labeled "Recent runs", find the run that failed, and see what errors are shown within that detailed report of the failed refresh? A screenshot of the error from the recent runs would help tremendously.

If you have a support ticket, you can also share it with me via a private direct message and we can follow up through that secure support channel.

v-veshwara-msft
Community Support

Hi @nathan-verghis ,

Just checking in to see if your query is resolved and if any responses were helpful. If so, kindly consider marking the helpful reply as 'Accepted Solution' to help others with similar queries.

Otherwise, feel free to reach out for further assistance.

Thank you.

jcantwell
Regular Visitor

Hi all, 

 

I just wanted to add a few things. I've seen a number of different 20302 errors on dataflow refreshes. Some I have figured out, some I haven't. 

I just saw this specific version today: 
"Dataflow refresh job failed with status: Failed. Error Info: { errorCode: JobInstanceStatusNotFound, message: Job instance not found, requestId: 1702ea03-eada-4865-96c2-73c4f9c6ee9b }"

I looped through all the dataflows (Gen2 CI/CD) in a workspace (all created by me, with my own connections; no one else has touched them), and 2 out of 8 failed with that message. I checked the recent runs of those dataflows. Interestingly, these 2 dataflows were still actively running. I thought maybe it was some sort of resource or timeout error, so I was looking at the durations. One of the two that caused that error in the pipeline failed after 6m 10s, the other after 5m 14s. However, one of the successful runs also had a duration of 5m 14s. I checked the job instance status later for these dataflows, and both show failed, though the logs of the runs themselves show everything succeeded.
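
For anyone wanting to reproduce this kind of check programmatically, here is a minimal Python sketch, assuming the public Fabric REST API's List Items and List Item Job Instances endpoints and a valid access token (placeholders marked); response field names should be verified against the docs.

```python
# Minimal sketch of the check described above: list the Dataflows in a workspace
# and pull each one's recent job instances, so the scheduler's view of a run can
# be compared against what the refresh logs claim.
import requests

FABRIC_API = "https://api.fabric.microsoft.com/v1"
TOKEN = "<your-access-token>"         # placeholder
WORKSPACE_ID = "<your-workspace-id>"  # placeholder

HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def list_dataflows(workspace_id):
    # List Items supports filtering by item type.
    resp = requests.get(
        f"{FABRIC_API}/workspaces/{workspace_id}/items",
        headers=HEADERS,
        params={"type": "Dataflow"},
    )
    resp.raise_for_status()
    return resp.json().get("value", [])

def recent_job_instances(workspace_id, item_id):
    resp = requests.get(
        f"{FABRIC_API}/workspaces/{workspace_id}/items/{item_id}/jobs/instances",
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json().get("value", [])

for df in list_dataflows(WORKSPACE_ID):
    for job in recent_job_instances(WORKSPACE_ID, df["id"]):
        # Field names here follow the public docs; verify against real responses.
        print(df["displayName"], job.get("status"), job.get("startTimeUtc"), job.get("failureReason"))
```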

A couple of the other errors I've seen: one with a somewhat helpful error message, one very much not.

 

The first has an error something like notpublishederror... I can't remember exactly. With Gen2 CI/CD, some folks might not yet realize that when you deploy dataflows to a new workspace you have to: 1) manually update the data destination (there is no functionality yet to use deployment parameters for data sources or data destinations with Gen2 CI/CD), and 2) Save and Run (the new "Publish") the CI/CD dataflow once deployed. All of this, every time. Kind of annoying, but it works. There just doesn't seem to be a good way yet to handle deployments with data sources and destinations that vary by environment, despite these being CI/CD versions.

The other error will simply say "Invalid Request, Unexpected Dataflow Error". What I've learned so far is that the dataflow activity works fine with Gen2 CI/CD dataflows if you select them from the dropdown statically, which, of course, isn't a very realistic use case. When running the dataflows dynamically and supplying the dataflow ID at runtime, the activity doesn't know that it's a CI/CD version: it's missing a specific dataflow type property and value that it needs in order to process these. I have found a kind of workaround where you can manually edit the pipeline JSON and add this property to the activity object code, and it works. But if you open that dataflow activity, work with it, and then close it, it will have overwritten your code change, defaulting back to removing that property, and it will fail again. There is some good information in this Reddit thread: https://www.reddit.com/r/MicrosoftFabric/comments/1khpav5/issue_refreshing_gen_2_cicd_dataflow_in_pi...
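
For anyone who wants to try that JSON workaround, here is an illustrative sketch of the shape of the edit. The activity type and the property name/value below are hypothetical placeholders (the post doesn't name the real property); the reliable way to get them is to inspect the JSON of a pipeline where a CI/CD dataflow was selected statically from the dropdown, and copy the exact key and value from there.

```python
# Illustrative only: a Python dict mirroring the pipeline activity JSON described
# above. "dataflowType" and its value are HYPOTHETICAL placeholders; copy the real
# key/value from a pipeline where the CI/CD dataflow was chosen from the dropdown.
refresh_activity = {
    "name": "Refresh Silver dataflow",
    "type": "RefreshDataflow",  # use the activity type name as it appears in your pipeline JSON
    "typeProperties": {
        "workspaceId": "<workspace-id>",                        # placeholder
        "dataflowId": "@pipeline().parameters.dataflowId",      # supplied at runtime
        "dataflowType": "<value-copied-from-static-pipeline>",  # hypothetical key; see note above
    },
}
```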

Hi @jcantwell ,
Thanks for engaging with Microsoft Fabric Community and for the detailed explanation.

At the moment, parameterizing the DataflowId in the Dataflow activity within Pipelines is not supported for Dataflows with CI/CD (Git integration) enabled. This is a current limitation while the feature is in preview. If you use dynamic DataflowId references in Pipelines, they will only work for legacy Dataflow Gen2 instances without CI/CD support. For now, CI/CD Dataflows should be referenced by selecting them directly from the dropdown in the activity configuration. As an alternative, it's possible to trigger Dataflow Gen2 CI/CD variants using the new Fabric REST APIs (see the sketch after the links below).

Reference: Dataflow activity - Microsoft Fabric | Microsoft Learn

Similar discussion: How to make a Dataflow pipeline generic in Microso... - Microsoft Fabric Community
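
For completeness, here is a minimal sketch of that REST alternative, assuming the Fabric "run on demand item job" endpoint; the exact jobType string for dataflow refreshes is an assumption here and should be confirmed against the current documentation.

```python
# Minimal sketch: trigger a refresh of a CI/CD-enabled Dataflow Gen2 via the
# Fabric on-demand job endpoint. "Refresh" as the jobType is an assumption;
# confirm it against the current Fabric REST API docs.
import requests

FABRIC_API = "https://api.fabric.microsoft.com/v1"
TOKEN = "<your-access-token>"         # placeholder
WORKSPACE_ID = "<your-workspace-id>"  # placeholder
DATAFLOW_ID = "<your-dataflow-id>"    # placeholder

resp = requests.post(
    f"{FABRIC_API}/workspaces/{WORKSPACE_ID}/items/{DATAFLOW_ID}/jobs/instances",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"jobType": "Refresh"},    # assumed jobType; verify in the docs
)
resp.raise_for_status()
# On success the service typically returns 202 Accepted, with a Location header
# pointing at the job instance, which can be polled for status.
print(resp.status_code, resp.headers.get("Location"))
```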

 

For deployments using Dataflow Gen2 with CI/CD, after deploying to a new workspace, it’s necessary to manually update the Data destination, as deployment parameters are not yet supported for data sources or destinations.

Additionally, Publish is replaced by Save and Run to finalize any changes.

These limitations are temporary, and Microsoft is working on supporting dynamic usage of CI/CD-enabled Dataflows and improving the deployment experience.

Hope this helps, and thanks for your patience.

 

v-veshwara-msft
Community Support

Hi @nathan-verghis ,

Thank you for reaching out and for sharing the details of your architecture and observations.

The error code 20302 in Fabric is a generic message that does not provide much detail, which makes it difficult to troubleshoot directly. Based on the information provided and known patterns with similar cases, this error can sometimes occur due to several underlying reasons.

 

One common cause relates to how Dataflow and Lakehouse connections are configured and managed, especially when different users create or modify the connections. When a Dataflow is created or connected to a Lakehouse using one user’s credentials, another user accessing or modifying the Dataflow may encounter errors if their permissions or authentication context differs from the original creator. Reestablishing connections or using a shared service account can sometimes mitigate this.

Related discussion: Solved: Re: Pipeline Failed: Error Code 20302 User config... - Microsoft Fabric Community

 

There have also been reports of intermittent network issues affecting Dataflow runs. While your data ingestion from Snowflake into the Bronze layer works consistently, the failure in the subsequent Dataflow could be related to transient network problems that are not always surfaced clearly in error messages.

Issue With Dataflow Gen2 Lakehouse Connection - Microsoft Fabric Community

 

Another possibility is resource limitations, especially when working with larger Dataflows or complex transformations. If the Dataflow exceeds available system resources during execution, it may fail without detailed logging.

Re: BUG::TRANSIENT::ERROR CODE 20302::user config ... - Microsoft Fabric Community

 

To investigate further, please check the detailed error logs in the Dataflow's refresh history. Sometimes, nested messages within the job history may provide additional clues beyond the generic 20302 error. If any specific error messages are visible there, please share them; that would help narrow down the root cause.

 

Some additional similar discussions: Solved: Re: Couldn't refresh the entity because of an issu... - Microsoft Fabric Community

Data Flow Gen2 Issue - Microsoft Fabric Community

 

Hope this helps. Please reach out for further assistance.
If this post helps, then please consider giving it kudos and accepting it as the solution to help other members find it more quickly.


Thank you.
