
AnthonyGenovese
Resolver III

Datasource credentials missing or invalid AFTER data is loaded

We are trying to load some larger on-prem SQL Server tables into the Lakehouse via a Gen2 Dataflow. We receive the error below many minutes after the dataflow has run.

 

Error Code: Challenge Error, Error Details: Data source credentials are missing or invalid. Please update the connection credentials in settings, and try again. (Request ID: 17e029f6-256d-4763-b9f1-0940f17c14de).

 

The thing is, we can see the query run on-prem. The error appears to be thrown after roughly the amount of time we would expect the data to take to load into Fabric. So from our perspective, it connects, runs the query, returns data, and then throws a credential error!
See screenshots below.

(Screenshot: AnthonyGenovese_0-1689885451742.png)

 

(Screenshot: AnthonyGenovese_1-1689885469726.png)

 

 

1 ACCEPTED SOLUTION
SidJay
Microsoft Employee

For Gateway-based refreshes, we have an existing limitation with token refresh that causes Gateway jobs running over an hour to fail. It's not quite as strict as the entire job needing to be under an hour, but if any portion of the job takes more than an hour, the limitation is hit.

 

Are you temporarily able to partition your refreshes so that the Gateway-based queries are partitioned into jobs that take less than an hour? You can then append (union) the partitions via a separate dataflow that runs in the cloud.
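The partitioning suggested above can be sketched in Power Query M. This is a hypothetical example, not the poster's actual dataflow: the server, database, table, and date column names are all assumptions, and the right partition key depends on your data. Each partition would be its own Gateway-backed query (or dataflow) that finishes in under an hour.

```powerquery
// Hypothetical partition query: one year of a large on-prem table.
// Duplicate this query per date range so each Gateway job stays under an hour.
let
    // Assumed server and database names
    Source = Sql.Database("MyOnPremServer", "MyDatabase"),
    BigTable = Source{[Schema = "dbo", Item = "BigFactTable"]}[Data],
    // Filter to a single partition; [CreatedDate] is an assumed column
    Partition2023 = Table.SelectRows(
        BigTable,
        each [CreatedDate] >= #date(2023, 1, 1) and [CreatedDate] < #date(2024, 1, 1)
    )
in
    Partition2023
```

A separate cloud-only dataflow could then union the staged partition tables, e.g. with `Table.Combine({Partition2022, Partition2023})`, so the append step never touches the Gateway.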

 

Thanks


4 REPLIES
AnthonyGenovese
Resolver III

Still having this issue.

Hey! Have you created a support ticket for this? If yes, could you please share the ticket identifier?


AnthonyGenovese
Resolver III

I wonder if this is another issue related to the dataset size?
