We are trying to load some larger on-premises SQL Server tables to the Lakehouse via a Gen2 Dataflow. We receive the error below many minutes after the dataflow has run.
Error Code: Challenge Error, Error Details: Data source credentials are missing or invalid. Please update the connection credentials in settings, and try again. (Request ID: 17e029f6-256d-4763-b9f1-0940f17c14de).
The thing is, we see the query run on-premises. The error appears to be thrown after roughly the amount of time we would expect the data to take to load into Fabric. So from our perspective, it looks like it connects, runs the query, returns the data, and then throws a credential error!
See screenshots below.
For Gateway-based refreshes we have an existing limitation with token refresh that causes Gateway jobs running over an hour to fail. It's not quite as strict as the entire job needing to be under an hour, but if portions of the job take more than an hour, the limitation is hit.
Are you able to temporarily partition your refreshes so that the Gateway-based queries are split into jobs that each take less than an hour? You can then append (union) the partitions via a separate dataflow that runs in the cloud, as sketched below.
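For illustration only, here is a minimal Power Query M sketch of that partitioning idea. The server, database, table, and column names (onprem-sql01, SalesDB, dbo.FactSales, OrderDate) are hypothetical placeholders, not taken from your setup. Each Gateway-based dataflow would carry one filtered query like this, sized so it finishes in under an hour:

    // Gateway dataflow: load only one slice of the source table so the
    // refresh stays under the roughly one-hour token limit.
    // All names below are placeholders for your own source objects.
    let
        Source    = Sql.Database("onprem-sql01", "SalesDB"),
        FactSales = Source{[Schema = "dbo", Item = "FactSales"]}[Data],
        // One partition per dataflow, e.g. a single year of data
        Partition = Table.SelectRows(
            FactSales,
            each [OrderDate] >= #date(2023, 1, 1) and [OrderDate] < #date(2024, 1, 1))
    in
        Partition

A separate cloud-only dataflow could then append the staged partitions, for example with Table.Combine (Partition2023 and Partition2024 are placeholder names for the staged outputs of the Gateway dataflows):

    // Cloud dataflow: union the staged partitions into one table.
    let
        Combined = Table.Combine({Partition2023, Partition2024})
    in
        Combined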
Thanks
Still having this issue.
Hey! Have you created a support ticket for it? If yes, could you please share the ticket identifier?
I wonder if this is another issue related just to the dataset size?