Hello,
I'm looking to do something relatively simple: retrieve data from an on-premises SQL Server database.
The last error message I got was this one:
A problem has occurred accessing your storage account or content. Check that it is correctly configured and accessible, then try again later. (Request ID: 8e57de0b-a75b-49c3-bd0c-f37ccce01098).
Any ideas?
Hi everyone,
After MANY hours with MS support working through the issue(s), I have confirmed that the issue is related to the capacity itself.
The first test I did was to move my workspaces back to the Trial capacity, which resolved all issues (it took about 2 minutes and I was able to load and see everything again).
Then, through the Azure Portal, I created a NEW capacity (F8, like my old capacity) and switched all my workspaces over to it, and everything is now working as expected. (Make sure to pause your old capacity so you're not charged for it.)
The support person acknowledged that MS is working hard to fix some performance issues with the Fabric capacities, and that this is a functional workaround (considering that the product is still in "Preview").
Hope this helps others.
I just paused my Fabric capacity in Azure, switched it back on, and voilà. All working again.
Thanks!
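For anyone who would rather script that pause/resume cycle than click through the Azure Portal, here is a minimal Python sketch against the Azure Resource Manager REST API. The Microsoft.Fabric/capacities suspend and resume actions and the api-version used here are assumptions based on current Azure documentation, and all names are placeholders; verify them before relying on this.

```python
# Minimal sketch: pause ("suspend") and resume a Fabric capacity via the
# Azure Resource Manager REST API. Resource path and api-version are
# assumptions; check the current Azure docs for Microsoft.Fabric/capacities.
import time

import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
CAPACITY_NAME = "<capacity-name>"       # placeholder
API_VERSION = "2023-11-01"              # assumed api-version; may differ

BASE = (
    "https://management.azure.com/subscriptions/"
    f"{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.Fabric/capacities/{CAPACITY_NAME}"
)


def arm_token() -> str:
    # DefaultAzureCredential picks up az login, environment variables, etc.
    cred = DefaultAzureCredential()
    return cred.get_token("https://management.azure.com/.default").token


def capacity_action(action: str) -> None:
    # action is "suspend" (pause) or "resume".
    resp = requests.post(
        f"{BASE}/{action}",
        params={"api-version": API_VERSION},
        headers={"Authorization": f"Bearer {arm_token()}"},
    )
    resp.raise_for_status()
    print(f"{action}: HTTP {resp.status_code}")


if __name__ == "__main__":
    capacity_action("suspend")   # pause the capacity
    time.sleep(60)               # give it a moment before bringing it back
    capacity_action("resume")    # switch it back on
```

Since pausing stops billing, the same suspend call can also be used to park the old capacity after migrating workspaces off it, as described above.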
Hi,
Thanks for your feedback.
Although I paused (and resumed) my capacity (F4) two days ago, I am still unable to generate a 19-line PDF report without triggering an error: "The visual element has exceeded the available resources". If even such a simple action isn't possible with an F4 capacity, I wonder what the point of sizing is. It's useless.
Hi @amolt !
Could you please share your ticket ID / service request ID with me so I can take a closer look at your case? You can post it here or send it to me privately in a direct message. I wonder what could be consuming so much, and whether there are opportunities for improvement at the query level.
I will mark this as resolved, as the underlying issue is related to capacity and resource consumption. Resource consumption depends on a number of factors beyond the capacity itself: the data source, the connector in use, and the overall query or dataflow that was created. There are techniques where you can stage the data first and perform the transformations afterwards, so we can privately explore the options that would work best for your specific case.
Best!
Hi @miguel , my open case number is 2310130040005839 regarding my capacity issues. I agree that there might be something I (we) are doing that is inefficient and would welcome any input as to how we can work around these issues.
Thank you
I agree there is a definite issue here!
I am processing a total of 4 GB of data, using a combination of dataflows (Gen2), pipelines, and notebooks, with 3 lakehouses and 1 warehouse.
I am using an F8 capacity, and after 24 hours on my new capacity, I am back in the same situation. I had to revert to the Trial capacity again.
If I really need an F64 capacity to process 4 GB of data, Fabric may not be a feasible option for us.
Hi @JFTxJ ,
In fact, I had to do the same thing as you: I went back to the Trial capacity.
However, I noticed that the smaller my Gen2 dataflows were, the more likely they were to finish successfully. So I have a set of dataflows that I run in series, and certainly not in parallel. Despite this, I still have problems with dataflows that merge two queries: the process never finishes correctly.
If possible, you could try reducing the size of your dataflows and serializing their execution in a pipeline. I hope this helps.
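To make that serialization explicit outside of a pipeline, a small script can trigger each dataflow refresh and wait for it to complete before starting the next. This is only a sketch: the Fabric REST API job-instances endpoint, the jobType value, the 202-plus-Location polling contract, and the status names are assumptions based on the public Job Scheduler documentation, and the token and IDs are placeholders.

```python
# Minimal sketch of serialized dataflow refreshes: start each dataflow as an
# on-demand job and poll until it finishes before starting the next one.
import time

import requests

TOKEN = "<fabric-api-access-token>"   # placeholder; acquire via Azure AD
WORKSPACE_ID = "<workspace-id>"       # placeholder
DATAFLOW_IDS = ["<dataflow-1>", "<dataflow-2>", "<dataflow-3>"]  # run order

HEADERS = {"Authorization": f"Bearer {TOKEN}"}
BASE = f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"


def run_and_wait(item_id: str) -> None:
    # Kick off the refresh; the service is assumed to reply 202 with a
    # Location header pointing at the job instance to poll.
    resp = requests.post(
        f"{BASE}/items/{item_id}/jobs/instances",
        params={"jobType": "Refresh"},   # assumed jobType for Dataflow Gen2
        headers=HEADERS,
    )
    resp.raise_for_status()
    poll_url = resp.headers["Location"]

    while True:
        status = requests.get(poll_url, headers=HEADERS).json().get("status")
        print(f"{item_id}: {status}")
        if status in ("Completed", "Failed", "Cancelled"):
            if status != "Completed":
                raise RuntimeError(f"Dataflow {item_id} ended with {status}")
            return
        time.sleep(30)                   # poll every 30 seconds


# Run the dataflows strictly one after another, never in parallel.
for dataflow_id in DATAFLOW_IDS:
    run_and_wait(dataflow_id)
```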
HAHA! This is exactly what I was in the process of doing/trying.
FYI: the only reason I am using dataflows is to access data through the on-prem data gateway to our on-prem SQL servers, which is not currently possible with the Copy Data activity... So since I am not doing any merges, I am hoping this will help stabilize my pipelines.
Let's keep this conversation going! Thank you for sharing your experience with me. 🙂
These are my current stats, in case they help anyone: [screenshots of the Utilization and Overages charts from the Fabric Capacity Metrics app]
Hi @amolt
Sorry for the inconvenience. I have escalated this to our internal team and will definitely come back to you if I get an update. Please continue using Fabric Community for help regarding your queries.
Appreciate your patience.
@amolt did you get any resolution on this? I am in the same situation. Lakehouse is unavailable, pipelines fail to load, and Dataflows (Gen2) are throwing the error you mentioned.
Having the exact same issue; literally every Dataflow Gen2 is failing.
Did you get a fix?
Hi @JFTxJ,
I have no real resolution at the moment, but I think I have an idea of the cause.
A few days ago I bought an F4 capacity and migrated some of my workspaces to it. Since then, I've been having a multitude of problems which, it seems to me, are linked to over-utilization of the capacity.
If I look at the Fabric Capacity Metrics report, I'm over 100% CU utilization. I think that's the cause of the problem.
If that is the case, though, the consequences are significant: all my workspaces are impacted, not just the one running the processing that generated these spikes in utilization.
Nevertheless, after a while (between 5 and 10 minutes) the lakehouses become available again.
Hi!
When exactly did you get that error? Was it during a refresh operation or during the authoring stage?
What version of the gateway are you using? If possible, please update it, or confirm that you're already running the latest version of the gateway.
I get this error during the refresh of a Gen2 dataflow.
Gateway: 3000.190.19 (September 2023)
Additional information: today, I still get the same error message and my lakehouse is unavailable. [Screenshots: the error message (two variants), and the same error in the Lakehouse explorer]
Thanks