
amolt
Advocate II

A problem has occurred accessing your storage account or content

Hello, 

I'm looking to do something relatively simple: retrieve data from an on-premises SQL Server database.

[screenshot: amolt_0-1696867667690.png]


The last error message I got was this one:
A problem has occurred accessing your storage account or content. Check that it is correctly configured and accessible, then try again later. (Request ID: 8e57de0b-a75b-49c3-bd0c-f37ccce01098).

Any ideas?

1 ACCEPTED SOLUTION

Hi @amolt !

Could you please share your ticket ID / service request ID with me so I can take a closer look at your case? You can post it here or send it to me privately in a direct message. I wonder what could be consuming so many resources and whether there are opportunities for improvement at the query level.

 

I will mark this as resolved, as the underlying issue relates to capacity and resource consumption. Resource consumption depends on a number of factors beyond the capacity itself: the data source, the connector in use, and the overall query or dataflow that was created. There are techniques by which you can stage the data first and perform the transformations afterwards, so we can privately explore the options that would work best for your specific case.
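The "stage first, transform later" idea can be sketched generically: land the raw extract untouched, then run the transformations as a separate step so the heavy query work is decoupled from the source read. (In Fabric this would typically mean landing data in a Lakehouse first; the file path and transformation below are purely illustrative.)

```python
# Hypothetical sketch of the staging pattern: step 1 lands raw rows
# as-is, step 2 transforms the staged copy. Names are illustrative only.
import csv
import os
import tempfile

def stage(rows, path):
    """Step 1: land the raw extract untouched (no joins, no filters)."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

def transform(path):
    """Step 2: read the staged file and apply transformations separately."""
    with open(path, newline="") as f:
        return [[cell.upper() for cell in row] for row in csv.reader(f)]

staging_file = os.path.join(tempfile.gettempdir(), "raw_extract.csv")
stage([["id", "name"], ["1", "alice"]], staging_file)
result = transform(staging_file)  # -> [["ID", "NAME"], ["1", "ALICE"]]
```

Because the two steps are independent, a failure in the transformation no longer re-reads the source, which is the point of staging when the source read is the expensive part.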

 

Best!


16 REPLIES
JFTxJ
Advocate II

Hi everyone,

 

After MANY hours with MS support working through the issue(s), I have confirmed that the problem is related to the capacity itself.

 

The first test I did was to move my workspaces back to the Trial capacity, which resolved all issues (it took about 2 minutes and I was able to load and see everything again).

 

Then, through the Azure Portal, I created a NEW capacity (F8, like my old capacity), switched all my workspaces to use it, and everything is now working as expected. (Make sure to pause your old capacity so you are not charged for it.)

 

The support person acknowledged that MS is working hard to fix some performance issues with the Fabric capacities, and that this was a functional workaround (considering that the product is still in "Preview").

 

Hope this helps others.
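The pause/recreate workaround described above can also be scripted against Azure Resource Manager, since Fabric capacities expose suspend and resume actions under the `Microsoft.Fabric/capacities` resource type. The sketch below only builds the ARM URL; the subscription, resource group, capacity name, and `api-version` are placeholders/assumptions, so verify them against the current ARM reference before relying on this.

```python
# Sketch of pausing/resuming a Fabric capacity via the ARM REST API
# (Microsoft.Fabric/capacities suspend/resume actions). All identifiers
# below are placeholders; the api-version is an assumption.

BASE = "https://management.azure.com"
API_VERSION = "2023-11-01"  # assumed; verify against the ARM docs

def capacity_action_url(subscription, resource_group, capacity, action):
    """Build the ARM URL for a capacity 'suspend' or 'resume' POST."""
    return (
        f"{BASE}/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Fabric/capacities/{capacity}"
        f"/{action}?api-version={API_VERSION}"
    )

# POST this URL with a bearer token to pause the capacity, e.g.:
# requests.post(capacity_action_url(sub, rg, "myf8", "suspend"),
#               headers={"Authorization": f"Bearer {token}"})
url = capacity_action_url("sub-id", "my-rg", "myf8", "suspend")
```

Posting to the `resume` action brings the capacity back, which is the scripted equivalent of the pause/unpause trick reported in this thread.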

I just paused my Fabric capacity in Azure, switched it back on, and voilà. All working again.

Thanks!

Hi, 

Thanks for your feedback.
Although my capacity (F4) has now been paused for two days, I am still unable to generate a 19-line PDF report without triggering an error: "The visual element has exceeded the available resources". If even such a simple action isn't possible with an F4 capability, I wonder what the point of sizing is. It's useless.


Hi @miguel ,

I have not opened a ticket concerning my problem.

Hi @miguel , my open case number is 2310130040005839, regarding my capacity issues. I agree that there might be something I (we) are doing that is inefficient, and I would welcome any input as to how we can work around these issues.

 

Thank you

I agree there is a definite issue here!

I am processing a total of 4 GB of data, using a combination of dataflows (Gen2), pipelines, and notebooks - with 3 lakehouses and 1 warehouse.

 

I am using an F8 capacity, and after 24 hours on my new capacity, I am back in the same situation. I had to revert to the Trial capacity again.

If I really need an F64 capacity to process 4 GB of data, Fabric may not be a feasible option for us.

Hi @JFTxJ ,

 

In fact I had to do the same thing as you, I went back to trial capacity.
However, I noticed that the smaller my Gen2 dataflows were, the more likely they were to complete. So I have a set of dataflows that I run in series, and certainly not in parallel. Despite this, I still have problems with dataflows that merge two queries: the process never finishes correctly.
If possible, you could try reducing the size of your dataflows and serializing their execution in a pipeline. I hope this helps.
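The serialization idea above is just "run one job to completion before starting the next" instead of launching everything at once. A minimal generic sketch (job names and bodies are illustrative; in a real Fabric pipeline you would chain dataflow activities with on-success dependencies instead):

```python
# Generic sketch of serializing jobs so they never compete for capacity
# at the same time -- the same idea as chaining dataflow activities
# sequentially in a pipeline. Jobs here are illustrative stand-ins.
completed = []

def run_serially(jobs):
    """Run each (name, job) pair to completion before starting the next."""
    for name, job in jobs:
        job()                 # blocks until this dataflow/job finishes
        completed.append(name)

run_serially([
    ("dataflow_customers", lambda: None),
    ("dataflow_orders", lambda: None),
])
```

The trade-off is longer wall-clock time in exchange for a lower peak CU draw, which is exactly what helps when the capacity is being throttled.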

HAHA!  This is exactly what I was in the process of doing/trying.

 

FYI: the only reason I am using dataflows is to access data through the on-prem data gateway to our on-prem SQL servers, which is not currently possible with the Copy Data activity...  So since I am not doing any merges, I am hoping this will help stabilize my pipelines.

 

Let's keep this conversation going!  Thank you for sharing your experience with me.  🙂

These are my current stats (in case they help anyone):

Utilization:

[screenshot: JFTxJ_0-1697203397118.png]

Overages:

[screenshot: JFTxJ_1-1697203444874.png]

 

v-cboorla-msft
Community Support

Hi @amolt 

 

Sorry for the inconvenience. I have escalated this to our internal team and will come back to you as soon as I get an update. Please continue using the Fabric Community for help with your queries.

 

Appreciate your patience.

JFTxJ
Advocate II

@amolt did you get any resolution on this? I am in the same situation. Lakehouse is unavailable, pipelines fail to load, and dataflows (Gen2) are throwing the error you mentioned.

Having the exact same issue, literally every Dataflow Gen2 is failing.

 

Did you get a fix?

Hi @JFTxJ,
I have no real resolution at the moment, but I think I have an idea of the cause.
A few days ago I bought an F4 capacity and migrated some of my workspaces to it. Since then, I've been having a multitude of problems which, it seems to me, are linked to over-utilization of the capacity.
If I look at the Fabric Capacity Metrics report, I'm over 100% CU utilization. I think that's the cause of the problem.

[screenshot: amolt_0-1696948316853.png]


On the other hand, if this is the case, the consequences are significant: all my workspaces are impacted, not just the one running the processing that generated these spikes in utilization.
Nevertheless, after a while (between 5 and 10 minutes) the lakehouses become available again.
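The over-100% reading can be sanity-checked with back-of-the-envelope arithmetic: an F SKU's number is its Capacity Unit count (an F4 is 4 CUs, i.e. roughly 4 CU-seconds of compute per wall-clock second). The numbers below are illustrative, not taken from the actual metrics report, and this ignores Fabric's smoothing of bursts over time.

```python
# Illustrative CU-utilization arithmetic for an F SKU. An F4 provides
# 4 CUs, so its budget over a window is 4 CU-seconds per second.
# Sample numbers are made up; real figures come from the metrics app.

def utilization_pct(cu_seconds_consumed, sku_cus, window_seconds):
    """Percent of the capacity's CU-second budget used over a window."""
    budget = sku_cus * window_seconds
    return 100.0 * cu_seconds_consumed / budget

# e.g. 360 CU-seconds consumed in a 60-second window on an F4:
pct = utilization_pct(360, 4, 60)  # 360 / 240 -> 150.0 (over budget)
```

Sustained readings above 100% mean the capacity is borrowing against future budget, which is consistent with the thread's observation that *every* workspace on the capacity suffers, not just the one running the heavy job.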

miguel
Community Admin

Hi!

When exactly did you get that error? Was it during a refresh operation or during the authoring stage?

What version of the gateway are you using? If possible, please check whether you can update, and make sure you're using the latest version of the gateway.

 

I get this error during refresh of a Gen2 dataflow.

Gateway: 3000.190.19 (September 2023)

 

Additional information:
Today, I still get the same error message, and my lakehouse is unavailable. Here is the error message:

[screenshot: amolt_0-1696928966984.png]

OR 

[screenshot: amolt_2-1696929277355.png]

 

 

 

Lakehouse explorer:

[screenshot: amolt_1-1696929028902.png]

 

Thanks
