SørenBrandt
Regular Visitor

400 error when accessing lakehouse files from notebook

Hi all!

Pretty much every day, I experience spurious errors when accessing (CSV) files in a notebook's default lakehouse. By spurious, I mean that they may happen in one session, but later the same day, in a different session, everything works absolutely fine.

In practice, the errors occur on statements like the following:

 
df = spark.read.format("csv").option("header","true").option("delimiter",";").load("<path to a file in the notebook's default lakehouse>")

The error message is something like the following:

Py4JJavaError: An error occurred while calling o4753.load.
: Operation failed: "Bad Request", 400, HEAD, http://onelake.dfs.fabric.microsoft.com/xxx-xxx-xxx-xxx-xxx/user/trusted-service-user/Files/x/y/filename.csv?upn=false&action=getStatus&timeout=90

Any idea what could be causing this, and whether there's anything I can do about it?

As bonus information, the account I am logging into Fabric with is a guest account in the Fabric tenant.

 

1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @SørenBrandt 

 

You may try reading the CSV file with the ABFS path instead and check whether this is more reliable. This could avoid failures in scenarios where the default lakehouse might have been changed.
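
For illustration, such a read could look roughly like the sketch below; the workspace and lakehouse names are placeholders to replace with your own, and spark is the notebook's built-in Spark session:

# Fully qualified ABFS path into OneLake (placeholder names, not real items).
abfs_path = (
    "abfss://<workspace-name>@onelake.dfs.fabric.microsoft.com/"
    "<lakehouse-name>.Lakehouse/Files/x/y/filename.csv"
)

df = (
    spark.read.format("csv")
    .option("header", "true")
    .option("delimiter", ";")
    .load(abfs_path)
)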

 

Best Regards,
Jing
Community Support Team


6 REPLIES
TheHetz
Regular Visitor

I too have seen this behavior with the default lakehouse. In my case, it seemed to be caused by another operation outside of Fabric processing the file at the same time. The CSVs in my case are written by an external service within Business Central, which runs every 2 hours. I needed to ensure that my notebook only runs after the write operation has completed. If the file was being written while the notebook was running, I would get this error.
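
If the upstream schedule can't be controlled, one workaround is to retry the read a few times so a transient mid-write failure doesn't fail the whole run. A rough sketch, assuming spark is the notebook's Spark session and the attempt/wait values are arbitrary placeholders:

import time

def read_csv_with_retry(path, attempts=3, wait_seconds=60):
    # Retry the read in case the file is still being written by the upstream job.
    last_error = None
    for _ in range(attempts):
        try:
            return (
                spark.read.format("csv")
                .option("header", "true")
                .option("delimiter", ";")
                .load(path)
            )
        except Exception as error:  # e.g. the 400 "Bad Request" Py4JJavaError
            last_error = error
            time.sleep(wait_seconds)
    raise last_error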

 

Hope this helps!

Excellent - thanks for pointing this out. I had the exact same scenario happening to me and hadn't thought of that.

Anonymous
Not applicable

I feel like there might be some factors or issues that sometimes cause the notebook session to fail to recognize the default lakehouse. Using the ABFS path is more stable because it directly specifies the absolute path of the lakehouse rather than relying on the default lakehouse.

 

To improve the stability and reliability of reading from the default lakehouse, you might try using the %%configure command at the beginning of the notebook to explicitly specify a default lakehouse. This takes effect in the current session. This magic command is supported both when running the notebook directly and when running it as a notebook activity in a pipeline.

Reference: Spark session configuration magic command
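
A minimal sketch of such a cell, assuming the placeholder name/ID values are replaced with your own (run it before any Spark code so it applies to the session):

%%configure
{
    "defaultLakehouse": {
        "name": "<lakehouse-name>",
        "id": "<lakehouse-id>",
        "workspaceId": "<workspace-id>"
    }
}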

 

Best Regards,
Jing

Anonymous
Not applicable

I have the exact same problem and can't use the ABFS path, because my notebooks are part of a deployment pipeline that changes the default lakehouse via a deployment rule. So I rely entirely on the default lakehouse being flexible.

Hi @Anonymous ,

Thank you for your suggestion 👍. I have changed to using the ABFS path, and right now it seems to go through on the first attempt. However, since this has been a bit unpredictable, I will keep testing to see whether the problem is gone for good.

That said, the default lakehouse has not changed as far as I'm aware, so if this turns out to be the solution, then the root cause may be something else.

BR,

Søren 
