Hi. Hoping someone knows how to tackle this one - stuck on it for a couple of days now 😞
When I run the following code:
df = spark.read.format("parquet").load("Files/landing-zone/yellow_tripdata_2024-01.parquet")
display(df)
It works as expected.
However, when I run the following code:
df = spark.read.format("parquet").load("Files/landing-zone/*")
I receive the following error:
Spark_System_ABFS_OperationFailed - An operation with ADLS Gen2 has failed, which is typically due to a permissions issue.
Ensure that the user running the Spark job has the Storage Blob Data Contributor role assigned to all referenced ADLS Gen2 resources.
Check the Spark logs for the storage account name experiencing the issue.
I’ve confirmed that the Storage Blob Data Contributor role has been granted, but the logs indicate a SAS-related issue, which I find puzzling.
Not a Firewall issue. Went to the extent of simply creating a new Storage Account - just in case.
Hoping someone has encountered this before - any insights into what might be going on would be much appreciated.
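For context, a quick sanity check worth running (sketch only - mssparkutils is the built-in Fabric notebook utility, and the path below is just my setup) is to list the shortcut folder before attempting the wildcard read:
from notebookutils import mssparkutils  # available by default in Fabric notebooks
# List the shortcut folder before the wildcard read.
# If this also fails, the problem is the shortcut/credentials rather than the Spark reader.
for f in mssparkutils.fs.ls("Files/landing-zone"):
    print(f.name, f.size)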
Quick update on this:
I raised a support ticket with Microsoft Fabric Support, and they’ve conducted an in-depth investigation. They were able to replicate the issue when uploading a single Parquet file via the shortcut method above. For this test, I used NYC data from yellow_tripdata_2024-01.parquet.
Interestingly, the issue disappears when multiple files are present in storage or when using other file formats. This confirms that the problem is specific to single Parquet files accessed via shortcuts.
I've now shared this blog with the team, and they have escalated it as a potential bug. Once a solution is found, they will provide an update here.
In the meantime, as a workaround for Parquet files, I'm using:
df = spark.read.parquet("abfss://container@storage_account.dfs.core.windows.net/*")
display(df)
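If it helps, a possible follow-up (the table name is just a placeholder) is to land the result as a Delta table in the lakehouse:
# Example follow-up only; "yellow_tripdata" is a placeholder table name.
# Writes the DataFrame read via the direct abfss:// path into the lakehouse as a Delta table.
df.write.mode("overwrite").saveAsTable("yellow_tripdata")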
Hope this helps anyone facing a similar issue!
SukiB
Hi Sukhminder,
Here's a workaround we can offer for this issue, as it is a known bug.
In this case, the file is placed at the container level, which makes it inaccessible due to a known bug in the Fabric product. Below is the link to the Bug Item and a supporting document noting it as a known issue.
Known Issues Tracker - [External ADLS] Shortcuts Load to Tables does not work
Bug Item created: Bug 1548911 - Head call to OneLake for external ADLS is failing at the container level.
However, to ease your productivity, we have prioritized this and identified workarounds for testing: place the file inside a folder rather than at the container level, or use the internal storage as a shortcut.
The ETA provided for the bug fix is the end of March.
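As a rough illustration of the folder-level workaround (the subfolder name below is only an example):
# Sketch only: assumes the Parquet file has been moved into a subfolder
# (here called "trips") inside the container, rather than sitting at the container root.
df = spark.read.format("parquet").load("Files/landing-zone/trips/*")
display(df)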
Hi @SukiB,
Thank you for reaching out to the Microsoft Forum Community.
I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions. If my response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.
Thank you.
Thanks for following up. Unfortunately, the issue remains unresolved. I have raised a support ticket with Microsoft's Fabric Support Team, and they are currently investigating further. It doesn’t appear to be a straightforward issue to resolve, but as soon as I have a definitive solution, I’ll get back to you and share how we resolved it.
Just trying to narrow this issue down - surprisingly this works...
Not sure if we have to do something special while creating shortcuts - it's a straightforward process in the UI.
Anyway, you can mark your last message as the solution, as it might help someone struggling with this kind of issue.
Hello @SukiB ,
Have you checked the ACLs for the respective folder?
I have just tested what you have configured and it works perfectly.
In my example, I created folders manually and uploaded Parquet files to each folder. Then I tested it and it works.
How did you load the data? There may be a problem with loading the data and the subsequent authorizations.
Best regards
Thanks spaceman127. I've created a brand new storage account from scratch and given the user permission via IAM. My Fabric username is part of the following:
Owner, Contributor, Storage Blob Data Contributor and Storage Blob Data Reader.
Ref. the ACL - I've set the Security Principal as follows...
Owner: $superuser - Read, Write, Execute
Owning group: $superuser - Read, Write, Execute
Other - Read, Write, Execute
Still get the same error.
Hello @SukiB
Did you enable hierarchical namespace?
Verify execute permissions on parent folders and read permissions on files in ADLS Gen2's hierarchical namespace.
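If it helps, here's a rough sketch for checking the folder ACL outside the notebook (assumes the azure-identity and azure-storage-file-datalake packages, with placeholder account, container, and folder names):
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Sketch only - replace <account>, <container>, and <folder> with your own names.
service = DataLakeServiceClient(
    account_url="https://<account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
directory = service.get_file_system_client("<container>").get_directory_client("<folder>")

# Prints owner, group and the POSIX-style ACL string, e.g. "user::rwx,group::r-x,other::---"
print(directory.get_access_control())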
Thanks Nilendra.
Hierarchical namespace is enabled. In Manage ACL, I've provisioned everything with Read, Write and Execute - just in case. Still no luck, I'm afraid.
I found a similar post here with some errors while reading multiple CSV files: https://community.fabric.microsoft.com/t5/Fabric-platform/Read-multiple-files-in-Fabric-Notebook/td-.... Can you try to only include the folder which contains the files, like this:
df = spark.read.format("parquet").load("Files/landing-zone")
Thanks FabianSchut.
Tried both of the following...
df = spark.read.format("parquet").load("Files/landing-zone")
as well as
df = spark.read.format("parquet").option("header","true").load("Files/landing-zone")
again - same error...
Spark_System_ABFS_OperationFailed
An operation with ADLS Gen2 has failed. This is typically due to a permissions issue. 1. Please ensure that for all ADLS Gen2 resources referenced in the Spark job, that the user running the code has RBAC roles "Storage Blob Data Contributor" on storage accounts the job is expected to read and write from. 2. Check the logs for this Spark application. Inspect the logs for the ADLS Gen2 storage account name that is experiencing this issue.
Furthermore, do you know how the Shortcut Authorization is set up? Did you create the shortcut to ADLS Gen2 yourself, or did you use an existing shortcut? The shortcut is probably set up with a SAS-token, given the error message. Make sure that the SAS-token has sufficient permissions or set up a new shortcut with your user account as authorization and try again.
What's weird is that whether I use Account Key or SAS Authorisation - it gives me the same error...
Spark_System_ABFS_OperationFailed
An operation with ADLS Gen2 has failed. This is typically due to a permissions issue. 1. Please ensure that for all ADLS Gen2 resources referenced in the Spark job, that the user running the code has RBAC roles "Storage Blob Data Contributor" on storage accounts the job is expected to read and write from. 2. Check the logs for this Spark application. Inspect the logs for the ADLS Gen2 storage account name that is experiencing this issue.
But, loading from OneDrive in Fabric works - so I know the code is good. In summary...
df = spark.read.parquet("Files/landing-zone-sas/*")  # doesn't work
df = spark.read.parquet("Files/landing-zone-accountkey/*")  # doesn't work
df = spark.read.parquet("Files/landing-zone-onedrive/*")  # works - these are the files uploaded into OneDrive
Any further thoughts - much appreciated.
Hello @SukiB
Give this a try
df = spark.read.parquet("Files/landing-zone/*.parquet")
df = spark.read.parquet("mylakehouse/Files/landing-zone/*.parquet")
Thanks
Thanks Nilendra. Still no luck...
Spark_System_ABFS_OperationFailed
An operation with ADLS Gen2 has failed. This is typically due to a permissions issue. 1. Please ensure that for all ADLS Gen2 resources referenced in the Spark job, that the user running the code has RBAC roles "Storage Blob Data Contributor" on storage accounts the job is expected to read and write from. 2. Check the logs for this Spark application. Inspect the logs for the ADLS Gen2 storage account name that is experiencing this issue.