Hi,
I'm not sure I'm posting this in the right place; I'm asking here because I couldn't find a forum dedicated to PySpark.
In summary, I want to write a Spark DataFrame to a directory as Delta output using PySpark, but I get the error "Authentication Failed with Bearer token is not available in request", and the message doesn't give much more to go on.
Can anyone help?
Thanks,
Tolga
Hi @LineshGajeraq,
I solved this issue many months ago 🙂 Try this PySpark code, please.
I was also able to resolve it. Basically, you need to add the lakehouse as a source in the notebook and it will work.
I think that's because you haven't specified the area (Files/Tables) to which you want to write the data.
Assuming you want to create a delta table in a managed area of the lakehouse (Tables), try this:
df_bonus.write.format("delta").save("Tables/WriteTest")
Make sure you have your lakehouse pinned in the Lakehouse explorer on the left.
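Putting the suggestion above together, a minimal sketch of the fix might look like the following. It assumes a Fabric notebook with a default lakehouse attached; `df_bonus` is the DataFrame name from this thread, and `WriteTest` is just a placeholder table name:

```python
# Sketch: write a Spark DataFrame as a Delta table into the attached
# lakehouse. Assumes this runs inside a Microsoft Fabric notebook where
# a default lakehouse is attached, so relative "Tables/" and "Files/"
# paths resolve and the bearer-token error does not occur.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder data standing in for the thread's df_bonus DataFrame.
df_bonus = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])

# Managed area: creates a Delta table visible under "Tables" in the
# Lakehouse explorer.
df_bonus.write.format("delta").mode("overwrite").save("Tables/WriteTest")

# Unmanaged area: writes Delta files under "Files" instead, if you do
# not want a managed table.
df_bonus.write.format("delta").mode("overwrite").save("Files/WriteTest")
```

Writing to a bare relative path with no `Tables/` or `Files/` prefix is what tends to trigger the authentication error described above, because Spark cannot resolve where in the lakehouse the output should go.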
I am having the same issue. I am simply reading a file from the lakehouse, and I made sure the file exists and the path is correct.