Hi,
I'm not sure I'm posting this in the right place; I put it here because I couldn't find a forum dedicated to PySpark.
In short, I want to write a Spark DataFrame to a directory as Delta output with PySpark, but I get the error "Authentication Failed with Bearer token is not available in request", and the message doesn't offer much more to go on.
Can anyone help?
Thanks,
Tolga
I was able to resolve it as well. Basically, you need to add the lakehouse as a source in the notebook, and then it works.
I think that's because you haven't specified the area (Files/Tables) you want to write the data to.
Assuming you want to create a delta table in a managed area of the lakehouse (Tables), try this:
df_bonus.write.format("delta").save("Tables/WriteTest")
Make sure your lakehouse is pinned in the Lakehouse explorer on the left.
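If you can't (or don't want to) attach a default lakehouse, another common workaround is to write to the full OneLake ABFSS path instead of the relative `Tables/...` path. A minimal sketch, assuming hypothetical workspace and lakehouse names (replace them with your own):

```python
# Hypothetical names -- substitute your actual workspace and lakehouse.
workspace = "MyWorkspace"
lakehouse = "MyLakehouse"
table = "WriteTest"

# Full OneLake ABFSS path to a managed table. Absolute paths resolve
# without a default lakehouse attached to the notebook, which avoids
# the "Bearer token is not available" failure that relative paths hit.
abfss_path = (
    f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
    f"{lakehouse}.Lakehouse/Tables/{table}"
)

# In a Fabric Spark session you would then write:
# df_bonus.write.format("delta").save(abfss_path)
print(abfss_path)
```

The same path shape works for the Files area by swapping `Tables/{table}` for `Files/<folder>`.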
I am having the same issue. I am simply reading a file from the Lakehouse, and I made sure the file exists and the path is correct.
Hi @LineshGajeraq,
I solved this issue many months ago 🙂 Try this PySpark code please.
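The code itself didn't make it into the thread, so here is a minimal sketch of the usual read pattern (hypothetical file name; assumes the lakehouse is attached as a source in the notebook):

```python
# Relative paths under "Files/..." resolve against the notebook's default
# (attached) lakehouse. Without one attached, Spark cannot authenticate
# and raises the bearer-token error from the original question.
file_path = "Files/sample_data.csv"  # hypothetical file name

# In a Fabric Spark session you would then read:
# df = spark.read.format("csv").option("header", "true").load(file_path)
# display(df)
print(file_path)
```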