Hi,
I'm not sure I'm posting this in the right place; I chose this board because I couldn't find a forum dedicated to PySpark.
In short, I want to write a Spark DataFrame to a directory as Delta output with PySpark, but I get the error "Authentication Failed with Bearer token is not available in request", and the message doesn't give much to go on.
Can anyone help?
Thanks,
Tolga
I was able to resolve it as well. Basically, you need to add the lakehouse as a source in the notebook and it will work.
I think that's because you haven't specified the area (Files/Tables) to which you want to write the data.
Assuming you want to create a delta table in a managed area of the lakehouse (Tables), try this:
df_bonus.write.format("delta").save("Tables/WriteTest")
Make sure you have your lakehouse pinned in the Lakehouse explorer on the left.
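For context, here is a minimal end-to-end sketch of that write. The DataFrame name df_bonus, its columns, and the table name WriteTest are just placeholders, and it assumes the notebook has a default lakehouse attached:

from pyspark.sql import SparkSession

# In a Fabric notebook the Spark session already exists as `spark`;
# this line only matters if you run the sketch elsewhere.
spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data standing in for df_bonus
df_bonus = spark.createDataFrame(
    [(1, "Alice", 500.0), (2, "Bob", 750.0)],
    ["employee_id", "name", "bonus"],
)

# Writing under Tables/ in the attached default lakehouse creates a managed Delta table
df_bonus.write.format("delta").mode("overwrite").save("Tables/WriteTest")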
I am having the same issue. I am simply reading a file from the Lakehouse, and I made sure the file exists and the path is correct.
Hi @LineshGajeraq,
I solved this issue many months ago 🙂 Please try this PySpark code.
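(The original snippet isn't shown in this thread, but based on the earlier replies the fix amounts to attaching the lakehouse as a source in the notebook and then reading with a path relative to it. A rough sketch, where the file name and format are assumptions:)

# Assumes the lakehouse is attached in the notebook's Lakehouse explorer;
# "Files/sample.csv" is a placeholder path under the Files area.
df = spark.read.format("csv").option("header", "true").load("Files/sample.csv")
df.show()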