
tolgakurt
Frequent Visitor

Write delta file error: "Authentication Failed with Bearer token is not present in the request"

Hi,

I'm not sure I'm posting this in the right place; I chose this board because I couldn't find a forum dedicated to PySpark.

In summary, I want to write a Spark DataFrame to a directory as delta output with PySpark, but I get the error "Authentication Failed with Bearer token is not available in request", and the message doesn't say much beyond that.

Can anyone help?

 

Thanks,

Tolga

 

(Screenshot attached: Screenshot 2023-07-18 173721.png)

1 ACCEPTED SOLUTION

Hi @LineshGajeraq,

 

I solved this issue many months ago 🙂 Please try this PySpark code:

 

# Write the DataFrame to a SQL table over JDBC
import time

start = time.time()

data.write \
    .format("jdbc") \
    .mode("overwrite") \
    .option("url", "jdbc:sqlserver://[Your DB IP];databaseName=[Your Database Name];") \
    .option("dbtable", "[TableName]") \
    .option("user", "") \
    .option("password", "") \
    .save()

end = time.time()
print(f"Execution time: {end - start:.2f} seconds")
 
If you still get an error, let me know.


4 REPLIES
LineshGajeraq
Microsoft Employee

I was able to resolve it as well. Basically, you need to add the lakehouse as a source in the notebook, and then it works.
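If attaching a default lakehouse isn't convenient, a possible workaround (a sketch, assuming you know your workspace and lakehouse names; the names below are placeholders, not items from this thread) is to skip relative-path resolution entirely and address the file by its full OneLake ABFSS URI:

```python
# Hedged sketch: relative paths such as "Files/orders/2019.csv" are resolved
# against the notebook's default (attached) lakehouse. If none is attached,
# you can address the file explicitly via its OneLake ABFSS URI instead.
# <workspace_name> and <lakehouse_name> are placeholders for your own items.
workspace = "<workspace_name>"
lakehouse = "<lakehouse_name>"
path = (
    f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
    f"{lakehouse}.Lakehouse/Files/orders/2019.csv"
)
print(path)

# In a Fabric notebook you would then load it with:
# df = spark.read.format("csv").option("header", "true").load(path)
```

This only sidesteps the path-resolution step; the notebook identity still needs access to the lakehouse for authentication to succeed.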

 

(Screenshot attached: LineshGajeraq_0-1724941090916.png)

 

Anonymous
Not applicable

I think that's because you haven't specified the area (Files/Tables) to which you want to write the data.

 

Assuming you want to create a delta table in a managed area of the lakehouse (Tables), try this:

df_bonus.write.format("delta").save("Tables/WriteTest")

Make sure you have your lakehouse pinned in the Lakehouse explorer on the left.

I am having the same issue. I am simply reading a file from the Lakehouse, and I made sure the file exists and the path is correct:

 

df = spark.read.format("csv").option("header","true").load("Files/orders/2019.csv")
display(df)
 
I'm getting the following error:
Py4JJavaError: An error occurred while calling o4726.load. : Operation failed: "Bad Request", 400, HEAD, "http://onelake.dfs.fabric.microsoft.com/XXXXXXXXXXXXXXXXX/user/trusted-service-user/Files/orders/201...
 

