Hello,
I am trying to write a DataFrame to a Fabric Warehouse using the synapsesql connector, but somehow it throws an AttributeError.
Is anyone else facing the same issue?
Code Snippet :
Note: I have already imported the modules.
The same code works for my colleague but not for me. For them, it created a new table in the Warehouse without any warning or error.
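The snippet itself didn't come through in the post; as a hedged sketch (placeholder warehouse/table names, and assuming a Fabric Spark notebook where `spark` is predefined), the failing pattern usually looks like this:

```python
# Sketch only, not the original snippet; "MyWarehouse" and "dbo.ok" are placeholders.
# Runs only on a Fabric Spark (Runtime 1.3+) notebook session.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])

# If the connector extension is not loaded in the session, this line typically
# raises: AttributeError: 'DataFrameWriter' object has no attribute 'synapsesql'
df.write.mode("overwrite").synapsesql("MyWarehouse.dbo.ok")
```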
Solved! Go to Solution.
Hi @chetanhiwale,
Do you configure the warehouse anywhere in the notebook?
# For warehouse
spark.conf.set("spark.datawarehouse.<warehouse name>.sqlendpoint", "<sql endpoint,port>")
and are you working with Spark Runtime 1.3? This is only available in runtime 1.3 according to the docs:
Spark connector for Microsoft Fabric Data Warehouse - Microsoft Fabric | Microsoft Learn
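A quick way to sanity-check the environment (assuming the documented mapping of Fabric Runtime 1.3 to Spark 3.5) is to print the Spark version from the notebook:

```python
# Notebook fragment: Fabric Runtime 1.3 ships Spark 3.5.x, so this should
# print a version starting with "3.5" if the right runtime is selected.
print(spark.version)
```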
I haven't set the following property, since the same code worked for my colleague. Also, I have Runtime 1.3:
spark.conf.set("spark.datawarehouse.<warehouse name>.sqlendpoint", "<sql endpoint,port>")
Hello @chetanhiwale
Could you please try the following steps:
1. Set the write mode to "overwrite" by using df.write.mode("overwrite").synapsesql.
2. Is the warehouse you are writing to located within the same workspace as your notebook?
3. You mentioned that the code works for your colleague. Are they using the same workspace? If not, would you be able to check if their Fabric runtime matches yours?
Hi @deborshi_nag ,
Both the warehouse and the notebook are in the same workspace. I am also on Runtime 1.3 and have tried with overwrite, but the issue still persists.
Hi @chetanhiwale make sure you have the following 2 lines in the same cell in your notebook.
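The two lines didn't survive in this copy of the thread; based on the connector documentation for PySpark, they are presumably the imports that register synapsesql on the DataFrame writer:

```python
# Presumed lines (per the Fabric Spark connector docs); they must run in the
# same Spark session before any df.write...synapsesql(...) call.
import com.microsoft.spark.fabric
from com.microsoft.spark.fabric.Constants import Constants
```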
Hi Folks,
The same code is working now. I haven't changed anything, but now it runs. It seems to have been some internal issue.
Hi @chetanhiwale if it is working can you mark it as resolved please?
I don't think your schema is the same in the df and the table.
Make sure your table 'ok' has an integer type for the id column. Then execute the below code; it should run against the warehouse.
Hi @BalajiL,
Both schemas are the same. Since Spark cannot find the synapsesql connector at all, it seems to be an issue with the Spark connector / Spark runtime rather than the schema.
Please check that the schemas of your table and dataframe are equal.
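As an illustrative helper (not from the thread), a plain-Python way to diff two schemas expressed as {column: type} mappings before attempting the write:

```python
def schema_mismatches(df_schema: dict, table_schema: dict) -> list:
    """Return human-readable differences between a DataFrame schema and a
    Warehouse table schema, both given as {column_name: type_name} dicts."""
    issues = []
    for col, dtype in df_schema.items():
        if col not in table_schema:
            issues.append(f"column '{col}' missing in table")
        elif table_schema[col] != dtype:
            issues.append(
                f"column '{col}': dataframe has {dtype}, table has {table_schema[col]}"
            )
    for col in table_schema:
        if col not in df_schema:
            issues.append(f"column '{col}' missing in dataframe")
    return issues

# A matching pair returns no issues; a type conflict is reported explicitly.
print(schema_mismatches({"id": "int", "val": "string"},
                        {"id": "int", "val": "string"}))  # -> []
```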