Hello,
I am developing a Python application that currently runs on my local machine. My application code creates a small dataset in the form of a pandas DataFrame, and I would like to create a table on Fabric with it from my local machine. (P.S. My application will later run stand-alone on a remote Azure VM that is under a different tenant.)
Currently I can read data from Fabric using the standard `pandas.read_sql_query` API by passing an authenticated `pyodbc` connection as a parameter. I am using either Azure CLI or Service Principal authentication.
But I am unable to write data to Fabric.
Refs: the steps to create token-based authentication with `pyodbc` are described in this article.
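For reference, here is a minimal sketch of that read path (the server and database names are placeholders, the token packing follows the pattern from the linked article, and 1256 is the `SQL_COPT_SS_ACCESS_TOKEN` connection attribute defined by the Microsoft ODBC driver):

import struct
import pandas as pd
import pyodbc
from azure.identity import AzureCliCredential  # or a service principal credential

credential = AzureCliCredential()
token = credential.get_token("https://database.windows.net/.default").token

# pyodbc expects the token as a length-prefixed UTF-16-LE byte string
token_bytes = token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)
attrs_before = {1256: token_struct}  # 1256 = SQL_COPT_SS_ACCESS_TOKEN

connection_string = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-fabric-sql-endpoint>.datawarehouse.fabric.microsoft.com;"
    "Database=<your-warehouse>;"
)
conn = pyodbc.connect(connection_string, attrs_before=attrs_before)
df = pd.read_sql_query("SELECT TOP 5 * FROM some_table", conn)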
Hi @csubhodeep,
Can you please share some more details about these operations? They should help us clarify your scenario and test.
BTW, have these records already been loaded into the DataFrame? If that's the case, you can use the pandas functions to write data to your lakehouse:
Read and write data with Pandas - Microsoft Fabric | Microsoft Learn
Notice: please don't forget to set the default lakehouse for the notebook.
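For example, a minimal sketch of the documented approach (assuming the notebook runs on Fabric with a default lakehouse attached; the file path under /lakehouse/default/ is illustrative):

import pandas as pd

df = pd.DataFrame({"id": [1, 2, 3], "value": ["x", "y", "z"]})
# Fabric notebooks mount the attached default lakehouse under /lakehouse/default/
df.to_csv("/lakehouse/default/Files/example.csv", index=False)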
Regards,
Xiaoxin Sheng
Hello @Anonymous ,
I will try to answer your questions below
Can you please share some more detail about these operations?
I am creating a simple `pandas.DataFrame` using the snippet below:
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": ["xx", "yyy", "z"]})
Have these records already been loaded into the DataFrame?
Yes! As you can see above.
If that's the case, you can use the pandas functions to write data to your lakehouse: Read and write data with Pandas - Microsoft Fabric | Microsoft Learn
I have already referred to the documentation that you shared, but unfortunately it does not exactly match my use case. The documentation assumes the notebook instance is running on Fabric, but in my case it is a plain Jupyter notebook running on my local PC, not on Fabric. I have an authenticated `pyodbc` connection to Fabric that lets me "read" data from Fabric using the standard `pandas.read_sql_query` method, but I cannot use the `df.to_sql` method to write records back to Fabric.
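For reference, a sketch of the failing write attempt; pandas' `to_sql` only accepts a SQLAlchemy connectable (or a sqlite3 connection), which is why the raw `pyodbc` connection is rejected:

# Fails: pandas' to_sql does not accept a raw pyodbc connection for a
# SQL Server/Fabric target, only SQLAlchemy connectables or sqlite3.
df.to_sql("local_data_example_pd", conn, if_exists="replace", index=False)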
What is the best approach to achieve it?
HI @csubhodeep,
OK, so you are working in a local Jupyter notebook with a pyodbc connection to Fabric.
Which source are you connected to: the warehouse or the SQL analytics endpoint? AFAIK, the SQL analytics endpoint is read-only and does not allow you to edit through it.
BTW, have you confirmed whether the issue appears only when you save data in a specific format, or for all save/edit operations?
If all types of edit operations are disallowed, it may mean the pyodbc connection has read-only permission and does not support edit operations.
Regards,
Xiaoxin Sheng
Which source are you connected to: the warehouse or the SQL analytics endpoint?
The SQL analytics endpoint of the Warehouse, not the Lakehouse.
BTW, have you confirmed whether the issue appears only when you save data in a specific format, or for all save/edit operations?
I can confirm that it is not related to any format, as the table is in-memory and not persisted anywhere.
If all types of edit operations are disallowed, it may mean the pyodbc connection has read-only permission and does not support edit operations.
I am not sure we can even configure `pyodbc` specifically for "read-only" access. AFAIK, the read-only or read/write permissions are set by the workspace admin for my user account. And I can already confirm that my user has write permissions, as I can create a table from the query editor in the web browser.
Hi @csubhodeep,
In fact, it is not related to the driver settings and configuration. Currently the SQL analytics endpoint provides 'read only' permission, so the read operation works, but edits and writes are not allowed.
You can take a look at the following link to know more about this:
What is data warehousing in Microsoft Fabric? - Microsoft Fabric | Microsoft Learn
Regards,
Xiaoxin Sheng
Currently the SQL analytics endpoint provides 'read only' permission, so the read operation works, but edits and writes are not allowed.
I don't think that's true. After creating the `pyodbc` connection as described in the article attached to the original post, one can do the following in Python:
import sqlalchemy as sa

conn_str = f'mssql+pyodbc:///?odbc_connect={connection_string}'
engine = sa.create_engine(conn_str, connect_args={'attrs_before': attrs_before})

df.to_sql(
    "local_data_example_pd",
    engine,
    if_exists="replace",
    index=False,
    dtype={
        # because Fabric currently does not support the `VARCHAR(max)` data type
        "b": sa.types.String(length=10)
    },
)
The variables `connection_string` and `attrs_before` should be built exactly as instructed in the article.
With the above code snippet I could write a table on Fabric.
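To verify, one can read the table back over the same engine (a sketch; the table name matches the `to_sql` call above):

# Read the newly created table back to confirm the write succeeded
check = pd.read_sql_query("SELECT * FROM local_data_example_pd", engine)
print(check)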
Hi @csubhodeep,
If you can't directly use df to save data, perhaps you can try to build the insert query based on your table and use a connection cursor to execute the command to save data?
Load data to MS Fabric Warehouse from notebook - Stack Overflow
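For example, a minimal sketch of that cursor-based approach (assuming `conn` is the authenticated pyodbc connection from earlier and the target table already exists; the table and column names are illustrative):

cursor = conn.cursor()
# Parameterized INSERT, executed once per DataFrame row
cursor.executemany(
    "INSERT INTO local_data_example_pd (a, b) VALUES (?, ?)",
    list(df.itertuples(index=False, name=None)),
)
conn.commit()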
Regards,
Xiaoxin Sheng