I am creating my tables using a PySpark notebook, but I am unable to use SQL to update values in the table. I thought these tables supported updates, deletes, etc.
The tables are created like so:
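A typical pattern, with illustrative table and column names (not the exact code from the original post):

# Hypothetical example - names are illustrative only
from pyspark.sql import Row

df = spark.createDataFrame([Row(user_id=123, status="inactive")])
# Writing as a managed Delta table registers it in the Lakehouse
df.write.format("delta").mode("overwrite").saveAsTable("your_table")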
Hi @spartan27244, the SQL endpoint in Microsoft Fabric is read-only when connected to a Lakehouse.
To update data in a Lakehouse table, you cannot go through the SQL endpoint directly; instead, run the update from a notebook (typically PySpark).
Something like this:
from pyspark.sql.functions import when, col  # imports the original snippet was missing

# Read the Delta table, set status to "active" for user 123, and overwrite the table
df = spark.read.format("delta").load("Tables/your_table")
df_updated = df.withColumn("status", when(col("user_id") == 123, "active").otherwise(col("status")))
df_updated.write.format("delta").mode("overwrite").save("Tables/your_table")
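Note that mode("overwrite") rewrites the whole table. If you only want to change the matching rows in place, the Delta Lake Python API supports a targeted update; a minimal sketch, assuming the same illustrative path and column names:

from delta.tables import DeltaTable
from pyspark.sql.functions import lit

# Update only the matching rows instead of rewriting the entire table
dt = DeltaTable.forPath(spark, "Tables/your_table")
dt.update(
    condition="user_id = 123",      # SQL-style predicate selecting rows to change
    set={"status": lit("active")}   # new value for the status column
)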
Hi @spartan27244,
Thanks for reaching out to the Microsoft Fabric community forum.
The issue you're running into is due to how the SQL endpoint works in your environment. While Delta tables do support operations like UPDATE, DELETE, and MERGE, these operations must run through the Spark engine (e.g., notebooks or Spark jobs); SQL endpoints such as Fabric's SQL analytics endpoint or Synapse serverless SQL are read-only for Lakehouse tables.
The error you're seeing, "DML statements are not supported for this table type...", means the table is accessible through the SQL endpoint but is read-only in that context: you can run SELECT queries, but not UPDATE or other DML statements.
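From a notebook, the same statement runs through the Spark engine and succeeds; a minimal sketch, with a hypothetical table name:

# Runs through Spark, not the read-only SQL endpoint; names are illustrative
spark.sql("UPDATE your_table SET status = 'active' WHERE user_id = 123")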
As @Gpop13 has already responded to your query, kindly go through his response and check whether it resolves your issue.
I would also like to take a moment to thank @Gpop13 for actively participating in the community forum and for the solutions you’ve been sharing. Your contributions make a real difference.
If I have misunderstood your needs or you still have problems, please feel free to let us know.
Best Regards,
Hammad.
Community Support Team
Hi @spartan27244,
As we haven’t heard back from you, we’re just following up on our previous message. I'd like to confirm whether you've successfully resolved this issue or need further help.
If it is resolved, you are welcome to share your workaround and mark it as a solution so that other users can benefit as well; if a particular reply helped you the most, marking it as the solution is equally helpful to the community. If you're still looking for guidance, feel free to give us an update, we’re here for you.
Best Regards,
Hammad.
Yes, I verified your method works, and so does running the UPDATE with SQL in a SQL cell block.
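For reference, the SQL-cell approach mentioned above would look something like this (a %%sql cell in a Fabric notebook also executes through Spark; table and column names are illustrative):

%%sql
-- Executed by the Spark engine, so DML against the Delta table succeeds
UPDATE your_table SET status = 'active' WHERE user_id = 123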