I have a notebook that adds a column to a Lakehouse table using:
Hi @JeffGray,
The ALTER TABLE command is not supported for tables in a Lakehouse.
This may be helpful to you:
Solved: Re: Lakehouse table column names not reflected in ... - Microsoft Fabric Community
The SQL analytics endpoint is a read-only endpoint that is automatically generated when a Lakehouse is created in Microsoft Fabric.
The Synapse Data Warehouse (or simply Warehouse) is a 'traditional' data warehouse and supports full transactional T-SQL capabilities, like an enterprise data warehouse.
Best Regards,
Ada Wang
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
@Anonymous I think the documentation you are referring to is about T-SQL limitations in the Data Warehouse and/or the SQL analytics endpoint.
However, @JeffGray seems to be using SQL inside PySpark in a Lakehouse notebook. From what I have heard, that is a different topic than T-SQL for the Data Warehouse and the SQL analytics endpoint.
Is there any documentation about limitations for using SQL in Spark?
Here are some other posts which show that we have been able to add columns to Lakehouse tables:
https://youtu.be/2RuoHpNZbc4?si=IoUopjtCXYozEgh2
Yes, exactly... I have a theory that I will test tomorrow: the SQL endpoint (and Power BI) infer types from a profile of the first written data (as happens with Power BI imports) rather than from the Lakehouse table's metadata. In my case, the first several thousand values written into this column are all '9'. I think the SQL endpoint may be typing this as numeric based on those values. I'll try explicitly writing string data first tomorrow to see if I get a string datatype.
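To rule out value-based inference, one option is to declare the column type up front so the schema is fixed before any data lands. A minimal Spark SQL sketch (the table and column names here are hypothetical, not from this thread):

```sql
-- Create the Delta table with an explicit STRING column,
-- so downstream tooling reads the type from the schema,
-- not from a profile of the first rows written
CREATE TABLE sales_flags (id BIGINT, flag STRING) USING DELTA;

-- Even though '9' looks numeric, the column stays STRING
INSERT INTO sales_flags VALUES (1, '9');
```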
Thanks Ada. I think your answer is incorrect. I understand that the SQL endpoint is read-only; I am altering my Lakehouse table with Spark SQL. The limitations you are quoting refer to the Warehouse, and even there, the next sentence after the section you reference says that adding nullable columns IS supported...
"
At this time, the following list of commands is NOT currently supported. Don't try to use these commands. Even though they might appear to succeed, they could cause issues to your warehouse.