JeffGray
Frequent Visitor

Column added to Lakehouse table as string shows up as the wrong type in SQL endpoint and Power BI

I have a notebook that adds a column to a Lakehouse table using: 

    spark.sql("alter table EKKO add columns (STATU string);")
 
This seems to work fine, and I can load data into the column using a MERGE statement; everything looks good. I can see my data in the table in the Lakehouse and query it from a notebook. BUT: when I put this table into a semantic model and build a report, the column shows up looking like a number (sigma sign in front of the field name), and I get a type mismatch error when I try to put it into any visual. Similarly, in the SQL endpoint the data type icon shows as "e to the x", which displays as "decimal" when I hover over it. How can I fix this?! The column is created as string and shows up as string in the Lakehouse table. Why is it not string in the SQL endpoint and Power BI?
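One way to confirm what type Delta actually recorded for the column (independently of what the SQL endpoint or Power BI displays) is to inspect the table's `_delta_log` JSON, where the schema is stored as a `schemaString`. The sketch below is illustrative only: the sample log entry and column names are assumptions, not data from the original post.

```python
import json

# Sample Delta transaction-log entry of the shape Delta writes when a table's
# schema changes (hypothetical content; real entries contain more fields).
sample_log_line = json.dumps({
    "metaData": {
        "schemaString": json.dumps({
            "type": "struct",
            "fields": [
                {"name": "EBELN", "type": "string", "nullable": True, "metadata": {}},
                {"name": "STATU", "type": "string", "nullable": True, "metadata": {}},
            ],
        })
    }
})

def column_types(log_line: str) -> dict:
    """Extract a {column name: declared type} mapping from a metaData entry."""
    entry = json.loads(log_line)
    schema = json.loads(entry["metaData"]["schemaString"])
    return {field["name"]: field["type"] for field in schema["fields"]}

print(column_types(sample_log_line))  # {'EBELN': 'string', 'STATU': 'string'}
```

In Fabric, the actual log files live under the Lakehouse's `Tables/<table>/_delta_log/` folder. If the latest `metaData` entry already declares the column as `string`, the mismatch is being introduced downstream of Spark.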
 
 
4 REPLIES
Anonymous
Not applicable

Hi @JeffGray,

The ALTER TABLE command is not supported for tables in a Lakehouse.


This may be helpful to you:
Solved: Re: Lakehouse table column names not reflected in ... - Microsoft Fabric Community

 

The SQL analytics endpoint is a read-only endpoint that is automatically generated upon creation of a Lakehouse in Microsoft Fabric. 

The Synapse Data Warehouse (or Warehouse) is a 'traditional' data warehouse and supports full transactional T-SQL capabilities, like an enterprise data warehouse. 

You can click here for more information.

 

Best Regards,

Ada Wang

If this post helps, then please consider accepting it as the solution to help other members find it more quickly.

@Anonymous I think the documentation you are referring to is about T-SQL limitations in the Data Warehouse and/or SQL analytics endpoint.

 

However, @JeffGray seems to be using SQL inside PySpark in a Lakehouse notebook. From what I have heard earlier, that is a different topic from T-SQL for the Data Warehouse and SQL analytics endpoint.

Is there any documentation about limitations for using SQL in Spark?

 

Here are some other posts which show that it has been possible to add columns to Lakehouse tables:

 

https://youtu.be/2RuoHpNZbc4?si=IoUopjtCXYozEgh2

 

https://community.fabric.microsoft.com/t5/General-Discussion/SQL-ALTER-command/m-p/3800910/highlight/true#M5715

 

https://community.fabric.microsoft.com/t5/General-Discussion/Dropping-and-recreating-lakehouse-table/m-p/3835426/highlight/true#M6312

Yes, exactly... I have a theory that I will test tomorrow: the SQL endpoint (and Power BI) may be typing the column based on a profile of the first data written (as happens with Power BI imports) rather than from the Lakehouse table's metadata. In my case, the first several thousand values written into this column are all '9'. I think the SQL endpoint may be typing it as numeric based on those values. Tomorrow I'll try explicitly writing string data first to see if I get a string data type.
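As an aside, the kind of profiling-based type inference theorized above is easy to demonstrate outside Fabric. This pandas sketch is illustrative only (pandas is not what the SQL endpoint uses): a column whose values all look numeric, like the '9's described, gets inferred as an integer type unless a string type is forced explicitly.

```python
import io
import pandas as pd

# A column whose first values all look numeric, like the '9's described above.
csv_data = "STATU\n9\n9\n9\n"

# Default behavior: pandas profiles the values and infers an integer column.
inferred = pd.read_csv(io.StringIO(csv_data))
print(inferred["STATU"].dtype)  # int64

# Forcing the declared type keeps the column as strings despite the values.
forced = pd.read_csv(io.StringIO(csv_data), dtype={"STATU": str})
print(forced["STATU"].dtype)    # object
```

The analogous fix in the Lakehouse scenario would be making sure the first rows written carry explicitly typed string values, which is what the post above proposes to test.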

Thanks Ada. I think your answer is incorrect. I understand that the SQL endpoint is read-only; I am altering my Lakehouse table with Spark SQL. The limitations you are quoting refer to the Warehouse, and even there, the next sentence after the section you reference says that adding nullable columns IS supported: 

"

Limitations

At this time, the following list of commands is NOT currently supported. Don't try to use these commands. Even though they might appear to succeed, they could cause issues to your warehouse.

  • ALTER TABLE ADD/ALTER/DROP COLUMN
    • Currently, only the following subset of ALTER TABLE operations in Warehouse in Microsoft Fabric are supported:
      • ADD nullable columns of supported column data types"
