
Advocate III

Limitations on Lakehouse table columns in SQL endpoint?

Hi, does anyone know if there's a limitation on table columns in the SQL endpoint of a Lakehouse? I have a table tracking county statistics that has 720 columns. I was able to load this table with no issues using a notebook. See image 1 below - the table (Raw_CHS) shows up fine in the Lakehouse explorer and displays preview data with no issues. I'm also able to query it from a second notebook and build additional dimension tables from it.


However, I'm unable to get the table to show up in the SQL endpoint, see image 2 below.


Also, a "select top 100 * from Raw_CHS" fails (image 3) with an Invalid Object Name error.


Any ideas? I was unable to find any documentation on limits to the number of columns a table can have.




Image 1: Lakehouse view




Image 2: SQL Endpoint - table doesn't show:



Image 3: SQL query fails on the table



Community Support

Hi @Scott_Powell 


Thanks for using Microsoft Fabric Community.

Apologies for the issue you are facing.

As I understand it, you are able to see the "Raw_CHS" table in the Lakehouse explorer, but the same table is not shown in the SQL endpoint.


Could you please confirm the type of the "Raw_CHS" table? Is it a managed table or an external table?



Hi @v-cboorla-msft , it's definitely a managed table. The only thing "odd" about it is that it has a large number of columns - 700 or so. I haven't seen this on any other table, so I'm assuming the large number of columns is what did it.


I'm happy to open a support ticket, or take a Teams call from anyone that needs more details.



Hi @Scott_Powell 


Thanks for your response.

We are reaching out to the internal team to get more information related to your query and will get back to you as soon as we have an update.

Hi @Scott_Powell 


Apologies for the delayed response.
Following up to see whether you have a resolution for your issue yet.
If you have found one, please share it with the community, as it can be helpful to others.
If you haven't, please go ahead and raise a support ticket to reach our support team, and share the ticket number here so we can keep an eye on it.


Thank you.

Hi @v-cboorla-msft - I actually don't think this is an issue with "too many columns". Instead, I think it's an issue with combining data from multiple .csv input files, where many of them don't have all the columns. I think something is going wrong during the data load into the table (via a notebook using DataFrames), but no error is being shown.


I suspect this because I loaded "only" the latest file - and everything worked fine.


Trying now to figure out how to load all of the years' .csv files, plugging in NULLs for columns that don't exist. But I'm not at all fluent in PySpark, so I'm really struggling.

Long story short, I don't believe it's a limit on the number of columns - everything works fine, even with 700 columns, if I stick to a single year.
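For anyone landing here with the same "fill in NULLs for columns that don't exist" problem: in PySpark (3.1+), reading each year's file and combining with `DataFrame.unionByName(other, allowMissingColumns=True)` pads absent columns with nulls. The same idea can be sketched in plain Python, which may help clarify what that call does. The file contents below are hypothetical stand-ins for two years of county CSV extracts, not the poster's actual data:

```python
import csv
import io

# Hypothetical sample data: the "2021" extract lacks the
# "obesity_rate" column that the "2022" extract has.
csv_2021 = "county,population\nAdams,1000\nBoone,2000\n"
csv_2022 = "county,population,obesity_rate\nAdams,1100,0.31\n"

def read_rows(text):
    """Parse a CSV string into a list of dicts keyed by header."""
    return list(csv.DictReader(io.StringIO(text)))

def union_by_name(*row_lists):
    """Combine row sets whose headers differ, filling absent columns
    with None - analogous to unionByName(allowMissingColumns=True)."""
    all_cols = []
    for rows in row_lists:
        for col in (rows[0].keys() if rows else []):
            if col not in all_cols:
                all_cols.append(col)
    combined = [
        {col: row.get(col) for col in all_cols}
        for rows in row_lists
        for row in rows
    ]
    return all_cols, combined

cols, rows = union_by_name(read_rows(csv_2021), read_rows(csv_2022))
print(cols)     # ['county', 'population', 'obesity_rate']
print(rows[0])  # {'county': 'Adams', 'population': '1000', 'obesity_rate': None}
```

In an actual Fabric notebook, the analogous PySpark loop would read each file with `spark.read.csv(path, header=True)` and fold the results together with `unionByName(..., allowMissingColumns=True)` before writing the managed table.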



