
Scott_Powell
Advocate III

Limitations on Lakehouse table columns in SQL endpoint?

Hi, does anyone know if there's a limitation on table columns in the SQL endpoints of a Lakehouse? I've got a table which tracks county statistics that has 720 columns. I was able to load this table with no issues using a notebook. See image 1 below - the table (Raw_CHS) shows up fine in the Lakehouse explorer, and displays preview data with no issues. Also I'm able to query it from a 2nd notebook and build additional dimension tables from it.

 

However, I'm unable to get the table to show up in the SQL endpoint, see image 2 below.

 

Also, a "select top 100 * from Raw_CHS" fails (image 3) with an Invalid Object Name error.

 

Any ideas? I was unable to find any documentation showing a limit on the number of columns a table can have.

 

Thanks,

Scott

Image 1: Lakehouse view

 

Scott_Powell_0-1699655904643.png

 

Image 2: SQL Endpoint - table doesn't show:

Scott_Powell_1-1699656134366.png

 

Image 3: SQL query fails on the table

Scott_Powell_2-1699656300548.png

 

5 REPLIES
v-cboorla-msft
Community Support

Hi @Scott_Powell 

 

Thanks for using Microsoft Fabric Community.

Apologies for the issue that you are facing.

As I understand it, you are able to see the "Raw_CHS" table in the Lakehouse explorer, but the same table is not shown in the SQL endpoint.

 

Could you please confirm the type of the table "Raw_CHS"? Is it a managed table or an external table?

 

 

Hi @v-cboorla-msft , it's definitely a managed table. The only thing "odd" about it is that it has a large number of columns - 700 or so. I hadn't seen this on any other table, so I'm kinda assuming the large # of columns is what did it.

 

I'm happy to open a support ticket, or take a Teams call from anyone that needs more details.

 

Thanks,
Scott

Hi @Scott_Powell 

 

Thanks for your response.

We are reaching out to the internal team to get more information related to your query and will get back to you as soon as we have an update.

Hi @Scott_Powell 

 

Apologies for the delay in response.
Following up to see whether you have a resolution for your issue yet.
If you do, please share it with the community, as it can be helpful to others.
If you don't:

Please go ahead and raise a support ticket to reach our support team: https://support.fabric.microsoft.com/support

Please provide the ticket number here so we can keep an eye on it.

 

Thank you.

Hi @v-cboorla-msft - I actually don't think this is an issue with "too many columns". Instead, I think it's an issue with combining data from multiple .csv input files, where many of them don't have all the columns. I think something is going wrong during the data load to the table (via a notebook using dataframes), but no error is being shown. 

 

I suspect this because I loaded "only" the latest file - and everything worked fine.

 

Trying now to figure out how to load all of the years' .csv files, plugging in NULLs for columns that don't exist. But I'm not at all fluent in PySpark, so I'm really struggling.

Long and short though I don't believe it's a limit on number of columns - everything works fine even with 700 columns if I stick to just a single year.

 

Thanks,

Scott
