I am generating Delta tables using data from a JSON file, PySpark, and a notebook. Everything goes smoothly: the table creation (if it doesn't exist already) and the table overwriting. However, I can't preview the data or query it at all with SQL. If I try, I instead get the following error:
"[DELTA_READ_TABLE_WITHOUT_COLUMNS] You are trying to read a Delta table [Table Here] that does not have any columns."
This is strange, as when I check the number of columns in the DataFrame before overwriting the table using "len(df.columns)", it is considerably larger than zero.
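Roughly, the flow looks like this (the path and table name below are placeholders, not the real ones):

```python
from pyspark.sql import SparkSession

# In a Fabric notebook the session already exists as `spark`;
# getOrCreate() just reuses it.
spark = SparkSession.builder.getOrCreate()

# Read the source JSON (the path here is a placeholder)
df = spark.read.option("multiline", "true").json("Files/source_data.json")

# This prints a value considerably larger than zero
print(len(df.columns))

# Create the Delta table if it doesn't exist, otherwise overwrite it
df.write.format("delta").mode("overwrite").saveAsTable("my_table")
```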
What is also strange is that sometimes I can view and query a table that was created in the exact same manner as a problematic one. It's sporadic, even with the same, unchanged data.
Looking in the lake warehouse, I find a message with a red circle and a white 'X' next to it:
And when I view the details:
...
This seems somewhat similar to a known issue I found here:
https://learn.microsoft.com/en-us/fabric/get-started/known-issues/known-issue-891-warehouse-tables-n...
I also notice this next to some (but not all) of the table names:
When I hover over it, I get a message that says: "Columns of the specified data types are not supported for...", and it then goes on to list some columns and their (presumed) types. A preview of the table can be seen.
Is the issue I'm having related to the known issue linked above and, if so, is there anything that can be done to remedy it? Is it possible the schema is not being inferred correctly and that this could be causing all of these issues?
Hi @Anonymous,
I don't believe "Lakehouse schemas (Public Preview)" is enabled on the lakehouse I am working with, so it likely doesn't match, unfortunately.
Thank you for your time,
j_doe
Fabric and Power BI can be picky with Delta tables, especially when you’re creating/overwriting them with notebooks.
What usually solves it:
Explicitly define your schema in PySpark when writing your Delta table (don't rely on Spark to infer it from JSON or dynamic data); there's a rough sketch after this list. This ensures the columns are always visible to downstream tools.
Stick to simple types: use String, Int, Double, Date, etc. Avoid nested or complex types; Power BI just doesn't like them yet.
If you change the table’s schema, drop and recreate the table instead of just overwriting. Sometimes old metadata lingers and messes things up.
Direct Lake connections in Power BI are generally more reliable than the SQL endpoints for previewing new/changed tables.
If you get the "internal error" or [DELTA_READ_TABLE_WITHOUT_COLUMNS], it's almost always a schema mismatch or a type Power BI can't read. Cleaning up the folder and rewriting the table can help if all else fails.
There are also some ongoing Fabric bugs around metadata sync; sometimes it's just a matter of waiting a few minutes or doing a manual metadata refresh in the portal.
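To illustrate the first and third points, here's a rough sketch. It assumes the built-in `spark` session of a Fabric notebook; the column names, file path, and table name are placeholders you'd swap for your own:

```python
from pyspark.sql.types import (
    StructType, StructField, StringType, IntegerType, DoubleType, DateType
)

# Example schema - replace these columns/types with your own
schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("name", StringType(), True),
    StructField("amount", DoubleType(), True),
    StructField("created_date", DateType(), True),
])

# Read with the explicit schema instead of letting Spark infer it from the JSON
df = spark.read.schema(schema).option("multiline", "true").json("Files/source_data.json")

# If the table's schema has changed, drop it first so stale metadata doesn't linger
spark.sql("DROP TABLE IF EXISTS my_table")

# Create/overwrite the Delta table; overwriteSchema also covers schema changes on overwrite
(
    df.write.format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .saveAsTable("my_table")
)
```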
Alright, I'll look into defining the schema explicitly. Thank you for your time!
Hi, @j_doe
Check out this link; does your situation match it?
Best Regards,
Community Support Team _Charlotte
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.