Helper III

Using Data from Lakehouse in any way fails - what am I doing wrong?

Hi everyone,

I am quite frustrated and need some help. I've created a Lakehouse in Fabric and created some Delta tables there using a PySpark Notebook.

Unfortunately, now I am stuck and don't know how to proceed in order to visualize / analyze the data.


As far as I can see, the tables are properly loaded into the Lakehouse and are classified as Delta tables.

[Screenshot: Lakehouse view]
I can query the tables with Spark SQL or PySpark. However, when I try using the SQL endpoint, I get an error message for every table (stating "Data could not be displayed in the preview").

[Screenshot: SQL endpoint view]

I can create a semantic model with those tables; however, all tables are shown with a warning sign.


If I try to create a report based on this semantic model, I get error messages as well.


From Power BI Desktop, I am also unable to connect to the Lakehouse (it appears to be empty?) or to the semantic model (error message).


Any hints as to what might be wrong? Are there any special requirements for the Delta tables in the Lakehouse that I might have violated?

Solution Sage

Hi @IMett, what method are you using to create the Delta tables? Can you post your process (code etc.)?

Have you tried creating a new Lakehouse and repeating the process?


It would also be worth raising a support ticket with Microsoft so they can investigate too.

Hi Andy,

Thank you for your reply.

I create the data from scratch in Python. The data comes from a simulation experiment in which the rows are generated; no data is ingested from other sources.

The relevant parts of the code are these:

import pandas as pd

# Template DataFrame that defines the result schema
ergebnisse = pd.DataFrame({
    'ExperimentID': [0],
    'WurfID': [0],
    'farbe': ['test'],
    'buchstabe': ['X'],
    'Typ': ['nix'],
    'AnzahlMarker': [-1],
    'Gescheitert': [False],
    'AnzahlSchiffe': [0]
})

def sim_0schwerter(anzahl_experimente, tableName, ergebnisse=ergebnisse):
    results = []
    # ... some logic to create the results in a for loop ...

    ergebnisse = pd.DataFrame(results, columns=ergebnisse.columns)
    # convert the pandas DataFrame to a Spark DataFrame before writing
    erg_sparkdf = spark.createDataFrame(ergebnisse)
    table_path = "Tables/" + tableName
    erg_sparkdf.write.format("delta").mode("overwrite").option("mergeSchema", "true").save(table_path)

sim_0schwerter(anzExpr, "NoSwordsBig", ergebnisse=ergebnisse)
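(For context on the pattern used above: the elided loop presumably appends one dict per simulated row and then builds the final DataFrame from the list. A self-contained illustration with hypothetical values, pandas only:)

```python
import pandas as pd

columns = ['ExperimentID', 'WurfID', 'farbe', 'buchstabe',
           'Typ', 'AnzahlMarker', 'Gescheitert', 'AnzahlSchiffe']

results = []
for experiment_id in range(3):  # hypothetical: 3 experiments
    results.append({
        'ExperimentID': experiment_id,
        'WurfID': 0,
        'farbe': 'rot',
        'buchstabe': 'A',
        'Typ': 'Standard',
        'AnzahlMarker': 1,
        'Gescheitert': False,
        'AnzahlSchiffe': 2,
    })

# Building the DataFrame once from the collected rows, as in the code above
ergebnisse = pd.DataFrame(results, columns=columns)
print(ergebnisse.shape)  # → (3, 8)
```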
I have tried copying everything to a different (new) Lakehouse, with the same result.
Is there something wrong with the last statement, where the Delta table is created?
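(Not a definitive diagnosis, but one documented cause of the SQL endpoint's "Data could not be displayed in the preview" error is a Delta table feature, column mapping, that the endpoint does not read; Delta enables column mapping when column names contain spaces or special characters. A plain-Python sketch with a hypothetical helper name to sanity-check a schema before writing:)

```python
import re

def columns_needing_mapping(columns):
    """Hypothetical helper (not a Fabric API): return the column names that
    would force Delta column mapping. Delta requires column mapping when a
    name contains any of the characters " ,;{}()\\n\\t="."""
    bad = re.compile(r"[ ,;{}()\n\t=]")
    return [c for c in columns if bad.search(c)]

# The columns from the simulation are plain identifiers, so the list is empty:
cols = ['ExperimentID', 'WurfID', 'farbe', 'buchstabe',
        'Typ', 'AnzahlMarker', 'Gescheitert', 'AnzahlSchiffe']
print(columns_needing_mapping(cols))               # → []
print(columns_needing_mapping(['Anzahl Marker']))  # → ['Anzahl Marker']
```

(The column names in the code above are plain ASCII, so this particular cause looks unlikely here, but it is a cheap check before opening a ticket.)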

Hi @IMett ,

Just wanted to check: are you still facing this issue?

If yes, the best course of action is to open a support ticket and have our support team take a closer look at it.


Please reach out to our support team so they can do a more thorough investigation of why this is happening: Link


After creating a support ticket, please share the ticket number, as it would help us track it for more information.


Hope this helps. Please let us know if you have any other queries.

Hi @v-gchenna-msft 
I opened a support ticket under ticket # 2405290050004525
