IMett
Helper III

Using Data from Lakehouse in any way fails - what am I doing wrong?

Hi everyone,

I am quite frustrated and need some help. I've created a Lakehouse in Fabric and built some Delta tables there using a PySpark notebook.

Unfortunately, I am now stuck and don't know how to proceed in order to visualize and analyze the data.

As far as I can see, the tables are properly loaded into the Lakehouse; they are classified as Delta tables.
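For reference, a check like this can be done from the notebook; a minimal sketch, assuming the notebook is attached to the Lakehouse (NoSwordsBig is the table written by the code further below):

from delta.tables import DeltaTable

# Check whether the folder under Tables/ is a valid Delta table
# (the relative path resolves against the attached Lakehouse)
print(DeltaTable.isDeltaTable(spark, "Tables/NoSwordsBig"))

# List the tables the Spark catalog currently knows about
for t in spark.catalog.listTables():
    print(t.name, t.tableType)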

Lakehouse View

I can query the tables with Spark SQL or PySpark. However, when I try using the SQL endpoint, I get an error message for every table (stating "Data could not be displayed in the preview").
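For example, queries like these work from the notebook; a minimal sketch using the NoSwordsBig table from the code below:

# Spark SQL against the table name known to the Lakehouse
spark.sql("SELECT * FROM NoSwordsBig LIMIT 10").show()

# PySpark, reading the Delta folder directly
df = spark.read.format("delta").load("Tables/NoSwordsBig")
df.show(10)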

SQL Endpoint View

I can create a semantic model with those tables; however, all tables are shown with a warning sign.

IMett_0-1716667042571.png

If I try to create a report based on this semantic model, I get error messages as well.

IMett_1-1716667148547.png

From Power BI Desktop, I am also not able to connect to the Lakehouse (it appears to be empty?) or to the semantic model (an error message appears).

Any hints as to what is wrong? Are there any special requirements for Delta tables in the Lakehouse which I might have violated?
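For reference, the Delta metadata of an affected table can be inspected like this; a minimal sketch (DESCRIBE DETAIL is standard Delta SQL, and the path is relative to the attached Lakehouse):

# Inspect the table's Delta metadata, e.g. format and
# reader/writer versions, for anything the SQL endpoint
# might not support
spark.sql("DESCRIBE DETAIL delta.`Tables/NoSwordsBig`").show(truncate=False)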

4 REPLIES
AndyDDC
Super User

Hi @IMett, what method are you using to create the Delta tables? Can you post your process (code etc.)?

Have you tried creating a new Lakehouse and repeating the process?

It would also be worth raising a support ticket with Microsoft so they can investigate too.

Hi Andy,

Thank you for your reply.

I create the data from scratch in Python. The data comes from a simulation experiment in which rows are generated; no data is ingested from other sources.

The relevant parts of the code are these:

import pandas as pd

# Template DataFrame that defines the result schema
ergebnisse = pd.DataFrame({
    'ExperimentID': [0],
    'WurfID': [0],
    'farbe': ['test'],
    'buchstabe': ['X'],
    'Typ': ['nix'],
    'AnzahlMarker': [-1],
    'Gescheitert': [False],
    'AnzahlSchiffe': [0]
})

def sim_0schwerter(anzahl_experimente, tableName, ergebnisse=ergebnisse):
    results = []

    # ... some logic to create the results in a for loop ...

    # Rebuild the pandas DataFrame with the template's columns,
    # convert it to a Spark DataFrame and write it as a Delta table
    ergebnisse = pd.DataFrame(results, columns=ergebnisse.columns)
    erg_sparkdf = spark.createDataFrame(ergebnisse)

    table_path = "Tables/" + tableName
    erg_sparkdf.write.format("delta").mode("overwrite").option("mergeSchema", "true").save(table_path)

sim_0schwerter(anzExpr, "NoSwordsBig", ergebnisse=ergebnisse)
I have tried copying everything to a different (new) Lakehouse, with the same result.
Is there something wrong with the last statement, where the Delta table is created?
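For comparison, the write could also be done with saveAsTable, which registers the table through the catalog instead of writing to a relative path. A minimal sketch of that variant (assuming the notebook has a default Lakehouse attached):

# Variant: register the table via the catalog rather than
# writing to the Tables/ folder by path
erg_sparkdf.write.format("delta") \
    .mode("overwrite") \
    .option("mergeSchema", "true") \
    .saveAsTable(tableName)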
Anonymous
Not applicable

Hi @IMett,

Just wanted to check: are you still facing the issue?

If yes, the best course of action is to open a support ticket and have our support team take a closer look at it.

Please reach out to our support team so they can do a more thorough investigation into why this is happening: Link

After creating a support ticket, please provide the ticket number here, as it will help us track the issue.

Hope this helps. Please let us know if you have any other queries.

Hi @Anonymous,
I opened a support ticket under # 2405290050004525.
