js15
Helper I

The database was evicted, AdomdErrorResponseException

I am trying to read a table from a database, but I am getting an AdomdErrorResponseException. I need this table in order to join my other tables into a DataFrame, because I have to convert all this data and calculate some Mins/Maxes for some distribution stores. Is there any way to read a large table from a semantic model in Microsoft Fabric and convert it into a DataFrame without getting this error?

 

[Screenshot: js15_0-1739397057628.png]

[Screenshot: js15_1-1739397067945.png]

 

1 ACCEPTED SOLUTION
nilendraFabric
Super User

Hello @js15,

Give this a try:

import sempy.fabric as fabric

# Read the semantic model table through OneLake, using Spark for the import
df = fabric.read_table(
    dataset="your_dataset_name",
    table="your_large_table",
    mode="onelake",
    onelake_import_method="spark",
)

 

Fabric’s `read_table` function supports a parameter called `mode` that lets you choose the data retrieval method. By switching the mode from the default `"xmla"` to `"onelake"` and specifying an appropriate import method (such as `"spark"`), you offload the heavy lifting to the Spark runtime rather than the XMLA engine. This approach can be particularly useful for large tables because it is designed to scale with distributed resources.
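Once the table is loaded, the per-store Min/Max the question asks about is a short pandas aggregation. A minimal sketch, with in-memory sample data and hypothetical column names (`Store`, `Quantity`) standing in for the DataFrame returned by `fabric.read_table`:

```python
import pandas as pd

# Hypothetical sample data; in practice this would be the DataFrame
# returned by fabric.read_table(). 'Store' and 'Quantity' are placeholders
# for your own column names.
df = pd.DataFrame({
    "Store": ["A", "A", "B", "B"],
    "Quantity": [10, 30, 5, 25],
})

# Min/Max per distribution store
stats = df.groupby("Store")["Quantity"].agg(["min", "max"]).reset_index()
print(stats)
```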

 

If this helps, please give kudos and accept the answer.


3 REPLIES
js15
Helper I

Just figured it out. Thank you for your help!

 


GilbertQ
Super User

Hi @js15 

 

If it is a large table, instead of trying to extract the entire table in one query, I would recommend using a DAX query and looping over the dates to get the information out in manageable chunks, appending each chunk to an existing DataFrame.
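A minimal sketch of this chunk-and-append pattern. In a Fabric notebook the stubbed `run_dax()` below would be replaced by sempy's `fabric.evaluate_dax()` with a date-filtered `EVALUATE` query; the table and column names (`DateKey`, `Amount`) and the year range are hypothetical, and the stub returns in-memory data so the pattern itself is runnable anywhere:

```python
import pandas as pd

def run_dax(year: int) -> pd.DataFrame:
    """Stand-in for fabric.evaluate_dax() issuing a DAX query filtered to one year.
    Returns a small in-memory frame so the sketch runs outside Fabric."""
    return pd.DataFrame({"DateKey": [f"{year}-01-01"], "Amount": [float(year)]})

chunks = []
for year in range(2023, 2026):      # loop over the dates in bounded slices
    chunks.append(run_dax(year))    # one smaller query per slice

# Append all chunks into a single DataFrame
df = pd.concat(chunks, ignore_index=True)
print(len(df))
```

Each query stays small enough to avoid overloading the semantic model engine, and `pd.concat` assembles the slices at the end.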





Did I answer your question? Mark my post as a solution!

Proud to be a Super User!






