js15
Helper I

The database was evicted, AdomdErrorResponseException

I am trying to read a table from a database, but I am getting an AdomdErrorResponseException. I need this table in a DataFrame so I can join it with my other tables and calculate some Mins/Maxs for some distribution stores. Is there any way to read a large table from a semantic model in Microsoft Fabric and convert it into a DataFrame without getting this error?

 

[Attachment: js15_0-1739397057628.png]

[Attachment: js15_1-1739397067945.png]

 

1 ACCEPTED SOLUTION
nilendraFabric
Community Champion

Hello @js15,

Give this a try:

 

import sempy.fabric as fabric

df = fabric.read_table(
    dataset="your_dataset_name",
    table="your_large_table",
    mode="onelake",
    onelake_import_method="spark"
)

 

Fabric’s `read_table` function supports a `mode` parameter that lets you choose the data-retrieval method. Switching from the default `"xmla"` to `"onelake"` and specifying an import method such as `"spark"` offloads the heavy lifting to the Spark runtime rather than the XMLA engine. This is particularly useful for large tables because the Spark path is designed to scale with distributed resources.
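Once the table is in a pandas DataFrame, the per-store Min/Max the original question asks about reduces to a `groupby` aggregation. A minimal sketch, using synthetic data and hypothetical column names (`StoreId`, `Quantity`) in place of the real semantic-model table:

```python
import pandas as pd

def store_min_max(df: pd.DataFrame, store_col: str, value_col: str) -> pd.DataFrame:
    """Compute the min and max of value_col for each store."""
    return df.groupby(store_col)[value_col].agg(["min", "max"]).reset_index()

# Synthetic stand-in for the semantic-model table:
df = pd.DataFrame({
    "StoreId": ["A", "A", "B", "B"],
    "Quantity": [5, 9, 2, 7],
})
result = store_min_max(df, "StoreId", "Quantity")
```

The same call works unchanged on the DataFrame returned by `fabric.read_table`, as long as you substitute your actual store and measure columns.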

 

If this helps, please give kudos and accept the answer.


3 REPLIES
js15
Helper I

Just figured it out. Thank you for your help!

 


GilbertQ
Super User

Hi @js15 

 

If it is a large table, instead of trying to extract the entire table in one query, I would recommend running a DAX query that loops over the dates, retrieving the data in chunks and appending each chunk to a DataFrame.
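One way to sketch that chunked approach: build one DAX query per date window, execute each with sempy's `evaluate_dax`, and concatenate the results. The table and column names (`Sales`, `OrderDate`) and the month boundaries below are hypothetical placeholders, not anything from the original thread:

```python
import pandas as pd

def monthly_dax_queries(table: str, date_col: str, bounds: list) -> list:
    """Build one DAX query per [start, end) month window.

    bounds is an ordered list of (year, month) tuples; each consecutive
    pair becomes one query, so n boundaries yield n - 1 queries.
    """
    return [
        f"EVALUATE FILTER('{table}', "
        f"'{table}'[{date_col}] >= DATE({y1}, {m1}, 1) && "
        f"'{table}'[{date_col}] < DATE({y2}, {m2}, 1))"
        for (y1, m1), (y2, m2) in zip(bounds[:-1], bounds[1:])
    ]

# Hypothetical table/column names; three boundaries -> two monthly chunks.
queries = monthly_dax_queries("Sales", "OrderDate", [(2024, 1), (2024, 2), (2024, 3)])

# Inside a Fabric notebook you would then run each chunk and append
# (sketch only, requires a Fabric session):
# import sempy.fabric as fabric
# chunks = [fabric.evaluate_dax("your_dataset_name", q) for q in queries]
# df = pd.concat(chunks, ignore_index=True)
```

Smaller windows (weekly or daily) follow the same pattern if monthly chunks are still too large for a single query.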





Did I answer your question? Mark my post as a solution!

Proud to be a Super User!






