I am trying to read a table from a database, but I am getting an AdomdErrorResponseException. I need this table in order to join my other tables into a DataFrame, because I have to convert all this data and calculate some mins/maxes for some distribution stores. Is there any way that I can read a large table from a semantic model in Microsoft Fabric and convert it into a DataFrame without getting this error?
Hello @js15,

Give this a try:
import sempy.fabric as fabric

df = fabric.read_table(
    dataset="your_dataset_name",
    table="your_large_table",
    mode="onelake",
    onelake_import_method="spark"
)
SemPy's `fabric.read_table` function supports a parameter called `mode` that lets you choose the data retrieval method. By switching the mode from the default `"xmla"` to `"onelake"` and specifying an appropriate import method (such as `"spark"`), you offload the heavy lifting to the Spark runtime rather than the XMLA engine. This approach can be particularly useful for large tables because it is designed to scale with distributed resources.
If this helps, please give kudos and accept the answer.
Just figured it out. Thank you for your help!
Hi @js15,

If it is a large table, instead of trying to extract the entire table in one query, I would use a DAX query and loop over the dates to pull the data out in chunks, appending each chunk to an existing DataFrame.
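A minimal sketch of that chunked approach. The table name, date column, and ranges are placeholders; the `evaluate_dax` argument is any callable that takes a DAX query string and returns a pandas DataFrame — with SemPy that could be `lambda q: fabric.evaluate_dax("your_dataset_name", q)`, but the sketch keeps it injectable so it is not tied to the service:

```python
import pandas as pd


def read_table_by_date_range(evaluate_dax, table, date_column, ranges):
    """Fetch a large semantic-model table in date-range chunks.

    evaluate_dax: callable(query: str) -> pd.DataFrame, e.g. backed by
                  sempy's fabric.evaluate_dax for a real semantic model.
    ranges:       list of (start, end) datetime.date pairs; each chunk
                  covers start <= date < end.
    """
    frames = []
    for start, end in ranges:
        # Build a DAX query that filters the table to one date window.
        query = (
            f"EVALUATE FILTER('{table}', "
            f"'{table}'[{date_column}] >= DATE({start.year},{start.month},{start.day}) && "
            f"'{table}'[{date_column}] < DATE({end.year},{end.month},{end.day}))"
        )
        frames.append(evaluate_dax(query))
    # Stitch the chunks back into a single DataFrame.
    return pd.concat(frames, ignore_index=True)
```

Each query stays small enough to avoid the timeout/size limits behind the AdomdErrorResponseException, and you can size the windows (monthly, weekly) to whatever your model tolerates.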