I am trying to read a table from a database, but I am getting an AdomdErrorResponseException. I need this table in order to connect my other tables into a DataFrame, because I need to convert all this data and calculate some mins/maxes for some distribution stores. Is there any way I can read a large table from a semantic model in Microsoft Fabric and convert it into a DataFrame without getting this error?
Hello @js15,
Give this a try:
```python
import sempy.fabric as fabric

# Read the table through OneLake with the Spark import method
# instead of the default XMLA endpoint.
df = fabric.read_table(
    dataset="your_dataset_name",
    table="your_large_table",
    mode="onelake",
    onelake_import_method="spark",
)
```
Fabric’s `read_table` function supports a parameter called `mode` that lets you choose the data retrieval method. By switching the mode from the default `"xmla"` to `"onelake"` and specifying an appropriate import method (such as `"spark"`), you offload the heavy lifting to the Spark runtime rather than the XMLA engine. This approach can be particularly useful for large tables because it is designed to scale with distributed resources.
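Once the DataFrame is loaded, the per-store mins/maxes you mention can be computed with plain pandas. A minimal sketch, where the column names `StoreId` and `Quantity` are hypothetical placeholders for your actual columns:

```python
# Hypothetical columns: StoreId identifies a distribution store,
# Quantity is the value to aggregate per store.
summary = df.groupby("StoreId")["Quantity"].agg(["min", "max"])
print(summary)
```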
If this helps, please give kudos and accept the answer.
Just figured it out. Thank you for your help!
Hi @js15
If it is a large table, instead of trying to extract the entire table in one query, I would recommend using a DAX query and looping through the dates to pull the data out in chunks, appending each chunk to an existing DataFrame, as sketched below.
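A minimal sketch of that looping approach using SemPy's `evaluate_dax`; the table name `'your_large_table'`, the date column `[DateColumn]`, and the year range are hypothetical placeholders you would adjust to your model:

```python
import pandas as pd
import sempy.fabric as fabric

chunks = []
for year in range(2018, 2026):  # hypothetical date range; adjust to your data
    dax = f"""
    EVALUATE
    FILTER(
        'your_large_table',
        YEAR('your_large_table'[DateColumn]) = {year}
    )
    """
    # Each call returns one year's rows as a DataFrame
    chunks.append(fabric.evaluate_dax(dataset="your_dataset_name", dax_string=dax))

# Stitch the yearly chunks back into a single DataFrame
df = pd.concat(chunks, ignore_index=True)
```

Querying in date-sized chunks keeps each XMLA response small enough to avoid the error you are hitting on the full table.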