Hi
In our department, we have an internal Python package that queried a CUBE and Power BI datasets and returned the results as pandas DataFrames. The department has completed a migration, so we no longer use the CUBE and instead use a Premium dataset. In our package we now make this POST request using the REST API:
```python
import json

import requests

url = "https://api.powerbi.com/v1.0/myorg/datasets/%s/executeQueries" % id
headers = {
    "Authorization": "Bearer " + caller.access_token.token,
    "Content-Type": "application/json",
}
query = {"queries": [{"query": dax}]}
try:
    res = requests.post(url, headers=headers, data=json.dumps(query))
    res.raise_for_status()
except requests.RequestException as exc:
    print("executeQueries request failed:", exc)
```
The `dax` variable holds the user's DAX query, the `id` variable is the ID of the Premium dataset, and `caller` is an object used during authentication with `DeviceCodeCredential`.
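For completeness, here is a minimal sketch of how we flatten the `executeQueries` response into a DataFrame. It assumes the documented JSON shape, which nests the data as `results` → `tables` → `rows`, with each row keyed by `"Table[Column]"` names; the helper name is ours:

```python
import pandas as pd


def rows_to_dataframe(payload: dict) -> pd.DataFrame:
    """Flatten an executeQueries JSON payload into a pandas DataFrame.

    The response nests the data as results -> tables -> rows, where
    each row is a dict keyed by column name (e.g. "Sales[Amount]").
    """
    rows = payload["results"][0]["tables"][0]["rows"]
    df = pd.DataFrame(rows)
    # Strip the "Table[Column]" prefix down to the bare column name.
    df.columns = [c.split("[")[-1].rstrip("]") for c in df.columns]
    return df
```

In our package this is called as `rows_to_dataframe(res.json())` right after the POST request succeeds.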
It worked; however, we ran into the limitations described below:

- Our users typically work with 600K-800K rows, so the results fall short of our expectations.
- Usually, other APIs offer a "page" parameter and we only need to iterate over the pages, but I don't think this API has such a parameter (or maybe I missed it).
- Splitting the queries in DAX is not ideal (they are very long); I want to do it purely in Python.
Thank you for your help.
Hi @ffranc96 - the results of your DAX queries are going to exceed the limits of the API call. Note that any of the three limitations could apply: the row limit (for a single-column table), the value limit (cells in the table), or the size limit (total bytes of JSON generated - i.e. no really wide columns with lots of text).
There is no paging option. Instead, you need to apply filter context to your DAX queries (e.g. filter the result by month). You might be able to wrap the DAX in a CALCULATETABLE function to apply this filter context.
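To illustrate the filter-context idea, here is a sketch that wraps a DAX table expression in CALCULATETABLE with a month filter built from Python. It assumes the original query is a table expression (without the leading EVALUATE), and the `'Calendar'[Year]` / `'Calendar'[Month]` columns are placeholders for your own date dimension:

```python
def build_monthly_query(table_expr: str, year: int, month: int) -> str:
    """Wrap a DAX table expression in CALCULATETABLE with a month filter.

    CALCULATETABLE applies the year/month predicates as filter context
    to the whole expression, so each generated query returns only one
    month's rows and stays under the executeQueries result limits.
    """
    return (
        "EVALUATE CALCULATETABLE ( %s, "
        "'Calendar'[Year] = %d, 'Calendar'[Month] = %d )"
        % (table_expr, year, month)
    )
```

Each generated query can then be sent through the same POST call in a loop over the months, and the per-month DataFrames concatenated with `pd.concat`.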
However, I don't recommend this approach, because Power BI is not designed for your use case. Instead, I would recommend looking at the new features in Fabric. You could load or stage the dataset into a Fabric Data Warehouse. It would then be available to Python via the Data Engineering tools, and could also be imported into the Power BI dataset or accessed with Direct Lake. Please read the documentation for more ideas: Data Engineering in Microsoft Fabric documentation - Microsoft Fabric | Microsoft Learn