Hi
In our department, we have an internal Python package that queried a CUBE and a Power BI dataset and returned the results as a pandas DataFrame. The department has completed a migration: we no longer use the CUBE and instead use a Premium dataset. In our package we now make this POST request using this REST API:
url = "https://api.powerbi.com/v1.0/myorg/datasets/%s/executeQueries"%id
headers = { "Authorization": "Bearer " + caller.access_token.token,
"Content-Type": "application/json",
}
query = {"queries":[{"query":dax}]}
try:
res = requests.post(url, headers=headers, data=json.dumps(query))
The dax variable holds the user's DAX query, id is the ID of the Premium dataset, and caller is an object used during authentication with DeviceCodeCredential.
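For context, a simplified sketch of how we turn the response into a DataFrame (the real package differs; this assumes the documented executeQueries response shape of results, then tables, then rows):

import pandas as pd

# Simplified sketch (the real package differs). Assumes the documented
# executeQueries response shape: {"results": [{"tables": [{"rows": [...]}]}]}.
rows = res.json()["results"][0]["tables"][0]["rows"]

# Each row comes back as a dict keyed by "Table[Column]" names.
df = pd.DataFrame(rows)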
It works; however, we face the limitations described below:
Our users are used to working with 600K-800K rows, so the results do not meet their expectations.
Usually, other APIs have a "page" parameter and we only need to iterate page by page, but I don't think this API has such a parameter, or maybe I missed it.
Splitting the query in DAX is not ideal (the queries are very long); I want to handle this only in Python.
Thank you for your help.
Hi @ffranc96 - the results of your DAX queries are going to exceed the limits of the API call. Note that any of the three limits could apply: the row limit (hit first with a single-column table), the value limit (cells in the table), or the size limit (total bytes of JSON generated, i.e. no really wide columns with lots of text).
There is no paging option. Instead you need to apply filter context to your DAX calls (e.g. filter the result by month). You might be able to wrap the DAX in a CALCULATETABLE function to pass that filter context, as sketched below.
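To illustrate, here is a rough Python sketch of that idea. It assumes the model has a date table with a month-number column ('Calendar'[MonthNumber] is a placeholder) and that user_table_expression is the table expression from the original query without the leading EVALUATE; adapt both to your model.

import json
import pandas as pd
import requests

def run_dax(dax, url, headers):
    # Hypothetical helper: POST a single query and return its rows as a DataFrame.
    body = {"queries": [{"query": dax}]}
    res = requests.post(url, headers=headers, data=json.dumps(body))
    res.raise_for_status()
    rows = res.json()["results"][0]["tables"][0]["rows"]
    return pd.DataFrame(rows)

chunks = []
for month in range(1, 13):
    # Wrap the original table expression so each call stays under the API limits.
    # 'Calendar'[MonthNumber] is a placeholder column from the model.
    chunked_dax = (
        "EVALUATE CALCULATETABLE ( %s, 'Calendar'[MonthNumber] = %d )"
        % (user_table_expression, month)
    )
    chunks.append(run_dax(chunked_dax, url, headers))

result = pd.concat(chunks, ignore_index=True)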
However, I don't recommend this approach because Power BI is not designed for your use case. Instead, I would recommend looking at the new features in Fabric. You could load or stage the dataset into a Fabric Data Warehouse. This would be available to use in Python via the Data Engineering tools, and it could also be imported into the Power BI dataset or used with Direct Lake. Please read the documentation for more ideas: Data Engineering in Microsoft Fabric documentation - Microsoft Fabric | Microsoft Learn
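As a rough illustration only (not a tested recipe): once the data is staged as a Lakehouse table in Fabric, a Fabric notebook could read it into pandas through the built-in Spark session. The table name below is a placeholder.

# Rough illustration, inside a Fabric notebook where a SparkSession named
# `spark` is already provided. "StagedSales" is a placeholder table name.
sdf = spark.read.table("StagedSales")

# For large tables, filter or aggregate in Spark before collecting to pandas.
pdf = sdf.toPandas()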