I'm trying to connect a model to a SQL server for real-time information and I am having issues with visuals loading. I have reduced the model to one simple visual because I can't get much more than that to load at all. It is a count of tasks, and it is taking about 18-20 seconds to load that single visual. I ran the Performance Analyzer, and the direct query is taking the majority of that time; on the most recent test it was 21.4 of the 21.6 seconds. I do have a couple of merged tables in the table that pertains to this visual, so that may be affecting the speed, but if that is the issue I don't feel I have a way around it, because I need to add those tables to do my data modeling. If I put another filter on it, the visual won't load even in 5 or 10 minutes. I just get the spinning circle in the top corner.
(picture of steps from power query)
This issue goes away with imported data; my visuals load almost immediately. It takes about 2 minutes or so to refresh the data.
To me the issue is either coming in at the Power Query stage where I am prepping my data, or it's something to do with SQL. Does anyone have any thoughts as to what may be causing the direct query to take so long? I am not finding much information elsewhere to help troubleshoot this.
Here is also a picture of my modeling. Let me know if any other information might help. Thanks!
@Stachu Thank you for your response. I will make the changes you recommend and see how much that improves the speed. The fact table is 04. I could probably do with getting rid of table 01; I was leaving it there in case I wanted to do any reference/comparison to previous dates. Same with 03: I can probably also get rid of that, as it's not being used currently.
Do you know if there is a way to work on the project in import mode so I can complete the visuals, and then maybe connect the dataset to the server in the service? However, if the service side isn't running refreshes like an import and caching the data, I will probably run into the same issue and the refresh will time out.
My problem is I can do manual refreshes for a few projects, but eventually we would like to have dashboards for each of our projects, and refreshes would become a full-time job. Another thought I had was to separate the needed data into other areas on the servers so I do not have to pull in the whole P6 database every time I create a project; I could just pull the needed project data. However, setting that up is out of my jurisdiction.
As for the import & DirectQuery at the same time - it does sound a bit like a composite model:
https://docs.microsoft.com/en-us/power-bi/transform-model/desktop-composite-models
There you explicitly state which tables will be imported and which will use DirectQuery.
As for the multiple refreshes - if the model is the same and the only thing that changes is the project you can look into templates:
https://docs.microsoft.com/en-us/power-bi/create-reports/desktop-templates
You could use a parameter to filter for a specific project. Also, do you publish the report to the service or do you work with a local .pbix? In the service it is possible to schedule the refreshes.
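As a sketch of the parameter idea: the source query in the template can filter on a parameter defined under Manage Parameters. The server, database, table, and column names below are placeholders for your P6 environment, and ProjectId is a hypothetical text parameter:

```m
let
    // Connect to the P6 database (server/database names are placeholders)
    Source = Sql.Database("YourServer", "P6"),
    Tasks = Source{[Schema = "dbo", Item = "TASK"]}[Data],
    // ProjectId is a report parameter defined via Manage Parameters
    FilteredTasks = Table.SelectRows(Tasks, each [proj_id] = ProjectId)
in
    FilteredTasks
```

If you then save the report as a .pbit, opening the template prompts for the parameter value, so each project gets its own filtered copy of the same model without rebuilding anything.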
Another interesting option related to DirectQuery is dynamic M query parameters:
https://docs.microsoft.com/en-us/power-bi/connect-data/desktop-dynamic-m-query-parameters
This may be helpful in reducing the size of the queries.
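Roughly, you declare an M parameter, reference it in the query, and then bind it to a slicer field in Desktop (under the column's Advanced properties in the Model view). A minimal sketch, with hypothetical names; the first query defines the parameter, the second uses it:

```m
// Query 1: the parameter itself (marked as a required text parameter)
"1000" meta [IsParameterQuery = true, Type = "Text", IsParameterQueryRequired = true]

// Query 2: the source table, filtered by the parameter
let
    Source = Sql.Database("YourServer", "P6"),
    Tasks = Source{[Schema = "dbo", Item = "TASK"]}[Data],
    // Once ProjectParam is bound to a slicer column, the user's slicer
    // selection is pushed into the generated SQL instead of scanning everything
    FilteredTasks = Table.SelectRows(Tasks, each [proj_id] = ProjectParam)
in
    FilteredTasks
```

The point is that the slicer selection ends up in the WHERE clause of the query sent to SQL Server, so DirectQuery only pulls the rows it needs.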
In general DirectQuery is always much slower than imported, so if it's not an absolute must I would really consider moving to the imported mode.
As for your model
1) I see a lot of bidirectional relationships, some of them nested (05, 06, 07) - it's best to use single-direction relationships
2) the tables have a lot of columns (remove unnecessary ones)
3) tables 01 & 03 seem to be identical - you could load them as one table with an additional column determining the type (task/baseline), and manage the split with DAX measures
4) in general a star schema is better than a snowflake - I would suggest merging 05, 06 and 07 into a single table, especially since that will reduce the nested bidirectional joins. Also regarding the schema - which table is the fact table? 04? 01? 03?
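For point 3, the split measures could look something like this once 01 and 03 are loaded as one table (the table name Tasks and the column RecordType are hypothetical placeholders for whatever you name them):

```dax
-- Count only the current-task rows of the combined table
Task Count =
CALCULATE ( COUNTROWS ( Tasks ), Tasks[RecordType] = "Task" )

-- Count only the baseline rows of the combined table
Baseline Count =
CALCULATE ( COUNTROWS ( Tasks ), Tasks[RecordType] = "Baseline" )
```

That way one physical table serves both purposes, and the comparison to previous dates is just a pair of measures instead of a duplicated table and extra relationships.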
With all that said, I would still suggest reconsidering a move to import mode.