Hi everyone, does anyone have an idea why the dataset refresh duration in the Fabric Metrics App (on the drill-through page for timepoint details) doesn't match the refresh duration for the same refresh in the semantic model's Refresh history view?
The one in the Fabric Metrics App is much shorter for the same refresh.
Hi @EduardD
The discrepancy between the dataset refresh duration shown in the Fabric Metrics App (on the drill-through page for timepoint details) and the duration in the Semantic Model Refresh History view is most likely due to differences in how the two sources calculate and log the metric:

- The Fabric Metrics App may capture only active processing time, excluding queue time, waiting time, and other background work, whereas Refresh History logs the total end-to-end duration, including any delay before execution begins.
- The two sources may also aggregate time or break down refresh stages differently, for example data load versus transformation time.
- If the Fabric Metrics App is built on real-time telemetry, it may show a more granular or partial view of the refresh than the full refresh log in the Semantic Model history.

Checking the detailed logs in both sources and comparing timestamps should help clarify the inconsistency.
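To compare timestamps directly, the semantic model's refresh history can be pulled via the Power BI REST API (`GET .../datasets/{datasetId}/refreshes`), which returns `startTime` and `endTime` per refresh. Below is a minimal sketch of computing the end-to-end duration from such an entry; the sample record is hypothetical, shaped like the API's JSON payload, so you can line the result up against the figure shown in the Metrics App.

```python
from datetime import datetime

# Hypothetical sample entry, shaped like the refresh-history JSON returned by
# the Power BI REST API (GET .../datasets/{datasetId}/refreshes).
refreshes = [
    {
        "requestId": "a1b2c3",
        "startTime": "2024-05-01T02:00:00.000Z",
        "endTime": "2024-05-01T02:18:30.000Z",
        "status": "Completed",
    },
]

def parse(ts: str) -> datetime:
    # The API uses ISO 8601 with a trailing "Z"; map it to a UTC offset
    # so fromisoformat() can parse it on all Python 3 versions.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

for r in refreshes:
    total_min = (parse(r["endTime"]) - parse(r["startTime"])).total_seconds() / 60
    print(f'{r["requestId"]}: end-to-end {total_min:.1f} min ({r["status"]})')
```

If the end-to-end duration computed this way is consistently longer than the Metrics App figure, the gap is most plausibly queue/wait time that the Metrics App excludes from active processing.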
Hi @EduardD,
May I ask if you have resolved this issue? If so, please mark the helpful reply and accept it as the solution. This will help other community members with similar problems find the answer faster.
Thank you.
Hi @EduardD
This can happen because of how the metrics are captured: it can take some additional time for refresh data to land in the Metrics App. If you check again after about 20-30 minutes, the numbers should line up better.