
The operation has timed out - Evaluation count (hourly): 358624247 / 360000000

Hi,

My dataflow refreshes on a schedule and works for several days, but then it stops refreshing and I get error emails saying:

dataflow couldn't be refreshed because there was a problem with one or more entities, or because dataflow capabilities were unavailable

 

 

The dataflow calls a REST API server and loads a total of 10 tables. When the refresh works, it takes about 18 minutes; when it fails, it fails after 8-15 minutes.

If I then refresh it manually, the scheduled refresh works again for a few more days (3-4 days). This sequence [ scheduled refresh works / stops working / manual refresh / scheduled refresh works ] has happened three times in a row.

When it stops working, I waited 2 days to see if it would start working again by itself, but it doesn't.

 

The refresh log shows:

Error: DataSource.Error: The operation has timed out Request ID: ...

 

 

 

In the dataflow, if I click Options, then Diagnostics, then View usage quota, I see (at 10:00 today, 5 hours after the first scheduled refresh):

Authoring
     Evaluation count (hourly): 18/5000
Refresh
     CdsA
         Evaluation time (daily): 00:22:55 / 100:00:00
         Evaluation count (hourly): 358624247 / 360000000

The evaluation count is extremely high.

 

My scheduled refreshes are at:

- dataflow: 5 am, 12 pm, and 6 pm

- semantic model: 6 am, 1 pm, and 7 pm

 

What could create such a high evaluation count? Could it be the cause of the failed scheduled refreshes?

Thank you

Status: Investigating

Hi @nopeName ,

 

The high evaluation count and the refresh error you’re experiencing could be related. The evaluation count is a measure of how much computation is being done by your dataflow.

Make sure your dataflow is as efficient as possible. This could involve simplifying your queries, removing unnecessary steps, or dividing your dataflow into smaller chunks.

There are two types of refreshes applicable to dataflows: full and incremental. A full refresh performs a complete flush and reload of your data, while an incremental refresh processes only a subset of your data based on time-based rules. If you're using full refresh, consider switching to incremental if possible.

Some users have reported that disabling the "Enhanced compute engine settings" in the dataflow settings resolved their refresh issues. For more, you may refer to: Solved: Dataflow Refresh Error (Merge Queries) - Microsoft Fabric Community
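To illustrate the incremental refresh suggestion: in Power Query, incremental refresh works by filtering the source on the service-managed RangeStart and RangeEnd parameters. A minimal sketch, assuming a hypothetical SQL source and a hypothetical ModifiedDate column (neither is from this thread):

```m
// Minimal incremental-refresh sketch. RangeStart/RangeEnd are the
// datetime parameters the Power BI service supplies; "ModifiedDate",
// the server, database, and table names are all hypothetical.
let
    Source = Sql.Database("myserver", "mydb"),
    Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data],
    // Only rows inside the current refresh window are processed.
    Filtered = Table.SelectRows(
        Orders,
        each [ModifiedDate] >= RangeStart and [ModifiedDate] < RangeEnd
    )
in
    Filtered
```

With a filter like this in place, the service only reloads the partitions whose window has changed instead of flushing and reloading everything.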

 

Understand and optimize dataflows refresh - Power BI | Microsoft Learn

 

 

Best regards.
Community Support Team_Caitlyn

Comments
v-xiaoyan-msft
Community Support
Status changed to: Investigating


nopeName
Frequent Visitor

Hi,

I'm calling an API that returns pages which I have to iterate through. But I believe the biggest contributor to the evaluation count must be the code that creates a dynamic list of columns. I took that approach as an easy way to add more tables from the API by just changing the target URL. I guess that is very evaluation-hungry.

I will change that for one table and see if the evaluation count changes.

Thank you for your help. I will get back once I test a bit more.
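For reference, page iteration against a REST API is often written with List.Generate in Power Query. A rough sketch, assuming a hypothetical API that returns a JSON record with an items array and a nextPage URL (the real response shape will differ):

```m
// Hypothetical pagination sketch with List.Generate.
// "items" and "nextPage" are placeholder field names.
let
    GetPage = (url as text) => Json.Document(Web.Contents(url)),
    Pages = List.Generate(
        () => GetPage("https://api.example.com/data?page=1"),  // first page
        each _ <> null,                                        // stop when there is no next page
        each if [nextPage] <> null then GetPage([nextPage]) else null,
        each [items]                                           // keep the rows from each page
    ),
    AllRows = Table.FromRecords(List.Combine(Pages))
in
    AllRows
```

For scheduled refresh in the service, it is usually safer to pass a fixed base URL to Web.Contents and vary the RelativePath and Query options, rather than building fully dynamic URLs.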

nopeName
Frequent Visitor

Hi,

So after my last comment I planned to change the dynamic list of columns, but I spotted something else, and the refresh now takes half as much time to complete.

But it started to fail again after 9 successful refreshes.

date       | 5 am          | 12 pm         | 6 pm
June 19th  | failed        | failed        | failed
June 20th  | failed 8m34s  | 20m13s        | 18m40s
June 21st  | 20m9s         | failed 8m35s  | 10m41 (optimized query)
June 22nd  | 9m38s         | 11m10s        | 9m40
June 23rd  | 9m39          | 9m39          | 9m39
June 24th  | 9m37          | 10m09         | failed 08m06
June 25th  | failed 08m03  |               |

The evaluation count (hourly) is now 358844622 / 360000000, which is pretty similar. And each time I look at it, the hourly evaluation count never changes, even 3 hours after the refresh. Shouldn't that reset every hour?

 

Authoring
     Evaluation count (hourly): 11/5000
Refresh
     CdsA
         Evaluation time (daily): 00:19:15 / 100:00:00
         Evaluation count (hourly): 358844622 / 360000000

 

I have a hard time understanding why the evaluation count hasn't dropped while the refresh is now 50% faster.

 

The refresh history log returns this:

Error: DataSource.Error: The operation has timed out Request ID: ad37b538-2f27-4dfd-92ea-a48d0abca428 Activity ID: 356020b0-7011-43e9-85b8-bad1cbefa94f, always for the same table name.

I really wish there were more info in that log.

 

Do you have any idea of what I could investigate or where I should look to get more information on the failure reason? Thanks a lot

nopeName
Frequent Visitor

I think I found what the problem actually was. The API request took so long that the Web.Contents call in Power BI timed out. I wish the refresh log were more explicit.

So I added [Timeout=#duration(0,0,15,0)] to the Web.Contents call, and now it works. I guess the server returning the data is faster or slower depending on its load, which means that sometimes it takes more than 5 minutes, while other times it takes slightly less. This would have been an easy fix if only the error included the row number that we see in the advanced editor.
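The fix described above can look something like this; the base URL and relative path are placeholders, and only the Timeout option reflects the actual change:

```m
// Raise the Web.Contents timeout to 15 minutes.
// #duration takes (days, hours, minutes, seconds).
let
    Response = Web.Contents(
        "https://api.example.com",
        [
            RelativePath = "data",
            Timeout = #duration(0, 0, 15, 0)
        ]
    ),
    Result = Json.Document(Response)
in
    Result
```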

 

I wish I could use incremental refresh, but I can't: my understanding of that feature is that I need something like an "updatedAt" field to know whether a row has changed or not. In our case, even rows with an old date in the date column can have their data changed.