I often run MDX queries against Power BI datasets on the service, but I've recently started seeing an intermittent and unusual message:
"The session was cancelled because the session's current owning core service has changed."
I'm assuming this is just Power BI being Power BI. Should we keep retrying the query until it works?
Is there some reason why the Microsoft service wouldn't retry the operation itself whenever the "current owning core service has changed"? It seems a bit obnoxious for the service to send a meaningless message out to client applications and force them to do the retries. It seems like the sort of thing that could be kept contained within the service.
Sending errors back to a client normally assumes two things: (1) the error is coherent to the clients, and (2) the clients can act on the error, based on its type or message. Neither of these things is true here. The error is meaningless, and client applications don't have any guidance or strategy for handling it. Do we retry after one second? After an hour? Do we give up and go home for the day? Do we open a CSS support ticket? What are we supposed to do with this?
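Absent official guidance, the usual defensive pattern for errors like this is a client-side retry with exponential backoff. A minimal sketch in Python follows; note that matching on the message text and the `execute_query` callable are assumptions on my part, since Microsoft publishes no error code or retry policy for this condition:

```python
import random
import time

# Assumed marker text for this error; Microsoft documents no error code.
TRANSIENT_MARKERS = ("owning core service has changed",)

def is_transient(exc: Exception) -> bool:
    """Heuristic: treat the 'owning core' message as retryable."""
    msg = str(exc).lower()
    return any(marker in msg for marker in TRANSIENT_MARKERS)

def run_with_retry(execute_query, max_attempts=4, base_delay=2.0):
    """Call execute_query(); retry transient failures with exponential backoff plus jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return execute_query()
        except Exception as exc:
            if attempt == max_attempts or not is_transient(exc):
                raise  # give up: non-transient error, or out of attempts
            # delays of ~2s, ~4s, ~8s, ... with proportional random jitter
            time.sleep(base_delay * (2 ** (attempt - 1)) * (1 + random.random()))
```

In our case `execute_query` would wrap the ADOMD `ExecuteReader` call (or its equivalent in whatever client you use). The backoff caps the retry burden; anything non-transient still surfaces immediately.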
The error was surfaced through the NuGet package for making ADOMD queries from .NET (Microsoft.AnalysisServices.AdomdClient):
... at Microsoft.AnalysisServices.AdomdClient.AdomdConnection.XmlaClientProvider.Microsoft.AnalysisServices.AdomdClient.IExecuteProvider.ExecuteTabular
at Microsoft.AnalysisServices.AdomdClient.AdomdCommand.ExecuteReader
Any help would be appreciated. At this point I suspect the error originates in the Power BI service rather than in the NuGet client package. However, I haven't found many results in my Google searches so far.
Hi @dbeavon3
MDX queries have to be converted into DAX queries, so if possible, could you rather use DAX queries directly? That should make them run faster.
Also, the "owning core change" occurs when the dataset is moved to another node. This can happen if the dataset needs more memory than is currently available on the node it is on. Hence the suggestion to change the queries to DAX, which uses less memory.
Thanks for the tips. MDX queries against datasets are still very common (Excel uses them, import-mode PQ uses them, and so on).
In general we don't have problems with RAM. We use about 5 GB, and our capacity is 25 GB. The only time we have memory problems is when a dataset refresh involves certain types of features (calculated columns and user hierarchies). That behavior in the service seems pretty bad, but it is rare and doesn't typically impact the interactive users or remote MDX clients.
Have you encountered this error in the past? I wasn't able to get any additional information from my Google searches. I don't even know what an "owning core service" is. What does it own? The dataset? Why does the message say it belongs to the session? Is it purely based on the relationship between the session and the dataset the session is using?
I do know that there are four front-end cores that are used for P1 query purposes. But I assumed that they remain statically hosted on a VM for long periods of time. Maybe there was a Microsoft-initiated maintenance operation that caused this. If that is the case, then I wouldn't worry about programming my own workaround.
Please let me know. I would like to contact CSS at some point, if this becomes a serious problem. But I don't know how to quantify that.
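One low-effort way to quantify "serious problem" before opening a CSS ticket is to tally each occurrence with a timestamp and report the rate over a window. A sketch, nothing Power BI-specific, just an event counter the retry code could feed:

```python
from datetime import datetime, timedelta, timezone

class ErrorTally:
    """Count occurrences of an error message so its frequency can be reported."""

    def __init__(self):
        self.events = []  # list of (timestamp, message) tuples

    def record(self, message: str) -> None:
        """Log one occurrence at the current UTC time."""
        self.events.append((datetime.now(timezone.utc), message))

    def count_since(self, hours: float) -> int:
        """How many occurrences were recorded in the last `hours` hours."""
        cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
        return sum(1 for ts, _ in self.events if ts >= cutoff)
```

A count like "N occurrences per day across M report users" is the kind of concrete figure a support case can be built on.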
Hi @dbeavon3
I have seen this happen when it is swapping out to a different node.
It can happen if there are other datasets on the same node using too much memory.
@GilbertQ
We only have P1.
I'm assuming that Microsoft would host the entire P1 capacity ("node"?) on a single VM.
You said they would throw this error "if there are other datasets on the same node using too much memory". Are you referring to datasets of other customers? I assume so. If you are referring to our own datasets, then moving the 8 vCores to another VM would be unlikely to change anything, since the same datasets would still need to be loaded into memory to service the current client queries.
>> I have seen this happen when it is swapping out to a different node
Is there a way to get telemetry that says when this happens? If we start encountering this error on a frequent basis, I'm assuming it means Microsoft is under-provisioning RAM. I always assumed that the 25 GB was pre-allocated and dedicated to us (i.e. because we paid for it). But if you are saying that Microsoft doesn't actually give us the RAM until we start to need it, then the conclusion is that everyone is over-paying for their premium capacity. It sounds like Microsoft is charging us for RAM but is NOT setting it aside for our exclusive use.
We continue to get this crazy message on a regular basis.
The session was cancelled because the session's current owning core service has changed.
Is there a reason Microsoft doesn't create documentation and instructions on what to do when Power BI customers encounter this?
The message is meaningless, yet it is surfaced to users and even developers (like Python programmers) who have no clue what it means or what Microsoft intends them to do.
I don't understand why this message should come up at all. If Microsoft wants to move customers to new P1 nodes, then they should queue the queries and run them on the new nodes instead of letting them die, especially when they are short-running queries (under a minute). The current behavior in the service is just silly.