I have a large semantic model on an F64 Fabric capacity. When I hit the Refresh now button after republishing from Desktop, sometimes the model is compressed correctly as a large semantic model (i.e., 8 million rows per segment) and sometimes it remains compressed as a small semantic model (1 million rows per segment).
I don't know which process the Refresh now button initiates. Is it a Process Full, or something else?
When I use TE3 to run a full refresh of the model, it works fine. So my question is only the one mentioned above.
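For context, I believe the full refresh TE3 runs corresponds to a TMSL refresh command of type "full" sent over the XMLA endpoint. The sketch below is only an illustration, not my actual setup: the workspace and model names are placeholders and authentication details are omitted.

# Minimal sketch, assuming the SqlServer PowerShell module (Invoke-ASCmd).
# "MyWorkspace" and "MyLargeModel" are placeholder names.
$tmsl = @'
{
  "refresh": {
    "type": "full",
    "objects": [ { "database": "MyLargeModel" } ]
  }
}
'@

# Invoke-ASCmd accepts TMSL/XMLA scripts; pass -Credential or sign in
# interactively as required by your tenant.
Invoke-ASCmd -Server "powerbi://api.powerbi.com/v1.0/myorg/MyWorkspace" -Query $tmsl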
Hi @muhssamy,
Has your problem been solved?
If you have found a suitable solution, please share it, as it will help more users with similar problems.
Alternatively, you can mark the helpful suggestions provided by other users as solutions.
I hope my suggestions give you good ideas. If you have any more questions, please clarify in a follow-up reply.
Best Regards,
Carson Jian,
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
Hi @muhssamy, I am glad to help you.
According to your description, you can refresh the whole model successfully with TE3, but a question remains: when the semantic model is converted to a large semantic model, Power BI should automatically set the default segment size to 8 million rows, yet you find that sometimes it still shows the 1-million-row segments of a small semantic model.
In theory, once the large semantic model format has been enabled, that configuration should not change again (the model should consistently use 8-million-row segments), so this behavior is rather strange.
It is possible that the large semantic model is not compressed as expected after a refresh because of a lack of available capacity on the service at the time of the refresh (i.e., when you click Refresh now again, compression fails).
You can view the actual configuration by checking it with a PowerShell command, or by exporting the gateway logs with the Power BI on-premises data gateway application.
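As a rough sketch only (the workspace and model names below are placeholders, and TargetStorageMode is the property I would expect to reflect the storage format; please verify against the documentation linked below), such a PowerShell check could look like this:

# Minimal sketch, assuming the MicrosoftPowerBIMgmt module is installed.
Connect-PowerBIServiceAccount

# "MyWorkspace" and "MyLargeModel" are placeholder names.
$workspace = Get-PowerBIWorkspace -Name "MyWorkspace"
$dataset   = Get-PowerBIDataset -WorkspaceId $workspace.Id |
             Where-Object { $_.Name -eq "MyLargeModel" }

# TargetStorageMode should read "PremiumFiles" for the large semantic model
# format and "Abf" for the default small format.
$dataset | Select-Object Name, TargetStorageMode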
You can also manage the workspace via the XMLA endpoint. Related information: use SSMS to check the estimated semantic model size in the Model Properties window.
Here is the official documentation on the subject, which we hope will help you:
URL:
Large semantic models in Power BI Premium - Power BI | Microsoft Learn
I hope my suggestions give you good ideas. If you have any more questions, please clarify in a follow-up reply.
Best Regards,
Carson Jian,
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
Can I verify this from Log Analytics: "It is possible that the large semantic model is not compressed as expected after a refresh due to the lack of available capacity on the current service when performing a refresh"?
The change to small-model compression happens after I re-publish the model from Power BI Desktop and initiate Refresh now.
Hi @muhssamy, thank you for your reply.
Unfortunately, I don't have any experience with detecting large semantic models at the moment, but here are some suggestions.
You could try creating a new workspace with the large semantic model setting turned on, then uploading individual reports of different sizes to see which scenarios cause compression to fail and leave the model in the small semantic model format.
I've searched the site and found some documentation on testing and managing workspaces that I hope could help you out.
URL:
Using Azure Log Analytics in Power BI - Power BI | Microsoft Learn
New ‘ExecutionMetrics’ event in Azure Log Analytics for Power BI Semantic Models | Micro...
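As a very rough illustration (the table and column names are my assumptions based on the articles above, and the Log Analytics workspace ID and model name are placeholders), a query along these lines could surface the refresh commands issued around the time compression does not behave as expected:

# Minimal sketch, assuming the Az.Accounts and Az.OperationalInsights modules
# and a Log Analytics workspace already connected to the Power BI workspace.
Connect-AzAccount

$kql = @'
PowerBIDatasetsWorkspace
| where TimeGenerated > ago(1d)
| where ArtifactName == "MyLargeModel"          // placeholder model name
| where OperationName in ("CommandBegin", "CommandEnd")
| project TimeGenerated, OperationName, DurationMs, EventText
| order by TimeGenerated desc
'@

# "<log-analytics-workspace-id>" is a placeholder for the workspace GUID.
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<log-analytics-workspace-id>" -Query $kql
$result.Results | Format-Table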
Can you share how you are examining the workspace that has the large semantic model feature enabled when it is refreshed, and how you are detecting whether it compresses properly into a large semantic model after refreshing, rather than incorrectly remaining a small semantic model?
This will help more users on the forum. Looking forward to your reply.
I hope my suggestions give you good ideas. If you have any more questions, please clarify in a follow-up reply.
Best Regards,
Carson Jian,
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
"Can you share how you are looking at the workspace that has been set up with the large semantic model feature when refreshed, and how you are detecting if it is compressing properly to a large semantic model after refreshing, rather than incorrectly remaining as a small semantic model?
This will help more users on the forum. Looking forward to your reply"
For this:
We have configured the model itself as a large semantic model, not all the models in the Fabric workspace.
I am detecting the compression through VertiPaq Analyzer, in the Columns tab; you can see the relation between # Segments and # Rows.
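For anyone who wants to check the same thing without VertiPaq Analyzer, I believe the segment row counts can also be read from the Analysis Services DMVs over the XMLA endpoint. The sketch below is only illustrative; the workspace and model names are placeholders and authentication is omitted.

# Minimal sketch, assuming the SqlServer module (Invoke-ASCmd) and XMLA read access.
# RECORDS_COUNT should be close to 8 million per segment for a correctly
# compressed large semantic model, versus about 1 million for a small one.
Invoke-ASCmd -Server "powerbi://api.powerbi.com/v1.0/myorg/MyWorkspace" `
    -Database "MyLargeModel" `
    -Query 'SELECT TABLE_ID, SEGMENT_NUMBER, RECORDS_COUNT FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS'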
Hi @muhssamy, thank you for your reply.
Thank you very much for sharing your technique; using VertiPaq Analyzer to analyze the data model is a good approach.
Regarding your question about why the large semantic model setting sometimes does not take effect and the model still shows up as a small semantic model: this can be caused by a variety of issues. Besides the capacity issue I mentioned before, it may also be related to the refresh operation, data storage limits, or access permissions. You can check the relevant parameters in detail to see whether the conversion fails because of one of these other settings.
Of course, the most important thing is to check whether the failed conversion to the large semantic model format affects your actual data refresh or report display. I wish you a speedy resolution.
I hope my suggestions give you good ideas. If you have any more questions, please clarify in a follow-up reply.
Best Regards,
Carson Jian,
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
I looked into Log Analytics and captured the command sent to Analysis Services to refresh the model.
It is a full refresh, as per the following query (I removed only my database ID).
Also refer to this link to understand the RefreshType parameter values:
[MS-SSAS-T]: Refresh Model | Microsoft Learn
<Batch Transaction="true" xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Refresh xmlns="http://schemas.microsoft.com/analysisservices/2014/engine">
    <DatabaseID>123</DatabaseID>
    <MaxParallelism>6</MaxParallelism>
    <Model>
      <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:sql="urn:schemas-microsoft-com:xml-sql">
        <xs:element>
          <xs:complexType>
            <xs:sequence>
              <xs:element type="row" />
            </xs:sequence>
          </xs:complexType>
        </xs:element>
        <xs:complexType name="row">
          <xs:sequence>
            <xs:element name="RefreshType" type="xs:long" sql:field="RefreshType" minOccurs="0" />
          </xs:sequence>
        </xs:complexType>
      </xs:schema>
      <row xmlns="urn:schemas-microsoft-com:xml-analysis:rowset">
        <RefreshType>1</RefreshType>
      </row>
    </Model>
  </Refresh>
  <SequencePoint xmlns="http://schemas.microsoft.com/analysisservices/2014/engine">
    <DatabaseID>123</DatabaseID>
  </SequencePoint>
</Batch>
The question remains: why was the large semantic model not compressed as expected after the refresh?