Microsoft Fabric Community Conference 2025, March 31 - April 2, Las Vegas, Nevada. Use code MSCUST for a $150 discount.
Hi Community,
We are facing an issue while handling schema changes in Power BI Import Mode after deployment to Power BI Service.
Power BI Service does not automatically handle schema changes in Import Mode, causing dataset refresh failures.
How can we handle schema changes dynamically in Power BI Import Mode without breaking the dataset refresh in Power BI Service?
Are there best practices or workarounds to avoid such errors when new columns are added dynamically?
Would appreciate any guidance or solutions! Thanks in advance.
#PowerBI #ImportMode #SchemaChange #PowerBIService #DynamicColumns
Hi @J_Balaji,
Thank you for reaching out to Microsoft Fabric Community Forum.
If there is a possibility, consider switching to DirectQuery mode. If UDFs change frequently and it is hard to predict which columns might be added, DirectQuery could be beneficial.
With DirectQuery, queries are executed against the source in real time, so you don't need to pre-define columns in Power BI. The data is retrieved directly from the source at runtime, adapting to any changes in the schema without requiring adjustments to the model.
If this post helps, please consider Accepting it as a solution to help other members find it more quickly, and don't forget to give a "Kudos" – I'd truly appreciate it!
Regards,
Vinay Pabbu
Hi @v-vpabbu , Thanks for the response.
We can't go with DirectQuery mode, since the data doesn't change that frequently, and neither do the columns.
Moreover, we have incorporated complex YTD and to-date DAX measures in the file, which would underperform in DirectQuery mode.
Please suggest any other options you may have.
Hi @J_Balaji,
Manually refresh the schema in Power BI Desktop when columns change, update transformations as needed, verify the dataset, and re-publish it to Power BI Service.
Regards,
Vinay Pabbu
Hi @J_Balaji,
As we haven't heard back from you, we wanted to kindly follow up to check whether the solution provided resolved your issue. Let us know if you need any further assistance.
If our response addressed your issue, please mark it as "Accept as solution" and click "Yes" if you found it helpful.
Regards,
Vinay Pabbu
Hi @J_Balaji,
We wanted to kindly follow up to check whether the solution provided resolved your issue. Let us know if you need any further assistance.
If our response addressed your issue, please mark it as "Accept as solution" and click "Yes" if you found it helpful.
Regards,
Vinay Pabbu
Hi @ibarrau. Firstly, thanks a lot for taking the time to reply to my post.
I agree with your answer, but I wanted to add some extra info.
We are facing the above issue in a "self-service" Power BI report, where end users create their own visuals from the deployed dataset.
They also have access to create extra fields under the "User Defined Fields" category in the ERP, which is what I referred to as "extra columns / new columns" earlier.
So we use the same semantic model across multiple databases and multiple users, where the UDFs won't be the same across all cases, and in some cases there are no UDFs at all.
In such cases, when we switch from one environment (having UDFs) to another (with no UDFs) by changing the Server and DB parameters, we encounter the error shown in the screenshot.
Any thoughts that will help us in this context would be appreciated.
Following your reply, I tried modifying the query in Power Query Editor to just "Select * from table" rather than listing the necessary fields as before, and tried reproducing the scenario with an environment switch in the Power BI Service.
But I encountered the same error again.
Alright, let's talk about new alternatives. If the UDF scenario brings in the whole universe of columns and data, then the best approach is to get all of that data into the semantic model. You can then use OLS (object-level security) to prevent users from viewing certain columns.
https://learn.microsoft.com/en-us/fabric/security/service-admin-object-level-security?tabs=table
As an alternative, if you only have two possible sets of columns, you could build two semantic models. This approach works, but it means rework each time you want to perform some operation on the table.
Coming back to the Power Query code: try picking the table from the UI when connecting to the source, instead of writing a query. Then make sure no later step references column names. By default, the engine adds a "Changed Type" step each time you get data, and changing types always references specific columns in the code.
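As a minimal sketch of that idea in Power Query M (the server/database parameters, table name `Orders`, and the column list are assumptions, not from the original post), you can select only the columns the report needs and tell the engine to tolerate missing ones:

```
let
    // Reference the table via the navigator instead of a native SQL query
    Source = Sql.Database(Server, Database),
    Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data],
    // Keep only the columns the report actually uses;
    // MissingField.Ignore prevents a refresh failure when a listed
    // column does not exist in this environment
    Selected = Table.SelectColumns(
        Orders,
        {"OrderID", "OrderDate", "Amount"},
        MissingField.Ignore
    )
in
    Selected
```

With this shape, extra UDF columns in one environment are simply dropped, and environments without them refresh without error.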
I hope that helps,
Happy to help!
@ibarrau I should think about the OLS option, but we already use RLS, which is a point to consider.
But the main point is that we don't know the number of new columns, their names, or when they are going to be added to the database.
Regarding the alternate option of changing the model in Desktop and republishing each time: that won't work in our case, since we want to automate the whole process through pipelines.
Hi. As a good practice, knowing the columns you need for the report analysis is key. Consider this: it doesn't matter whether the source has 7, 8, or 10 columns depending on the SQL server you pick, as long as you know the 5 columns you should use. Unless those 5 change, you won't care about extra columns, because even if they are added to the semantic model, they won't appear in the report. The visualizations are built with the 5 you need; there is no dynamic changing of visual columns. The problem arises only if those 5 necessary columns change, because then the visuals will break and you need those columns for the analysis.
My recommendation is to analyze the sources and the changes to the tables, to make sure you can rely on the columns you need for the dashboards. Otherwise it makes no difference whether you can dynamically add columns to the semantic model or not: the visuals will break anyway when a necessary column changes, and a new, unused column won't matter.
Consider that the Power Query code is the key to dynamic operations. If a step in the code asks for a specific column name, then that column cannot be missing from the source. If you want to keep the model dynamic, you need to find a way to reference everything without writing column names in the code.
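One way to sketch that "no hard-coded names" idea in M (the `Source` step is assumed to be whatever step produced your table, and typing everything as text is just an illustrative choice): derive the column names at runtime with `Table.ColumnNames` instead of letting the auto-generated "Changed Type" step write them into the code.

```
let
    // Source is assumed to be an earlier step that returns the table
    // Build the type-conversion list at runtime from the actual column
    // names, so no column name is hard-coded in this step
    Typed = Table.TransformColumnTypes(
        Source,
        List.Transform(Table.ColumnNames(Source), each {_, type text})
    )
in
    Typed
```

The same pattern (iterate over `Table.ColumnNames`) works for renames, trims, or any other per-column transformation that would otherwise pin the query to a fixed schema.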
I hope that helps,
Happy to help!