Hello,
I am trying to bring multiple Excel files and SharePoint lists into the Power BI service and load them into the Lakehouse.
To achieve that, I am using a Dataflow Gen2 with a connector for each Excel file I am importing, so I can import all of the tables. I transform the data within the dataflow and make sure that every column has an assigned type (only text, decimal, number or date). I have also disabled staging for those tables.
After that, I set the Lakehouse I created as the data destination for each table and, in the process, let it create a new table with update method: Replace and Schema options on publish: Dynamic schema.
Up to this point everything works fine; when I refresh, the data within the Lakehouse is updated as well and the old table's data is replaced.
The problem occurs when I create a new schema within the Lakehouse. I move the tables into the newly created schema, then go back to the dataflow and, in the destination connection, select 'Existing table'. In the next step I leave the update method and publish option the same, make sure that the column mapping of my table matches the table that exists within my schema, and publish the flow.
When the flow now tries to update, I get the error message below:
Change_WriteToDataDestination: Mashup Exception Expression Error Couldn't refresh the entity because of an issue with the mashup document MashupException.Error: Expression.Error: The value does not support versioning. Details: Reason = Expression.Error;ErrorCode = 10855;Detail = #table({"Title", "Submitted", "Submitter", "Description", "URL", "Meets_Change__criteria_Y/N"}, {});Microsoft.Data.Mashup.Error.Context = User
Any ideas? The specific table has only text-format columns.
Hello @biedw, thanks for your concern about this issue.
I would like to share some additional solutions below.
Hi @Kostas, I am glad to help you.
I have carried out a series of tests and come to some conclusions; I hope the following suggestions will be helpful to you.
You need to choose a different mode depending on your actual situation:
1. When your schema changes are limited to simple ones (column name changes or append operations), you can choose Existing table to map to the selected existing table.
It should be noted that in this case the destination table does not support adding or deleting columns, only small changes.
2. When the schema change involves adding new columns (or deleting columns), you need to select New table and turn on the Use automatic settings option.
I have discussed your case with other members of the team and recommend using New table instead of Existing table, as the scope of supported schema changes differs between these two modes.
Also make sure that the data types in the data source are consistent and set up correctly.
No matter which mode you use, when the data source changes the first thing you need to do is refresh the dataflow itself (this is the most important point).
Choosing Existing table: does not support adding or deleting columns in the table.
Choosing New table: supports adding or deleting columns in the table.
When I add a column to my data source, the system maps it automatically (with the Use automatic settings option turned on).
Here are the details of my test procedure:
1. Choose New table
Create a Dataflow Gen2.
Then modify the schema of the data source (add a new column).
After refreshing the Dataflow Gen2, it shows that the new column has been added in the dataflow.
At this point, if I check the Lakehouse I linked to before refreshing the dataflow, I find that it does not match the dataflow (it still shows the old columns; no schema change has occurred).
The dataflow needs to be refreshed before the new column can be mapped properly.
At this point you need to turn on the Use automatic settings toggle in the top-left corner. Once it is turned on, the system automatically detects any changes to the data source schema and updates the destination accordingly.
After updating the data source schema again and refreshing the dataflow, the mapped table successfully had columns added/removed.
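One way to confirm whether a schema change has actually reached the Lakehouse table (independently of what the dataflow editor shows) is to inspect the Delta table from a Fabric notebook attached to that Lakehouse. This is only a minimal sketch; 'dbo.MyTable' is a placeholder name, not something from this thread:

```python
# Print the destination table's current column list and types.
# 'dbo.MyTable' is a placeholder; substitute your own schema and table name.
spark.table("dbo.MyTable").printSchema()

# Optionally, check the Delta history to see when the schema last changed
# and which operation caused it.
spark.sql("DESCRIBE HISTORY dbo.MyTable") \
    .select("version", "timestamp", "operation") \
    .show(truncate=False)
```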
2. Choose Existing table
The data source schema is updated again (one new column added).
After updating the data source schema, I refreshed the dataflow and edited it afterwards; the system shows that the refresh was successful.
The dataflow shows the added column.
But the problem is that the table in the Lakehouse does not get the new column as expected (unlike the final result with New table above).
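If you want to keep the Existing table mapping in spite of this, the new column would have to be added to the Lakehouse table by hand, because the destination will not add it for you. Delta tables accept column additions through Spark SQL; the sketch below uses placeholder names and is only an assumption about a possible manual fix, and whether the dataflow mapping then lines up would still need to be re-tested:

```python
# Manually add the new column to the existing Delta table so that its schema
# matches the changed source. Schema, table and column names are placeholders.
spark.sql("ALTER TABLE dbo.MyTable ADD COLUMNS (NewColumn STRING)")
```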
To summarize: when the schema changes are limited to simple ones (column name changes or append operations), you can select Existing table to map to the selected existing table, but in that case the table does not support adding or deleting columns. When the schema change involves adding or deleting columns, you need to select New table and turn on the Use automatic settings option.
References:
Dataflow Gen2 data destinations and managed settings - Microsoft Fabric | Microsoft Learn
Dataflow Gen2 Dynamic Schema Not Updating - Microsoft Fabric Community
If you still have the problem after retesting, please provide relevant screenshots (without sensitive data).
I hope my suggestions give you some good ideas; if you have any more questions, please clarify in a follow-up reply.
Best Regards,
Carson Jian
If this post helps, then please consider Accepting it as the solution to help the other members find it more quickly.
Hello @v-jtian-msft and thanks for your comprehensive response.
To be honest, this is the first time I am using these features and I did multiple tests, so most of the points above about adding/deleting columns in the original table in the dataflow, and those changes not migrating to the Lakehouse, are things I had already concluded during my testing.
The final part that I don't understand is the same issue that @biedw raised in his comment; more detailed steps below:
1) Create a Lakehouse.
2) Add a new source in the Lakehouse --> Dataflow Gen2.
3) Import a table from an Excel workbook in SharePoint --> import a specific table, let's call it 'Reviewers', with 4 text columns inside.
4) Transform: change the column types to 'text'.
5) Set the data destination, as per your screenshots, to the Lakehouse, using 'Automatic settings' to replace the existing table and 'Dynamic schema'.
6) Publish.
Now my table is under the default dbo schema and, when I refresh the dataflow, it automatically replaces the old data in the table with the new data and works correctly.
As it is a Lakehouse, though, and I have multiple sources, to organise it better I want to create schemas and place the tables under those. To do that, I take the steps below:
1) Go into the Lakehouse and create a new schema called 'Schema 1' (a notebook equivalent of steps 1 and 2 is sketched after this list).
2) Move the table created earlier into 'Schema 1'.
3) Navigate to the existing dataflow and, in the data destination, now choose 'Existing table'.
4) Make sure that the columns match the existing columns and types (the system pre-populates them) and select Publish.
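For reference, steps 1) and 2) above (creating the schema and moving the table) can also be scripted from a Fabric notebook attached to the Lakehouse. This is only a sketch under assumptions: it presumes a schema-enabled Lakehouse, uses 'Schema1' and 'Reviewers' as illustrative names (real schema names should avoid spaces), and "moves" the managed Delta table by copy-and-drop rather than a rename:

```python
# Create the target schema in the schema-enabled Lakehouse (illustrative name).
spark.sql("CREATE SCHEMA IF NOT EXISTS Schema1")

# "Move" the table: copy it into the new schema, then drop the original.
spark.sql("CREATE TABLE Schema1.Reviewers AS SELECT * FROM dbo.Reviewers")
spark.sql("DROP TABLE dbo.Reviewers")
```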
Now, in that scenario, the service returns the error that I included when I first raised the issue.
Please be aware that no changes were made to the original table (no columns added/removed or types changed), so we are talking about the exact same table.
Lastly, I did another test in which, in the original load to the destination (Lakehouse), instead of selecting 'Dynamic' schema I selected the 'Fixed' schema option. I then followed the steps above (publish, move the table to the schema in the Lakehouse, point the dataflow at the table in the schema, and refresh the dataflow) and again it returned the same error:
Change_WriteToDataDestination: Mashup Exception Expression Error Couldn't refresh the entity because of an issue with the mashup document MashupException.Error: Expression.Error: The value does not support versioning. Details: Reason = Expression.Error;ErrorCode = 10855;Detail = #table({"Title", "Submitted", "Submitter", "Description", "URL", "Meets_Change__criteria_Y/N"}, {});Microsoft.Data.Mashup.Error.Context = User
I hope that makes sense and clarifies the specific issue.
Hi @Kostas, I am glad to help you.
Since there are other people experiencing the same problem as you, and I have not been able to find the same error message documented anywhere either, I suspect this is due to a current issue with Dataflow Gen2. You could try to get more help by creating a support ticket.
The link for Power BI support:
Microsoft Fabric Support and Status | Microsoft Fabric
I hope my suggestions give you some good ideas; if you have any more questions, please clarify in a follow-up reply.
Best Regards,
Carson Jian
If this post helps, then please consider Accepting it as the solution to help the other members find it more quickly.
I am also getting the same "value does not support versioning" error. In my case it appears to be due to using a Lakehouse table with a schema as the destination in a Dataflow Gen2. If I change the destination to use a table in the [dbo] schema, the dataflow works.
I guess this is the fun part of using preview features 🙂
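Until the non-dbo destination behaviour is fixed, one possible interim workaround (only a sketch under assumptions, not confirmed guidance) is to leave the Dataflow Gen2 destination pointed at a dbo table and refresh a copy of it under the custom schema afterwards, for example from a notebook step scheduled after the dataflow refresh. Schema and table names below are illustrative:

```python
# Post-refresh step in a Fabric notebook attached to the Lakehouse.
# Overwrite the copy under the custom schema with the freshly refreshed dbo data.
# 'Schema1' and 'Reviewers' are placeholders.
spark.sql("""
    CREATE OR REPLACE TABLE Schema1.Reviewers
    AS SELECT * FROM dbo.Reviewers
""")
```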