I'm getting the following data source error, apparently because the "Shipping" column was renamed or removed from the original CSV file (on OneDrive):

```json
{"error":{"code":"ModelRefresh_ShortMessage_ProcessingError","pbi.error":{"code":"ModelRefresh_ShortMessage_ProcessingError","parameters":{},"details":[{"code":"Message","detail":{"type":1,"value":"The column 'Shipping' of the table wasn't found."}}],"exceptionCulprit":1}}}
```

Table: Master Sales History.
@trademobilemart, open the query via Edit Queries/Transform Data. In the Power Query editor, find the step that sets the column data types (usually "Changed Type") and remove the 'Shipping' column from its column list. If the column was only renamed in the source file, update the name in that step instead.
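For reference, the failing step typically looks something like this in the Advanced Editor (a sketch only; the step name, column names, and types here are hypothetical and will differ in your query):

```m
// Hypothetical "Changed Type" step. If 'Shipping' no longer exists in the CSV,
// this step fails with: "The column 'Shipping' of the table wasn't found."
#"Changed Type" = Table.TransformColumnTypes(
    #"Promoted Headers",
    {{"OrderID", Int64.Type}, {"Shipping", Currency.Type}}
)
// Fix: delete the {"Shipping", Currency.Type} pair, or change "Shipping" to
// the column's new name in the source file.
```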
Hi @trademobilemart ,
Something to look out for is the fact that CSV files are imported using a fixed number of columns: the number of columns in the source when it was first imported. This means that if a column is later added to your source CSV, the furthest-right column in the file is ignored. Not sure if this is your case, but it's very useful to know if you're dealing with CSV sources nonetheless.
You can fix this so that the query always picks up all columns in the source as follows:
In Power Query, go to your Source step and completely delete the Columns argument from the Csv.Document call. With that argument removed, you will always import all available columns as they are added to the source.
Pete
Proud to be a Datanaut!
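The change Pete describes looks roughly like this in the Advanced Editor (a sketch; the file path and the column count of 15 are hypothetical, and your Delimiter/Encoding options may differ):

```m
// Before: the Columns argument pins the import to a fixed column count,
// so columns added to the file later are silently ignored.
Source = Csv.Document(
    File.Contents("C:\data\Master Sales History.csv"),
    [Delimiter = ",", Columns = 15, Encoding = 65001, QuoteStyle = QuoteStyle.None]
)

// After: with Columns removed, Power Query imports every column
// present in the file on each refresh.
Source = Csv.Document(
    File.Contents("C:\data\Master Sales History.csv"),
    [Delimiter = ",", Encoding = 65001, QuoteStyle = QuoteStyle.None]
)
```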