Hi - unfortunately I have column names like "Fix Version/s" that contain a "/". That character is not Spark-compatible, so Delta table columns cannot carry these names. In Copy Job I have set up a column mapping to a valid column name (in this case "Fix Version_s"), but the mapping is ignored by Copy Job, which fails on the destination side:
Failure happened on 'destination' side. ErrorCode=DeltaInvalidCharacterInColumnName,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Column name Issue Type contains invalid characters. ",;{}()\n\t=" are not supported.,Source=Microsoft.DataTransfer.ClientLibrary,'
Is there a fix in the pipeline?
Hi @Magic_Mads
Thank you for reaching out to the Microsoft Fabric Community Forum.
Copy Job failures can occur when loading data into Delta Lake tables if column names contain unsupported special characters, such as slashes ("/") or spaces (for example, "Fix Version/s" or "Issue Type"). Spark and Delta Lake enforce strict column naming rules, and Copy Job performs schema validation before any column mapping or renaming is applied. As a result, jobs will fail if the original schema includes invalid column names.
To avoid this, ensure that all column names are compliant before initiating the Copy Job. For sources like SQL Server, consider creating a SQL view that uses aliases to rename columns (e.g., SELECT [Fix Version/s] AS Fix_Version_s FROM OriginalTable). For file-based sources such as CSV or Parquet, you can utilize a Spark notebook to load the data, rename columns as needed, and write the cleansed data to a new Delta table.
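As an illustration, a minimal notebook sketch could look like the following; the file path and table name are placeholders, and it assumes a CSV source with a header row:

# Minimal PySpark sketch: load the source file, rename the offending
# columns, and write the result to a new Delta table.
# "Files/raw/issues.csv" and "cleaned_issues" are placeholder names;
# 'spark' is the session a Fabric notebook provides by default.
df = spark.read.option("header", True).csv("Files/raw/issues.csv")

# Rename the columns that contain Spark-incompatible characters.
df = (df
    .withColumnRenamed("Fix Version/s", "Fix_Version_s")
    .withColumnRenamed("Issue Type", "Issue_Type"))

# Write the cleaned data as a Delta table; Copy Job or any downstream
# consumer can then read from this table without hitting the validation.
df.write.format("delta").mode("overwrite").saveAsTable("cleaned_issues")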
In low-code environments, Dataflow Gen2 in Microsoft Fabric enables column renaming using the Select transformation prior to writing to Delta. Please note that preprocessing through views, notebooks, or dataflows is currently the most effective way to manage unsupported column names, as column mapping in Copy Job alone is not sufficient due to the timing of schema validation.
Regards,
Karpurapu D,
Microsoft Fabric Community Support Team.
Hi @Magic_Mads
I wanted to check if you’ve had a chance to review the information provided. If you have any further questions, please let us know. Has your issue been resolved? If not, please share more details so we can assist you further.
Thank You.
Hi @Magic_Mads
We have not yet received a response from you regarding whether your query has been resolved. If it has not, please share more details so we can assist you more effectively.
Thank You.
I still have no clear answer on whether SQL Server Mirroring will be able to handle column names containing "/".
Hi @Magic_Mads
We have not yet heard back from you about whether the response addressed your query. If it did not, please share more details so we can assist you more effectively.
Thank You.
Hi @Magic_Mads
We are following up to see if you have had the chance to review the information provided. If you have any further questions, please do not hesitate to contact us. Could you confirm whether your query has been resolved by the solutions provided by @burakkaragoz and @suparnababu8? If not, please provide detailed information so we can better assist you.
Thank You.
Hi @Magic_Mads
You can rename your column to something like Fix_Version_s and give it a try. Let me know if it works.
Thank you!
Did I answer your question? Mark my post as a solution!
Proud to be a Super User!
Hi @Magic_Mads ,
Yeah, this is a pretty common headache when working with Spark/Delta and Copy Job tasks. The "/" character and some others like ":", ";", line breaks and so on just aren't allowed in Delta table column names (or Spark in general), and Copy Job is quite strict about this. Even if you try to do column mapping to a new valid name, the underlying engine still checks the original schema and throws the error if the source has unsupported characters.
From what I know, currently there's no workaround inside Copy Job itself for this – it doesn't remap the columns before schema validation. The only way I got around it in the past was to rename those columns in the source before the Copy Job runs, so the schema is already Spark-friendly. If that's not possible in your flow, some folks script a pre-step to create a view or intermediate table with the cleaned column names, then copy from there.
I haven’t seen any updates about a fix in the pipeline right now, but I’d recommend keeping an eye on the release notes or submitting feedback to Microsoft as well. They occasionally expand support for naming quirks, but for now, it’s a bit of a manual workaround.
Hope that helps a bit! If you want, I can share a sample script for renaming columns in Spark or SQL before running the job.
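In the meantime, here is a rough sketch of that kind of pre-step in PySpark. It derives compliant names automatically instead of renaming them one by one; the path and table name below are just placeholders:

import re

# Characters Delta/Spark rejects in column names (per the error message),
# plus "/" and spaces; each run of them becomes a single underscore.
INVALID = re.compile(r'[ ,;{}()\n\t=/]+')

def sanitize(name):
    # Note: if two different names collapse to the same string,
    # you would need extra de-duplication on top of this.
    return INVALID.sub('_', name).strip('_')

# Placeholder source: any DataFrame whose columns may be non-compliant.
df = spark.read.option("header", True).csv("Files/raw/source.csv")

# toDF(*names) renames all columns positionally in one pass.
clean_df = df.toDF(*[sanitize(c) for c in df.columns])

# Write the cleaned intermediate table, then point Copy Job at it.
clean_df.write.format("delta").mode("overwrite").saveAsTable("staging_clean")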
If my response resolved your query, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.
Translation & text editing supported by AI