Whitespace in source column name rejected by copy activity
I am using a pipeline with a copy activity to copy data from a SQL Server database. Since I loop through a list of tables, I get the mapping from a meta table that I prepare beforehand. However, if a source column name contains a space, I get the following error:
Failure happened on 'destination' side. ErrorCode=DeltaInvalidCharacterInColumnName,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Column name Location Code contains invalid characters. ",;{}()\n\t=" are not supported.,Source=Microsoft.DataTransfer.ClientLibrary,'
However, the failure is certainly NOT happening on the 'destination' side. The schema looks like this (it fails both with the enclosing backticks and without them):

```json
{
    "type": "TabularTranslator",
    "mappings": [
        {
            "source": { "name": "`timestamp`" },
            "sink": { "name": "timestamp" }
        },
        {
            "source": { "name": "`Location Code`" },
            "sink": { "name": "Location_Code" }
        }
    ]
}
```
Since the space is in the name of the source column, I can't change it. So what do I do?
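As a quick pre-flight check, the constraint from the error message can be tested in plain Python (a minimal sketch; function and variable names are just examples). Note that the space itself is the offender here, even though it is not listed in the quoted character set:

```python
# Characters the Delta sink rejects, per the error message (",;{}()\n\t="),
# plus the space, which also fails in practice.
INVALID_CHARS = set(' ,;{}()\n\t=')

def offending_chars(name):
    """Return the sorted list of characters in `name` that the sink rejects."""
    return sorted(set(name) & INVALID_CHARS)

print(offending_chars("Location Code"))  # [' ']
print(offending_chars("timestamp"))      # []
```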
Isn't there a way or setting, like we have in notebooks, so that it can read columns with spaces and save them with `_` instead of the space?
I was facing the same issue in a similar situation.
The only workaround I could find was to use a notebook, read the data into a dataframe, and then write it into a Delta table as below:
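For example, something along these lines (a minimal sketch; the Spark read/write calls are illustrative and assume a Fabric notebook context with a configured source):

```python
def sanitize(name: str) -> str:
    """Replace characters the Delta sink rejects (spaces and ",;{}()\\n\\t=") with "_"."""
    invalid = ' ,;{}()\n\t='
    return ''.join('_' if c in invalid else c for c in name)

# In a notebook (illustrative; assumes `spark` and valid connection options):
# df = spark.read.format("jdbc").options(...).load()
# df = df.toDF(*[sanitize(c) for c in df.columns])
# df.write.format("delta").mode("overwrite").saveAsTable("my_table")

print(sanitize("Location Code"))  # Location_Code
```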
Hi - were you able to resolve this? I'm encountering the same issue...
Hi @snraedsoeia
You can try using square brackets around the source column names instead of backticks. For example:

```json
{
    "type": "TabularTranslator",
    "mappings": [
        {
            "source": { "name": "[timestamp]" },
            "sink": { "name": "timestamp" }
        },
        {
            "source": { "name": "[Location Code]" },
            "sink": { "name": "Location_Code" }
        }
    ]
}
```
Regards,
Nono Chen
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
Nice try, but it doesn't work, yielding the same error message.
Just in case, the input for the copy activity really looks like this:
```json
{
    "source": {
        "type": "SqlServerSource",
        "queryTimeout": "02:00:00",
        "partitionOption": "None",
        "datasetSettings": {
            "annotations": [],
            "type": "SqlServerTable",
            "schema": [],
            "typeProperties": {
                "schema": "dbo",
                "table": "ABCDEFG",
                "database": "HIJKLMNOP"
            },
            "externalReferences": {
                "connection": "684354684651687654657684"
            }
        }
    },
    "sink": {
        "type": "LakehouseTableSink",
        "tableActionOption": "OverwriteSchema",
        "partitionOption": "None",
        "datasetSettings": {
            "annotations": [],
            "linkedService": {
                "name": "Sandbox",
                "properties": {
                    "annotations": [],
                    "type": "Lakehouse",
                    "typeProperties": {
                        "workspaceId": "XXX",
                        "artifactId": "XXX",
                        "rootFolder": "Tables"
                    }
                }
            },
            "type": "LakehouseTable",
            "schema": [],
            "typeProperties": {
                "table": "XXX"
            }
        }
    },
    "enableStaging": false,
    "translator": "{\n\t\"type\": \"TabularTranslator\",\n\t\"mappings\": [\n\t\t{\n\t\t\t\"source\": { \"name\": \"[timestamp]\" },\n\t\t\t\"sink\": { \"name\": \"timestamp\" }\n\t\t},\n\t\t{\n\t\t\t\"source\": { \"name\": \"[Location Code]\" },\n\t\t\t\"sink\": { \"name\": \"Location_Code\" }, *[here I removed the other columns]*\n\t]\n}"
}
```
Alright, I got it, and it was trivial in hindsight: the translator above is a literal string. To pass it correctly as a parameter, you have to write `@json(mapping_string)` instead of what I did, which was simply `@mapping_string`.
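The difference can be illustrated in plain Python (the parameter name is just an example): the pipeline parameter arrives as text, and `@json(...)` is what turns it into a structure the copy activity can interpret, analogous to `json.loads` here:

```python
import json

# The mapping as it arrives in a pipeline parameter: a literal string.
mapping_string = (
    '{"type": "TabularTranslator", "mappings": ['
    '{"source": {"name": "Location Code"}, "sink": {"name": "Location_Code"}}]}'
)

# "@mapping_string" passes the raw text through unchanged.
as_text = mapping_string

# "@json(mapping_string)" parses it into an object, analogous to:
as_object = json.loads(mapping_string)

print(type(as_text).__name__)    # str
print(type(as_object).__name__)  # dict
```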
