Hi there,
I've built a data pipeline that copies a table from SQL Server into a Parquet file in Azure Blob Storage, and I'm getting the error message below:
ErrorCode=UserErrorInvalidValueInPayload,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Failed to convert the value in 'compressionCodec' property to 'Microsoft.DataTransfer.Richfile.Core.ParquetCompressionCodec' type. Please make sure the payload structure and value are correct.,Source=Microsoft.DataTransfer.DataContracts,''Type=System.InvalidCastException,Message=Null object cannot be converted to a value type.,Source=mscorlib,'
How can I fix this, please?
Cheers,
Kylie
Hi @KylieF,
We would need to do some debugging to understand which values are actually causing the problem.
Please enable fault tolerance with 'Skip incompatible rows'. This will load all of the correct data into the destination.
Also enable logging so that a log file is created containing the rows that cause the problem; you can then check the individual columns for any special character or value that is causing the issue. (A minimal sketch of these settings is shown below.)
In addition to that, you can try mapping the columns manually, specifying the column names and data types. Note that Parquet has some restrictions on column names.
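For reference, here is a minimal sketch of the relevant copy activity settings in Azure Data Factory JSON. The activity name CopySqlToParquet, the log folder copyactivitylogs, and the blob linked service AzureBlobStorageLS used for the log location are hypothetical placeholders; adjust them to your own pipeline:

```json
{
  "name": "CopySqlToParquet",
  "type": "Copy",
  "typeProperties": {
    "source": { "type": "SqlServerSource" },
    "sink": { "type": "ParquetSink" },
    "enableSkipIncompatibleRow": true,
    "logSettings": {
      "enableCopyActivityLog": true,
      "copyActivityLogSettings": {
        "logLevel": "Warning",
        "enableReliableLogging": false
      },
      "logLocationSettings": {
        "linkedServiceName": {
          "referenceName": "AzureBlobStorageLS",
          "type": "LinkedServiceReference"
        },
        "path": "copyactivitylogs"
      }
    }
  }
}
```

With enableSkipIncompatibleRow turned on, rows that cannot be converted are skipped instead of failing the copy, and logSettings writes the skipped rows to the given blob path so you can inspect which columns and values are problematic.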
Hi @KylieF
It looks like your data pipeline is trying to write a Parquet file to Azure Blob Storage, but the compressionCodec property is either missing or set to an invalid value. Here’s how you can fix it:
Set a Valid Compression Codec
Ensure that the compressionCodec property is set correctly. The supported values for Parquet compression in Azure Data Factory are none, gzip, and snappy (snappy is the default).
Example JSON snippet for the dataset configuration:
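Here is a minimal sketch of what that dataset JSON might look like, assuming a hypothetical Parquet sink dataset named ParquetSinkDataset, a hypothetical Azure Blob Storage linked service named AzureBlobStorageLS, and an output container named output; note that compressionCodec sits under typeProperties:

```json
{
  "name": "ParquetSinkDataset",
  "properties": {
    "type": "Parquet",
    "linkedServiceName": {
      "referenceName": "AzureBlobStorageLS",
      "type": "LinkedServiceReference"
    },
    "typeProperties": {
      "location": {
        "type": "AzureBlobStorageLocation",
        "container": "output",
        "fileName": "mytable.parquet"
      },
      "compressionCodec": "snappy"
    }
  }
}
```

If compressionCodec resolves to null or an empty value in this JSON, you can hit exactly the cast error shown above, so make sure it resolves to one of the supported values.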
Check for Null or Missing Property
The error suggests that compressionCodec might be missing or set to null. If you haven't explicitly defined it, try setting it to "None" instead of leaving it blank.
Validate JSON Payload in the Dataset Definition
If you are using Azure Data Factory (ADF) or Synapse Pipelines, go to the Sink dataset (Parquet file) settings and verify that the compression setting is defined.
Check Your Pipeline Parameters
If compressionCodec is set dynamically using pipeline parameters, ensure the parameter value is correctly assigned before execution.
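As a rough illustration of a parameterized codec, assuming a hypothetical dataset parameter named codec (the Expression wrapper is how dynamic content is represented in dataset JSON):

```json
{
  "name": "ParquetSinkDataset",
  "properties": {
    "type": "Parquet",
    "parameters": {
      "codec": { "type": "string", "defaultValue": "snappy" }
    },
    "linkedServiceName": {
      "referenceName": "AzureBlobStorageLS",
      "type": "LinkedServiceReference"
    },
    "typeProperties": {
      "location": {
        "type": "AzureBlobStorageLocation",
        "container": "output"
      },
      "compressionCodec": {
        "value": "@dataset().codec",
        "type": "Expression"
      }
    }
  }
}
```

If the pipeline passes an empty or null value for that parameter, the sink sees a null compressionCodec and can fail with the same conversion error, so double-check both the parameter default and the value supplied at run time.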
Restart the Pipeline After Fixing
Once you've corrected the issue, publish the changes and restart the pipeline.