mrojze
Helper II

ParquetBigDecimal cannot be written as Parquet physical type of ByteArray error in Copy Activity

Hi all,

I am trying to load a Parquet file from the Lakehouse into a Data Warehouse table.

I can't find my way around this error:

ErrorCode=UnsupportedPhysicalTypeOfParquetBigDecimal,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=ParquetBigDecimal cannot be written as Parquet physical type of ByteArray.,Source=Microsoft.DataTransfer.Richfile.ParquetTransferPlugin,'

 

I've seen some posts on this, but I can't find a way to solve it.

Can anyone help?

 

Thanks,

Martin

1 ACCEPTED SOLUTION

Unfortunately, I wasn't able to solve the issue of reading this file.
We ended up requesting a new file.
A likely solution would be to read it somewhere outside Fabric, transform it there, and then bring it into the Lakehouse.
For now, I am no longer working on this.


11 REPLIES
v-pnaroju-msft
Community Support

Hi mrojze,

Thank you for your update and for sharing your insights and approach to resolving the issue.
Please continue to use the Fabric Community for any further assistance with your queries.

Thank you.

v-pnaroju-msft
Community Support

Hi mrojze,

We are following up to see if what we shared solved your issue. If you need more support, please reach out to the Microsoft Fabric community.

Thank you.

v-pnaroju-msft
Community Support

Hi mrojze,

We would like to follow up and see whether the details we shared have resolved your problem.
If you need any more assistance, please feel free to connect with the Microsoft Fabric community.

Thank you.

Unfortunately, I wasn't able to solve the issue of reading this file.
We ended up requesting a new file.
A likely solution would be to read it somewhere outside Fabric, transform it there, and then bring it into the Lakehouse.
For now, I am no longer working on this.
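For anyone landing here later, here is a rough sketch of that "read it with something other than Spark, shrink the decimal, and rewrite it" idea, using pyarrow. This was not verified in the thread: the paths are placeholders, decimal(38, 0) is an assumed target type (the real scale is unknown), it needs a reasonably recent pyarrow, and the cast will fail if any value genuinely requires more than 38 digits.

    import pyarrow as pa
    import pyarrow.compute as pc
    import pyarrow.parquet as pq

    # Hypothetical input/output paths; adjust to your environment.
    src = "/lakehouse/default/Files/ingestion/original.parquet"
    dst = "/lakehouse/default/Files/ingestion/converted.parquet"

    # pyarrow reads a decimal(39, x) column as decimal256, so the read itself
    # does not hit Spark's 38-digit limit.
    table = pq.read_table(src)

    col_name = "C_SHIFTANDSEQ_ID"  # the decimal(39) column from this thread
    idx = table.schema.get_field_index(col_name)

    # Assumption: the values fit in 38 digits (scale 0 assumed here);
    # the safe cast raises an error if they do not.
    narrowed = pc.cast(table.column(col_name), pa.decimal128(38, 0))
    table = table.set_column(idx, col_name, narrowed)

    # The rewritten file can then be used by Spark or the Copy activity.
    pq.write_table(table, dst)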

v-pnaroju-msft
Community Support

Hi mrojze,

Thank you for the update.

From what I understand, the error "[DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION] Decimal precision 39 exceeds max precision 38" means that the source Parquet file has a decimal field with precision 39, which is more than the allowed maximum of 38 in Spark and Microsoft Fabric Notebooks. Because of this, the file cannot be read by the system, and we cannot apply casting unless the file is read successfully first.

If you have control over the Parquet file source, please change the decimal field to precision 38 or less (for example, DECIMAL(38, x)) and then try reading or transforming the file again using the PySpark notebook.
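If it helps, the declared precision can also be confirmed from the Parquet footer without Spark having to read the data. A minimal sketch, assuming pyarrow is available in the notebook and a default Lakehouse is attached; the part file path is a placeholder:

    import pyarrow.parquet as pq

    # Hypothetical path to one of the Parquet files under the source folder.
    path = "/lakehouse/default/Files/ingestion/barco/oakham_barco_t_histshifts/part-00000.parquet"

    # Only the file footer is read here, so the 38-digit decimal limit does not apply.
    print(pq.ParquetFile(path).schema)

    # A line such as
    #   optional binary field_id=-1 C_SHIFTANDSEQ_ID (Decimal(precision=39, scale=0))
    # would confirm both the precision (39) and the physical type
    # ("binary" is Parquet's BYTE_ARRAY).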


If you have any other questions, please feel free to ask the Microsoft Fabric community.

Thank you.

v-pnaroju-msft
Community Support

Thank you, @lbendlin, for your response.

Hi mrojze,

We appreciate your query on the Microsoft Fabric Community Forum.

From what I understand, the error occurs because the Parquet file contains Decimal types stored as BYTE_ARRAY, which is currently not supported by Microsoft Fabric’s Copy Activity. This encoding is typically used for BigDecimal in Parquet, but Fabric expects decimals to be stored in supported formats like INT64 or FIXED_LEN_BYTE_ARRAY.

Please follow the workaround steps below where we re-write the Parquet file using PySpark by casting decimal columns to a supported format before ingestion:

  1. Open a notebook in Fabric.
  2. Read the original file as shown below:
    df = spark.read.parquet("Files/input_path/file.parquet")
  3. Cast the decimal fields using the following code:
    from pyspark.sql.functions import col
    df = df.withColumn("column_name", col("column_name").cast("decimal(18,2)"))
  4. Write the updated Parquet file:
    df.write.mode("overwrite").parquet("Files/output_path/converted.parquet")
  5. Use the updated file in the Copy activity.

This ensures that the decimals are stored in a supported physical format and resolves the error.
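Putting the steps above together, a single notebook cell might look like the following (the paths, column name, and target precision/scale are placeholders to adjust for your file):

    from pyspark.sql.functions import col

    # 1. Read the original Parquet file from the Lakehouse Files area.
    df = spark.read.parquet("Files/input_path/file.parquet")

    # 2. Cast the problematic decimal column to a supported precision/scale.
    #    Repeat for every decimal column that triggers the error.
    df = df.withColumn("column_name", col("column_name").cast("decimal(18,2)"))

    # 3. Rewrite the file so the decimals are stored in a supported format.
    df.write.mode("overwrite").parquet("Files/output_path/converted.parquet")

    # 4. Point the Copy activity at Files/output_path/converted.parquet.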

We hope this information helps you resolve the issue.
If you have any further questions, please feel free to reach out to the Microsoft Fabric community.

Thank you.

Thanks for the help!

I am still getting an error:

 

from pyspark.sql.functions import col

df = spark.read.parquet("Files/ingestion/barco/oakham_barco_t_histshifts")
df = df.withColumn("C_SHIFTANDSEQ_ID", col("C_SHIFTANDSEQ_ID").cast("bigint"))
df.write.mode("overwrite").parquet("Files/ingestion/barco/oakham_barco_t_histshifts.parquet")
 
[DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION] Decimal precision 39 exceeds max precision 38.
 
 
I don't think I can even read the column as it is.
 
Thoughts?
mrojze
Helper II

So you are telling me that there is no way to read this file?

No workarounds?

You would need to downgrade your byte arrays to Int64 before ingesting.

Makes sense.

How do you do that in a pipeline or any other Fabric tool?

Unfortunately, I can't request a new file.

 

lbendlin
Super User

Power BI has no support for Int96 or Int128.

 

If this is important to you please consider voting for an existing idea or raising a new one at https://ideas.fabric.microsoft.com
