Hi,
I'm trying to write some data into a lakehouse with a Copy activity. I need to add a custom column to the destination lakehouse table. This column is a date, but when it's written to the lakehouse table it ends up as a string. How can I set this column to date type? I've tried formatting the value to datetime:
But it still writes it as string.
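For context, the additional column is defined in the Copy activity's source settings with a pipeline expression. In the activity's JSON view it looks roughly like this (a sketch; the source type, the column name `load_date`, and the `file_date` property are placeholders from my setup):

```json
{
  "source": {
    "type": "DelimitedTextSource",
    "additionalColumns": [
      {
        "name": "load_date",
        "value": {
          "value": "@formatDateTime(item().file_date,'yyyy-MM-dd')",
          "type": "Expression"
        }
      }
    ]
  }
}
```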
Please don't tell me to set the column type in the Mapping tab, because this data pipeline is going to be used with different sources, that is, different schemas, so I can't specify a fixed schema.
Any ideas?
I did not test this myself, but in the Mapping settings you are able to set the 'Type conversion settings'. This does not require a fixed data schema, but lets you specify a date string format that should be read as a date type. If you set the 'Date format' to "yyyy-MM-dd", in combination with your already formatted date string @{formatDateTime(item().file_date,'yyyy-MM-dd')}, it may work. Here is the documentation: https://learn.microsoft.com/en-us/fabric/data-factory/data-type-mapping.
Do note that not all sources are supported. The list of supported items is in the link.
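In the Copy activity's JSON, this setting should correspond to something like the snippet below (a sketch based on the type conversion properties in the documentation linked above; I haven't verified the exact property names in Fabric):

```json
{
  "translator": {
    "type": "TabularTranslator",
    "typeConversion": true,
    "typeConversionSettings": {
      "allowDataTruncation": true,
      "treatBooleanAsNumber": false,
      "dateFormat": "yyyy-MM-dd"
    }
  }
}
```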
Hi,
It's not working. I've set the Date format exactly as specified in the formatDateTime function,
and if I check the destination table, the field is still in string format:
Just to be sure, did you drop the table before running with this new setting?
Hi @amaaiia
It seems difficult to achieve your goal in a pipeline. I recommend doing this in a dataflow instead; it is more convenient to clean and transform data there.
After connecting to the data source in dataflow, you can add custom date columns.
Set the destination as lakehouse.
Regards,
Nono Chen
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
I need to develop this use case with a data pipeline because I need to parameterize the whole flow, and Dataflow Gen2 doesn't work well with parameters. I need to ingest a set of tables that are listed in a warehouse as master data, so I iterate through each row of that warehouse table and ingest the table it specifies. The destination tables, the sources, the schemas... are different for each table, so this is difficult to build in a dataflow.
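To illustrate the pattern I'm describing: a Lookup activity reads the master table from the warehouse, then a ForEach iterates over its rows and runs a parameterized Copy for each one. Roughly, in pipeline JSON (a sketch; activity names are placeholders and the Lookup/Copy details are omitted):

```json
{
  "activities": [
    {
      "name": "LookupTableList",
      "type": "Lookup"
    },
    {
      "name": "ForEachTable",
      "type": "ForEach",
      "typeProperties": {
        "items": {
          "value": "@activity('LookupTableList').output.value",
          "type": "Expression"
        },
        "activities": [
          {
            "name": "CopyOneTable",
            "type": "Copy"
          }
        ]
      }
    }
  ]
}
```

Inside the ForEach, each row is available as item(), which is where expressions like item().file_date come from.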