DRSH_9876
Regular Visitor

Error while Processing CSV from Azure Blob

I am using a Fabric trial version and want to import the Azure Cost Management and Billing details (CSV) into a Fabric Lakehouse or Warehouse.

 

Step 1: Created the workspace.

Step 2: Selected Data pipeline to organize and move the data; I had to use the Copy data assistant because the other options were throwing errors.

Step 3: Copy data – chose Azure Blobs and provided the connection string using an organizational account with "Create new connection".

Step 4: It shows me the folder path in Microsoft Azure where the CSV is generated. I chose the latest file, and this throws the error: [Bad data is found at line 2 in source CloudabilityAmmortizedCostExport01192023_f4bc323c-cc94-49dc-adf0-c26e2947adbd.csv. You can ignore bad data by setting BadDataFound to null. IReader state: ColumnCount: 77 CurrentIndex: 21 HeaderRecord: ["InvoiceSectionName","AccountName","AccountOwnerId","SubscriptionId","SubscriptionName","ResourceGroup","ResourceLocation","Date","ProductName","MeterCategory","MeterSubCategory","MeterId","MeterName","MeterRegion","UnitOfMeasure","Quantity","EffectivePrice","CostInBillingCurrency","CostCenter","ConsumedService","ResourceId","Tags","OfferId","AdditionalInfo","ServiceInfo1","ServiceInfo2","ResourceName","ReservationId","ReservationName","UnitPrice","ProductOrderId","ProductOrderName","Term","PublisherType","PublisherName","ChargeType","Frequency","PricingModel","AvailabilityZone","BillingAccountId","BillingAccountName","BillingCurrencyCode","BillingPeriodStartDate","BillingPeriodEndDate","BillingProfileId","BillingProfileName","InvoiceSectionId","IsAzureCreditEligible","PartNumber","PayGPrice","PlanName","ServiceFamily","CostAllocationRuleName","benefitId","benefitName"] IParser state: ByteCount: 0 CharCount: 2544 Row: 2 RawRow: 2 Count: 77 RawRecord: Hidden because ExceptionMessagesContainRawData is false. Activity ID: 59d472c2-dbdb-4c89-88b2-2b38d627abdf]

 

In the Preview data pane: if I choose schema agnostic (binary copy), it works but gives the wrong details. If I choose a file format (comma, tab, etc.), none of the options work.

 

Step 5: I chose Lakehouse as the data destination and gave it a name; it connects to the root Files folder and stores the CSV data there.

 

Step 6: Opened the Lakehouse, chose Files, selected the file, and loaded it to tables.

 

In short, I want to ingest the Cost Management and Billing data from Azure into the Lakehouse on a daily basis, transform the data into the Warehouse, and then publish reports using Power BI.

 

Am I missing any steps? I am using Fabric for the first time, on a trial version.

 

I appreciate your help and guidance.

1 ACCEPTED SOLUTION
v-sdhruv
Community Support

Hi @DRSH_9876,

You can follow these checks to troubleshoot:
1. CSV file format check: confirm the file is encoded in UTF-8.
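For reference, here is a minimal Python sketch to run locally that checks whether the file decodes cleanly as UTF-8 (the file name "cost-export.csv" is a placeholder, not the actual export name):

# Minimal sketch: confirm the export decodes as strict UTF-8 before ingesting.
# "cost-export.csv" is a placeholder file name.
def is_utf8(path):
    try:
        with open(path, encoding="utf-8", errors="strict") as f:
            for _ in f:  # stream line by line; raises on the first bad byte
                pass
        return True
    except UnicodeDecodeError as e:
        print(f"Not valid UTF-8 near byte {e.start}: {e.reason}")
        return False

print(is_utf8("cost-export.csv"))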

2. Data Pipeline: use Copy Data with an explicit file format

In the Copy Data activity, after selecting the source blob and file:

Set the file format to "Delimited Text" and use a comma (,) as the delimiter.

Ensure:

First row as header = true

Quote character = " (double quote), Escape character = \ (optional)

Set the bad-rows action to "Skip" or "Log and continue" if available.
If BadDataFound is throwing an error, it means the second row doesn't match the header format, typically a different number of fields or an unescaped quote or delimiter.
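To confirm this locally (a sketch, not part of the pipeline; the file name is a placeholder), Python's built-in csv module can report any row whose field count differs from the header, which is the condition the Copy activity is flagging:

import csv

# Sketch: report rows whose field count differs from the header row.
# The reader defaults (comma delimiter, double-quote quoting) match the
# settings suggested above; the file name is a placeholder.
def find_bad_rows(path):
    with open(path, encoding="utf-8", newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        for row_no, row in enumerate(reader, start=2):
            if len(row) != len(header):
                print(f"Row {row_no}: {len(row)} fields, expected {len(header)}")

find_bad_rows("cost-export.csv")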
You can also upload the CSV manually into OneLake (Lakehouse > Files) first, validate it, and use "Load to Table" from the Lakehouse Files UI.
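If the Files UI keeps rejecting the file, a Fabric notebook gives more control over malformed rows. A minimal sketch, assuming PySpark in a Fabric notebook (the path and table name are placeholders):

# Sketch: load the CSV from Lakehouse Files into a Delta table, dropping
# rows that fail to parse. Path and table name are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already defined in Fabric notebooks

df = (spark.read
      .option("header", True)
      .option("mode", "DROPMALFORMED")  # skip rows that don't parse cleanly
      .csv("Files/cost-exports/cost-export.csv"))

df.write.mode("overwrite").format("delta").saveAsTable("cost_billing_raw")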

3. File Handling Best Practices for Daily Ingestion

To handle daily updates:

- Store files in a dated folder structure, e.g., container/year/month/day/filename.csv (see the path sketch after this list).

 

- Turn on "Preview raw data" in the data pipeline when troubleshooting malformed CSVs.

- Consider using Azure Data Factory (if the Fabric pipeline keeps failing) to clean the CSV and land it in OneLake.

- Create a Dataflow Gen2 for more robust ingestion and transformation logic.
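For the dated folder structure, here is a small sketch of how a scheduled run might compute yesterday's source path (the container and file names are placeholders):

from datetime import date, timedelta

# Sketch: build the dated blob path a daily pipeline run would read.
# Container and file names are placeholders, not the actual export layout.
def export_path(container="cost-exports", run_date=None):
    d = run_date or (date.today() - timedelta(days=1))  # yesterday's export
    return f"{container}/{d:%Y/%m/%d}/cost-export.csv"

print(export_path())  # e.g. cost-exports/2025/08/14/cost-export.csv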
Additionally, you can follow the steps in these links to copy data successfully into the Lakehouse.

https://learn.microsoft.com/en-us/fabric/data-factory/copy-data-activity
https://learn.microsoft.com/en-us/fabric/data-engineering/load-data-lakehouse

Hope this helps!
If the response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.
Thank You!



 


5 REPLIES
v-sdhruv
Community Support

Hi @DRSH_9876,
Just wanted to check if you had the opportunity to review the suggestions provided?
If the response has addressed your query, please Accept it as a solution and give a 'Kudos' so other members can easily find it.
Thank You


R1k91
Super User

Stupid question, but did you check that it's a valid CSV? The error says "Bad data is found at line 2".
Is it using consistent column and line terminators? A quick check is sketched below.
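One way to check (the file name is a placeholder):

# Sketch: count line terminators to spot a mix of CRLF and bare LF,
# which can trip strict CSV parsers. The file name is a placeholder.
with open("cost-export.csv", "rb") as f:
    data = f.read()

crlf = data.count(b"\r\n")
lf_only = data.count(b"\n") - crlf
print(f"CRLF terminators: {crlf}, bare LF terminators: {lf_only}")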


--
Riccardo Perico
BI Architect @ Lucient Italia | Microsoft MVP

Blog | GitHub

If this post helps, then please consider accepting it as the solution to help the other members find it more quickly.
