jamuka
Advocate II

Get all columns from a CSV file

Hello all,

 

I have a text file on SharePoint and I'm getting the data via the SharePoint.Files connector. At the moment my files have 12 columns, but in the future there will be more. I tried deleting the Columns parameter so that I can pick up the new columns too, but it didn't work.

 

 

= Csv.Document(Parameter1, [Delimiter = "#(tab)", Columns = 12, Encoding = 1254, QuoteStyle = QuoteStyle.None])
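The updated code (shown in the second screenshot below) was presumably the same call without the Columns field, as quoted in the accepted solution:

= Csv.Document(Parameter1, [Delimiter = "#(tab)", Encoding = 1254, QuoteStyle = QuoteStyle.None])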

 

 

[Screenshot: Current Code]

[Screenshot: Updated Code]

 

 

 

1 ACCEPTED SOLUTION
OwenAuger
Super User

You're welcome 🙂

Yes, that query was just an example of the process to determine the number of columns.

It can equally be applied to a file from SharePoint or elsewhere with some adjustment.

 

I would actually recommend creating a function to make your life easier, so that it can be applied to SharePoint files within other queries.

 

Here is one I created and tested myself just now.

Paste this code into a blank query and call it CsvDocumentVariableColumns (or another name of your choosing).

let
  func = (
    #"File Contents" as binary, 
    optional #"Sample Rows" as number, 
    optional Delimiter as text, 
    optional Encoding as number, 
    optional #"Quote Style" as number
  ) as table =>
    let
      // 1. SET PARAMETERS TO DEFAULT VALUES IF NOT SPECIFIED
      // Set Delimiter to tab if not specified
      DelimiterFinal = Delimiter ?? "#(tab)", 
      // Set Encoding to 1254 if not specified
      EncodingFinal = Encoding ?? 1254, 
      // Set QuoteStyle to QuoteStyle.None if not specified
      QuoteStyleFinal = #"Quote Style" ?? QuoteStyle.None, 
      // 2. PROCESS TEXT FILE
      // Return a list of lines of text
      Lines = Lines.FromBinary(#"File Contents", null, null, EncodingFinal), 
      // Take a sample of the lines
      LinesSample = List.FirstN(Lines, #"Sample Rows" ?? (each true)), 
      // Return the maximum number of columns across all sampled lines
      MaxColumns = List.Max(
        List.Transform(
          LinesSample, 
          each List.Count(Splitter.SplitTextByDelimiter(DelimiterFinal)(_))
        )
      ), 
      // Read the file using Csv.Document
      Source = Csv.Document(
        #"File Contents", 
        [
          Delimiter  = DelimiterFinal, 
          Columns    = MaxColumns, 
          Encoding   = EncodingFinal, 
          QuoteStyle = QuoteStyleFinal
        ]
      )
    in
      Source, 
  documentation = [
    Documentation.Name = "CSVDocumentVariableColumns", 
    Documentation.LongDescription
      = "Wrapper for Csv.Document that determines the required number of columns based on all rows or a sample of rows"
  ]
in
  Value.ReplaceType(func, Value.ReplaceMetadata(Value.Type(func), documentation))

 

Then in your code, replace Csv.Document calls like this:

= Csv.Document(Parameter1, [Delimiter = "#(tab)", Encoding = 1254, QuoteStyle = QuoteStyle.None])

with this:

 

= CsvDocumentVariableColumns(Parameter1)

Parameter1 in your original code was (I assume) the result of a function returning binary file contents (may have been auto-generated).

 

The CsvDocumentVariableColumns function takes at minimum one parameter, File Contents (type binary). You can optionally specify Sample Rows, Delimiter, Encoding and Quote Style; these currently default to the values from your original query above.
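For example, a hypothetical call that scans only the first 1000 lines for the maximum column count (1000 is an arbitrary sample size, not a value from this thread), leaving the other options at their defaults:

= CsvDocumentVariableColumns(Parameter1, 1000)

// or with every optional argument spelled out:
// = CsvDocumentVariableColumns(Parameter1, 1000, "#(tab)", 1254, QuoteStyle.None)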

 

Regarding connecting to SharePoint folders, yes you can definitely do that 🙂

There are various articles/guides out there.
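As a rough, untested sketch (the site URL and folder path are placeholders, not from this thread), SharePoint.Files can be combined with the function above along these lines:

let
  // Placeholder site URL - replace with your own site
  SiteUrl = "https://yourtenant.sharepoint.com/sites/YourSite",
  Source = SharePoint.Files(SiteUrl, [ApiVersion = 15]),
  // Keep only the .txt files in the folder of interest (placeholder path)
  Filtered = Table.SelectRows(
    Source,
    each [Extension] = ".txt"
      and Text.StartsWith([Folder Path], SiteUrl & "/Shared Documents/Data/")
  ),
  // Parse each file with CsvDocumentVariableColumns defined above
  Parsed = Table.AddColumn(Filtered, "Data", each CsvDocumentVariableColumns([Content])),
  // Table.Combine unions the per-file tables by column name
  Combined = Table.Combine(Parsed[Data])
in
  Combined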

 

Regards

 


Owen Auger
Did I answer your question? Mark my post as a solution!
Blog
Twitter
LinkedIn


6 REPLIES
Saima14
New Member

Hello!

If you're looking to dynamically handle varying column numbers in your SharePoint text file, consider these steps:

  1. Refresh Metadata:

    • After removing the Columns parameter, make sure to refresh the metadata of your SharePoint.Files connector. This action ensures that the connector recognizes any changes in the structure of your text file.
  2. Ensure Compatibility:

    • Confirm that your data source and Power Automate are compatible with dynamic column changes. Some systems may require additional configuration to handle evolving structures.
  3. Use Dynamic Content:

    • Instead of specifying a fixed number of columns, explore using dynamic content options in Power Automate. This allows the flow to adapt to the changing structure of your file.
  4. Handle Missing Columns:

    • Develop a mechanism in your flow to handle cases where new columns are added. This could involve dynamically checking for the presence of expected columns and adjusting the flow accordingly.
  5. Test with Sample Data:

    • If possible, test your Power Automate flow with a sample SharePoint file that has additional columns. This helps ensure that your flow responds appropriately to evolving structures.
  6. Explore Expression Functions:

    • Leverage expression functions within Power Automate to dynamically handle columns. Functions like union or merge can be valuable in scenarios where column structures may change.
  7. Error Handling:

    • Implement robust error handling in your flow. If there are issues due to changing column structures, a well-structured error-handling mechanism can provide insights into what went wrong.
  8. Community Support:

    • Seek advice and insights from the Power Automate community or forums. Others may have encountered similar scenarios and can offer valuable tips.

Remember to document your Power Automate flow thoroughly, especially the parts related to dynamic column handling, to make it easier for you or others to troubleshoot and maintain in the future.

Good luck with your SharePoint file integration, and feel free to ask for more assistance if needed! 🚀

Hello @Saima14,

 

thank you for your detailed answer, but I'm not using Power Automate for this process and my knowledge of it is almost zero.

 

kind regards

OwenAuger
Super User

(Accepted solution; quoted in full above.)

Hello @OwenAuger,

 

thank you, it works. I guess I had only updated the source file and forgotten the Sample File, which caused the problem.

The SharePoint folder question is solved too; it turns out there is no specific connector for folders.

 

kind regards 

OwenAuger
Super User

Hi @jamuka 

This is an interesting one 🙂

When the Columns field is omitted, the number of columns is detected from the first row.

Your particular file appears to have only one column populated in the first row, so omitting Columns doesn't work as intended in this case.
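As a minimal illustration of that behaviour with made-up inline data (Csv.Document also accepts text directly):

// First line has a single field; the second line has three tab-separated fields
= Csv.Document("OnlyOneField#(lf)a#(tab)b#(tab)c", [Delimiter = "#(tab)", QuoteStyle = QuoteStyle.None])
// With Columns omitted this should come back as a one-column table;
// adding Columns = 3 to the options record returns all three columns.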

 

You could add steps to detect the maximum number of columns across all rows (or a certain number of rows), then pass the result to the Csv.Document step. My only possible concern would be performance with a large text file.

 

Here is some sample code which you could adapt:

 

let
  FilePath = "C:\temp\test.txt",
  // Tab character
  Delimiter = "#(tab)",
  // If null then all rows are sampled
  NumRows = null,
  // Return a list of lines of text
  Lines = Lines.FromBinary(File.Contents(FilePath), null, null, 1254),
  // Take a sample of the lines
  LinesSample = if NumRows = null then Lines else List.FirstN(Lines, NumRows),
  // Return the maximum number of columns across all sampled lines
  MaxColumns = List.Max(
    List.Transform(LinesSample, each List.Count(Splitter.SplitTextByDelimiter(Delimiter)(_)))
  ),
  // Read the file using Csv.Document
  Source = Csv.Document(
    File.Contents(FilePath),
    [Delimiter = Delimiter, Columns = MaxColumns, Encoding = 1254, QuoteStyle = QuoteStyle.None]
  )
in
  Source

 

 

Does this help?

Regards


Owen Auger
Did I answer your question? Mark my post as a solution!
Blog
Twitter
LinkedIn

Dear @OwenAuger,

 

thank you for your help; it worked when I used your code for a local file under C:\.

But I can't make it work for a file on SharePoint. Also, instead of connecting to a single file, I'd like to connect to a folder, because my file names will change too.

 

I have never used SharePoint as a source; I tried several connection types, yet none of them worked. I'm also not sure whether it's even possible to connect to a SharePoint folder; I couldn't find anything about it.

 

regards

 
