dragonhood
Regular Visitor

"The credentials provided for the Sharepoint source are invalid" when using custom functions

I’m getting a credential error in my Power BI Dataflow after adding custom functions that reference a SharePoint folder.

The credentials provided for the SharePoint source are invalid.
I already verified:
• The SharePoint site URL is correct
• I have access to the SharePoint library
• The same SharePoint connection works in other queries
• Parameter and sample file references are correct

But once I connect my parameter, sample file, and transform file functions, the dataflow fails with the credential error above. I am also using these parameters in a table.

Here is the structure I’m using:

(Screenshots of the query structure:)
• ISF Reports New
• SharePoint file path
• Reads sample workbook
• Custom function to transform each file
• Invokes the function on all files in the folder

Has anyone faced this issue before?
Is this caused by:
• Using Binary parameter for SharePoint?
• A function referencing a different authentication scope?
• A mismatch between the Dataflow’s linked connection and the function source?

What is the correct way to configure SharePoint credentials so all custom functions authenticate properly?

 



 

13 REPLIES
rodrigosan
Responsive Resident

Hi, @dragonhood 

The "Invalid Credentials" error in Dataflows often happens because the Power Query engine in the Service struggles to validate credentials inside a separate "Custom Function" object.

The most robust way to fix this is to move the transformation logic directly into your main query (the "Inline" approach). This bypasses the need for the Invoke Custom Function step entirely.

Here is the solution based on your data structure:

Step 1: Create your parameters. In your Dataflow, create two Text parameters to handle the connection dynamically:

1. Name: SharepointSite
   - Type: Text
   - Current Value: https://yourdomain.sharepoint.com/sites/YourSite
   - (Enter the root URL of your SharePoint site here.)

2. Name: FolderPath
   - Type: Text
   - Current Value: Shared Documents/YourFolder/
   - (Note: copy the exact value from the "Folder Path" column in the preview to ensure it matches.)
Step 2: Use this M Code Create a blank query and paste the code below. This script connects, filters the specific folder using the parameters above, extracts the "Charts" sheet from the Excel files, and performs the cleanup/splitting logic without calling external functions.

let
  // 1. Load SharePoint files using the parameter
  Source = SharePoint.Files(SharepointSite, [ApiVersion = 15]),

  // 2. Keep only the required folder
  FilteredRows = Table.SelectRows(Source, each [Folder Path] = FolderPath),

  // 3. Exclude hidden files
  VisibleFiles = Table.SelectRows(FilteredRows, each [Attributes]?[Hidden]? <> true),

  // 4. Read Excel content directly (Inline transformation)
  ExcelContent = Table.AddColumn(VisibleFiles, "Excel", each Excel.Workbook([Content], null, true)),

  // 5. Extract specific sheet "Charts"
  ExtractCharts = Table.AddColumn(
    ExcelContent,
    "ChartsSheet",
    each Table.SelectRows([Excel], each [Item] = "Charts" and [Kind] = "Sheet"){0}[Data]
  ),

  // 6. Promote headers inside each nested table
  PromoteHeaders = Table.TransformColumns(
    ExtractCharts,
    {"ChartsSheet", each Table.PromoteHeaders(_, [PromoteAllScalars = true])}
  ),

  // 7. Rename file column to keep source traceability
  RenameSource = Table.RenameColumns(PromoteHeaders, {{"Name", "SourceName"}}),

  // 8. Expand data from the nested tables
  ExpandedData = Table.ExpandTableColumn(
    RenameSource,
    "ChartsSheet",
    Table.ColumnNames(PromoteHeaders{0}[ChartsSheet])
  ),

  // 9. Set Standard types
  ChangeType = Table.TransformColumnTypes(
    ExpandedData,
    {
      {"SourceName", type text},
      {"Period", type date},
      {"Total ISFs Filed", Int64.Type},
      {"Total On Time ISFs", Int64.Type},
      {"Total Late ISFs", Int64.Type},
      {"Percentage On Time", Percentage.Type},
      {"Percentage Late", Percentage.Type}
    }
  ),

  // 10. Remove unneeded columns
  RemoveExtraCols = Table.RemoveColumns(ChangeType, {"Column7", "Period_id"}),

  // 11. Filter only valid rows
  FilterValid = Table.SelectRows(
    RemoveExtraCols,
    each ([Percentage On Time] <> null and [Percentage Late] <> null)
  ),

  // 12. Duplicate SourceName for split logic
  DuplicateSource = Table.DuplicateColumn(FilterValid, "SourceName", "SourceName_Copy1"),

  // 13. Split importer info (Transition from Text to Number)
  SplitImporter = Table.SplitColumn(
    DuplicateSource,
    "SourceName_Copy1",
    Splitter.SplitTextByCharacterTransition({"c"}, each not List.Contains({"0" .. "9"}, _)),
    {"Importer", "ImporterCodePart1", "ImporterCodePart2"}
  ),

  // 14. Clean Importer Name
  CleanImporter = Table.ReplaceValue(
    SplitImporter,
    "ISF REPORT - ",
    "",
    Replacer.ReplaceText,
    {"Importer"}
  ),

  // 15. Merge importer code parts
  MergeImporterCode = Table.CombineColumns(
    CleanImporter,
    {"ImporterCodePart1", "ImporterCodePart2"},
    Combiner.CombineTextByDelimiter("", QuoteStyle.None),
    "ImporterCode"
  ),

  // 16. Clean file extension from code
  CleanImporterCode = Table.ReplaceValue(
    MergeImporterCode,
    ".xlsx",
    "",
    Replacer.ReplaceText,
    {"ImporterCode"}
  ),

  // 17. Reorder final columns
  FinalReorder = Table.ReorderColumns(
    CleanImporterCode,
    {
      "SourceName",
      "Importer",
      "ImporterCode",
      "Period",
      "Total ISFs Filed",
      "Total On Time ISFs",
      "Total Late ISFs",
      "Percentage On Time",
      "Percentage Late"
    }
  )
in
  FinalReorder

This approach is much more stable in Dataflows because all lineage is contained within a single query context.

Let me know if this works for you!

Hi @rodrigosan ,
Thanks for your reply. I'm currently facing this error after saving the query:

(Screenshot of the error: "There weren't enough elements in the enumeration to complete the operation.")

Could you please guide me on this?

 

Hi @dragonhood,

That specific error (There weren't enough elements...) happens because the code is trying to access the first row {0} of a table that is empty.

Why? It means that at least one of the Excel files inside your SharePoint folder does not contain a sheet named exactly "Charts". When the code tries to grab that sheet and doesn't find it, it crashes.

To fix this, we need to modify Step 5 (ExtractCharts) to handle missing sheets gracefully (using try ... otherwise null) and add a filter immediately afterward to exclude files that don't have the required sheet.

Here is the updated code block. Please replace your entire query with this version:

let
  // 1. Load SharePoint files using the parameter
  Source = SharePoint.Files(SharepointSite, [ApiVersion = 15]),

  // 2. Keep only the required folder
  FilteredRows = Table.SelectRows(Source, each [Folder Path] = FolderPath),

  // 3. Exclude hidden files
  VisibleFiles = Table.SelectRows(FilteredRows, each [Attributes]?[Hidden]? <> true),

  // 4. Read Excel content directly (Inline transformation)
  ExcelContent = Table.AddColumn(VisibleFiles, "Excel", each Excel.Workbook([Content], null, true)),

  // 5. Extract specific sheet "Charts" WITH ERROR HANDLING
  // If the sheet "Charts" doesn't exist, it returns null instead of crashing
  ExtractCharts = Table.AddColumn(
    ExcelContent,
    "ChartsSheet",
    each try Table.SelectRows([Excel], each [Item] = "Charts" and [Kind] = "Sheet"){0}[Data] otherwise null
  ),

  // 5.1 NEW STEP: Filter out files that didn't have the "Charts" sheet
  RemoveMissingSheets = Table.SelectRows(ExtractCharts, each [ChartsSheet] <> null),

  // 6. Promote headers inside each nested table
  PromoteHeaders = Table.TransformColumns(
    RemoveMissingSheets,
    {"ChartsSheet", each Table.PromoteHeaders(_, [PromoteAllScalars = true])}
  ),

  // 7. Rename file column to keep source traceability
  RenameSource = Table.RenameColumns(PromoteHeaders, {{"Name", "SourceName"}}),

  // 8. Expand data from the nested tables
  ExpandedData = Table.ExpandTableColumn(
    RenameSource,
    "ChartsSheet",
    Table.ColumnNames(PromoteHeaders{0}[ChartsSheet])
  ),

  // 9. Set Standard types
  ChangeType = Table.TransformColumnTypes(
    ExpandedData,
    {
      {"SourceName", type text},
      {"Period", type date},
      {"Total ISFs Filed", Int64.Type},
      {"Total On Time ISFs", Int64.Type},
      {"Total Late ISFs", Int64.Type},
      {"Percentage On Time", Percentage.Type},
      {"Percentage Late", Percentage.Type}
    }
  ),

  // 10. Remove unneeded columns
  RemoveExtraCols = Table.RemoveColumns(ChangeType, {"Column7", "Period_id"}),

  // 11. Filter only valid rows
  FilterValid = Table.SelectRows(
    RemoveExtraCols,
    each ([Percentage On Time] <> null and [Percentage Late] <> null)
  ),

  // 12. Duplicate SourceName for split logic
  DuplicateSource = Table.DuplicateColumn(FilterValid, "SourceName", "SourceName_Copy1"),

  // 13. Split importer info (Transition from Text to Number)
  SplitImporter = Table.SplitColumn(
    DuplicateSource,
    "SourceName_Copy1",
    Splitter.SplitTextByCharacterTransition({"c"}, each not List.Contains({"0" .. "9"}, _)),
    {"Importer", "ImporterCodePart1", "ImporterCodePart2"}
  ),

  // 14. Clean Importer Name
  CleanImporter = Table.ReplaceValue(
    SplitImporter,
    "ISF REPORT - ",
    "",
    Replacer.ReplaceText,
    {"Importer"}
  ),

  // 15. Merge importer code parts
  MergeImporterCode = Table.CombineColumns(
    CleanImporter,
    {"ImporterCodePart1", "ImporterCodePart2"},
    Combiner.CombineTextByDelimiter("", QuoteStyle.None),
    "ImporterCode"
  ),

  // 16. Clean file extension from code
  CleanImporterCode = Table.ReplaceValue(
    MergeImporterCode,
    ".xlsx",
    "",
    Replacer.ReplaceText,
    {"ImporterCode"}
  ),

  // 17. Reorder final columns
  FinalReorder = Table.ReorderColumns(
    CleanImporterCode,
    {
      "SourceName",
      "Importer",
      "ImporterCode",
      "Period",
      "Total ISFs Filed",
      "Total On Time ISFs",
      "Total Late ISFs",
      "Percentage On Time",
      "Percentage Late"
    }
  )
in
  FinalReorder

This ensures that if a file is missing the specific tab, it is simply ignored instead of causing an Enumeration error.

Let me know if it runs smoothly now!

Mauro89
Skilled Sharer

Hi @dragonhood,

 

If you want to try a completely different approach, you can add the SharePoint folder as a shortcut in a Fabric Lakehouse and process the data from there. Here is the link to the docs:

https://learn.microsoft.com/en-us/fabric/onelake/create-onedrive-sharepoint-shortcut

Best regards!

MarkLaf
Super User

Do you get the error while editing? If yes, at what step does it trigger? Or do you get it when you try to validate or refresh?

 

Can you confirm that the file paths used in the Source steps for #"ISF reports - New" and #"Sample File (9)" are the same? These should be identical; otherwise you are mixing data sources, which could cause issues (although you'll usually get errors about the formula firewall rather than credential errors).

 

Did you create this by going through the combine files UI? Or did you perhaps copy/paste from previous combine files queries? A bit tedious, but may be worth creating a new blank test dataflow where you build the query from scratch to confirm it's reproducible and not an anomaly.

 

Some quick answers to your "is this caused by" questions:

  • Using Binary parameter for SharePoint?
    Maybe? This is out-of-the-box behavior of the combine files UI, though, so I would assume it shouldn't break things this easily.
  • A function referencing a different authentication scope?
    Most likely, your issue has nothing to do with the function, which just does: 1) take any binary as input, 2) parse it as Excel, 3) drill into the 'Charts' sheet, 4) promote headers. Nothing about it introduces a different data source that would complicate your authentication.
  • A mismatch between the Dataflow's linked connection and the function source?
    See above. There isn't really a concept of a "function source" in this context - there would be only if the function had steps that queried some other data source.
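For context, a Combine Files transform function like the one described above typically looks something like this (a sketch only; the parameter name Parameter9 and the "Charts" sheet name come from this thread, so adjust to your own query names):

```
// Sketch of a typical UI-generated "Transform File" function.
// It takes a binary, parses it as an Excel workbook, drills into the
// "Charts" sheet, and promotes headers. Note that there is no
// data-source call inside the function itself.
(Parameter9 as binary) as table =>
let
    Source = Excel.Workbook(Parameter9, null, true),
    ChartsSheet = Source{[Item = "Charts", Kind = "Sheet"]}[Data],
    Promoted = Table.PromoteHeaders(ChartsSheet, [PromoteAllScalars = true])
in
    Promoted
```

Because the function only ever operates on the binary passed in, it does not need its own credentials.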

@MarkLaf ,
I'm getting the error while refreshing the dataflow, and the file paths used in the Source steps are the same.

Thanks, 

Mauro89
Skilled Sharer
Skilled Sharer

Hi @dragonhood,

 

to my knowledge, Power BI Dataflows treat each distinct URL and each distinct connector function as a separate data source.

Try using exactly the same root reference in every query and function, and make sure your sample file query uses the exact same site URL.
If that still does not work, make sure the transform function references only the binary passed in to it, not the SharePoint source directly.

Hope this helps!
Best regards!
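To make the "same root reference" advice concrete, here is a minimal sketch (the query and parameter names are illustrative, not from your dataflow) in which every query starts from one shared SharePoint.Files call:

```
// Folder query: SharepointSite and FolderPath are Text parameters
// (illustrative names). Because every query authenticates against the
// same URL via the same parameter, the Service sees one SharePoint
// data source instead of several.
let
    Source = SharePoint.Files(SharepointSite, [ApiVersion = 15]),
    Filtered = Table.SelectRows(Source, each [Folder Path] = FolderPath)
in
    Filtered

// Sample-file query (illustrative): take the first binary from the same
// filtered table instead of hard-coding a second URL:
//   SampleFile = Filtered{0}[Content]
```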

Hi @Mauro89 thanks for your reply ,

(Screenshots of:)
• Parameter9
• Sample File (9)
• Transform File (9)
• Transform Sample File (9)

As I am new to Power BI Dataflows, I didn't fully understand this part: "Dataflows treat each distinct URL and connector as a separate data source."
Do you mean I should:
1. Store the SharePoint folder path in a table
2. Call that list in my Dataflow and use the stored path to reference files instead of writing the SharePoint folder URL separately in multiple queries/functions?

Is this the correct approach to make Dataflows treat it as one single data source and avoid refresh/credential issues?
Please confirm in simple terms.
Thanks!

Hi @dragonhood,

 

yes, go for option 2.

Hi @Mauro89 ,

(Screenshot of the SharePoint folder path)

Should I store the below folder path in a table in Dataflow and reference that table in functions and ISF Reports New instead of writing this path separately in multiple queries?

You can also create a parameter in Power Query and reference it in your queries. No table is needed, and you can reuse the parameter wherever it is required.
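As a rough sketch of that parameter approach (the parameter name here is illustrative): create one Text parameter via the Manage parameters option, then reference it in every query instead of repeating the literal URL:

```
// Parameter (Manage parameters > New parameter):
//   Name:          SharePointSiteUrl   (illustrative name)
//   Type:          Text
//   Current Value: https://yourdomain.sharepoint.com/sites/YourSite
//
// Every query then references the parameter instead of a literal URL:
let
    Source = SharePoint.Files(SharePointSiteUrl, [ApiVersion = 15])
in
    Source
```

Changing the parameter's current value then updates every query that references it at once.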

Could you please provide a demo or a screenshot on that?

 

Hi @dragonhood,

 

check out this page for parameters: Parameters - Power Query | Microsoft Learn

There it is described how to create and use them.

 

Best regards!

 
