Hello everyone,
I'm currently working on a use case where I need to manage and export high-volume datasets (~1 million records) using Power BI, and I'm exploring multiple approaches. I’d appreciate your feedback on the current methods I’ve tried and would love to hear if there are better alternatives or optimizations.
Problem Statement:
Effectively manage and export large datasets (~1M records) in Power BI, while allowing users to select/deselect fields dynamically and ensuring smooth integration with Power Automate and Fabric for downstream processes.
Current Approaches:
Approach 1: Field Parameters + Power Automate + Paginated Report (RDL):
Using field parameters in Power BI to allow dynamic column selection.
Passing selected fields to Power Automate, which triggers a Paginated Report (RDL).
Logic in RDL is set up to show/hide columns based on parameters.
Issue:
The show/hide logic is being overridden — despite user selection, all columns are getting displayed. It seems the parameters are not being passed or consumed correctly within the RDL file.
Approach 2: Microsoft Fabric Lakehouse + Semantic Model:
Created a Lakehouse in Fabric to handle large data volumes efficiently.
Built a semantic model on top of the Lakehouse.
Developed reports using this model in Power BI.
Trying to trigger export or automation via Power Automate using Fabric data.
Issue:
Getting a "Bad Request" error while trying to integrate Power Automate with Fabric. Details of the error aren't very descriptive, so it's hard to debug.
I have a couple of questions:
Are these approaches going in the right direction for large dataset export scenarios?
Has anyone successfully implemented field-level selection with RDL exports based on Power BI parameters? How did you overcome the column visibility issues?
Any known limitations or best practices for using Power Automate with Fabric Lakehouse or Semantic Models?
Are there any alternative approaches or workarounds you’d recommend for:
Efficiently exporting 1M+ rows
Allowing dynamic field selection
Maintaining performance and scalability.
Any insights, samples or even partial suggestions would be highly appreciated. I’m open to reworking my approach if there’s a more scalable or reliable pattern others have used successfully.
Thanks in advance,
Sidhant
Hi @Sidhant , Hope you are doing well. Kindly let us know if the issue has been resolved or if further assistance is needed. Your input could be helpful to others in the community.
Hi @Sidhant , Thank you for reaching out to the Microsoft Community Forum.
You're on the right track. For bulk export, the reliable path is to collect the user's selections in the report, hand them to Power Automate and do the heavy lifting in Fabric (not the visual export path). The dataset 'Run a query against a dataset' action works for medium jobs if you batch with a stable key and TOPN, but it isn't a sustained bulk-extract channel. For truly large pulls, call a Fabric notebook/pipeline to write CSV/Parquet straight to OneLake/SharePoint and notify the user; this side-steps export caps, timeouts and throttling, and it scales cleanly.
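For reference, a minimal sketch of that batching pattern, assuming a table 'Sheet1' with a unique, ascending Index key column and a hypothetical flow variable LastIndex that tracks the highest Index already exported (both names are placeholders for illustration):
// Rough sketch only: 'Sheet1', its Index column and variables('LastIndex') are assumptions, not your model.
DEFINE
    VAR __Batch =
        TOPN(
            25000,
            FILTER(ALL('Sheet1'), 'Sheet1'[Index] > @{variables('LastIndex')}),
            'Sheet1'[Index], ASC
        )
EVALUATE
    __Batch
ORDER BY
    'Sheet1'[Index]
After each call, the flow would set LastIndex to the largest Index returned, so successive batches never overlap.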
For dynamic columns, don't push field parameters to the limit. Keep a small Selection table with a user-friendly name and a DaxRef like 'Orders'[Order ID]. Let users pick from that list, use a DAX measure to concatenate the chosen DaxRefs (plus your key for batching/order), pass the string to the flow and build the DAX there. When multiple facts are involved, expose curated export views (one per grain/use case) and let the selection table point to those views; this avoids ambiguous joins and keeps performance predictable.
Embedded is fine; the only thing that matters is identity. RLS is enforced if and only if the export query runs under the viewing user's security context. In embedded/app-owns-data, pass the effective identity when you query the dataset, or run the Fabric export under delegated user auth. If your flow or API runs under a fixed service principal without effective identity, RLS won't apply; either switch to delegated auth or replicate the same row filters in the Fabric job.
Your SharePoint work was close. Use library-relative paths for Create folder, Get folder metadata using path and Move file. Use server-relative paths (URL-encoded, starting with /sites/.../Shared Documents/...) only for REST calls like breakroleinheritance. Most teams can skip REST altogether and just use Grant access to an item or a folder with the ItemId from Get folder metadata. Add a brief delay after creating the folder and double-check you're hitting the same Site Address; the Teams Documents library maps to Shared Documents, and private/shared channels live on different sites.
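To make the two path styles concrete, a rough example (the site and folder names here are made up):
// Library-relative path, as used by Create folder, Get folder metadata using path and Move file
// (the library itself is picked in the List or Library field):
General/ExportLargeData/tim.smith
// Server-relative path, used only in REST calls such as breakroleinheritance
// (URL-encode it when placing it inside the request URI):
/sites/ExportDemoSite/Shared Documents/General/ExportLargeData/tim.smith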
Email subscriptions aren't a bypass for limits; they're for formatted report deliveries with attachment size caps. For end users, the smoothest UX is: button in the report -> flow receives selections and user identity -> Fabric materializes data to OneLake/SharePoint (Parquet for size, optional CSV for convenience) -> flow grants per-user folder permissions and sends a link. Add simple guardrails (max columns, max batches), log each export (user, filters, row count, file path) and invite externals as guests when you need per-user access control.
Export data from a Power BI visualization - Power BI | Microsoft Learn
Datasets - Execute Queries - REST API (Power BI REST APIs) | Microsoft Learn
Security in Power BI embedded analytics - Power BI | Microsoft Learn
Working with folders and files with REST | Microsoft Learn
Manage list item and file permissions with Power Automate | Microsoft Learn
Hi folks,
So I tried to implement the next part: creating the files and a per-user folder. Once the file is created (after the do-until loop), I added a few more actions, and the flow looks like this:
In the Create new folder action (SharePoint), in the folder path I use the outputs of the previous two actions to create the folder at the respective location: @{outputs('BaseFolderPath')}/@{outputs('UserFolderName')}
So @v-hashadapu, were you referring to something like this?
The only issue: in Power Automate I can see that the folder was created, but when I checked the Teams channel at the respective location I could only see the file created after the do-until loop. Even though the folder action succeeded, the next action, 'Move file', could not find the folder, and I could not see the folder either (although I had used the same path where the CSV file was previously created and is visible).
URI in Send an HTTP request to SharePoint:
_api/web/GetFolderByServerRelativeUrl(@{outputs('Create_new_folder')?['body/{FullPath}']})/ListItemAllFields/breakroleinheritance(copyroleassignments=false, clearlimitedaccessusers=true)
Any inputs on what I am missing here, and is this correct?
@Gabry , @Poojara_D12 you can also share your inputs.
Regards,
Sidhant.
Hi folks,
A quick update: today I observed that the folder which was not visible under the Microsoft Teams channel could in fact be seen when I checked the same location on the SharePoint web interface.
(In Teams channel) File is created as expected
But the folder isn't; on the web, though, I was able to see the folder:
So I looked into what might be the issue: I queried Copilot, which suggested removing invalid characters and spaces (since there weren't any invalid characters, I removed the spaces), and I added a few more actions and even some delays.
To remove the white spacing between the words:
Then, in the Create folder action, I used the base folder and the trimmed name.
I added a few delays between the actions, and just before the Move file action I added Get folder metadata using path to cross-check whether the folder exists, but it failed again with the same issue: status 400 (could not find the folder).
(Here destinationFolderPath is a Compose action in which I have just given the destination: @{outputs('BaseFolderPath')}/@{outputs('TrimmedUserFolderName')})
Error: status: 404; message: Folder not found.
I am not able to figure out why there is a discrepancy even though the Create folder action succeeded.
Any idea @v-hashadapu , @Gabry and @Poojara_D12 what is missing?
Regards,
Sidhant
Quick update:
I had a few observations (I navigated to the location on the web interface for the respective SharePoint site):
The files were stored in: Documents/General/ExportLargeData
Previously I had the Base Path as /Shared Documents/General/ExportLargeData
So then I checked from the root level; we have:
(All my files are within General/ExportLargeData, and not Shared Documents.)
Shared Documents also has its own General folder, and currently the flow creates the folder within that one, where I have no files.
So then I modified the Base path to:
This resolved the first half: the folder was now created at the right path. But the flow again failed at the same action, the one added to get the folder metadata, saying it was not able to get the folder.
So, when the Create file action creates the file at the right location and I had given the same path, why did the Create folder action create the folder inside Shared Documents instead of General?
And now that the folder is created at the right path, why is the flow not able to get the folder?
Regards,
Sidhant.
So I figured out the issue: when we use the Create new folder action (SharePoint), we specify the List or Library as Documents (which internally corresponds to Shared Documents), so we only need to specify the sub-folder part. So in BaseFolderPath (Compose action) I specified just that:
This created the folder at the right location. Then, for the cross-check with 'Get folder metadata using path', I used another Compose action (renamed to ShareBasePathLocation) in which I simply specified the complete path (including Shared Documents).
So this resolved the folder-not-found issue. Now, once the file is moved to the respective location, we need to break the inheritance (default security) and impose a new one, so for that I added the 'Send an HTTP request to SharePoint' action.
uri:
_api/web/GetFolderByServerRelativeUrl('@{outputs('FolderServerRelativeUrl')}')/ListItemAllFields/breakroleinheritance(copyroleassignments=false,clearSubscopes=true
SiteServerRelativePrefix: (Compose action)
/sites/<SharePointSiteName>
FolderLibraryRelativeFullPath: (Compose action)
@{outputs('Create_new_folder')?['body/{FullPath}']}
FolderServerRelativeURL (Compose action):
Inputs: @{outputs('Create_new_folder')?['body/{FullPath}']}
This is where the flow fails:
Hi everyone,
So I was trying one last method: using Power Automate with the 'Run a query against a dataset' action, where we provide the DAX and get the required result. I have constructed the flow:
-> We first define a few things: the min value (here it's 1), the batch size (25,000) and how many times the loop runs (the iteration count).
The flow looks like:
The flow fails at the second action, 'Run a query against a dataset 2'. I'm not sure what is incorrect in the query provided in that action. I even tried using a Compose before it and passing the outputs of the Compose into it, but no luck.
// The DAX query incorporated in the failing action
DEFINE
    VAR __DS0FilterTable =
        FILTER(
            KEEPFILTERS(VALUES('Sheet1'[Index])),
            AND('Sheet1'[Index] >= @{variables('MinRows')}, 'Sheet1'[Index] <= @{variables('IncrementRows')})
        )
    VAR __DS0Core =
        CALCULATETABLE(
            SUMMARIZECOLUMNS(
                'Sheet1'[Domain],
                'Sheet1'[Location],
                'Sheet1'[Value]),
            __DS0FilterTable
        )
EVALUATE
    __DS0Core
ORDER BY
    'Sheet1'[Index]
o/p: Error for the 'Run a query against a dataset 2' action:
{
"statusCode": 400,
"headers": {
"Cache-Control": "no-store, must-revalidate, no-cache",
"Pragma": "no-cache",
"Strict-Transport-Security": "max-age=31536000; includeSubDomains",
"X-Frame-Options": "deny",
"X-Content-Type-Options": "nosniff",
"RequestId": "abedf306-b1b6-4f82-862a-8258bc6a4847",
"Access-Control-Expose-Headers": "RequestId",
"x-ms-environment-id": "default-1f4beacd-b7aa-49b2-aaa1-b8525cb257e0",
"x-ms-tenant-id": "1f4beacd-b7aa-49b2-aaa1-b8525cb257e0",
"x-ms-dlp-re": "-|-",
"x-ms-dlp-gu": "-|-",
"x-ms-dlp-ef": "-|-/-|-|-",
"x-ms-mip-sl": "-|-|-|-",
"x-ms-au-creator-id": "77c12444-2bba-4be3-81c8-d55a183a2a28",
"Timing-Allow-Origin": "*",
"x-ms-apihub-cached-response": "true",
"x-ms-apihub-obo": "false",
"Date": "Wed, 22 Oct 2025 10:54:24 GMT",
"Content-Length": "555",
"Content-Type": "application/json"
},
"body": {
"error": {
"code": "DatasetExecuteQueriesError",
"pbi.error": {
"code": "DatasetExecuteQueriesError",
"parameters": {
},
"details": [
{
"code": "DetailsMessage",
"detail": {
"type": 1,
"value": "Query (17, 3) A single value for column '\u003Coii\u003EIndex\u003C/oii\u003E' in table '\u003Coii\u003ESheet1\u003C/oii\u003E' cannot be determined. This can happen when a measure or function formula refers to a column that contains many values without specifying an aggregation such as min, max, count, or sum to get a single result."
}
},
{
"code": "AnalysisServicesErrorCode",
"detail": {
"type": 1,
"value": "3241803789"
}
}
]
}
}
}
}
This is my table preview:
Note: I had created a simple index column within Power BI Desktop (starting from 1) for ordering the data, and then published the report.
Can you let me know what is missing, @v-hashadapu, @Gabry, @Poojara_D12?
Regards,
Sidhant
A quick update: I was able to resolve the issue by making a few tweaks to the query. One was to use FILTER with ALL on the Index column instead of KEEPFILTERS, and to also include the Index column in SUMMARIZECOLUMNS.
(The other change: earlier I had used Set variable to increment the three variables MinRows, IncrementRows and the counter variable; instead I now use the Increment variable action, as sketched after the query below.)
//Dax Query (Run a query against dataset 2 action)
DEFINE
VAR __DS0FilterTable =
FILTER(
ALL('Sheet1'[Index]),
'Sheet1'[Index] >= 1
&& 'Sheet1'[Index] <= 25000
)
VAR __DS0Core =
CALCULATETABLE(
SUMMARIZECOLUMNS(
'Sheet1'[Index],
'Sheet1'[Domain],
'Sheet1'[Location],
'Sheet1'[Value]
),
__DS0FilterTable
)
EVALUATE
__DS0Core
ORDER BY
'Sheet1'[Index]
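For anyone following along, the per-iteration bookkeeping inside the do-until loop now looks roughly like this (I'm paraphrasing my action settings; 25,000 is my batch size):
// In each pass of the loop, after 'Run a query against a dataset 2' and the append step:
Increment variable -> Name: MinRows, Value: 25000
Increment variable -> Name: IncrementRows, Value: 25000
Increment variable -> Name: LoopVar, Value: 1
// The do-until condition ends the loop once the counter (LoopVar in my flow) passes the iteration count defined at the start.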
I have some follow-up queries:
1. Currently I have specified which columns are to be considered for export (in the 'Run a query against a dataset 2' action you can see the three columns in SUMMARIZECOLUMNS). Now I wanted to understand: is there a way to let end users select the columns they want (keeping the Index column, as that is needed to avoid the error) and then pass that into the query?
2. This flow will mostly be triggered by end users (from the Power BI report). Right now I am creating the CSV file at a specific location on a SharePoint site. Thinking from the end-user perspective, and assuming this is used by multiple users, can we do the following:
i) The end user specifies the location where the file should be stored, and we save it at that location.
ii) (If i) is not viable) If we are using SharePoint/OneDrive, how can we segregate each user's files? Assume Tim and John are my end users and we are storing the files in a folder on a SharePoint site: both users should not be able to see each other's data (Tim should not see John's files and John can't see Tim's files). How can we implement this security aspect?
iii) In Create file (SharePoint action) we specify four things: Site address -> Folder path -> File name -> Content. Right now I have specified everything explicitly (kind of hard-coded); by any chance can we take the Site address dynamically (i.e. the end user tells us which SharePoint site to use)?
I am trying to build a solution which is more end-user friendly (they won't be interested in the underlying details), so if anyone can share their inputs do let me know.
Also @v-hashadapu , @Gabry and @Poojara_D12 if you have anything to add do share that.
Regards,
Sidhant.
Hi @Sidhant , Thank you for reaching out to the Microsoft Community Forum.
For end-user-driven dynamic column selection, Power BI and the DAX language do not natively allow passing a flexible list of column names into SUMMARIZECOLUMNS at runtime; the set of fields must be known in advance. The best workaround is to dynamically build the DAX query string inside Power Automate before calling the dataset, inserting only those columns that the user selects via the UI or parameters. This lets you control exactly which columns are exported while always including Index for batching.
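As a rough sketch of what the composed query can look like (reusing the Sheet1/Index example from this thread and a hypothetical flow variable SelectedColumns that holds the comma-separated DAX column references):
DEFINE
    VAR __DS0Core =
        SUMMARIZECOLUMNS(
            'Sheet1'[Index],
            @{variables('SelectedColumns')}
        )
EVALUATE
    __DS0Core
ORDER BY
    'Sheet1'[Index]
// At run time @{variables('SelectedColumns')} expands to something like 'Sheet1'[Domain],'Sheet1'[Location],
// so only the user's chosen fields end up in the result; keep a fallback for the case where nothing is selected.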
When it comes to user-specific file storage in SharePoint or OneDrive, best practice is to create a separate folder for each user inside a common document library. After the file export, Power Automate can break inheritance and assign permissions so that only the relevant user can access their own exports. Microsoft's official guidance covers precisely how to do this using the "Grant access to an item or a folder" action: Manage list item and file permissions with Power Automate. Keeping each user's exports in a folder named after them and applying unique permissions ensures data privacy and keeps the system manageable.
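A typical shape for that part of the flow, using standard SharePoint connector actions (the library and the folder naming below are just an example):
1. Create new folder -> List or Library: Documents; Folder Path: General/ExportLargeData/<user email>
2. Get folder metadata using path -> the same path, to pick up the folder's ItemId
3. Grant access to an item or a folder -> Id: the ItemId from step 2; Recipients: the user's email; Role: Can view
If the folder must not inherit the library's permissions, break inheritance first, for example via the 'Send an HTTP request to SharePoint' breakroleinheritance call discussed elsewhere in this thread.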
Dynamically specifying the SharePoint Site Address in the Create File action is not directly supported via user input in the SharePoint connector UI; it’s typically chosen from a fixed dropdown. However, if you want to make it dynamic, you can leverage environment variables or store a user-to-site mapping and use more advanced Power Automate actions, such as the HTTP connector targeting SharePoint’s REST API.
For more details, please refer to:
SUMMARIZECOLUMNS function (DAX).
Environment variables in Power Automate.
Export and email a report with Power Automate - Power BI | Microsoft Learn
Export paginated reports with Power Automate - Power BI | Microsoft Learn
SharePoint - Connectors | Microsoft Learn
Store and manage values in variables in Power Automate - Power Automate | Microsoft Learn
Hi @v-hashadapu ,
Thanks for the reply and for giving an overview of how to proceed with the next set of steps. Today I was working on a sub-task: how to pass the column values by taking input from the user. For that I did the following:
// Created a table with two columns: a user-facing display name and a DAX-friendly column reference
UserSelections =
DATATABLE(
"DisplayName", STRING,
"DaxToken", STRING,
{
{ "Domain", "'Sheet1'[Domain]" },
{ "Location", "'Sheet1'[Location]" },
{ "Value", "'Sheet1'[Value]" }
}
)
// Then created a measure which gets the selections from the slicer and which in turn is passed on to the Power Automate visual
Selected Export Columns =
VAR IsAnyFilter =
ISFILTERED(UserSelections[DisplayName])
|| ISCROSSFILTERED(UserSelections[DisplayName])
VAR cnt = COUNTROWS( VALUES(UserSelections[DaxToken]) )
VAR allCnt = COUNTROWS( ALL(UserSelections[DaxToken]) )
RETURN
IF (
NOT IsAnyFilter || cnt = allCnt,
"", -- empty string → flow fallback
CONCATENATEX( VALUES(UserSelections[DaxToken]),
UserSelections[DaxToken],
"," )
)
It looks like this (I added a card visual just to check that the selections are correct):
In the Power Automate flow I simply created an empty variable and then appended the selected values to it.
Then I used this variable in the 'Run a query against a dataset 2' action in the do-until loop (where previously I had hard-coded all the columns).
So I have solved the first part (instead of hard-coding all the columns, the end user now has some flexibility over which columns they want in their export).
Now I need to check the download part. I hope this breakdown is useful for others who might have a similar requirement.
If possible, any article that might help (apart from the documentation you shared earlier) and a simple example of how to get it done (like the breakdown I just gave) would be quite useful.
@Gabry , @Poojara_D12 if you have any inputs please do share them.
Regards,
Sidhant.
Hi @Sidhant , Thank you for reaching out to the Microsoft Community Forum.
You’ve done an excellent job solving the dynamic column selection challenge, that’s typically the most complex part of this type of export workflow. Your approach using a mapping table and a DAX measure to generate a query ready column string is exactly how to give end users flexibility while maintaining control over the dataset query structure. By passing that string to Power Automate and dynamically constructing the DAX query inside the flow, you’ve effectively enabled runtime customization without breaking Power BI’s fixed schema requirements.
A practical way to think about your setup is to create a mapping table with display-friendly and DAX column names, let users choose fields through a slicer, use a DAX measure to concatenate the chosen technical names into a string and then pass that string to Power Automate as input. Inside the flow, you can split the string and use it to dynamically build the DAX query for export. This gives users complete flexibility in what they export, while keeping your model design and security consistent.
For the download step, continue this pattern by letting Power Automate handle the export logic, use the Run a query against a dataset action to retrieve only the selected columns (plus your Index field for batching) and export the results to SharePoint or OneDrive. Once the export completes, apply user specific folder permissions so each user can access only their own files.
For documentation and practical walk-throughs, there isn't a single official article that covers this exact "dynamic column export via Power Automate" pattern end to end, but the following references, along with what I already shared, together support each part of your solution:
Power BI - Connectors | Microsoft Learn
Export and email a report with Power Automate - Power BI | Microsoft Learn
Manage list item and file permissions with Power Automate | Microsoft Learn
Hi @v-hashadapu
Thanks for acknowledging my response, I appreciate that. I have a few more queries. As I clarified earlier, this solution is meant to be end-user centric, and the Power BI report is an embedded report. Think of it like installing software such as VS Code or Power BI Desktop: on clicking the download button it simply gets downloaded locally, super easy. Similarly, say the user wants to download a file (present on SharePoint/OneDrive) to their local drive; they can do that using the 'Create file' action, but the prerequisite is that a standard gateway is installed and configured.
Power Automate Flow to save file to local drive
I came across this article, but I had a few queries: we need to have the gateway installed and set up, but given the information we have about the end users, will this solution work? (In the article they first set up the gateway on the system and then built the flow.)
Next question: so far, for the export, I have only worked with data from a single table, but there can also be a scenario where the data is spread across multiple tables, like a single fact and multiple dimensions. How should one proceed in that case?
Also, in terms of security, let's say RLS is implemented (assume we have end users Jim and Ryan, and tables orders, sales, people; based on RLS, Jim is only allowed to access data from orders, whereas Ryan can access orders and sales). In that case, will RLS be maintained (does that need to be defined at the workspace level)? I came across one comment where a similar approach was used, but the issue was that RLS was getting overridden.
The other thing: is there any hard limit on the number of rows that can be retrieved? Right now I was able to export 1M rows, and for larger sizes I was thinking of increasing the batch size (from 25,000 to 100,000), but the above image suggests it does not work for 2M rows. So, for the 'Run a query against a dataset' action, is there a limit on how much it can retrieve?
For folder/item-level access you suggested using 'Grant access to an item or a folder' (under Manage list item and file permissions), applied once the file is created in SharePoint. My question is how we create end-user folders (folders based on the client/end-user name, with their files stored under that folder). For that we need to capture some information about the end users, but since they won't be part of any Entra ID, how do we manage this part? And once that is done, Grant access also requires email recipients to be specified, so how do we specify those?
Lastly, in my current flow I also made a few tweaks to add a value-level filter (earlier I had only given an option to select which columns should be exported).
I did observe a duplication issue in the column selection (when just one column was selected, which was due to the Apply to each loop), so I modified it with the following expression.
The other issue I observed was header repetition in the CSV; what I mean is this:
// Here are the expression that were being used for the filter values and the update to avoid duplication in column selection
//DomainCsv
@{if(equals(length(variables('DomainsArr')), 0),'',concat('"', join(variables('DomainsArr'), '","'), '"'))}
//DomainFilterClause
@{if(equals(outputs('DomainCsv'), ''),'',concat(' && ''Sheet1''[Domain] IN {', outputs('DomainCsv'), '}'))}
[This specifies the filter condition format: we give the table_name[column_name], use the IN operator, and the values are retrieved from the previous action.]
//FilterArray (Filter array action):
// From:
@coalesce(triggerBody()?['entity']?['Power BI values'], array())
// Condition (keep only rows where a column was actually selected):
@and(not(empty(item()?['Selected Export Columns'])), not(equals(item()?['Selected Export Columns'], '')))
//ColumnList: to avoid duplication (skipped the Apply to each loop)
@{trim(string(outputs('SelectedColumnRow')?['Selected Export Columns']))}
The earlier logic kept appending the same value (due to the loop).
The other thing: in the filter value slicer, if no selection is made I simply pass all the values; otherwise I pass only the selections, as you can see in the below image.
So if you help me with these queries it will help a lot.
Also if you @Gabry , @Poojara_D12 have any inputs on this do let me know.
Regards,
Sidhant.
Hi folks,
Just a quick update: I was able to resolve the duplicate headers issue. I only had to make one slight change in the 'Append to string variable' action (within the do-until loop): previously I was appending the full output of the 'Create CSV table' action, so it kept adding the headers on every iteration.
To avoid this, I used the below expression (it keeps the header only on the pass where LoopVar equals 2, and on every other pass it strips the first line, i.e. the header, before appending):
if(equals(variables('LoopVar'), 2), body('Create_CSV_Table'), join(skip(split(body('Create_CSV_Table'), decodeUriComponent('%0A')),1), decodeUriComponent('%0A')))
Now there are still doubts over a few other points which I shared in my last reply:
i) Is there any limitation with the approach I am using (running a query against the dataset / semantic model)? I came across one comment stating it does not work when there are more than 2M/2.5M records.
ii) I am planning to use this solution with a Power BI embedded report, so will there be any challenges integrating it, given that in my case the report is embedded in an existing client web application through which end users will be interacting?
iii) The other thing is the security aspect: will RLS be carried forward in this solution?
(Assume we have 2 users: Tim can access Asia sales and John can access US & Europe sales. Once they make their selections and export, will they only get the records they have access to, or will that not be the case?)
iv) With respect to download, I wanted to list a few approaches that could be incorporated:
Ideally something like a local download (I did find an article for this, shared in one of my earlier replies, but that requires a standard gateway setup, which I am unsure will be accepted). And second, as asked before, is there any way for the user to specify a location (SharePoint/OneDrive) that can be passed to Power Automate so we save the file there?
If not, some replies recommended using a common SharePoint site, so I have a couple of questions about that:
Since the solution will be end-user centric, how should the SharePoint site access be set up, security-wise, so that instead of giving full control to anonymous users or users outside the organisation we give them only minimal control? Then, since there will be multiple end users (about whom we will have limited information), how do we create separate folders per user and configure access so that each user can only see their own files and not others'? (For that I have gone through the Grant access action in SharePoint.) These previous points are where I am unclear.
So @v-hashadapu, @Gabry, @Poojara_D12, if you have any inputs please do let me know.
Regards,
Sidhant.
Hi @Sidhant , Thank you for reaching out to the Microsoft Community Forum.
The current approach using Run a query against a dataset works well up to a point, but it isn't designed for sustained multimillion-row exports. Once the data volume crosses roughly 2 to 2.5 million rows, you can expect timeouts or throttling, because the dataset endpoint isn't optimized for bulk extraction; it's mainly intended for querying visuals, not streaming data dumps. You can extend performance slightly by batching through indexed ranges or smaller TOPN queries, but reliability drops as size grows. For consistent large-scale exports, the recommended pattern is to offload that work to a Fabric notebook or pipeline that writes the data directly to OneLake or Blob storage in CSV or Parquet format. This avoids Power BI's memory and query timeout limits altogether.
When it comes to Power BI Embedded, there’s no inherent problem embedding this type of export driven report. The main consideration is identity, specifically, whose credentials are used when the export flow runs. If the process runs under a service principal or app identity, you’ll lose the per user filtering that Power BI normally enforces. To preserve end user security context, the export should either run using delegated user authentication or explicitly include the user identity through the effectiveIdentity property when calling the dataset or semantic model. As long as the flow executes with the correct user identity, the embedded scenario works seamlessly.
For Row-Level Security, it only carries over if the export query runs in the same security context as the viewing user. RLS filters live inside the dataset, but they're enforced based on who executes the query. If your Power Automate connection or API call uses a fixed service account, it bypasses RLS entirely. The correct implementation is to trigger the export under the user's identity so their RLS scope is honored, or to replicate equivalent row filters in your Fabric notebook or pipeline if delegated access isn't possible. Testing the flow with restricted users is the quickest way to confirm RLS enforcement before rollout.
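For illustration, when you call the Execute Queries REST endpoint directly (outside the Power Automate action), the request body accepts an impersonatedUserName property that applies the target user's RLS; a minimal sketch with placeholder values:
POST https://api.powerbi.com/v1.0/myorg/datasets/{datasetId}/executeQueries
{
  "queries": [
    { "query": "EVALUATE TOPN(100, 'Sheet1')" }
  ],
  "serializerSettings": { "includeNulls": true },
  "impersonatedUserName": "john@contoso.com"
}
impersonatedUserName is only honored when the dataset actually has RLS defined, and the identity calling the API still needs sufficient permissions on the dataset; the Power Automate 'Run a query against a dataset' action, by contrast, runs as the connection's user.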
Regarding download handling, direct local downloads through Power Automate do require either the on premises data gateway or Power Automate Desktop installed on the user’s machine. These setups can be heavy for large audiences, so a better end user experience is to deliver files via SharePoint or OneDrive. You can dynamically create user specific folders using their email or ID as the folder name, then use the Grant Access action to assign permissions after the file is created. This ensures each user only sees their own exports. If the users aren’t part of your Entra ID tenant, you can either invite them as guest users (which is the secure, supported method) or share files via expiring anonymous links for limited access.
Export and email a report with Power Automate - Power BI | Microsoft Learn
Datasets - Execute Queries - REST API (Power BI REST APIs) | Microsoft Learn
Security in Power BI embedded analytics - Power BI | Microsoft Learn
Manage list item and file permissions with Power Automate | Microsoft Learn
Hi @v-hashadapu ,
Thanks for the detailed breakdown in your reply. I got some information with respect to the RLS (row-level security) setup; it's a somewhat custom setup, so I'll give an overview for better context.
End users interact with the organization's portal. What this portal does behind the scenes is generate a unique ID based on the end user's email (this is created just once and stored in the backend, I presume, and not at the time of authentication with the Power BI service).
When the Power BI service is accessed, in the backend the organization maintains an RLS table, 'v_user_facility_map', which has 3 columns: id, user_email (the end user's email) and user_facility_email (a kind of admin whose email has the required access within the org).
Example:
table: v_user_facility_map
d_facility_id: 32, user_email: tim@xyz.com, user_facility_email: smith@logi.com
d_facility_id: 32, user_email: jane@yiz.com, user_facility_email: smith@logi.com
And this table is in turn connected to one of the dimension tables (which denotes what level of access that set of users has, as far as I know).
And then a role is created on top of the RLS table (v_user_facility_map)
[user_facility_security_id] IN
CALCULATETABLE(
VALUES('v_user_facility_map'[user_facility_security_id]),
'v_user_facility_map'[user_email] = USERPRINCIPALNAME()
)
Then, within the Fabric workspace, we simply set the security for the semantic model under that role.
I tested this on my end: I created a simple table (RLSDomainAccess) in which I specified the Domain and the email.
Similarly, I created a role within Power BI Desktop:
Then I published the report to the Fabric workspace and added the respective users under the new role.
Then, to allow the end user to trigger the flow, I had to add the user under the Run-as option (before that, if the end user tried to trigger the flow, it was unsuccessful).
After this, the users were able to trigger the flow (the users used the owner's connection), and this also ensured that RLS was maintained:
Normal view (with full access, like a super user):
Assume 'John', who has access to Domain == 'RESTAURANT', accesses the report:
Based on this setup, one thing is clear: RLS will be preserved (given the setup on the actual organization side). Now I have a couple of questions. When the flow needs to be triggered (right now I have only tested this with a few of my team-mates), in the actual case does the respective admin simply have to add the users' emails or the Azure AD group under Run as so they can trigger the flow?
The other thing: you mentioned that the local-download setup will be quite heavy (for every user), so it's better to go with SharePoint, but I'm still unclear on how to integrate that into my existing flow.
Queries:
1: How do we get the end-user details (within this context)?
2: You mentioned making use of the Grant access action; in this case, will the organization where this is deployed have to create/maintain a SharePoint site, with the respective SharePoint admin managing the access-related things?
3: To create a separate folder for every user, there are some initial checks on whether a folder already exists for the user who triggered the flow (e.g. Tim triggers the flow, so we check whether a folder with his name/email exists in the respective SP document library; if not, create one -> break the inheritance -> grant access to only that folder). Is there some kind of POST (HTTP) call that needs to be made here, and if yes, what payload is passed (API endpoint and the parameters within it)?
I hope this gives you a better understanding of the context and setup I am working with.
@Poojara_D12 , @Gabry if you have any inputs please share your thoughts.
Regards,
Sidhant
Hi @v-hashadapu ,
There has been progress on the folder-creation part and on storing the files within the respective user folders (you can check the latest comment; I started a new reply for that). Now I wanted to understand, based on the structure I described in my previous reply, whether the solution we have proposed (using Power Automate along with Power BI to overcome the record limit) will work in a complex environment.
The reason: right now I have used just a single table with a small set of columns, but in the actual environment that's not the case; there we have multiple fact tables and dimension tables, so we need to switch between fact tables, and there are cases where field parameters have 10-20 columns. For now, we were thinking of providing only the most commonly used columns (from different tables). The other thing: in the UserSelections table I have so far only placed columns from a single table, but is it possible to consolidate the columns used in the field parameters within a single table (like UserSelections), e.g.:
'Orders'[OrderID], 'People'[Department] (here Orders and People are two different tables, so can they be consolidated when we construct UserSelections)?
I also wanted to understand the limitations of this solution and the things to keep in mind when the data is this large, considering the data model as well.
@Gabry , @Poojara_D12 if you have anything to add please share those points
@SantoshPothnak if I have missed something please do add your points as well.
Regards,
Sidhant
Hi folks,
Thanks again for all the suggestions shared so far; they've been very helpful. However, due to several constraints in our environment (as mentioned earlier by @SantoshPothnak and myself), many of the proposed solutions aren't feasible for us. To quickly summarize:
End-user focus: The export functionality is intended for external end users via an embedded Power BI report, so options like Analyze in Excel, translytical flows, and B2B sharing are not viable.
Tooling restrictions: Use of tools like Fabric Notebooks or external tools such as DAX Studio and Bravo is also not permitted in the current setup either due to environment policies or because they aren’t suited for end users.
Based on internal discussions and feedback from the team working on the implementation, the current acceptable path forward looks like this:
Provide a button or link within the embedded report to allow users to initiate a CSV export.
The exported file would ideally reflect filtered data (based on the report state), but even if that’s not possible, a default dataset dump is acceptable.
The file can be stored at a shared location where users can download it once it’s generated.
It’s understood that since this is a CSV export, formatting and formulas won’t be retained.
Also, we've learned that other teams working in similar constraints have gone with a parallel, report-independent export flow — for example, triggering data exports via stored procedures on Synapse using custom code in a portal. While we don't have full visibility into those implementations, it's reassuring to know this is a path others have taken successfully.
We’re continuing to explore the best way to build this within the current limits — happy to hear if anyone has optimized such an approach further.
@v-hashadapu , @Gabry , @Poojara_D12, if you have any inputs please do share them (and if you have any relevant documents, videos or articles for the path mentioned above, please do let me know).
Regards,
Sidhant.
Hi @v-hashadapu , @Gabry , @Poojara_D12
I just wanted to clarify a few things about the actual requirement so that all of us are on the same page. In the Power BI report we will have 10-15 slicers (most of them built using field parameters {numeric}, some of date type, and others), as shared by @ShubhaGampa11 at the very beginning; it looks something like this:
The end user plays around with these filters, and once done, the visuals on the respective page are filtered accordingly. This data (shown by the visuals) is what needs to be exported. For these reports to be accessible to a wider audience (end users), the Power BI report is embedded in a web application (a kind of website) where these interactions take place, so from that perspective we are looking for a solution that is more end-user friendly.
So far, based on the conversation, we have:
1. Power Automate + Fabric Notebook + Lakehouse (this is one of the ways, but a bit complex)
2. Analyze in Excel (doesn't work in the case of embedded reports <- no use)
3. Translytical flow (this can be an option, but since we are using embedded reports, and as per the official documentation embedded reports aren't supported as of now)
4. Use of DAX queries/Bravo (external tools): these are more useful for developers and not for end users.
My colleague @SantoshPothnak just shared a few additional points describing some technical details (the license and the scenario); based on that context, could you help align how we should proceed?
I know this thread has gotten long, but since there's no direct way to achieve this, we are trying things out (which takes time), and noting the issues.
Regards,
Sidhant.
Hi @Sidhant , thanks for the update. Just wanted to know if you have found an answer or workaround to achieve your expected results. If so, please share the details here, as it may help others with similar issues.
Thank you.
Hi @v-hashadapu ,
As of now we are still working on this part, and there are some additional points which I will share in a new reply.
Regards,
Sidhant