Sidhant
Advocate IV

Managing and Exporting High-Volume Datasets (~1M Records) in Power BI

Hello everyone,
I'm currently working on a use case where I need to manage and export high-volume datasets (~1 million records) using Power BI, and I'm exploring multiple approaches. I’d appreciate your feedback on the current methods I’ve tried and would love to hear if there are better alternatives or optimizations.


Problem Statement:
Effectively manage and export large datasets (~1M records) in Power BI, while allowing users to select/deselect fields dynamically and ensuring smooth integration with Power Automate and Fabric for downstream processes.

 

Current Approaches:

Approach 1: Field Parameters + Power Automate + Paginated Reports

  • Using field parameters in Power BI to allow dynamic column selection.

  • Passing selected fields to Power Automate, which triggers a Paginated Report (RDL).

  • Logic in RDL is set up to show/hide columns based on parameters.

Issue:

The show/hide logic is being overridden — despite user selection, all columns are getting displayed. It seems the parameters are not being passed or consumed correctly within the RDL file.

 

Approach 2: Microsoft Fabric Lakehouse + Semantic Model

 

  • Created a Lakehouse in Fabric to handle large data volumes efficiently.

  • Built a semantic model on top of the Lakehouse.

  • Developed reports using this model in Power BI.

  • Trying to trigger export or automation via Power Automate using Fabric data.

Issue:

Getting a "Bad Request" error while trying to integrate Power Automate with Fabric. Details of the error aren't very descriptive, so it's hard to debug.

 

I had a couple of questions, which are as follows:

 

  • Are these approaches going in the right direction for large dataset export scenarios?

  • Has anyone successfully implemented field-level selection with RDL exports based on Power BI parameters? How did you overcome the column visibility issues?

  • Any known limitations or best practices for using Power Automate with Fabric Lakehouse or Semantic Models?

  • Are there any alternative approaches or workarounds you’d recommend for:

    • Efficiently exporting 1M+ rows

    • Allowing dynamic field selection

    • Maintaining performance and scalability.

Any insights, samples or even partial suggestions would be highly appreciated. I’m open to reworking my approach if there’s a more scalable or reliable pattern others have used successfully.

Thanks in advance,
Sidhant

 

 

22 REPLIES
Sidhant
Advocate IV

Hi @v-hashadapu , @Gabry , @Poojara_D12 
I just wanted to clarify a few things about the actual requirement so that we are all on the same page. In the Power BI report we will have 10-15 slicers (most of them built using numeric field parameters, some of date type, and others), as @ShubhaGampa11 shared at the very beginning. It looks something like this:

Sidhant_0-1759485593782.png

 

So the end user plays around with these filters, and once done the visuals on the respective page are filtered based on those selections. This data (shown by the visuals) is what needs to be exported. For these reports to be accessed by a wider audience (end users), the Power BI report is embedded in a web application (a kind of website) where these interactions take place, so from that perspective we are looking for a solution that is more end-user friendly.
So far based on the conversation we have:
1. Power Automate + Fabric Notebook + Lakehouse (this is one of the ways, but a bit complex)
2. Analyze in Excel (doesn't work for embedded reports, so of no use)
3. Translytical task flow (this could be an option, but we are using embedded reports and, as per the official documentation, embedded reports aren't supported as of now)
4. DAX query view / Bravo (external tools): these are more useful for developers, not for end users.
My colleague @SantoshPothnak just shared a few additional points describing some technical details (the license and the scenario); based on that context, could you help us align on how we should proceed?

I know this thread has gotten long, but since there's no direct way to achieve this, we are trying out different approaches (which takes time) and noting the issues.

Regards,
Sidhant.

SantoshPothnak
Frequent Visitor

Hi @Gabry, @v-hashadapu, @Poojara_D12,

Thanks a lot for all the valuable suggestions shared so far. Really appreciate the depth of details—it has helped us narrow down our direction.

Just to align on our scenario:

  • We’re working with Power BI Embedded (Premium Capacity – P1/F64) reports hosted inside a custom portal application, where end-users (business users, not developers) interact with the reports.

  • This report has a complex prompt-based setup (migrated from MicroStrategy) with 15–20 filters, a mix of date filters, slicers, hierarchical field parameters (2-level), and various field parameter selectors.

  • Based on these selections, the report displays a large table visual where the data easily goes from a few rows to frequently 500K+ rows (sometimes 1M+). This exceeds the Power BI export limit (150K), which is the key challenge.

From your shared ideas, we are evaluating the following approach as the most practical:

Proposed Flow

  1. Power BI Report → Users make selections via slicers/parameters (field & hierarchy prompts).

  2. Trigger via Power Automate → Button in the report passes selected filters (a sample payload shape is sketched after this section).

  3. Fabric Notebook → Notebook applies the same filters, queries the dataset, and writes the resulting data to OneLake/Blob Storage in CSV/Excel/Parquet format.

  4. Power Automate Notification → Once the notebook job succeeds, the flow retrieves the output file path and emails the link or attachment back to the user.

This pattern:

  • Cleanly bypasses the 150K export limitation,

  • Keeps the experience simple for end-users (just a button click),

  • Scales to handle 1M+ rows,

  • Gives flexibility in format (CSV/Excel/Parquet for downstream use).
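
To make step 2 a bit more concrete, the selections could be serialized into a small JSON-style payload before being handed to the notebook. A hypothetical shape is sketched below (all field names are placeholders); list values stand for multi-select slicers, while Start/End keys stand for range filters:

# Hypothetical filter payload passed from Power Automate to the notebook
# (field names are placeholders, not taken from the real model).
filters = {
    "Region": ["East", "West"],        # multi-select slicer
    "OrderDateStart": "2024-01-01",    # range start
    "OrderDateEnd": "2024-12-31",      # range end
    "Category": "Technology",          # single-value selection
}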

Notes on Alternatives

  • Paginated reports → We tested column visibility rules, but as many of you highlighted, exports still tend to include all columns unless the dataset itself is dynamically shaped. That adds unnecessary complexity for 20 prompts, so we may not proceed here.

  • Translytical Task Flows / UDFs → These look very promising, but as of now they have limitations in Power BI Embedded, hence not adopting in this first phase. We’ll revisit once that support matures.

  • External Tools (Bravo/DAX Studio) @Sidhant  mentioned → Great for developers, but not suited for end-users in an embedded portal scenario.

Where We Are

Our team (@Sidhant, @ShubhaGampa11 and myself) shall continue POC work with the Fabric Notebook + Power Automate orchestration and loop back here once we validate this end-to-end flow. Thanks again for pointing us in the right direction :-).

Sidhant
Advocate IV

Hi @v-hashadapu , @Gabry , @Poojara_D12 
I had a few follow-up questions (with respect to the Analyze in Excel option), especially in the context of embedded reports:

  1. Availability in Embedded Reports:
    The Analyze in Excel and Personalize visuals options are not visible in our embedded reports.

    • Do we need to build custom features using the Power BI Embedded APIs to enable access to these options from our portal?

    • Are there any alternative workarounds, considering that the portal is managed by a separate team with a different development roadmap?

  2. User Permissions:
    Users in our portal have the Viewer role.

    • Is this sufficient to use Analyze in Excel, or are additional permissions or different roles required for direct Excel access or similar functionality?

  3. Compatibility Across Connection Modes:
    Does Analyze in Excel work uniformly across all report connection types such as DirectQuery, Import, and Composite models? Are there any known limitations or caveats?

Any insights on these points would be greatly appreciated!

Regards,
Sidhant.

A quick update: the 'Analyze in Excel' option does not seem to be available in embedded reports. Earlier I had shared a few options (such as DAX Studio and Bravo as external tools), but from an end-user perspective I wanted to know whether we can embed these tools in the portal (where the Power BI reports are embedded).
If not, what are the alternatives? (My guess is that the two tools, DAX Studio and Bravo, are mainly helpful for developers.)

Sidhant_0-1759224906111.png

Analyze in Excel documentation 
So if @v-hashadapu, @Gabry or @Poojara_D12 have any inputs, do let me know.

Regards,
Sidhant.

Hi @Sidhant , Thank you for reaching out to the Microsoft Fabric Community Forum.

 

The Analyze in Excel feature is only available in the Power BI Service and not through Embedded or API scenarios. To use it, users must have both Viewer and Build permissions on the dataset and the organization's admin must enable the feature in the Power BI tenant settings. This feature supports most dataset types, including Import, DirectQuery and Composite models, though certain advanced setups like field parameters may have unpredictable results.

 

When working with embedded solutions or exporting datasets larger than standard limits (like over 1 million rows), the best practice is to use Microsoft Fabric. This allows data export through pipelines or notebooks, enabling files to be saved as CSV or Parquet in OneLake for easy sharing or automation. Visual-level exports should be avoided for large datasets, as they're capped at around 150,000 rows.
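
As a rough illustration of that notebook route (a minimal sketch only; the table name and output folders are placeholders, and it assumes the notebook has a default Lakehouse attached):

# Minimal sketch: export a large Lakehouse table to files in OneLake.
# "SalesTable" and the folder names below are placeholders.
df = spark.read.table("SalesTable")

# Parquet keeps the export compact and fast to re-read downstream.
df.write.mode("overwrite").parquet("Files/exports/sales_parquet")

# An optional CSV copy for consumers that need it.
df.write.mode("overwrite").option("header", "true").csv("Files/exports/sales_csv")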

 

Build Permission for Shared Semantic Models - Power BI | Microsoft Learn

Create Excel workbooks with refreshable Power BI data - Power BI | Microsoft Learn

Export data from a Power BI visualization - Power BI | Microsoft Learn

Understand translytical task flows - Power BI | Microsoft Learn

v-hashadapu
Community Support

Hi @Sidhant , Hope you're doing fine. Can you confirm if the problem is solved or still persists? Sharing your details will help others in the community.

Poojara_D12
Super User

Hi @Sidhant 

Both of the approaches you’ve tried are valid directions, but each comes with its own limitations that explain the issues you’re facing.

With the paginated report route, simply using visibility rules on columns often fails because while the report view might hide them, many export renderers (especially Excel and CSV) still output all columns regardless of visibility, which is why users keep seeing everything; the more reliable method is to build the dataset dynamically (for example through a stored procedure that returns only the selected columns) so that the export itself contains exactly what was chosen.

On the Fabric side, your “Bad Request” errors usually stem from mismatched authentication or payload: Fabric APIs require a proper Azure AD token and very specific endpoint formatting, which Power Automate doesn’t automatically handle unless you set up a service principal or OAuth flow.

Best practice for handling 1M+ rows is to avoid pushing them through Power BI visuals or standard exports at all: instead, use paginated reports with dynamic datasets if you need user-driven exports, or better yet, trigger a Fabric pipeline that writes the selected data to OneLake/ADLS in Parquet/CSV and then share the link or notify the user via Power Automate.

In short, paginated reports are fine for “ad-hoc but smaller” exports when you control the dataset, while Fabric pipelines are the scalable option for very large datasets; both require careful parameter handling and correct authentication, and often the cleanest pattern is a hybrid: parameters from Power BI or Power Automate passed into a Fabric pipeline that generates the extract on demand.
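
For the authentication piece mentioned above, a minimal sketch of acquiring a service principal token for the Fabric REST API with MSAL (tenant, client ID and secret are placeholders) could look like this:

import msal

# Placeholder values from an Azure AD (Entra ID) app registration.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# Client-credentials flow; the resulting bearer token goes into the
# Authorization header of the Fabric REST calls (e.g. from Power Automate's HTTP action).
result = app.acquire_token_for_client(scopes=["https://api.fabric.microsoft.com/.default"])
access_token = result["access_token"]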

 

Did I answer your question? Mark my post as a solution, this will help others!
If my response(s) assisted you in any way, don't forget to drop me a "Kudos"

Kind Regards,
Poojara - Proud to be a Super User
Data Analyst | MSBI Developer | Power BI Consultant
Consider Subscribing my YouTube for Beginners/Advance Concepts: https://youtube.com/@biconcepts?si=04iw9SYI2HN80HKS
v-hashadapu
Community Support

Hi @Sidhant , Thank you for reaching out to the Microsoft Fabric Community Forum.

 

For the Paginated Report issue, the trick is to pass a single text parameter containing all selected columns and then use that in the RDL column visibility expression. That prevents the problem you mentioned where all columns get displayed.

 

On the Fabric + Power Automate side, Bad Request almost always points to the request body or authentication. I’d suggest testing the same call in Postman first, once it works there, copy the request into Power Automate. Pay special attention to the JSON structure and whether your service principal actually has contributor rights on the workspace.

 

For making exports user-friendly, instead of asking users to run external tools, you can give them a Power BI button linked to Power Automate. That button passes their selections into a Fabric pipeline or notebook, which generates the file in OneLake/Blob. The flow can then email them the file link or the file itself. That hides the complexity and keeps it to a couple of clicks for them.

 

Email subscriptions in Power BI don’t solve the row-limit issue; they’ll still be capped. But you can repurpose the idea: instead of subscriptions from the service, use your Power Automate flow to deliver exports on a schedule or on demand, which gives the same experience without the limit.

Sidhant
Advocate IV

Hi @Gabry, @v-hashadapu,
While going through some posts I came across a video demonstrating how to export more records than the limit. In the video the instructor mentioned 3 approaches:
i) Method 1: Using the DAX query view (built into Power BI Desktop), where we simply run EVALUATE 'table_name' as a DAX query and copy the data into a CSV/Excel file.

Sidhant_0-1758191963323.png

Con: the limitation is that it only supports up to 500K records; beyond that it's not possible.

The next two approaches that were discussed use external tools:
ii) Bravo

Sidhant_1-1758192253042.png
Sidhant_2-1758192269630.png


iii) DAX Studio

Sidhant_3-1758192295595.png

 


In both of these tools all we need to do is select the table and the export type (Excel or CSV), and we're done.
This is better, but to simplify the process for the end user, can we do something like starting the export with a button click (say, we select the table and then call one of the external tools explicitly)?
Considering the end user is a non-technical person, they want this process simplified to just a few clicks.

If you have any inputs with respect to this, please do let me know.

Regards,
Sidhant.

Hi,

I’m not sure I fully understand why you’d prefer using Bravo or other external tools, when you already have Fabric notebooks, UDFs, and OneLake available. Is there something specific missing from this approach?

I’m not too familiar with how those tools work under the hood, maybe they rely on the XMLA endpoint?

In any case, if the goal is to add a button inside the report, as far as I know you’d still need to use either Power Automate or a UDF. I’m not aware of other options

 

Hi @Gabry ,
Thanks for the reply. I had shared the external tools as one option (kind of a backup). Earlier you mentioned making use of notebooks; since I haven't worked on that front, can you please let me know how to achieve the required functionality with notebooks and Power Automate? If you have any resources that can help, please share them (I have worked with Power Automate before, but not with such large data).
Also, I didn't get 'UDF'. What is that?

Regards,
Sidhant.

Thanks for the updates! I understand your point about using external tools like Bravo or DAX Studio for exporting data, but I think it’s worth considering the advantages of using Fabric Notebooks, UDFs, and Power Automate for this kind of task, especially when working with large datasets.

The main benefit of the Fabric + Power Automate approach is that it offers a more streamlined, scalable, and integrated solution.

Here’s how the flow could work:

Power BI Report: Users select fields via slicers or parameters.

Power Automate Flow: The flow captures user selections (through a Power BI button).

Fabric Notebooks: The notebook processes the selected data (filtering, formatting) and exports it as CSV/Excel to OneLake or Blob Storage.

Automated Notification: The flow sends the user an email with the exported file or a link to the file in storage.

Using Power Automate and Notebooks will let you handle much larger datasets efficiently, and it integrates seamlessly into existing Power BI workflows. Additionally, as you've mentioned, bypassing the Power BI export limit (150k rows) is easily achievable with this approach.
On the other side, you can also check translytical task flows.

It leverages UDFs (User Data Functions), allowing you to place a button in the Power BI report that captures the filter context and uses it to run a notebook or python code to export the data you need.

Hi @Gabry,
Thanks for giving an idea of how the flow will look. I was trying to create the flow and had a few queries with respect to it:

import json

# 1. Get the JSON string passed in from Power Automate (the "filters" parameter).
#    In a Fabric notebook this is typically done with a parameter cell:
#    the default below is overridden by the value supplied at job run time
#    (dbutils.widgets is Databricks-specific and not available in Fabric).
filters_str = "{}"  # e.g. '{"Region": ["East", "West"], "OrderDateStart": "2024-01-01"}'
filters = json.loads(filters_str)

print("Received filters:", filters)

# 2. Load data
df = spark.sql("""
    SELECT OrderID, Name, Profit, Quantity, OrderDate, Region
    FROM SalesTable
""")

# 3. Apply filters dynamically
for col, val in filters.items():
    if isinstance(val, list):
        # Multiple selections from a slicer
        df = df.filter(df[col].isin(val))
    else:
        # Range handling for dates or numbers
        if "Start" in col:
            base_col = col.replace("Start", "")
            df = df.filter(df[base_col] >= val)
        elif "End" in col:
            base_col = col.replace("End", "")
            df = df.filter(df[base_col] <= val)
        else:
            # Single-value equality
            df = df.filter(df[col] == val)

# 4. Save to OneLake / Blob.
#    A OneLake path generally follows the pattern
#    abfss://<WorkspaceName>@onelake.dfs.fabric.microsoft.com/<LakehouseName>.Lakehouse/Files/<folder>
output_path = "abfss://exportdata@onelake.dfs.fabric.microsoft.com/SalesExports/FilteredExport.csv"

# Note: Spark writes a folder at this path containing one part file
# (because of coalesce(1)), not a single FilteredExport.csv file.
(df
 .coalesce(1)  # single output file
 .write
 .mode("overwrite")
 .option("header", "true")
 .csv(output_path))

print("✅ Export completed:", output_path)

The above code is for the Fabric notebook (it should accept a dynamic set of filters, meaning n slicers can be added later and the code should not fail when new ones are added). Over here I was not sure what should be used as the output_path (should I add the URL of the Lakehouse?).

Sidhant_0-1758283764872.png

Then coming to the Power Automate flow:

Sidhant_1-1758283875151.png

Then in the Compose action I used JSON to build the request body, which looks like:

{
  "notebookExecution": {
    "parameters": {
      "filters": "@{json(triggerBody()?['filters'])}"
    }
  }
}

Then, to run the Fabric notebook using the HTTP action (premium connector), I came across two URLs that can run a notebook:
1st: https://api.fabric.microsoft.com/v1/workspaces/{{WORKSPACE_ID}}/items/{{ARTIFACT_ID}}/jobs/instances (received from ChatGPT)
2nd: (which is being used currently; here we don't need to pass anything in the body, i.e. it's empty)
https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/items/{artifact_id}/jobs/instances?job...
But the issue is that I am not sure how to pass the filters (the Compose action output) if I use the 1st URL, and to generate the token do I need to register an app (and how do I get the token)?
-> The next step was to poll the notebook until it returns 200 (succeeded) using a GET request, which relies on the response from the previous HTTP action, but for that I need the JobInstanceID and I'm not sure how to get it.

# GET URL:
GET https://api.fabric.microsoft.com/v1/workspaces/{{WORKSPACE_ID}}/items/{{ARTIFACT_ID}}/jobs/instances/{{jobInstanceId}}

Right now I have the workspace and artifact IDs, which can be found in the URL of the Fabric notebook.

And the next steps were: the notebook saves the filtered data in OneLake, a GET request retrieves the link (location) of the stored file, and finally a Send email action notifies the user.

So can you help me out here, and is the above flow design correct?

@Poojara_D12, @v-hashadapu: if you have anything to add, please do share it as well.
Regards,
Sidhant

Hi, sorry for the delay, the last few days have been quite busy and the topic is getting a bit complex, so I needed some time to think it through.

I reviewed your notebook + Power Automate flow and gave it some thought, and I believe that, at least for now, it might be easier to simplify things. Honestly, it felt like we were overcomplicating what should have been a relatively simple task, so I suggest just relying on user data functions 

These are specific artifacts you can create to easily access the report filter context. I recommend checking the documentation, for example:

Overview

Step-by-step tutorial

There are also YouTube videos available, since it would be difficult to explain everything here in detail.

You can use these artifacts almost like notebooks. With some adjustments, you could place the code you wrote inside a UDF, receive the Power BI filter context as parameters, use it to generate the new dataframe, and then write a new file to the lakehouse in a single step. This way, at least for now, you could avoid adding Power Automate.

In my opinion, the UDF approach is the cleanest and least messy: you keep all the code inside the function, both the part that reads the filter context and the part that writes the files.
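
To make that concrete, here is a very rough skeleton of such a function, assuming the fabric.functions decorator pattern shown in the UDF documentation (the function name, parameters and body are hypothetical):

# Hypothetical Fabric User Data Function skeleton (assumes the
# fabric.functions decorator pattern from the UDF docs).
import fabric.functions as fn

udf = fn.UserDataFunctions()

@udf.function()
def export_filtered_sales(region: str, start_date: str, end_date: str) -> str:
    # A translytical task flow button would pass the report's filter context
    # into these parameters. Inside the function you would query the
    # Lakehouse with those values, write the result to Files/, and return
    # a short status message for the report user.
    return f"Export requested for {region} from {start_date} to {end_date}"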

Take a look at the docs and let me know what you think.

PS. 

Of course, the filter context can also be passed via Power Automate. It’s not that complex, but explaining all the steps here would be difficult. You can follow the official documentation here or find tutorials on YouTube. Additionally, check here: the section “Run a notebook on demand” explains how to pass parameters using the REST API, which is one of the steps where you got stuck.
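
For reference, a rough sketch of that run-on-demand call and the polling step (the endpoint shape follows the jobs API referenced above; the IDs, the token and the parameter name are placeholders, and the exact request body should be double-checked against the documentation):

import json, time, requests

WORKSPACE_ID = "<workspace-id>"
ARTIFACT_ID = "<notebook-item-id>"
TOKEN = "<bearer token obtained for the service principal>"

headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Start the notebook job; "filters" must match the notebook's parameter name.
run_url = (f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
           f"/items/{ARTIFACT_ID}/jobs/instances?jobType=RunNotebook")
body = {"executionData": {"parameters": {
    "filters": {"value": json.dumps({"Region": ["East", "West"]}), "type": "string"}
}}}
resp = requests.post(run_url, headers=headers, json=body)
resp.raise_for_status()

# The job instance URL (and hence the JobInstanceID) comes back in the
# Location header of the accepted response; poll it until the job finishes.
status_url = resp.headers["Location"]
while True:
    job = requests.get(status_url, headers=headers).json()
    if job.get("status") in ("Completed", "Failed", "Cancelled"):
        break
    time.sleep(30)
print("Job finished with status:", job.get("status"))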

I apologize for bringing up the Power Automate approach; in hindsight, it would probably have been better to focus solely on translytical task flows. My recommendation, as mentioned earlier, is to set this approach aside for now and give UDFs a try first.

v-hashadapu
Community Support

Hi @Sidhant , Hope you're doing okay! May we know if it worked for you, or are you still experiencing difficulties? Let us know — your feedback can really help others in the same situation.

Hi @v-hashadapu,
Not yet. @ShubhaGampa11 is my colleague who is working along with me; we are exploring different ways to achieve the expected output, so I asked her to share her points on this thread.
I came across a post that mentioned making use of email subscriptions, but how to use them was not explained, so if you know anything about it do let me know, and if you or @Gabry have anything to add, please do.

Regards,
Sidhant.

ShubhaGampa11
Advocate I

Hi @v-hashadapu, @Gabry

This end-to-end solution enables users to dynamically select fields in Power BI and export the filtered data to a well-formatted Excel file via Microsoft Fabric, with automation powered by Power Automate.

The flow begins with a Power BI report where users choose specific columns using slicers or parameters. A Power Automate flow—triggered via HTTP or a Power BI button—captures these selections, along with export metadata like user email, export ID, and record limits. The flow authenticates with Microsoft Fabric using an Azure App Registration and securely triggers a Fabric data pipeline.

Inside Fabric, the pipeline filters and exports the selected data to Excel, formats the output using a notebook (with headers styled, column widths adjusted, and summary metadata added), and stores it in a Lakehouse location. After processing, Power Automate fetches the file and emails it directly to the user, attaching the Excel file with all selected data.

This workflow is scalable, secure, and user-friendly—ideal for automated reporting, scheduled exports, or on-demand sharing. It eliminates manual data exports and delivers polished Excel reports with just one click from Power BI.
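
As a rough illustration of the formatting step inside the notebook (a sketch only; the file path and data are placeholders), pandas plus openpyxl can bold the header row and widen the columns before the file is picked up:

import pandas as pd
from openpyxl import load_workbook
from openpyxl.styles import Font

# Placeholder path in the default Lakehouse file area.
excel_path = "/lakehouse/default/Files/exports/FilteredExport.xlsx"

# Write the filtered data (a small dummy frame here) to Excel first.
df = pd.DataFrame({"Region": ["East", "West"], "Profit": [1200.5, 980.0]})
df.to_excel(excel_path, index=False, sheet_name="Export")

# Re-open the workbook to style the headers and adjust column widths.
wb = load_workbook(excel_path)
ws = wb["Export"]
for cell in ws[1]:          # first row holds the headers
    cell.font = Font(bold=True)
for idx in range(1, len(df.columns) + 1):
    ws.column_dimensions[ws.cell(row=1, column=idx).column_letter].width = 20
wb.save(excel_path)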

This approach is possible, but it's costly to the user and we need a lot of field parameters and slicers. For example, please find the screenshot below:

ShubhaGampa11_0-1757987671502.png

So I'm using 'Run a query against a dataset' (DAX) to get the dynamic parameters, but I'm facing an issue where I need to implement the slicers. Right now I'm working with a sample dataset with 3 field parameters and a slicer, and when I use Performance Analyzer I see the following DAX:

// DAX Query
DEFINE
VAR __DS0FilterTable = 
TREATAS({"'Orders'[Order ID]"}, 'OrderParameter'[OrderParameter Fields])
 
VAR __DS0FilterTable2 = 
TREATAS({"'People'[Region]"}, 'PeopleParameter'[PeopleParameter Fields])
 
VAR __DS0FilterTable3 = 
TREATAS({"'Returns2'[Returned]"}, 'ReturnsParameter'[ReturnsParameter Fields])
 
VAR __DS0FilterTable4 = 
FILTER(
KEEPFILTERS(VALUES('Orders'[Order Date])),
'Orders'[Order Date] >= DATE(2022, 1, 20)
)
 
VAR __DS0Core = 
SUMMARIZECOLUMNS(
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[Year],
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[Quarter],
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[QuarterNo],
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[Month],
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[MonthNo],
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[Day],
__DS0FilterTable,
__DS0FilterTable2,
__DS0FilterTable3,
__DS0FilterTable4,
"SelectedFieldsOrder", 'FieldParamTextOrder'[SelectedFieldsOrder],
"SelectedFieldPeople", 'FieldParamTextPeople'[SelectedFieldPeople],
"SelectedFieldsReturns", 'FieldsParamTextReturns'[SelectedFieldsReturns]
)
 
VAR __DS0BodyLimited = 
TOPN(
1002,
__DS0Core,
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[Year],
1,
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[QuarterNo],
1,
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[Quarter],
1,
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[MonthNo],
1,
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[Month],
1,
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[Day],
1
)
 
EVALUATE
__DS0BodyLimited
 
ORDER BY
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[Year],
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[QuarterNo],
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[Quarter],
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[MonthNo],
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[Month],
'LocalDateTable_646b2db8-ef0e-4dc0-83b9-a2a5218245aa'[Day]

  

Can anyone please help me with how to make the DAX more dynamic so it also satisfies our requirement of exporting 1M rows?

Thank you
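
For reference, one way to make the query itself dynamic is to build the DAX text from whatever columns the user selected and then submit it (for example through the 'Run a query against a dataset' action or the executeQueries REST API, keeping in mind those paths have row caps far below 1M, so they help with the dynamic-columns part rather than the volume part). A rough sketch with placeholder column names:

# Sketch: build a SUMMARIZECOLUMNS query from a user-selected column list.
# The column and measure names are placeholders and must match the model.
selected_columns = ["'Orders'[Order ID]", "'Orders'[Order Date]", "'People'[Region]"]

column_list = ",\n    ".join(selected_columns)
dax_query = f"""
EVALUATE
SUMMARIZECOLUMNS(
    {column_list},
    "Total Profit", SUM('Orders'[Profit])
)
"""
print(dax_query)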

Sidhant
Advocate IV

Hi @v-hashadapu , @Gabry ,
I had another query: is there by any chance a way to increase the 150K export limit for a specific use case by working with the Microsoft team? For example, in AWS, if we want to increase some default limit we can easily request that, and for Azure Blob Storage the default storage limit of 5 PiB can be increased by contacting Azure support. Can we do something similar here for a special requirement, considering my org is a Microsoft Partner?
Meanwhile I'm also implementing a few workarounds, which are in progress; I will share those as well (along with the respective blockers associated with them).

Regards,
Sidhant.

Hi @Sidhant , Thank you for reaching out to the Microsoft Community Forum.

 

In Power BI, the 150K row export limit (to Excel/CSV) is a hard service limitation and can’t be raised by contacting Microsoft support, even if your org is a Microsoft Partner. Unlike Azure services where quotas can be increased, Power BI enforces these limits consistently across tenants for performance and governance reasons.

Paginated reports in Power BI: FAQ - Power BI | Microsoft Learn

 

For true large-scale exports (1M+ rows), the recommended approach is to bypass the built-in export and instead leverage Fabric Lakehouse or Dataflows/Notebooks to generate files (CSV/Parquet) that can be stored in OneLake or Blob Storage and then distributed. That way you remove the export bottleneck, support dynamic column selection (via parameters or notebook inputs) and keep the solution scalable for downstream use in Power Automate or other services.

 

v-hashadapu
Community Support

Hi @Sidhant , hope you are doing great. May we know if your issue is solved or if you are still experiencing difficulties. Please share the details as it will help the community, especially others with similar issues.
