Share your ideas and vote for future features
Submitted by v-velagalasr1 on Monday
Need an API to get the list of all data sources of the reports available in the capacity.
Submitted by v-nandagm on Monday
Need an API that allows the automated monitoring of VNet Data Gateways.
Submitted by jasonhorner on Tuesday
Problem Statement: Currently, the Azure Data Factory REST API connector only supports REST endpoints that return responses in application/json format. However, many modern and legacy REST APIs legitimately return responses as text/plain (e.g., health checks, keys, tokens, or plain status messages). When using such APIs, the REST connector fails to process the response, making it incompatible with otherwise valid REST interfaces. Although it's possible to work around this by invoking a function activity or using custom logic, this introduces unnecessary complexity and overhead.
Proposed Solution: Enable the REST connector to support text/plain responses by:
- Wrapping the plain text in a valid JSON document (e.g., { "response": "raw text here" }), or
- Exposing the raw response as a string under a default or user-defined property.
Business Value:
- Simplifies pipeline development: reduces the need for workarounds like function or web activities.
- Expands connector compatibility: natively supports more REST APIs, including those used in DevOps, monitoring, security, and identity systems.
- Improves performance: avoids additional activities or dependencies that increase execution time and cost.
- Enhances usability: makes ADF more flexible and developer-friendly.
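The proposed wrapping behavior can be sketched in a few lines. This is only an illustration of the requested shape, not an existing ADF feature; the function and property names are hypothetical.

```python
import json

def wrap_plain_text(body: str, prop: str = "response") -> str:
    """Wrap a text/plain response body in a valid JSON document,
    mirroring the { "response": "raw text here" } shape proposed above."""
    return json.dumps({prop: body})

# A plain-text health-check body becomes consumable JSON:
wrapped = wrap_plain_text("OK")
print(wrapped)  # {"response": "OK"}
```

With this shape, downstream pipeline activities could address the payload as `$.response` instead of failing on a non-JSON body.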
Submitted by Thomas_Pouliot on Wednesday
Right now: In the deployment pipeline (new version), to deploy from dev to test we have to click Test, select the item from Dev we want deployed, and then deploy — essentially pulling up from the lower stage rather than pushing to the next one. If we want to set rules, we have to deploy first and THEN set the rule, which is counterintuitive and a waste of time and resources, since we then have to deploy again or manually change the parameters. The icon for deployment is a rocket ship, not a UFO with a tractor beam, so I think push is the intention, not pull — that imagery should convey what I mean by push (rocket propulsion) versus pull (UFO tractor beam).
Suggested changes:
- Change from a pull deployment to a push deployment; change "Deploy From" to "Deploy To". To deploy from dev to test, the user should click on Dev and select the report to deploy to Test.
- Before deploying, the user should be able to set any parameter and data source change rules. Remove rules from the highest (prod) level and add rules to the lowest (dev) level. The user can then click Deploy to have the item moved/pushed/deployed up to Test.
- All gateway settings and parameters should be settable through the deployment pipeline, and the pipeline should indicate if there is no gateway.
- Add a checkbox option to deployment: "Refresh on deployment". This option should be grayed out, or fail without using resources, if a gateway is not set up. When checked, after a successful deployment, attempt to refresh the report without the user having to go into the workspace to refresh it.
- Add global parameter rules as noted in the separate idea Global Deployment Pipeline Rules - Microsoft Fabric Community.
Submitted by Miguel_Myers on 10-07-2024 10:00 PM
The primary axes are outdated and need significant improvement compared to Excel. This makes life difficult for report creators and often leads to problems when they try to manage and style axes effectively. Offering more format settings would give greater control over the displayed data, especially if axis ticks, new gridlines, and separators are also included.
Submitted by v-nandagm on Monday
API to update the VNET description automatically.
Submitted by praveen_511 on Wednesday
Introduce an intuitive Undo and Redo feature in Power Query, similar to those available in other environments like Excel and Word. This would empower users to effortlessly reverse or reapply recent changes in their data transformations without manually tracking and reverting edits. It would significantly enhance the user experience by providing greater flexibility, reducing errors, and improving efficiency while navigating and transforming datasets in Power Query, and it would bring familiarity and ease to the tool, making it even more user-friendly. Sample example from MS Excel (something similar could be applied to Power Query).
Submitted by lg01 yesterday
In an App, one can create a section and put several reports under it. However, there is no second-level section. In other words, it would be nice to have "sub-sections" and organize reports by different areas within a section. For instance, in our company we would like to have something like:
- Operations (section)
  - Drilling (second-level section): Report 1, Report 2, ... Report n
  - Accounting (second-level section): Report 1, Report 2, ... Report n
- Planning (section)
  - USA Planning (second-level section): Report 1, Report 2, ... Report n
  - Europe Planning (second-level section): Report 1, Report 2, ... Report n
That way it is much easier to organize our Apps.
Submitted by sajjadniazi on 03-21-2025 12:46 AM
Problem: Currently, when a dataset is connected to a Lakehouse as a datasource in Power BI Fabric, it defaults to a cloud connection mapped to SSO. In embedded mode, reports built on these datasets fail due to a lack of identity, as they do not inherit authentication from the service principal. To resolve this, users must manually adjust the datasource settings via the Power BI service: https://learn.microsoft.com/en-us/fabric/fundamentals/direct-lake-fixed-identity
Proposed Solution: Introduce a new REST API, or extend the current API, to allow programmatically setting (or resetting) the connection to the service principal, including updating datasources.
Benefits:
- Automates the process, reducing manual intervention
- Minimizes downtime for embedded reports
- Enhances developer experience and deployment efficiency
- Ensures consistent authentication settings across environments
Impact: This feature would significantly improve the workflow for organizations embedding Power BI reports with service principals, ensuring seamless and automated datasource authentication.
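The rebinding API requested here does not exist today. For context, the service principal identity itself is obtained via the Microsoft Entra ID client-credentials flow; the stdlib sketch below only builds that token request (the tenant and app values are placeholders, and no network call is made):

```python
from urllib.parse import urlencode

# Placeholder values -- substitute a real tenant ID and app registration.
TENANT_ID = "<tenant-id>"
token_url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"

# Client-credentials grant: the service principal authenticates with its own
# secret and requests a token scoped to the Power BI REST API.
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "<app-client-id>",
    "client_secret": "<app-secret>",
    "scope": "https://analysis.windows.net/powerbi/api/.default",
})
# POSTing `body` to `token_url` returns JSON whose `access_token` field
# authenticates subsequent Power BI REST API calls.
```

The idea is that, given such a token, a single documented endpoint could rebind the Lakehouse datasource to this identity instead of requiring manual changes in the service UI.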
Submitted by Miguel_Myers on 10-07-2024 10:00 PM
It’s challenging and time-consuming for both new and experienced report creators to organize data when trying to split cards into categories. Introducing small multiples would give report creators a familiar, intuitive way to categorize data, especially with more control over layout and formatting.
Submitted by leyre on 03-12-2025 12:11 AM
It would be really useful to have security settings to show or hide individual report pages. Currently, audiences control access only at the report level, not per page.
Submitted by Cookistador on Monday
When building a report, undo and redo functionality is available for chart modifications. However, it is not currently available for table-related actions. Would it be possible to implement it for the following actions?
- Deletion of a table, measure, or column
- Modification or deletion of relationships
- Modifications to data type, format, or category
Many thanks in advance for your help.
Submitted by v-hnishikawa on 03-25-2025 08:34 PM
I would like to be able to grant write permissions on a directory-by-directory basis to individual users or groups in OneLake's "Manage OneLake Data Access." Currently, "Manage OneLake Data Access" only offers "Read" and "ReadAll" settings. The roles automatically granted write permissions are the Workspace Admin, Member, and Contributor roles, but those grant write access to the entire lakehouse. Please add a function to grant write permissions per directory.
Submitted by crookie on Wednesday
Hi. Where a user has no data based on their RLS permissions, instead of showing them a dashboard full of empty visuals, I'd like to be able to automatically hide all the normal visuals and instead show something like a card with a friendly message explaining that they have no access to any data. There is currently no automated way to accomplish this, or to redirect to another page based on a row count being 0. I realise you can make lots of changes to make things look invisible based on a measure, but it requires a lot of fiddling to get into the desired state. A neater way would be to group visuals into containers and then be able to disable/hide/resize the containers. With, say, two containers, I could enable container 1 when a row-count measure is > 0, and container 2 when the measure = 0. Many thanks for reading this.
Submitted by kopytko95 yesterday
Similarly, allow columns in a table to collapse when none of their fields return a value. When can this be used? For example, I have a dozen values on a chart, each as its own measure, and my goal is to let users customize which ones appear on the chart. So I'd create a parameter table, for example with 12 month names, and each measure would look like IF("jan" IN VALUES(MonthParameters[Parameter]), [measure value], BLANK()) and so on. Currently, when the user unchecks a given value on the parameter, for example January, it still appears in the legend, which is annoying when you have, say, three lines on the graph and want to use the chart in a presentation. Yes, in some cases I can just put months in a legend and on a slicer and use one measure, but sometimes that's not possible: sometimes the values I want to compare come from different tables, or are calculated too differently to combine into one measure plus a legend.
Submitted by bluepond on 03-27-2025 03:54 AM
Enabling automatic or scheduled mirroring of OneLake data across Azure regions would significantly enhance the platform's suitability for global enterprise scenarios. Key benefits of this capability include:
- Improved resilience through cross-region redundancy for disaster recovery and high availability
- Reduced latency by allowing data access from geographically closer regions
- Simplified data operations, eliminating the need for manual synchronization via Dataflow Gen2 or ADF pipelines
- Potential cost optimization, especially if replication avoids outbound data transfer charges
Recommended configuration options:
- Selecting destination regions for mirroring
- Choosing between read-only and read/write replicas
- Defining sync intervals (e.g., near real-time, hourly, daily)
This feature would align OneLake with enterprise expectations for multi-region data availability and performance.
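One possible shape for the configuration options listed above can be sketched as a plain mapping. Every key and value here is hypothetical — nothing like this exists in OneLake today; the Azure region names are real, the schema is invented for illustration:

```python
# Hypothetical configuration shape for cross-region OneLake mirroring --
# none of these keys exist as a real OneLake or Fabric setting today.
mirror_config = {
    "source_region": "westeurope",
    "destination_regions": ["eastus", "southeastasia"],  # where replicas live
    "replica_mode": "read-only",   # or "read-write"
    "sync_interval": "hourly",     # "near-real-time" | "hourly" | "daily"
}

# A consumer in a nearby region would then be routed to the closest replica.
assert mirror_config["replica_mode"] in ("read-only", "read-write")
```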
Submitted by frithjof_v on 03-21-2025 12:02 PM
Make it possible to write SparkSQL without having a Default Lakehouse.
With 3- or 4-part naming, i.e.
[workspace].[lakehouse].[schema].[table]
there should be no need to attach a Lakehouse in order to use SparkSQL.
Needing to attach a Lakehouse is annoying and adds extra complexity.
Submitted by animvin on 03-28-2025 09:19 AM
Feature Request: Add Key Pair-Based Authentication for the Snowflake Connector in Power BI
Description: We request the addition of key pair-based authentication to the Snowflake connector in Power BI to address challenges with the existing authentication methods:
- Microsoft Entra ID-based authentication requires setting up additional infrastructure and services solely for authentication, which increases complexity and operational overhead.
- Snowflake's recent enforcement of MFA for password-based authentication makes it difficult to configure and maintain connections on Power BI Gateway, as MFA is not compatible with automated workflows.
Key pair-based authentication would provide a secure, lightweight, and efficient alternative, eliminating the need for extra infrastructure while simplifying configuration for Power BI Gateway. This feature would enhance usability and align with modern security practices.
Submitted by ABarzanti97 on Monday
Hi, it'd be great to add a feature that lets you set a fixed width for columns in the table settings. Thank you, Andrea
Submitted by v-tkamiya on 03-23-2025 06:13 PM
I would like a slicer style that pops up a calendar allowing selection of a single date, rather than a range of dates.