Share your ideas and vote for future features
Showing ideas with label Data Engineering.
Submitted by Thomas_Pouliot, 8 hours ago
Add functionality to the Manage (Data Model Settings) page. Currently this page only shows a grayed-out Data Source Credentials section, which cannot be maintained in the Power BI Service.
- Add functionality to manage the gateway, as with PBIX reports.
- Add functionality to manage parameters (see below for more detail).

Ability to manage parameters in the Power BI Service for Power BI Report Builder paginated reports. Currently, paginated reports (.rdl) do not have the parameter flexibility of standard Power BI reports (.pbix), where parameters can be changed on the back end (e.g., pointing test to the test database and dev to the dev database via a parameter used for the data source). It would be awesome for both internal paginated report parameters AND Power Query parameters to be manageable from the Manage screen in the Power BI Service.

Rename the Manage and Settings menu options in the Power BI Service (ellipsis options on RDL files). For RDL files, the Manage option takes the user to a screen similar to the PBIX Data Model settings, while Settings takes you to the Report settings. To make things easier to follow, simply rename Manage to Report Settings and Settings to Data Model Settings.

Workspace-level setting: add functionality to set which roles can create, read, update, and/or delete ALL associated subscriptions, rather than just the ones created by the current user.
See more ideas labeled with:
- Data Engineering
- Data Warehouse
- Fabric platform | Security
- Fabric platform | Workspaces
- Power BI
Submitted by v-aasari1 on Tuesday

I would like to request an update to the Japanese version of the public documentation linked below. The English version is up to date, but the Japanese version has not been updated since 2024/11/20. JP URL: https://learn.microsoft.com/ja-jp/fabric/database/mirrored-database/azure-databricks-tutorial EN URL: https://learn.microsoft.com/en-gb/fabric/database/mirrored-database/azure-databricks-tutorial Reason for updating: I confirmed that a new "Prerequisites" section was added to the English version. The Japanese version does not include this section, so customers cannot follow the correct procedure when testing.
See more ideas labeled with:
- Data Engineering
- Data Warehouse
Submitted by cathrinew on Tuesday

When I open a Notebook, it automatically opens the expanded Explorer pane (to the left). I rarely use this, so every time I open a Notebook I have to:
- Collapse the Explorer pane (to the left)
- Click on the View tab in the ribbon (at the top)
- Open the Table of contents pane (to the right)
- Wait for the Table of contents to load

I would like to be able to persist the view/layout for each notebook (or alternatively as a user setting), so the next time I open the notebook I will by default see the Table of contents instead of the expanded Explorer pane. (Additionally, I would love to be able to move the panes between the left and right sides, but that's less important.) This would be a quality-of-life improvement that would save developers time and make the development experience much smoother.
See more ideas labeled with:
- Data Engineering
Submitted by kbutti on 11-26-2024 01:32 PM

We use .whl files to create libraries that make configurations and common code accessible across all notebooks in our Data Engineering solution. But publishing these files to an environment is not a great experience: I often have to try publishing multiple times with no luck, and it is hard to tell whether publishing was successful. It would be helpful if we could see the libraries, .whl files, etc. that are available at the cluster level.
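For context, a common session-scoped workaround today is to install the wheel at the top of each notebook instead of publishing it to the environment; a minimal sketch, assuming the wheel has been uploaded to the attached lakehouse's Files area (the path and package name are hypothetical):

```python
# Install a common-code wheel for this Spark session only; no environment
# publish needed. Path and package name are placeholders.
%pip install /lakehouse/default/Files/libs/common_config-0.1.0-py3-none-any.whl

# The package is now importable in subsequent cells of this session.
import common_config
```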
See more ideas labeled with:
- Data Engineering
- Fabric platform | Workspaces
Submitted by fbcideas_migusr on 11-13-2024 01:07 PM
Please fix the SQL Analytics Endpoint sync delays. Many users have been surprised to find they are receiving stale data because of the Lakehouse SQL Analytics Endpoint sync delays. We want Fabric to handle this automatically, so we don't need to think about it.
See more ideas labeled with:
- Data Engineering
Submitted by kbutti on 11-26-2024 01:25 PM

Lakehouse data access roles are currently limited to 250. This limitation prevents data engineering teams from following least-privilege principles. We tried to use Lakehouse data access roles to restrict each user's data access by entity, but this limit is constraining us.
See more ideas labeled with:
- Data Engineering
- Fabric platform | Workspaces
Submitted by fbcideas_migusr on 01-23-2025 07:22 AM
Hi Folks, there are currently two flavors of notebooks (lakehouse + warehouse). I have a client that wanted to leverage T-SQL notebooks; however, there is no post-deployment option for setting the default warehouse. Is this feature on a roadmap somewhere? Sincerely, John Miner. PS: Pleasantly surprised by the state of this feature - it is getting better every day! (The original post includes an image showing the missing option in a simple test environment.)
See more ideas labeled with:
- Data Engineering
- Fabric platform | Workspaces
Submitted by dwilliams3 on 01-05-2024 07:39 PM
Lakehouses, and more importantly the data inside them, are not recoverable if the lakehouse is deleted. There is also no way to recover a prior version of a lakehouse if the data needs to be rolled back to a previous state. BCDR was recently released, but it only covers capacity disaster recovery and is too cumbersome, since it carries an extra cost and requires deploying in another region. We need a way to recover a lakehouse and its history if it has been deleted or if the data has been corrupted.
See more ideas labeled with:
- Data Engineering
Submitted by Thomas_Schlidt on 05-14-2024 12:27 PM
Data Warehouses and Lakehouses are tied to the user account of their creator, which means that when the creator leaves an organization and the account is disabled or deleted, the lakehouse and data warehouse stop functioning. This is not sustainable for enterprise organizations that build solutions on lakehouses and data warehouses. Either associate lakehouse and warehouse ownership with the workspace (which means the workspace will need an identity for permission setting at the data lake), or allow administrators to reassign ownership of the lakehouse and data warehouse. Currently there is a PowerShell script for the data warehouse, but not for the lakehouse, which means the lakehouse must be rebuilt and downstream artifacts modified.
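For reference, the PowerShell script mentioned above wraps a documented REST takeover call for warehouses; a rough Python sketch of the same request, assuming you already hold a valid Power BI access token (all IDs are placeholders):

```python
import requests

workspace_id = "<workspace-guid>"   # placeholder
warehouse_id = "<warehouse-guid>"   # placeholder
token = "<access-token>"            # placeholder; acquire via MSAL or similar

# POST .../takeover reassigns warehouse ownership to the caller.
resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}"
    f"/datawarehouses/{warehouse_id}/takeover",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()  # 200 means the caller now owns the warehouse
```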
See more ideas labeled with:
- Data Engineering
Submitted by Plinio_Nunez on 10-23-2024 04:07 PM
There is an often large delay before data becomes available via the T-SQL endpoint. With barely 142 delta tables, some newly created delta tables won't even become available until we manually hit refresh next to the warehouse endpoint in the T-SQL view. While the data is readily available to Spark, it takes a while to become available to users in Management Studio.
See more ideas labeled with:
- Data Engineering
Submitted by v-mpramanik on 09-17-2024 05:08 AM
Certain Fabric artifacts, such as notebooks, cannot have deployment rules specified after being deployed from one workspace to another, because we are not the owners of those artifacts. We would like the product team to introduce a feature that allows us to take ownership of these artifacts, or that enables users with at least Contributor access to the workspace to set deployment rules.
See more ideas labeled with:
- Data Engineering
Submitted by fbcideas_migusr on 10-02-2024 11:47 PM
Hi! Request: please add a standard activity for refreshing the SQL Analytics endpoint directly in Data Pipelines, OR add an option (check box) in the Semantic Model refresh activity (currently in preview) to refresh the connected SQL Analytics endpoint. Problem: I have a standard activity for refreshing my semantic model, but the SQL Analytics endpoint, which sits one step earlier, is not automatically refreshed.
See more ideas labeled with:
- Data Engineering
Submitted by nash_g on 10-21-2024 02:40 PM
Power Query, with its intuitive user interface, has revolutionized self-service data transformation in Microsoft's ecosystem, allowing users to perform complex transformations without needing deep coding skills. However, while Power Query's UI is user-friendly, it generates M code, which has limitations in handling large-scale data processing and more advanced transformations. Apache Spark, on the other hand, is a powerful, scalable data processing engine designed to handle big data workloads efficiently; however, its native interface, especially in a Spark notebook, is less accessible to users without coding expertise. There's an opportunity here: combine the simplicity and accessibility of Power Query's UI with the efficiency and scalability of Spark, allowing users to leverage Spark's processing power without sacrificing the ease of transformation that Power Query provides.

Description of the idea: a unified Power Query UI for Spark transformations in Microsoft Fabric. The core idea is to extend the Power Query/Dataflow UI within Microsoft Fabric so that, instead of generating M code, it writes transformations directly into a Spark notebook. This would provide users with the best of both worlds:
- User experience: the familiar, easy-to-use drag-and-drop UI of Power Query that democratizes data transformation, allowing analysts and business users to manage data transformations without needing to write complex code.
- Performance and scalability: by generating Spark code under the hood, the solution leverages Spark's distributed processing capabilities, ensuring that even complex transformations on large datasets can be handled efficiently while taking full advantage of Spark's low CU usage, speed, and scalability.

Key features of this approach:
- Seamless integration: the UI would allow users to visually build their transformation logic just as they do in Power Query, while behind the scenes, Spark code is written and executed within a Spark notebook.
- Advanced performance: leveraging Spark's powerful distributed architecture, this approach would handle large datasets more efficiently than Power Query's current M engine. Transformations could be executed at scale, supporting larger and more complex use cases.
- Interoperability: this solution would be integrated within Microsoft Fabric, making it easy to move between low-code/no-code interfaces and deeper programmatic control when needed. Users could still open the Spark notebook generated from their UI transformations to tweak or optimize the code further if desired.
- Efficiency gains for enterprises: by using the Power Query UI in the frontend and Spark as the backend, users can significantly reduce time spent on large data transformations.
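Purely as an illustration of what the generated output might look like, two UI steps such as "filter rows" and "rename column" could translate into PySpark along these lines, assuming the ambient spark session of a Fabric notebook (all table and column names are hypothetical):

```python
from pyspark.sql import functions as F

# Hypothetical code a Power Query-style UI could emit into a notebook cell.
df = spark.read.table("bronze_sales")                    # Source step
df = df.filter(F.col("amount") > 0)                      # "Filtered rows" step
df = df.withColumnRenamed("cust_id", "customer_id")      # "Renamed columns" step
df.write.mode("overwrite").saveAsTable("silver_sales")   # Destination step
```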
See more ideas labeled with:
- Data Engineering
Submitted by fbcideas_migusr on 05-25-2023 06:17 PM
If we get an option to connect to Azure Key Vault in Microsoft Fabric's Synapse Data Engineering experience through a linked service, we could retrieve values from Key Vault that are sensitive in nature.
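For context, Fabric and Synapse notebooks can already read Key Vault secrets at runtime through mssparkutils, provided the executing identity has been granted access to the vault; a minimal sketch with placeholder vault and secret names:

```python
from notebookutils import mssparkutils

# Fetch a sensitive value at runtime instead of hard-coding it.
# Vault URI and secret name are placeholders.
password = mssparkutils.credentials.getSecret(
    "https://contoso-kv.vault.azure.net/",
    "sql-admin-password",
)
```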
See more ideas labeled with:
- Data Engineering
Submitted by todd_chittenden on 01-31-2025 02:43 PM
I created a Notebook under a Lakehouse, then added/attached a second lakehouse to the notebook. Under the Explorer, it shows "Lakehouses / 2 item(s) added" with the expand ">" icon. When I click that icon, I see ONLY ONE lakehouse listed (the default).
See more ideas labeled with:
- Data Engineering
- Fabric platform | Workspaces
Submitted by n_den_boer on 10-14-2024 11:31 PM
Support T-SQL temp tables across multiple notebook cells, so that we can use markdown cell features to document our code throughout a complete T-SQL script.
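For contrast, Spark notebooks already allow this pattern: a temp view created in one cell survives for the whole session, so markdown cells can document each step in between. A minimal sketch with placeholder table and view names:

```python
# Cell 1: stage an intermediate result as a session-scoped temp view.
spark.sql("""
    CREATE OR REPLACE TEMP VIEW recent_orders AS
    SELECT * FROM orders WHERE order_date >= '2024-01-01'
""")

# Cell 2 (after a markdown cell documenting the step): consume the view.
spark.sql("SELECT COUNT(*) AS n FROM recent_orders").show()
```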
See more ideas labeled with:
- Data Engineering
Submitted by mlongtin1 on 06-05-2024 10:16 AM
Creating an environment gives you a nice UI to set a list of Python packages to install by default. The user thinks: "Oh cool, this will be faster than doing %pip install every time I start a notebook." The reality is that it now takes 3 minutes to start Spark, instead of 10 seconds to run %pip install. Either fix the UI to warn users, or fix the startup time.
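For comparison, the in-session alternative the poster refers to installs into an already-started session in seconds (the package name is just an example):

```python
# Session-scoped install: available immediately, gone when the session ends.
%pip install semantic-link
```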
See more ideas labeled with:
- Data Engineering
Submitted by fbcideas_migusr on 04-19-2024 07:40 AM
Write data directly into a data warehouse using the Fabric notebook, which utilizes Spark (let Fabric handle the staging process behind the scenes). This functionality resembles what is currently available in Synapse Analytics Workspace.
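For reference, the Synapse Analytics pattern referred to here is the dedicated SQL pool Spark connector, which stages and loads the data behind the scenes; a hedged sketch with placeholder names:

```python
# Read staged data with Spark, then write it into a dedicated SQL pool table.
# The connector handles the intermediate staging; all names are placeholders.
df = spark.read.parquet("Files/staging/orders")
df.write.mode("overwrite").synapsesql("SalesDW.dbo.Orders")
```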
See more ideas labeled with:
- Data Engineering
Submitted by Scott_Powell1 on 12-11-2023 07:20 PM
We need true role-playing capabilities for dimensions in Fabric. Currently, you can include a table only a single time in a semantic model that uses DirectLake. This makes it impossible to use a dimension as a role-playing dimension - for example, our date table as admit date, discharge date, surgery date, etc. The same goes for providers - there can be an admitting provider, discharge provider, attending provider, surgeon, primary care physician, etc. We need the ability to:
- Use a table multiple times in a semantic model built in the service
- Rename the table, for example renaming a generic "date" table to admit date, discharge date, etc.
- Still have everything work properly with DirectLake

The only workaround currently is to create multiple copies of the table. This not only wastes space, but also means separate ETLs have to be updated if there's any change to the base table. Thanks, Scott
See more ideas labeled with:
- Data Engineering
Submitted by han_tran on 09-26-2024 05:51 AM
Currently, the direct link to Microsoft Fabric on https://make.powerapps.com links all the tables in the selected environment over to Microsoft Fabric. This is inconvenient, as users often do not need all the tables from their PowerApps environment and usually only need a sub-selection of tables. The documentation says this as well: https://learn.microsoft.com/en-us/power-apps/maker/data-platform/azure-synapse-link-view-in-fabric#:~:text=All%20tables%20chosen%20by%20default. When using Azure Synapse Link, the ability to pick specific tables is available, so this functionality should be added to the direct link to Microsoft Fabric as well.
See more ideas labeled with:
- Data Engineering
Idea Statuses
- New 14,747
- Need Clarification 0
- Needs Votes 22,606
- Under Review 609
- Planned 251
- Completed 1,641
- Declined 217
Latest Comments
- Thomas_Pouliot on: Enhance Paginated Report Settings Flexibility - Pa...
- chadrenstrom on: Backup of data warehousing
- aniket_yamle on: Feature Request: Freezable Canvas Header for Power...
- vijaybn on: Fix 'Fabric Ideas' pages
- rnbrown1 on: Power BI Matrix Visual - Increase the number of co...
- kleigh on: Bug: New Card visual doesn't move with arrow keys
- CeeVee33 on: Introduce a "Publisher" Role in Power BI Workspace...
- William_D_Wang on: Fabric Real time dashboard usage telemetry in Work...
- v-emohankira on: Customize company announcement at MS Fabric Home p...
- jschueller on: Make Export Data the ONLY visual header icon
Ideas by label
- Power BI (38,519)
- Fabric platform (522)
- Data Factory (438)
- Data Factory | Data Pipeline (266)
- Data Engineering (234)
- Data Warehouse (168)
- Data Factory | Dataflow (134)
- Real-Time Intelligence (126)
- Fabric platform | OneLake (98)
- Fabric platform | Admin (96)
- Fabric platform | Workspaces (96)
- Fabric platform | CICD (73)
- Fabric platform | Capacities (65)
- Real-Time Intelligence | Eventhouse and KQL (56)
- Real-Time Intelligence | Activator (50)
- Data Science (41)
- Fabric platform | Security (39)
- Data Factory | Mirroring (36)
- Fabric platform | Governance (33)
- Real-Time Intelligence | Eventstream (30)
- Fabric platform | Data hub (26)
- Fabric platform | Support (25)
- Databases | SQL Database (21)
- Databases (16)
- Data Factory | Apache Airflow Job (3)
- Product (2)
- Real-Time Hub (1)