Share your ideas and vote for future features
Showing ideas with label Data Engineering.
Submitted by danielwebb yesterday
Currently the "Refreshed" and "Next Refresh" columns in the workspace view are blank for Copy Jobs, making it difficult to get a quick one-glance overview of what has succeeded or failed, and what is and isn't scheduled. The current alternative is to leave the workspace, go into the monitoring view, filter it down to find the correct information, and then go back to the workspace to continue working. It would also be good if the copy job scheduler could send failure emails, similar to semantic model refreshes.
Labels: Data Engineering, Fabric platform | Workspaces
Submitted by frithjof_v on 03-21-2025 12:02 PM

Make it possible to write SparkSQL without having a Default Lakehouse.
With three- or four-part naming, i.e. [workspace].[lakehouse].[schema].[table], there should be no need to attach a Lakehouse in order to use SparkSQL.
Needing to attach a Lakehouse is annoying and adds extra complexity.
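A minimal sketch of how explicit four-part naming would remove the dependency on a default Lakehouse. The helper function and all names are hypothetical, and Spark SQL quotes identifiers with backticks rather than the square brackets used above:

```python
# Hypothetical helper: build a fully qualified four-part table reference so
# that no default Lakehouse needs to be attached for the query to resolve.

def qualified_name(workspace: str, lakehouse: str, schema: str, table: str) -> str:
    """Join the four name parts, backtick-quoting each identifier."""
    return ".".join(f"`{part}`" for part in (workspace, lakehouse, schema, table))

query = f"SELECT COUNT(*) FROM {qualified_name('Sales', 'SalesLH', 'dbo', 'orders')}"
print(query)  # in a Fabric notebook this string would be passed to spark.sql(query)
```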
Labels: Data Engineering
Submitted by Jan_metzger on Tuesday
I would like to request full support for connecting the Power BI service to Azure Databricks via Private Link, without the need to deploy a VNet or an on-premises gateway, both of which can become bottlenecks.
Labels: Data Engineering, Power BI
Submitted by BobDuffyIRL on 03-19-2025 02:53 AM
Currently the server name for SQL DB (and DW) is a double GUID of the form {tenant-id-workspace-id}.database.fabric.microsoft.com. This causes a lot of issues: 1. It is impossible to remember. 2. We have no way to distinguish test from PRD. We request user-friendly DNS names like Prodata-Sales-DEV.database.fabric.microsoft.com. This would clearly label a workspace as DEV or PRD, avoid confusion, and make the platform easier to work with.
We are reviewing this but have no dates to share yet.
Labels: Data Engineering, Data Warehouse, Databases | SQL Database
Submitted by praveen_511 on 04-02-2025 06:30 PM
Introduce an intuitive Undo and Redo feature in Power Query, similar to those available in other environments like Excel and Word. This would empower users to effortlessly reverse or reapply recent changes in their data transformations without manually tracking and reverting edits. This functionality would significantly enhance the user experience by providing greater flexibility, reducing errors, and improving efficiency while navigating and transforming datasets in Power Query. It would bring familiarity and ease to the tool, making it even more user-friendly. Sample example from MS Excel (similar could be applied to Power Query).
Labels: Data Engineering, Power BI
Submitted by frithjof_v on 03-25-2025 12:53 PM

For low code users, it would be awesome if there was a Data Pipeline activity that can be used to run Optimize on a table in a Lakehouse (or all tables in a Lakehouse).
For example, when using Copy Activity or Dataflow Gen2 to append data to a Lakehouse table, the tables need to get optimized (compacted) after x runs. But there is no automated, low-code way to do it. So users forget to optimize the table in the destination Lakehouse.
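Until such an activity exists, one possible workaround is a small notebook cell invoked from a pipeline Notebook activity. This is only a sketch: the table names are made up, and in Fabric each statement would be passed to spark.sql():

```python
# Hypothetical workaround sketch: build Delta Lake OPTIMIZE statements for a
# list of Lakehouse table names, to be executed from a pipeline-run notebook.

def optimize_statements(tables):
    """Build one OPTIMIZE statement per table name (backtick-quoted)."""
    return [f"OPTIMIZE `{name}`" for name in tables]

for stmt in optimize_statements(["orders", "customers"]):
    print(stmt)  # in a Fabric notebook: spark.sql(stmt)
```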
Labels: Data Engineering, Data Factory | Data Pipeline, Data Factory | Dataflow
Submitted by v-aasari1 on 03-04-2025 11:35 PM

I would like to request an update to the public documentation linked below in its Japanese version. The English version is up to date, but the Japanese version has not been updated since 2024/11/20.
JP URL: https://learn.microsoft.com/ja-jp/fabric/database/mirrored-database/azure-databricks-tutorial
EN URL: https://learn.microsoft.com/en-gb/fabric/database/mirrored-database/azure-databricks-tutorial
Reason for updating: I confirmed that a new "Prerequisites" section was added in the English version. The Japanese version does not have this section, so customers cannot follow the correct procedure when testing.
Labels: Data Engineering, Data Warehouse
Submitted by frithjof_v on 03-25-2025 12:49 PM

For low code users, it would be awesome if there was a Data Pipeline activity that can be used to run Vacuum on a table in a Lakehouse (or on all tables in a Lakehouse).
For example, when using Copy Activity or Dataflow Gen2 to write to a Lakehouse table, the tables need to get vacuumed. But there is no automated, low-code way to do it, so users forget to vacuum the destination Lakehouse.
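As with the related Optimize idea, a hedged notebook-based sketch of what the activity could do today. The helper and table name are hypothetical; 168 hours is Delta Lake's default 7-day retention window, and in Fabric the statement would go to spark.sql():

```python
# Hypothetical workaround sketch: build a Delta Lake VACUUM statement with an
# explicit retention period for a Lakehouse table.

def vacuum_statement(table: str, retain_hours: int = 168) -> str:
    """Build a VACUUM statement; 168 hours matches the 7-day default."""
    return f"VACUUM `{table}` RETAIN {retain_hours} HOURS"

print(vacuum_statement("orders"))  # in a Fabric notebook: spark.sql(...)
```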
Labels: Data Engineering, Data Factory, Data Factory | Data Pipeline, Data Factory | Dataflow
Submitted by awillms on 04-02-2025 05:48 PM
It would be helpful to have a permission level between Viewer and Contributor that allows users to create things like notebooks, pipelines, and dataflows, and to add things to lakehouses, but not the ability to create new environments. We at Duke would like to be able to create shared environments with the approved libraries and Spark settings for different-size Spark pools, without giving developers the ability to create their own environments, Spark settings, or Spark pools. This would help us control costs and reduce exposure to unapproved libraries.
Submitted by ojc-orchestra on 03-11-2025 07:33 AM
Hi, Currently it is not possible to trigger a Fabric notebook run via the REST API when the notebook communicates with a lakehouse that has schemas enabled. This feature is marked as 'Public preview', but there are several threads suggesting a fix would be in place by late 2024: https://community.fabric.microsoft.com/t5/Data-Engineering/Error-when-writing-data-into-schema-enabled-lakehouse-using/td-p/4111069
The suggested workaround does not work either, and it seems very strange that it works without schemas enabled. The underlying issue appears to be:
INFO notebookUtils [Thread-62]: [FabricClient][ListWorkspaceByMssparkutils][get] Completed request url https://api.fabric.microsoft.com/v1/workspaces/ with correlation id X, requestId: X, activityId: , status code: 403, total cost 173ms, request cost 166ms
A timeline of when this feature will be supported would be appreciated. Regards, Oliver
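For context, the on-demand notebook run in question goes through the Fabric job scheduler API. The sketch below only assembles the request URL; the GUIDs and token are placeholders, and the endpoint shape should be verified against the current public API reference:

```python
# Sketch of the job-scheduler endpoint used to trigger a notebook run on
# demand (the call that fails with 403 for schema-enabled lakehouses).
BASE = "https://api.fabric.microsoft.com/v1"

def run_notebook_url(workspace_id: str, notebook_id: str) -> str:
    """Build the POST URL for an on-demand notebook job instance."""
    return (f"{BASE}/workspaces/{workspace_id}/items/{notebook_id}"
            "/jobs/instances?jobType=RunNotebook")

# usage sketch (requires the `requests` package and a valid Entra ID token):
# requests.post(run_notebook_url(ws_id, nb_id),
#               headers={"Authorization": f"Bearer {token}"})
print(run_notebook_url("<workspace-guid>", "<notebook-guid>"))
```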
Labels: Data Engineering, Fabric platform, Fabric platform | OneLake
Submitted by petri_parikka on 03-06-2025 10:44 PM
Could Microsoft support the SQLMesh project so that it provides good compatibility with Fabric warehouse and lakehouse (via Spark)? Could SQLMesh be added as a Fabric native workload, offered as a service, since this kind of SQL modeling functionality with a web UI is missing? See: Add support for Microsoft Fabric · Issue #3374 · TobikoData/sqlmesh
Labels: Data Engineering, Data Warehouse, Databases
Submitted by narendermalik on 03-31-2025 09:57 PM
Objective: Improve the export capabilities of Power BI reports by allowing users to export reports to PowerPoint in an editable format and by ensuring consistent color formatting when exporting to Excel.
Current Challenge:
- PowerPoint Export: Currently, when exporting Power BI reports to PowerPoint, the reports are often generated as static images. This limits users' ability to make post-export modifications directly within PowerPoint, such as adjusting text, resizing visuals, or changing layouts.
- Excel Export: When exporting reports to Excel, users frequently experience inconsistencies in color formatting, which affects the visual consistency and presentation of the data.
Proposed Solution:
- Editable PowerPoint Exports: Enable Power BI reports to be exported to PowerPoint in a fully editable format. Each visual and text box should be converted into native PowerPoint objects, allowing users to edit text, change colors, and resize visuals directly in PowerPoint. Maintain the structural integrity of the report so the layout remains consistent with the original Power BI report.
- Consistent Color Formatting in Excel Exports: Implement a feature that preserves the color scheme applied in Power BI when exporting to Excel. Ensure that all visual elements such as charts, tables, and conditional formatting rules retain their colors in the Excel export, providing users with a consistent visual experience and reducing the need for manual adjustments.
Benefits:
- Increased flexibility and efficiency for users who need to tailor reports for specific audiences or presentations without returning to Power BI for minor edits.
- Improved user satisfaction by reducing the time and effort required to manually adjust formatting in exported files.
- Enhanced consistency in reporting across different formats, reinforcing the professional appearance of reports.
Implementation Considerations:
- Ensure compatibility with various versions of PowerPoint and Excel to maximize adoption of this feature.
- Provide users with options to choose between static and editable exports based on their preferences or needs.
- Consider offering a preview option to view how reports will look in PowerPoint and Excel before finalizing the export.
Submitted by Artur123 on 03-14-2025 02:42 AM
When I use bookmarks, I wish I could indicate the value to look for in the matrix. For example, my rows (Y axis) use the date and my columns (X axis) use categories. In the destination condition I would like to be able to specify TODAY as the cell value, so that when I click the bookmark button the matrix visual automatically scrolls to the row where date = TODAY.
Labels: Data Engineering, Power BI
Submitted by PunChili on 03-18-2025 09:38 AM

Dear community, do you also face the challenge that you upload CSV files and need to monitor manually whether a file was modified, and then manually start the process to make the current data available in your tables/data models/reports? The documentation explains that an action can be triggered "if the file is created or replaced"; this only happens when the file is actually uploaded manually (or by a notebook) in the browser in OneLake in Fabric. If it is only changed in the OneLake file explorer for Windows (preview feature) and saved there, only the activation details arrive but no action is triggered. It would be nice to have a possibility to trigger an action when the data within the file is modified.
Submitted by cathrinew on 03-04-2025 04:32 AM

When I open a Notebook, it automatically opens the expanded Explorer pane (to the left). I rarely use this, so every time I open a Notebook I have to:
- Collapse the Explorer pane (to the left)
- Click on the View tab in the ribbon (at the top)
- Open the Table of contents pane (to the right)
- Wait for the Table of contents to load
I would like to be able to persist the view/layout for each notebook (or alternatively as a user setting) so that the next time I open the notebook I will by default see the Table of contents instead of the expanded Explorer pane. (Additionally, I would love to be able to move the panes between the left and right sides, but that's less important.) This would be a quality-of-life improvement that would save developers time and make the development experience much smoother.
Labels: Data Engineering
Submitted by fbcideas_migusr on 11-13-2024 01:07 PM
Please fix the SQL Analytics Endpoint sync delays. Many users have been surprised to find they are receiving old data because of the Lakehouse SQL Analytics Endpoint sync delays. We want Fabric to handle this automatically, so we don't need to think about it.
Labels: Data Engineering
Submitted by dwilliams3 on 01-05-2024 07:39 PM
Lakehouses, and more importantly the data inside them, are not recoverable if the lakehouse is deleted. There is also no way to recover a prior lakehouse state if the data needs to be rolled back. BCDR was recently released, but it covers only capacity disaster recovery and is too cumbersome, as it carries an extra cost along with deploying in another region. We need a way to recover a lakehouse and its history if it has been deleted or if the data has been corrupted.
Labels: Data Engineering
Submitted by fbcideas_migusr on 10-02-2024 11:47 PM
Hi! Request: please add a standard activity for refreshing the SQL Analytics endpoint as a feature directly in Data Pipelines, OR add an option (check box) in the Semantic Model refresh activity (which is in preview) to refresh the connected SQL Analytics endpoint. Problem: I have a standard activity for refreshing my Semantic Model, but the SQL Analytics endpoint, which sits one step before it, is not automatically refreshed.
Labels: Data Engineering
Submitted by raym85 on 03-22-2025 10:39 AM
One of the biggest challenges in Microsoft Fabric today is the one-branch-per-workspace limitation, which makes it difficult to work on multiple feature branches simultaneously without creating separate workspaces. To address this, I propose Fabric Desktop, a local development environment that allows users to build and test Fabric artifacts offline before syncing changes to a workspace. Fabric Desktop could function similarly to Power BI Desktop but for Fabric, enabling users to develop lakehouses, dataflows, pipelines, and notebooks locally while integrating seamlessly with version control. This would provide a true branching experience, where multiple developers can work on different features without conflicting changes in a shared workspace. It would also improve offline development, reduce the risk of unintended modifications in production workspaces, and offer faster iteration cycles. Who's with me to upvote this one?
Labels: Data Engineering, Fabric platform | Workspaces
Submitted by kbutti on 11-26-2024 01:32 PM

We use .whl files and create libraries to make configurations and common code accessible across all notebooks in our Data Engineering solution. But publishing these files to an environment is not a great experience: I need to try publishing multiple times with no luck, and it is hard to tell whether publishing was successful or not. It would be helpful if we could see the libraries, .whl files, etc. available at the cluster level.
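As a stopgap for the unpredictable publishing, one way to check from inside a notebook whether a published .whl actually landed on the cluster is to query the installed distribution metadata. This is only a sketch; "my-shared-lib" is a hypothetical package name:

```python
# Sketch: report the installed version of a distribution, or None if it is
# absent, as a quick post-publish sanity check from a notebook cell.
import importlib.metadata

def installed_version(distribution: str):
    """Return the installed version string, or None if the package is missing."""
    try:
        return importlib.metadata.version(distribution)
    except importlib.metadata.PackageNotFoundError:
        return None

print(installed_version("pip"))            # a package that is usually present
print(installed_version("my-shared-lib"))  # None until the publish succeeded
```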
Labels: Data Engineering, Fabric platform | Workspaces