
Ashwath_Bala_S
Helper II

Transforming Data Through SQL Endpoint for Enhanced Power BI Dashboards

Hi, 

I am bringing tables from Dataverse into my Fabric workspace as shortcut tables.

I would like to perform transformations such as handling null values, creating measures and calculated columns, and querying with SQL on the SQL endpoint, and then use the results to build a dashboard in the Power BI experience.

As a first approach, I am performing these transformations using the Visual Query method and saving the result as a view in OneLake, so I can use it in the Power BI experience.

Is there any other method to do the same and save the result to OneLake? Or should we transform only selected tables (to minimize data export), since the data lives in another environment and comes into the Fabric workspace as shortcut tables from Dataverse?

Or should I select the whole dataset from Dataverse again using the Visual Query method and copy the full data into OneLake? (I believe the transformations do not run against the Dataverse tables themselves; the changes are applied and saved in OneLake.)

Any assistance on this would be highly helpful.

 

Thanks in Advance.

1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @Ashwath_Bala_S 
Thanks for explaining your ask further.
1) You are right about the first approach.

2) The process cannot be fully automated, but you can run notebooks at desired intervals. After the data is refreshed in Dataverse, you can schedule the notebook to run on a daily basis.


You can also use the notebook activity in a pipeline, which can be scheduled at desired intervals. Please refer to this document for more information: Link1


Of your two approaches, the first one is the better option.
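For illustration, a view of that kind on the SQL analytics endpoint could look like the following sketch. All table and column names here are hypothetical placeholders, not objects from your workspace:

```sql
-- Hypothetical sketch: a view over a Dataverse shortcut table that handles
-- nulls and adds a calculated column. Replace the names with your own.
CREATE VIEW dbo.vw_account_clean
AS
SELECT
    accountid,
    COALESCE(accountname, 'Unknown') AS accountname,   -- null handling
    COALESCE(revenue, 0)             AS revenue,       -- null handling
    COALESCE(revenue, 0) * 0.10      AS estimated_tax  -- calculated column
FROM dbo.account;
```

Because the view is evaluated at query time, a report built on it reflects whatever data is currently in the shortcut table, which matches the refresh behavior described in the first approach.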

Hope this helps. Please let us know if you have any further questions.


5 REPLIES
Ashwath_Bala_S
Helper II

Hi,

Thanks for your response.

 

The F&O tables are brought into Dataverse. From Dataverse, the tables are linked into Microsoft Fabric as shortcut tables.

First approach: bringing the data from the Lakehouse into the SQL endpoint, creating a visual query, removing nulls, creating calculated columns, saving the result as a view, and then building the report on that view. (I believe that once data is updated in Dataverse, the report picks up the updates through the view.)

Second approach: using a Spark notebook, loading a DataFrame, applying the transformations, and saving the result as a Delta table in OneLake. I am uncertain whether this will run automatically once the data is refreshed in Dataverse.

Third approach: using a Warehouse. I would prefer the other methods, since the data is already in the Lakehouse, and in the Warehouse I would still need the same visual queries and measures to do the transformations.

The flow I prefer: once data is refreshed in D365 F&O and synced to Dataverse, the report in Fabric should update automatically. Any assistance on automating this (via the SQL endpoint, Spark notebooks, or a Data Warehouse) would be helpful.

I hope I have understood your points; if not, kindly advise, since Fabric is a fresh perspective for me.

Thanks in Advance.


Anonymous
Not applicable

Hi @Ashwath_Bala_S 
Thanks for using Fabric Community.
If you do not need to transform the data, what you are doing works great. If you do need transformations, you could use a Spark notebook or add a Warehouse to store the enriched data. You should be able to query the data through a shortcut, and you would only save the enriched/updated data into OneLake as a new table.
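If you go the Warehouse route, persisting the enriched data as a new table could be sketched with a CREATE TABLE AS SELECT. Again, all object names below are hypothetical, and the sketch assumes the Lakehouse and Warehouse sit in the same workspace so cross-database querying applies:

```sql
-- Hypothetical sketch: materialize cleaned shortcut data as a new table
-- in a Warehouse. Object names are placeholders.
CREATE TABLE dbo.account_enriched
AS
SELECT
    accountid,
    COALESCE(accountname, 'Unknown') AS accountname,  -- null handling
    COALESCE(revenue, 0)             AS revenue        -- null handling
FROM MyLakehouse.dbo.account;  -- shortcut table queried cross-database
```

Unlike a view, this copies the enriched data into the Warehouse, so it would need to be re-run (for example on a schedule) after the Dataverse data refreshes.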
Hope this helps. Please let us know if you have any further questions.

Anonymous
Not applicable

Hi @Ashwath_Bala_S 
We haven't heard from you since the last response and wanted to check whether your query got resolved. If not, please reply with more details and we will try to help.
Thanks

Hi @Anonymous ,

Yeah, my query got resolved.

Thanks for the assistance.
