uk-roberto92
Frequent Visitor

Recording a PBI table into SQL Server

Hi,

 

I have a data model in Power BI with a final table whose columns are built using conditions that depend on other related tables.

 

I was wondering if there's a way to record a snapshot of this final table from PBI into SQL Server?

I guess this would be the opposite of the normal flow: from SQL -> PBI to **SQL -> PBI -> SQL**.

 

I'm asking because it would be quite difficult and time-consuming to replicate the final table in a SQL stored procedure.

 

Thanks

1 ACCEPTED SOLUTION

Hi @uk-roberto92, thank you for reaching out to the Microsoft Community Forum.

 

Yes, you are on the right track: a Fabric Notebook with PySpark is the most scalable route. I've outlined it below, along with two alternatives that may be quicker to set up.

 

For the most reliable and immediate solution, I recommend using Power BI Service with Power Automate. Since you’ve confirmed DAX Studio works for a manual export, we’ll build on that. Publish your report to a Power BI Service workspace and schedule a dataset refresh. Create a Power Automate scheduled flow to run shortly after. In the flow, use the “Power BI - Run a Query Against a Dataset” action with your DAX query (EVALUATE FinalTable) to export the full table as JSON. Then, use the “SQL Server - Insert Row” action to parse the JSON and write it to a pre-created SQL table (e.g., PowerBI_Snapshot). For datasets over 100,000 rows, add pagination by modifying the DAX with TOPN and an offset loop in the flow. Publish the workspace as a Power BI App for access. This is fully automated, leverages your existing tools, and requires no extra licensing beyond Power BI Premium (if already in use). It’s efficient for small-to-medium datasets and scalable with minimal effort.
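
For context, the “Run a Query Against a Dataset” action wraps the Power BI REST executeQueries endpoint, which caps each query at roughly 100,000 rows. Below is a minimal Python sketch of that same call with a TOPN/offset paging loop; the dataset ID, bearer token, page size and the sortable key column FinalTable[RowId] are placeholders I've assumed for illustration, not part of your model.

```python
import requests

DATASET_ID = "<dataset-guid>"       # hypothetical placeholder
TOKEN = "<aad-bearer-token>"        # acquired via Azure AD / service principal
PAGE_SIZE = 100_000                 # executeQueries returns at most ~100k rows per query

def run_dax(dax: str) -> list[dict]:
    """Send one DAX query to the executeQueries REST endpoint and return its rows."""
    url = f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/executeQueries"
    body = {"queries": [{"query": dax}], "serializerSettings": {"includeNulls": True}}
    resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    return resp.json()["results"][0]["tables"][0]["rows"]

# Page through FinalTable using an assumed sortable key column FinalTable[RowId].
rows, last_key = [], -1
while True:
    page = run_dax(
        f"EVALUATE TOPN({PAGE_SIZE}, "
        f"FILTER(FinalTable, FinalTable[RowId] > {last_key}), FinalTable[RowId], ASC)"
    )
    if not page:
        break
    rows.extend(page)
    last_key = max(r["FinalTable[RowId]"] for r in page)
    if len(page) < PAGE_SIZE:
        break

print(f"Fetched {len(rows)} rows; insert them into dbo.PowerBI_Snapshot next.")
```

Inside the flow itself you would keep this paging in a Do-until loop rather than Python, but the query shape is the same.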

 

If you have access to Microsoft Fabric and are comfortable with PySpark, I suggest an alternative for maximum scalability. Create a Fabric Lakehouse and a Notebook. In the Notebook, use PySpark to query your Power BI dataset directly with mssparkutils.notebook.pbi_query, passing EVALUATE FinalTable to load the full table into a DataFrame. Export it to SQL Server using the JDBC connector with an overwrite mode to ensure a fresh snapshot. Set up a Fabric Data Pipeline, add the Notebook as a task, and schedule it. This bypasses REST API row limits, making it ideal for large or complex datasets. However, it requires Fabric licensing and PySpark knowledge, so it’s only worth pursuing if your dataset exceeds 100,000 rows or Power Automate’s setup feels limiting.
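
I can't vouch for the exact mssparkutils helper named above, so here is a hedged sketch of the same idea using Fabric's semantic-link (sempy) library, which exposes evaluate_dax in Fabric notebooks. The semantic model name, JDBC URL and credentials are placeholders, and it assumes the SQL Server JDBC driver available in the Fabric Spark runtime.

```python
# Fabric notebook sketch: read the semantic model with semantic-link (sempy),
# then overwrite a SQL Server snapshot table via the Spark JDBC connector.
import sempy.fabric as fabric

# Run the same DAX you used manually in DAX Studio; returns a pandas-style DataFrame.
pdf = fabric.evaluate_dax(
    dataset="YourSemanticModel",        # placeholder semantic model name
    dax_string="EVALUATE FinalTable",
)

sdf = spark.createDataFrame(pdf)        # 'spark' session is predefined in Fabric notebooks

(sdf.write
    .format("jdbc")
    .option("url", "jdbc:sqlserver://yourserver.database.windows.net:1433;databaseName=YourDb")
    .option("dbtable", "dbo.PowerBI_Snapshot")   # target snapshot table
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .option("user", "your_sql_user")             # placeholder credentials
    .option("password", "your_sql_password")
    .mode("overwrite")                           # drop-and-reload for a fresh snapshot
    .save())
```

Drop the notebook into a Fabric Data Pipeline and schedule it, as described above.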

 

For a quick manual option that’s already tested, stick with DAX Studio. Connect it to your Power BI Desktop model, run EVALUATE FinalTable, and export the results to a CSV file. Then, use SQL Server Management Studio (SSMS) to import the CSV into your table. This is perfect for one-off snapshots and requires no setup beyond what you’ve already done. However, it’s not automated, so it’s only a stopgap while setting up one of the above solutions.
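
If you would rather script the CSV load than click through the SSMS import wizard, a small pandas + SQLAlchemy sketch can do it; the file name, connection string and ODBC driver version below are placeholders, and it assumes pyodbc and the Microsoft ODBC driver are installed.

```python
# Scripted alternative to the SSMS import wizard: load the DAX Studio CSV export
# into SQL Server with pandas + SQLAlchemy.
import pandas as pd
from sqlalchemy import create_engine

df = pd.read_csv("FinalTable.csv")   # file exported from DAX Studio

engine = create_engine(
    "mssql+pyodbc://your_sql_user:your_sql_password@yourserver/YourDb"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)

# Replace the snapshot table on every run so it always reflects the latest export.
df.to_sql("PowerBI_Snapshot", engine, schema="dbo", if_exists="replace", index=False)
```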

 

If this helped solve the issue, please consider marking it 'Accept as Solution' so others with similar queries may find it more easily. If not, please share more details; I'm always happy to help.
Thank you.


5 REPLIES
Cookistador
Super User

I did something similar for a customer

 

1) Create a table with all the fields you need to store in your SQL Server database

2) Use Performance Analyzer to get the query

3) Add a Power Automate button in Power BI

4) Use the Power BI execute query action and paste the code from Performance Analyzer

5) Write the results of the query to your database

 

If you need more information, do not hesitate to ask for help

Deku
Super User

You can use the Execute Queries In Group API. If you use Performance Analyzer, you can copy the query from the Power BI visual and throw it into the API to get the data back.

 

I think you can also use an ODBC connection to pull the data.


Did I answer your question?
Please help by clicking the thumbs-up button and marking my post as a solution!
freginier
Super User

Hi,

To record a snapshot of a Power BI table into SQL Server, there are a few ways you could approach it, depending on your specific requirements:

  1. Exporting Data to SQL Server via Power Query or DAX: You could use Power BI's built-in Power Query editor to connect directly to your SQL Server and load the data back into SQL Server using a custom query or even by writing a stored procedure. However, this requires you to transform the data before sending it back to SQL Server, and it may be tricky if the tables in Power BI are complex.

  2. Using Power Automate (Flow): Power Automate allows you to create automated workflows between your Power BI and SQL Server. You could use Power Automate to trigger an action to export the data from Power BI (as a CSV file, for example) and then push that data into your SQL Server database. It allows more flexibility but requires setup in both Power BI and the SQL Server environment.

  3. Export Data via DAX Studio or External Tools: DAX Studio is an excellent tool that allows you to export data directly from Power BI Desktop to different formats like Excel, CSV, or even a database like SQL Server. You can create a connection to the Power BI model and use it to query the table and export it. 

Hope this helps!

😁😁

Thanks @freginier, I just tried the DAX Studio export and it works fine!

Might be asking too much now, but do you know if there's a way to make this happen automatically? (maybe from a PBI App?)

e.g. update table -> export to SQL

 

I am now looking into a Fabric Notebook with PySpark... I feel like I am heading in the right direction.

