Makadoro
Frequent Visitor

Copy semantic models in Fabric to a database

We need to store semantic models for further use or as historical data. Currently I cannot find a pipeline that allows using a semantic model as an input.

 

What are your suggestions for solving this?


For now I have come up with the following solution:

  1. Run a notebook after the refresh of the semantic model to read the data from the semantic model and save it to a lakehouse (see the code example below)
  2. Create a SQL connection from Power BI to the lakehouse, to be able to do further transformations via Power Query

This approach seems to work. It also does not seem to run into the DAX export limits that Power Automate jobs hit with large tables.

But isn't there a better or already built-in approach?

Please let me know if you know of any better solutions or have other ideas.

 

Code example:

import logging
from datetime import datetime

import sempy.fabric as fabric

logger = logging.getLogger(__name__)


def process_semantic_to_lakehouse(config):
    # 1. Read the table from the semantic model via semantic link
    logger.info("Reading semantic model table from workspace '%s', dataset '%s', table '%s'.",
                config['workspace'], config['dataset'], config['table'])
    df_semantic_model = fabric.read_table(
        dataset=config['dataset'],
        table=config['table'],
        workspace=config['workspace']
    )
    logger.info("Successfully read semantic model table with %d rows.", len(df_semantic_model))

    # 2. Optionally add a timestamp column for historical tracking
    if config.get("add_import_timestamp", False):
        logger.info("Adding import timestamp column 'Import Datum' to the DataFrame.")
        df_semantic_model['Import Datum'] = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    else:
        logger.info("Skipping import timestamp column as per configuration.")

    # 3. Save the data to the lakehouse table
    logger.info("Saving DataFrame to lakehouse table '%s' with mode '%s'.",
                config['lakehouse_table'], config['mode'])
    df_semantic_model.to_lakehouse_table(
        name=config['lakehouse_table'],
        mode=config['mode']
    )
    logger.info("Data successfully saved to lakehouse table '%s'.", config['lakehouse_table'])
1 ACCEPTED SOLUTION
v-saisrao-msft
Community Support

Hi @Makadoro,
Thank you for reaching out to the Microsoft Fabric community forum, and thank you @NandanHegde and @pallavi_r for your inputs on this issue.

After reviewing the details you provided, I have identified a few workarounds that may help resolve the issue. Since Microsoft Fabric does not provide a built-in pipeline component to extract semantic model data directly, the best approach is to use XMLA endpoints to query the dataset and store the extracted data in a Fabric Lakehouse or SQL database. Please follow these steps:

Extract Data from the Semantic Model (Power BI Dataset): Use XMLA endpoints to connect to the Power BI dataset. Query the dataset using DAX (for structured data extraction). Extract the required tables for historical storage.
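
For example, from a Fabric notebook the semantic link (sempy) library can run such a DAX query over the XMLA endpoint. A minimal sketch, assuming placeholder workspace, dataset and table names:

import sempy.fabric as fabric

# Run a DAX query against the semantic model over the XMLA endpoint.
# "Sales Model", "My Workspace" and 'FactSales' are placeholder names.
df = fabric.evaluate_dax(
    dataset="Sales Model",
    dax_string="EVALUATE 'FactSales'",
    workspace="My Workspace",
)
print(f"Extracted {len(df)} rows from the semantic model.")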

Store the Extracted Data in Fabric Lakehouse or SQL Database: After extracting the data, store it in a Fabric Lakehouse or SQL database for historical tracking. Append an Import Timestamp column for versioning. Use partitioning by date to optimize queries in the Lakehouse.
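
A rough sketch of that storage step in a Fabric notebook, assuming the built-in spark session, a pandas DataFrame df from the extraction step, and a placeholder table name:

from datetime import datetime, timezone
from pyspark.sql import functions as F

# Convert the extracted DataFrame to Spark and add audit columns.
sdf = spark.createDataFrame(df)
now = datetime.now(timezone.utc)
sdf = (
    sdf.withColumn("ImportTimestamp", F.lit(now.isoformat()))
       .withColumn("ImportDate", F.lit(now.strftime("%Y-%m-%d")))
)

# Append to a Delta table in the lakehouse, partitioned by import date.
(
    sdf.write.format("delta")
       .mode("append")
       .partitionBy("ImportDate")
       .saveAsTable("fact_sales_history")  # placeholder table name
)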

Automate the Process Using Fabric Pipelines: Trigger the process after each semantic model refresh using a pipeline or API. Run a Fabric Notebook that extracts data from the dataset and writes it to the Lakehouse. Schedule the pipeline to ensure automated updates.
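
As one possible way to gate the notebook on the refresh, here is a sketch that checks the latest refresh status through the Power BI REST API refresh history endpoint; the IDs and the token below are placeholders:

import requests

GROUP_ID = "<workspace-guid>"      # placeholder
DATASET_ID = "<dataset-guid>"      # placeholder
token = "<aad-access-token>"       # placeholder; obtain via your preferred auth flow

# Get the most recent refresh entry for the dataset.
url = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
    f"/datasets/{DATASET_ID}/refreshes?$top=1"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

latest = resp.json()["value"][0]
if latest["status"] == "Completed":
    print("Latest refresh completed at", latest.get("endTime"))
    # safe to run the extraction and lakehouse write here
else:
    print("Refresh not completed yet, status:", latest["status"])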

Connect Power BI to the Lakehouse for Further Analysis: Use Direct Lake mode or SQL Connection from Power BI to query stored data. Perform transformations in Power Query before visualization.


Kindly refer to the following link for more information:
Semantic model connectivity and management with the XMLA endpoint in Power BI - Power BI | Microsoft...

If this post helps, please give us ‘Kudos’ and consider accepting it as a solution to help other members find it more quickly.

Thank you.


6 REPLIES

v-saisrao-msft
Community Support

Hi @Makadoro,
I wanted to check whether you have had the opportunity to review the information provided by @NandanHegde and @pallavi_r. Please feel free to contact us if you have any further questions. If their response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.
Thank you.

NandanHegde
Super User

You can use the REST API and partition and paginate as described below:

https://datasharkx.wordpress.com/2023/03/02/overcoming-data-size-row-limitations-of-power-bi-rest-ap...
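
Roughly, the pattern is to split the extract into partitions so each REST call stays below the per-query row limit of the executeQueries endpoint. A hedged sketch of that idea; the dataset ID, token, table name and partition column are placeholders:

import requests

DATASET_ID = "<dataset-guid>"   # placeholder
token = "<aad-access-token>"    # placeholder
url = f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/executeQueries"
headers = {"Authorization": f"Bearer {token}"}

all_rows = []
for year in range(2020, 2026):
    # One partition per call keeps each result under the row limit.
    dax = f"EVALUATE FILTER('FactSales', 'FactSales'[Year] = {year})"
    body = {"queries": [{"query": dax}], "serializerSettings": {"includeNulls": True}}
    resp = requests.post(url, json=body, headers=headers)
    resp.raise_for_status()
    all_rows.extend(resp.json()["results"][0]["tables"][0]["rows"])

print(f"Fetched {len(all_rows)} rows across partitions.")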

 

Or you can even use Dataflow Gen2, which supports semantic models as a source.




----------------------------------------------------------------------------------------------
Nandan Hegde (MSFT Data MVP)
LinkedIn Profile : www.linkedin.com/in/nandan-hegde-4a195a66
GitHub Profile : https://github.com/NandanHegde15
Twitter Profile : @nandan_hegde15
MSFT MVP Profile : https://mvp.microsoft.com/en-US/MVP/profile/8977819f-95fb-ed11-8f6d-000d3a560942
Topmate : https://topmate.io/nandan_hegde
Blog : https://datasharkx.wordpress.com

Do you have that option in Dataflow Gen2? I cannot find it, no matter how hard I look.

(screenshot attachment: Makadoro_0-1742794547595.png)

 

pallavi_r
Super User

Hi @Makadoro 

 

Please see the link below; it may solve your problem.

https://fabric.guru/fabric-semantic-link-and-use-cases

 

Thanks,

Pallavi

Thanks. Yeah, that is kind of what I am doing with my notebook solution above.
I am just not super thrilled about it, as it is a coding solution that not every employee can handle.
