We need to store semantic model data for later use and as historical snapshots. Currently I cannot find a pipeline that allows the usage of semantic models as input.
What are your suggestions for solving this?
I currently came up with the following solution:
This approach seems to work. It also does not seem to run into the DAX export limits that Power Automate jobs hit with large tables.
But isn't there a better, already built-in approach?
Please tell me if you know any better solutions or have other ideas.
Code example:

import logging
from datetime import datetime

from sempy.fabric import FabricDataFrame

logger = logging.getLogger(__name__)


def process_semantic_to_lakehouse(config):
    # 1. Read the table from the semantic model
    logger.info("Reading semantic model table from workspace '%s', dataset '%s', table '%s'.",
                config['workspace'], config['dataset'], config['table'])
    df_semantic_model = FabricDataFrame.read_table(
        workspace=config['workspace'],
        dataset=config['dataset'],
        table=config['table']
    )
    logger.info("Successfully read semantic model table with %d rows.", len(df_semantic_model))

    # 2. Optionally add a timestamp column for historical tracking
    if config.get("add_import_timestamp", False):
        logger.info("Adding import timestamp column 'Import Datum' to the DataFrame.")
        display("Adding import timestamp column...")  # display() is a notebook built-in
        df_semantic_model['Import Datum'] = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    else:
        logger.info("Skipping import timestamp column as per configuration.")

    # 3. Save the data to the lakehouse table
    logger.info("Saving DataFrame to lakehouse table '%s' with mode '%s'.",
                config['lakehouse_table'], config['mode'])
    df_semantic_model.to_lakehouse_table(
        name=config['lakehouse_table'],
        mode=config['mode']
    )
    logger.info("Data successfully saved to lakehouse table '%s'.", config['lakehouse_table'])
Hi @Makadoro,
Thank you for reaching out to the Microsoft Fabric community forum, and thank you @NandanHegde and @pallavi_r for your inputs on this issue.
After reviewing the details you provided, I have identified a few workarounds that may help resolve the issue. Please follow these steps:
Since Microsoft Fabric does not provide a built-in pipeline component to extract semantic model data directly, the best approach is to use XMLA endpoints to query the dataset and store the extracted data in a Fabric Lakehouse or SQL database (a sketch follows the list below):
1. Extract data from the semantic model (Power BI dataset): Use XMLA endpoints to connect to the Power BI dataset, query it with DAX for structured data extraction, and extract the required tables for historical storage.
2. Store the extracted data in a Fabric Lakehouse or SQL database: After extracting the data, store it in a Fabric Lakehouse or SQL database for historical tracking. Append an import timestamp column for versioning and partition by date to optimize queries in the Lakehouse.
3. Automate the process using Fabric pipelines: Trigger the process after each semantic model refresh using a pipeline or API, run a Fabric notebook that extracts data from the dataset and writes it to the Lakehouse, and schedule the pipeline to ensure automated updates.
4. Connect Power BI to the Lakehouse for further analysis: Use Direct Lake mode or a SQL connection from Power BI to query the stored data, and perform transformations in Power Query before visualization.
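To make steps 1 through 3 concrete, here is a minimal sketch using Semantic Link (SemPy) in a Fabric notebook; sempy's evaluate_dax queries the model through the XMLA endpoint, and spark is the session Fabric notebooks provide. All workspace, model, table, and column names are placeholders:

from datetime import datetime

import sempy.fabric as fabric

# 1. Extract: run a DAX query against the semantic model over XMLA.
#    "Sales Model", "Sales Workspace", and FactSales are placeholders.
df = fabric.evaluate_dax(
    dataset="Sales Model",
    dax_string="EVALUATE FactSales",
    workspace="Sales Workspace",
)

# DAX returns fully qualified column names like FactSales[Amount];
# brackets are not valid in Delta column names, so sanitize them.
df.columns = [c.replace("'", "").replace("[", "_").replace("]", "") for c in df.columns]

# 2. Version: add an import timestamp and a date column for partitioning.
now = datetime.now()
df["ImportTimestamp"] = now.strftime("%Y-%m-%d %H:%M:%S")
df["ImportDate"] = now.strftime("%Y-%m-%d")

# 3. Load: append the snapshot to a Lakehouse Delta table,
#    partitioned by date to optimize later queries.
spark_df = spark.createDataFrame(df)
(spark_df.write
    .mode("append")
    .partitionBy("ImportDate")
    .saveAsTable("fact_sales_history"))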
Kindly refer to the following link for more information:
Semantic model connectivity and management with the XMLA endpoint in Power BI - Power BI | Microsoft...
If this post helps, then please give us ‘Kudos’ and consider accepting it as a solution to help other members find it more quickly.
Thank you.
Hi @Makadoro,
I wanted to check whether you had the opportunity to review the information provided by @NandanHegde and @pallavi_r. Please feel free to contact us if you have any further questions. If their response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.
Thank you.
You can use the REST API and partition and paginate as described below:
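The original link does not appear in the thread, but the general technique is the Power BI executeQueries REST API, issuing one DAX query per partition so each result stays under the endpoint's per-query row limits. A minimal sketch, assuming a valid Entra ID access token is already available; the dataset id, FactSales table, and [OrderDate] column are hypothetical:

import requests

token = "<entra-id-access-token>"  # placeholder; obtain via your auth flow
dataset_id = "<dataset-id>"        # placeholder dataset id
url = f"https://api.powerbi.com/v1.0/myorg/datasets/{dataset_id}/executeQueries"
headers = {"Authorization": f"Bearer {token}"}

rows = []
for year in range(2020, 2025):
    # One DAX query per partition (here: per year) keeps each result
    # under the per-query limits of the executeQueries endpoint.
    dax = f"EVALUATE FILTER(FactSales, YEAR(FactSales[OrderDate]) = {year})"
    resp = requests.post(url, headers=headers,
                         json={"queries": [{"query": dax}]})
    resp.raise_for_status()
    rows.extend(resp.json()["results"][0]["tables"][0]["rows"])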
Or you can even use Dataflow Gen2, which supports semantic models as a source.
Do you have that option in Dataflow Gen2? I cannot find it, no matter how hard I look.
Hi @Makadoro
Please see the link below; it may solve your problem.
https://fabric.guru/fabric-semantic-link-and-use-cases
Thanks,
Pallavi
Thanks. Yeah that is kind of what I am doing with my notebook solution above.
I am just not super thrilled about it as it is a coding solution that not every employee can handle.