Hi everyone,
I'm currently working with Microsoft Fabric and built a machine learning model in Jupyter Notebook. I would like to integrate this ML model directly into Power BI for visualization and further analysis.
I'm particularly interested in understanding the best approach, whether that's embedding the notebook outputs, using them as data sources, or leveraging another recommended workflow.
Has anyone successfully implemented this integration? If so, are there specific connectors, configurations, or steps I should be aware of?
I’d really appreciate any guidance, examples or best practices you can share.
Thanks in advance!
Hi @SarahHamad , Thank you for reaching out to the Microsoft Community Forum.
If you’ve built a machine learning model in a Microsoft Fabric Notebook using Jupyter, the best way to integrate it with Power BI is to write your prediction outputs to a Delta Lake table in a Lakehouse. This lets Power BI connect using Direct Lake mode, which provides fast, real-time analytics without importing or duplicating data. You simply convert your model’s output into a Spark DataFrame and save it with .write.format("delta").saveAsTable(). Once saved, Power BI can immediately query the table, enabling seamless visualizations.
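For illustration, here is a minimal sketch of that step, assuming the built-in spark session of a Fabric notebook attached to a Lakehouse; forecast_pdf and the table name are placeholders for your own objects:

# Hypothetical example: forecast_pdf is a pandas DataFrame holding the model's predictions
forecast_df_spark = spark.createDataFrame(forecast_pdf)

# Write the predictions as a Delta table registered in the attached Lakehouse
forecast_df_spark.write.format("delta").mode("overwrite").saveAsTable("ml_forecast_output")  # placeholder table name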
To keep predictions fresh, use Data Factory pipelines in Fabric to schedule and automate the notebook execution, ensuring new data is processed regularly. For production, organize everything in one Fabric workspace, secure access with row-level security (RLS) and consider logging model metrics to monitor performance via dashboards. This method is scalable, clean and uses the native tools of the Fabric ecosystem, making it ideal for most real-world ML + BI workflows.
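If you do want to log metrics, one option (a sketch, not the only way) is the MLflow experiment tracking that Fabric notebooks ship with; the experiment name and metric values below are placeholders:

import mlflow

mlflow.set_experiment("forecast-monitoring")  # placeholder experiment name

with mlflow.start_run():
    mlflow.log_param("model_type", "ARIMA")
    mlflow.log_metric("rmse", 12.7)    # replace with your own evaluation results
    mlflow.log_metric("mape", 0.042)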
If this helped solve the issue, please consider marking it 'Accept as Solution' so others with similar queries may find it more easily. If not, please share the details, always happy to help.
Thank you.
Thank you so much for your reply. I can successfully save my output into a Spark DataFrame; however, I used different write commands (and made sure to refresh my Lakehouse) and am unable to find my table in my Lakehouse. Do you have any suggestions on how to resolve this?
These are the commands I have tried:
Hi @SarahHamad , Thank you for reaching out to the Microsoft Community Forum.
In MS Fabric, just writing files to a path like "Tables/ARIMA_Forecast4" won’t make the table visible in the Lakehouse or available for querying in Power BI. To fix this, you need to explicitly register the table in the Lakehouse metadata. The most direct way to do this is by using the saveAsTable() method inside your Fabric notebook. If your notebook is properly attached to a Lakehouse, you can register the output as shown below:
forecast_df_spark.write.format("delta").mode("overwrite").saveAsTable("ARIMA_Forecast4")
Once the command runs successfully, head back to your Lakehouse, go to the Tables tab and click the Refresh button. Your new table should now appear there. If you want to double-check from within the notebook, run the following to list all registered tables:
spark.sql("SHOW TABLES").show()
If your table still doesn't show up, check the output logs in the notebook for any errors, especially around permissions or file paths. Once the table is visible in the Lakehouse, you can load it directly into Power BI using the Lakehouse connector. This allows direct querying using Direct Lake mode, so you won’t need to import or duplicate your data.
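As a quick way to check from the notebook where the table actually ended up (a sketch using the standard Spark catalog API; the table name matches the example above), you can list the registered tables and ask Delta for the storage location:

# List every table registered in the current database
for t in spark.catalog.listTables():
    print(t.database, t.name, t.isTemporary)

# DESCRIBE DETAIL shows the physical location and format of a Delta table
spark.sql("DESCRIBE DETAIL ARIMA_Forecast4").select("location", "format").show(truncate=False)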
If this helped solve the issue, please consider marking it 'Accept as Solution' so others with similar queries may find it more easily. If not, please share the details, always happy to help.
Thank you.
Thank you for the detailed explanation! What you wrote actually made me think about checking that I’m in the correct environment, PySpark, instead of the Python environment. Executing the saveAsTable command in the Python environment registered the metadata to "Default" instead of my attached Lakehouse. In the PySpark environment, a Spark session is fully initialized with Lakehouse integration and Delta support pre-configured, so the command worked flawlessly.
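For anyone hitting the same thing, a quick sanity check (a sketch, using the standard Spark catalog API) is to confirm which database unqualified table names will resolve to before calling saveAsTable():

# Should print the name of the attached Lakehouse, not "default"
print(spark.catalog.currentDatabase())

# Lists every database (Lakehouse) the session can see
spark.sql("SHOW DATABASES").show()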
Thank you again for your help...this issue had me stumped for a while 🙂
Hi @SarahHamad,
Here are the key solutions for integrating Jupyter Notebook with Power BI:
1. Use Jupyter Notebook as a Data Source:
Export to CSV/Excel: Save the output of your ML model to CSV or Excel files from Jupyter, then import these files into Power BI as data sources.
Azure Storage: Store the results in Azure Blob Storage or SQL Database, then connect Power BI to these sources.
2. Use Python Scripts in Power BI:
Python Script: Power BI allows you to run Python scripts directly to process data. You can run the same code from Jupyter to retrieve the predictions and integrate them into Power BI.
3. Embed Visualizations in Power BI:
Custom Visuals: Use libraries like matplotlib or seaborn to create custom charts in Jupyter, then import these visuals into Power BI.
Power BI API: Use the Power BI REST API to push results from your Jupyter notebook into a Power BI dataset (see the sketch after this list).
4. Use Azure Synapse or Dataflows:
Leverage Azure Synapse or Azure Dataflows to run Python-based ML models and push the results directly to Power BI, automating data updates.
5. Expose Jupyter Notebook as an API:
If hosted on platforms like Azure Notebooks or Azure ML, you can expose the notebook as a web service and integrate it into Power BI using its API or service URL.
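To illustrate the REST API option from point 3, here is a minimal sketch of pushing rows into a push dataset; the dataset ID, table name, token and row schema are all placeholders you would replace with values from your own tenant (for example, a token acquired via MSAL):

import requests

access_token = "<AAD access token>"   # placeholder: acquire via MSAL or similar
dataset_id = "<push dataset id>"      # placeholder: ID of an existing push dataset
table_name = "Forecast"               # placeholder: table defined in that dataset

url = f"https://api.powerbi.com/v1.0/myorg/datasets/{dataset_id}/tables/{table_name}/rows"

# Rows must match the column definitions of the table in the push dataset
payload = {"rows": [
    {"date": "2025-06-01", "prediction": 123.4},
    {"date": "2025-07-01", "prediction": 130.9},
]}

resp = requests.post(url, json=payload, headers={"Authorization": f"Bearer {access_token}"})
resp.raise_for_status()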
If this response has helped, please consider clicking “Accept Answer” and “Yes”.
If you still have any questions or need further assistance, feel free to let us know — we're happy to help!
Thank you!