Ar_Sh
Advocate II

SharePoint Connection

I’ve been given a task by my team (and my manager mentioned it could be a proving point for me). The task is to connect SharePoint to Fabric, but not through Dataflows, since that approach keeps failing and isn’t feasible. They want the connection to be made either through notebooks or some other alternative. Is there a way to do that?

1 ACCEPTED SOLUTION
Shubham_rai955
Memorable Member

Yes, there’s definitely a way. You can connect SharePoint to Microsoft Fabric without using Dataflows. Here are the main alternatives:


Option 1: Use a Fabric Notebook (Python-based connection)

If you have access to notebooks in Fabric, this is the cleanest and most flexible approach. You can connect to SharePoint using Python libraries.

Steps:

  1. Create a new Notebook in Fabric under your Workspace.

  2. In the first cell, install the needed package (inside Fabric notebooks, %pip is the recommended installer):

    %pip install office365-rest-python-client
  3. Then use the following sample code (see the note after these steps if your account has MFA enforced):

    from office365.sharepoint.client_context import ClientContext
    from office365.runtime.auth.user_credential import UserCredential
    from office365.sharepoint.files.file import File
    import pandas as pd

    # SharePoint site and file details
    site_url = "https://yourtenant.sharepoint.com/sites/YourSiteName"
    file_url = "/sites/YourSiteName/Shared Documents/yourfile.xlsx"

    # Authentication with username and password
    ctx = ClientContext(site_url).with_credentials(
        UserCredential("your_email@domain.com", "your_password")
    )

    # Download the file to the notebook's local storage
    response = File.open_binary(ctx, file_url)
    with open("yourfile.xlsx", "wb") as local_file:
        local_file.write(response.content)

    # Load into a DataFrame and display it
    df = pd.read_excel("yourfile.xlsx")
    display(df)
  4. From there, you can write the data into your Lakehouse. The line below lands it as a Parquet file under Files; see the sketch after these steps for saving it as a managed table instead:

     
    df.to_parquet("/lakehouse/default/Files/yourdata.parquet")
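
A note on step 4: df.to_parquet writes a file into the Lakehouse’s Files area rather than creating a table. If you need an actual Lakehouse table, a minimal sketch is to go through Spark and save as Delta (this assumes a default Lakehouse is attached to the notebook; "yourdata" is a placeholder table name):

    # Convert the pandas DataFrame to Spark and save it as a Delta table
    # in the attached Lakehouse ("yourdata" is a placeholder name)
    spark_df = spark.createDataFrame(df)
    spark_df.write.mode("overwrite").format("delta").saveAsTable("yourdata")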

This method avoids Dataflows entirely and lets you automate pulling data from SharePoint directly into your Lakehouse.
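
One caveat on the authentication in step 3: UserCredential (username and password) typically fails on accounts with MFA enforced. The same library also supports app-only authentication through an app registration with SharePoint permissions; a hedged sketch, where the client ID and secret are placeholders for a registration you set up yourself:

    from office365.runtime.auth.client_credential import ClientCredential
    from office365.sharepoint.client_context import ClientContext

    # App-only authentication; "client_id" and "client_secret" are
    # placeholders for your own app registration
    ctx = ClientContext(site_url).with_credentials(
        ClientCredential("client_id", "client_secret")
    )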


Option 2: Use Power Automate as a bridge

If you have Power Automate available, you can:

  • Set up a flow that copies files from SharePoint to OneLake or Azure Blob Storage.

  • Then connect Fabric to that location (which Fabric can read natively).

This is good for scheduled or triggered updates (e.g., when a file changes in SharePoint).
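
Once the flow has landed the file, reading it from a Fabric notebook is straightforward. A minimal sketch, assuming the flow drops a CSV into the attached Lakehouse’s Files area (the landing folder and file name are hypothetical):

    import pandas as pd

    # Read the file that Power Automate copied into the Lakehouse
    # ("landing/yourfile.csv" is a hypothetical path)
    df = pd.read_csv("/lakehouse/default/Files/landing/yourfile.csv")
    display(df)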


Option 3: Use OneLake File Explorer (manual but quick fix)

If your data isn’t changing too frequently, download the SharePoint file and upload it manually to your Lakehouse or OneLake.
You can then use Shortcuts in Fabric to reference that file as a dataset. Not ideal for automation, but good for one-time or proof-of-concept runs.
