Overview
Azure Databricks can now access OneLake tables directly. This functionality, similar to the shortcut feature in Microsoft Fabric, lets users read Lakehouse or Warehouse data within Azure Databricks without data duplication.
Prerequisites
- Azure Databricks Workspace must be enabled with Unity Catalog.
- Azure Databricks compute should use Databricks Runtime 18.0 or higher, with Standard access mode.
- Users must have CREATE CONNECTION and CREATE STORAGE CREDENTIAL privileges on the Unity Catalog metastore associated with the workspace.
- Users must have the CREATE CATALOG permission on the metastore and must either own the connection or hold the CREATE FOREIGN CATALOG privilege on it.
- An Azure Managed Identity or an Azure Service Principal is required.
This article explains how to enable OneLake federation and read OneLake data from Azure Databricks, step by step.
Step 1: Enable OneLake Catalog Federation
- Open Azure Databricks, click the user icon at the top right corner, and select Previews.

- In the search box, type Microsoft OneLake Federation and enable the feature.

Step 2: Set Up Authentication
Authentication can be established using either an Azure Managed Identity (MI) or an Azure Service Principal (SPN). This guide details the Service Principal setup.
- Navigate to Microsoft Entra ID > App registrations in the Azure Portal and register a new application or use an existing one.
- Note the Application (client) ID and Directory (tenant) ID.
- Under Certificates & secrets, create a new client secret and record the secret value.
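Before wiring the SPN into Databricks, you can sanity-check the client ID and secret by requesting a Microsoft Entra ID token with the client-credentials flow. This is an optional sketch; the tenant, client, and secret values are placeholders you must replace with your own.

```python
import json
import urllib.parse
import urllib.request

TENANT_ID = "<directory-tenant-id>"      # from the app registration overview
CLIENT_ID = "<application-client-id>"
CLIENT_SECRET = "<client-secret-value>"  # from Certificates & secrets

def build_token_request(tenant_id: str, client_id: str, client_secret: str) -> urllib.request.Request:
    """Build a client-credentials token request against Microsoft Entra ID."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # OneLake is backed by Azure Storage, so request the storage scope.
        "scope": "https://storage.azure.com/.default",
    }).encode()
    return urllib.request.Request(url, data=body, method="POST")

# To actually request a token (requires network access and a real SPN):
# with urllib.request.urlopen(build_token_request(TENANT_ID, CLIENT_ID, CLIENT_SECRET)) as resp:
#     token = json.loads(resp.read())["access_token"]
```

If the request returns an access token, the registration and secret are valid.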
Step 3: Create a Fabric Workspace and Add Permissions
- Create a new Fabric workspace (or use an existing one) and add the Service Principal created above as a Member or Contributor, granting the least privilege that works.
Step 4: Create Storage Credential in Unity Catalog
- Create a storage credential in Unity Catalog with the identity created previously.
- For a Service Principal, the UI cannot be used to create the credential; it must be created through the Databricks API instead. Note that Azure Databricks account admin privileges are required to create the credential.

Alternatively, you can create the credential with Python. Make sure you have generated an access token and pass it to the function.
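A minimal Python sketch of this approach is shown below, using the Unity Catalog storage-credentials REST endpoint. The workspace URL, credential name, PAT, and SPN values are placeholders; the token must belong to a user with the admin privileges noted above.

```python
import json
import urllib.request

def credential_payload(name: str, tenant_id: str, client_id: str, client_secret: str) -> dict:
    """JSON body for the Unity Catalog storage-credentials API (Azure SPN)."""
    return {
        "name": name,
        "azure_service_principal": {
            "directory_id": tenant_id,
            "application_id": client_id,
            "client_secret": client_secret,
        },
        "comment": "Credential for OneLake federation",
    }

def create_storage_credential(host: str, token: str, payload: dict) -> dict:
    """POST the credential to the workspace; `token` is the access token."""
    req = urllib.request.Request(
        f"{host}/api/2.1/unity-catalog/storage-credentials",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (replace the placeholders with your workspace URL, token, and SPN values):
# create_storage_credential(
#     "https://adb-1234567890123456.7.azuredatabricks.net",
#     "<access-token>",
#     credential_payload("onelake_cred", "<tenant-id>", "<app-id>", "<secret>"),
# )
```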

Once the credential is created successfully, visit the Azure Databricks credentials page to confirm that it is visible.
Step 5: Create Connection in Azure Databricks
With the credential in place, the next task is to establish the connection.
- In Azure Databricks, navigate to the connections page and click on Create Connection.

- On the Connection page, select OneLake as the connection type (enabled in Step 1).
- In the Connection details tab, choose the credential which was created earlier.
- Enter the Workspace ID, click Create, and exit the modal by clicking Cancel.
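The same connection can also be scripted against the Unity Catalog connections REST endpoint. This is only a sketch: the `connection_type` value and the option keys below are assumptions inferred from the UI fields of this preview, so verify them against the Microsoft documentation linked at the end of this article.

```python
def onelake_connection_payload(name: str, workspace_id: str, credential_name: str) -> dict:
    """JSON body for POST /api/2.1/unity-catalog/connections.

    NOTE: "ONELAKE" and the option keys are assumptions based on the
    fields shown in the preview UI; confirm against the official docs.
    """
    return {
        "name": name,
        "connection_type": "ONELAKE",       # assumption
        "options": {
            "workspaceId": workspace_id,    # Fabric workspace ID (assumption)
            "credential": credential_name,  # storage credential from Step 4 (assumption)
        },
    }

# The payload would be POSTed the same way as the storage credential,
# to <workspace-url>/api/2.1/unity-catalog/connections with a bearer token.
```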
Step 6: Create Catalog with Type Foreign
- Create a catalog, select the type as Foreign, and choose the OneLake connection made in the previous steps. Provide the Lakehouse or Warehouse ID (available in the Fabric workspace URL).
- Test the connection. On success, navigate to your catalog to view the tables available in the Lakehouse. If validation errors arise, confirm that your Databricks cluster runs Databricks Runtime 18.0 or higher.
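The catalog step can also be expressed in SQL with CREATE FOREIGN CATALOG and run from a notebook. In this sketch the catalog and connection names are placeholders, and the OPTIONS key used for the Lakehouse/Warehouse ID is an assumption to verify against the official docs.

```python
def foreign_catalog_ddl(catalog: str, connection: str, item_id: str) -> str:
    """Build a CREATE FOREIGN CATALOG statement for the OneLake connection.

    NOTE: the `database` option key is an assumption; the preview UI asks
    for the Lakehouse or Warehouse ID, so confirm the exact key name.
    """
    return (
        f"CREATE FOREIGN CATALOG IF NOT EXISTS {catalog} "
        f"USING CONNECTION {connection} "
        f"OPTIONS (database '{item_id}')"
    )

# In a Databricks notebook (DBR 18.0+, Standard access mode):
# spark.sql(foreign_catalog_ddl("onelake_cat", "onelake_conn", "<lakehouse-id>"))
# spark.sql("SELECT * FROM onelake_cat.<schema>.<table> LIMIT 10").show()
```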
Viewing Data and Lineage
The data loaded from OneLake will appear in Azure Databricks.

When this data is used in Databricks notebooks or SQL, downstream items are visible in the lineage tab, giving clear visibility of data flow within Databricks.
Sample lineage in Azure Databricks: the synced data is used in a notebook and a table.

Graph view:

Key Takeaways
- Data can be queried in Databricks notebooks or SQL.
- Writing data back to OneLake is not supported.
- This integration works only for tables, not for views or stored procedures.
- Lineage for tables used further in Databricks notebooks or SQL is visible in Unity Catalog.
- Both the Databricks cluster and Fabric capacity must be running to query the data.
Reference link
https://learn.microsoft.com/en-us/azure/databricks/query-federation/onelake