SørenBrandt
Frequent Visitor

How to deploy Mirrored Azure Databricks Catalog to another workspace

Dear all,

In a development workspace, I have a Mirrored Azure Databricks Catalog. I would like to roll out the contents of the workspace to many customer-specific workspaces, each of which needs to use a customer-specific Azure Databricks catalog as its source. I would prefer the deployment process to be automated.

Fabric deployment pipelines do not seem to cut it, as they do not support switching the data source connection of a Mirrored Azure Databricks Catalog. Please correct me if I'm wrong!

As an alternative to deployment pipelines, I have tried updating the contents of the Git repository directly. In the Git representation of my Mirrored Azure Databricks Catalog, the definition.json file looks like this:

[Screenshot SrenBrandt_0-1756286435991.png: contents of definition.json]

I tried updating the catalogName, databricksWorkspaceConnectionId, and storageConnectionId parameters and pulling the updated Git repo into a target workspace to see if that would do the trick.
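For readers without access to the screenshot, the fields I edited look roughly like this (the values are placeholders, and the actual file may contain additional properties):

```json
{
  "catalogName": "my_dev_catalog",
  "databricksWorkspaceConnectionId": "00000000-0000-0000-0000-000000000000",
  "storageConnectionId": "00000000-0000-0000-0000-000000000000"
}
```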

Unfortunately, it did not. I get this error message during the Git update process:

Request ID xxx

Workload Error Code Microsoft.ServicePlatform.Exceptions.ServicePlatformErrorCode

Workload Error Message CatalogName or DatabricksWorkspaceConnectionId mismatch.

Time Wed Aug 27 2025 xxx

Any ideas on how I may achieve what I want?

Best regards,

Søren

1 ACCEPTED SOLUTION
SørenBrandt
Frequent Visitor

Thanks to input from @Vinodh247 and Microsoft support, I quickly realized that what I was trying to achieve is not currently supported. According to MS Support, it is on the roadmap, but without an ETA.

Instead, I decided to downscale my ambitions: in each customer-specific workspace, I now manually create the Mirrored Azure Databricks Catalog, along with a Lakehouse and shortcuts pointing to the mirror. With that in place, I can deploy the remaining workspace items using deployment pipelines that update the data source for the semantic models. It works, and I am OK with this solution for now.

That said, I do look forward to being able to use Terraform to create the Azure Databricks Catalog and Storage connections required for the mirror, and be able to roll out changes to these connections as part of my deployment pipelines. The current solution is a bit too manual for my taste :).


3 REPLIES

Shahid12523
Memorable Member

You can't redeploy a Mirrored Azure Databricks Catalog by editing its JSON—Fabric validates the original connection and blocks mismatches. Instead, automate creation of new mirrored items per workspace using scripts or API calls, each with its own customer-specific connection. Deployment pipelines don’t support dynamic source switching for mirrored catalogs.

Shahed Shaikh
Vinodh247
Resolver V

This is an expected error. Deployment pipelines currently don't let you parameterize or override the catalogName, databricksWorkspaceConnectionId, or storageConnectionId when deploying a Mirrored Azure Databricks Catalog. The REST-based Update Mirrored Azure Databricks Catalog API supports modifying only a limited set of properties, such as autoSync, mirroringMode, and storageConnectionId, but not catalogName or databricksWorkspaceConnectionId. That is why attempting to change those fields triggers the "CatalogName or DatabricksWorkspaceConnectionId mismatch" error.
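As an illustration of that constraint, a small guard like this keeps your automation from attempting updates the API rejects. The property names come from the description above; this is an illustrative sketch, not an official client:

```python
# Properties the Update API reportedly supports; catalogName and
# databricksWorkspaceConnectionId are deliberately absent.
ALLOWED_UPDATE_PROPERTIES = {"autoSync", "mirroringMode", "storageConnectionId"}

def build_update_payload(**properties):
    """Build an update request body, rejecting properties the API cannot change."""
    unsupported = set(properties) - ALLOWED_UPDATE_PROPERTIES
    if unsupported:
        # catalogName / databricksWorkspaceConnectionId land here, mirroring
        # the "CatalogName or DatabricksWorkspaceConnectionId mismatch" error.
        raise ValueError(f"not updatable via the API: {sorted(unsupported)}")
    return {"properties": properties}
```

Failing fast in your own script gives a clearer error than the mismatch message surfaced by Git sync.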

 

Fixes you can try:

  1. Per-workspace setup via script/ARM template: automate the creation of mirrored catalogs in each target customer Fabric workspace by scripting against the APIs or using ARM templates. This gives you full control over the catalog name and connections per customer.
  • Use the REST APIs to create a new Mirrored Azure Databricks Catalog, specifying per-workspace details.
  • You can automate this via PowerShell, Azure CLI, or custom orchestration in your CI/CD pipeline.
  2. Template + post-creation patch: if you still want a somewhat template-driven model...
  • Deploy a base mirrored catalog to each workspace.
  • Immediately call the Update API to adjust the properties that are supported (autoSync, mirroringMode, or storageConnectionId).
  • For properties that you cannot update, you'll need to delete and re-create the mirrored catalog with the correct parameters.
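The scripted provisioning above could be sketched as follows. The item type name and the definition part layout are assumptions on my part; verify both against the Fabric REST API reference before using this:

```python
# Sketch of per-customer provisioning against the Fabric REST API
# (generic "create item with definition" shape; item type name assumed).
import base64
import json

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_create_item_request(workspace_id, display_name, catalog_name,
                              databricks_connection_id, storage_connection_id):
    """Build (url, body) for creating one customer's mirrored catalog item."""
    definition = {
        "catalogName": catalog_name,
        "databricksWorkspaceConnectionId": databricks_connection_id,
        "storageConnectionId": storage_connection_id,
    }
    body = {
        "displayName": display_name,
        "type": "MirroredAzureDatabricksCatalog",  # assumed item type name
        "definition": {
            "parts": [{
                "path": "definition.json",
                "payload": base64.b64encode(
                    json.dumps(definition).encode()).decode(),
                "payloadType": "InlineBase64",
            }]
        },
    }
    return f"{FABRIC_API}/workspaces/{workspace_id}/items", body

# Loop over your customers and POST each request with a bearer token, e.g.
# requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"}).
```

Because each customer's catalog name and connection IDs are plain function arguments, the same script drives all workspaces from one config file.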

To sum it up: you cannot override the catalog's connection details through Fabric Git integration or deployment pipelines as of today. The best option is programmatic provisioning of each mirrored catalog with the proper connection details per customer. I would recommend building that as a script or ARM-based automation to keep it clean and repeatable.

 

Please 'Kudos' and 'Accept as Solution' if this answered your query.
