AKemper
Frequent Visitor

Databricks Connection Fails Only in Fabric Pipeline

Hello all!

 

I am trying to create a Pipeline to get data from my Azure Databricks environment. I have an existing connection that I have used with Dataflows and Semantic Models before that works without any issue, but when I select the connection in a Copy Data activity in my pipeline I get the below error message.

 

Error Message:

ErrorCode=FailedToConnectToDatabricksWorkspace,Failed to connect to Databricks workspace. Error Cluster XXXXX does not exist. Cluster XXXXX does not exist Processed HTTP request failed.

 

It is important to note that the server hostname and HTTP path are for a serverless SQL Warehouse in Databricks, the warehouse is running, I am authenticating with a PAT, the connection shows as "Online" on the "Manage connections and gateways" screen, and the connection works for Dataflows and Semantic Models.

 

Thank you in advance for the help!

10 REPLIES
v-priyankata
Community Support

Hi @AKemper 

Thank you for reaching out to the Microsoft Fabric Forum Community.

@BeaBF Thank you so much for your inputs.

I hope the information provided by users was helpful. If you still have questions, please don't hesitate to reach out to the community.

 

BeaBF
Super User

@AKemper It's true.

1. JDBC URL

Enter the URL without UID/PWD:
jdbc:spark://<ServerHostname>:443/default;transportMode=http;ssl=1;httpPath=<HTTPPath>;AuthMech=3
(notice no UID/PWD)
 
2. Username field
Put literally the string:
token
 
3. Password field
Put your Databricks Personal Access Token here.
 
4. Authentication type
Choose Basic authentication (even though it’s token-based — this is how Databricks expects it).
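The URL construction in step 1 can be sketched as a small helper; this is a minimal sketch, and the hostname and HTTP path values below are placeholder examples, not values from this thread:

```python
# Sketch of the JDBC URL described above. <ServerHostname> and
# <HTTPPath> are placeholders for your SQL Warehouse values.
# UID/PWD are deliberately NOT embedded in the URL -- the literal
# string "token" goes in the username field and your PAT in the
# password field.

def build_jdbc_url(server_hostname: str, http_path: str) -> str:
    """Build a Databricks JDBC URL without embedded credentials."""
    return (
        f"jdbc:spark://{server_hostname}:443/default;"
        f"transportMode=http;ssl=1;"
        f"httpPath={http_path};AuthMech=3"
    )

# Placeholder values for illustration only:
url = build_jdbc_url("adb-12345.6.azuredatabricks.net",
                     "/sql/1.0/warehouses/abc123")
print(url)
```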
 
BBF

💡 Did I answer your question? Mark my post as a solution!

👍 Kudos are appreciated

🔥 Proud to be a Super User!

AKemper
Frequent Visitor

@BeaBF I am getting a different error now regarding ODBC drivers.

 

Error message:

An exception occurred: ODBC: ERROR [IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

 

I am seeing conflicting comments online about how to resolve this so any thoughts you have regarding this would be appreciated.

 

Thanks!

@AKemper do you still have the problem?

You’re no longer hitting Databricks-side auth issues; you’re now hitting a driver problem on the ADF/Synapse side.
Use the JDBC connector instead of ODBC

(Recommended, no driver installation needed)

  • In ADF Linked Services, choose JDBC (not ODBC).

  • Connection string:

     
    jdbc:spark://<ServerHostname>:443/default;transportMode=http;ssl=1;httpPath=<HTTPPath>;AuthMech=3
  • Username: token

  • Password: <your Databricks PAT>

BBF


AKemper
Frequent Visitor

Hi @BeaBF,

I don't have JDBC as a connection option, only ODBC.


@AKemper 

ODBC in ADF only works if:

  1. You install the Databricks ODBC driver, and

  2. You run the pipeline on a self-hosted IR where that driver is installed.

Here the steps:

 

  • Set up a self-hosted Integration Runtime (SHIR)

  • Install the Databricks ODBC driver on that VM

  • Create a System DSN (Windows ODBC Data Source Administrator)

    • Configure it with:

      • Host(s) = your Databricks SQL hostname (e.g. adb-12345.6.azuredatabricks.net)

      • Port = 443

      • HTTP Path = your SQL warehouse path (from Databricks)

      • Authentication = Token

      • Token = your PAT

  • Configure your Linked Service in ADF

    • Type = ODBC

    • Runtime = your self-hosted IR (not auto-resolve / default)

    • Connection string = reference the DSN you created, e.g.:

       
      Driver={Simba Spark ODBC Driver}; Host=<your-databricks-sql-hostname>; Port=443; HTTPPath=<your-sql-http-path>; AuthMech=3; UID=token; PWD=<your-pat>; SSL=1; ThriftTransport=2;

      (note: depending on driver version, some params may be optional)

  • Test connection

    • Should succeed now, because the driver is present on the SHIR.
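As a sanity check, the DSN-less connection string above can also be assembled programmatically. This is a minimal sketch assuming the Simba Spark ODBC Driver parameter names shown in the example; the host, path, and token values are placeholders:

```python
# Sketch: assemble the DSN-less ODBC connection string shown above.
# Parameter names follow the Simba Spark ODBC driver convention;
# depending on driver version, some may be optional.

def build_odbc_connstr(host: str, http_path: str, pat: str) -> str:
    """Assemble a Databricks ODBC connection string with token auth."""
    parts = {
        "Driver": "{Simba Spark ODBC Driver}",
        "Host": host,
        "Port": "443",
        "HTTPPath": http_path,
        "AuthMech": "3",       # username/password authentication
        "UID": "token",        # literal string "token"
        "PWD": pat,            # your Databricks PAT
        "SSL": "1",
        "ThriftTransport": "2",
    }
    return ";".join(f"{k}={v}" for k, v in parts.items()) + ";"

# Placeholder values for illustration only:
print(build_odbc_connstr("adb-12345.6.azuredatabricks.net",
                         "/sql/1.0/warehouses/abc123",
                         "<your-pat>"))
```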

If you cannot use JDBC and cannot install a self-hosted IR (so ODBC won't work, because the Microsoft-hosted IR has no Databricks ODBC driver), then unfortunately you've hit a real limitation in ADF/Synapse.

The best workaround is to use Dataflows instead of Copy Data

  • You already said your existing connection works fine in Dataflows.

  • If your main goal is just to move data, you could orchestrate with pipelines but perform the copy with Mapping Dataflows instead of Copy Data.

  • Dataflows do support Databricks SQL directly.

 

BBF



 

BeaBF
Super User

@AKemper  Hi!

Try to use the Azure Databricks (SQL) connector instead of the regular Databricks connector

  • In Copy Data, when selecting the source, choose:

    • Source type → Databricks (SQL endpoint)

    • Provide the server hostname, HTTP path, and PAT.

  • This treats it like a SQL source rather than a cluster-based notebook source.

  • This should work exactly like Dataflows do.

BBF


AKemper
Frequent Visitor

@BeaBF Thank you for the quick reply!

 

In Copy Data, when selecting a source, the only Databricks option I have is Azure Databricks, which is what is giving me the error.

 


 

Thank you!

@AKemper OK, in Copy Data the Databricks SQL endpoint is not exposed as a first-class source in some versions of ADF/Synapse.

 

Try to use a JDBC/ODBC connector to query the SQL endpoint

  • Copy Data activity can use generic JDBC instead of the Azure Databricks connector.

  • Example JDBC URL for a SQL endpoint:

jdbc:spark://<ServerHostname>:443/default;transportMode=http;ssl=1;httpPath=<HTTPPath>;AuthMech=3;UID=token;PWD=<PAT>
 
BBF

AKemper
Frequent Visitor

@BeaBF I am now getting the below error message regarding the UID param in the connection string.

 

An exception occurred: ODBC: The connection string is invalid. The connection property 'uid' can only be provided using credentials.
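This error says the connector only accepts UID/PWD through the connection's credential fields: they must be removed from the connection string itself, with the literal string "token" going in the username field and the PAT in the password field, as suggested earlier in the thread. A minimal sketch of stripping inline credentials from such a string (the sample values are placeholders):

```python
# Sketch: remove inline UID/PWD from a semicolon-delimited ODBC-style
# connection string, since the connector only accepts credentials
# through its dedicated username/password fields.

def strip_inline_credentials(connstr: str) -> str:
    """Drop UID and PWD key/value pairs from a connection string."""
    kept = [
        part for part in connstr.split(";")
        if part and part.split("=", 1)[0].strip().lower() not in ("uid", "pwd")
    ]
    return ";".join(kept) + ";"

# Placeholder values for illustration only:
raw = ("Driver={Simba Spark ODBC Driver};Host=myhost;Port=443;"
       "HTTPPath=/sql/1.0/warehouses/abc;AuthMech=3;"
       "UID=token;PWD=dapi123;SSL=1")
print(strip_inline_credentials(raw))
```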
