AshwiniN
New Member

Connectivity between Fabric Notebook and Teradata database

I want to connect to a Teradata database from a Fabric notebook. For that, we downloaded the JAR files from the official Teradata website, uploaded them to the lakehouse, and are referencing them in the Spark code.

But the notebook is not able to detect them at the given lakehouse location. Can anyone please suggest how to proceed?

 

Note: I don't want to use Dataflow Gen2, as I want to run a parameterized script.

 

Thanks in Advance!

2 ACCEPTED SOLUTIONS
v-saisrao-msft
Community Support

Hi @AshwiniN,

Thank you for reaching out to the Microsoft Fabric Forum Community.

 

The issue happens because uploading the Teradata JDBC driver to the lakehouse doesn't automatically make it available to Spark. You need to configure Spark explicitly so that it loads the driver.

One way to solve this is to create a custom Spark environment in Fabric. Go to Workspace settings, then Data Engineering/Science → Spark settings → Environment, and add the path to the JAR file (e.g., /Files/JDBC/teradata_jdbc.jar). Make sure to select this environment when running the notebook so the driver loads automatically.

Alternatively, if you want a quicker setup without changing environment settings, you can load the JAR from inside the notebook with a session-level configuration command run at the start of the session.

This approach works immediately, though you'll need to reapply it for each new notebook session. After that, make sure your connection details (server, username, password, and driver class com.teradata.jdbc.TeraDriver) are correct and test a basic query to confirm everything works.
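As a minimal sketch of that session-level command (the workspace and lakehouse names are placeholders, and the JAR is assumed to be at Files/JDBC/teradata_jdbc.jar in the lakehouse), the notebook's first cell, run before the Spark session starts, might look like:

```python
%%configure
{
    "conf": {
        "spark.jars": "abfss://<workspace_name>@onelake.dfs.fabric.microsoft.com/<lakehouse_name>.Lakehouse/Files/JDBC/teradata_jdbc.jar"
    }
}
```

Because %%configure applies when the session is created, it must run before any other cell triggers a Spark job.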

Library management in Fabric environments - Microsoft Fabric | Microsoft Learn 

Learn about library management in Fabric, including how to add public and custom libraries to your Fabric environments. 


If this post helps, then please give us 'Kudos' and consider accepting it as a solution to help other members find it more quickly.

 

Thank you. 


Hi @AshwiniN,

Thank you for the update. Since the previous steps didn't resolve the issue, you can add the Teradata JDBC driver to your Spark session directly from your Fabric notebook using the following method:


# Note: PySpark's SparkContext does not expose addJar(), so set the JAR
# in the notebook's first cell, before the Spark session starts:

%%configure
{ "conf": { "spark.jars": "abfss://<workspace_name>@onelake.dfs.fabric.microsoft.com/<lakehouse_name>.Lakehouse/Files/teradata_jdbc.jar" } }

# Then, in a later cell, confirm the JAR is registered on the session:
print("Loaded JARs:", spark.sparkContext.getConf().get("spark.jars", "<not set>"))

 

Once the driver is loaded, you can connect to Teradata using this sample code:


jdbc_url = "jdbc:teradata://<Your_Teradata_Server>/DATABASE=<DatabaseName>,CHARSET=UTF8"
properties = {
    "user": "<Your_Username>",
    "password": "<Your_Password>",
    "driver": "com.teradata.jdbc.TeraDriver"
}

# Teradata uses TOP rather than LIMIT for row sampling
query = "(SELECT TOP 10 * FROM <TableName>) AS sample_data"

df = spark.read.jdbc(url=jdbc_url, table=query, properties=properties)
df.show()
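Since the original question mentions needing a parameterized script, one way to parameterize the query is to build the subquery string from variables. A minimal sketch (the helper name and the table/row-count values here are hypothetical, not from the reply above):

```python
# Hypothetical helper: build the JDBC subquery string from parameters so the
# same notebook can be driven with different tables and sample sizes.
def build_sample_query(table: str, n_rows: int = 10) -> str:
    # Teradata uses TOP rather than LIMIT; the query is wrapped as a derived
    # table with an alias because spark.read.jdbc expects a table expression.
    return f"(SELECT TOP {int(n_rows)} * FROM {table}) AS sample_data"

print(build_sample_query("sales_db.orders", 5))
```

The resulting string can then be passed as the `table` argument to `spark.read.jdbc` exactly as in the sample above.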

 

If this post helps, then please give us 'Kudos' and consider accepting it as a solution to help other members find it more quickly.

 

Thank you.

 


8 REPLIES
AshwiniN
New Member

Thanks for your reply!

 

Can you please provide sample PySpark connection code for Teradata? I have tried the suggested changes, but it is still not working.

Hi @AshwiniN,
I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions. If my response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.
Thank you.

Hi, I am still not able to connect.

Also, I want to ask a few things:

print("Loaded JARs:", spark.sparkContext.getConf().get("spark.jars")) - I am still getting None here

and "spark.sparkContext.addJar(jar_path)" didn't work in PySpark (it throws an error).

Can you please guide me on how to resolve this?

 


Hi @AshwiniN,

 

We haven’t heard back from you regarding your issue. If it has been resolved, please mark the helpful response as the solution and give a ‘Kudos’ to assist others. If you still need support, let us know.

 

Thank you.

Hi @AshwiniN,

May I ask if you have resolved this issue? If so, please mark the helpful reply and accept it as the solution. This will help other community members with similar problems find it faster.

Thank you.

AshwiniN
New Member

Thanks for your reply!

I have tried the above solution, but it is still not working. Can you please provide sample code or any documentation for reference?

