
Anonymous
Not applicable

Establishing Connection For Oracle EBS from Fabric

Hi team

 

I am working on establishing a connection from Oracle EBS to Fabric. I have set up an Oracle data gateway, and using this gateway I was able to connect to Oracle through Dataflow Gen2; however, when using a copy activity, not all tables are visible.

 

I want to establish the connection using Fabric notebooks instead. Can someone provide assistance or guidance on how to establish the connection and get data from Oracle EBS?

 

Thanks

1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @Anonymous,

I'd suggest using JDBC to connect to the data source: upload the driver file (e.g. ojdbc11.jar) to your environment, then reference it in the Spark config:

# Import necessary libraries
from pyspark.sql import SparkSession

# Create a Spark session and config driver
spark = SparkSession.builder \
    .appName("OracleTest") \
    .config("spark.driver.extraClassPath", "/path/to/your/uploaded/ojdbc11.jar") \
    .getOrCreate()

# JDBC connection URL and properties
jdbc_url = "jdbc:oracle:thin:@//host:1521/db"

connection_properties = {
    "user": "<your_username>",
    "password": "<your_password>",
    # oracle.jdbc.OracleDriver is the current driver class name;
    # the older oracle.jdbc.driver.OracleDriver is a deprecated alias
    "driver": "oracle.jdbc.OracleDriver"
}

# Wrap the query as a subquery alias so spark.read.jdbc accepts it
# as the `table` argument (a bare SELECT statement is not valid there)
query = "(SELECT * FROM your_table) t"

# Read data from the JDBC source
df = spark.read.jdbc(url=jdbc_url, table=query, properties=connection_properties)

# Show the data
df.show()
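One detail worth noting when pointing this at an EBS database: the instance may be registered with either a service name or a SID, and the Oracle thin driver uses a slightly different URL shape for each. Below is a small helper that builds either form; it is a sketch, and the host and service names used in the usage comments are placeholders, not real EBS values.

```python
def oracle_jdbc_url(host, port=1521, service_name=None, sid=None):
    """Build an Oracle thin-driver JDBC URL.

    Service-name form:  jdbc:oracle:thin:@//host:port/service_name
    SID form:           jdbc:oracle:thin:@host:port:sid
    """
    if service_name:
        return f"jdbc:oracle:thin:@//{host}:{port}/{service_name}"
    if sid:
        return f"jdbc:oracle:thin:@{host}:{port}:{sid}"
    raise ValueError("Provide either service_name or sid")


# Example (placeholder host/service values):
# oracle_jdbc_url("ebs-db.example.com", service_name="EBSPROD")
#   -> "jdbc:oracle:thin:@//ebs-db.example.com:1521/EBSPROD"
# oracle_jdbc_url("ebs-db.example.com", sid="PROD")
#   -> "jdbc:oracle:thin:@ebs-db.example.com:1521:PROD"
```

If you are unsure which form your EBS environment uses, the TNS entry or your DBA can confirm whether you have a service name or a SID.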

python - How to use JDBC source to write and read data in (Py)Spark? - Stack Overflow

Regards,

Xiaoxin Sheng


2 REPLIES 2
Someshn
Regular Visitor

Thank you for these insights.

