<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Data Extraction from Oracle in Data Engineering</title>
    <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5140137#M15606</link>
<description>&lt;P&gt;Hello Community,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have a design question. We are building an enterprise data warehouse using Fabric Warehouse, and I have to stage the data from our Oracle database (the source). Currently I have .sql files sitting in a OneLake Lakehouse that contain the extraction scripts for Oracle. I run a high-concurrency notebook to read each .sql file and assign its content to a variable, which is then passed into a ForEach Copy activity where the source is Oracle and the query is the variable from the notebook; the results are dumped into Lakehouse tables. This is all metadata-driven, and the config comes from a table.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My question: is this the best way to do this? What alternatives have people used?&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Challenges:&amp;nbsp;&lt;/P&gt;&lt;P&gt;1. Since HC notebooks can only have 5 connections at the same time, I can run only 5 pulls in parallel.&amp;nbsp;&lt;/P&gt;&lt;P&gt;2. I tried to use OPENROWSET from Fabric Warehouse, but it failed reading the content as a SINGLE_BLOB.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please advise.&amp;nbsp;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Fri, 27 Mar 2026 15:03:40 GMT</pubDate>
    <dc:creator>Ira_27</dc:creator>
    <dc:date>2026-03-27T15:03:40Z</dc:date>
    <item>
      <title>Data Extraction from Oracle</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5140137#M15606</link>
<description>&lt;P&gt;Hello Community,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have a design question. We are building an enterprise data warehouse using Fabric Warehouse, and I have to stage the data from our Oracle database (the source). Currently I have .sql files sitting in a OneLake Lakehouse that contain the extraction scripts for Oracle. I run a high-concurrency notebook to read each .sql file and assign its content to a variable, which is then passed into a ForEach Copy activity where the source is Oracle and the query is the variable from the notebook; the results are dumped into Lakehouse tables. This is all metadata-driven, and the config comes from a table.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My question: is this the best way to do this? What alternatives have people used?&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Challenges:&amp;nbsp;&lt;/P&gt;&lt;P&gt;1. Since HC notebooks can only have 5 connections at the same time, I can run only 5 pulls in parallel.&amp;nbsp;&lt;/P&gt;&lt;P&gt;2. I tried to use OPENROWSET from Fabric Warehouse, but it failed reading the content as a SINGLE_BLOB.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please advise.&amp;nbsp;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 27 Mar 2026 15:03:40 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5140137#M15606</guid>
      <dc:creator>Ira_27</dc:creator>
      <dc:date>2026-03-27T15:03:40Z</dc:date>
    </item>
    <item>
      <title>Re: Data Extraction from Oracle</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5140558#M15610</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/802637"&gt;@Ira_27&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;That’s an interesting approach you’re taking.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;We run a few Oracle databases ourselves, and we use Oracle Mirroring to bring the data into Fabric. In my opinion this works very well, and mirrored storage is free up to your capacity limit, so you don’t pay anything for storage.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;You can read more about it here:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/mirroring/overview" target="_blank"&gt;https://learn.microsoft.com/en-us/fabric/mirroring/overview&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Once you have the data in Fabric, you can easily process it using notebooks and populate your lakehouse.&lt;/P&gt;
&lt;P&gt;Here’s an overview of how we do it.&lt;/P&gt;
&lt;P&gt;1. Transfer Oracle data to Fabric using mirroring&lt;BR /&gt;2. Process the data further using notebooks (we use Bronze, Silver, and Gold)&lt;BR /&gt;3. Use the data in reports.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;I hope that helps&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Many greetz&lt;/P&gt;
&lt;P&gt;Rene&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 29 Mar 2026 11:46:22 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5140558#M15610</guid>
      <dc:creator>spaceman127</dc:creator>
      <dc:date>2026-03-29T11:46:22Z</dc:date>
    </item>
    <item>
      <title>Re: Data Extraction from Oracle</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5140560#M15611</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/802637"&gt;@Ira_27&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;DIV&gt;&lt;P&gt;You should &lt;STRONG&gt;default to Oracle Mirroring rather than a High Concurrency notebook approach&lt;/STRONG&gt; when you’re staging Oracle data for an enterprise data warehouse. Mirroring keeps your Oracle tables continuously replicated into OneLake using database logs, so you avoid notebook execution limits, Spark session contention, and the operational overhead that comes with orchestrating ingestion through HC notebooks.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Once the data is mirrored, &lt;STRONG&gt;Delta Change Data Feed (CDF)&lt;/STRONG&gt; gives you a clean and reliable way to pick up inserts, updates, and deletes and push those changes downstream. You can incrementally build your Bronze and Silver layers and keep your Fabric Warehouse in sync without relying on watermark queries, SQL files, or loop‑based copy logic.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Compared to using HC notebooks for extraction, this takes Spark out of the ingestion path entirely. You’re no longer constrained by shared session limits, capped parallelism, or capacity throttling issues that tend to show up as volumes and table counts increase.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;From a day‑to‑day operations perspective, this is also much easier to live with. You’re not managing notebooks that prepare SQL, tuning parallel copy settings, or handling retries and partial failures. 
Instead, you rely on a managed replication mechanism and a standard change feed that’s designed exactly for incremental processing.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It’s also a safer approach than query‑based or notebook‑driven Copy patterns, because changes are captured directly from Oracle redo logs rather than inferred from timestamps or custom logic. That reduces the risk of missed updates, race conditions, and messy reprocessing scenarios as your platform grows.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Overall, Oracle Mirroring combined with CDF gives you a much cleaner separation of concerns: ingestion runs continuously and independently, change tracking is consistent and reusable, and EDW modelling stays downstream where the business logic belongs. The result feels simpler, more robust, and far more future‑proof than using High Concurrency notebooks as an ingestion and orchestration engine.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/DIV&gt;</description>
      <pubDate>Sun, 29 Mar 2026 11:59:52 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5140560#M15611</guid>
      <dc:creator>deborshi_nag</dc:creator>
      <dc:date>2026-03-29T11:59:52Z</dc:date>
    </item>
    <item>
      <title>Re: Data Extraction from Oracle</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5140609#M15614</link>
<description>&lt;P&gt;Appreciate your responses; however, this is a third-party system that uses Oracle, and in order to use Mirroring they would need to enable GoldenGate, which is not an option at the moment, hence the approach we are taking.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Given that mirroring is not an option, I would love to get additional feedback on this approach.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 29 Mar 2026 16:41:34 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5140609#M15614</guid>
      <dc:creator>Ira_27</dc:creator>
      <dc:date>2026-03-29T16:41:34Z</dc:date>
    </item>
    <item>
      <title>Re: Data Extraction from Oracle</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5140619#M15615</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/802637"&gt;@Ira_27&lt;/a&gt;&amp;nbsp;,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;BR /&gt;All right.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;So that means you’re not able to configure anything on the Oracle server itself? &lt;BR /&gt;In that case, your only real options are the Copy Activity or the Copy Job.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;You can, of course, work with watermarks to ensure that only new or previously uncopied data is loaded. CDC would be an alternative, but that would require configuration either on the server or within the database.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Another advantage of using a Copy Activity or Copy Job is that it avoids the overhead associated with notebooks.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;I can imagine the following approach, though it would need to be tested in practice:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;SPAN&gt;Configuration table&lt;/SPAN&gt;&lt;SPAN&gt; (Lakehouse Tables or the Fabric Warehouse)&lt;BR /&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Pipeline&lt;/SPAN&gt;&lt;SPAN&gt; that reads the configuration table row by row&lt;BR /&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;ForEach activity&lt;/SPAN&gt;&lt;SPAN&gt; with dynamic parallelization&lt;BR /&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;Copy Activity&lt;/SPAN&gt;&lt;SPAN&gt; that loads data from Oracle → Lakehouse Tables&lt;BR /&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
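&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The loop above can be sketched in plain Python. This is only an illustration of the metadata-driven pattern, not Fabric pipeline code: the config rows, table names, and watermark value are hypothetical, and the thread pool stands in for the ForEach activity's dynamic parallelization.&lt;/P&gt;

```python
# Sketch of a metadata-driven extraction loop (hypothetical config values).
from concurrent.futures import ThreadPoolExecutor

# 1. Configuration table (would live in Lakehouse Tables or the Warehouse).
config_rows = [
    {"source_table": "SALES.ORDERS", "watermark_column": "LAST_UPDATED", "target_table": "stg_orders"},
    {"source_table": "SALES.CUSTOMERS", "watermark_column": "LAST_UPDATED", "target_table": "stg_customers"},
]

def build_extract_query(row, last_watermark):
    # Watermark predicate ensures only new or changed rows are pulled.
    return (
        f"SELECT * FROM {row['source_table']} "
        f"WHERE {row['watermark_column']} > TIMESTAMP '{last_watermark}'"
    )

def copy_table(row):
    # 4. A real Copy Activity would execute this query against Oracle;
    # here we only build and return it.
    query = build_extract_query(row, "2026-01-01 00:00:00")
    return row["target_table"], query

# 2. + 3. Read the config row by row and fan out; the thread pool
# approximates the ForEach activity's parallelization.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(copy_table, config_rows))

print(results["stg_orders"])
```

&lt;P&gt;In the real pipeline, the query string built per row would be passed to the Copy Activity's source; the pool here only demonstrates the fan-out.&lt;/P&gt;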
&lt;P&gt;&lt;SPAN&gt;Later, you can use Notebooks to further process the data.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Would that be an option for you?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Best regards&amp;nbsp;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;René&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 29 Mar 2026 17:35:42 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5140619#M15615</guid>
      <dc:creator>spaceman127</dc:creator>
      <dc:date>2026-03-29T17:35:42Z</dc:date>
    </item>
    <item>
      <title>Re: Data Extraction from Oracle</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5141513#M15644</link>
<description>&lt;P&gt;Thank you&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/644531"&gt;@spaceman127&lt;/a&gt;&amp;nbsp;for your response; that is exactly what I am doing right now. The only caveat is that since my Oracle source queries are stored in .sql files, I am using a notebook to read the content of each file and pass it as a variable to the Copy activity. I initially thought of saving the source queries as TEXT in a table, but that adds overhead when someone has to review or update the source queries. Do you have any other recommendation?&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 31 Mar 2026 01:38:06 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5141513#M15644</guid>
      <dc:creator>Ira_27</dc:creator>
      <dc:date>2026-03-31T01:38:06Z</dc:date>
    </item>
    <item>
      <title>Re: Data Extraction from Oracle</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5142303#M15663</link>
<description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/802637"&gt;@Ira_27&lt;/a&gt;,&lt;BR /&gt;&lt;SPAN&gt;Thanks for the clarification. I understand you are using the notebook only to read the .sql file and pass its content to the Copy activity.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class=""&gt;&lt;SPAN&gt;To avoid the notebook, one option you can try is storing the SQL query in a config table and reading it with a Lookup activity in the pipeline. That way you can pass it directly to the Copy activity and remove the HC notebook dependency.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class=""&gt;&lt;SPAN&gt;If maintaining queries in a table is difficult, another option is to keep the SQL files in OneLake and use pipeline activities such as Lookup/Get Metadata to read the file content instead of a notebook. This keeps the files separate and also avoids the 5-connection limit of HC notebooks.&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Please check if this works for you. Let us know if you need any help with this.&lt;BR /&gt;Regards,&lt;BR /&gt;Community Support Team.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 01 Apr 2026 04:12:29 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5142303#M15663</guid>
      <dc:creator>v-hjannapu</dc:creator>
      <dc:date>2026-04-01T04:12:29Z</dc:date>
    </item>
    <item>
      <title>Re: Data Extraction from Oracle</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5142325#M15664</link>
<description>&lt;P&gt;Thanks, I totally missed using Get Metadata; I guess I was so stuck on using a notebook that I overlooked our good old "Get Metadata" friend.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 01 Apr 2026 04:36:16 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Data-Extraction-from-Oracle/m-p/5142325#M15664</guid>
      <dc:creator>Ira_27</dc:creator>
      <dc:date>2026-04-01T04:36:16Z</dc:date>
    </item>
  </channel>
</rss>

