What is the best approach in Fabric if I need to read 100 million records from an Oracle database using an on-premises data gateway?
Do I have other options besides incremental loads using pipelines/Dataflow Gen2?
Thanks in advance.
Here's another idea - create Parquet files from Oracle and then directly use them in the lakehouse.
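If you go that route, here is a minimal sketch of a chunked Oracle-to-Parquet export, assuming the python-oracledb and pyarrow packages are available; the connection details, the table name MY_SCHEMA.BIG_TABLE, and the chunk size are placeholders, not specifics from this thread. The resulting files can be uploaded to the lakehouse Files area (for example with OneLake file explorer or AzCopy) and then loaded into a Delta table.

```python
# Minimal sketch: export an Oracle table to Parquet in chunks.
# Assumptions: python-oracledb and pyarrow are installed; connection
# details, schema/table name and chunk size are placeholders.
import oracledb
import pyarrow as pa
import pyarrow.parquet as pq

CHUNK_ROWS = 500_000  # tune to the memory available on the export machine

conn = oracledb.connect(user="user", password="password", dsn="host:1521/service")
cursor = conn.cursor()
cursor.arraysize = 50_000  # larger fetch batches reduce round trips
cursor.execute("SELECT * FROM MY_SCHEMA.BIG_TABLE")
columns = [d[0] for d in cursor.description]

part = 0
while True:
    rows = cursor.fetchmany(CHUNK_ROWS)
    if not rows:
        break
    # Turn the row chunk into a columnar table and write one Parquet file per chunk.
    table = pa.table({col: [row[i] for row in rows] for i, col in enumerate(columns)})
    pq.write_table(table, f"big_table_part_{part:05d}.parquet", compression="snappy")
    part += 1

cursor.close()
conn.close()
```

Parquet keeps the column types and compresses well, so the transfer and the subsequent load into the lakehouse are usually much faster than moving the same data as CSV.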
Thanks for your reply.
The query is a plain SELECT *. It takes hours and hours to load, so I stopped it. Every day/week, more data is added to the table.
How many columns does * return?
Is the data slowly changing or can you use Incremental Refresh?
Is it any faster when you export to CSV (for example)?
The table has about 50 columns. One week of data already takes an hour to load.
Incremental loading per week would be possible. There are no smart Fabric features I can leverage for this, right? I was looking at mirroring etc.
If I want to explore incremental loading and store the data in an ingestion layer (first layer), I have two questions:
* Should I use a pipeline for that, or is it better to use Dataflow Gen2?
* What is the best way to import the previous data from a CSV file into the lakehouse, including adding the field mapping? (See the sketch below.)
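On the second bullet, one option besides a pipeline or Dataflow Gen2 is a Fabric notebook. Here is a minimal PySpark sketch that loads a historical CSV export into a lakehouse Delta table with an explicit schema instead of relying on type inference; the file path, table name, and column names are placeholders for illustration, not your actual schema.

```python
# Minimal sketch (Fabric notebook, PySpark): load a historical CSV export
# into a lakehouse Delta table with an explicit schema. Path, table name
# and columns are placeholders; `spark` is the SparkSession that a Fabric
# notebook provides automatically.
from pyspark.sql.types import StructType, StructField, StringType, TimestampType, DecimalType

schema = StructType([
    StructField("ORDER_ID", StringType(), nullable=False),
    StructField("ORDER_DATE", TimestampType(), nullable=True),
    StructField("AMOUNT", DecimalType(18, 2), nullable=True),
    # ... add the remaining columns of the ~50-column table here
])

df = (
    spark.read
         .option("header", "true")
         .schema(schema)
         .csv("Files/ingestion/big_table_history.csv")
)

# Field mapping: rename CSV headers that differ from the target table's column names.
df = df.withColumnRenamed("ORDER_DATE", "order_date")

# Append into the bronze/ingestion table; weekly increments can be appended the same way.
df.write.mode("append").format("delta").saveAsTable("bronze_big_table")
```

The same notebook pattern works for the weekly incremental slices: filter the Oracle extract on a date watermark, land it as CSV or Parquet in Files, and append it to the same Delta table.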
Please provide more details. Is this a one-time load or do you plan to refresh the data? How fast is the Oracle source? How complex is the query?