Hi Everyone,
I am working on a Fabric project in which I'm looking to implement the SAP CDC functionality that is currently available in ADF/Synapse.
Is there any workaround available in Fabric?
Hi @MadhurK,
Thank you for the update. I'm glad the explanation clarified things and that the root cause is now understood.
Your plan to use partitioned extraction and watermark-based incremental loading is the recommended method for handling large SAP tables with the SAP Table connector in Fabric pipelines. This approach helps avoid RFC memory issues and improves pipeline reliability.
Some additional best practices to consider:
- Use range-based batching (e.g., primary key intervals) if a suitable date column isn't available.
- Set pagination or smaller row counts per request in the copy activity to minimize SAP memory usage.
- Select only the necessary columns to reduce the dataset size.
For very large or near real-time needs, consider SAP CDC via ADF/Synapse, SAP SLT, or external replication tools for more scalable data landing in Fabric.
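To illustrate the range-based batching idea above, here is a minimal Python sketch (the function name and batch sizes are illustrative, not part of any Fabric API) that splits a primary-key range into fixed-size intervals, each of which could drive one filtered copy-activity run:

```python
def key_range_batches(min_key: int, max_key: int, batch_size: int):
    """Yield inclusive (low, high) key intervals covering [min_key, max_key]."""
    low = min_key
    while low <= max_key:
        high = min(low + batch_size - 1, max_key)
        yield (low, high)
        low = high + 1

# Example: split keys 1..250000 into 100k-row batches; each (low, high) pair
# can be turned into an RFC filter such as "KEYFIELD BETWEEN low AND high".
batches = list(key_range_batches(1, 250_000, 100_000))
```

Driving the copy activity from such precomputed intervals keeps every individual RFC call small enough to stay within SAP's internal-table memory limits.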
Thank you.
Hi @rakeshsasvade,
As we did not get a response, could you confirm whether the above reply clarified your issue, or let us know if we can help with anything else?
Your understanding and patience are appreciated.
Hi @rakeshsasvade, @MadhurK,
Since we haven't heard back from you yet, I'd like to confirm whether you've resolved this issue or still need help.
If you have any further questions or need more support, please feel free to let us know.
We are more than happy to continue helping you.
Hi @v-sgandrathi,
Thank you for the detailed explanation and the actionable suggestions.
Your explanation aligns with what I’ve observed so far.
The root cause appears to be SAP's RFC_READ_TABLE2 function trying to load a large result set into SAP’s internal memory, leading to the reported error.
I will look into partitioning the data extraction using an appropriate date or key column to limit each request's size. Incremental loading using watermark columns should be feasible for tables with suitable fields.
Thanks again for your help and prompt response.
Hi @MadhurK,
Thank you for providing the detailed error message. The error “No more memory available to add rows to an internal table” occurs on the SAP source side when the RFC function /SAPDS/RFC_READ_TABLE2 tries to read a large amount of data into an internal table. This happens because the connector loads data into SAP memory before sending it to Fabric, and large datasets can cause SAP to run out of memory. Since smaller tables load without issues, the problem is due to the size of the result set, not the Fabric pipeline configuration.
To address this, avoid extracting the entire table in one request. Instead, use partitioned or filtered loads by leveraging a date column, key ranges, or batching logic to retrieve data in smaller segments. If the table has fields like ERDAT, AEDAT, timestamp, or another change tracking column, configuring incremental (watermark) loading is best, as it limits each run to new or updated records. Also, reducing the payload by selecting only necessary columns and excluding large text or blob fields can help reduce memory usage.
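As a concrete illustration of the watermark approach described above, a pipeline could compute the source filter from the last successfully loaded change date. A minimal sketch, assuming the table has a change-date column such as AEDAT (both the column name and the helper are hypothetical; SAP stores dates as YYYYMMDD strings):

```python
from datetime import date

def sap_watermark_filter(last_loaded: date, column: str = "AEDAT") -> str:
    """Build an ABAP-style row filter selecting only rows changed after the watermark.
    SAP date fields are YYYYMMDD character strings, so format accordingly."""
    return f"{column} GT '{last_loaded.strftime('%Y%m%d')}'"

# Example: only pull records changed after 2024-06-30
print(sap_watermark_filter(date(2024, 6, 30)))  # AEDAT GT '20240630'
```

Each pipeline run would then persist the highest change date it saw and pass it into the next run's filter, so every extraction touches only new or updated rows.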
Since the error is from SAP, your SAP team could also consider source-side optimizations, such as adjusting RFC memory settings, enabling pagination with a custom RFC, or using alternative extraction methods instead of /SAPDS/RFC_READ_TABLE2. For very large tables, it's best to use scalable replication solutions like SAP CDC via Azure Data Factory or Synapse, SAP SLT, or third-party replication tools to avoid large full-table reads over RFC.
In summary, this is expected when large SAP tables are accessed in a single RFC call, and partitioned or incremental extraction is recommended to prevent SAP memory issues.
Hi Everyone,
I am trying to ingest a large SAP table (approx. 2.5 GB of data) using the SAP Table Application Server connection in a Fabric pipeline via the Copy activity. For smaller tables, the ingestion works successfully. However, when handling tables with large data volumes, the process fails with the following error:
```
Failure happened on 'Source' side. ErrorCode=SapRfcClientOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=SapRfcClient operation '/SAPDS/RFC_READ_TABLE2' failed. Please contact SAP support if you need further help. Error message: 'No more memory available to add rows to an internal table.',Source=Microsoft.DataTransfer.Runtime.SapRfcHelper,''Type=SAP.Middleware.Connector.RfcAbapRuntimeException,Message=No more memory available to add rows to an internal table.,Source=sapnco,'
```
From the message, it appears the issue occurs when the SAP RFC `/SAPDS/RFC_READ_TABLE2` tries to load large result sets, and SAP cannot allocate enough memory for the internal table.
Ingestion of smaller data sets works fine; this only happens on large tables.
Has anyone encountered and resolved a similar error?
Is there a recommended way to mitigate or work around this memory issue when copying large SAP tables using Fabric’s copy activity?
Any guidance or reference to relevant documentation would be greatly appreciated!
Thank you
Hi @rakeshsasvade,
At present, SAP CDC (Change Data Capture) capabilities available in Azure Data Factory (ADF) and Synapse pipelines are not natively supported in Microsoft Fabric Data Factory with equivalent built-in connectors and CDC options for SAP sources such as SAP ECC and S/4HANA (using ODP/CDC). As Fabric continues to develop, enterprise connectors like SAP CDC remain on the roadmap or require alternative solutions.
A widely adopted workaround is to leverage ADF or Synapse pipelines for SAP CDC extraction and load incremental data into a staging layer, such as OneLake, Lakehouse, or Data Warehouse in Fabric. In this setup, ADF manages the CDC logic and Fabric processes the incremental data for downstream use. This architecture is currently recommended for enterprise implementations.
Alternatively, you can implement custom incremental loading within Fabric pipelines or notebooks using techniques such as watermark columns, delta tables, or merge logic. This allows you to simulate CDC behavior, though it requires manual design and governance.
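To make the merge idea above concrete, here is a minimal pure-Python sketch of upserting an incremental batch into a keyed target (this is an illustration only, not the Fabric or Delta Lake API; in a Fabric notebook this would typically be a Delta table MERGE):

```python
def upsert(target: dict, increment: list, key: str = "id") -> dict:
    """Apply an incremental batch to a keyed target:
    update rows whose key already exists, insert rows whose key is new."""
    for row in increment:
        target[row[key]] = row  # last write wins, mimicking MERGE matched/not-matched
    return target

# Example: target holds one row; the batch updates it and adds a new one
target = {1: {"id": 1, "qty": 5}}
batch = [{"id": 1, "qty": 7}, {"id": 2, "qty": 3}]
upsert(target, batch)
```

Combined with a watermark column to select only changed rows, this pattern approximates CDC semantics at the cost of manual design and governance.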
Some organizations also utilize third-party SAP replication tools (e.g., SAP SLT, Qlik Replicate, Fivetran) to deliver CDC data into Fabric OneLake, which Fabric can efficiently process using Delta tables. This method is commonly used for near-real-time requirements.
While Fabric does not yet offer a direct SAP CDC connector comparable to ADF or Synapse, you can achieve similar results through hybrid architectures, custom incremental logic, or external replication tools. These approaches are considered best practice until native support becomes available.
Thank you.