Hi,
I am investigating the option to create a Fabric shortcut to an AWS S3 Standard bucket.
The bucket contains about 200 GB of data in Parquet files that are refreshed daily.
In Fabric, this data would need to be accessed and refreshed daily to update semantic models.
I understand that AWS egress fees/Data Transfer Costs apply when data is transferred out of AWS, and could be significant.
I am trying to determine what the AWS Data Transfer costs would be to read 200 GB of data from that S3 bucket, via a shortcut, on a daily basis.
Does the shortcut technology allow for a significant reduction of these costs? If so, how?
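For a rough sense of scale, here is the back-of-the-envelope calculation I am working from; the $0.09/GB rate is an assumption based on AWS's published first-tier internet egress pricing and will vary by region and destination.

```python
# Rough estimate of S3 egress ("DataTransfer-Out-Bytes") cost if the full
# dataset is pulled out of AWS once per day.
daily_read_gb = 200          # full dataset read per refresh
egress_rate_per_gb = 0.09    # assumed USD per GB (first-tier internet egress)

daily_cost = daily_read_gb * egress_rate_per_gb
monthly_cost = daily_cost * 30

print(f"Daily egress:   ~${daily_cost:,.2f}")    # ~$18.00
print(f"Monthly egress: ~${monthly_cost:,.2f}")  # ~$540.00
```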
We would like to confirm whether our community member's answer resolves your query or if you need further help. If you still have any questions or need more support, please feel free to let us know. We are happy to help you.
Thank you for your patience; we look forward to hearing from you.
Best Regards,
Prashanth Are
MS Fabric community support
@Rufyda, thanks for your prompt response.
We would like to confirm whether our community member's answer resolves your query or if you need further help. If you still have any questions or need more support, please feel free to let us know. We are happy to help you.
Thank you for your patience; we look forward to hearing from you.
Best Regards,
Prashanth Are
MS Fabric community support
Hi @MelisandeRicou,
A shortcut does not copy data; it streams it from S3 on demand.
On-demand reads trigger AWS S3 egress (billed as “DataTransfer-Out-Bytes”).
Caching in Fabric can reduce repeated egress.
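As a minimal sketch (assuming the shortcut appears as "s3_data" under the Files section of an attached lakehouse; the column names are placeholders), reading it from a Fabric notebook looks like reading any other OneLake path, and bytes that are not served from Fabric's cache are fetched from S3 and billed as egress:

```python
# PySpark in a Fabric notebook ("spark" is the notebook's built-in session).
# "s3_data" is a placeholder shortcut name; the shortcut is only a pointer,
# so this read pulls the Parquet files from S3 at query time.
df = spark.read.parquet("Files/s3_data")

# Projecting and filtering early means only the columns and row groups that
# are actually scanned are transferred, rather than the full 200 GB.
daily_slice = df.select("order_id", "amount", "load_date") \
                .where("load_date = current_date()")
daily_slice.show(10)
```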
Thank you for your patience; we look forward to hearing from you.
Best Regards,
Prashanth Are
MS Fabric community support
Copy the data once to OneLake / Azure.
After that, use incremental loads to transfer only the daily changes instead of the full 200 GB (see the sketch after these steps).
Partition your data so Fabric reads only the required portion.
For permanent, heavy workloads, move the dataset entirely to Azure to avoid ongoing egress costs.
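As a sketch of that incremental pattern in a Fabric notebook (the "s3_data" shortcut path, the "sales_daily" table name, and the "load_date" partition column are assumptions about your layout), a daily job could pull only the newest partition through the shortcut and land it in a OneLake Delta table:

```python
from datetime import date

# Placeholder names: "s3_data" is the S3 shortcut under Files, "sales_daily"
# is a Delta table in the lakehouse, "load_date" is the partition column.
today = date.today().isoformat()

# Read only today's partition through the shortcut, so only that slice is
# transferred out of S3 instead of the full 200 GB.
incremental = spark.read.parquet(f"Files/s3_data/load_date={today}")

# Land it in OneLake; semantic models then refresh from this local copy
# with no further S3 egress.
incremental.write.mode("append").saveAsTable("sales_daily")
```

Once the data lives in OneLake, the daily semantic model refresh reads the local copy, so the recurring egress is limited to that incremental slice.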
If this helps, consider giving some Kudos. If I answered your question, please mark this as the solution.