I understand that Fabric supports shortcuts for different storage services such as ADLS Gen2, Amazon S3, Google Cloud Storage (GCS), and Dataverse.
What about caching? Is caching supported for all of those storage types (ADLS Gen2, Amazon S3, GCS, and Dataverse) as well?
Microsoft Fabric natively integrates with Azure Data Lake Storage (ADLS) through OneLake, which is built on top of ADLS Gen2, and it does use caching to optimize performance. Specifically, Fabric employs an "intelligent cache" that automatically caches data from both OneLake and ADLS Gen2 storage when shortcuts are used, to speed up Spark jobs. This cache is managed transparently and consistently, with automatic invalidation when the underlying data changes.
So is the shortcut cache the same as the "intelligent cache" mentioned above?
Hi @tan_thiamhuat ,
That is an interesting question!
Shortcuts in OneLake allow you to quickly and easily source data from external cloud providers and use it across all Fabric workloads, such as Power BI reports, SQL, Spark, and Kusto. However, each time these workloads read data from cross-cloud sources, the source provider (AWS, GCP) charges egress fees on that data. Thankfully, shortcut caching allows the data to be sourced only once and then reused across all Fabric workloads without additional egress fees.
With the general availability of the cross-cloud shortcut cache, new capabilities have been added as well. You can now define the retention period for your shortcut cache: previously, data was cached for only 24 hours, but with these updates you can select a retention period of 1 to 28 days. This greatly improves the effectiveness and cost savings of the cache for sources that are not accessed every day.
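To make the retention behavior concrete, here is a toy Python model of it. This is only an illustration, not Fabric's actual implementation: the class and method names (`ShortcutCacheModel`, `fetch_from_source`) are invented for the sketch, and the real cache is managed transparently by OneLake.

```python
import time


class ShortcutCacheModel:
    """Toy model of the cross-cloud shortcut cache described above.

    Illustrative only: a read within the retention window is served
    from the cache and incurs no egress; once the window expires, the
    next read goes back to the remote source.
    """

    def __init__(self, retention_days, clock=time.time):
        # GA range noted above: retention is selectable from 1 to 28 days.
        assert 1 <= retention_days <= 28
        self.retention_secs = retention_days * 86400
        self.clock = clock
        self._store = {}       # path -> (data, cached_at)
        self.egress_reads = 0  # reads that had to hit the remote source

    def fetch_from_source(self, path):
        # Stand-in for a cross-cloud read that would incur egress fees.
        self.egress_reads += 1
        return f"data:{path}"

    def read(self, path):
        entry = self._store.get(path)
        now = self.clock()
        if entry is not None and now - entry[1] < self.retention_secs:
            return entry[0]                # served from cache, no egress
        data = self.fetch_from_source(path)
        self._store[path] = (data, now)    # cache for the retention window
        return data
```

With a 7-day retention, repeated reads of the same path within the window incur a single egress read; after the window expires, the next read hits the source again.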
The intelligent cache, on the other hand, optimizes Spark job performance by caching data at the Spark node level. It automatically detects changes to the underlying files and refreshes them in the cache, so you always read the most recent data. When the cache reaches its size limit, it automatically releases the least read data to make space for more recent data. This feature lowers the total cost of ownership by improving performance by up to 60% on subsequent reads of files held in the cache.
This feature benefits you if:
Your workload requires reading the same file multiple times and the file size fits in the cache.
Your workload uses Delta Lake tables, Parquet, or CSV file formats.
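The two behaviors described above, automatic refresh when a file changes and eviction of the least read data when the cache is full, can be sketched with a toy Python model. Again, this is an assumption-laden illustration, not Fabric's implementation: the names (`IntelligentCacheModel`, `file_version`) are invented, and the version-based change check simply stands in for whatever mechanism the real cache uses to detect stale files.

```python
from collections import OrderedDict


class IntelligentCacheModel:
    """Toy node-local cache: re-reads a file when its version changes,
    and evicts the least recently read entry when the cache is full."""

    def __init__(self, capacity, read_file, file_version):
        self.capacity = capacity
        self.read_file = read_file        # slow path: fetch file contents
        self.file_version = file_version  # e.g. mtime/etag of the file
        self._entries = OrderedDict()     # path -> (version, data)
        self.misses = 0                   # reads that bypassed the cache

    def read(self, path):
        ver = self.file_version(path)
        entry = self._entries.get(path)
        if entry is not None and entry[0] == ver:
            self._entries.move_to_end(path)  # mark as recently read
            return entry[1]
        self.misses += 1                     # absent or stale: re-read
        data = self.read_file(path)
        self._entries[path] = (ver, data)
        self._entries.move_to_end(path)
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict least recently read
        return data
```

In this model, a repeated read of an unchanged file is a cache hit; bumping the file's version forces a refresh, and filling the cache past capacity evicts the oldest entry.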
So no, they are not the same, though they are complementary.
For more details, refer to: Intelligent cache in Microsoft Fabric
Hope this helps!
Hi @tan_thiamhuat ,
Just wanted to check if you had the opportunity to review the explanation provided.
If the response has addressed your query, please accept it as a solution so other members can easily find it.
Thank You