
Most Recent
Rufyda
Kudo Kingpin

When organizations work with Microsoft Fabric, one of the most attractive features is the ability to create shortcuts to external storage systems such as AWS S3.
A shortcut gives you the convenience of accessing external data as if it were already part of OneLake, without the need to copy or duplicate files.

But here’s the catch: while shortcuts simplify connectivity, they don’t eliminate one of the biggest hidden costs in cloud analytics — data transfer fees.


How Shortcuts Work

A Fabric shortcut is essentially a pointer to the data. When you query parquet files in S3 through Fabric, the compute engine (running in Azure) must fetch the bytes from AWS. This means the data is leaving AWS, and every gigabyte transferred counts as egress traffic.


So even though the files aren’t duplicated inside Fabric storage, AWS still charges you for every read that crosses into Azure.

The Cost of Reading 200 GB Daily

 

Let’s consider a realistic example:

Your S3 bucket contains about 200 GB of parquet files.

These files are refreshed daily, and your Fabric semantic model needs a daily refresh.

That means 200 GB per day × 30 days = ~6 TB per month.

Based on typical AWS S3 data transfer rates (around $0.09 per GB for the first 10 TB), you’re looking at:

 6,000 GB × $0.09 ≈ $540 per month in AWS egress charges.

That’s before considering Fabric compute costs.

 

Why Shortcuts Don’t Reduce Egress Fees

It’s important to understand that shortcuts don’t magically reduce data transfer charges. They prevent duplication of storage, but the actual bytes must still move from AWS to Azure every time you run a query or refresh your model.

So, if you’re reading the full 200 GB daily, you’ll pay egress fees as if you were downloading the data each day.

Strategies to Optimize Costs

The good news is that you don’t have to accept those fees at face value. There are practical ways to bring them down:

Initial Full Copy + Incremental Loads
Do one large migration of your dataset into OneLake (or Azure Data Lake). After that, only copy the new or updated files each day. This reduces daily transfers to just the delta, which is usually far smaller than the entire dataset.

Partitioning and Predicate Pushdown
Structure your parquet files by date or partition keys. Ensure your queries are selective so that Fabric only reads what’s necessary instead of scanning all 200 GB.

Push Changes from AWS
Instead of letting Fabric pull data every day, configure S3 event triggers (with Lambda or DataSync) to push only the new files into Azure as they arrive.

Compression and Column Pruning
Since parquet is columnar, make sure your reports only pull the columns that are actually needed. This reduces the amount of data read — and the egress bill.

Evaluate Long-Term Data Residency

 

If your workload is permanent and heavy, it may be more cost-effective to migrate the dataset fully into Azure and avoid continuous cross-cloud transfers.

 

Fabric shortcuts offer a great way to connect to S3 without moving data right away, but they don’t avoid AWS data transfer charges. If you access large volumes of S3 data every day, costs can add up quickly.

The most effective approach is usually to copy once, then refresh incrementally, while designing your data to minimize unnecessary reads. That way, you get the best of both worlds: the convenience of Fabric integration and a controlled cloud bill.

uzuntasgokberk
Super User

Scrape currency data from Hurriyet/Doviz with Python BeautifulSoup and store it in Microsoft Fabric Lakehouse or Warehouse step by step.

Read more...

mk_sunitha
Microsoft Employee

This guide walks you through building a front-end application using React, Apollo Client, and TypeScript, integrated with a GraphQL API hosted in Microsoft Fabric. It highlights how to integrate and configure useful local tools like auto-completion and code generation, focusing on delivering an intuitive and seamless developer experience with GraphQL. 

 

Read more...

NHariGouthami
Microsoft Employee

🚀 Build a Fabric Data Agent in Minutes with GitHub Copilot Agent Mode

Discover how to supercharge your data workflows using GitHub Copilot Agent Mode in VS Code. Learn how to explore schemas, generate AI instructions, and create example queries—all in under an hour. If you're working with Microsoft Fabric, this fast and intuitive method is a game-changer for building conversational analytics agents.

Read more...

Srisakthi
Super User

What happens when a person who created Fabric items leaves the organisation or a project, and how do you take ownership of those items?

Read more...

ibarrau
Super User

Many releases and tools within a single platform engage both technical users (data engineers, data scientists, and data analysts) and end users. Fabric has unified these stakeholders in one shared space. That said, it doesn’t mean we have to use all the tools it offers.

If we already have an excellent data cleaning, transformation, or processing workflow using the very popular Databricks, we can keep using it. Fabric can be adopted or integrated in many ways.

Fabric brings us a next-generation lake storage system using an open data format. This means it allows us to use the most popular data file types for storage, and its file system works with conventional open-source structures. In other words, we can connect to our storage using tools capable of reading from it. We've also shown a bit about Fabric Notebooks and how they enhance the development experience.

In this simple tip, we’ll look at how to read from and write to our Fabric Lakehouse using Databricks.

Read more...

Ilgar_Zarbali
Super User

Microsoft Fabric revolutionizes data architecture by offering a unified platform that integrates Power BI, data science, real-time analytics, and more. At the heart of this ecosystem is the Lakehouse, a powerful, flexible, and scalable storage layer tailored for modern data engineering workflows.

In this article, we explore how Lakehouses work in Microsoft Fabric, how to set one up, and how they serve as the foundation for managing both files and structured data—all without the traditional complexity of data platforms.

Read more...

Rufyda
Kudo Kingpin

Microsoft Fabric is a powerful data platform that brings together data movement, transformation, and analytics in one unified environment. One of the core workflows in Fabric involves ingesting, exploring, transforming, and preparing data for analysis. This article provides an overview of how to work with data in Microsoft Fabric—starting from ingestion and ending with clean, ready-to-use datasets.

 

 


Read more...

mabdollahi
Advocate III

💡Ever wondered how to bring AI into your data engineering workflows in Microsoft Fabric?

In my latest hands-on project, I show how to automate Sentiment Analysis on customer feedback using:

  • Microsoft Fabric Lakehouse

  • PySpark notebooks

  • Azure OpenAI (GPT-4)

  • Fabric Data Pipelines

  • Power BI for real-time insights

This solution is fully integrated, no external services required, and takes just 10 minutes to set up.

📖Check out the full blog and video tutorial to see it in action: 

#MicrosoftFabric #AzureOpenAI #DataEngineering #SentimentAnalysis #PowerBI #PySpark

Read more...

paulmd_MSFT
Microsoft Employee

Learn how to connect cross-tenant Azure Data Factory (and other services) to Fabric.

 

Read more...

uzuntasgokberk
Super User

Simplify analytics with Spark Connector for Microsoft Fabric Data Warehouse: seamlessly access Fabric Warehouse via a secure Spark API.

Read more...

kaysauter
Most Valuable Professional

In the last article of this series, I covered how to load CSV files into a lakehouse automatically in Microsoft Fabric. In this article, I discuss how we can find and fix errors with notebooks easily.

Read more...

Ilgar_Zarbali
Super User

OneLake is a unified storage system in Microsoft Fabric that eliminates data silos by storing all data in a single location. Now, we’re going to discuss Direct Lake, a new way Power BI interacts with this storage for faster performance and efficiency.


Source: https://learn.microsoft.com/en-us/fabric/fundamentals/direct-lake-overview

Read more...

Ilgar_Zarbali
Super User

A Guide to Working with Lakehouses in Microsoft Fabric

This guide explores the data engineering experience in Microsoft Fabric, focusing specifically on Lakehouses. It takes a hands-on approach, demonstrating how to work with Lakehouses and manage data effectively.

 

Downloadable Files 

Read more...

kaysauter
Most Valuable Professional

In my last newsletter on LinkedIn, I explained how to export the AdventureWorks2022 tables to CSV files. If you don’t want to generate them yourself, you can download them, as noted in that post (after an edit in which I corrected a mistake). My blog, kayondata.com, is currently being moved to another host, so it isn’t being updated at the moment.

CSV files are still very common, so I am using this approach to showcase some tricks and practice some data engineering techniques.

Read more...
