
Most Recent
Murtaza_Ghafoor
Responsive Resident

Fabric notebooks help teams work faster, collaborate better, and build reliable data solutions using the Lakehouse. They are simple to use but powerful enough for real-world data workloads.

Read more...

Mauro89
Super User

The Confusion Ends Here

Working with Microsoft Fabric? Then it's only a matter of time before encountering the acronym "UDF"—and wondering what it really means. Is it a Power BI thing? Data Engineering? The answer is: it's both.

The good news: once the distinction is clear, choosing the right UDF becomes intuitive. And more importantly, understanding both reveals how Fabric's workloads are designed to work together seamlessly.

What Makes UDFs Worth Understanding

Both User Defined Functions (in Power BI) and User Data Functions (in Data Engineering) embody the same software engineering principles: modularity and DRY—Don't Repeat Yourself. Yet they solve completely different problems.

Power BI's UDFs let analysts encode business logic once and reuse it across every dashboard and report. Data Engineering's UDFs enable data engineers to write transformations once and apply them wherever data needs to be processed. In both cases, the benefit is the same: one source of truth, no duplicated code, and centralized maintenance.

It's the difference between building consistent analytical metrics and processing data at scale—and why organizations need both.

Dive Deeper

Curious about how to leverage both? Ready to architect Fabric solutions that follow software engineering best practices?

Read more...

NHariGouthami
Microsoft Employee

What if your Power BI report could teach your AI Data Agent how to answer questions correctly?
In this article, I show how .pbip files become a knowledge base, Power BI DAX becomes ground truth, and Fabric Data Agents turn into self‑learning, production‑ready analytics assistants—with automated accuracy testing and continuous improvement.

Read more...

AparnaRamakris
Microsoft Employee

Why maintain separate batch pipelines in Fabric? Spark Structured Streaming combined with foreachBatch lets you handle backfills and daily loads without breaking your flow. Batch meets streaming inside OneLake.
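
For readers who want to picture the pattern, here is a minimal sketch (table names and paths are placeholders, not from the post) of how foreachBatch lets one streaming query reuse plain batch logic for both backfills and daily loads:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def upsert_to_gold(batch_df, batch_id):
    # Each micro-batch arrives as an ordinary DataFrame, so the same code
    # can serve a one-off backfill and the daily incremental load.
    batch_df.dropDuplicates(["order_id"]) \
        .write.format("delta").mode("append") \
        .save("Tables/gold_orders")  # assumed Lakehouse Delta path

(spark.readStream
    .format("delta")
    .load("Tables/silver_orders")            # assumed Silver table
    .writeStream
    .foreachBatch(upsert_to_gold)            # batch logic inside the stream
    .option("checkpointLocation", "Files/chk/gold_orders")
    .trigger(availableNow=True)              # drain what's there, then stop
    .start())
```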

Read more...

pallavi_r
Super User

Traditional Gold tables can struggle as business logic evolves over time. Analytics lineage becomes harder to trace, governance more complex, and maintaining consistent metrics across reports increasingly challenging. Materialized Lake Views in Microsoft Fabric provide a SQL-based, reusable consumption layer that delivers Gold-level performance while remaining closely aligned with the Silver layer.

Read more...

AparnaRamakris
Microsoft Employee

Over the last few years, I’ve had the opportunity to build data platforms from scratch using both Microsoft Fabric and Databricks—sometimes as competing options, and increasingly as complementary pieces of the same architecture.

Fabric and Databricks are not chasing the same outcomes, and using one to “replace” the other is usually the wrong starting question. This post is not about feature checklists. It’s about how these platforms behave in real-world architectures, why Fabric often wins on speed and coherence, and why Databricks continues to lead when Spark depth and governance precision really matter.

Read more...

AparnaRamakris
Microsoft Employee

This blog explores Materialized Lake Views, now available in Microsoft Fabric: their implementation and the real-world challenges of implementing them. Please note that the feature is in preview and may not be recommended for production workloads as of the time of writing.

Read more...

Ilgar_Zarbali
Super User

This article is based on official Microsoft Fabric documentation and practical learning resources provided by Microsoft. To move beyond theory and demonstrate real implementation, I also followed a hands-on Lakehouse lab published by Microsoft Learning. The lab walks through core concepts such as creating a lakehouse, ingesting data, and exploring it using different Fabric experiences.

If you would like to explore the same step-by-step exercise used in this article and in my demonstration, you can access the lab here:

Lab 

Read more...

techies
Super User

This article explains how Microsoft Fabric integrates with Moodle LMS REST API to create a scalable and reliable learning analytics ecosystem. We will walk through API integration, ingestion, lakehouse storage, Spark optimization, and automated pipelines: the foundation required to operationalize LMS analytics at an enterprise level.
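
As a taste of the API-integration step, here is an illustrative sketch; the Moodle site URL, token, and target table are placeholders (core_course_get_courses is one of Moodle's standard web-service functions):

```python
import requests

# Hypothetical Moodle site and token; Moodle exposes its REST web service
# at /webservice/rest/server.php.
MOODLE_URL = "https://lms.example.edu/webservice/rest/server.php"
params = {
    "wstoken": "<token>",                     # web-service token
    "wsfunction": "core_course_get_courses",  # standard Moodle WS function
    "moodlewsrestformat": "json",
}
courses = requests.get(MOODLE_URL, params=params, timeout=30).json()

# `spark` is the ambient session in a Fabric notebook.
# Land the raw records in the Lakehouse for the downstream Spark steps.
df = spark.createDataFrame(courses)
df.write.format("delta").mode("overwrite").save("Tables/moodle_courses_raw")
```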

Read more...

FataiSanni
Advocate III

If you're working with files stored in SharePoint and need to regularly sync them to Microsoft Fabric Lakehouse, you have a few options. While Dataflow Gen2 provides a UI-driven approach for connecting to SharePoint data sources, it has limitations: it can't handle certain file types, may struggle with complex folder structures, and doesn't always support the flexibility needed for custom ETL logic.

What if you needed more control? A code-based solution that could download any file type from SharePoint, apply custom transformations, and load them into your Lakehouse with a single notebook run?

I've built an open-source PySpark notebook that does exactly that. In this post, I'll walk you through the solution, explain how it works, and show you how to get it running in your environment.

Read more...

ibarrau
Super User

When we use notebooks, it’s not all about transforming and cleaning the contents of our lakehouse.

In many cases, we can also use them to integrate data. Notebooks can help us connect to cloud APIs or other cloud environments directly through code.

For this option to be viable, we need to avoid exposing the credentials or keys of the data source in the code. Otherwise, anyone with access to the code (either in Fabric or in the repository) could obtain an API access key.
To prevent this, we’ll use an existing Azure service: Azure Key Vault.
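
A minimal sketch of the idea, assuming a Fabric notebook where notebookutils is available; the vault URI, secret name, and API endpoint are made up for illustration:

```python
from notebookutils import mssparkutils
import requests

# Fetch the secret at runtime from Key Vault; it never lives in the code
# or the repository. Vault URI and secret name are hypothetical.
api_key = mssparkutils.credentials.getSecret(
    "https://my-vault.vault.azure.net/",  # Key Vault endpoint
    "source-api-key",                     # secret name
)

# Use the secret to call a hypothetical data-source API.
resp = requests.get(
    "https://api.example.com/v1/data",
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=30,
)
data = resp.json()
```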

Read more...

Rufyda
Super User

When organizations work with Microsoft Fabric, one of the most attractive features is the ability to create shortcuts to external storage systems such as AWS S3.
A shortcut gives you the convenience of accessing external data as if it were already part of OneLake, without the need to copy or duplicate files.

But here’s the catch: while shortcuts simplify connectivity, they don’t eliminate one of the biggest hidden costs in cloud analytics — data transfer fees.


How Shortcuts Work

A Fabric shortcut is essentially a pointer to the data. When you query parquet files in S3 through Fabric, the compute engine (running in Azure) must fetch the bytes from AWS. This means the data is leaving AWS, and every gigabyte transferred counts as egress traffic.


So even though the files aren’t duplicated inside Fabric storage, AWS still charges you for every read that crosses into Azure.
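
To make that concrete, here is what a read through a shortcut might look like in a Fabric notebook; the shortcut name is hypothetical, and every byte scanned here is billed as egress:

```python
# `spark` is the ambient session in a Fabric notebook.
# The shortcut appears under the Lakehouse Files area like any other folder,
# but the parquet bytes are fetched from S3 at query time.
df = spark.read.parquet("Files/s3_sales_shortcut/")
df.show(5)
```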

The Cost of Reading 200 GB Daily

 

Let’s consider a realistic example:

Your S3 bucket contains about 200 GB of parquet files.

These files are refreshed daily, and your Fabric semantic model needs a daily refresh.

That means 200 GB per day × 30 days = ~6 TB per month.

Based on typical AWS S3 data transfer rates (around $0.09 per GB for the first 10 TB), you’re looking at:

6,000 GB × $0.09 ≈ $540 per month in AWS egress charges.

That’s before considering Fabric compute costs.
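
The same back-of-the-envelope arithmetic, expressed as a tiny Python helper you can adapt to your own volumes and rates:

```python
def monthly_egress_cost(gb_per_day: float, rate_per_gb: float = 0.09, days: int = 30) -> float:
    """Estimate cross-cloud egress cost, using the first-10-TB S3 rate by default."""
    return gb_per_day * days * rate_per_gb

print(monthly_egress_cost(200))  # 200 GB/day -> 6,000 GB/month -> 540.0
```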

 

Why Shortcuts Don’t Reduce Egress Fees

It’s important to understand that shortcuts don’t magically reduce data transfer charges. They prevent duplication of storage, but the actual bytes must still move from AWS to Azure every time you run a query or refresh your model.

So, if you’re reading the full 200 GB daily, you’ll pay egress fees as if you were downloading the data each day.

Strategies to Optimize Costs

The good news is that you don’t have to accept those fees at face value. There are practical ways to bring them down:

Initial Full Copy + Incremental Loads
Do one large migration of your dataset into OneLake (or Azure Data Lake). After that, only copy the new or updated files each day. This reduces daily transfers to just the delta, which is usually far smaller than the entire dataset.
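
One possible shape for that delta-copy step, sketched with Fabric's notebookutils; the shortcut and landing paths are placeholders and the "new file" test is deliberately naive:

```python
from notebookutils import mssparkutils

SRC = "Files/s3_sales_shortcut/daily/"  # shortcut onto the S3 bucket (assumed layout)
DST = "Files/landed/daily/"             # OneLake copy that queries should hit

# Copy only files we haven't landed yet, so just the delta crosses clouds.
landed = {f.name for f in mssparkutils.fs.ls(DST)}
for f in mssparkutils.fs.ls(SRC):
    if f.name not in landed:
        mssparkutils.fs.cp(f.path, DST + f.name)
```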

Partitioning and Predicate Pushdown
Structure your parquet files by date or partition keys. Ensure your queries are selective so that Fabric only reads what’s necessary instead of scanning all 200 GB.
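
For example, with a date-partitioned layout (assumed below), a selective filter lets Spark prune partitions so only the needed day's files are fetched from S3:

```python
# Only the sale_date=2024-06-01 partition is read; the rest of the 200 GB
# never leaves AWS. Column and partition names are illustrative.
df = (spark.read.parquet("Files/s3_sales_shortcut/")
      .where("sale_date = '2024-06-01'"))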

Push Changes from AWS
Instead of letting Fabric pull data every day, configure S3 event triggers (with Lambda or DataSync) to push only the new files into Azure as they arrive.

Compression and Column Pruning
Since parquet is columnar, make sure your reports only pull the columns that are actually needed. This reduces the amount of data read — and the egress bill.
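
A small illustration with made-up column names:

```python
# Parquet is columnar, so selecting columns early means only those column
# chunks are fetched (and billed as egress).
df = (spark.read.parquet("Files/s3_sales_shortcut/")
      .select("order_id", "amount", "sale_date"))
```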

Evaluate Long-Term Data Residency

 

If your workload is permanent and heavy, it may be more cost-effective to migrate the dataset fully into Azure and avoid continuous cross-cloud transfers.

 

Fabric shortcuts offer a great way to connect to S3 without moving data right away, but they don’t avoid AWS data transfer charges. If you access large volumes of S3 data every day, costs can add up quickly.

The most effective approach is usually to copy once, then refresh incrementally, while designing your data to minimize unnecessary reads. That way, you get the best of both worlds: the convenience of Fabric integration and a controlled cloud bill.

Barbara_Andrews
Microsoft Employee

Copilot in Microsoft Fabric: 4 Ways It Supercharges Data Work
From building pipelines to optimizing SQL, Copilot turns natural language into powerful data solutions—fast. Discover how this AI assistant helps data engineers and analytics pros work smarter across Fabric.

Ready to prompt your way to productivity?

Read more...

uzuntasgokberk
Super User

Scrape currency data from Hurriyet/Doviz with Python BeautifulSoup and store it in Microsoft Fabric Lakehouse or Warehouse step by step.
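
As a rough sketch of the scrape-and-land pattern (the URL and selectors below are placeholders, not the actual Hurriyet/Doviz markup):

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical currency page; real selectors depend on the site's HTML.
html = requests.get("https://example.com/currencies", timeout=30).text
soup = BeautifulSoup(html, "html.parser")

rows = [
    {"currency": td[0].get_text(strip=True), "rate": td[1].get_text(strip=True)}
    for tr in soup.select("table tr")
    if (td := tr.find_all("td")) and len(td) >= 2
]

# `spark` is the ambient session in a Fabric notebook; land the rates as Delta.
spark.createDataFrame(rows).write.format("delta").mode("overwrite").save("Tables/fx_rates")
```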

Read more...

mk_sunitha
Microsoft Employee

This guide walks you through building a front-end application using React, Apollo Client, and TypeScript, integrated with a GraphQL API hosted in Microsoft Fabric. It highlights how to integrate and configure useful local tools like auto-completion and code generation, focusing on delivering an intuitive and seamless developer experience with GraphQL. 

 

Read more...

NHariGouthami
Microsoft Employee

🚀 Build a Fabric Data Agent in Minutes with GitHub Copilot Agent Mode

Discover how to supercharge your data workflows using GitHub Copilot Agent Mode in VS Code. Learn how to explore schemas, generate AI instructions, and create example queries—all in under an hour. If you're working with Microsoft Fabric, this fast and intuitive method is a game-changer for building conversational analytics agents.

Read more...

Srisakthi
Super User

What happens when the person who created Fabric items leaves the organisation or project, and how do you take ownership of those items?

Read more...

ibarrau
Super User

Many releases and tools within a single platform engage both technical users (data engineers, data scientists, or data analysts) and end users. Fabric brought a unification of stakeholders into one shared space. That said, it doesn’t mean we have to use all the tools it offers.

If we already have an excellent data cleaning, transformation, or processing workflow using the very popular Databricks, we can keep using it. Fabric can be adopted or integrated in many ways.

Fabric brings us a next-generation lake storage system using an open data format. This means it allows us to use the most popular data file types for storage, and its file system works with conventional open-source structures. In other words, we can connect to our storage using tools capable of reading from it. We've also shown a bit about Fabric Notebooks and how they enhance the development experience.

In this simple tip, we’ll look at how to read from and write to our Fabric Lakehouse using Databricks.
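
As a preview of the pattern, a hedged sketch from the Databricks side, assuming its identity has been granted access to the Fabric workspace; workspace and item names are placeholders:

```python
# OneLake exposes the Lakehouse over the ABFS endpoint, so Databricks can
# read and write it like any other Delta location.
path = ("abfss://MyWorkspace@onelake.dfs.fabric.microsoft.com/"
        "MyLakehouse.Lakehouse/Tables/sales")

df = spark.read.format("delta").load(path)  # read a Lakehouse Delta table
df.limit(10).show()

# Writing back goes through the same OneLake path.
df.write.format("delta").mode("append").save(path)
```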

Read more...

charlyS
Most Valuable Professional

Resume your capacity ➜ Run pipelines ➜ Refresh Power BI datasets ➜ Suspend capacity — all automated thanks to PowerShell and Azure Automation!

Read more...

Ilgar_Zarbali
Super User

Microsoft Fabric revolutionizes data architecture by offering a unified platform that integrates Power BI, data science, real-time analytics, and more. At the heart of this ecosystem is the Lakehouse, a powerful, flexible, and scalable storage layer tailored for modern data engineering workflows.

In this article, we explore how Lakehouses work in Microsoft Fabric, how to set one up, and how they serve as the foundation for managing both files and structured data—all without the traditional complexity of data platforms.

Read more...

Rufyda
Super User

Microsoft Fabric is a powerful data platform that brings together data movement, transformation, and analytics in one unified environment. One of the core workflows in Fabric involves ingesting, exploring, transforming, and preparing data for analysis. This article provides an overview of how to work with data in Microsoft Fabric—starting from ingestion and ending with clean, ready-to-use datasets.

Read more...

mabdollahi
Advocate IV

💡Ever wondered how to bring AI into your data engineering workflows in Microsoft Fabric?

In my latest hands-on project, I show how to automate Sentiment Analysis on customer feedback using:

  • Microsoft Fabric Lakehouse

  • PySpark notebooks

  • Azure OpenAI (GPT-4)

  • Fabric Data Pipelines

  • Power BI for real-time insights

This solution is fully integrated, requires no external services, and takes just 10 minutes to set up.
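
For a flavor of the scoring step, here is a hedged sketch using the Azure OpenAI Python SDK; the endpoint, key handling, and deployment name are illustrative, not the exact code from the project:

```python
from openai import AzureOpenAI

# Hypothetical Azure OpenAI resource; in practice the key would come from
# Key Vault rather than being hard-coded.
client = AzureOpenAI(
    azure_endpoint="https://my-openai.openai.azure.com/",
    api_key="<key-from-key-vault>",
    api_version="2024-02-01",
)

def sentiment(text: str) -> str:
    # Ask the GPT-4 deployment (name assumed) for a one-word label.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Classify sentiment as positive, negative, or neutral."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content.strip()

print(sentiment("The checkout flow was fast and painless."))
```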

📖Check out the full blog and video tutorial to see it in action: 

#MicrosoftFabric #AzureOpenAI #DataEngineering #SentimentAnalysis #PowerBI #PySpark

Read more...

jehebr1
Microsoft Employee

Explore a more native and streamlined alternative to detect and anonymize PII data. With just a few lines of code, Fabric’s built-in AI functions like ai.extract and ai.generate_response allow you to identify and redact PII directly within your data pipelines - no external libraries required.

Read more...

Anusha_M
Microsoft Employee

Discover the Variable Library in Microsoft Fabric, designed to empower users to define and manage variables at the workspace level. Seamlessly integrate across various workspace items, including data pipelines, notebooks, and Lakehouse shortcuts. This feature addresses several pain points and enhances the overall user experience within Fabric.

Read more...

Ayush_Tiwari
Microsoft Employee

In the ever-evolving landscape of data management, optimizing storage and access times is paramount. This article delves into the innovative V-Order feature in Fabric, a game-changer for data read times and storage efficiency. Discover how V-Order's write-time optimization technique enhances performance, reduces costs, and transforms data operations. Join us as we explore the technical intricacies, benefits, and real-world applications of V-Order, and learn how it can revolutionize your data management strategies.
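
As a taste, a sketch of how V-Order is typically toggled in Fabric Spark; the config and writer option below follow Microsoft's documented names, but treat the snippet as an assumption-laden example rather than the article's own code:

```python
# Enable V-Order for writes in this session (write-time optimization).
spark.conf.set("spark.sql.parquet.vorder.enabled", "true")

# Illustrative write; df and the table path are placeholders.
df = spark.range(1000).withColumnRenamed("id", "sale_id")
(df.write
   .format("delta")
   .mode("overwrite")
   .option("parquet.vorder.enabled", "true")  # per-write override
   .save("Tables/sales_vorder"))
```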

Read more...

Anonymous
Not applicable

Learn how to connect cross-tenant Azure Data Factory (and other services) to Fabric.

 

Read more...

ibarrau
Super User

There has been a data cleaning extension on the market for a while now that continues to attract attention. I’ve typically come across two types of profiles who clean data: those who love code (using Python or R) and those who use BI tools (Power BI, Tableau, etc.). I believe this extension aims to integrate the best of both worlds—using the power of Python with the visual convenience of traditional tools.

This article is about Data Wrangler, an extension that allows you to perform data transformations in a Python or Jupyter file with clicks, as if it were a BI tool.

Read more...

uzuntasgokberk
Super User

Simplify analytics with Spark Connector for Microsoft Fabric Data Warehouse: seamlessly access Fabric Warehouse via a secure Spark API.
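
A minimal sketch of the read pattern, assuming a Fabric notebook where the connector ships with the runtime; the warehouse and table names are placeholders:

```python
# The connector addresses Warehouse tables by three-part name:
# <warehouse>.<schema>.<table>.
df = spark.read.synapsesql("MyWarehouse.dbo.sales")
df.groupBy("region").count().show()
```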

Read more...

kaysauter
Most Valuable Professional

In my last article in this series, I covered how to load CSV files into a lakehouse automatically in MS Fabric. In this article, I discuss how we can easily find and fix errors with notebooks.

Read more...

Helpful resources

Join Blog
Interested in blogging for the community? Let us know.