
The FabCon + SQLCon recap series starts April 14th at 8am Pacific. If you’re tracking where AI is going inside Fabric, this first session is one you can't miss. Register now


Most Recent
Murtaza_Ghafoor
Skilled Sharer

Accidentally deleted a report in Microsoft Fabric? The new soft delete and recovery feature can help you restore it in seconds. Here’s how it works.

Read more...

pankajnamekar25
Super User

pbi-cli is an open-source Python CLI that gives Claude Code 12 domain-specific Power BI skills, enabling natural-language authoring of semantic models via .NET TOM and PBIR report files. No MCP server, no sidecar process, sub-second execution. Install once, then just ask Claude.

Read more...

Jaywant-Thorat
Super User

When I first opened Power BI Desktop, I thought I was looking at a smarter, shinier version of Excel charts.

I was wrong. Not a little wrong. Completely wrong.

After years of using Power BI across corporate trainings, community sessions, and real-world projects, here are the 5 things I wish someone had told me on Day 1.

Read more...

tharunkumarRTK
Super User

Hardcoding format strings across every DAX measure is a pain. In my latest blog, I show how a single DAX user-defined function (UDF) handles all your numeric formatting needs: currency symbols, K/M/B/T scaling, bracketed negatives, delta icons, and more. Check it out and let me know your thoughts.
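
Since the UDF itself lives behind the link, here is a rough Python sketch (not the author's DAX) of the kind of scaling-and-brackets logic such a function centralizes; the currency symbol, one-decimal precision, and thresholds are illustrative assumptions:

```python
def format_compact(value, symbol="$"):
    """Hypothetical analog of the formatting UDF: currency symbol,
    K/M/B/T scaling, and accounting-style bracketed negatives."""
    neg = value < 0
    v = abs(value)
    for suffix, scale in (("T", 1e12), ("B", 1e9), ("M", 1e6), ("K", 1e3)):
        if v >= scale:
            text = f"{symbol}{v / scale:.1f}{suffix}"
            break
    else:
        text = f"{symbol}{v:,.0f}"  # small values: no scaling suffix
    return f"({text})" if neg else text

print(format_compact(1_200_000))  # $1.2M
print(format_compact(-4500))      # ($4.5K)
```

The point of centralizing this in one DAX UDF is that every measure calls the same function, so changing the format means editing one place.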

Read more...

slindsay
Community Admin

Do you have an idea for how Semantic Link could simplify, harden, or streamline how developers build and maintain semantic models?


We’re launching the Fabric Semantic Link Developer Experience Challenge, a community contest focused on improving how teams develop, test, document, and maintain semantic models in Microsoft Fabric.

Read more...

SolomonovAnton
Super User

Sometimes you need a report to refresh not on a fixed schedule, but when a specific event occurs.

A common example is when the table behind your semantic model is updated by a trigger or an ETL process that finishes at unpredictable times. In that case, a standard scheduled refresh is not enough.

I ran into exactly this scenario in one of my projects. A monitoring system was collecting CPU load metrics and writing rows into a database table only when CPU usage exceeded a threshold. As soon as those rows appeared, the report needed to be refreshed immediately.

With this kind of logic, you cannot reliably predict the refresh time in advance. CPU load may stay normal for several days, or it may cross the threshold multiple times in one day. For scenarios like this, triggering a refresh through the API is a much better fit.

In this article, I will show a simple and reliable method:

  1. Create a Cache Refresh Plan for the report
  2. Get the refresh plan ID
  3. Trigger it with a single PowerShell command

Power BI Report Server exposes REST API operations for Cache Refresh Plans, including getting plans, executing a plan immediately, and retrieving execution history.
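
The three steps above can be sketched in code. The post uses a single PowerShell command; below is a minimal Python equivalent that only builds the execute call. The server URL, plan ID, and exact v2.0 endpoint path are assumptions to verify against the Report Server REST API reference, and real calls also need Windows authentication:

```python
from urllib.request import Request  # urlopen would actually send it

# Hypothetical on-premises Report Server API URL
BASE = "http://reportserver/reports/api/v2.0"

def build_execute_request(plan_id: str) -> Request:
    """Build the POST that triggers a cache refresh plan immediately.
    The endpoint shape is an assumption based on the v2.0 API the post cites."""
    url = f"{BASE}/CacheRefreshPlans({plan_id})/Model.Execute"
    return Request(url, method="POST")

req = build_execute_request("abc-123")
# urllib.request.urlopen(req) would fire the refresh (auth permitting)
```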

Read more...

grazitti_sapna
Super User

🚀 The Future of BI: AI & Copilot in Power BI

Power BI is evolving from dashboards to intelligent analytics. With AI and Copilot, users can uncover patterns, predict trends, and ask questions in natural language—getting instant insights, visuals, and explanations.

Powered by Azure Machine Learning and Azure Cognitive Services, Power BI makes advanced analytics accessible to everyone—no data science required.

💡 It’s not just reporting anymore—it’s AI-driven decision-making.

Read more...

techies
Super User

CONCATENATEX is far more than a string aggregation function. It serves as a bridge between normalized LMS data structures and business-ready analytics. In the context of Moodle and Power BI, it enables the creation of compact, narrative-driven outputs that enhance clarity, reduce visual clutter, and improve decision-making.
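
As a mental model, CONCATENATEX walks a table, evaluates an expression per row, and joins the results with a delimiter. A small Python sketch with made-up Moodle-style enrollment rows (the data and helper are illustrative, not the article's):

```python
enrollments = [
    {"learner": "Ana",  "course": "Algebra"},
    {"learner": "Ana",  "course": "Biology"},
    {"learner": "Bram", "course": "Algebra"},
]

def concatenatex(rows, expression, delimiter=", ", order_by=None):
    """Python analog of DAX CONCATENATEX(table, expr, delimiter [, orderBy])."""
    if order_by:
        rows = sorted(rows, key=order_by)
    return delimiter.join(expression(r) for r in rows)

# Collapse a learner's courses into one narrative-friendly string:
ana = [r for r in enrollments if r["learner"] == "Ana"]
print(concatenatex(ana, lambda r: r["course"]))  # Algebra, Biology
```

This is what turns many normalized enrollment rows into one compact label on a card or tooltip.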

Read more...

slindsay
Community Admin

We have a new contest, officially launching Monday, March 16th. 

Build a clear, end-to-end Fabric solution that uses a Medallion architecture: ingest raw data with pipelines, manage both data and metadata, transform it into Silver and Gold layers in a Fabric Data Warehouse (using a dbt job), and finish with an AI-ready Power BI semantic model. The goal is to show a repeatable pattern where well-designed pipelines and warehouse models lead to trustworthy Copilot answers in Power BI.

Read more...

anmolmalviya05
Super User

In the previous blog of this series, we explored how modern data platforms use columnar file formats like Apache Parquet to store data efficiently and enable faster analytics.


While Parquet significantly improves performance and storage efficiency, it still has limitations when it comes to managing data changes, maintaining data consistency, and handling concurrent operations.


To solve these challenges, modern platforms introduced Delta Tables.


In this blog, we will explore what Delta Tables are, why they were created, and how they improve reliability in modern data platforms such as Microsoft Fabric and Databricks.

Read more...

anmolmalviya05
Super User

In today’s data-driven world, organizations generate and process massive volumes of data every day. Traditional file formats such as CSV and Excel were once sufficient for storing and analyzing business data. However, as data volumes grew and analytics became more advanced, these formats started showing limitations.


Modern data platforms like Microsoft Fabric, Apache Spark, and Databricks rely on optimized data formats designed for large-scale analytics. One of the most widely used formats today is Apache Parquet.


In this blog, we will explore what Parquet is, why it was created, and why it has become the foundation of many modern data platforms.

Read more...

anmolmalviya05
Super User

While working with a mature Power BI semantic model, we faced a simple yet critical question:

Where is the data for each table actually coming from?

To solve this problem, we built a simple automated approach that extracts the data source information for every table directly from the semantic model.

In this article, I will walk through the concept and implementation.

Read more...

anmolmalviya05
Super User

In modern data engineering, organizations deal with multiple data sources such as Excel files, SQL databases, APIs, and cloud storage systems. The challenge is not just collecting the data, but transforming and storing it efficiently for analytics.

Microsoft Fabric simplifies this process using Dataflow Gen2, a low-code data ingestion and transformation tool.

In this blog, we walk through how to ingest data from any source and store it in a Lakehouse using Dataflow Gen2.

Read more...

anmolmalviya05
Super User

For many years, SQL Server Reporting Services (SSRS) has been a reliable solution for creating detailed, paginated reports. Organizations have used it extensively for operational reporting, invoices, financial statements, and structured data outputs.

 

However, as businesses increasingly demand interactive dashboards, real-time insights, and cloud integration, Power BI has emerged as the modern analytics platform of choice.

 

Migrating reports from SSRS to Power BI allows organizations to preserve their existing data logic while enabling interactive visualizations, self-service analytics, and scalable cloud-based reporting.

Read more...

Helpful resources

Join Blog
Interested in blogging for the community? Let us know.