We’ve all seen how customer feedback holds a goldmine of insights—but going through it manually isn’t scalable. And while AI sounds great, setting it up can feel a bit… intimidating.
The good news? If you're already working with Microsoft Fabric, you're just a few steps away from building your own AI-powered sentiment analysis solution—without needing to train a model or spin up separate services.
In this post (and accompanying video tutorial), I’ll walk you through how to:
✅ Store and prepare customer feedback in a Fabric Lakehouse
✅ Use PySpark to analyze sentiment using Azure OpenAI
✅ Automate the process in a Data Pipeline
✅ Visualize results in Power BI
Let’s dive in!
We're going to build an analytical solution that takes in raw customer feedback and automatically classifies each message as Positive, Neutral, or Negative. We'll use:
Fabric Lakehouse to store the data
PySpark notebook to process it
Azure OpenAI (GPT) for the sentiment logic
Data Pipeline to automate it
Power BI for the final visualization
All of this is done inside Microsoft Fabric and Azure AI Foundry—no need to leave the ecosystem.
Start by creating a new Fabric Workspace and inside it, a Lakehouse.
If you haven’t used Azure OpenAI before, here’s what you need to do, following the steps in Create and deploy an Azure OpenAI in Azure AI Foundry Models resource:
Head to Azure Portal
Create a new Azure OpenAI resource and navigate to AI Foundry
Add a deployment of a chat model such as gpt-4o (the deployment name we’ll reference in the code below)
Note down your endpoint URL and API key
We’ll use these in the next step to send feedback text to the GPT model and get a sentiment label back.
Back in Fabric, create a new PySpark Notebook: in your Lakehouse, click "New Notebook" in the "Open notebook" menu.
You can then write your code in Python (or another supported language) directly in the Notebook.
Load your customer feedback table:
df = spark.read.format("delta").load("Tables/raw_customer_feedback")
Now let’s connect to the OpenAI API and classify each feedback entry. You’ll use Python’s requests module to send text and get a response:
from pyspark.sql.functions import col, udf
from pyspark.sql.types import StringType
import requests

# Azure OpenAI Config — replace with your own endpoint, key, and deployment name
AZURE_OPENAI_ENDPOINT = "https://customname.openai.azure.com/"
AZURE_OPENAI_API_KEY = "secretkey"
DEPLOYMENT_NAME = "gpt-4o"

headers = {
    "Content-Type": "application/json",
    "api-key": AZURE_OPENAI_API_KEY
}

def analyze_sentiment(text):
    try:
        payload = {
            "messages": [
                {"role": "system", "content": "Analyze sentiment as Positive, Neutral, or Negative"},
                {"role": "user", "content": text}
            ],
            "max_tokens": 100
        }
        response = requests.post(
            f"{AZURE_OPENAI_ENDPOINT}openai/deployments/{DEPLOYMENT_NAME}/chat/completions?api-version=2023-07-01-preview",
            headers=headers,
            json=payload
        )
        response.raise_for_status()  # surface HTTP errors (401, 429, ...) as exceptions
        return response.json()["choices"][0]["message"]["content"]
    except Exception:
        return "Error"

analyze_sentiment_udf = udf(analyze_sentiment, StringType())
Then apply that to each row:
df_enriched = df.withColumn("Sentiment", analyze_sentiment_udf(col("Feedback")))
df_enriched.write.format("delta").mode("overwrite").save("Tables/customer_feedback_enriched")
This adds a new sentiment column to your data and saves it back into your Lakehouse.
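One practical wrinkle: the model’s raw reply sometimes includes extra wording ("Sentiment: Positive."). A small helper — hypothetical, not part of the original notebook — can map the reply onto the three expected labels before you save the table:

```python
def normalize_sentiment(raw: str) -> str:
    """Map a free-text model reply onto Positive / Neutral / Negative.

    GPT replies can carry extra words ("The sentiment is Positive.").
    Anything unrecognized (or empty) falls back to "Unknown" so the
    enriched table stays clean for reporting.
    """
    text = (raw or "").lower()
    for label in ("Positive", "Negative", "Neutral"):
        if label.lower() in text:
            return label
    return "Unknown"
```

You could call this inside analyze_sentiment just before returning, so the Sentiment column only ever holds four distinct values.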
You’ll see the new table, with its Sentiment field, in your Lakehouse.
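A caveat before scaling up: the UDF makes one HTTP call per row, so large tables can hit Azure OpenAI rate limits (HTTP 429). A generic retry-with-exponential-backoff wrapper — a sketch of my own, not from the original notebook — is one way to make the calls more resilient:

```python
import time
import random

def with_backoff(fn, max_retries=4, base_delay=1.0):
    """Call fn(); on failure wait base_delay, 2x, 4x, ... (plus jitter) and retry.

    Inside the notebook you could wrap the requests.post call in
    analyze_sentiment with this, so transient 429 responses are retried
    instead of being recorded as "Error".
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries — let the caller handle it
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

For very large tables, batching several feedback messages into a single prompt is another option worth considering.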
To avoid running the notebook manually every time, create a Data Pipeline.
Add a Notebook activity and point it to the notebook you just wrote.
Set a trigger to run the pipeline on a schedule, or whenever new data arrives.
(Optional) Add a dataflow before the notebook to clean/prepare input data.
Now you’ve got a repeatable pipeline!
Last step—open Power BI (either within Fabric or using Desktop) and connect to your Lakehouse.
Load the customer_feedback_enriched table and build a few visuals:
Pie chart showing the sentiment split
Table of feedback messages with their sentiment
Time-based trends if you’ve got timestamps
Tip: You can also set custom colors for sentiment categories—like red for Negative, green for Positive—to make things more intuitive.
If you prefer to follow along visually, I’ve recorded a full 25-minute video showing every step, including:
Creating the workspace & lakehouse
Writing the PySpark code
Connecting to Azure OpenAI
Automating with a pipeline
Building the report
📂Grab the code and sample data: https://github.com/mehrdadabdollahi/AI-Powered-Sentiment-Analysis-in-Microsoft-Fabric-with-Azure-Ope...
This project shows how easy it is to plug AI into your data pipelines using Microsoft Fabric—no need to build or train models. With PySpark, Azure OpenAI, and Power BI all in one ecosystem, you get a powerful setup that’s easy to maintain and scale.
If you’re already in the Microsoft data world, this is a great way to dip your toes into AI without leaving the tools you know.
Let me know how it goes—or how you’d extend it further (like adding language detection, tagging, or summarization).
Mehrdad Abdollahi