Hi everyone, I have some questions about this Copilot AI.
In my company we are testing this AI for our reports, and I asked some questions that are, in my opinion, easy ones... and Copilot cannot resolve them. So...
Is Copilot a bad AI?
I even asked other AIs like ChatGPT about more difficult issues and they solved them, but Copilot cannot do... a simple operation?
For an easy example: I have a report of passengers with the TOTAL per month. I asked Copilot whether it could tell me the % of evolution between months, and it answered "I can't do that, I need the measure in the report."
Are there some limitations causing that, or what's happening? Is this AI not connected with the models and data? Even in the same area we have 2 reports with related data, and we cannot ask questions across them.
Hi @Alexx22,
What you’re seeing isn’t because Copilot is a “bad AI”; it’s part of its documented limitations. Copilot in Power BI doesn’t automatically calculate from your reports unless the underlying measures are defined in the data model. That’s why it responded “I need the measure in the report.”
Some key points:
• Copilot depends on structured data models: It can only calculate growth rates or percentages if the measure (like month‑over‑month change) is already defined.
• No cross‑report queries: Copilot won’t join or relate two separate reports unless those relationships are modeled in Power BI.
• Guardrails against guessing: Unlike some AI tools, Copilot won’t invent missing numbers; it only works with what’s explicitly available.
So the limitation you hit is expected and documented. To get the results you want, you’ll need to:
• Define a DAX measure for month‑over‑month % change in your Power BI model.
• Ensure relational data is modeled correctly if you want Copilot to query across datasets.
You can find the official Microsoft documentation here: https://learn.microsoft.com/en-us/power-bi/create-reports/copilot-introduction
Example DAX Formula for Month‑over‑Month % Change
MonthOverMonthChange =
-- % change vs. the previous month; DATEADD needs a contiguous date column (ideally a marked Date table)
VAR CurrentMonth = SUM ( 'Passengers'[Total] )
VAR PreviousMonth =
    CALCULATE (
        SUM ( 'Passengers'[Total] ),
        DATEADD ( 'Passengers'[Date], -1, MONTH )
    )
RETURN
    DIVIDE ( CurrentMonth - PreviousMonth, PreviousMonth, 0 )
This measure calculates the percentage change from the previous month.
Example DAX Formula for Year‑over‑Year % Change
YearOverYearChange =
-- % change vs. the same period one year earlier
VAR CurrentYear = SUM ( 'Passengers'[Total] )
VAR PreviousYear =
    CALCULATE (
        SUM ( 'Passengers'[Total] ),
        DATEADD ( 'Passengers'[Date], -1, YEAR )
    )
RETURN
    DIVIDE ( CurrentYear - PreviousYear, PreviousYear, 0 )
This measure calculates the percentage change compared to the same month in the prior year.
How to Use These Measures
• Add them to a line chart with `Date` on the X‑axis to see trends over time.
• Place them in a KPI card to highlight the latest % change.
• Once these measures exist in your model, Copilot can reference them, explain the trends, and generate insights automatically.
Hi @Alexx22,
This is a very common confusion, and you’re not wrong to question it, but the issue here is expectations, not that Copilot is a “bad AI”.
Copilot in Power BI is not the same thing as ChatGPT or other general-purpose AI tools.
Why Copilot behaves this way (important distinction)
Copilot for Power BI is:
• Context-bound
• Model-aware, not data-exploratory
• Strictly constrained by the semantic model
It cannot invent logic, infer business meaning, or “guess” calculations unless:
• The measure already exists, or
• The model is clearly defined in a way Copilot can reference.
So when Copilot says:
“I need the measure in the report”
That is actually expected behavior.
Your example: % evolution between months
From a human perspective, this is “easy”.
From Copilot’s perspective:
• It does not know which measure represents “Total Passengers”
• It does not know which date column defines “month”
• It does not know whether you want:
  • MoM %
  • YoY %
  • Rolling comparison
  • Same month last year
  • Cumulative vs discrete
Unlike ChatGPT, Copilot will not assume.
Copilot can:
✔ Explain existing measures
✔ Modify existing measures
✔ Generate DAX if the intent is unambiguous and grounded in the model
Copilot cannot:
❌ Infer missing measures
❌ Create business logic from vague questions
❌ Reason across multiple reports
❌ Query data outside the current semantic model
“Is Copilot connected to the data model?”
Yes — but only to the active semantic model of the report.
It does not:
• See other reports
• Traverse datasets
• Join unrelated models
• Perform cross-report reasoning
Each report = isolated context.
This explains why:
“We have two reports with related data and Copilot can’t answer questions between them”
That is by design, not a bug.
Why ChatGPT seems “better”
ChatGPT:
• Is a general-purpose reasoning AI
• Can make assumptions
• Can invent examples
• Is not bound by governance or model integrity
Copilot:
• Is a governed enterprise assistant
• Prioritizes correctness and safety over creativity
• Refuses to act when context is incomplete
So ChatGPT feels “smarter” — but Copilot is being intentionally conservative.
How to get better results from Copilot
To make Copilot useful:
1. Create clear base measures:
Total Passengers := SUM ( FactPassengers[PassengerCount] )
2. Use a proper Date table with Month/Year columns (see the sketch after this list).
3. Ask specific, model-aware questions, for example:
“Create a Month-over-Month % change measure based on [Total Passengers] using the Date table.”
Now Copilot can help.
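For step 2, here is a minimal calculated Date table sketch in DAX; the table name DimDate and the derived columns are illustrative, so adapt them to your model:

DimDate =
-- Minimal Date table; mark it as a date table, then relate DimDate[Date] to the fact's date column
ADDCOLUMNS (
    CALENDARAUTO (),
    "Year", YEAR ( [Date] ),
    "Month Number", MONTH ( [Date] ),
    "Month", FORMAT ( [Date], "MMM YYYY" )
)

With DimDate related to the fact table and used on your axes, time-intelligence functions like DATEADD behave predictably, and Copilot has an unambiguous definition of “month” to reason about.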
Summary
✔ Copilot is not bad AI
✔ It is not a general-purpose assistant
✔ It does not infer missing logic
✔ It operates strictly inside the semantic model
✔ Cross-report questions are not supported
Once you align expectations, Copilot becomes a very good productivity assistant — just not a replacement for a BI developer’s reasoning.
If this explanation helped clarify the limitations and behavior, please give Kudos 👍 and mark this reply as the Accepted Answer ✔ so others evaluating Copilot can set the right expectations as well.
Thanks everyone for the responses.
I'll reply because I understand all those points, BUT...
I don't think the same.
My prompt is very complete, with the info that I need and that the AI needs.
The measures are very clear and I ask for a specific answer.
I have the measure with its real name, I write what I need very clearly, and Copilot cannot resolve it.
ChatGPT resolves it easily; it's not hard, guys. I understand the limitations, but this is very useless for me. "Intelligent" means nothing for too many AIs, not only Copilot, but this is embarrassing: I ask for an arithmetic result that would be easy for a simple calculator.
When I ask about cross-report, I mean something like this:
One report says 100,000 "passengers" in Nov.
The other says 80,000 "NS" passengers and 20,000 "SN" in Nov.
How many of the 100,000 passengers are NS and how many SN?
An easy summary across 2 reports...
Thank you for the follow-up — and I genuinely understand your frustration. You’re not wrong that, from a human perspective, this is a very simple arithmetic problem, and yes, ChatGPT can solve it easily.
However, the key point is that Copilot in Power BI is not failing at math — it is refusing to cross semantic boundaries it is not allowed to cross.
What you are describing is a cross-report reconciliation problem, not a calculation problem.
When you say:
“Report A says 100,000 passengers in November
Report B splits November into 80,000 NS and 20,000 SN
How many of the 100,000 are NS vs SN?”
For a human (or ChatGPT), the assumption is obvious: they refer to the same population.
For Copilot:
• These are two separate semantic models
• There is no guaranteed relationship
• There is no shared grain or lineage
• There is no enforced business rule that NS + SN = total passengers
Copilot is designed to not assume equivalence, even when it seems obvious.
That’s why it asks for:
• The exact measure
• The exact table
• The exact relationship
• All data inside the same model
It’s not intelligence vs stupidity — it’s governance vs inference.
Why ChatGPT “works” here
ChatGPT:
• Assumes the numbers are compatible
• Ignores data lineage
• Doesn’t care if the logic would be invalid in a governed BI system
Copilot:
• Must avoid generating potentially wrong business answers
• Cannot reconcile two reports unless they are backed by the same dataset
• Cannot “peek” into another report or model
The real limitation (and you’re right to call it out)
Where I agree with you 100%:
Copilot does not currently help enough with reconciliation-style reasoning, even when:
• Measures exist
• Names match
• Logic is obvious to a human
That does limit its usefulness today for analysts doing validation, reconciliation, or sanity checks across reports.
So yes — for this use case, Copilot is not helpful. That’s a fair and valid conclusion.
The practical takeaway
Copilot works best when:
• All numbers live in one semantic model
• Relationships are explicit
• Measures are already defined
For:
• Cross-report summaries
• Reconciliation
• “Explain the difference between report A and report B”
Today, external reasoning tools (like ChatGPT) are still better.
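That said, there is a practical workaround: land both facts in a single semantic model, and the NS/SN split becomes a plain measure Copilot can reference. A minimal sketch, assuming hypothetical tables Totals (the monthly totals) and Segments (the NS/SN breakdown), both related to a shared DimDate table:

NS Passengers =
-- Assumes Segments[Direction] holds "NS"/"SN"; adjust table and column names to your model
CALCULATE ( SUM ( Segments[Passengers] ), Segments[Direction] = "NS" )

NS Share of Total =
-- Share of the overall total reported in the first dataset
DIVIDE ( [NS Passengers], SUM ( Totals[Passengers] ) )

Once both tables sit in the same model, Copilot can answer “how many of the 100,000 are NS” for any month; across two separate reports it cannot.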
You’re not misunderstanding Copilot — you’re simply hitting a real product boundary. Calling that out is fair, and it’s exactly the kind of feedback Microsoft needs to hear.
If this explanation helped clarify why this happens (even if you still disagree with the design), please consider giving Kudos 👍 and marking as the Accepted Answer ✔ so others don’t hit the same frustration without context.
Hi @Alexx22 ,
Thank you for reaching out to the Microsoft Community Forum.
Could you please try the proposed solution shared by @SavioFerraz? Let us know if you’re still facing the same issue; we’ll be happy to assist you further.
Regards,
Dinesh
Hi @Alexx22 ,
We haven’t heard from you on the last response and were just checking back to see if you have a resolution yet. If you have any further queries, do let us know.
Regards,
Dinesh
I've watched many presentations about the performance of AI and the different tools within Power BI/Fabric. I found nobody explains it better than Marc Lelijveld. Check out his blog post (and catch a presentation from him if you can) where he writes about Copilot and alternatives:
He gives elaborate pros and cons on the usage of each tool, including Copilot.
Make sure to check this out.
Copilot isn’t a “bad AI” — it just has very strict limitations in Power BI right now, which is why it struggles with things that seem simple.
Copilot can only work with measures and fields that already exist in the model.
If you ask for a % evolution between months, Copilot cannot create that logic on its own unless a measure already exists in the report. It doesn’t calculate new metrics on the fly.
Copilot does not understand relationships across different reports or datasets.
It only works inside the dataset of the current report. So if you have 2 related reports, Copilot cannot “combine” their data in a single question.
Copilot for Power BI is not a full analytical engine yet.
It mainly helps with:
• describing visuals
• generating summaries
• suggesting DAX when enough fields exist
• creating simple visuals from existing data
But it cannot:
• derive new KPIs without measures
• combine multiple datasets
• fix model design issues
• perform multi-step reasoning like ChatGPT or other advanced models
ChatGPT is a general-purpose AI with reasoning ability.
Copilot for Power BI is restricted to what the dataset exposes and what the Power BI security model allows. It’s intentionally limited to avoid incorrect calculations or bypassing row-level security (RLS).
Copilot is not bad — it’s just early-stage and heavily limited.
To get better answers, you need to prepare the data model with all required measures, and Copilot will only work within that structure.