Raja_Sharma
Regular Visitor

Microsoft Fabric Data Agent Ignores Its Own AI Instructions — Also, Is Prep Data for AI Supported?

Hi,

I’m working with the Microsoft Fabric Data Agent connected to a semantic model, and I'm facing several issues that impact the reliability and usefulness of the experience:

  1. The AI instructions added directly in the Data Agent configuration are not being followed.
    Despite clearly defining intent and logic (such as how to interpret specific values or route questions to the correct columns), the agent often ignores these instructions and returns incorrect or incomplete outputs.
    Why does the Data Agent ignore instructions that are explicitly configured in its own setup?

  2. Separately, I’ve also configured AI Instructions and Verified Questions using the “Prep Data for AI” feature at the semantic model level. These work well with Copilot, but they don’t seem to be applied at all by the Fabric Data Agent.

    Does the Fabric Data Agent use Prep Data for AI metadata (like Verified Questions and model-level instructions), or is that only supported in Copilot?
If not supported yet, is that integration planned?

This inconsistency between what's configured and how the agent behaves creates confusion for users and limits trust in using the data agent.

Would appreciate any insights on current limitations and what's on the roadmap to improve instruction adherence in the Data Agent.

Note: Labels like “Data Agent”, “Copilot”, and “Prep Data for AI” were not available at the time of posting. This issue concerns the Fabric Data Agent not following its AI instructions or using Prep Data for AI metadata.

Thanks!

1 ACCEPTED SOLUTION

Hi @Raja_Sharma ,

 

Thanks again for the detailed logs and your follow-up. Based on everything you've shared, it really looks like the Data Agent is triggering the job but not executing the AI instructions as expected.

Here are a few things you might want to double-check:

  1. Agent registration and environment alignment
    Make sure the agent is registered under the same environment where the AI instructions are defined. If there's a mismatch, the agent might skip those steps silently.

  2. Instruction compatibility
Some AI instructions might require specific runtime or compute settings. If the agent's environment doesn't meet those, the instructions might be ignored. Check whether the AI step runs correctly in a standalone test pipeline.

  3. FSU job desync
    There could be a stale or corrupted job definition in the Fabric backend. Try restarting the agent service and re-publishing the pipeline from scratch.

  4. Agent version
    Just to be sure, confirm you're using the latest version of the Data Agent. Some earlier builds had issues with instruction parsing.

If none of these help, I’d recommend opening a Fabric support ticket and referencing the activity ID you posted earlier. That’ll help the backend team trace the job execution path more precisely.

Let me know if you want help isolating the AI step or testing it in a clean pipeline.
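If it helps, here is a rough sketch of the kind of isolation test I mean, in Python. `query_agent` is a placeholder for however you call the agent (REST, SDK, or copying answers from the chat UI by hand); it is not a real Fabric API. The idea is to compare the answer to the bare question against the answer when the configured instructions are restated inline: if only the reinforced version behaves, the stored instructions are likely not making it into the prompt.

```python
# Sketch of an instruction-adherence check. `query_agent` is a stand-in
# for whatever client you use to question the Data Agent; replace it with
# your actual call.

def instruction_adherence_check(query_agent, question, instructions):
    """Compare the agent's answer with and without instructions restated inline."""
    bare = query_agent(question)                               # relies on stored config
    reinforced = query_agent(f"{instructions}\n\n{question}")  # instructions inline
    return {
        "bare": bare,
        "reinforced": reinforced,
        "identical": bare.strip() == reinforced.strip(),
    }
```

If `identical` comes back False on questions where the instructions should matter, that is a strong hint the configured instructions are being dropped somewhere between the agent's setup and the model prompt.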

If my response resolved your query, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.


8 REPLIES
v-pagayam-msft
Community Support

Hi @Raja_Sharma ,
Have you had an opportunity to review the information provided by @burakkaragoz? Please feel free to contact us if you have any further questions. If the response has addressed your query, please accept it as a solution and give it a 'Kudos' so other members can easily find it.

Thank you for being a part of Microsoft Fabric Community Forum!



v-pagayam-msft
Community Support

Hi @Raja_Sharma ,
I wanted to follow up on our previous suggestions regarding the issue you are facing. We would like to hear back from you to ensure we can assist you further.
If our response has addressed your query, please accept it as a solution and give a ‘Kudos’ so other members can easily find it. Please let us know if there’s anything else we can do to help.
Thank you.

Raja_Sharma
Regular Visitor

Hi @burakkaragoz 

Thank you for your detailed response and for providing the troubleshooting steps.

To clarify our environment:
We are not using a Fabric free trial—our tenant is fully licensed with an F64 SKU. All workspaces and the Data Agent are provisioned within this capacity, so trial limitations or activation steps shouldn’t apply.

To restate the core issue:

  • The Microsoft Fabric Data Agent is not consistently following the AI Instructions set in its configuration, even when the instructions are clear and specific. Interestingly, if we ask the agent in a follow-up to “please retry,” it usually follows the instructions correctly on the second attempt. However, having to request a retry each time is not a viable solution for end users.

  • “Prep Data for AI” metadata (such as Verified Questions and model-level instructions) configured at the semantic model level works as expected with Copilot, but these improvements do not seem to be applied by the Data Agent.

  • We observe the same behavior in both free-trial and F64 licensed environments; there is no difference regarding this issue.

A few questions for further support:

  1. Does the Fabric Data Agent (connected to a semantic model) currently utilize “Prep Data for AI” metadata (including Verified Questions and model-level instructions), or is this functionality exclusive to Copilot?

  2. If this integration is not yet available, is there a published roadmap or ETA for when the Data Agent will support “Prep Data for AI” metadata and more reliably follow configured instructions?

  3. Are there any known workarounds or best practices to improve the instruction adherence of the current Data Agent, beyond what is outlined in the official documentation?
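For context on question 3: the only stopgap we have found is automating the “please retry” follow-up. A minimal sketch of what that would look like (Python; `ask_agent` stands in for our client call and `is_valid` for a domain-specific answer check; neither is a real Fabric API):

```python
# Sketch of automating the "please retry" workaround. `ask_agent` is a
# stand-in for the actual Data Agent client; `is_valid` is any check that
# the answer respected the configured instructions.

def ask_with_retry(ask_agent, question, is_valid, max_attempts=2):
    """Ask once; if the answer fails validation, ask the agent to retry."""
    answer = ask_agent(question)
    attempts = 1
    while not is_valid(answer) and attempts < max_attempts:
        answer = ask_agent(
            f"{question}\nPlease retry and follow the configured "
            f"AI instructions exactly."
        )
        attempts += 1
    return answer, attempts
```

This roughly doubles latency and cost on every failing question, which is why we would much rather see the instructions honored on the first attempt.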

v-pagayam-msft
Community Support

Hi @Raja_Sharma ,
Thank you for the helpful response, @burakkaragoz!
I wanted to check in on your situation regarding the issue. Have you resolved it? If you have, please consider marking the reply that helped you or sharing your solution. It would be greatly appreciated by others in the community who may have the same question.
Thank you.

Regards,
Pallavi

Hi @v-pagayam-msft ,
Thank you for checking in. The issue is not resolved yet. I’m still waiting for a solution or further guidance from the community.

burakkaragoz
Community Champion

Hi @Raja_Sharma ,

 

Thanks for sharing the details — that error message and the activity IDs are super helpful.

A few things you can try:

  1. Region Limitation: Sometimes the free trial isn’t available in all regions. Since your cluster URI is pointing to East Asia, it’s possible that Fabric trial provisioning is limited or delayed there. You might want to try switching your home region to something like West Europe or East US (if your org allows it) and then try again.

  2. Tenant Restrictions: Some tenants (especially EDU or restricted enterprise tenants) have trial creation blocked by policy. You can check with your Microsoft 365 admin if trial services are allowed for your account.

  3. PPU vs. Fabric Trial: Having a Power BI Premium Per User (PPU) trial doesn’t automatically give you access to Fabric. You need to explicitly activate the Microsoft Fabric trial from the Fabric homepage or Admin Portal.

  4. Try Incognito or Different Browser: Sometimes cached sessions or cookies can interfere with trial activation. Try using an incognito window or a different browser.

If none of that works, you can also raise a support ticket with Microsoft and include the Activity ID and Request ID you posted — that’ll help them trace the issue faster.

Let us know how it goes!

If my response resolved your query, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.


