
DennesTorres
Impactful Individual

Semantic Model in Fabric

Hi,

This article is very interesting: https://blog.fabric.microsoft.com/en-us/blog/chat-your-data-in-microsoft-fabric-with-semantic-kernel...

but, as one of the comments points out, at no moment is the lakehouse model provided to ChatGPT. Because of this, the real accuracy of the answers will be very limited.

In some internet searches, I found comments about providing our model to ChatGPT through an API, but I haven't managed to find which API that is yet.

Is there a way to solve this problem and integrate ChatGPT with Fabric in a better way?

 


4 REPLIES
Anonymous
Not applicable

Hi @DennesTorres ,

Thanks for using Fabric Community.

I have reached out to the internal team for help on this. I will update you once I hear from them.
Appreciate your patience.
Refer to this link: Azure OpenAI for big data - Microsoft Fabric | Microsoft Learn

Anonymous
Not applicable

Hi @DennesTorres ,

The article is a "getting started" article, and I think you are expecting to find the recipe for a perfect data copilot inside.
The article seems primarily intended to demonstrate how to install Semantic Kernel (SK) in our Python environment and how to use it (with a small example). Or, essentially, how one can start to build, almost from scratch, a chat model over one's data in Fabric. The article does not claim to build the best possible copilot.
You are right, the mini-chat tool demonstrated in the article can be extended to make the GPT system aware of the data model. To do this, you should follow these two steps (a sketch follows the list):

1) Explore the lakehouse structure (either using standard Python code, or the new Semantic Link library in Fabric).
2) Pass the lakehouse structure information to the GPT model as part of the Semantic Kernel prompt.
Please refer to this document for more information: How to write prompts in Semantic Kernel.
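Something along these lines works as a starting point. This is only a sketch: it assumes a Fabric notebook with a lakehouse attached and the 0.x-generation semantic-kernel package (newer versions changed these APIs), and the helper name, deployment name, endpoint, and question are placeholders, not taken from the article.

# Sketch: list the lakehouse tables via the Spark catalog, then embed that
# schema in a Semantic Kernel prompt so the model knows the data model.
# Assumes a Fabric notebook (spark is predefined) and semantic-kernel 0.x.
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

def build_schema_description() -> str:
    """Render each table and its columns as plain text for the prompt."""
    lines = []
    for table in spark.catalog.listTables():
        cols = spark.catalog.listColumns(table.name)
        col_list = ", ".join(f"{c.name} ({c.dataType})" for c in cols)
        lines.append(f"Table {table.name}: {col_list}")
    return "\n".join(lines)

kernel = sk.Kernel()
kernel.add_chat_service(
    "chat",
    AzureChatCompletion(
        deployment_name="gpt-35-turbo-16k",  # placeholder values
        endpoint="https://<your-resource>.openai.azure.com/",
        api_key="<your-key>",
    ),
)

# The schema is embedded in the prompt; {{$input}} is the user's question,
# following SK's prompt-template syntax.
prompt = f"""You answer questions about a lakehouse by writing Spark SQL.
The lakehouse contains these tables:
{build_schema_description()}

Question: {{{{$input}}}}
Answer with a single SQL statement."""

ask = kernel.create_semantic_function(prompt, max_tokens=1000, temperature=0.0)
print(ask("Which ten customers bought the most in 2023?"))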

Hope this helps. Please let me know if you have any further queries.

Hi,

Thank you, I ended up discovering the missing points.

The trick is in the Semantic Kernel skills. The skills are system prompts that define what answer the LLM will provide. We can include an entire explanation of our model in the skill, allowing the LLM to query our entire model.

The entire problem becomes a problem of prompt engineering.
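For reference, in the SK 0.x layout a skill is just a folder containing a skprompt.txt and a config.json. A hypothetical skprompt.txt carrying the model explanation could look like this (the table and column names are invented for illustration):

You are an assistant that answers questions about our sales lakehouse
by writing Spark SQL. The model contains:
Table sales: order_id (int), customer_id (int), order_date (date), amount (decimal)
Table customers: customer_id (int), name (string), country (string)
Use only these tables and columns.

Question: {{$input}}
Answer with a single SQL statement.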

However, it's important to change the JSON as well. The default JSON that comes with the skills sets the max_tokens parameter to 16,000. This is the maximum for the response, and since the model's context window is about 16,384 tokens in total, that leaves only about 384 tokens for the system prompt.

I doubt any SQL statement will reach 16,000 tokens, so max_tokens can be reduced to 5,000, for example, giving us good space to explain our model to the LLM.
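For example, the completion settings live in the skill's config.json (SK 0.x format; the description and values here are only an illustration):

{
  "schema": 1,
  "type": "completion",
  "description": "Answers questions about the lakehouse model",
  "completion": {
    "max_tokens": 5000,
    "temperature": 0.0,
    "top_p": 1.0
  }
}

With a 16,384-token context window, a max_tokens of 5,000 leaves roughly 11,000 tokens for the system prompt and the question.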

Kind Regards,

Dennes


Anonymous
Not applicable

Hi @DennesTorres ,
Glad that your query got resolved. Please continue using Fabric Community for any help regarding your queries.

We'd appreciate it if you could share your feedback on our feedback channel, which is open for the user community to upvote and comment on. This allows our product teams to effectively prioritize your request against our existing feature backlog and gives insight into the potential impact of implementing the suggested feature.
