Anonymous
Not applicable

Copilot Context Issue in Fabric Notebook

Hi all,

 

In Fabric, I'm trying to use Copilot in a Notebook. I have a DataFrame df that I created by reading a table from my lakehouse. When I run the following code, asking Copilot to summarize the data in df:

 

%describe
df

I receive the following error:

 

RuntimeError: StructuredContext.optimize() was called with a mandatory context that was too large to fit in the max_prompt_tokens limit. The mandatory context was 3626 tokens, but the limit was 3346 tokens.

Clearly, some input exceeds the maximum token limit. I have nothing else loaded into memory other than df, which has a shape of (137923, 13), though the error still occurs if I try with only 1,000 rows instead. I'm not aware of a way to control the context that Copilot includes before passing it to gpt-35-turbo-0125. Other magic commands such as %%chat and %%code work normally without issue.
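For context, df is loaded along these lines; the table name is a placeholder, and limit() is just one way to test against fewer rows:

# Read a table from the attached lakehouse (table name is a placeholder)
df = spark.read.table("my_table")

# The error persists even against a much smaller slice
df_small = df.limit(1000)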

 

Has anyone encountered this, or does anyone have tips on things to try? Also, if there's a better place to post this, please let me know. Thanks!

1 ACCEPTED SOLUTION
Anonymous
Not applicable

RESOLVED

The issue was that the Lakehouse I was loading the table from contained too many tables for Copilot's context (only 20 tables, each with ~100,000 rows).

For anyone else encountering this issue, try:

  1. Creating a new, larger Spark pool (the default is currently Medium; X-Large fixed my issue). See your Workspace settings.
  2. Reducing the size of your Lakehouse by deleting tables (not an option in my case).
  3. Creating a new Lakehouse containing the table you're trying to load and referencing that instead (this worked for me when the new Lakehouse was in a different Workspace); see the sketch after this list.
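
A rough sketch of option 3, assuming df still holds the source table in memory and the new Lakehouse is attached as the notebook's default (table name is a placeholder):

# Write the in-memory DataFrame out as a Delta table in the new (default) Lakehouse
df.write.mode("overwrite").saveAsTable("my_table")

The idea is that the new Lakehouse then contains only the table you need, which keeps Copilot's mandatory context small.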


5 REPLIES
Anonymous
Not applicable

Actually, it's not just %describe that has the issue; I get the same error if I try:

%%chat
Analyze the data in df

RuntimeError: StructuredContext.optimize() was called with a mandatory context that was too large to fit in the max_prompt_tokens limit. The mandatory context was 3649 tokens, but the limit was 3346 tokens.

 

Check this video: Microsoft Fabric Notebook Copilot Tutorial - YouTube, although it stops just short of the %describe example.

lbendlin
Super User
Super User

You're not accidentally omitting the second percent sign?

 

%%describe df

Anonymous
Not applicable

Thanks for the quick response! No, I tried that first, but for some reason this magic command prefers a single %:

UsageError: Cell magic `%%describe` not found (But line magic `%describe` exists, did you mean that instead?).
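
If you want to double-check which form a magic takes, the kernel can tell you. In an IPython-based kernel (which Fabric notebooks build on), this lists every registered line and cell magic:

%lsmagic

Line magics are listed with a single %, cell magics with %%.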
