jaryszek
Memorable Member
Changing string storage for DirectLake over OneLake

Hi guys,

Today I got the error shown below:

(screenshot: jaryszek_0-1762170061313.png)

I am using DirectLake over OneLake. Can I change this string storage setting in Tabular Editor 3 (TE3) or via XMLA for a DirectLake model?
How can I make this change?

Reference docs:
https://learn.microsoft.com/en-us/analysis-services/multidimensional-models/configure-string-storage...

Best,
Jacek

 

1 ACCEPTED SOLUTION

View solution in original post

4 REPLIES
DataVitalizer
Solution Sage

Hi @jaryszek 

 

The article you linked applies to SQL Server Analysis Services and Azure Analysis Services models, where you can change StringStoresCompatibilityLevel to 1100 to lift the 4 GB string store limit. That setting doesn't apply to Fabric DirectLake semantic models: DirectLake doesn't use string stores in the same way, so you can't change it in Tabular Editor or via XMLA.
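For context, on classic SSAS/AAS that property is usually changed by scripting the database to XMLA and editing the property there. A rough sketch, assuming the property name from the linked article; the database ID is made up, and the exact element placement and namespace prefixes may differ from what SSMS generates:

```xml
<!-- Hypothetical ALTER script for a classic SSAS database; NOT applicable to DirectLake. -->
<Alter xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
       AllowCreate="true" ObjectExpansion="ObjectProperties">
  <Object>
    <DatabaseID>MyDatabase</DatabaseID>
  </Object>
  <ObjectDefinition>
    <Database xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
      <ID>MyDatabase</ID>
      <Name>MyDatabase</Name>
      <!-- 1100 lifts the 4 GB string store limit, per the linked docs. -->
      <ddl200_200:StringStoresCompatibilityLevel>1100</ddl200_200:StringStoresCompatibilityLevel>
    </Database>
  </ObjectDefinition>
</Alter>
```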

 

If you’re hitting this error in DirectLake, the only real options I can suggest are to reshape the data in OneLake or switch the dataset to Import mode if you need those large text fields.


Did it work? 👍 A kudos would be appreciated
🟨 Mark it as a solution to help spread knowledge 💡

 

🟩 Follow me on LinkedIn

Thank you very much!

What do you mean by "reshape the data in OneLake"?

Best,
Jacek

You are welcome @jaryszek 

By “reshape the data in OneLake,” I mean cleaning or restructuring your source tables before DirectLake reads them:

  • Cut down very long or unique text columns (hash them or move them to a lookup).

  • Trim or split oversized fields.

  • Pre‑aggregate or filter data so fewer distinct values are loaded.

In short, you need to adjust the Lakehouse tables so DirectLake doesn’t choke on huge text fields.
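As a concrete illustration of the first bullet (hashing a long text column into a lookup table), here is a minimal sketch in plain Python; the row and column names are made up for the example. In a real Lakehouse you would do the same transformation with PySpark or Spark SQL before the table is read by DirectLake:

```python
import hashlib

# Hypothetical fact rows with a very long, high-cardinality text column.
rows = [
    {"id": 1, "amount": 10.0, "comment": "a very long free-text comment " * 50},
    {"id": 2, "amount": 20.0, "comment": "another unique blob of text " * 50},
]

def short_hash(text: str) -> str:
    """Stable 16-hex-digit surrogate key derived from the full text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]

# Build a lookup table: hash key -> full text (each distinct value stored once).
lookup = {}
for row in rows:
    key = short_hash(row["comment"])
    lookup[key] = row["comment"]
    row["comment_key"] = key   # small surrogate key kept in the fact table
    del row["comment"]         # the huge text column no longer reaches the model
```

After this, the fact table carries only a short fixed-width key, and the wide text lives in a separate lookup table you join to only when needed.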


Did it work? 👍 A kudos would be appreciated
🟨 Mark it as a solution to help spread knowledge 💡

 

🟩 Follow me on LinkedIn

Thank you very much!
