Hi guys,
Today I got this error:
I am using DirectLake over OneLake. Can I change this string storage setting in TE3 or via XMLA for DirectLake? How do I make this change?
Reference docs:
https://learn.microsoft.com/en-us/analysis-services/multidimensional-models/configure-string-storage...
Best,
Jacek
Hi @jaryszek
The article you linked applies to SQL Server Analysis Services and Azure Analysis Services models, where you can change StringStoresCompatibilityLevel to 1100 to lift the 4 GB string store limit. That setting doesn't apply to Fabric DirectLake datasets: DirectLake doesn't use string stores in the same way, so you can't change it in Tabular Editor or via XMLA.
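For context, here is a heavily abbreviated sketch of what that change looks like on classic SSAS multidimensional models, where the property is valid (the article you linked covers those). The database and dimension IDs are placeholders, and in practice you would script the full object definition from SSMS and change only this one property; none of this works against a DirectLake model.

```xml
<!-- SSAS multidimensional only, not DirectLake:
     raise the string store limit on one dimension -->
<Alter ObjectExpansion="ObjectProperties"
       xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyOlapDatabase</DatabaseID>
    <DimensionID>MyLargeDimension</DimensionID>
  </Object>
  <ObjectDefinition>
    <Dimension>
      <ID>MyLargeDimension</ID>
      <Name>MyLargeDimension</Name>
      <StringStoresCompatibilityLevel>1100</StringStoresCompatibilityLevel>
    </Dimension>
  </ObjectDefinition>
</Alter>
```

The object then needs a full reprocess before the new string store format takes effect.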
If you’re hitting this error in DirectLake, the only real options I can suggest are to reshape the data in OneLake or switch the dataset to Import mode if you need those large text fields.
Did it work? 👍 A kudos would be appreciated
🟨 Mark it as a solution to help spread knowledge 💡
Thank you very much,
"to reshape the data in OneLake " what does you mean by that?
Best,
Jacek
You are welcome @jaryszek
By “reshape the data in OneLake,” I mean cleaning or restructuring your source tables before DirectLake reads them:
Cut down very long or unique text columns (hash them or move them to a lookup).
Trim or split oversized fields.
Pre‑aggregate or filter data so fewer distinct values are loaded.
In short, you need to adjust the Lakehouse tables so DirectLake doesn't choke on huge text fields.
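Here's a minimal PySpark sketch of those three ideas, the kind of thing you'd run in a Fabric notebook before DirectLake reads the tables. All table and column names (sales_raw, long_description, amount, etc.) are made up for illustration:

```python
from pyspark.sql import functions as F

# Hypothetical Lakehouse table with an oversized free-text column.
# (In a Fabric notebook, `spark` is already provided.)
df = spark.read.table("sales_raw")

# 1) Trim the oversized field to a bounded length.
df = df.withColumn("long_description", F.substring("long_description", 1, 500))

# 2) Replace the high-cardinality text with a compact hash key and move
#    the full text out to a lookup table.
df = df.withColumn("description_key", F.sha2(F.col("long_description"), 256))
lookup = df.select("description_key", "long_description").dropDuplicates(["description_key"])
fact = df.drop("long_description")

# 3) Optionally pre-aggregate so fewer distinct values are loaded.
fact = fact.groupBy("order_date", "description_key").agg(F.sum("amount").alias("amount"))

# Write the reshaped tables back as Delta; point the DirectLake model at these.
lookup.write.mode("overwrite").format("delta").saveAsTable("description_lookup")
fact.write.mode("overwrite").format("delta").saveAsTable("sales_clean")
```

In the model you'd then relate sales_clean to description_lookup on description_key, so the big text column is only touched when a visual actually uses it.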
Did it work? 👍 A kudos would be appreciated
🟨 Mark it as a solution to help spread knowledge 💡
Thank you very much!