Hi everyone,
I’m getting the following error in my Power BI DirectLake model: “Error fetching data for this visual. The string store or binary store at compatibility level 1050 has reached its 4 GB limit. To store additional strings, you can change the StringStoresCompatibilityLevel property to 1100 and retry the process. This option is only available for databases with compatibility level 1100 or higher.”
I understand from other posts that this setting applies to SQL Server Analysis Services and Azure Analysis Services, but not to Fabric DirectLake datasets. DirectLake doesn’t use string stores the same way, so we can’t modify this property.
I was advised to “reshape the data in OneLake” or to switch the dataset to Import mode if I need to handle large text fields.
I’m trying to figure out how exactly I could reshape the data in my current model. I’m attaching a screenshot of my model design (see below).
The column that seems to cause the issue contains values like this one:
2025-04-01_/subscriptions/026638b2-1cd4-4348-ac1e-39cc36269aa4/resourcegroups/cloud-shell-storage-westeurope/providers/microsoft.storage/storageaccounts/csb100320023b1d6f59
I’d like to understand what’s the best way to make this field more Power BI–friendly in DirectLake. Should I shorten or hash it, move it to a separate lookup table, or maybe trim part of the path before loading it?
For each suggestion I'd appreciate an example, e.g. how to model the lookup table.
Any specific examples or guidance on how to reshape this data within OneLake (or in the dataflow before DirectLake) would be very helpful.
Thanks in advance for your time and ideas.
Best,
Jacek
Hi @jaryszek ,
Thank you for reaching out to the Microsoft Community Forum.
DirectLake cannot change StringStoresCompatibilityLevel the way SSAS/AAS can, so the only options are: reduce cardinality and/or string length, move the column out of the main fact table into a lookup table, replace the string with a surrogate key or hash, or trim unnecessary parts of the string.
Please refer to the reshaping options below.
1. If the column is a resource path, break it into components such as subscription ID, resource group, provider and account name. Then create a lookup table for these components and reference it via a surrogate key.
Use Dataflow Gen2 or Lakehouse SQL to split the string on the '/' delimiter (for example, Spark SQL's split()) and generate a surrogate key with ROW_NUMBER() or a hash.
2. If you don’t need the full path for reporting, replace it with a short hash such as SHA-256 or MD5 and keep the original in a separate table for drill-through (see the sketch after this list).
3. If only the storage account or resource group matters for analysis, extract that part and discard the rest.
For example, instead of storing the full path:
/subscriptions/026638b2-1cd4-4348-ac1e-39cc36269aa4/resourcegroups/cloud-shell-storage-westeurope/providers/microsoft.storage/storageaccounts/csb100320023b1d6f59
store only:
csb100320023b1d6f59
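A one-line Spark SQL sketch of this extraction, assuming a Lakehouse table FactUsage with a ResourcePath column (both names are illustrative):
-- Hypothetical: keep only the last path segment (the storage account name).
SELECT element_at(split(ResourcePath, '/'), -1) AS StorageAccountName
FROM FactUsage;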
Note: Keep fact tables to numeric measures and surrogate keys, move high-cardinality text columns to dimension tables, and avoid storing full URLs or paths in the main table.
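For option 2, here is a minimal Spark SQL sketch, again assuming a Lakehouse table FactUsage with a ResourcePath column (names are illustrative):
-- Hypothetical: replace the long path with a short hash and keep the original
-- string in a separate table used only for drill-through.
CREATE TABLE ResourcePathDetail AS
SELECT DISTINCT
    sha2(ResourcePath, 256) AS ResourceHash,  -- 64-character hex digest
    ResourcePath                              -- full original path
FROM FactUsage;
The fact table would then store sha2(ResourcePath, 256) in place of the path itself.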
Please refer to the two alternative reshaping options below.
1. Try the below M code to split the resource path into components.
let
    // TableName is a placeholder for your source table.
    Source = TableName,
    // Keep a copy of the original path so the lookup can later be merged back to the fact table.
    KeepFullPath = Table.DuplicateColumn(Source, "ResourcePath", "FullResourcePath"),
    // Split the path on "/" into its nine segments.
    AddSplitColumns = Table.SplitColumn(
        KeepFullPath,
        "ResourcePath",
        Splitter.SplitTextByDelimiter("/", QuoteStyle.Csv),
        {"Part1", "Part2", "Part3", "Part4", "Part5", "Part6", "Part7", "Part8", "Part9"}
    ),
    // Give the meaningful segments descriptive names.
    RenameColumns = Table.RenameColumns(AddSplitColumns, {
        {"Part1", "DatePrefix"},
        {"Part2", "Subscriptions"},
        {"Part3", "SubscriptionID"},
        {"Part4", "ResourceGroups"},
        {"Part5", "ResourceGroupName"},
        {"Part6", "Providers"},
        {"Part7", "ProviderName"},
        {"Part8", "StorageAccounts"},
        {"Part9", "StorageAccountName"}
    }),
    // Drop the constant literal segments ("subscriptions", "resourcegroups", ...).
    RemoveExtraColumns = Table.RemoveColumns(RenameColumns, {"DatePrefix", "Subscriptions", "ResourceGroups", "Providers", "StorageAccounts"}),
    // Add a surrogate key for building relationships.
    AddResourceKey = Table.AddIndexColumn(RemoveExtraColumns, "ResourceKey", 1, 1, Int64.Type)
in
    AddResourceKey
Note: It splits the path into parts, renames the key parts (SubscriptionID, ResourceGroupName, ProviderName, StorageAccountName), keeps the original string in FullResourcePath, and adds a ResourceKey (surrogate key) for building relationships. You can then create a separate lookup table with these columns and keep only ResourceKey in your fact table (merge the fact table with the lookup on the original path to stamp the key onto the fact rows).
In Power BI Dataflow Gen2 or Power BI Desktop, go to Transform Data, paste the M code into the Advanced Editor, and adjust TableName and the column names. Load the lookup table separately and relate it to your fact table using ResourceKey.
2. A SQL script for the Lakehouse that splits your resource path into components and creates a lookup table with a surrogate key (run it as Spark SQL in a notebook, since the Lakehouse SQL analytics endpoint is read-only).
-- Build the lookup (dimension) table: one row per distinct path, with a surrogate key.
-- split() returns a 0-indexed array, so [2] is the subscription ID, [4] the resource
-- group, [6] the provider and [8] the storage account for paths shaped like the example.
CREATE TABLE ResourceLookup AS
SELECT
    ROW_NUMBER() OVER (ORDER BY ResourcePath) AS ResourceKey,
    ResourcePath,
    split(ResourcePath, '/')[2] AS SubscriptionID,
    split(ResourcePath, '/')[4] AS ResourceGroupName,
    split(ResourcePath, '/')[6] AS ProviderName,
    split(ResourcePath, '/')[8] AS StorageAccountName
FROM (
    SELECT DISTINCT ResourcePath FROM FactUsage
) AS DistinctPaths;

-- Rebuild the fact table with the surrogate key instead of the long string.
CREATE TABLE FactUsageReshaped AS
SELECT
    f.Date,
    rl.ResourceKey,
    f.Metric
FROM FactUsage f
JOIN ResourceLookup rl
    ON f.ResourcePath = rl.ResourcePath;
Note: ROW_NUMBER() generates a surrogate key for each unique resource path. split(ResourcePath, '/') breaks the path into a 0-indexed array from which the named components are picked; adjust the indexes to your path structure. ResourceLookup becomes your dimension table, and FactUsageReshaped replaces the long string with ResourceKey.
The fact table now stores an integer ResourceKey instead of the long string, reducing the string store size, while the lookup table holds the descriptive attributes for drill-through or filtering.
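As a quick sanity check (a hypothetical query reusing the table names from the script above), join the reshaped fact back to the lookup:
-- Spot-check: confirm the surrogate keys line up after the reshape.
SELECT f.Date, f.Metric, rl.StorageAccountName, rl.ResourcePath
FROM FactUsageReshaped f
JOIN ResourceLookup rl ON f.ResourceKey = rl.ResourceKey
LIMIT 10;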
I hope this information helps. Please do let us know if you have any further queries.
Regards,
Dinesh
Thank you very much,
one question:
"1. If the column is a resource path, break it into components like subscription ID, resource group, provider and account name. Then create a lookup table for these components and reference it via a surrogate key."
"and keep the original in a separate table for drill-through."
I already have a lookup table, my dimension table, which you can see on my screen: Dim_EA_AmortizedCosts_Resources_Changed.
Would you advise keeping the big string in this table and creating just a surrogate INT key to connect it with the fact table,
OR
would you suggest adding an entirely new lookup table, connected to Dim_EA_AmortizedCosts_Resources_Changed, with only the INT surrogate key and the string?
And why?
Best,
Jacek
Hi @jaryszek ,
Thank you for the update. Please consider the two options below.
1. Keep the big string in Dim_EA_AmortizedCosts_Resources_Changed and add a surrogate key.
This is simple: you only modify the existing dimension table, and there is no extra join complexity. However, the dimension table still holds the large text column, which contributes to the string store size (though less critically than in the fact table). If the dimension table is also very large and queried often, performance may be affected.
2. Create a new lookup table with only ResourceKey + the original string.
This keeps the main dimension lean, with only the attributes needed for analysis. The big string is isolated in a separate table used only for drill-through or detailed views, which reduces memory pressure on visuals that don’t need the full path. It adds one more relationship (Fact --> Dim --> Lookup), so the model is slightly more complex.
Please follow the below approach for DirectLake.
1. Fact table: only numeric measures + surrogate keys.
2. Main dimension: only the attributes needed for slicing/filtering.
3. Separate detail table: only long text fields that are used rarely, e.g. for drill-through.
Note: If you want to minimize memory and improve performance, please try option 2, since the full path is rarely used in visuals.
Please refer to the updated model below.
FactUsage --> Dim_EA_AmortizedCosts_Resources_Changed --> ResourcePathLookup
Note: FactUsage stores ResourceKey. Dim_EA_AmortizedCosts_Resources_Changed stores attributes like SubscriptionID, ResourceGroup, Provider and StorageAccount. ResourcePathLookup stores ResourceKey + the full original path (see the sketch below).
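A minimal Spark SQL sketch of option 2, assuming the dimension currently carries both ResourceKey and the original path column (column names are illustrative):
-- Hypothetical: isolate the big string in its own lookup, keyed by the existing
-- surrogate key, then reload the dimension without the path column.
CREATE TABLE ResourcePathLookup AS
SELECT
    ResourceKey,
    ResourcePath AS FullResourcePath  -- the large string lives only here
FROM Dim_EA_AmortizedCosts_Resources_Changed;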
I hope this information helps. Please do let us know if you have any further queries.
Regards,
Dinesh
Hi @jaryszek ,
We haven’t heard back from you on the last response and were just checking to see if you have a resolution yet. If you have any further queries, do let us know.
Regards,
Dinesh