In Snowflake we have a warehouse where all keys use the binary(36) data type, which is not readable in Power BI. What would be the best and most efficient way to load those keys into Power BI? I tried casting and hashing but didn't find any efficient solution.
Hashing seemed most logical: even though the keys are long, at least they become integers. But for some reason I then had issues on the join; the same values apparently produced different hashes.
So I would really appreciate it if someone could help me out and let me know how I can load those keys efficiently into the Power BI model.
I have tables with over 100 million rows, so efficiency is a big priority.
I still need help on this, please.
You can try this:
```
Binary.ToText([Content], BinaryEncoding.Hex)
```
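For context, here is a minimal Python sketch (an illustration, not part of the original thread) of what that hex encoding produces: every byte becomes two hex characters, so a binary(36) key turns into a 72-character text key, which is why the result is readable but still wide for a join column.

```python
# Illustration of what Power Query's Binary.ToText(value, BinaryEncoding.Hex)
# does, using Python's bytes.hex() as the equivalent: each byte maps to
# two hexadecimal characters.
import os

key = os.urandom(36)   # stand-in for one binary(36) key value
hex_key = key.hex()

print(len(hex_key))    # 72 — the text key is twice the byte length
```

So a hex-encoded binary(36) key is twice the byte length as text, which is readable but not cheap on 100M+ rows; an integer surrogate key built at the source stays the most compact option.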
> I have tables over 100 mil rows, so efficiency is a big priority.

Since this is a very large dataset, you can't expect this query to be efficient either; ideally the conversion should be done at the source.
Yup, having such strings as keys in a huge dataset will be very inefficient. That's why I am trying to make the changes at the source:
```
ABS(HASH(FK_PRODUCT_CATEGORY)) % 100 AS Product_Category_ID
```
I have tried something like this, increasing or decreasing the length of the hash based on the cardinality; for some columns I didn't shorten it at all. However, when I import the full hashed values and create the relationship in Power BI, most of the values that matched as binary no longer match as hashes.
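One likely contributor to the mismatches is the `% 100` truncation itself: only 100 possible IDs exist, so any column with more than 100 distinct values is guaranteed collisions by the pigeonhole principle, and the relationship silently joins unrelated rows. A small Python sketch of the effect (sha256 here is only a deterministic stand-in for Snowflake's HASH, an assumption for illustration):

```python
# Sketch of why truncating a hash with % 100 breaks a key column:
# 1,000 distinct keys can land in at most 100 buckets, so distinct
# keys must collide (pigeonhole principle).
import hashlib

def bucket(key: str, modulus: int = 100) -> int:
    # Deterministic stand-in for ABS(HASH(col)) % 100 at the source
    digest = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    return digest % modulus

keys = [f"category-{i}" for i in range(1000)]
buckets = {bucket(k) for k in keys}
print(len(buckets))   # at most 100 — 1,000 keys squeezed into <=100 ids
```

Note that the collision problem is separate from the "same value, different hash" symptom: a deterministic hash of byte-identical inputs should always match, so if even the full (untruncated) hashes disagree across tables, it is worth checking that both sides hash exactly the same representation (same type, no casting to text on one side, no trailing padding). A plain integer surrogate key maintained at the source avoids both issues.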