In Snowflake we have a warehouse where all keys have the binary(36) data type, which is not readable in Power BI. What would be the best and most efficient way to load those keys into Power BI? I tried casting and hashing, but didn't find any efficient solution.
Hashing seemed most logical: even though the keys are long, at least they become integers. But for some reason I then had an issue on the join — the same values had different hashes, I suppose.
So I would really appreciate it if someone could help me out and let me know how I can load those keys efficiently into the Power BI model.
I have tables over 100 mil rows, so efficiency is a big priority.
I still need help on this please
You can try converting the binary column to hex text in Power Query (here `[Content]` is the binary key column — replace it with your column name):

```
Binary.ToText([Content], BinaryEncoding.Hex)
```
> I have tables over 100 mil rows, so efficiency is a big priority.

Since this is a very large dataset, you can't expect this query to be efficient. This ideally should be done at the source.
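One thing worth noting about the hex approach: it is deterministic, so the same 36-byte value always produces the same text on both sides of a relationship, and joins keep matching after conversion. A minimal Python sketch of that property (the table setup is illustrative, not Power BI code):

```python
import os

def key_to_hex(raw: bytes) -> str:
    """Deterministically encode a binary key as hex text
    (analogous to Binary.ToText(..., BinaryEncoding.Hex))."""
    return raw.hex().upper()

# Two tables sharing the same binary(36) keys.
keys = [os.urandom(36) for _ in range(3)]
fact_keys = {key_to_hex(k) for k in keys}
dim_keys = {key_to_hex(k) for k in keys}

# Hex encoding is stable, so every key still matches after conversion.
assert fact_keys == dim_keys
```

The trade-off is size: a 36-byte key becomes a 72-character string, which is heavy for the VertiPaq engine at 100M+ rows — hence the advice to replace the keys with integer surrogates at the source instead.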
Yup, having such strings as keys in a huge dataset will be very inefficient. That's why I am trying to make the changes at the source:

```
ABS(HASH(FK_PRODUCT_CATEGORY)) % 100 AS Product_Category_ID
```

I tried something like this, then increased or decreased the length of the hash based on the cardinality; for some columns I didn't shorten at all. However, when I import the full hashed values and create the relationship in Power BI, most of the values that matched as binary data types no longer match as hashed values.