Microsoft Fabric Community Conference 2025, March 31 - April 2, Las Vegas, Nevada. Use code FABINSIDER for a $400 discount.
In Snowflake we have a warehouse where all the keys have the BINARY(36) data type, which is not readable in Power BI. What would be the best and most efficient way to load those keys into Power BI? I tried casting and hashing but didn't find an efficient solution.
Hashing seemed the most logical option: even though the keys are long, they at least become integers. But for some reason the hashed keys then broke the join; the same values apparently produced different hashes.
So I would really appreciate it if someone could help me out and let me know how to load those keys efficiently into the Power BI model.
I have tables with over 100 million rows, so efficiency is a big priority.
I still need help on this please
You can try this:
Binary.ToText([Content], BinaryEncoding.Hex)
"I have tables over 100 mil rows, so efficiency is a big priority."
Since this is a very large dataset, you can't expect that query step to be efficient. This conversion ideally should be done at the source.
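If you do push it to the source, Snowflake can emit the keys as hex text directly, which is the server-side equivalent of the Binary.ToText step above. A minimal sketch, assuming a table DIM_PRODUCT with a BINARY(36) column FK_PRODUCT_CATEGORY (names are placeholders):

```sql
-- Sketch: expose a BINARY(36) key in a Power BI-readable form at the source.
-- HEX_ENCODE turns 36 bytes into a 72-character hex string;
-- TO_VARCHAR(..., 'HEX') is an equivalent spelling.
SELECT
    HEX_ENCODE(FK_PRODUCT_CATEGORY)        AS product_category_hex,
    TO_VARCHAR(FK_PRODUCT_CATEGORY, 'HEX') AS product_category_hex_alt
FROM DIM_PRODUCT;
```

This keeps the conversion out of Power Query, but the resulting 72-character strings are still expensive keys for a 100M-row model, so an integer surrogate is usually the better end state.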
Yup, having such long strings as keys in a huge dataset will be very inefficient. That's why I am trying to make the change at the source:
ABS(HASH(FK_PRODUCT_CATEGORY)) % 100 AS Product_Category_ID
I have tried something like this, then increased or decreased the length of the hash based on the column's cardinality. For some columns I didn't shorten the hash at all. However, when I import the full hashed values and create the relationship in Power BI, most of the values that matched on the binary keys don't match on the hashed values.
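Two things can go wrong with the approach above. First, `% 100` collapses every key into only 100 possible values, so distinct categories collide and the relationship fans out. Second, Snowflake's HASH is deterministic for identical inputs, so "same value, different hash" usually means the two sides weren't hashing the same thing (e.g., BINARY on one side and a VARCHAR cast on the other). A collision-free alternative is to build the surrogate once in a mapping table and join that same table into both sides; a sketch with assumed table and column names:

```sql
-- Sketch (assumed names): assign each distinct binary key a dense
-- integer surrogate exactly once, in one place.
CREATE OR REPLACE TABLE KEY_MAP AS
SELECT
    FK_PRODUCT_CATEGORY,
    ROW_NUMBER() OVER (ORDER BY FK_PRODUCT_CATEGORY) AS PRODUCT_CATEGORY_ID
FROM (SELECT DISTINCT FK_PRODUCT_CATEGORY FROM FACT_SALES);

-- Join the same map into both the fact and the dimension, so the
-- integer keys are guaranteed to agree on both sides of the relationship.
SELECT f.*, m.PRODUCT_CATEGORY_ID
FROM FACT_SALES f
JOIN KEY_MAP m USING (FK_PRODUCT_CATEGORY);
```

Because both tables get their surrogate from the identical mapping, the keys cannot disagree, and Power BI ends up with small, unique integer keys regardless of cardinality.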