March 31 - April 2, 2025, in Las Vegas, Nevada. Use code MSCUST for a $150 discount! Early bird discount ends December 31.
We are exporting JSON files from a MongoDB with Data Factory. The UUID in MongoDB is represented like this: "5aa399c0-2701-401e-8e83-26b5ce82abc7". In the exported JSON file we get the following values:
"_id": {
"$binary": "wJmjWgEnHkCOgya1zoKrxw==",
"$type": "03"
},
Any ideas how I can get the representation as a GUID in the export file instead of the "ugly" binary value?
I already tried modifying the connection string to MongoDB, but nothing worked. And there is no setting to configure the driver the Data Factory uses to read the values.
The 'ugly' string is base64 encoded.
We fetch the file from MongoDB with a copy activity and write it to Blob Storage. Afterwards we import it with Databricks and decode the base64.
The function we use:
# Requires the third-party 'bitstring' package (pip install bitstring)
import base64
import bitstring

def base64ToGUID(b64str):
    try:
        # Hex string of the 16 decoded bytes
        s = bitstring.BitArray(bytes=base64.b64decode(b64str)).hex
        def rev2(s_):
            # Split into byte pairs and reverse their order (little-endian fields)
            def chunks(n):
                for i in range(0, len(s_), n):
                    yield s_[i:i+n]
            return "".join(list(chunks(2))[::-1])
        # The first three GUID fields are stored little-endian, the rest as-is
        return "-".join([rev2(s[:8]), rev2(s[8:][:4]), rev2(s[12:][:4]), s[16:][:4], s[20:]])
    except Exception:
        return None
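If you would rather avoid the third-party bitstring dependency: the byte reordering above matches what `uuid.UUID(bytes_le=...)` does for the legacy .NET GUID encoding ($type 03), so a standard-library-only sketch of the same conversion could look like this (the function name is mine, not from the thread):

```python
import base64
import uuid

def base64_to_guid(b64str):
    """Decode a legacy .NET binary UUID ($type 03) to its GUID string.

    bytes_le treats the first three UUID fields as little-endian,
    which is the byte order the legacy C# driver used when storing GUIDs.
    """
    try:
        return str(uuid.UUID(bytes_le=base64.b64decode(b64str)))
    except (ValueError, TypeError):
        return None
```

With the value from the question, `base64_to_guid("wJmjWgEnHkCOgya1zoKrxw==")` yields "5aa399c0-2701-401e-8e83-26b5ce82abc7".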
We read the JSON into a Spark dataframe, convert it to a pandas dataframe, and apply the decoding:
# Create pandas dataframe
df_pandas = df_spark.toPandas()
# Decode the base64 columns. Note: iterrows() yields copies, so assigning
# to the row objects would not write back to the dataframe; use .apply instead.
for col in ["row_name_1", "row_name_2", "row_name_n"]:
    df_pandas[col] = df_pandas[col].apply(base64ToGUID)
Hope this helps.
Alternative solutions can be found here: Convert UUID from MongoDb to UNIQUEIDENTIFIER
Thanks for your reply. I'm looking for another way because we don't use Spark.
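For a Spark-free route, one option is to post-process the exported JSON file directly with the Python standard library, converting every legacy binary field before loading the data anywhere else. This is a minimal sketch under my own assumptions (that every {"$binary", "$type": "03"} pair in the export is a legacy .NET GUID; the file names in the commented usage are hypothetical):

```python
import base64
import json
import uuid

def decode_legacy_guids(obj):
    """Recursively replace {"$binary": ..., "$type": "03"} dicts with GUID strings."""
    if isinstance(obj, dict):
        if obj.get("$type") == "03" and "$binary" in obj:
            return str(uuid.UUID(bytes_le=base64.b64decode(obj["$binary"])))
        return {k: decode_legacy_guids(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [decode_legacy_guids(v) for v in obj]
    return obj

# Hypothetical usage: rewrite an exported file
# with open("export.json") as f:
#     data = json.load(f)
# with open("export_decoded.json", "w") as f:
#     json.dump(decode_legacy_guids(data), f)
```

Applied to the document from the question, the "_id" value becomes the plain string "5aa399c0-2701-401e-8e83-26b5ce82abc7".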
@GraceGu could you suggest anything? Can I use Power Query (Dataflow Gen2 in Fabric) or transform it in Data Pipeline before/after Copy?
Would you mind filing a support case for this? We'll need engineering to track this one down. Thank you!
I faced the same problem. Any solutions?