ToddChitt
Super User

PySpark Notebook to process complex JSON

Hello. I am using a PySpark notebook in Fabric to process incoming JSON files. The notebook reads the JSON file into a base dataframe, then parses it out into two other dataframes that get written to Lakehouse tables.

The JSON is complex and sometimes some elements are missing. 

For example, most times I get JSON data structured like this:

(screenshot: JSON where "positionData" contains a "manager" object with "id" and "employeeNumber" fields)

But every once in a while I get something like this:

(screenshot: JSON where "positionData.manager" is null)

 

I have tried the PySpark When/Otherwise syntax like this in my dataframe select statement:

when(col("positionData.manager").isNull(),None).otherwise(col("positionData.manager.id")).alias("ManagerId"),
 
The trouble is that both paths are evaluated, even if one is not needed. For example, if the data is like the lower screenshot, with 
"manager": null
I still get an exception:
AnalysisException: [INVALID_EXTRACT_BASE_FIELD_TYPE] Can't extract a value from "manager". Need a complex type [STRUCT, ARRAY, MAP] but got "STRING".
 
I also tried defining a function to handle this, but it exhibits the same issue:
# Function to handle null values in various fields
def handle_nulls(entity_name, field_name):
    return when(col(f"{entity_name}").isNull(), None).otherwise(col(f"{entity_name}.{field_name}"))
 
I also tried wrapping the return statement in a try/except block, but that still fails with the same error.
 
# Function to handle null values in various fields
def handle_nulls(entity_name, field_name):
    try:
        return when(col(f"{entity_name}").isNull(), None).otherwise(col(f"{entity_name}.{field_name}"))
    except:
        return None

 

 

I'm fairly new to PySpark. Does anyone have any suggestions?

Thanks in advance.









1 ACCEPTED SOLUTION

Hi @ToddChitt,

I tried to repro your case and it is working fine.
Please find the code below:


Sample JSON:

[
  {
    "id": "00000001-0000-0000-0000-000000000000",
    "positionData": {
      "manager": {
        "id": "00000002-0000-0000-0000-000000000000",
        "employeeNumber": "1234"
      }
    }
  },
  {
    "id": "00000001-0000-0000-0000-000000000000",
    "positionData": {
      "manager": null
    }
  }
]



Sample Code:

from pyspark.sql.types import StructType, StructField, StringType, ArrayType
from pyspark.sql.functions import col, coalesce, lit

# Define the schema for the nested objects
schema = StructType([
    StructField("id", StringType(), True),
    StructField("positionData", StructType([
        StructField("manager", StructType([
            StructField("id", StringType(), True),
            StructField("employeeNumber", StringType(), True)
        ]), True)
    ]), True)
])

# Read JSON data with multiline option and schema
df = spark.read.option("multiline", "true").json("Files/testing.json", schema=schema)

df = df.withColumn("ManagerId", coalesce(col("positionData.manager.id"), lit(None)))

display(df)
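
With the sample JSON above, ManagerId should come out as "00000002-0000-0000-0000-000000000000" for the first record and null for the second (a reasoning sketch, not a captured run), since the explicit schema keeps "manager" typed as a struct even when its value is null.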

 
Please try this and let me know if you have further queries.


9 REPLIES
ToddChitt
Super User

This is wonderful, thank you.
I guess now I need to learn about defining my JSON schema ahead of time, and building my dataframes using the "withColumn" syntax instead of what I was doing, which was a straight df.select(...).

More stuff to learn, but that's OK.
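
For what it's worth, once the schema is declared explicitly, withColumn isn't strictly required either; a plain select should work too, because "positionData.manager" always resolves as a struct and the extraction simply returns null when the manager object is null. A minimal sketch, assuming the schema from the accepted solution has been applied (column aliases here are illustrative):

from pyspark.sql.functions import col

# Sketch: straight select over the explicitly-typed dataframe; no when/otherwise needed.
parsed = df.select(
    col("id"),
    col("positionData.manager.id").alias("ManagerId"),
    col("positionData.manager.employeeNumber").alias("ManagerEmployeeNumber")
)
display(parsed)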

 

Thanks again.









Hi @ToddChitt,

Glad to know that your issue got resolved. Please continue using the Fabric Community for your further queries.

ToddChitt
Super User

I have also tried defining a function like this:

# Function to handle null values in various fields
def handle_nulls(entity_name, field_name):
    try:
        #return when(col(f"{entity_name}").isNull(), None).otherwise(col(f"{entity_name}.{field_name}"))
        return  coalesce(when(col(f"{entity_name}").isNull(), lit("NULL Value string")).otherwise(None), col(f"{entity_name}.{field_name}"))
    except: return None
And then call that function for a column like this:
handle_nulls("positionData.manager", "id").alias("ManagerId"),
But that generates the same error, which I don't really understand because of the try/except block. Maybe I'm not structuring that portion properly. I would think that if there is an error inside the TRY, then the EXCEPT takes over. 
What am I missing on this one?
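
One likely reason the try/except never fires (a hedged explanation, based on how Spark builds expressions): handle_nulls only constructs an unresolved Column expression, which always succeeds, and the AnalysisException is raised later, when Spark analyses the select (or display) against the dataframe's schema, outside the function and therefore outside the try block. A minimal sketch, assuming df was read without an explicit schema from a file where "manager" is always null:

from pyspark.sql.functions import col, when

# Building the expression never fails; nothing is resolved against the schema yet.
expr = when(col("positionData.manager").isNull(), None) \
    .otherwise(col("positionData.manager.id"))

try:
    result = df.select(expr.alias("ManagerId"))   # the AnalysisException surfaces here
except Exception as e:
    print(type(e).__name__)                       # a try around the select would catch it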









ToddChitt
Super User

I had to strip out some of the other stuff that is not relevant to the topic.

Case 1: the positionData.manager object has an id and employeeNumber field that I need to grab:

{
  "id": "00000001-0000-0000-0000-000000000000",
  "positionData": {
    "manager": {
      "id": "00000002-0000-0000-0000-000000000000",
      "employeeNumber": "1234"
    }
  }
}

 

Case 2: the manager is simply null:

{
  "id": "00000001-0000-0000-0000-000000000000",
  "positionData": {
    "manager": null
  }
}

 

In Case 2, we cannot navigate down to col("positionData.manager.id") because it doesn't exist. Hence the error. This seems to happen regardless of the function used (when/otherwise or coalesce).

I have no control over the incoming JSON structure. 
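
If declaring the schema up front (the approach shown in the accepted solution) ever turns out not to be an option, one possible workaround is to inspect the dataframe's resolved schema in Python before building the expression, and only reference positionData.manager.id when "manager" actually came through as a struct. A sketch only, not tested against the real files:

from pyspark.sql.types import StructType
from pyspark.sql.functions import col, lit

def manager_id_col(df):
    # Look at the resolved schema first, so the expression never references
    # positionData.manager.id when "manager" was not inferred as a struct
    # (e.g. a file where every manager is null).
    manager_type = df.schema["positionData"].dataType["manager"].dataType
    if isinstance(manager_type, StructType) and "id" in manager_type.fieldNames():
        return col("positionData.manager.id").alias("ManagerId")
    return lit(None).cast("string").alias("ManagerId")

result = df.select(col("id"), manager_id_col(df))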

 

Any suggestion would be appreciated.
Thanks in advance.









ToddChitt
Super User

@v-gchenna-msft Unfortunately, the COALESCE function does the same thing as WHEN/OTHERWISE: it evaluates all of the paths offered, even though it is only going to take ONE of them. In my case, one of the paths results in an error as shown above, and even though the logic will never return that element, it still has to be evaluated.

 

A bit of a made-up example:

COALESCE ( NULL, "some string not null", 1/0)

This will error out on the 1-divided-by-zero path even though the logic is to return "some string not null".

 

Any other suggestions?









Hi @ToddChitt,

Can you please share the JSON, so I can try it at my end and make a suggestion?

ToddChitt
Super User

While I have not tried it, I don't think it is going to work, as is:

>>In this example, coalesce will first try to access the value of the column "positionData.manager.id". If it's null, it will return None instead.<<

The issue is NOT that there is a NULL value in column "positionData.manager.id"; it is that the column does not exist and cannot be found. If it is there at all, it is part of some nested JSON structure.

If I have data like this picture from the original post:

(screenshot: the JSON from the original post where "positionData.manager" is null)

and I try to reference col("positionData.manager.id") then I get this error:

AnalysisException: [INVALID_EXTRACT_BASE_FIELD_TYPE] Can't extract a value from "manager". Need a complex type [STRUCT, ARRAY, MAP] but got "STRING".
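
The "got STRING" part of the error hints that, for a file where every record has "manager": null, Spark's schema inference typed "manager" as a plain string rather than a struct. A quick way to check what was actually inferred (the path below is illustrative):

# Diagnostic sketch: print what Spark actually inferred before building any select.
df_raw = spark.read.option("multiline", "true").json("Files/incoming.json")
df_raw.printSchema()
# If every record in the file has "manager": null, the inferred type is typically
# StringType rather than a struct, so positionData.manager.id can never be extracted.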

 

It might be possible to use nested COALESCE statements and / or WHEN/OTHERWISE functions.

 

I'm going to have to experiment. Thanks for the tip.









v-gchenna-msft
Community Support

Hi @ToddChitt,

Thanks for using Fabric Community.

PySpark provides a function called coalesce that allows you to specify a sequence of columns. The first non-null value in the sequence is returned. You can use this function to handle null values in your when expression like this:

 

from pyspark.sql.functions import col, coalesce, lit

df = df.withColumn("ManagerId", coalesce(col("positionData.manager.id"), lit(None)))

 

In this example, coalesce will first try to access the value of the column "positionData.manager.id". If it's null, it will return None instead.
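
For illustration (the "ManagerKey" name and the fallback choice are hypothetical, and this assumes "manager" is resolved as a struct, e.g. via an explicit schema), coalesce can also take several candidate columns plus a literal default and return the first non-null value per row:

from pyspark.sql.functions import coalesce, col, lit

# First non-null per row: the manager id, else the manager's employeeNumber, else "unknown".
df = df.withColumn(
    "ManagerKey",
    coalesce(col("positionData.manager.id"),
             col("positionData.manager.employeeNumber"),
             lit("unknown"))
)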


Can you please check this - pyspark.sql.functions.coalesce — PySpark 3.1.1 documentation (apache.org)

Hope this is helpful. Please let me know in case of further queries.

 
