<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Reusable function for data transformation - user data functions in Data Engineering</title>
    <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4733929#M10193</link>
    <description>&lt;P&gt;&lt;SPAN&gt;Hello&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/308748"&gt;@tinbaj&lt;/a&gt;,&lt;BR /&gt;We are following up once again regarding your query. Could you please confirm if the issue has been resolved through the support ticket with Microsoft?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;If the issue has been resolved, we kindly request you to share the resolution or key insights here to help others in the community. If we don’t hear back, we’ll go ahead and close this thread.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Should you need further assistance in the future, we encourage you to reach out via the Microsoft Fabric Community Forum and create a new thread. We’ll be happy to help.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Thank you for your understanding and participation.&lt;/SPAN&gt;&lt;/P&gt;</description>
    <pubDate>Mon, 16 Jun 2025 17:50:52 GMT</pubDate>
    <dc:creator>v-ssriganesh</dc:creator>
    <dc:date>2025-06-16T17:50:52Z</dc:date>
    <item>
      <title>Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4720199#M9909</link>
      <description>&lt;P&gt;I retrieve data from JDE as the source, and the table contains date fields where the information is stored as Julian dates. Currently, there are 8 different sources, with one notebook for each source, all located in the same workspace. Therefore, the same function to convert Julian dates to standard dates is defined in all 8 notebooks. Since all notebooks use the same code, is it possible within the Fabric framework to create a reusable component, like a user-defined function, that contains the Julian-to-standard-date transformation code? This function could be called from all the notebooks, making the process more efficient.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Wed, 04 Jun 2025 17:53:38 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4720199#M9909</guid>
      <dc:creator>tinbaj</dc:creator>
      <dc:date>2025-06-04T17:53:38Z</dc:date>
    </item>
    <item>
      <title>Re: Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4720609#M9913</link>
      <description>&lt;P&gt;Hello &lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/308748"&gt;@tinbaj&lt;/a&gt;,&lt;BR /&gt;Thank you for reaching out with your query. &lt;BR /&gt;&lt;BR /&gt;To streamline your Julian date-to-standard date conversion across all eight notebooks, I recommend using Fabric User Data Functions (UDFs). You can create a single UDF in your Fabric workspace to define the conversion logic, which can then be called from all notebooks. This eliminates code duplication, simplifies maintenance, and ensures consistency across your JDE data sources. Simply create a UDF item, define the conversion function, and invoke it in each notebook. For more details, check the Fabric User Data Functions documentation: &lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/user-data-functions/user-data-functions-overview" target="_blank"&gt;Overview - Fabric User data functions (preview) - Microsoft Fabric | Microsoft Learn&lt;/A&gt;.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;If this information is helpful, please &lt;STRONG&gt;“Accept as solution”&lt;/STRONG&gt; and give a &lt;STRONG&gt;"kudos"&lt;/STRONG&gt; to assist other community members in resolving similar issues more efficiently.&lt;BR /&gt;Thank you.&lt;/P&gt;</description>
      <pubDate>Thu, 05 Jun 2025 04:49:22 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4720609#M9913</guid>
      <dc:creator>v-ssriganesh</dc:creator>
      <dc:date>2025-06-05T04:49:22Z</dc:date>
    </item>
    <item>
      <title>Re: Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4721603#M9938</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/882998"&gt;@v-ssriganesh&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for your response. I created a user data function as suggested; however, it did not work. Here is the code and scenario:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;User data function&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;from datetime import datetime, timedelta
import fabric.functions as fn
import logging
import pandas as pd

udf = fn.UserDataFunctions()

@udf.function()
def convert_julian_to_date(juliandate: str) -&amp;gt; str:
    # Extract year and day parts from the Julian date string
    final_date = ""
    if juliandate:
        year = (int(juliandate[:1]) + 19) * 100 + int(juliandate[1:3])
        day_of_year = int(juliandate[3:]) - 1

        # Create a date object for January 1st of the given year
        date = datetime(year, 1, 1)

        # Add the day of year to January 1st
        final_date = date + timedelta(days=day_of_year)

    return final_date&lt;/PRE&gt;&lt;P&gt;Code in the notebook:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Instantiate the function&lt;/STRONG&gt;&lt;BR /&gt;data_functions = notebookutils.udf.getFunctions('data_functions')&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Test the function&lt;/STRONG&gt;&lt;BR /&gt;data_functions.convert_julian_to_date('123241')&lt;BR /&gt;Output: '2023-08-29 00:00:00'&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Call the function for a dataframe column, and it fails.
Error:&amp;nbsp;TypeError: Column is not iterable&lt;/STRONG&gt;&lt;BR /&gt;df_silver = df_silver.withColumn('request_date', data_functions.convert_julian_to_date(df_silver['request_date']))&lt;/P&gt;&lt;PRE&gt;---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[46], line 1
----&amp;gt; 1 df_silver = df_silver.withColumn('request_date', data_functions.convert_julian_to_date(df_silver['request_date']))

File ~/cluster-env/clonedenv/lib/python3.10/site-packages/notebookutils/mssparkutils/handlers/udfHandler.py:95, in UDF.__create_dynamic_function.&amp;lt;locals&amp;gt;.dynamic_function(*args, **kwargs)
     93 workspace_id = self.__metadata.get("folderObjectId", "")
     94 capacity_id = self.__metadata.get("capacityObjectId", "")
---&amp;gt; 95 result = self.__udf_handler.run(artifact_id, name, parameters, workspace_id, capacity_id)
     96 if json.loads(result).get("status", "").lower() != "succeeded":
     97     raise Exception(f"Function {name} failed with error: {result}")

File ~/cluster-env/clonedenv/lib/python3.10/site-packages/notebookutils/mssparkutils/handlers/udfHandler.py:27, in UdfHandler.run(self, artifact_id, function_name, parameters, workspace_id, capacity_id)
     24 if not workspace_id:
     25     workspace_id = self.getCurrentWorkspaceId()
---&amp;gt; 27 return self.jvm.notebookutils.udf.run(artifact_id, function_name, parameters, workspace_id, capacity_id)

[... py4j argument-conversion frames ...]

File /opt/spark/python/lib/pyspark.zip/pyspark/sql/column.py:710, in Column.__iter__(self)
    709 def __iter__(self) -&amp;gt; None:
--&amp;gt; 710     raise TypeError("Column is not iterable")

TypeError: Column is not iterable&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 05 Jun 2025 14:02:27 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4721603#M9938</guid>
      <dc:creator>tinbaj</dc:creator>
      <dc:date>2025-06-05T14:02:27Z</dc:date>
    </item>
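The TypeError reported above comes from passing a PySpark Column into a function that only accepts scalar values. A minimal sketch of the same conversion logic as a plain Python function, with the Spark-side wrapping shown only in comments (those lines assume an active Spark session and `convert_udf` is an illustrative name, not the thread's exact fix):

```python
from datetime import datetime, timedelta

def convert_julian_to_date(juliandate: str) -> str:
    """Convert a JDE Julian date string 'CYYDDD' to 'YYYY-MM-DD'."""
    if not juliandate:
        return ""
    # 'C' is the century flag: (C + 19) * 100 + YY gives the year.
    year = (int(juliandate[:1]) + 19) * 100 + int(juliandate[1:3])
    day_of_year = int(juliandate[3:]) - 1
    return (datetime(year, 1, 1) + timedelta(days=day_of_year)).strftime("%Y-%m-%d")

# Scalar calls work fine, matching the single-value test in the post:
print(convert_julian_to_date("123241"))  # 2023-08-29

# Applying it per row of a Spark DataFrame requires wrapping it as a
# *Spark* UDF rather than calling it on a Column directly (sketch only):
# from pyspark.sql.functions import udf
# from pyspark.sql.types import StringType
# convert_udf = udf(convert_julian_to_date, StringType())
# df_silver = df_silver.withColumn("request_date", convert_udf("request_date"))
```

The scalar function itself can live anywhere the notebooks can import it; only the `udf(...)` wrapper makes it applicable to a Column.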
    <item>
      <title>Re: Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4722304#M9952</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/308748"&gt;@tinbaj&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;Thank you for sharing the details and code. The error TypeError: Column is not iterable occurs because the User Data Function (UDF) is being applied to a Spark DataFrame column directly, which isn't compatible with the function's expectation of a single string input. To fix this, you need to register the UDF with Spark to handle DataFrame columns.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;Here’s how to resolve it:&lt;/P&gt;
&lt;P&gt;In your notebook, after instantiating the UDF, register it with Spark:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Use spark.udf.register to make the UDF available for DataFrame operations.&lt;/LI&gt;
&lt;LI&gt;Then, apply it using withColumn with the registered UDF.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Update your notebook code as follows:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Instantiate the UDF: data_functions = notebookutils.udf.getFunctions('data_functions')&lt;/LI&gt;
&lt;LI&gt;Register the UDF: spark.udf.register("convert_julian_to_date", data_functions.convert_julian_to_date)&lt;/LI&gt;
&lt;LI&gt;Apply to the DataFrame: from pyspark.sql.functions import expr; then df_silver = df_silver.withColumn('request_date', expr("convert_julian_to_date(request_date)"))&lt;BR /&gt;&lt;BR /&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This ensures the UDF processes each row’s request_date column value correctly. Also, verify that the request_date column in df_silver contains valid Julian date strings (e.g., '123241'). If the column has mixed or invalid data types, you may need to preprocess it to ensure all values are strings.&lt;BR /&gt;&lt;BR /&gt;If this helps, please mark it as &lt;STRONG&gt;“Accept as solution”&lt;/STRONG&gt; and feel free to give a “&lt;STRONG&gt;Kudos&lt;/STRONG&gt;” to help others in the community as well.&lt;BR /&gt;Thank you.&lt;/P&gt;</description>
      <pubDate>Fri, 06 Jun 2025 05:00:44 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4722304#M9952</guid>
      <dc:creator>v-ssriganesh</dc:creator>
      <dc:date>2025-06-06T05:00:44Z</dc:date>
    </item>
    <item>
      <title>Re: Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4723417#M9976</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/882998"&gt;@v-ssriganesh&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for your response. I am getting this error when running the command to register the UDF: spark.udf.register("convert_julian_to_date", data_functions.convert_julian_to_date)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;--&amp;gt; 612 self.sparkSession._jsparkSession.udf().registerPython(name, register_udf._judf)
    613 return return_udf

File /opt/spark/python/lib/pyspark.zip/pyspark/sql/udf.py:321, in UserDefinedFunction._judf(self)
--&amp;gt; 321 self._judf_placeholder = self._create_judf(self.func)

File /opt/spark/python/lib/pyspark.zip/pyspark/sql/udf.py:330, in UserDefinedFunction._create_judf(self, func)
--&amp;gt; 330 wrapped_func = _wrap_function(sc, func, self.returnType)

File /opt/spark/python/lib/pyspark.zip/pyspark/sql/udf.py:59, in _wrap_function(sc, func, returnType)
---&amp;gt; 59 pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)

File /opt/spark/python/lib/pyspark.zip/pyspark/rdd.py:5251, in _prepare_for_python_RDD(sc, command)
-&amp;gt; 5251 pickled_command = ser.dumps(command)

File /opt/spark/python/lib/pyspark.zip/pyspark/serializers.py:469, in CloudPickleSerializer.dumps(self, obj)
--&amp;gt; 469 raise pickle.PicklingError(msg)&lt;/PRE&gt;&lt;P&gt;&lt;STRONG&gt;PicklingError: Could not serialize object: PySparkRuntimeError: [CONTEXT_ONLY_VALID_ON_DRIVER] It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers.&lt;/STRONG&gt; For more information, see SPARK-5063.&lt;/P&gt;</description>
      <pubDate>Fri, 06 Jun 2025 16:20:16 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4723417#M9976</guid>
      <dc:creator>tinbaj</dc:creator>
      <dc:date>2025-06-06T16:20:16Z</dc:date>
    </item>
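The PicklingError is consistent with how spark.udf.register works: Spark must serialize the supplied callable to ship it to executors, but the object returned by notebookutils.udf.getFunctions is a driver-side proxy wrapping a live connection. A stand-in sketch of this failure mode (`UdfProxy` and the socket are illustrative stand-ins, not the real Fabric classes):

```python
import pickle
import socket

class UdfProxy:
    """Stand-in for a driver-side UDF proxy that wraps a live connection."""
    def __init__(self):
        self.gateway = socket.socket()  # analogous to a py4j gateway handle

    def __call__(self, juliandate: str) -> str:
        return juliandate  # the real proxy forwards the call over the wire

proxy = UdfProxy()

# Serializing the proxy fails, just as serializing the Fabric UDF proxy
# fails when spark.udf.register tries to prepare it for worker processes:
try:
    pickle.dumps(proxy)
except (TypeError, pickle.PicklingError) as err:
    print(f"cannot serialize: {err}")
```

Anything that must run on executors has to be pickle-able by value; objects holding sockets, gateways, or the SparkContext itself are driver-only.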
    <item>
      <title>Re: Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4723815#M9981</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/308748"&gt;@tinbaj&lt;/a&gt;,&lt;BR /&gt;Thank you for providing the error details. The PicklingError: [CONTEXT_ONLY_VALID_ON_DRIVER] occurs because the User Data Function (UDF) is being serialized in a way that references the SparkContext, which isn't allowed in Spark's distributed environment. This is likely due to how the UDF is defined or accessed in your notebook.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;To resolve this, try the following steps:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;Instead of directly registering the UDF with spark.udf.register, use the Fabric UDF directly in the DataFrame operation, as Fabric’s UDFs are designed to work seamlessly with Spark. Update your notebook code as follows:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Instantiate the UDF: data_functions = notebookutils.udf.getFunctions('data_functions')&lt;/LI&gt;
&lt;LI&gt;Apply the UDF to the DataFrame: df_silver = df_silver.withColumn('request_date', data_functions.convert_julian_to_date(df_silver.request_date))&lt;/LI&gt;
&lt;LI&gt;Ensure your UDF (convert_julian_to_date) in the User Data Functions item doesn’t reference SparkContext or other non-serializable objects. Your provided UDF code looks fine, but confirm it only uses standard Python libraries (e.g., datetime, timedelta) and avoids Spark-specific calls.&lt;/LI&gt;
&lt;LI&gt;Before applying to the DataFrame, test the UDF with a single value to confirm it works: print(data_functions.convert_julian_to_date('123241')). This should return '2023-08-29 00:00:00'.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;If the error persists, please share:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The schema of df_silver (df_silver.printSchema()).&lt;/LI&gt;
&lt;LI&gt;Any modifications made to the UDF code.&lt;/LI&gt;
&lt;LI&gt;Whether you’re running this in a Fabric notebook with a Spark session active.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Please try these steps and let me know the outcome. If it resolves the issue, consider marking it as “&lt;STRONG&gt;Accept as solution&lt;/STRONG&gt;” and giving a “&lt;STRONG&gt;Kudos&lt;/STRONG&gt;” to help others in the community.&lt;BR /&gt;Thank you.&lt;/P&gt;</description>
      <pubDate>Sat, 07 Jun 2025 06:28:48 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4723815#M9981</guid>
      <dc:creator>v-ssriganesh</dc:creator>
      <dc:date>2025-06-07T06:28:48Z</dc:date>
    </item>
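The data-quality check suggested above (confirm the column holds valid Julian date strings before converting) can be sketched as a small validator; the helper name and the six-digit 'CYYDDD' assumption are illustrative:

```python
import re

# Hypothetical validator for JDE 'CYYDDD' Julian date strings: a century
# flag, a two-digit year, and a three-digit day of year.
JULIAN_PATTERN = re.compile(r"^\d{6}$")

def is_valid_jde_julian(value) -> bool:
    """Return True only for six-digit strings with a plausible day of year."""
    if not isinstance(value, str) or not JULIAN_PATTERN.match(value):
        return False
    day_of_year = int(value[3:])
    return 1 <= day_of_year <= 366

print(is_valid_jde_julian("123241"))   # True
print(is_valid_jde_julian(None))       # False: not a string
print(is_valid_jde_julian("123400"))   # False: day 400 is out of range
```

Running a check like this over the source column first separates bad data from the Column-vs-scalar API issue the thread is debugging.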
    <item>
      <title>Re: Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4724761#M10001</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/882998"&gt;@v-ssriganesh&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for your response, but the suggested code did not fix the problem. I can confirm that the UDF uses only standard Python libraries and does not reference the Spark context.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The implementation per the suggestion, and the error message, are below:&lt;/P&gt;&lt;PRE&gt;data_functions = notebookutils.udf.getFunctions('data_functions')

print(data_functions.convert_julian_to_date('123241'))
# Return Value: 2023-08-29

df_silver = df_silver.withColumn("request_date", data_functions.convert_julian_to_date(df_silver.request_date))
# Error Message: PySparkTypeError: [NOT_ITERABLE] Column is not iterable.&lt;/PRE&gt;&lt;P&gt;Now if we go through the documentation for UDFs (link: &lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/user-data-functions/python-programming-model" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/fabric/data-engineering/user-data-functions/python-programming-model&lt;/A&gt;), a column is not one of the acceptable data types for UDF inputs. Could this be the reason for this error?&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Mon, 09 Jun 2025 03:12:47 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4724761#M10001</guid>
      <dc:creator>tinbaj</dc:creator>
      <dc:date>2025-06-09T03:12:47Z</dc:date>
    </item>
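For context on the scalar call that succeeds in the post above ('123241' returning 2023-08-29), here is a minimal, self-contained sketch of a JDE Julian-date conversion in plain Python. The function body is an assumption on my part (the thread never shows the actual UDF code); it follows the common JDE CYYDDD convention and reproduces the return value reported in the thread:

```python
from datetime import date, timedelta

def convert_julian_to_date(jde_julian: str) -> date:
    """Convert a JDE-style Julian date (CYYDDD) to a calendar date.

    C   = centuries past 1900 (1 -> 20xx)
    YY  = two-digit year within that century
    DDD = day of year (001-366)
    """
    value = str(jde_julian).strip().zfill(6)
    century = int(value[0])
    year = 1900 + century * 100 + int(value[1:3])
    day_of_year = int(value[3:])
    return date(year, 1, 1) + timedelta(days=day_of_year - 1)

print(convert_julian_to_date("123241"))  # prints 2023-08-29, matching the thread
```

Called with a string this returns a date, but passing a Spark Column (as in `withColumn`) hands the function a `Column` object instead of a row value, which is consistent with the `NOT_ITERABLE` error discussed in the thread.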
    <item>
      <title>Re: Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4725412#M10019</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/308748"&gt;@tinbaj&lt;/a&gt;,&lt;BR /&gt;Thank you for the update and detailed feedback.&lt;BR /&gt;&lt;BR /&gt;The PySparkTypeError: [NOT_ITERABLE] Column is not iterable error occurs because Fabric User Data Functions (UDFs) expect scalar inputs (e.g., strings, integers), but df_silver.request_date is a Spark DataFrame column, which isn’t directly compatible. The documentation you referenced correctly notes that UDFs don’t accept column objects as inputs, which explains this error.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;To resolve this, you need to register the UDF with Spark to process each row’s request_date value individually. Since you’ve confirmed the UDF works for a single input ('123241' returns '2023-08-29'), the issue is specific to DataFrame application. Here’s how to fix it:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Instantiate the UDF: data_functions = notebookutils.udf.getFunctions('data_functions')&lt;/LI&gt;
&lt;LI&gt;Register the UDF with Spark: Use from pyspark.sql.functions import udf and register the UDF as convert_udf = udf(data_functions.convert_julian_to_date).&lt;/LI&gt;
&lt;LI&gt;Apply the UDF to the DataFrame: df_silver = df_silver.withColumn('request_date', convert_udf(df_silver.request_date)).&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Additionally, check that the request_date column is a string type, as your UDF expects string input.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;If this helps, please &lt;STRONG&gt;“Accept as solution”&lt;/STRONG&gt; and give a &lt;STRONG&gt;“kudos” &lt;/STRONG&gt;to assist other community members.&lt;BR /&gt;Thank you.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 09 Jun 2025 09:28:00 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4725412#M10019</guid>
      <dc:creator>v-ssriganesh</dc:creator>
      <dc:date>2025-06-09T09:28:00Z</dc:date>
    </item>
    <item>
      <title>Re: Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4725754#M10029</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/882998"&gt;@v-ssriganesh&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please see message 5 from this thread. We tried this a couple of days ago, and it doesn't work. When we try to register the User Data Function (UDF) as a Spark UDF, it gives a SparkContext error.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Does this mean that we cannot use User Data Functions for transformations in DataFrames?&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Mon, 09 Jun 2025 12:32:46 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4725754#M10029</guid>
      <dc:creator>tinbaj</dc:creator>
      <dc:date>2025-06-09T12:32:46Z</dc:date>
    </item>
    <item>
      <title>Re: Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4726636#M10042</link>
      <description>&lt;P&gt;Hello &lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/308748"&gt;@tinbaj&lt;/a&gt;,&lt;BR /&gt;Thank you for your patience and for providing detailed feedback.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;We recommend raising a support ticket with Microsoft Fabric support for deeper investigation, as the issue may be specific to your workspace or the UDF’s interaction with your Spark environment. You can explain all the troubleshooting steps you have taken to help them better understand the issue.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;You can create a Microsoft support ticket with the help of the link below:&lt;BR /&gt;&lt;A href="https://learn.microsoft.com/en-us/power-bi/support/create-support-ticket" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/power-bi/support/create-support-ticket&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;If this information is helpful, consider marking it as “&lt;STRONG&gt;Accept as solution&lt;/STRONG&gt;” and giving a “&lt;STRONG&gt;Kudos&lt;/STRONG&gt;” to help others in the community.&lt;BR /&gt;Thank you.&lt;/P&gt;</description>
      <pubDate>Tue, 10 Jun 2025 07:04:29 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4726636#M10042</guid>
      <dc:creator>v-ssriganesh</dc:creator>
      <dc:date>2025-06-10T07:04:29Z</dc:date>
    </item>
    <item>
      <title>Re: Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4731172#M10135</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/308748"&gt;@tinbaj&lt;/a&gt;,&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;Could you please confirm if the issue has been resolved after raising a support case? If a solution has been found, it would be greatly appreciated if you could share your insights with the community. This would be helpful for other members who may encounter similar issues.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;Thank you for your understanding and assistance.&lt;/P&gt;
</description>
      <pubDate>Fri, 13 Jun 2025 08:07:31 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4731172#M10135</guid>
      <dc:creator>v-ssriganesh</dc:creator>
      <dc:date>2025-06-13T08:07:31Z</dc:date>
    </item>
    <item>
      <title>Re: Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4733929#M10193</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hello&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/308748"&gt;@tinbaj&lt;/a&gt;,&lt;BR /&gt;We are following up once again regarding your query. Could you please confirm if the issue has been resolved through the support ticket with Microsoft?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;If the issue has been resolved, we kindly request you to share the resolution or key insights here to help others in the community. If we don’t hear back, we’ll go ahead and close this thread.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Should you need further assistance in the future, we encourage you to reach out via the Microsoft Fabric Community Forum and create a new thread. We’ll be happy to help.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Thank you for your understanding and participation.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 16 Jun 2025 17:50:52 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4733929#M10193</guid>
      <dc:creator>v-ssriganesh</dc:creator>
      <dc:date>2025-06-16T17:50:52Z</dc:date>
    </item>
    <item>
      <title>Re: Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4734126#M10200</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/882998"&gt;@v-ssriganesh&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The ticket I raised with Microsoft did not provide any resolution to this issue. The associate classified this problem as more of a PySpark problem than a UDF issue. Please see below how the conversation ended with the Microsoft associate for this ticket.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;As reported, you had a User Data Function (UDF) defined to convert a date from an Oracle database that is stored in Julian format to a date format.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;SPAN&gt;It works when simply passing a Julian date as input to this function with an implementation like this:&amp;nbsp;&lt;BR /&gt;&lt;STRONG&gt;data_functions = notebookutils.udf.getFunctions('data_functions')&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;When you tried the below code, it failed with “&lt;I&gt;Error Message: PySparkTypeError: [NOT_ITERABLE] Column is not iterable&lt;/I&gt;”.&amp;nbsp;&lt;BR /&gt;&lt;STRONG&gt;df_silver = df_silver.withColumn("request_date", data_functions.convert_julian_to_date(df_silver.request_date))&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;Registering a SQL function (with below code) threw “&lt;I&gt;PySparkRuntimeError: [CONTEXT_ONLY_VALID_ON_DRIVER] It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. 
For more information, see SPARK-5063&lt;/I&gt;”.&amp;nbsp;&lt;BR /&gt;&lt;STRONG&gt;spark.udf.register("convert_julian_to_date", data_functions.convert_julian_to_date)&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;As discussed, you created Fabric user data function “convert_julian_to_date”. When you used “notebookutils.udf” to get / invoke the function, it processed successfully.&amp;nbsp;These show that Fabric user data function “convert_julian_to_date” itself is working without issues.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Just to clarify, Fabric User data functions uses “fabric.functions” library to provide the functionality. And like what you did, you can retrieve and invoke the function via “notebookutils.udf”. To my knowledge, Fabric User data functions (“fabric.functions” library) basically enables you to create user data functions in Python, not offering other methods (integrated with 3rd-parties like PySpark) by default.&lt;/SPAN&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;SPAN&gt;&lt;A title="https://learn.microsoft.com/en-us/fabric/data-engineering/user-data-functions/python-programming-model" href="https://learn.microsoft.com/en-us/fabric/data-engineering/user-data-functions/python-programming-model" target="_blank" rel="noopener"&gt;https://learn.microsoft.com/en-us/fabric/data-engineering/user-data-functions/python-programming-model&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;&lt;A title="https://learn.microsoft.com/en-us/fabric/data-engineering/notebook-utilities#user-data-function-udf-utilities" href="https://learn.microsoft.com/en-us/fabric/data-engineering/notebook-utilities#user-data-function-udf-utilities" target="_blank" 
rel="noopener"&gt;https://learn.microsoft.com/en-us/fabric/data-engineering/notebook-utilities#user-data-function-udf-utilities&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Apache Spark DataFrames is 3rd-party and not supported by us – So I couldn’t provide the most accurate information for your other questions. I’d assume that you could call / invoke Fabric User data functions from Apache Spark DataFrames, but you might improperly use those PySpark APIs.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I did some research on “PySparkTypeError: [NOT_ITERABLE] Column is not iterable” – It’d be more about the DataFrame and/or the withColumn() usage.&lt;/SPAN&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;SPAN&gt;Can you perform a quick test by&amp;nbsp;&lt;STRONG&gt;df_silver = df_silver.withColumn("request_date", df_silver.request_date)&lt;/STRONG&gt;?&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;Would it even work?&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;For getting “PySparkRuntimeError: [CONTEXT_ONLY_VALID_ON_DRIVER] It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063” when using&amp;nbsp;&lt;STRONG&gt;spark.udf.register("convert_julian_to_date", data_functions.convert_julian_to_date)&lt;/STRONG&gt;,&lt;/SPAN&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;SPAN&gt;My best guess would be that&amp;nbsp;&lt;STRONG&gt;spark.udf.register()&lt;/STRONG&gt;&amp;nbsp;requires a Python function, pyspark.sql.functions.udf() or pyspark.sql.functions.pandas_udf(). 
While&amp;nbsp;&lt;STRONG&gt;data_functions.convert_julian_to_date&lt;/STRONG&gt;&amp;nbsp;is from&amp;nbsp;&lt;STRONG&gt;notebookutils.udf&lt;/STRONG&gt;, it cannot be recognized or directly registered using this method. &amp;nbsp;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Unfortunately, all I could do was to check the Fabric user data function, and give my assumptions for those PySpark errors. Hope it would be helpful. To move forward, I’d suggest that:&lt;/SPAN&gt;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;&lt;SPAN&gt;Not sure if PySpark provides support to Users – If yes, you may want to contact PySpark support for the issues you’re facing.&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;For further assistance regarding implementation / design of the whole solution you’re trying to accomplish, you can contact Azure sales or get help from an Azure partner.&lt;/SPAN&gt;&lt;/LI&gt;&lt;OL&gt;&lt;LI&gt;&lt;SPAN&gt;&lt;A title="https://azure.microsoft.com/en-us/contact" href="https://azure.microsoft.com/en-us/contact" target="_blank" rel="noopener"&gt;https://azure.microsoft.com/en-us/contact&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN&gt;&lt;A title="https://azure.microsoft.com/en-us/partners" href="https://azure.microsoft.com/en-us/partners" target="_blank" rel="noopener"&gt;https://azure.microsoft.com/en-us/partners&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;/OL&gt;&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 17 Jun 2025 02:46:50 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4734126#M10200</guid>
      <dc:creator>tinbaj</dc:creator>
      <dc:date>2025-06-17T02:46:50Z</dc:date>
    </item>
    <item>
      <title>Re: Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4735566#M10245</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/308748"&gt;@tinbaj&lt;/a&gt;,&lt;BR /&gt;We appreciate your patience and sharing the update on the issue.&lt;BR /&gt;&lt;BR /&gt;From what you've described, it looks like the Fabric User Data Function (UDF) itself is working as expected when used with notebookutils.udf. The issues seem to come up only when trying to use it inside PySpark operations like withColumn() or when attempting to register it with spark.udf.register.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Since PySpark is a third-party tool and isn't fully integrated with Fabric UDFs, this kind of limitation is expected for now. Currently, calling Fabric UDFs directly inside PySpark transformations or registering them as Spark SQL functions isn't supported.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you still need to apply similar logic to your DataFrame, you might want to rewrite the function using a regular PySpark UDF (pyspark.sql.functions.udf() or pandas_udf) so it works smoothly within the PySpark context.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I totally understand this might not be the solution you were hoping for, but given the current capabilities of Fabric, using PySpark-native methods or checking with PySpark support channels would be the best way forward.&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;Thank you for your understanding.&lt;/SPAN&gt;&lt;/P&gt;
</description>
      <pubDate>Wed, 18 Jun 2025 07:16:00 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4735566#M10245</guid>
      <dc:creator>v-ssriganesh</dc:creator>
      <dc:date>2025-06-18T07:16:00Z</dc:date>
    </item>
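The rewrite recommended in the post above can be sketched as follows. The conversion logic is plain Python (an assumed implementation of the thread's convert_julian_to_date, since the actual UDF body was never posted); the commented lines show how it would be wrapped with pyspark.sql.functions.udf in a Fabric notebook, using the thread's df_silver and request_date names. This is an untested sketch of the suggested pattern, not a verified Fabric solution:

```python
from datetime import date, timedelta

def convert_julian_to_date(jde_julian: str) -> date:
    """Plain-Python JDE Julian (CYYDDD) to date conversion, usable as a Spark UDF."""
    value = str(jde_julian).strip().zfill(6)
    year = 1900 + int(value[0]) * 100 + int(value[1:3])
    return date(year, 1, 1) + timedelta(days=int(value[3:]) - 1)

# In the notebook, wrap the plain function (not the notebookutils.udf handle,
# which cannot be shipped to the workers) as a native PySpark UDF:
#
#   from pyspark.sql.functions import udf
#   from pyspark.sql.types import DateType
#
#   convert_udf = udf(convert_julian_to_date, DateType())
#   df_silver = df_silver.withColumn("request_date", convert_udf("request_date"))
```

The key point is that the object registered with Spark must be an ordinary, serializable Python function; the Fabric UDF handle obtained via notebookutils.udf stays on the driver.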
    <item>
      <title>Re: Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4740792#M10369</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hello &lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/308748"&gt;@tinbaj&lt;/a&gt;,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;We are following up once again regarding your query. Could you please confirm if the issue has been resolved through the support ticket with Microsoft?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;If the issue has been resolved, we kindly request you to share the resolution or key insights here to help others in the community. If we don’t hear back, we’ll go ahead and close this thread.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Should you need further assistance in the future, we encourage you to reach out via the Microsoft Fabric Community Forum and create a new thread. We’ll be happy to help.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Thank you for your understanding.&lt;/SPAN&gt;&lt;/P&gt;
</description>
      <pubDate>Mon, 23 Jun 2025 12:24:00 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4740792#M10369</guid>
      <dc:creator>v-ssriganesh</dc:creator>
      <dc:date>2025-06-23T12:24:00Z</dc:date>
    </item>
    <item>
      <title>Re: Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4740848#M10371</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/882998"&gt;@v-ssriganesh&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As I explained in my previous response, based on Microsoft's response, User Data Functions cannot be used for transformations on DataFrames. Therefore, we need to utilize PySpark's native functions for the transformation instead of User Data Functions. So, I need to tweak my solution a bit: not use Fabric User Data Functions, but instead use a PySpark UDF to do the transformation.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I think I know the way ahead now. Thanks for your support and help. We can close the ticket now.&lt;/P&gt;</description>
      <pubDate>Mon, 23 Jun 2025 13:05:08 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4740848#M10371</guid>
      <dc:creator>tinbaj</dc:creator>
      <dc:date>2025-06-23T13:05:08Z</dc:date>
    </item>
    <item>
      <title>Re: Reusable function for data transformation - user data functions</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4741620#M10384</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/308748"&gt;@tinbaj&lt;/a&gt;,&lt;BR /&gt;&lt;SPAN&gt;Thank you for the update on the issue. Please continue to utilize the Microsoft Fabric Community Forum for further discussions and support.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 24 Jun 2025 04:09:00 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Reusable-function-for-data-transformation-user-data-functions/m-p/4741620#M10384</guid>
      <dc:creator>v-ssriganesh</dc:creator>
      <dc:date>2025-06-24T04:09:00Z</dc:date>
    </item>
  </channel>
</rss>

