Can someone help me understand why LTRIM('0011160027', '0') returns an empty string when using PySpark? It does not behave this way when querying from a Lakehouse SQL endpoint.
Hi @smoqt ,
Thank you for reaching out with your query about the LTRIM behavior in Microsoft Fabric.
As per my understanding, the Lakehouse SQL endpoint uses T-SQL's LTRIM(string, characters), so LTRIM('0011160027', '0') trims the leading '0' characters and correctly returns '11160027'. Standard PySpark's one-argument ltrim(str) only removes leading whitespace; however, the Fabric notebook engine extends ltrim to a two-argument form with the arguments reversed: ltrim(trimStr, str). Your call LTRIM('0011160027', '0') is therefore read as trimStr = '0011160027' and str = '0', and since every character of '0' appears in the trim set, the result is an empty string.
In a Fabric notebook, ltrim('0', '0011160027') gives the intended result, trimming the leading '0's to return '11160027'.
On the SQL endpoint, by contrast, LTRIM('0', '0011160027') returns an empty string, because T-SQL treats '0' as the input string and '0011160027' as the set of characters to remove.
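The two argument orders can be modeled in plain Python (a sketch of the semantics, not the engines themselves). Python's str.lstrip uses the same character-set rule both engines apply, so it reproduces the results above:

```python
def trim_leading(s: str, chars: str) -> str:
    """Remove leading characters of s that appear in chars
    (character-set semantics, like str.lstrip)."""
    return s.lstrip(chars)

# Fabric notebook two-argument form: ltrim(trimStr, str)
def spark_ltrim(trim_str: str, s: str) -> str:
    return trim_leading(s, trim_str)

# T-SQL two-argument form: LTRIM(string, characters)
def tsql_ltrim(s: str, chars: str) -> str:
    return trim_leading(s, chars)

print(spark_ltrim('0011160027', '0'))  # '' - '0' is the input, fully trimmed
print(spark_ltrim('0', '0011160027'))  # '11160027'
print(tsql_ltrim('0011160027', '0'))   # '11160027'
```

Same characters, opposite argument order: that is the entire difference between the two results.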
Differences between T-SQL and PySpark can be tricky. Use regexp_replace in PySpark for portability:
from pyspark.sql.functions import regexp_replace
df = df.withColumn('trimmed', regexp_replace('NumberStr', '^0+', ''))
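Because the pattern is anchored at the start of the string, the regex route can be sanity-checked in plain Python with the re module, which treats ^0+ the same way:

```python
import re

# '^0+' matches only the leading run of zeros, so interior zeros survive
print(re.sub(r'^0+', '', '0011160027'))  # '11160027'
print(re.sub(r'^0+', '', '10001'))       # '10001' - inner zeros untouched
print(re.sub(r'^0+', '', '0000'))        # '' - an all-zero string becomes empty
```

Note the last case: an all-zero input trims to an empty string, so add handling for that if it matters downstream.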
If this post helps, please give us Kudos and consider accepting it as a solution so other members can find it more quickly.
Thank you
For anyone else who was confused by this, the simple answer is that (at the time of writing) the Fabric runtimes use Spark 3.3 or 3.4, and the trim-characters argument was not added until Spark 4.0.
Keep that in mind if you have the spark/docs/latest URL bookmarked, like I do.
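If a notebook has to run on mixed runtimes, a small guard on the major version can make the dependency explicit. This is a sketch: supports_trim_chars is a hypothetical helper, and the 4.0 cutoff is the version noted above:

```python
def supports_trim_chars(spark_version: str) -> bool:
    """Hypothetical guard: the trim-characters argument landed in Spark 4.0."""
    major = int(spark_version.split('.')[0])
    return major >= 4

print(supports_trim_chars('3.4.1'))  # False
print(supports_trim_chars('4.0.0'))  # True
```

In a real notebook you would pass spark.version, and on older runtimes fall back to regexp_replace, which behaves the same everywhere.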
Very tricky if someone is copying SQL from an SSMS environment or the Lakehouse endpoint into a notebook.
Tried ltrim( [trimstr ,] str) and that had the intended result. The Apache Spark docs do not specify this form, unless I am missing something.