Hi Team,
I want to run a SELECT statement against a Delta table path in a Fabric notebook, but it fails with an error. Please help urgently.
Below is the SELECT statement in PySpark:
df = spark.sql("SELECT * FROM abfss://<workspace_name>@msit-onelake.dfs.fabric.microsoft.com/<lakehouse_Name>.Lakehouse/Tables/<table_Name>")
display(df)
The error is:
File /opt/spark/python/lib/pyspark.zip/pyspark/sql/session.py:1440, in SparkSession.sql(self, sqlQuery, args, **kwargs)
   1438 try:
   1439     litArgs = {k: _to_java_column(lit(v)) for k, v in (args or {}).items()}
-> 1440     return DataFrame(self._jsparkSession.sql(sqlQuery, litArgs), self)
   1441 finally:
   1442     if len(kwargs) > 0:

File ~/cluster-env/trident_env/lib/python3.10/site-packages/py4j/java_gateway.py:1322, in JavaMember.__call__(self, *args)
   1316 command = proto.CALL_COMMAND_NAME +\
   1317     self.command_header +\
   1318     args_command +\
   1319     proto.END_COMMAND_PART
   1321 answer = self.gateway_client.send_command(command)
-> 1322 return_value = get_return_value(
   1323     answer, self.gateway_client, self.target_id, self.name)
   1325 for temp_arg in temp_args:
   1326     if hasattr(temp_arg, "_detach"):

File /opt/spark/python/lib/pyspark.zip/pyspark/errors/exceptions/captured.py:175, in capture_sql_exception.<locals>.deco(*a, **kw)
   171 converted = convert_exception(e.java_exception)
   172 if not isinstance(converted, UnknownException):
   173     # Hide where the exception came from that shows a non-Pythonic
   174     # JVM exception message.
--> 175     raise converted from None
   176 else:
   177     raise
Hi @Udaysutar28apr ,
You can try the following solution for this error: set the Lakehouse as the notebook's default Lakehouse, and you will then be able to use relative file paths.
Refer to:
Solved: Re: Fabric tutorial failing on files path - Microsoft Fabric Community
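As a rough sketch of both approaches (the workspace, lakehouse, and table names below are the placeholders from your post, not real values): once the Lakehouse is attached as the notebook's default, you can query the table by name; otherwise, Spark SQL can read a Delta table from a path only with the delta.`<path>` syntax, and note that spark.sql is a function call, so the query string goes inside parentheses:

```python
# (a) With a default Lakehouse attached to the notebook,
# a relative table name is enough:
# df = spark.sql("SELECT * FROM <table_Name>")

# (b) Without a default Lakehouse, reference the ABFSS path directly.
# Spark SQL needs the delta.`...` form for path-based Delta reads:
table_path = (
    "abfss://<workspace_name>@msit-onelake.dfs.fabric.microsoft.com"
    "/<lakehouse_Name>.Lakehouse/Tables/<table_Name>"
)
query = f"SELECT * FROM delta.`{table_path}`"
# df = spark.sql(query)   # runs only inside a Fabric/Spark session
# display(df)
```

An equivalent path-based read without SQL is `spark.read.format("delta").load(table_path)`, which some people find less error-prone than quoting the path inside a query string.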
Best Regards,
Liu Yang
If this post helps, please consider accepting it as the solution to help other members find it more quickly.