I have configured JDK 8 and Miniconda3 on my machine and installed the Synapse VS Code extension. I am now able to run Python code from a Fabric notebook in VS Code using the fabric-synapse-runtime-1-1 environment.
However, when I try spark.sql('select * from lakehouse.table'), I get the following error, regardless of whether the lakehouse and table exist. The same code runs perfectly fine directly in Fabric.
AnalysisException Traceback (most recent call last)
Cell In[2], line 3
1 # get table list
2 query = "SELECT * FROM lkh_raw.tbl_invt"
----> 3 tablesDf = spark.sql(query)
4 tablesArr = tablesDf.collect()
5 changesDf = None
File c:\Users\x\AppData\Local\miniconda3\envs\fabric-synapse-runtime-1-1\lib\site-packages\pyspark\sql\session.py:1034, in SparkSession.sql(self, sqlQuery, **kwargs)
1032 sqlQuery = formatter.format(sqlQuery, **kwargs)
1033 try:
-> 1034 return DataFrame(self._jsparkSession.sql(sqlQuery), self)
1035 finally:
1036 if len(kwargs) > 0:
File c:\Users\x\AppData\Local\miniconda3\envs\fabric-synapse-runtime-1-1\lib\site-packages\py4j\java_gateway.py:1321, in JavaMember.__call__(self, *args)
1315 command = proto.CALL_COMMAND_NAME +\
1316 self.command_header +\
1317 args_command +\
1318 proto.END_COMMAND_PART
1320 answer = self.gateway_client.send_command(command)
-> 1321 return_value = get_return_value(
1322 answer, self.gateway_client, self.target_id, self.name)
1324 for temp_arg in temp_args:
...
--> 196 raise converted from None
197 else:
198 raise
AnalysisException: cannot assign instance of scala.collection.immutable.Map$Map1 to field java.lang.Throwable.detailMessage of type java.lang.String in instance of java.util.concurrent.ExecutionException; line 1 pos 14
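For reference, the cell in the traceback is essentially just the following (lkh_raw.tbl_invt is one of my lakehouse tables; any lakehouse.table name fails the same way):

# Minimal repro: query a lakehouse table through Spark SQL from the VS Code session
query = "SELECT * FROM lkh_raw.tbl_invt"
tablesDf = spark.sql(query)      # raises AnalysisException in VS Code, works in Fabric
tablesArr = tablesDf.collect()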
Any help is appreciated. Thanks!
Hi @GammaRamma ,
I have a few suggestions for when code runs perfectly directly in Fabric but fails in VS Code:
1. Make sure the environment configuration in VS Code matches the environment in Fabric, including the versions of the JDK, Miniconda, and the Synapse VS Code extension.
2. Make sure your account has access to the lakehouse and its tables. A quick diagnostic sketch for both of these checks follows this list.
3. Why use runtime 1.1 instead of 1.3? Consider trying runtime 1.3.
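For the first two points, you could run something like this in the VS Code notebook session as a quick sanity check (just a sketch; lkh_raw is the lakehouse name taken from your traceback):

# Sanity checks for the local VS Code Spark session:
# which runtime/Spark version it resolves to, and whether the
# lakehouse and its tables are visible to this session.
print(spark.version)                       # Spark version of the local runtime environment
print(spark.catalog.currentDatabase())     # database the session is currently attached to
for db in spark.catalog.listDatabases():   # lakehouses visible as databases
    print(db.name)
for t in spark.catalog.listTables("lkh_raw"):   # tables in the lkh_raw lakehouse
    print(t.name)

If the lakehouse or the table does not appear in that output, the local session is probably not resolving the same catalog that Fabric uses.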
If you have any other questions, please feel free to contact me.
Best Regards,
Yang
Community Support Team
If any post helps, then please consider accepting it as the solution to help other members find it more quickly.
If I have misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
@Anonymous thanks for your reply.
I don't have the Synapse 1.3 runtime in VS Code, but I do have 1.2, which gives a different error for the same code:
Py4JJavaError: An error occurred while calling o33.sql. : org.apache.spark.SparkException: [INTERNAL_ERROR] The Spark SQL phase analysis failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace
Since posting my first message, I have tried other Fabric notebooks in VS Code that use spark.sql() and found that a couple of them actually work. So I compared the config files of the notebooks that work against those of the notebooks that don't, but found no significant differences. I compared .ressource-info, .artifact-info and lighter-config.json.
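In case anyone wants to repeat that comparison, here is a rough sketch of how it can be scripted (the two folder paths are placeholders for wherever the Synapse extension synced the notebooks locally):

# Rough sketch: diff the local metadata files of a working vs. a non-working notebook.
import difflib
from pathlib import Path

working = Path(r"C:\path\to\working_notebook")   # placeholder path
broken = Path(r"C:\path\to\broken_notebook")     # placeholder path

for name in (".ressource-info", ".artifact-info", "lighter-config.json"):
    a = (working / name).read_text(encoding="utf-8").splitlines()
    b = (broken / name).read_text(encoding="utf-8").splitlines()
    diff = list(difflib.unified_diff(a, b, fromfile=f"working/{name}",
                                     tofile=f"broken/{name}", lineterm=""))
    print(f"=== {name}: {'identical' if not diff else 'differences found'}")
    print("\n".join(diff))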
The notebooks are all in the same Fabric Workspace.
I copied and pasted the code from the notebooks that don't work into the ones that do, and it ran fine, so it isn't some peculiarity of the code either.
I also created new notebooks, but they failed with the same code.
It seems pretty random.
I don't know what else to look at.
Thank you