I'm running the code below and getting this error:
AttributeError: 'NoneType' object has no attribute 'sql'
df = spark.sql("SELECT * FROM MyLH.ClaimFacts LIMIT 1000")
display(df)
df.printSchema()
If I run java -version, I get:
My JAVA_HOME is set to jdk-8.0.412.8-hotspot
When I go to the bin folder in there and run java -version, it shows:
openjdk version "1.8.0_412"
I have everything Conda-related installed, and both the condabin folder and the bin folder of the Java 1.8 install are on PATH.
I'm using the Synapse kernel in VS Code (the same one shown in the notebook in the Fabric workspace).
I've reinstalled and restarted my machine 50 times.
I've tried running as an admin
Any ideas?
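(For anyone hitting the same setup issue: a mismatch between JAVA_HOME and the java binary actually found on PATH is a common cause of Spark startup failures. A minimal, generic sanity check, with a hypothetical helper name not from any Fabric tooling, might look like this:)

```python
import os
import shutil

def java_matches_home(java_home: str, java_on_path: str) -> bool:
    """Return True when the java binary on PATH lives under JAVA_HOME.

    Both arguments are plain paths; empty strings mean the value is
    missing (JAVA_HOME unset, or no java found on PATH).
    """
    if not java_home or not java_on_path:
        return False
    home = os.path.realpath(java_home)
    binary = os.path.realpath(java_on_path)
    # The binary must sit inside the JAVA_HOME directory tree.
    return binary.startswith(home + os.sep)

# Typical usage (values depend on the local machine):
# java_matches_home(os.environ.get("JAVA_HOME", ""), shutil.which("java") or "")
```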
Hi @tmjones2
Thanks for using Fabric Community.
Please follow these steps:
1) Right-click the IPython notebook and choose "Reveal in File Explorer".
2) Open the Python log file ../../../logs/${NOTEBOOK_ARTIFACT_ID}/PySparkLighter.log and check the session failure reason.
Please attach a screenshot of the error here so we can understand what is happening.
You can also try stopping the Spark session in the notebook and then running it again in VS Code.
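As a generic aside (not Fabric-specific): the AttributeError above means the `spark` variable itself is None, i.e. the session never started. A small guard like the hypothetical helper below turns that opaque error into an explicit message, assuming `spark` is the session object the kernel normally injects:

```python
def ensure_spark(session):
    """Fail with a clear message instead of an AttributeError when the
    Spark session was never initialized (i.e. `spark` is None)."""
    if session is None:
        raise RuntimeError(
            "Spark session is not initialized; check PySparkLighter.log "
            "for the session failure reason before calling spark.sql()."
        )
    return session

# Typical usage in the notebook (`spark` is injected by the kernel):
# df = ensure_spark(spark).sql("SELECT * FROM MyLH.ClaimFacts LIMIT 1000")
```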
Thanks
Hi @tmjones2
We haven't heard from you since the last response and were just checking back to see if you could provide the details requested above.
Thanks