riyajshaikh09
Frequent Visitor

Issue while running a Fabric notebook in VS Code

I want to get the workspace name, and for that I need to install semantic-link.

I ran pip install semantic-link in the first cell, and

in the next cell I am running:
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, BooleanType, StringType
from pyspark.sql.functions import col, max, current_date
import sempy.fabric as fabric
import json
from datetime import datetime

I am getting an error for import sempy.fabric as fabric.

Also, I am getting the error below:
---> 2 from azure.core.credentials import AccessToken
     3 from azure.storage.blob import BlobServiceClient
     4 from typing import List, Optional, Union

ModuleNotFoundError: No module named 'azure'

Even if I try to read data from abfss:

from pyspark.sql import SparkSession

# Initialize Spark session
spark = SparkSession.builder.appName("MergeWithRowHash").getOrCreate()
df = spark.read.json('abfss://<adadasdd>@onelake.dfs.fabric.microsoft.com/adasdasdasdasdade-459b21ae93c2/Files/config.json')

I am getting the error below:
File c:\Users\ryuser\AppData\Local\miniconda3\envs\fabric-synapse-runtime-1-2\lib\site-packages\py4j\protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
    324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325 if answer[1] == REFERENCE_TYPE:
--> 326     raise Py4JJavaError(
    327         "An error occurred while calling {0}{1}{2}.\n".
    328         format(target_id, ".", name), value)
    329 else:
    330     raise Py4JError(
    331         "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
    332         format(target_id, ".", name, value))

Py4JJavaError: An error occurred while calling o36.json.
: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem not found
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2688)
	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3431)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3466)
	at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
	at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$checkAndGlobPathIfNecessary$1(DataSource.scala:724)
	at scala.collection.immutable.List.map(List.scala:293)
	at org.apache.spark.sql.execution.datasources.DataSource$.checkAndGlobPathIfNecessary(DataSource.scala:722)


1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @riyajshaikh09,

For the libraries and commands I mentioned, they should be similar to the code below:

pip install azure-core
pip install azure-storage-blob
pip install semantic-link
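Once these packages are installed, a minimal sketch like the one below should return the current workspace name with semantic-link. It assumes the notebook session is already authenticated to Fabric (from VS Code you may be prompted to sign in first), so please verify the function names against the semantic-link documentation:

import sempy.fabric as fabric

# Assumes the notebook session is authenticated to Fabric.
workspace_id = fabric.get_workspace_id()                      # ID of the current workspace
workspace_name = fabric.resolve_workspace_name(workspace_id)  # resolve the ID to a display name
print(workspace_name)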

In addition, you can refer to the following links about installing Azure libraries.

How to install Azure SDK library packages for Python - Python on Azure | Microsoft Learn

Package index for Azure SDK libraries for Python - Python on Azure | Microsoft Learn

Regards,

Xiaoxin Sheng


7 REPLIES
Anonymous
Not applicable

Hi @riyajshaikh09,

azurebfs.SecureAzureBlobFileSystem not found

According to the error message, this seems related to the Azure libraries. I suspect that some libraries are missing from your environment, so the code cannot be initialized/processed.
If that is the case, I'd suggest you check whether this code requires the 'Azure' and 'Azure Blob Storage' libraries to be attached. (When you run this code on the service side, it already includes and initializes some of the required libraries.)

Regards,

Xiaoxin Sheng

So how do I fix this? Which libraries do I need to install?

 

Anonymous
Not applicable

Hi @riyajshaikh09,

Have you tried importing the Azure libraries (e.g. azure-core and azure-storage-blob) that appear in the error messages before the line 'import sempy.fabric as fabric'?

Regards,

Xiaoxin Sheng

Can you give the commands?

Anonymous
Not applicable

Hi @riyajshaikh09,

For the libraries and commands I mentioned, they should be similar to the code below:

pip install azure-core
pip install azure-storage-blob
pip install semantic-link

In addition, you can refer to the following links about installing Azure libraries.

How to install Azure SDK library packages for Python - Python on Azure | Microsoft Learn

Package index for Azure SDK libraries for Python - Python on Azure | Microsoft Learn

Regards,

Xiaoxin Sheng

hackcrr
Super User

Hi, @riyajshaikh09 

Ensure that the necessary packages are installed. Run the following commands in the VS Code terminal.

pip install semantic-link
pip install azure-core
pip install azure-storage-blob

Make sure your import statements are correct and the Python packages are available:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, BooleanType, StringType
from pyspark.sql.functions import col, max, current_date
import sempy.fabric as fabric
import json
from datetime import datetime

from azure.core.credentials import AccessToken
from azure.storage.blob import BlobServiceClient

To resolve the Py4JJavaError about the missing Hadoop filesystem class, make sure that the necessary Hadoop and Azure configurations are set up correctly.
Add the following to your Spark configuration:

spark = SparkSession.builder \
    .appName("MergeWithRowHash") \
    .config("fs.azure.account.auth.type.<your_storage_account>.dfs.core.windows.net", "OAuth") \
    .config("fs.azure.account.oauth.provider.type.<your_storage_account>.dfs.core.windows.net", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider") \
    .config("fs.azure.account.oauth2.client.id.<your_storage_account>.dfs.core.windows.net", "<your-client-id>") \
    .config("fs.azure.account.oauth2.client.secret.<your_storage_account>.dfs.core.windows.net", "<your-client-secret>") \
    .config("fs.azure.account.oauth2.client.endpoint.<your_storage_account>.dfs.core.windows.net", "https://login.microsoftonline.com/<your-tenant-id>/oauth2/token") \
    .getOrCreate()

Ensure that the path to the Azure Data Lake file system is in the correct format:

df = spark.read.json('abfss://<container-name>@<storage-account>.dfs.core.windows.net/<directory-path>/config.json')

Ensure that the hadoop-azure and azure-data-lake-store JARs are available on the Spark classpath. You may need to download the JAR files and place them in the Spark installation's jars directory, or add them to the Spark configuration:

spark = SparkSession.builder \
    .appName("YourAppName") \
    .config("spark.jars", "/path/to/hadoop-azure.jar,/path/to/azure-data-lake-store.jar") \
    .getOrCreate()
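Alternatively, a sketch like the one below pulls the ABFS connector from Maven with spark.jars.packages instead of pointing at local JAR files; the hadoop-azure version shown is only an assumption and should match the Hadoop version of your Spark installation:

# Fetch the hadoop-azure connector from Maven when the session starts;
# the artifact version below is an assumption, not a verified match for your setup.
spark = SparkSession.builder \
    .appName("MergeWithRowHash") \
    .config("spark.jars.packages", "org.apache.hadoop:hadoop-azure:3.3.4") \
    .getOrCreate()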

In addition, you should check for any proxy settings or network issues that may be preventing access to Azure services.

 

hackcrr

If this post helps, then please consider accepting it as the solution and giving kudos to this post to help other members find it more quickly.

 

But while we are running in the Fabric workspace, we don't need to do all of this; we can call it directly, as sketched below.
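For example, a notebook running inside the Fabric workspace already has a Spark session and the OneLake/ABFS driver configured, so a read along these lines should work without the extra configuration above (the path placeholders are only illustrative):

# spark is provided by the Fabric runtime; no OAuth or JAR configuration is needed here.
df = spark.read.json("abfss://<workspace-name>@onelake.dfs.fabric.microsoft.com/<lakehouse-name>.Lakehouse/Files/config.json")
df.show()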
