soumya_cg
New Member

403 forbidden error on querying lakehouse table using pyspark

Hi All,

I need some help. I'm trying to run the query below using PySpark inside my Fabric lakehouse:

df = spark.sql("SELECT * FROM Stream_Lakehouse.dbo.FACT_KPIs_LH LIMIT 1000")
display(df)

I'm getting the error below. Can someone help? (I have truncated the full error log due to space limits here.)

Py4JJavaError                             Traceback (most recent call last)
Cell In[47], line 1
----> 1 df = spark.sql("SELECT * FROM Stream_Lakehouse.dbo.FACT_KPIs_LH LIMIT 1000")
      2 display(df)
 
File /opt/spark/python/lib/pyspark.zip/pyspark/sql/session.py:1440, in SparkSession.sql(self, sqlQuery, args, **kwargs)
   1438 try:
   1439     litArgs = {k: _to_java_column(lit(v)) for k, v in (args or {}).items()}
-> 1440     return DataFrame(self._jsparkSession.sql(sqlQuery, litArgs), self)
   1441 finally:
   1442     if len(kwargs) > 0:
 
File ~/cluster-env/trident_env/lib/python3.10/site-packages/py4j/java_gateway.py:1322, in JavaMember.__call__(self, *args)
   1316 command = proto.CALL_COMMAND_NAME +\
   1317     self.command_header +\
   1318     args_command +\
   1319     proto.END_COMMAND_PART
   1321 answer = self.gateway_client.send_command(command)
-> 1322 return_value = get_return_value(
   1323     answer, self.gateway_client, self.target_id, self.name)
   1325 for temp_arg in temp_args:
   1326     if hasattr(temp_arg, "_detach"):
 
File /opt/spark/python/lib/pyspark.zip/pyspark/errors/exceptions/captured.py:169, in capture_sql_exception.<locals>.deco(*a, **kw)
    167 def deco(*a: Any, **kw: Any) -> Any:
    168     try:
--> 169         return f(*a, **kw)
    170     except Py4JJavaError as e:
    171         converted = convert_exception(e.java_exception)
 
File ~/cluster-env/trident_env/lib/python3.10/site-packages/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
    324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325 if answer[1] == REFERENCE_TYPE:
--> 326     raise Py4JJavaError(
    327         "An error occurred while calling {0}{1}{2}.\n".
    328         format(target_id, ".", name), value)
    329 else:
    330     raise Py4JError(
    331         "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
    332         format(target_id, ".", name, value))
 
Py4JJavaError: An error occurred while calling o322.sql.
: java.lang.RuntimeException: Request failed: HTTP/1.1 403 Forbidden
at com.microsoft.fabric.spark.metadata.Helpers$.executeRequest(Helpers.scala:154)
at com.microsoft.fabric.platform.PbiPlatformClient.newGetRequest(PbiPlatformClient.scala:51)
at com.microsoft.fabric.platform.PbiPlatformClient.newGetRequest$(PbiPlatformClient.scala:47)
at com.microsoft.fabric.platform.PbiPlatformInternalApiClient.newGetRequest(PbiPlatformClient.scala:175)
at com.microsoft.fabric.platform.PbiPlatformInternalApiClient.getAllWorkspaces(PbiPlatformClient.scala:199)
at com.microsoft.fabric.platform.InstrumentedPbiPlatformClient.$anonfun$getAllWorkspaces$1(PbiPlatformClient.scala:164)
at com.microsoft.fabric.spark.metadata.Helpers$.timed(Helpers.scala:29)
at com.microsoft.fabric.platform.InstrumentedPbiPlatformClient.getAllWorkspaces(PbiPlatformClient.scala:164)
at com.microsoft.fabric.platform.PbiPlatformCachingClient.$anonfun$workspaceCache$1(PbiPlatformClient.scala:117)
at com.google.common.base.Suppliers$ExpiringMemoizingSupplier.get(Suppliers.java:192)
at com.microsoft.fabric.platform.PbiPlatformCachingClient.getWorkspace(PbiPlatformClient.scala:146)
at com.microsoft.fabric.platform.PbiPlatformCachingClient.getArtifacts(PbiPlatformClient.scala:136)
at com.microsoft.fabric.platform.PbiPlatformCachingClient.$anonfun$artifactCache$1(PbiPlatformClient.scala:130)
at com.github.benmanes.caffeine.cache.LocalLoadingCache.lambda$newMappingFunction$2(LocalLoadingCache.java:145)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2406)
at java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1908)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2404)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2387)
at com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
at com.github.benmanes.caffeine.cache.LocalLoadingCache.get(LocalLoadingCache.java:56)
at com.microsoft.fabric.platform.PbiPlatformCachingClient.getArtifact(PbiPlatformClient.scala:151)
at com.microsoft.fabric.spark.metadata.SchemaPathResolver.getArtifactRoot(pathResolvers.scala:127)
at com.microsoft.fabric.spark.metadata.SchemaPathResolver.getSchemaRoot(pathResolvers.scala:144)
at com.microsoft.fabric.spark.metadata.DefaultSchemaMetadataManager.listSchemas(DefaultSchemaMetadataManager.scala:218)
at com.microsoft.fabric.spark.metadata.DefaultSchemaMetadataManager.$anonfun$defaultSchemaPathResolver$1(DefaultSchemaMetadataManager.scala:30)
at com.microsoft.fabric.spark.metadata.NamespaceResolver.$anonfun$decodedSchemaNameCache$1(pathResolvers.scala:46)
at com.github.benmanes.caffeine.cache.LocalLoadingCache.lambda$newMappingFunction$2(LocalLoadingCache.java:145)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2406)
at java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1908)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2404)
at com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2387)
at com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
at com.github.benmanes.caffeine.cache.LocalLoadingCache.get(LocalLoadingCache.java:56)
at com.microsoft.fabric.spark.metadata.Helpers$.forceLoadIfRequiredInCachedMap(Helpers.scala:61)
at com.microsoft.fabric.spark.metadata.NamespaceResolver.inferNamespace(pathResolvers.scala:87)
at com.microsoft.fabric.spark.metadata.NamespaceResolver.$anonfun$toNamespace$1(pathResolvers.scala:79)
at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1705)
at com.microsoft.fabric.spark.metadata.NamespaceResolver.toNamespace(pathResolvers.scala:79)
at com.microsoft.fabric.spark.metadata.DefaultSchemaMetadataManager.getSchema(DefaultSchemaMetadataManager.scala:73)
at com.microsoft.fabric.spark.metadata.MetadataManager.getSchema(MetadataManager.scala:192)
at com.microsoft.fabric.spark.metadata.InstrumentedMetadataManager.super$getSchema(MetadataManager.scala:321)
at com.microsoft.fabric.spark.metadata.InstrumentedMetadataManager.$anonfun$getSchema$1(MetadataManager.scala:321)
at com.microsoft.fabric.spark.metadata.Helpers$.timed(Helpers.scala:29)
at com.microsoft.fabric.spark.metadata.InstrumentedMetadataManager.getSchema(MetadataManager.scala:321)
at com.microsoft.fabric.spark.catalog.OnelakeExternalCatalog.getDatabase(OnelakeExternalCatalog.scala:78)
at com.microsoft.fabric.spark.catalog.OnelakeExternalCatalog.databaseExists(OnelakeExternalCatalog.scala:84)
at com.microsoft.fabric.spark.catalog.InstrumentedExternalCatalog.$anonfun$databaseExists$1(OnelakeExternalCatalog.scala:417)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at com.microsoft.fabric.spark.metadata.Helpers$.timed(Helpers.scala:29)
at com.microsoft.fabric.spark.catalog.InstrumentedExternalCatalog.databaseExists(OnelakeExternalCatalog.scala:417)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:169)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:142)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:54)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.$anonfun$catalog$1(HiveSessionStateBuilder.scala:69)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog$lzycompute(SessionCatalog.scala:140)

 

1 ACCEPTED SOLUTION
frithjof_v
Super User
5 REPLIES
Anonymous
Not applicable

Hi @soumya_cg 

 

The issue seems to be partially fixed. I can read data from a schema-enabled lakehouse now, but I need to pin it as the default lakehouse of my notebook. Otherwise it returns a "[REQUIRES_SINGLE_PART_NAMESPACE] spark_catalog requires a single-part namespace" exception.
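For reference, a minimal sketch of that workaround (names carried over from the question; it assumes Stream_Lakehouse has already been pinned as the notebook's default lakehouse, e.g. via the notebook's lakehouse explorer):

# Sketch, not an official fix. With the schema-enabled lakehouse pinned as
# the notebook's default, query with a single-part namespace (schema only).
# spark and display are provided by the Fabric notebook session.
df = spark.sql("SELECT * FROM dbo.FACT_KPIs_LH LIMIT 1000")
display(df)

# The three-part form is what raises the [REQUIRES_SINGLE_PART_NAMESPACE]
# exception in this state:
# spark.sql("SELECT * FROM Stream_Lakehouse.dbo.FACT_KPIs_LH LIMIT 1000")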


I will keep tracking this issue and update if there is any further progress. 

 

Best Regards,
Jing

Anonymous
Not applicable

Hi @soumya_cg 

 

This is a known issue, as frithjof_v has linked. The engineers are working on the fix now, but we haven't received a definite ETA yet. I will keep you updated once there is any progress. We appreciate your understanding and patience.

 

Best Regards,
Jing

frithjof_v
Super User

Yes I'm using

Okay, see if the link in my previous comment helps you.

 

In general, I would recommend not using the schema-enabled Lakehouse as long as it's only a preview feature.

 

I would just use a normal Lakehouse without schemas.
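For comparison, a minimal sketch of the same read against a regular (non-schema) Lakehouse, reusing the names from the question. Without schemas there is no dbo layer, so a two-part lakehouse.table name is enough:

# Sketch, assuming Stream_Lakehouse is a normal lakehouse without schemas:
# tables sit directly under the lakehouse, so no schema qualifier is needed.
df = spark.sql("SELECT * FROM Stream_Lakehouse.FACT_KPIs_LH LIMIT 1000")
display(df)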
