I have the notebook below that runs without error. However, when I go back to my warehouse (Logistics_Warehouse), I don't see the table. Any thoughts? Also, can you recommend a good resource for reading and writing to different places, i.e., read from a lakehouse, write to a warehouse?
from pyspark.sql import SparkSession

# Initialize the Spark session
spark = SparkSession.builder \
    .appName("StockSnapshot") \
    .config("spark.sql.caseSensitive", "true") \
    .getOrCreate()

# Define the SQL query
sql_query = """
SELECT src.is_lidded AS LiddedFlag
FROM HAN_SRC_StockSnapshot src
"""

df = spark.sql(sql_query)
df.write.mode("overwrite").format("tsql").option("table", "Logistics_Warehouse.dbo.HAN_StockSnapshot")
print("The SQL query has been executed and the results have been written to the 'StockSnapshot' table in the data warehouse.")
Hi @plott722,
Thank you for reaching out to the Microsoft Fabric Community Forum.
We sincerely apologize for the inconvenience. After reviewing the issue of the missing warehouse table, here are some steps that may help resolve it:
Please verify that the Spark session is correctly configured to connect to the data warehouse, and that all necessary connection settings are in place.
Confirm that the connection details (server name, database name, authentication details) for the data warehouse are accurately specified, and check for any connectivity issues between Spark and the data warehouse.
Verify that the table name Logistics_Warehouse.dbo.HAN_StockSnapshot is correct and that the schema dbo exists in the data warehouse. Ensure there are no typos or incorrect schema references.
Ensure that the write format tsql is supported and correctly specified; tsql is not a standard Spark data source, so if it is not valid in your environment, consider a different approach such as jdbc for writing to the data warehouse. Also note that the writer chain in your snippet never ends in a terminal action such as .save() or .saveAsTable(), so the write is never actually executed. Finally, confirm that the account used by Spark has write permissions to create or overwrite tables in the specified schema of the data warehouse.
Check the execution logs of the notebook for any errors or exceptions, specifically those related to writing to the data warehouse.
Test with a simple query and write operation to verify that writing to the data warehouse works as expected. For instance, try writing a small DataFrame with sample data to a test table in the data warehouse.
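Putting the steps above together: the two most likely culprits in the original snippet are that tsql is not a registered Spark data source, and that the writer chain never ends in a terminal action, so nothing is ever written. Below is a minimal sketch of a working write, assuming a Fabric notebook where the Warehouse Spark connector (com.microsoft.spark.fabric) is available; the target name comes from the original post, and the helper name is illustrative, not part of any API:

```python
# Sketch only: assumes a Fabric notebook with the Warehouse Spark
# connector (com.microsoft.spark.fabric) available. The target name is
# from the original post; write_to_warehouse is an illustrative helper.
TARGET = "Logistics_Warehouse.dbo.HAN_StockSnapshot"  # warehouse.schema.table

def write_to_warehouse(df, target=TARGET):
    """Overwrite a Warehouse table from a Spark DataFrame.

    The terminal .synapsesql(...) call is what actually executes the
    write; a chain that stops at .option(...) only configures the
    writer and never runs.
    """
    import com.microsoft.spark.fabric  # noqa: F401 -- registers synapsesql
    df.write.mode("overwrite").synapsesql(target)

# In the notebook:
# df = spark.sql("SELECT src.is_lidded AS LiddedFlag FROM HAN_SRC_StockSnapshot src")
# write_to_warehouse(df)
```

If the connector is not available in your runtime, a JDBC write to the Warehouse's SQL endpoint is an alternative, but the same rule applies: df.write only takes effect once .save() (or an equivalent terminal call) is invoked.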
Also, please go through the following links for a better understanding:
Dataflow Gen2 data destinations and managed settings - Microsoft Fabric | Microsoft Learn
Tables in data warehousing - Microsoft Fabric | Microsoft Learn
Also, please see the following solved thread for reference:
Solved: Missing Tables in Data Warehouse - Microsoft Fabric Community
If this post helps, please give us Kudos and consider accepting it as a solution to help other members find it more quickly.
Thank you.
Hi @plott722
- Double-check the table name and schema in your option call.
- In some cases there can be a delay in the metadata sync between the Lakehouse and the Warehouse. You might need to trigger a refresh of the metadata sync so the Warehouse recognizes the newly written table.
- Make sure you have the permissions needed to write to the Logistics_Warehouse table.
If you need more info about lakehouse and warehouse concepts in Fabric please go through Better together - the lakehouse and warehouse - Microsoft Fabric | Microsoft Learn
Thanks!
Did I answer your question? Mark my post as a solution!
Proud to be a Super User!