I am trying to create a managed table in a Lakehouse using a notebook, with rows entered manually (the SQL equivalent of INSERT INTO), but I am getting the following error and have no idea how to debug it. It seems to create the Delta table without any columns.
%%pyspark
from pyspark.sql import SparkSession
from pyspark.sql.types import *
from pyspark.sql import functions as sf
from datetime import datetime

# Initialize Spark session
spark = SparkSession.builder \
    .appName("session_one") \
    .getOrCreate()

schema = StructType([
    StructField('id', IntegerType(), True),
    StructField('schema_name', StringType(), True),
    StructField('table_name', StringType(), True),
    StructField('watermark_value', TimestampType(), True),
    StructField('full_path', StringType(), True)
])

row_one = [
    (1, 'lorem', 'ipsum', datetime(1, 1, 1, 0, 0, 0), None),
]

df_one = spark.createDataFrame(row_one, schema)
df_two = df_one.withColumn('full_path', sf.concat(sf.col('schema_name'), sf.lit('.'), sf.col('table_name')))
df_two.show()
df_two.write.format("delta").saveAsTable("watermark")
How can I resolve the `No Delta transaction log entries were found` error?
This issue can be solved by using the DeltaTableBuilder API to create the table with an explicit schema before writing any rows.
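For example, a minimal sketch using the delta-spark DeltaTableBuilder (the column types mirror the schema in the question; this assumes a default Lakehouse is attached so the table name resolves):

from delta.tables import DeltaTable

# Create the table with an explicit schema so a valid Delta
# transaction log is written even before any rows exist.
(
    DeltaTable.createIfNotExists(spark)
    .tableName("watermark")
    .addColumn("id", "INT")
    .addColumn("schema_name", "STRING")
    .addColumn("table_name", "STRING")
    .addColumn("watermark_value", "TIMESTAMP")
    .addColumn("full_path", "STRING")
    .execute()
)

After this, appends and INSERTs should find an existing transaction log with the expected schema.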
Or maybe this could work (I asked ChatGPT how to create a similar table with SQL syntax):
%%sql
-- Step 1: Create the table
CREATE TABLE watermark (
    id INT,
    schema_name VARCHAR(255),
    table_name VARCHAR(255),
    watermark_value TIMESTAMP,
    full_path VARCHAR(255)
);

-- Step 2: Insert data into the table
INSERT INTO watermark (id, schema_name, table_name, watermark_value, full_path)
VALUES (1, 'lorem', 'ipsum', '0001-01-01 00:00:00', NULL);

-- Step 3: Update the `full_path` column
UPDATE watermark
SET full_path = schema_name || '.' || table_name;
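If you'd rather stay in a PySpark cell, the same statements can, I believe, be run through spark.sql(); a minimal sketch under that assumption, using the same watermark table:

# Run the same DDL/DML from a PySpark cell instead of a %%sql cell.
spark.sql("""
    CREATE TABLE IF NOT EXISTS watermark (
        id INT,
        schema_name STRING,
        table_name STRING,
        watermark_value TIMESTAMP,
        full_path STRING
    )
""")
spark.sql("""
    INSERT INTO watermark
    VALUES (1, 'lorem', 'ipsum', TIMESTAMP'0001-01-01 00:00:00', NULL)
""")
# Delta tables support SQL UPDATE, so the derived column works here too.
spark.sql("UPDATE watermark SET full_path = schema_name || '.' || table_name")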
Does it work if you use this code below?
---------------------------------------------
from pyspark.sql.types import *
from pyspark.sql import functions as sf
from datetime import datetime

schema = StructType([
    StructField('id', IntegerType(), True),
    StructField('schema_name', StringType(), True),
    StructField('table_name', StringType(), True),
    StructField('watermark_value', TimestampType(), True),
    StructField('full_path', StringType(), True)
])

row_one = [
    (1, 'lorem', 'ipsum', datetime(1, 1, 1, 0, 0, 0), None),
]

df_one = spark.createDataFrame(row_one, schema)
df_two = df_one.withColumn('full_path', sf.concat(sf.col('schema_name'), sf.lit('.'), sf.col('table_name')))
df_two.show()
df_two.write.mode("overwrite").saveAsTable("watermark")
-----------------------------------------------
I don't think you need to specify %%pyspark, as this is the default language.
I don't think you need to initialize the Spark session in your code in Fabric notebooks; a session is already available as spark.
Maybe you need to add .mode("overwrite") or .mode("append") to the saveAsTable expression.
By the way, does your code run without errors if you remove the saveAsTable line?
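For example, a minimal sketch of the write with an explicit save mode (overwrite replaces the table contents, append adds rows; this assumes the df_two DataFrame from the snippet above):

# An explicit save mode avoids ambiguity when the table already exists.
df_two.write.mode("append").format("delta").saveAsTable("watermark")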