Dinzz
Regular Visitor

Issue Writing to OneLake-Enabled KQL Database Delta Tables

I enabled OneLake availability for my KQL database so the tables are exposed as Delta or Parquet files. I’m able to read the table using Parquet without any issues:

df = spark.read.format("parquet").load(abfsspath)

However, when I try to overwrite or append data to the table, it fails:

append_df.write.format("delta").mode("append").save(abfsspath)

I receive the following error:

Operation failed: "Forbidden", 403, AuthorizationPermissionMismatch
"This request is not authorized to perform this operation using this permission."

 

I am the workspace admin, and I don’t want to ingest data using the conventional KQL ingestion methods because they are too slow for my use case. I specifically need to write records directly using Delta or Parquet. Could you help me understand how to enable this write operation for Delta tables?

 


8 REPLIES
kustortininja
Microsoft Employee

OneLake availability endpoints are read-only (see Eventhouse OneLake Availability - Microsoft Fabric | Microsoft Learn). You cannot write back to the Parquet table; you would need to write it as a new table in the Lakehouse.

What do you mean by "traditional ingestion methods are too slow for your use case"? RTI is the fastest ingestion mechanism available in Fabric. If you are trying to refine and transform data in real time, this wouldn't be a best practice. The lowest you can turn OneLake availability down to is 5 minutes, and even then it comes with a large warning about the small-files problem in Parquet. If you want to transform data on arrival, you should look into Eventhouse Update Policies.

Hi @kustortininja, this is the code that I am using to ingest data from a notebook:

 

kustoUri = "https://trd-test_dummy.kusto.fabric.microsoft.com"
accessToken = mssparkutils.credentials.getToken(kustoUri)

 

append_df.write \
    .format("com.microsoft.kusto.spark.synapse.datasource") \
    .option("kustoCluster", kustoUri) \
    .option("kustoDatabase", database) \
    .option("kustoTable", tableName) \
    .option("accessToken", accessToken) \
    .option("ingestionMode", "Direct") \
    .mode("append") \
    .save()

Hi @Dinzz 

Thank you for reaching out to the Microsoft Fabric Forum Community.

@kustortininja, thanks for the input.

Please try the code below; it may help you:


kustoUri = "https://trd-test_dummy.kusto.fabric.microsoft.com"

accessToken = mssparkutils.credentials.getToken(kustoUri)

append_df.write \
    .format("com.microsoft.kusto.spark.synapse.datasource") \
    .option("kustoCluster", kustoUri) \
    .option("kustoDatabase", database) \
    .option("kustoTable", tableName) \
    .option("accessToken", accessToken) \
    .option("ingestionMode", "Direct") \
    .option("tableCreation", "CreateIfNotExists") \
    .option("writeBatchTimeout", "00:10:00") \
    .option("batchSize", "1024") \
    .mode("append") \
    .save()

If I've misunderstood your needs or you still have problems, please feel free to let us know.

Thanks.

 

Hi @v-priyankata The code you shared is taking around 30–40 seconds to append a single record. Could you help me understand why this is taking so long? I’m looking to ingest the data as quickly as possible. Is there any alternative approach that would allow faster ingestion?

Hi @Dinzz 

That behavior is expected and not related to your code itself. The Kusto Spark connector (even in Direct ingestion mode) is optimized for batch ingestion, not for low-latency, row-by-row writes. Each Spark write triggers authentication and token validation, ingestion orchestration in KQL, and extent creation and commit on the KQL side.

Because of this, ingesting a single record can easily take 30–40 seconds.

Currently, there is no supported way in Fabric to ingest single records into KQL with millisecond-level latency, and direct Delta writes are intentionally blocked.
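Since that per-write overhead (authentication, ingestion orchestration, extent commit) is paid on every `save()`, the practical workaround is to buffer records and write them in batches rather than one at a time. A minimal sketch of that idea in plain Python (the `RecordBuffer` class and `flush_fn` callback are illustrative, not part of the Kusto Spark connector; in practice `flush_fn` would build a DataFrame from the pending records and perform a single connector write):

```python
class RecordBuffer:
    """Accumulate records and flush them in batches, so the expensive
    per-write overhead is paid once per batch instead of once per record."""

    def __init__(self, flush_fn, batch_size=1024):
        self.flush_fn = flush_fn      # callable receiving a list of records,
                                      # e.g. a wrapper around one Kusto Spark write
        self.batch_size = batch_size
        self.pending = []
        self.flushes = 0

    def add(self, record):
        """Queue one record; flush automatically when the batch is full."""
        self.pending.append(record)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        """Write any pending records as one batch."""
        if self.pending:
            self.flush_fn(self.pending)
            self.flushes += 1
            self.pending = []
```

With a batch size of, say, 1024, the 30–40 second fixed cost is amortized over the whole batch instead of being paid per record; this does not reduce the latency of a single write, only the average cost per record.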


Thanks.

Hi @v-priyankata, thank you for the explanation; it helped a lot.

Hi @Dinzz 

Thank you for reaching out to the Microsoft Fabric Forum Community.

 

I hope the information provided was helpful. If you still have questions, please don't hesitate to reach out to the community.

 

Hi @Dinzz 

Hope everything’s going smoothly on your end. I wanted to check if the issue got sorted out. If you have any other issues, please reach out to the community.
