There is a capability to create snapshots of a data warehouse on the Management tab within a Data Warehouse. The issue we have is that if a table is dropped, whether purposefully or accidentally, the data also disappears from all snapshots. Instead of displaying the data in the tables, it displays "Unable to preview the data".
For instance, when a warehouse is deployed through pipelines, if the schema changes, the deployment pipeline will drop all data in the table when updating the schema. Some developers may also do a DROP TABLE followed by SELECT * INTO when updating table data. A table may also get inadvertently dropped while working with it. In any case, I'm observing that the data is permanently lost despite the use of warehouse snapshots, whether the snapshot was taken pre-emptively or created as a point-in-time snapshot.
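For illustration, the destructive pattern looks like this (table names are just placeholders):
-- Destructive refresh pattern (placeholder names). After this runs,
-- [dbo].[Sales] is a brand-new object, and snapshots that referenced
-- the old object can no longer preview its data.
DROP TABLE IF EXISTS [dbo].[Sales];
SELECT * INTO [dbo].[Sales] FROM [staging].[Sales_New];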
Unless I misunderstand the purpose of the snapshot, this pretty much defeats the purpose and creates a false sense of security.
Has anyone else experienced this? I'm also curious what other people are using to back up the data in their data warehouses. I'm thinking I might need to resort to a custom notebook, but it would be much preferable if there were something out of the box that I could use instead of building something custom.
I think it is clear, whether a bug or by design, that snapshots are not a good option for backing up data, so I went with the notebook route for scheduled backups of critical tables. Here is what I ended up doing:
I set up workspaces parallel to my primary data workspaces (DEV/TEST/PROD) and created 3 items in each:
1. Warehouse - wh_backup - used for backups
2. Lakehouse - data_wh - shortcuts to the schemas that require backups
3. T-SQL Notebook - creates backups of tables from the shortcut schemas into the warehouse.
The 1st cell makes sure there is a log table to store run history. Since you can't get failure and run information out of dynamic SQL, this ensures each step in the process is logged for review.
The 2nd cell creates all schemas in the backup warehouse prefixed with today's date, e.g. bck_20250921_[original schema name].
The 3rd cell backs up the tables into the newly created schemas.
The 4th cell deletes tables/schemas that are older than the defined retention period.
The 5th cell displays the logs from the entire notebook.
NOTE: Don't schedule it to run near midnight UTC, or it may do weird things (each cell computes the dated prefix on its own, so a date rollover mid-run can leave cells targeting different schemas).
--===============================
-- Cell 1 — Initialize the backup log table
--===============================
-- Create the log table if it doesn't exist (run in the wh_backup warehouse context)
IF NOT EXISTS (
SELECT 1
FROM [wh_backup].sys.tables t
JOIN [wh_backup].sys.schemas s ON s.schema_id = t.schema_id
WHERE s.name = 'dbo' AND t.name = 'backup_log'
)
BEGIN
CREATE TABLE [wh_backup].[dbo].[backup_log]
(
-- What/where
object_type VARCHAR(30) NOT NULL, -- 'SCHEMA' | 'TABLE'
action_name VARCHAR(50) NOT NULL, -- 'CreateSchema' | 'CopyTable' | 'DropOldBackupTable' etc.
source_db VARCHAR(128) NULL,
source_schema VARCHAR(128) NULL,
source_object VARCHAR(128) NULL,
target_db VARCHAR(128) NULL,
target_schema VARCHAR(128) NULL,
target_object VARCHAR(128) NULL,
-- Timing
backup_at DATETIME2(3) NOT NULL ,
backup_date DATE NOT NULL ,
-- Outcome
status VARCHAR(10) NOT NULL ,
rows_affected BIGINT NULL,
-- Diagnostics
script_text VARCHAR(MAX) NULL, -- exact script attempted
error_message VARCHAR(MAX) NULL,
error_number INT NULL,
error_severity INT NULL,
error_state INT NULL
);
END;
--===============================
--Cell 2 — Create dated backup schemas (one per source schema)
--===============================
DECLARE @CreateSchemas VARCHAR(MAX) = '';
/* Build one TRY/CATCH block per schema to create and log */
SELECT
@CreateSchemas = @CreateSchemas + '
BEGIN
BEGIN TRY
IF NOT EXISTS (SELECT 1 FROM sys.schemas WHERE name = ''' + new_schema + ''')
BEGIN
DECLARE @stmt' + oid + ' VARCHAR(4000) = ''CREATE SCHEMA [' + new_schema + ']'';
EXEC(@stmt' + oid + ');
INSERT INTO [wh_backup].[dbo].[backup_log]
(
object_type, action_name,
source_db, source_schema, source_object,
target_db, target_schema, target_object,
backup_at, backup_date,
status, rows_affected,
script_text
)
VALUES
(
''SCHEMA'', ''CreateSchema'',
''data_wh'', ''' + source_schema + ''', NULL,
''wh_backup'', ''' + new_schema + ''', NULL,
SYSUTCDATETIME(), CAST(SYSUTCDATETIME() AS DATE),
''Success'', 0,
''CREATE SCHEMA [' + new_schema + ']''
);
END
ELSE
BEGIN
/* Schema appeared between discovery and create; record as already present */
INSERT INTO [wh_backup].[dbo].[backup_log]
(
object_type, action_name,
source_db, source_schema, source_object,
target_db, target_schema, target_object,
backup_at, backup_date,
status, rows_affected,
script_text
)
VALUES
(
''SCHEMA'', ''CreateSchema (exists)'',
''data_wh'', ''' + source_schema + ''', NULL,
''wh_backup'', ''' + new_schema + ''', NULL,
SYSUTCDATETIME(), CAST(SYSUTCDATETIME() AS DATE),
''Success'', 0,
''-- already existed: [' + new_schema + ']''
);
END
END TRY
BEGIN CATCH
INSERT INTO [wh_backup].[dbo].[backup_log]
(
object_type, action_name,
source_db, source_schema, source_object,
target_db, target_schema, target_object,
backup_at, backup_date,
status, rows_affected,
script_text, error_message, error_number, error_severity, error_state
)
VALUES
(
''SCHEMA'', ''CreateSchema'',
''data_wh'', ''' + source_schema + ''', NULL,
''wh_backup'', ''' + new_schema + ''', NULL,
SYSUTCDATETIME(), CAST(SYSUTCDATETIME() AS DATE),
''Fail'', 0,
''CREATE SCHEMA [' + new_schema + ']'',
ERROR_MESSAGE(), ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE()
);
END CATCH
END;
'
FROM (
SELECT DISTINCT
-- prefix like bck_20250923_ + original schema
(p.prefix + s.name) AS new_schema,
s.name AS source_schema,
CAST(s.schema_id as VARCHAR(20)) as oid
FROM data_wh.sys.schemas AS s
JOIN data_wh.sys.tables AS t ON t.schema_id = s.schema_id
CROSS JOIN (SELECT 'bck_' + CAST(FORMAT(GETDATE(),'yyyyMMdd_') AS VARCHAR(16)) AS prefix) AS p -- 'yyyyMMdd' keeps the day zero-padded so retention parsing (SUBSTRING(name, 5, 8)) works
WHERE (p.prefix + s.name) NOT IN (SELECT name FROM wh_backup.sys.schemas)
AND s.name NOT IN ('sys','queryinsights','INFORMATION_SCHEMA')
) AS x;
EXEC (@CreateSchemas);
SELECT *
FROM [wh_backup].[dbo].[backup_log]
WHERE backup_at >= DATEADD(HOUR, -1, SYSUTCDATETIME())
and action_name in ('CreateSchema','CreateSchema (exists)')
ORDER BY backup_at ASC;
--===============================
--Cell 3 — Copy source tables into dated backup schemas
--===============================
DECLARE @CreateTable VARCHAR(MAX) = '';
/* Build one TRY/CATCH block per table copy, using object_id suffix to avoid var-name collisions */
SELECT
@CreateTable = @CreateTable + '
BEGIN
BEGIN TRY
-- Unique variable names per table using object_id suffix
DECLARE @drop_stmt_' + oid + ' VARCHAR(4000) = ''DROP TABLE IF EXISTS ' + newTable + ''';
DECLARE @copy_stmt_' + oid + ' VARCHAR(MAX) = ''SELECT * INTO ' + newTable + ' FROM ' + oldTable + ''';
-- Drop old copy (if any), then copy fresh
EXEC(@drop_stmt_' + oid + ');
EXEC(@copy_stmt_' + oid + ');
DECLARE @rows_' + oid + ' BIGINT = @@ROWCOUNT;
INSERT INTO [wh_backup].[dbo].[backup_log]
(
object_type, action_name,
source_db, source_schema, source_object,
target_db, target_schema, target_object,
backup_at, backup_date,
status, rows_affected,
script_text
)
VALUES
(
''TABLE'', ''CopyTable'',
''data_wh'', ''' + source_schema + ''', ''' + table_name + ''',
''wh_backup'', ''' + target_schema + ''', ''' + table_name + ''',
SYSUTCDATETIME(), CAST(SYSUTCDATETIME() AS DATE),
''Success'', @rows_' + oid + ',
@copy_stmt_' + oid + '
);
END TRY
BEGIN CATCH
DECLARE @ERR_copy_stmt_' + oid + ' VARCHAR(MAX) = ''SELECT * INTO ' + newTable + ' FROM ' + oldTable + ''';
INSERT INTO [wh_backup].[dbo].[backup_log]
(
object_type, action_name,
source_db, source_schema, source_object,
target_db, target_schema, target_object,
backup_at, backup_date,
status, rows_affected,
script_text, error_message, error_number, error_severity, error_state
)
VALUES
(
''TABLE'', ''CopyTable'',
''data_wh'', ''' + source_schema + ''', ''' + table_name + ''',
''wh_backup'', ''' + target_schema + ''', ''' + table_name + ''',
SYSUTCDATETIME(), CAST(SYSUTCDATETIME() AS DATE),
''Fail'', 0,
@ERR_copy_stmt_' + oid + ', ERROR_MESSAGE(), ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE()
);
END CATCH
END;
'
FROM (
SELECT
-- Logging pieces
s.name AS source_schema,
t.name AS table_name,
(p.prefix + s.name) AS target_schema,
-- Suffix for unique variable names
CAST(t.object_id AS VARCHAR(20)) AS oid,
-- Fully-qualified names used in dynamic SQL
'[wh_backup].[' + (p.prefix + s.name) + '].[' + t.name + ']' AS newTable,
'[data_wh].[' + s.name + '].[' + t.name + ']' AS oldTable
FROM data_wh.sys.schemas AS s
JOIN data_wh.sys.tables AS t ON t.schema_id = s.schema_id
CROSS JOIN (SELECT 'bck_' + CAST(FORMAT(GETDATE(),'yyyyMMdd_') AS VARCHAR(16)) AS prefix) AS p -- 'yyyyMMdd' keeps the day zero-padded to match Cell 2's schema names
WHERE s.name NOT IN ('sys','queryinsights','INFORMATION_SCHEMA')
) AS x;
EXEC (@CreateTable);
SELECT *
FROM [wh_backup].[dbo].[backup_log]
WHERE backup_at >= DATEADD(HOUR, -1, SYSUTCDATETIME())
and action_name = 'CopyTable'
ORDER BY backup_at ASC;
--===============================
--Cell 4 — Retention cleanup (drop aged backup tables and empty schemas)
--===============================
/* === Retention window === */
DECLARE @retention_days INT = 60; -- change as needed
DECLARE @cutoff_date DATE = DATEADD(DAY, -@retention_days, CAST(SYSUTCDATETIME() AS DATE));
/* === Drop old backup tables (per-table TRY/CATCH with unique variable names) === */
DECLARE @DropTables VARCHAR(MAX) = '';
SELECT
@DropTables = @DropTables + '
BEGIN
BEGIN TRY
DECLARE @drop_stmt_' + CAST(t.object_id AS VARCHAR(20)) + ' VARCHAR(4000) =
''DROP TABLE IF EXISTS [wh_backup].[' + s.name + '].[' + t.name + ']'';
EXEC(@drop_stmt_' + CAST(t.object_id AS VARCHAR(20)) + ');
INSERT INTO [wh_backup].[dbo].[backup_log]
(
object_type, action_name,
source_db, source_schema, source_object,
target_db, target_schema, target_object,
backup_at, backup_date,
status, rows_affected,
script_text
)
VALUES
(
''TABLE'', ''DropOldBackupTable'',
NULL, NULL, NULL,
''wh_backup'', ''' + s.name + ''', ''' + t.name + ''',
SYSUTCDATETIME(), CAST(SYSUTCDATETIME() AS DATE),
''Success'', 0,
@drop_stmt_' + CAST(t.object_id AS VARCHAR(20)) + '
);
END TRY
BEGIN CATCH
DECLARE @ERR_stmt_' + CAST(t.object_id AS VARCHAR(20)) + ' VARCHAR(4000) =
''DROP TABLE IF EXISTS [wh_backup].[' + s.name + '].[' + t.name + ']'';
INSERT INTO [wh_backup].[dbo].[backup_log]
(
object_type, action_name,
source_db, source_schema, source_object,
target_db, target_schema, target_object,
backup_at, backup_date,
status, rows_affected,
script_text, error_message, error_number, error_severity, error_state
)
VALUES
(
''TABLE'', ''DropOldBackupTable'',
NULL, NULL, NULL,
''wh_backup'', ''' + s.name + ''', ''' + t.name + ''',
SYSUTCDATETIME(), CAST(SYSUTCDATETIME() AS DATE),
''Fail'', 0,
@ERR_stmt_' + CAST(t.object_id AS VARCHAR(20)) + ', ERROR_MESSAGE(), ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE()
);
END CATCH
END;
'
FROM [wh_backup].sys.schemas AS s
JOIN [wh_backup].sys.tables AS t ON t.schema_id = s.schema_id
CROSS APPLY (
SELECT
TRY_CONVERT(date, SUBSTRING(s.name, 5, 8), 112) AS backup_dt
) AS p
WHERE
-- schema name like bck_YYYYMMDD_*
s.name LIKE 'bck\_%\_%' ESCAPE '\'
AND p.backup_dt IS NOT NULL
AND p.backup_dt < @cutoff_date;
EXEC (@DropTables);
/* === OPTIONAL: Drop old/empty backup schemas (per-schema TRY/CATCH) === */
DECLARE @DropSchemas VARCHAR(MAX) = '';
SELECT
@DropSchemas = @DropSchemas + '
BEGIN
BEGIN TRY
IF NOT EXISTS (
SELECT 1
FROM [wh_backup].sys.tables t
WHERE t.schema_id = ' + CAST(s.schema_id AS VARCHAR(20)) + '
)
BEGIN
DECLARE @drop_schema_stmt_' + CAST(s.schema_id AS VARCHAR(20)) + ' VARCHAR(4000) =
''DROP SCHEMA [' + s.name + ']'';
EXEC(@drop_schema_stmt_' + CAST(s.schema_id AS VARCHAR(20)) + ');
INSERT INTO [wh_backup].[dbo].[backup_log]
(
object_type, action_name,
source_db, source_schema, source_object,
target_db, target_schema, target_object,
backup_at, backup_date,
status, rows_affected,
script_text
)
VALUES
(
''SCHEMA'', ''DropOldBackupSchema'',
NULL, NULL, NULL,
''wh_backup'', ''' + s.name + ''', NULL,
SYSUTCDATETIME(), CAST(SYSUTCDATETIME() AS DATE),
''Success'', 0,
@drop_schema_stmt_' + CAST(s.schema_id AS VARCHAR(20)) + '
);
END
END TRY
BEGIN CATCH
DECLARE @ERR_drop_schema_' + CAST(s.schema_id AS VARCHAR(20)) + ' VARCHAR(4000) =
''DROP SCHEMA [' + s.name + ']'';
INSERT INTO [wh_backup].[dbo].[backup_log]
(
object_type, action_name,
source_db, source_schema, source_object,
target_db, target_schema, target_object,
backup_at, backup_date,
status, rows_affected,
script_text, error_message, error_number, error_severity, error_state
)
VALUES
(
''SCHEMA'', ''DropOldBackupSchema'',
NULL, NULL, NULL,
''wh_backup'', ''' + s.name + ''', NULL,
SYSUTCDATETIME(), CAST(SYSUTCDATETIME() AS DATE),
''Fail'', 0,
@ERR_drop_schema_' + CAST(s.schema_id AS VARCHAR(20)) + ', ERROR_MESSAGE(), ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE()
);
END CATCH
END;
'
FROM [wh_backup].sys.schemas AS s
CROSS APPLY (
SELECT TRY_CONVERT(date, SUBSTRING(s.name, 5, 8), 112) AS backup_dt
) AS p
WHERE
s.name LIKE 'bck\_%\_%' ESCAPE '\'
AND p.backup_dt IS NOT NULL
AND p.backup_dt < @cutoff_date;
EXEC (@DropSchemas);
SELECT *
FROM [wh_backup].[dbo].[backup_log]
WHERE backup_at >= DATEADD(HOUR, -1, SYSUTCDATETIME())
and action_name in ('DropOldBackupSchema','DropOldBackupTable')
ORDER BY backup_at ASC;
--===============================
--Cell 5 — Recent run log
--===============================
SELECT *
FROM [wh_backup].[dbo].[backup_log]
WHERE backup_at >= DATEADD(HOUR, -3, SYSUTCDATETIME())
ORDER BY backup_at ASC;
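To restore a table from one of the dated backup schemas, copy it back into the source warehouse. A minimal sketch with hypothetical warehouse/table names; note the Lakehouse SQL endpoint (data_wh) is read-only, so run this against the original warehouse item:
-- Restore sketch (hypothetical names): copy a backed-up table from
-- the dated schema in wh_backup back into the source warehouse.
DROP TABLE IF EXISTS [my_warehouse].[dbo].[FactSales];
SELECT *
INTO [my_warehouse].[dbo].[FactSales]
FROM [wh_backup].[bck_20250921_dbo].[FactSales];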
Hi @Nathan_Mosher ,
Thanks for reaching out to the Microsoft Fabric Community forum.
Possible Solutions and Workarounds:
Given the current behavior of the built-in snapshot mechanism, consider the following approaches to help prevent the loss of important tables or data:
Avoid Dropping Tables
Rather than using DROP TABLE followed by CREATE or SELECT INTO, opt for ALTER TABLE / TRUNCATE + INSERT or MERGE / incremental operations. This approach maintains the table object, preserving its metadata and history.
As highlighted in dbt-related documentation, use incremental materialization or custom materializations that do not drop the table.
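A minimal sketch of the non-destructive pattern, assuming a hypothetical staging table that holds the refreshed data:
-- Non-destructive refresh sketch (hypothetical names). The table
-- object is preserved, so existing snapshots can still resolve it.
TRUNCATE TABLE [dbo].[Sales];
INSERT INTO [dbo].[Sales] (OrderId, OrderDate, Amount)
SELECT OrderId, OrderDate, Amount
FROM [staging].[Sales_New];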
Leverage Table Cloning or Point-in-Time Clone/Backup
Create table clones at a specific point in time before making any destructive changes. This ensures the schema and data are preserved, even if the original table is dropped. Refer to available documentation for CREATE TABLE … AS CLONE OF … AT 'timestamp'.
https://learn.microsoft.com/en-us/fabric/data-warehouse/warehouse-snapshot
For broader coverage, consider periodic cloning or exporting of key tables as a backup strategy.
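For illustration, a minimal clone sketch (table names and timestamp are hypothetical; see the linked documentation for the exact AT-clause syntax and its retention limits):
-- Zero-copy clone sketch (hypothetical names/timestamp). Captures the
-- table as it existed at a point in time before a destructive change.
CREATE TABLE [dbo].[Sales_pre_deploy]
AS CLONE OF [dbo].[Sales]
AT '2025-09-21T06:00:00';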
Automate Backups and Snapshots
Establish regular warehouse snapshots prior to deployments or schema modifications.
Implement version control for schema and data snapshots, and consider exporting data to external storage formats (such as CSV, Parquet, or Delta files) ahead of any schema changes.
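One possible export path is CREATE EXTERNAL TABLE AS SELECT (CETAS). This is only a sketch: CETAS availability and the external data source/credential setup depend on your platform version, and every name and URL below is hypothetical, so verify against current Fabric Warehouse documentation first:
-- CETAS export sketch (hypothetical names and storage URL; verify
-- current Fabric Warehouse support and credential requirements).
CREATE EXTERNAL FILE FORMAT parquet_ff WITH (FORMAT_TYPE = PARQUET);
CREATE EXTERNAL DATA SOURCE backup_ds
WITH (LOCATION = 'abfss://backups@mystorage.dfs.core.windows.net');
CREATE EXTERNAL TABLE [dbo].[Sales_export_20250921]
WITH (LOCATION = '/sales/20250921/', DATA_SOURCE = backup_ds, FILE_FORMAT = parquet_ff)
AS SELECT * FROM [dbo].[Sales];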
Restore from External Backups or Exports
If a table is dropped unintentionally, restore it from external backups or exports. Please note that Fabric’s snapshots will not recover dropped tables in this scenario.
Submit Feature Requests or Monitor the Roadmap
If this behavior is not as expected, consider submitting feedback to Microsoft to request that snapshots retain dropped objects, or to introduce features such as “object undelete” or schema recovery within snapshot retention.
Monitor updates to preview features, as future releases may address these concerns.
Hi @Nathan_Mosher ,
Here is some Microsoft documentation on snapshots and what they should be used for.
https://learn.microsoft.com/en-us/fabric/data-warehouse/warehouse-snapshot
Another article for Fabric Warehouse Restore Points.
https://learn.microsoft.com/en-us/fabric/data-warehouse/restore-in-place
I assume you have already looked at these articles?
However, a snapshot is by no means a backup of your data. It is only intended to represent a specific state of the warehouse or your data, which you can then query or report against.
Now to your problem.
If I understand correctly: you create a snapshot, then delete a table, for example, and that table is then missing from all previous snapshots as well.
Is that correct?
Furthermore, keep in mind that warehouse snapshots are still in preview, so occasional problems like this are possible, though not guaranteed.
Let me know if you need more help.
Best regards
To answer your question: yes. Any table that has been dropped or replaced becomes inaccessible in all snapshots, even snapshots taken before it was dropped. I had read the articles you referenced, and from how they are written, this appears to just be an unintended bug. They specifically mention that snapshots are intended to protect against unintended ETL corruption.
I ended up just creating a parallel workspace with a warehouse to store backups, and wrote a notebook script to create rolling backups of the critical tables. If anyone is interested in the code I wrote for that, feel free to reply and I'll post it publicly.
What you say about ETL is correct, of course.
But it's good that you've found a workaround that keeps your data safe.
Feel free to share it, as it will help other community members with similar problems.
If I can help you further, please let me know.
And if you like the answer, feel free to give me kudos or accept it as the solution.
Best regards