<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Lakehouse Table Generate Create Table in Data Engineering</title>
    <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4899735#M14003</link>
    <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/925904"&gt;@JonBFabric&lt;/a&gt;&amp;nbsp;, Thank you for reaching out to the Microsoft Community Forum.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A Fabric lakehouse cannot return the original CREATE TABLE statement because that information is never stored. Delta Lake keeps only a structural schema in its transaction log, and all string columns are recorded simply as string, without any notion of the original CHAR(n) or VARCHAR(n) definitions. Because the lakehouse storage layer does not preserve fixed-length constraints, there is no system-level metadata you can query later to recover them.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The SQL analytics endpoint also cannot help, because it exposes a compatibility projection rather than the real underlying schema. Its inflated character lengths, including the 4× multiplier and the cap at 8000, are generated by the endpoint itself and do not represent the actual table definition or any original DDL. That surface is designed for querying, not schema reconstruction, and therefore does not retain the information you are looking for.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Given these constraints, there is no reliable way to extract the exact SQL used to create a lakehouse table after the fact. If those character limits matter for downstream SQL workloads, the only viable approach is to rebuild the DDL by measuring actual data lengths in the table and defining controlled column sizes going forward. For future tables, the dependable method is to version-control the DDL at creation time or store it explicitly as metadata, because the platform does not preserve it automatically.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-warehouse/data-warehousing" target="_blank"&gt;What Is Data Warehousing in Microsoft Fabric? - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-overview" target="_blank"&gt;What is a lakehouse? - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-warehouse/data-types" target="_blank"&gt;Data Types in Fabric Data Warehouse - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-sql-analytics-endpoint" target="_blank"&gt;What is the SQL analytics endpoint for a lakehouse? - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-warehouse/" target="_blank"&gt;Fabric Data Warehouse - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;</description>
    <pubDate>Thu, 11 Dec 2025 12:40:39 GMT</pubDate>
    <dc:creator>v-hashadapu</dc:creator>
    <dc:date>2025-12-11T12:40:39Z</dc:date>
    <item>
      <title>Lakehouse Table Generate Create Table</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4898744#M13986</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I maintain several lakehouses, and due to issues with the deployment pipelines I tend to apply schema changes through script; to retain the data in the table I use the following process:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Create a new table with the desired structure&lt;/LI&gt;&lt;LI&gt;Insert all records from the original table into the new table&lt;/LI&gt;&lt;LI&gt;After checking counts, drop the original table and rename the new table.&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;For Step 1, to be able to base the new table on the existing table I need to identify the existing data types for all existing fields, and it appears there is no reliable way of doing this. The main issue relates to Char and Varchar fields, as there appears to be no current method for determining the appropriate character length.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have tried various methods on the lakehouse, but those always show the fields to be String, with no maximum size.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have also tried querying INFORMATION_SCHEMA.COLUMNS through the SQL endpoint, and the problem here is that the value for&amp;nbsp;CHARACTER_MAXIMUM_LENGTH appears to be 4 times the actual defined maximum number of characters, up to a maximum of 8000. I.e. a character length of 100 is shown as 400, 1000 is shown as 4000, and 2000 and higher are always shown as 8000.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Does anyone know of a reliable way of generating a create statement for an existing lakehouse table?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 10 Dec 2025 15:28:49 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4898744#M13986</guid>
      <dc:creator>JonBFabric</dc:creator>
      <dc:date>2025-12-10T15:28:49Z</dc:date>
    </item>
    <item>
      <title>Re: Lakehouse Table Generate Create Table</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4898775#M13988</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/925904"&gt;@JonBFabric&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;DIV&gt;&lt;H3&gt;&lt;span class="lia-unicode-emoji" title=":white_heavy_check_mark:"&gt;✅&lt;/span&gt; Why this happens&lt;/H3&gt;&lt;UL&gt;&lt;LI&gt;Delta tables in Fabric do not enforce fixed-length character types like CHAR(n) or VARCHAR(n); they store text as variable-length strings.&lt;/LI&gt;&lt;LI&gt;The SQL endpoint maps these to STRING for compatibility, so the original length constraint is not preserved.&lt;/LI&gt;&lt;/UL&gt;&lt;HR /&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;H3&gt;&lt;span class="lia-unicode-emoji" title=":white_heavy_check_mark:"&gt;✅&lt;/span&gt; Why CHARACTER_MAXIMUM_LENGTH shows 4x&lt;/H3&gt;&lt;UL&gt;&lt;LI&gt;The SQL endpoint reserves up to four bytes per character for UTF-8 data, so the reported length is multiplied by 4.&lt;/LI&gt;&lt;LI&gt;This is a known limitation and does not affect actual storage or query behaviour.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;DIV&gt;Official References:&lt;/DIV&gt;&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-overview" target="_blank" rel="noopener"&gt;What is a lakehouse? - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://docs.delta.io/delta-utility/" target="_blank" rel="noopener"&gt;Table utility commands | Delta Lake&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If this response was helpful in any way, I’d gladly accept a &lt;span class="lia-unicode-emoji" title=":thumbs_up:"&gt;👍&lt;/span&gt;, much like the joy of seeing a DAX measure work first time without needing another FILTER.&lt;/P&gt;&lt;P&gt;Please mark it as the correct solution. It helps other community members find their way faster (and saves them from another endless loop &lt;span class="lia-unicode-emoji" title=":cyclone:"&gt;🌀&lt;/span&gt;).&lt;/P&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Wed, 10 Dec 2025 16:12:52 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4898775#M13988</guid>
      <dc:creator>Zanqueta</dc:creator>
      <dc:date>2025-12-10T16:12:52Z</dc:date>
    </item>
    <item>
      <title>Re: Lakehouse Table Generate Create Table</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4898790#M13989</link>
      <description>&lt;P&gt;Thanks. Great to get the explanation as to what is happening and why. But going back to the original question...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Is it possible to identify the SQL statement used to originally create a table? I'm getting the impression that the answer is no. And given that the maximum record length that can be handled by the SQL endpoint is&amp;nbsp;&lt;SPAN&gt;8060 bytes, those character limits are crucial and need to be tightly controlled.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 10 Dec 2025 16:29:31 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4898790#M13989</guid>
      <dc:creator>JonBFabric</dc:creator>
      <dc:date>2025-12-10T16:29:31Z</dc:date>
    </item>
    <item>
      <title>Re: Lakehouse Table Generate Create Table</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4899735#M14003</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/925904"&gt;@JonBFabric&lt;/a&gt;&amp;nbsp;, Thank you for reaching out to the Microsoft Community Forum.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A Fabric lakehouse cannot return the original CREATE TABLE statement because that information is never stored. Delta Lake keeps only a structural schema in its transaction log, and all string columns are recorded simply as string, without any notion of the original CHAR(n) or VARCHAR(n) definitions. Because the lakehouse storage layer does not preserve fixed-length constraints, there is no system-level metadata you can query later to recover them.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The SQL analytics endpoint also cannot help, because it exposes a compatibility projection rather than the real underlying schema. Its inflated character lengths, including the 4× multiplier and the cap at 8000, are generated by the endpoint itself and do not represent the actual table definition or any original DDL. That surface is designed for querying, not schema reconstruction, and therefore does not retain the information you are looking for.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Given these constraints, there is no reliable way to extract the exact SQL used to create a lakehouse table after the fact. If those character limits matter for downstream SQL workloads, the only viable approach is to rebuild the DDL by measuring actual data lengths in the table and defining controlled column sizes going forward. For future tables, the dependable method is to version-control the DDL at creation time or store it explicitly as metadata, because the platform does not preserve it automatically.&lt;/P&gt;
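&lt;P&gt;As a rough illustration of the "measure and rebuild" approach, here is a minimal plain-Python sketch. The table name, column names and the 1.5× headroom factor are illustrative assumptions, not anything Fabric provides:&lt;/P&gt;

```python
# Hedged sketch: rebuild conservative VARCHAR DDL from observed maximum
# string lengths. All names and the headroom factor are illustrative.
def rebuild_varchar_ddl(table_name, observed_max_lengths, headroom=1.5):
    """observed_max_lengths maps column name to the longest value seen."""
    cols = []
    for name, max_len in observed_max_lengths.items():
        # Pad the declared width so slightly longer future values still fit.
        width = max(1, int(max_len * headroom))
        cols.append(f"    {name} VARCHAR({width})")
    body = ",\n".join(cols)
    return f"CREATE TABLE {table_name} (\n{body}\n)"

ddl = rebuild_varchar_ddl("customer", {"first_name": 20, "country_code": 2})
```

&lt;P&gt;In practice the observed maxima would come from a query such as MAX(LEN(column)) over the live table; the sketch only shows the rebuilding step.&lt;/P&gt;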
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-warehouse/data-warehousing" target="_blank"&gt;What Is Data Warehousing in Microsoft Fabric? - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-overview" target="_blank"&gt;What is a lakehouse? - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-warehouse/data-types" target="_blank"&gt;Data Types in Fabric Data Warehouse - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-sql-analytics-endpoint" target="_blank"&gt;What is the SQL analytics endpoint for a lakehouse? - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-warehouse/" target="_blank"&gt;Fabric Data Warehouse - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 11 Dec 2025 12:40:39 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4899735#M14003</guid>
      <dc:creator>v-hashadapu</dc:creator>
      <dc:date>2025-12-11T12:40:39Z</dc:date>
    </item>
    <item>
      <title>Re: Lakehouse Table Generate Create Table</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4899819#M14005</link>
      <description>&lt;P&gt;Whilst I understand everything that you are saying, I would like to describe another scenario which suggests that some of the above is not actually correct.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Using only Spark SQL in a notebook I have created a lakehouse table with a single varchar(10) field. If I try to insert any value with more than 10 characters I get the following error:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="lia-indent-padding-left-30px"&gt;&lt;SPAN&gt;[DELTA_EXCEED_CHAR_VARCHAR_LIMIT] Exceeds char/varchar type length limitation. Failed check: (isnull('String) OR (length('String) &amp;lt;= 10)).&lt;/SPAN&gt;&lt;/P&gt;&lt;P class="lia-indent-padding-left-30px"&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Obviously something in the Spark engine or the Delta table metadata is storing the size restriction.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 11 Dec 2025 14:24:51 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4899819#M14005</guid>
      <dc:creator>JonBFabric</dc:creator>
      <dc:date>2025-12-11T14:24:51Z</dc:date>
    </item>
    <item>
      <title>Re: Lakehouse Table Generate Create Table</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4900426#M14023</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/925904"&gt;@JonBFabric&lt;/a&gt;&amp;nbsp;, Thank you for reaching out to the Microsoft Community Forum.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Yes, Spark/Delta can record and enforce CHAR(n) / VARCHAR(n) when a table is created through Spark or other Delta-aware APIs; the engine stores that constraint in the Delta metadata and will reject writes that exceed the declared width (hence the DELTA_EXCEED_CHAR_VARCHAR_LIMIT error). The authoritative place to get that declaration is the Spark/Delta surface (for example, run SHOW CREATE TABLE or DESCRIBE TABLE EXTENDED in a Spark notebook or read the Delta transaction log/Delta Table API). Those commands return the DDL/metadata that Spark/Delta actually enforces.&lt;/P&gt;
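&lt;P&gt;To make the log-reading idea concrete: a commit file in _delta_log is newline-delimited JSON, and the metaData action carries the schema that Delta enforces. A minimal plain-Python sketch, where the sample commit content is made up for illustration:&lt;/P&gt;

```python
# Hedged sketch: scan a Delta commit file (newline-delimited JSON) for the
# metaData action and return its schemaString. The sample commit is made up.
import json

sample_commit = "\n".join([
    json.dumps({"commitInfo": {"operation": "CREATE TABLE"}}),
    json.dumps({"metaData": {"schemaString": json.dumps({
        "type": "struct",
        "fields": [{"name": "code", "type": "string", "nullable": True,
                    "metadata": {"__CHAR_VARCHAR_TYPE_STRING": "varchar(10)"}}],
    })}}),
])

def find_schema_string(commit_text):
    # Each line of the commit is an independent JSON document (action).
    for line in commit_text.split("\n"):
        doc = json.loads(line)
        if "metaData" in doc:
            return doc["metaData"]["schemaString"]
    return None

schema = json.loads(find_schema_string(sample_commit))
```

&lt;P&gt;In a Fabric notebook the commit text would be read from the table's _delta_log folder rather than constructed in memory.&lt;/P&gt;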
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Do not rely on the Fabric SQL analytics endpoint or INFORMATION_SCHEMA alone to recover declared widths. Those surfaces present a T-SQL compatibility projection that can inflate, cap or otherwise transform reported lengths (the 4×/8000 behaviour you saw), and they are therefore not a trustworthy source of the original Spark-declared sizes. If you cannot run Spark against the table, your fallback is to inspect the _delta_log or compute observed maximum character/byte lengths and reconstruct conservative VARCHAR widths. For long-term safety, version-control the DDL or persist it as table metadata at creation time.&lt;/P&gt;</description>
      <pubDate>Fri, 12 Dec 2025 07:41:37 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4900426#M14023</guid>
      <dc:creator>v-hashadapu</dc:creator>
      <dc:date>2025-12-12T07:41:37Z</dc:date>
    </item>
    <item>
      <title>Re: Lakehouse Table Generate Create Table</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4900454#M14026</link>
      <description>&lt;P&gt;Good Morning,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm not looking for a way to access the metadata through the SQL endpoint, by using&amp;nbsp;&lt;SPAN&gt;INFORMATION_SCHEMA&amp;nbsp;or any other objects/functions, I just used it as an example of the only place that displayed anything other than string. I have already tried both&amp;nbsp;SHOW CREATE TABLE and DESCRIBE TABLE EXTENDED, the first is not supported by Fabric ([DELTA_OPERATION_NOT_ALLOWED] Operation not allowed: `SHOW CREATE TABLE` is not supported for Delta tables) and the 2nd only shows string.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Please could you provide me with an example of how to access the delta metadata responsible for enforcing the DELTA_EXCEED_CHAR_VARCHAR_LIMIT error. It doesn't need to be pretty.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thanks again&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 12 Dec 2025 08:22:50 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4900454#M14026</guid>
      <dc:creator>JonBFabric</dc:creator>
      <dc:date>2025-12-12T08:22:50Z</dc:date>
    </item>
    <item>
      <title>Re: Lakehouse Table Generate Create Table</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4900594#M14036</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/925904"&gt;@JonBFabric&lt;/a&gt;&amp;nbsp;, Thank you for reaching out to the Microsoft Community Forum.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If Fabric is blocking SHOW CREATE TABLE and DESCRIBE TABLE EXTENDED only shows string, the next step is to read the Delta metadata directly through Spark, because the enforcement you are seeing (DELTA_EXCEED_CHAR_VARCHAR_LIMIT) comes from the schema stored in the Delta transaction log, not from the SQL endpoint. The length constraint is kept in the Delta log under metaData.schemaString and Spark/Delta will surface it correctly when you query the table through the Delta APIs. The simplest approach is to use a Spark notebook and load the table with DeltaTable.forPath(...).toDF(), which will show VarcharType(n) in the schema if the table was created with varchar(n). If that surface is not available, you can read the latest commit JSON in the _delta_log folder and print the metaData.schemaString field; that text contains the exact schema Spark is enforcing, including declared lengths.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Spark example you can run in a Fabric notebook to retrieve the metadata responsible for the enforcement:&lt;/P&gt;
&lt;P&gt;from delta.tables import DeltaTable&lt;BR /&gt;import json&lt;BR /&gt;&lt;BR /&gt;table_path = "/lakehouses/&amp;lt;your-lakehouse&amp;gt;/Tables/&amp;lt;your-table&amp;gt;" # update this&lt;BR /&gt;dt = DeltaTable.forPath(spark, table_path)&lt;BR /&gt;print(dt.toDF().schema) # shows VarcharType(n) if declared&lt;BR /&gt;&lt;BR /&gt;log_dir = f"{table_path}/_delta_log"&lt;BR /&gt;files = [f.path for f in dbutils.fs.ls(log_dir) if f.name.endswith(".json")]&lt;BR /&gt;latest = sorted(files)[-1]&lt;BR /&gt;content = dbutils.fs.head(latest, 500000)&lt;BR /&gt;commit = json.loads(content)&lt;BR /&gt;print(commit.get("metaData", {}).get("schemaString"))&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This will show you the exact schema stored in Delta and the constraint that triggers the length violation error. If the table is large and uses checkpoint parquet files, the same field appears in the checkpoint’s metaData struct. In short, the SQL endpoint cannot return the declared widths, but Spark and the Delta log always can.&lt;/P&gt;
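&lt;P&gt;For reference, once you have the schemaString, recovering the declared widths is a small parsing step. A minimal plain-Python sketch, with a sample schema invented for illustration (the __CHAR_VARCHAR_TYPE_STRING metadata key is where Spark keeps the original char/varchar declaration):&lt;/P&gt;

```python
# Hedged sketch: map each field in a Delta schemaString to its declared SQL
# type, falling back to the physical type when no char/varchar declaration
# is present. The sample schema is invented for illustration.
import json

schema_string = json.dumps({
    "type": "struct",
    "fields": [
        {"name": "id", "type": "integer", "nullable": False, "metadata": {}},
        {"name": "code", "type": "string", "nullable": True,
         "metadata": {"__CHAR_VARCHAR_TYPE_STRING": "varchar(10)"}},
    ],
})

def declared_types(schema_string):
    # Prefer the declared char/varchar width; otherwise keep the stored type.
    out = {}
    for field in json.loads(schema_string)["fields"]:
        meta = field.get("metadata", {})
        out[field["name"]] = meta.get("__CHAR_VARCHAR_TYPE_STRING", field["type"])
    return out

types = declared_types(schema_string)
```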
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-notebook-explore" target="_blank"&gt;Explore the lakehouse data with a notebook - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-warehouse/data-types" target="_blank"&gt;Data Types in Fabric Data Warehouse - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-warehouse/query-delta-lake-logs" target="_blank"&gt;Delta Lake Logs in Warehouse - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-overview" target="_blank"&gt;What is a lakehouse? - Microsoft Fabric | Microsoft Learn&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 12 Dec 2025 11:54:28 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4900594#M14036</guid>
      <dc:creator>v-hashadapu</dc:creator>
      <dc:date>2025-12-12T11:54:28Z</dc:date>
    </item>
    <item>
      <title>Re: Lakehouse Table Generate Create Table</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4900766#M14052</link>
      <description>&lt;P&gt;Thanks.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This didn't quite work out of the box, possibly because it was originally written for Databricks rather than Fabric, but I have got it working. I will explain the differences as I go:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;STRONG&gt;from delta.tables import DeltaTable&lt;/STRONG&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;STRONG&gt;import json&lt;/STRONG&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;STRONG&gt;table_path = "&amp;lt;Path_To_Table&amp;gt;" # update this&lt;/STRONG&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;STRONG&gt;dt = DeltaTable.forPath(spark, table_path) &lt;/STRONG&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;STRONG&gt;print(dt.toDF().schema) # shows VarcharType(n) if declared&lt;/STRONG&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;.schema&amp;nbsp;shows only string as the type, so the last 2 lines can be removed.&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;STRONG&gt;log_dir = f"{table_path}/_delta_log"&lt;/STRONG&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;STRONG&gt;files = [f.path for f in notebookutils.fs.ls(log_dir) if f.name.endswith(".json")] &lt;/STRONG&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;STRONG&gt;latest = sorted(files)[-1]&lt;/STRONG&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;STRONG&gt;content = notebookutils.fs.head(latest, 5000000)&lt;/STRONG&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;Note that&amp;nbsp;dbutils is now replaced by&amp;nbsp;notebookutils.&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&lt;SPAN&gt;content cannot, however, be read as JSON, as it is in fact 3 JSON documents separated by&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;'&lt;/SPAN&gt;&lt;SPAN&gt;\n&lt;/SPAN&gt;&lt;SPAN&gt;', and not just 1, and the document which contains the field metadata is the 
2nd.&amp;nbsp;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;STRONG&gt;schemaString = json.loads(content.split('\n')[1]).get("metaData", {}).get("schemaString")&lt;/STRONG&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;schemaString is actually an embedded JSON document held as a string, and so has to be converted also. The following snippet then prints the field name and the actual sql type.&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;STRONG&gt;for field in json.loads(schemaString).get("fields"):&lt;/STRONG&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;STRONG&gt;&amp;nbsp; &amp;nbsp; print("FieldName:", field.get("name"), ", Type:", field.get("metadata").get("__CHAR_VARCHAR_TYPE_STRING"))&lt;/STRONG&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;Certainly not a finished article, but it gives me what I need to build around.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;Thanks for your help and patience.&amp;nbsp;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Fri, 12 Dec 2025 16:26:24 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4900766#M14052</guid>
      <dc:creator>JonBFabric</dc:creator>
      <dc:date>2025-12-12T16:26:24Z</dc:date>
    </item>
    <item>
      <title>Re: Lakehouse Table Generate Create Table</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4901702#M14062</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/925904"&gt;@JonBFabric&lt;/a&gt;&amp;nbsp;, Thanks for the update and the insights on how to solve this issue. We really appreciate it.&lt;BR /&gt;&lt;BR /&gt;If you have any queries, please feel free to create a new post, we are always happy to help.&lt;/P&gt;</description>
      <pubDate>Mon, 15 Dec 2025 04:57:14 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4901702#M14062</guid>
      <dc:creator>v-hashadapu</dc:creator>
      <dc:date>2025-12-15T04:57:14Z</dc:date>
    </item>
    <item>
      <title>Re: Lakehouse Table Generate Create Table</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4902128#M14070</link>
      <description>&lt;P&gt;One final update on this.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The logfile containing the schema is not necessarily the most recent, and there is not necessarily only one version of the schema. There is a schema associated with every modification made to the table structure, be that the original creation or subsequent alterations. Consequently, the logfile we need is the most recent with a schema.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;from&lt;/SPAN&gt; &lt;SPAN&gt;delta&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;SPAN&gt;tables&lt;/SPAN&gt; &lt;SPAN&gt;import&lt;/SPAN&gt; &lt;SPAN&gt;DeltaTable&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;import&lt;/SPAN&gt; &lt;SPAN&gt;json&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;# Note that the table name must be lowercase&lt;/SPAN&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;table_path&lt;/SPAN&gt;&lt;SPAN&gt; = &lt;/SPAN&gt;'&amp;lt;Path_To_Table&amp;gt;&lt;STRONG&gt;'&lt;/STRONG&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;# Identify log files&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;log_dir&lt;/SPAN&gt;&lt;SPAN&gt; = &lt;/SPAN&gt;&lt;SPAN&gt;f&lt;/SPAN&gt;&lt;SPAN&gt;"&lt;/SPAN&gt;&lt;SPAN&gt;{&lt;/SPAN&gt;&lt;SPAN&gt;table_path&lt;/SPAN&gt;&lt;SPAN&gt;}&lt;/SPAN&gt;&lt;SPAN&gt;/_delta_log"&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;files&lt;/SPAN&gt;&lt;SPAN&gt; = [&lt;/SPAN&gt;&lt;SPAN&gt;f&lt;/SPAN&gt;&lt;SPAN&gt;.path &lt;/SPAN&gt;&lt;SPAN&gt;for&lt;/SPAN&gt; &lt;SPAN&gt;f&lt;/SPAN&gt; &lt;SPAN&gt;in&lt;/SPAN&gt; &lt;SPAN&gt;notebookutils&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;SPAN&gt;fs&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;SPAN&gt;ls&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;log_dir&lt;/SPAN&gt;&lt;SPAN&gt;) &lt;/SPAN&gt;&lt;SPAN&gt;if&lt;/SPAN&gt; 
&lt;SPAN&gt;f&lt;/SPAN&gt;&lt;SPAN&gt;.name.endswith(&lt;/SPAN&gt;&lt;SPAN&gt;".json"&lt;/SPAN&gt;&lt;SPAN&gt;)]&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;# Identify log files with a schema&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;filesWithSchema&lt;/SPAN&gt;&lt;SPAN&gt; = []&lt;/SPAN&gt;&lt;/DIV&gt;&lt;BR /&gt;&lt;DIV&gt;&lt;SPAN&gt;for&lt;/SPAN&gt; &lt;SPAN&gt;file&lt;/SPAN&gt; &lt;SPAN&gt;in&lt;/SPAN&gt; &lt;SPAN&gt;sorted&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;files&lt;/SPAN&gt;&lt;SPAN&gt;, &lt;/SPAN&gt;&lt;SPAN&gt;reverse&lt;/SPAN&gt;&lt;SPAN&gt;=&lt;/SPAN&gt;&lt;SPAN&gt;True&lt;/SPAN&gt;&lt;SPAN&gt;) :&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;content&lt;/SPAN&gt;&lt;SPAN&gt; = &lt;/SPAN&gt;&lt;SPAN&gt;notebookutils&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;SPAN&gt;fs&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;SPAN&gt;head&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;file&lt;/SPAN&gt;&lt;SPAN&gt;, &lt;/SPAN&gt;&lt;SPAN&gt;5000000&lt;/SPAN&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;JSONdocs&lt;/SPAN&gt;&lt;SPAN&gt; = &lt;/SPAN&gt;&lt;SPAN&gt;content&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;SPAN&gt;split&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;'&lt;/SPAN&gt;&lt;SPAN&gt;\n&lt;/SPAN&gt;&lt;SPAN&gt;'&lt;/SPAN&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;BR /&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;for&lt;/SPAN&gt; &lt;SPAN&gt;doc&lt;/SPAN&gt; &lt;SPAN&gt;in&lt;/SPAN&gt; &lt;SPAN&gt;JSONdocs&lt;/SPAN&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;if&lt;/SPAN&gt; &lt;SPAN&gt;'schemaString'&lt;/SPAN&gt; &lt;SPAN&gt;in&lt;/SPAN&gt; &lt;SPAN&gt;doc&lt;/SPAN&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; 
&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;filesWithSchema&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;SPAN&gt;append&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;file&lt;/SPAN&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;# Load the header for the latest log file containing a schema&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;latest&lt;/SPAN&gt;&lt;SPAN&gt; = &lt;/SPAN&gt;&lt;SPAN&gt;sorted&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;filesWithSchema&lt;/SPAN&gt;&lt;SPAN&gt;, &lt;/SPAN&gt;&lt;SPAN&gt;reverse&lt;/SPAN&gt;&lt;SPAN&gt;=&lt;/SPAN&gt;&lt;SPAN&gt;True&lt;/SPAN&gt;&lt;SPAN&gt;)[&lt;/SPAN&gt;&lt;SPAN&gt;0&lt;/SPAN&gt;&lt;SPAN&gt;]&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;content&lt;/SPAN&gt;&lt;SPAN&gt; = &lt;/SPAN&gt;&lt;SPAN&gt;notebookutils&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;SPAN&gt;fs&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;SPAN&gt;head&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;latest&lt;/SPAN&gt;&lt;SPAN&gt;, &lt;/SPAN&gt;&lt;SPAN&gt;5000000&lt;/SPAN&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;# Extract the schema&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;JSONdocs&lt;/SPAN&gt;&lt;SPAN&gt; = &lt;/SPAN&gt;&lt;SPAN&gt;content&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;SPAN&gt;split&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;'&lt;/SPAN&gt;&lt;SPAN&gt;\n&lt;/SPAN&gt;&lt;SPAN&gt;'&lt;/SPAN&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;BR /&gt;&lt;DIV&gt;&lt;SPAN&gt;for&lt;/SPAN&gt; &lt;SPAN&gt;doc&lt;/SPAN&gt; &lt;SPAN&gt;in&lt;/SPAN&gt; &lt;SPAN&gt;JSONdocs&lt;/SPAN&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;if&lt;/SPAN&gt; &lt;SPAN&gt;'schemaString'&lt;/SPAN&gt; &lt;SPAN&gt;in&lt;/SPAN&gt; 
&lt;SPAN&gt;doc&lt;/SPAN&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;schemaString&lt;/SPAN&gt;&lt;SPAN&gt; = &lt;/SPAN&gt;&lt;SPAN&gt;json&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;SPAN&gt;loads&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;doc&lt;/SPAN&gt;&lt;SPAN&gt;).get(&lt;/SPAN&gt;&lt;SPAN&gt;"metaData"&lt;/SPAN&gt;&lt;SPAN&gt;, {}).get(&lt;/SPAN&gt;&lt;SPAN&gt;"schemaString"&lt;/SPAN&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;BR /&gt;&lt;DIV&gt;&lt;SPAN&gt;# Extract Field Metadata&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;FieldList&lt;/SPAN&gt;&lt;SPAN&gt; = []&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;OrdinalPosition&lt;/SPAN&gt;&lt;SPAN&gt; = &lt;/SPAN&gt;&lt;SPAN&gt;0&lt;/SPAN&gt;&lt;/DIV&gt;&lt;BR /&gt;&lt;DIV&gt;&lt;SPAN&gt;for&lt;/SPAN&gt; &lt;SPAN&gt;field&lt;/SPAN&gt; &lt;SPAN&gt;in&lt;/SPAN&gt; &lt;SPAN&gt;json&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;SPAN&gt;loads&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;schemaString&lt;/SPAN&gt;&lt;SPAN&gt;).get(&lt;/SPAN&gt;&lt;SPAN&gt;"fields"&lt;/SPAN&gt;&lt;SPAN&gt;) :&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;OrdinalPosition&lt;/SPAN&gt;&lt;SPAN&gt; += &lt;/SPAN&gt;&lt;SPAN&gt;1&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;FieldDetails&lt;/SPAN&gt;&lt;SPAN&gt; = {}&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;FieldDetails&lt;/SPAN&gt;&lt;SPAN&gt;[&lt;/SPAN&gt;&lt;SPAN&gt;'FieldName'&lt;/SPAN&gt;&lt;SPAN&gt;] = &lt;/SPAN&gt;&lt;SPAN&gt;field&lt;/SPAN&gt;&lt;SPAN&gt;.get(&lt;/SPAN&gt;&lt;SPAN&gt;"name"&lt;/SPAN&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;FieldDetails&lt;/SPAN&gt;&lt;SPAN&gt;[&lt;/SPAN&gt;&lt;SPAN&gt;'Nullable'&lt;/SPAN&gt;&lt;SPAN&gt;] = 
&lt;/SPAN&gt;&lt;SPAN&gt;field&lt;/SPAN&gt;&lt;SPAN&gt;.get(&lt;/SPAN&gt;&lt;SPAN&gt;"nullable"&lt;/SPAN&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;FieldDetails&lt;/SPAN&gt;&lt;SPAN&gt;[&lt;/SPAN&gt;&lt;SPAN&gt;'OrdinalPosition'&lt;/SPAN&gt;&lt;SPAN&gt;] = &lt;/SPAN&gt;&lt;SPAN&gt;OrdinalPosition&lt;/SPAN&gt;&lt;/DIV&gt;&lt;BR /&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;# str() guards against complex types (struct/array/map), whose "type" arrives as a dict&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;match&lt;/SPAN&gt; &lt;SPAN&gt;str&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;field&lt;/SPAN&gt;&lt;SPAN&gt;.get(&lt;/SPAN&gt;&lt;SPAN&gt;"type"&lt;/SPAN&gt;&lt;SPAN&gt;)).split(&lt;/SPAN&gt;&lt;SPAN&gt;'('&lt;/SPAN&gt;&lt;SPAN&gt;)[&lt;/SPAN&gt;&lt;SPAN&gt;0&lt;/SPAN&gt;&lt;SPAN&gt;]:&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;case&lt;/SPAN&gt; &lt;SPAN&gt;'string'&lt;/SPAN&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;# plain string columns carry no CHAR/VARCHAR metadata; varchar(8000) is an assumed fallback&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;FieldDetails&lt;/SPAN&gt;&lt;SPAN&gt;[&lt;/SPAN&gt;&lt;SPAN&gt;'SQLType'&lt;/SPAN&gt;&lt;SPAN&gt;] = &lt;/SPAN&gt;&lt;SPAN&gt;field&lt;/SPAN&gt;&lt;SPAN&gt;.get(&lt;/SPAN&gt;&lt;SPAN&gt;"metadata"&lt;/SPAN&gt;&lt;SPAN&gt;, {}).get(&lt;/SPAN&gt;&lt;SPAN&gt;"__CHAR_VARCHAR_TYPE_STRING"&lt;/SPAN&gt;&lt;SPAN&gt;) &lt;/SPAN&gt;&lt;SPAN&gt;or&lt;/SPAN&gt;&lt;SPAN&gt; &lt;/SPAN&gt;&lt;SPAN&gt;'varchar(8000)'&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;case&lt;/SPAN&gt; &lt;SPAN&gt;'timestamp'&lt;/SPAN&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;FieldDetails&lt;/SPAN&gt;&lt;SPAN&gt;[&lt;/SPAN&gt;&lt;SPAN&gt;'SQLType'&lt;/SPAN&gt;&lt;SPAN&gt;] = &lt;/SPAN&gt;&lt;SPAN&gt;'timestamp'&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;case&lt;/SPAN&gt; 
&lt;SPAN&gt;'date'&lt;/SPAN&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;FieldDetails&lt;/SPAN&gt;&lt;SPAN&gt;[&lt;/SPAN&gt;&lt;SPAN&gt;'SQLType'&lt;/SPAN&gt;&lt;SPAN&gt;] = &lt;/SPAN&gt;&lt;SPAN&gt;'date'&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;case&lt;/SPAN&gt; &lt;SPAN&gt;'integer'&lt;/SPAN&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;FieldDetails&lt;/SPAN&gt;&lt;SPAN&gt;[&lt;/SPAN&gt;&lt;SPAN&gt;'SQLType'&lt;/SPAN&gt;&lt;SPAN&gt;] = &lt;/SPAN&gt;&lt;SPAN&gt;'int'&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;case&lt;/SPAN&gt; &lt;SPAN&gt;'short'&lt;/SPAN&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;FieldDetails&lt;/SPAN&gt;&lt;SPAN&gt;[&lt;/SPAN&gt;&lt;SPAN&gt;'SQLType'&lt;/SPAN&gt;&lt;SPAN&gt;] = &lt;/SPAN&gt;&lt;SPAN&gt;'smallint'&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;case&lt;/SPAN&gt; &lt;SPAN&gt;'long'&lt;/SPAN&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;FieldDetails&lt;/SPAN&gt;&lt;SPAN&gt;[&lt;/SPAN&gt;&lt;SPAN&gt;'SQLType'&lt;/SPAN&gt;&lt;SPAN&gt;] = &lt;/SPAN&gt;&lt;SPAN&gt;'bigint'&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;case&lt;/SPAN&gt; &lt;SPAN&gt;'decimal'&lt;/SPAN&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 
&lt;/SPAN&gt;&lt;SPAN&gt;FieldDetails&lt;/SPAN&gt;&lt;SPAN&gt;[&lt;/SPAN&gt;&lt;SPAN&gt;'SQLType'&lt;/SPAN&gt;&lt;SPAN&gt;] = &lt;/SPAN&gt;&lt;SPAN&gt;field&lt;/SPAN&gt;&lt;SPAN&gt;.get(&lt;/SPAN&gt;&lt;SPAN&gt;"type"&lt;/SPAN&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;case&lt;/SPAN&gt; &lt;SPAN&gt;'boolean'&lt;/SPAN&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;FieldDetails&lt;/SPAN&gt;&lt;SPAN&gt;[&lt;/SPAN&gt;&lt;SPAN&gt;'SQLType'&lt;/SPAN&gt;&lt;SPAN&gt;] = &lt;/SPAN&gt;&lt;SPAN&gt;'boolean'&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;case&lt;/SPAN&gt; &lt;SPAN&gt;_&lt;/SPAN&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;# pass through any unmapped type (double, float, binary, etc.) so no column is silently skipped&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;FieldDetails&lt;/SPAN&gt;&lt;SPAN&gt;[&lt;/SPAN&gt;&lt;SPAN&gt;'SQLType'&lt;/SPAN&gt;&lt;SPAN&gt;] = &lt;/SPAN&gt;&lt;SPAN&gt;field&lt;/SPAN&gt;&lt;SPAN&gt;.get(&lt;/SPAN&gt;&lt;SPAN&gt;"type"&lt;/SPAN&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;BR /&gt;&lt;DIV&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp; &lt;/SPAN&gt;&lt;SPAN&gt;FieldList&lt;/SPAN&gt;&lt;SPAN&gt;.&lt;/SPAN&gt;&lt;SPAN&gt;append&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;FieldDetails&lt;/SPAN&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;DIV&gt;&lt;SPAN&gt;display&lt;/SPAN&gt;&lt;SPAN&gt;(&lt;/SPAN&gt;&lt;SPAN&gt;FieldList&lt;/SPAN&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Mon, 15 Dec 2025 11:06:07 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Lakehouse-Table-Generate-Create-Table/m-p/4902128#M14070</guid>
      <dc:creator>JonBFabric</dc:creator>
      <dc:date>2025-12-15T11:06:07Z</dc:date>
    </item>
  </channel>
</rss>