Hi,
I believe it is not possible to save the contents of a DataFrame directly to a Warehouse, since I can only run notebooks against a Lakehouse.
Is this true? Or is there a way to do it that I'm not seeing?
Thanks in advance!
@rui_mtcarvalho you can write the data to the lakehouse and then use it in the warehouse with a three-part name.
Here is the session where Pawel showed this in our community session; skip to 1:02:
https://www.youtube.com/watch?v=u7peyrNhYgw
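For illustration, a minimal sketch of that approach (the lakehouse and table names below are placeholders): save the DataFrame as a Delta table in the attached lakehouse from the notebook, then reference it from the warehouse with a three-part name.

# In the notebook: write the DataFrame as a managed Delta table in the attached lakehouse
df.write.mode("overwrite").format("delta").saveAsTable("my_table")

# In the warehouse, cross-query the lakehouse table with a three-part name, e.g.:
#   SELECT * FROM MyLakehouse.dbo.my_table;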
We can use the Spark JDBC approach, but the performance is pretty bad; the easiest way is to use an ADF Copy activity.
from pyspark.sql.types import StringType

# Acquire an access token (mssparkutils is available by default in Fabric notebooks)
access_token = mssparkutils.credentials.getToken("pbi")

# Define the JDBC URL for the warehouse SQL endpoint
jdbc_url = "jdbc:sqlserver://<dwhost>:1433;database=<dwName>;encrypt=true;trustServerCertificate=true;hostNameInCertificate=*.database.windows.net;loginTimeout=30"

# Sample DataFrame to write
df = spark.sql("select 1 as id, 'james' as name, 1.3 as cost, 'jack' as name2")

def map_to_sql_type(data_type):
    if isinstance(data_type, StringType):
        return "VARCHAR(255)"
    return ""  # non-string columns fall back to Spark's default type mapping

# Generate the createTableColumnTypes string for the string columns only
sql_schema = ", ".join(
    f"{field.name} {map_to_sql_type(field.dataType)}"
    for field in df.schema
    if isinstance(field.dataType, StringType)
)

connection_properties = {
    "accessToken": access_token,
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
}

# Write the DataFrame to the warehouse table over JDBC
df.write.option("createTableColumnTypes", sql_schema).mode("overwrite").jdbc(
    url=jdbc_url,
    table="test6",
    properties=connection_properties
)
Please change the placeholder values (<dwhost>, <dwName>, and the table name) in the above code accordingly.
<dwhost> can be found in the warehouse's SQL connection string, which you can copy from the warehouse settings in the Fabric portal.
Do you need it in the Warehouse, given that you can consume it in the Warehouse from the Lakehouse with no perceivable performance impact?
If you do, why not use a data pipeline to run your notebook to load the Lakehouse, and then use an on-success Copy data activity to move the data from the Lakehouse to the Warehouse?
Hi,
If the lakehouse has a shortcut to the data warehouse, I believe you may be able to save the DataFrame to the warehouse through the shortcut.
Kind Regards,
Dennes