ccornelia
Frequent Visitor

Automating Data Wrangler output

Hi, 

Cornelia here. I am really excited to participate in this Hackathon!😁

 

I am trying to use Data Wrangler to generate a summary of a dataset.

I can save the CSV or the generated Python code by clicking specific buttons, but I need to automate this code or CSV generation inside a data pipeline. Is there such a possibility, or a workaround to use Data Wrangler in this way?

 

Thanks!

1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi Cornelia, 

If I understand you correctly, you'd like to describe a dataset automatically/programmatically, the way Data Wrangler does in Python notebooks. I don't know of an API for using Data Wrangler programmatically.

What I can suggest is either using the DataFrame description capabilities in Spark:

https://spark.apache.org/docs/3.1.1/api/python/reference/api/pyspark.sql.DataFrame.describe.html
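Since Data Wrangler in Fabric notebooks also works on pandas DataFrames, the same kind of summary can be produced programmatically with pandas' own `describe`. A minimal sketch (the sample data and column names are illustrative):

```python
import pandas as pd

# Stand-in for the dataset loaded in the notebook
df = pd.DataFrame({
    "age": [25, 32, 47, None],
    "city": ["Oslo", "Cluj", "Oslo", "Graz"],
})

# include="all" profiles numeric and non-numeric columns alike,
# similar to the per-column overview Data Wrangler shows
summary = df.describe(include="all")
print(summary)
```

The Spark `DataFrame.describe()` linked above behaves the same way for Spark DataFrames, which makes either call easy to drop into a scheduled notebook.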

or generating the description directly in the dataset (assuming we're talking about Power BI datasets, i.e. semantic models) using M code:

https://learn.microsoft.com/en-us/powerquery-m/table-schema

https://learn.microsoft.com/en-us/powerquery-m/table-profile 

 

Hope this helps!

 


3 REPLIES
ccornelia
Frequent Visitor

Thank you for your help! 😁

 

Even though Data Wrangler seems pretty good at generating Python code for describing data, I ended up creating some custom PySpark functions, which are easier to modify and automate.
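A custom summary function along those lines might look like the sketch below (written in plain pandas for portability; the PySpark equivalent would use aggregate expressions, and the exact metrics chosen here are an assumption):

```python
import pandas as pd

def summarize(df: pd.DataFrame) -> pd.DataFrame:
    """Per-column profile: data type, null count, and distinct count."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "nulls": df.isna().sum(),
        "distinct": df.nunique(),  # nunique() excludes nulls by default
    })

df = pd.DataFrame({"a": [1, 1, None], "b": ["x", "y", "y"]})
profile = summarize(df)
print(profile)
```

Because the function is ordinary code rather than a UI export, it can be versioned, unit-tested, and called from any scheduled notebook.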


imejiauseche
Microsoft Employee

Once you have the Python code generated by Data Wrangler, you can put it in a notebook cell and, from the notebook, schedule a data pipeline run of that notebook.
Run add to pipeline in Microsoft Fabric notebooks
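Put together, a notebook cell that runs the generated summary code and writes the CSV on each pipeline run might look like this sketch (the data, the `describe` call standing in for Data Wrangler's generated code, and the output path are all assumptions; in Fabric you would typically point the path at the Lakehouse Files area instead of a temp directory):

```python
import tempfile
from pathlib import Path

import pandas as pd

# Stand-in for the DataFrame the scheduled notebook would load;
# in Fabric this would come from a Lakehouse table or file
df = pd.DataFrame({"age": [25, 32, 47], "city": ["Oslo", "Cluj", "Graz"]})

# Hypothetical output location; in Fabric, swap this for a
# Lakehouse Files path so the pipeline output lands in the Lakehouse
out_dir = Path(tempfile.mkdtemp())
out_path = out_dir / "summary.csv"

# describe() stands in for whatever code Data Wrangler generated
df.describe(include="all").to_csv(out_path)
print(f"Summary written to {out_path}")
```

Scheduling the notebook from a pipeline then regenerates the CSV on every run without any manual button clicks.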

 
