Hi All,
One of my clients is using an Amazon Redshift database as their data warehouse solution and is planning to move to Fabric. In the interim, they want to explore options for retaining the existing solution until the migration completes. Since a destination connector for Amazon Redshift is not available, what are the different ways to write into Amazon Redshift tables from the Fabric framework? Please share any documentation on the suggestions if available.
Hi @Pallavi87 ,
There are several methods you can explore:
Using the Amazon Redshift Data API: you can run SQL statements against your cluster through the Data API operations (see the first sketch below).
Using the Python connector: Amazon Redshift also provides a Python connector (redshift_connector), which you can use to connect and write data programmatically (see the second sketch below).
See the documentation below for more details:
Using the Amazon Redshift Data API - Amazon Redshift
Examples of using the Amazon Redshift Python connector - Amazon Redshift
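As a rough illustration of the first option, here is a minimal sketch of calling the Data API from a Fabric notebook using boto3. The region, credentials, cluster identifier, database, and table/column names are placeholders for your own environment, not values taken from your setup.

```python
# Minimal sketch: writing to Redshift via the Data API from a Fabric notebook.
# Assumes boto3 is available (e.g. %pip install boto3) and that all identifiers
# below are placeholders you replace with your own.
import boto3

client = boto3.client(
    "redshift-data",
    region_name="us-east-1",                     # assumption: your cluster's region
    aws_access_key_id="<ACCESS_KEY_ID>",
    aws_secret_access_key="<SECRET_ACCESS_KEY>",
)

# ExecuteStatement is asynchronous, so poll DescribeStatement for completion.
resp = client.execute_statement(
    ClusterIdentifier="my-redshift-cluster",     # hypothetical cluster identifier
    Database="dev",
    DbUser="awsuser",
    Sql="INSERT INTO staging.sales (id, amount) VALUES (:id, :amount)",
    Parameters=[
        {"name": "id", "value": "1"},
        {"name": "amount", "value": "99.95"},
    ],
)

status = client.describe_statement(Id=resp["Id"])["Status"]
print(status)  # "FINISHED" once the statement has completed
```

And for the second option, a similar sketch with the redshift_connector package; again, the endpoint, credentials, and table are placeholders. Note that the Fabric Spark environment needs outbound network access to your Redshift endpoint for this to work.

```python
# Minimal sketch: writing to Redshift with the Python connector from a Fabric
# notebook. Install it first (e.g. %pip install redshift_connector).
import redshift_connector

conn = redshift_connector.connect(
    host="my-cluster.abc123xyz.us-east-1.redshift.amazonaws.com",  # hypothetical endpoint
    database="dev",
    port=5439,
    user="awsuser",
    password="<PASSWORD>",
)
conn.autocommit = False

cursor = conn.cursor()
# Parameterised insert; executemany batches several rows in one call.
cursor.executemany(
    "INSERT INTO staging.sales (id, amount) VALUES (%s, %s)",
    [(1, 99.95), (2, 12.50)],
)
conn.commit()
cursor.close()
conn.close()
```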
Best Regards,
Yang
Community Support Team
If any post helps, please consider accepting it as the solution to help other members find it more quickly.
If I have misunderstood your needs or you still have problems with it, please feel free to let us know. Thanks a lot!
Thank you for the pointers and documentation, Yang. Are there any details from a Fabric perspective on how to use these methods? Also, are there any prerequisites needed to connect to Amazon Redshift from Fabric in order to write data into it?