Timal
Helper I

Pyodbc Connect to Warehouse Endpoint timing out

Hello there,

 

So I've been using pyodbc as a workaround to write to Warehouse tables from Notebooks for a while now.

Here and there I've had issues where it would result in timeout messages similar to the one below, but they went away, usually after the next run or within a few hours.
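For the transient cases, a small retry wrapper with exponential backoff around the connect call is one way to ride them out. This is a generic sketch, with `connect` standing in for a callable such as `lambda: pyodbc.connect(conn_str)`; names and defaults are illustrative, not from the original post:

```python
import time

def connect_with_retry(connect, attempts=3, base_delay_s=2.0):
    """Call connect() up to `attempts` times, doubling the wait between tries;
    re-raise the last exception if every attempt fails."""
    last_exc = None
    for attempt in range(attempts):
        try:
            return connect()
        except Exception as exc:  # with pyodbc, catch pyodbc.OperationalError
            last_exc = exc
            if attempt < attempts - 1:
                time.sleep(base_delay_s * (2 ** attempt))
    raise last_exc
```

This only helps with genuinely transient failures; it won't fix the persistent case described below.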

 

The following error occurs when I try to connect to the warehouse:

OperationalError: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 18 for SQL Server]Login timeout expired (0) (SQLDriverConnect)')

 

Luckily I have access to multiple environments, and this doesn't happen on all of them. However, one of them has been showing this error for almost one and a half weeks. That isn't critical, since the environment is currently not maintained, but it's still worrying because the pyodbc component forms a central part of my logging logic.

I'm using the following connection string with Service Principal authentication:

 

 

        self.server = server
        self.database = database
        self.clientId = clientId
        self.clientSecret = clientSecret
        # Use the first installed ODBC driver, e.g. "{ODBC Driver 18 for SQL Server}"
        self.driver = "{" + pyodbc.drivers()[0] + "}"

        self.connectionString = (
            f"Driver={self.driver};"
            f"Server={self.server};"
            f"Database={self.database};"
            "Authentication=ActiveDirectoryServicePrincipal;"
            f"UID={self.clientId};"
            f"PWD={self.clientSecret};"
        )

 

 

Here, Server is the SQL endpoint string and Database is the warehouse name.
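For reference, a minimal sketch of how such a string can be assembled and used; the server, database, and credential values below are all placeholders, and the commented-out connect call assumes pyodbc's `timeout` keyword (login timeout in seconds):

```python
# Sketch: assemble the connection string; every value below is a placeholder.
def build_connection_string(driver, server, database, client_id, client_secret):
    return (
        f"Driver={{{driver}}};"
        f"Server={server};"
        f"Database={database};"
        "Authentication=ActiveDirectoryServicePrincipal;"
        f"UID={client_id};"
        f"PWD={client_secret};"
    )

conn_str = build_connection_string(
    "ODBC Driver 18 for SQL Server",
    "myendpoint.datawarehouse.fabric.microsoft.com",  # placeholder SQL endpoint
    "MyWarehouse",                                    # warehouse name
    "00000000-0000-0000-0000-000000000000",           # app (client) ID
    "<client-secret>",
)

# With pyodbc installed, `timeout` caps the login wait in seconds so failures
# surface faster than the driver default:
# conn = pyodbc.connect(conn_str, timeout=30)
```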

 

Querying the warehouse via the SQL endpoint UI, T-SQL notebooks, or external tools like Azure Data Studio works without problems.

 

As I said, this has mostly worked like a charm for about half a year now, across multiple environments, so I'm wondering if anybody else has encountered this issue lately.

Also, any suggestions on how I can troubleshoot this? I know where it happens, but it's hard to troubleshoot beyond that.

I've tried switching the capacity from a paid F2 to a Trial and back, but it failed on both configurations.

 

Thanks,

Tim


4 REPLIES
Anonymous
Not applicable

Hi @Timal,

Did you apply any specific permissions or environment configuration to the warehouse that shows the error messages? Is this workspace capacity hosted across multiple geo regions? Could you please share some more details?

Regards,

Xiaoxin Sheng

Hey @Anonymous ,

There is an environment configuration for all workspaces, but it only covers Spark environment variables (datetime conversion and so on), nothing specific to SQL or Warehouses.

The capacity is in a single location. I have multiple tenants in the same geo region (Germany West in this case), and it works in one but not the other, so I'd say it's not a geo issue.

The resources are also in the same workspace.

I'm using a PySpark notebook to access a warehouse in the same workspace, inserting values into a table. A Lakehouse is not an option because I need pipeline integration for stored procedures and lookups; I've iterated on that a few times, and there was always an issue with using a Lakehouse instead.

 

Good points, though. I still can't really get my head around what might cause it. I will test whether I can connect via pyodbc from a local Python script.

 

If you have any other points, feel free to add them. I will open a Microsoft support ticket at some point if it isn't fixed, but I can't right now due to limited availability, so I probably wouldn't be able to collaborate on it.

 

Edit: I have tried running it from a new workspace with a Trial capacity (against the same warehouse), and that also fails. Running it locally (VS Code) via pyodbc works without any issues. Same code, just a local Python file.
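When the same code works locally but times out from one environment, it can help to separate raw network reachability from authentication: if the endpoint is unreachable at the TCP level, the problem is networking; if it is reachable, the problem is likely auth or the service. A small sketch (the endpoint name is a placeholder; SQL endpoints listen on TCP 1433):

```python
import socket

def tcp_reachable(host, port, timeout_s=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout_s."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

# Example (placeholder endpoint):
# tcp_reachable("myendpoint.datawarehouse.fabric.microsoft.com", 1433)
```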

 

Thanks,

Tim

Anonymous
Not applicable

Hi @Timal,

As you said, they are in the same data region, yet one works and the other doesn't.
Have you enabled an allow list for the Azure service principal, or are there any differences in internal network policy settings or firewalls between these scenarios?
Regards,

Xiaoxin Sheng

Hey @Anonymous ,

 

Well, another classic example of tunnel vision due to misleading error messages on my end.

After some configuration changes to the Key Vaults, I noticed that the secrets were outdated.

I updated the secrets and, guess what, it worked.

It would have been nice to get an error saying "invalid credentials" or something instead of "login timeout", but whatever.

 

The issue is fixed and, as always, it came down to a basic configuration issue.

 

TL;DR: never trust the error message, and always check your credentials first.
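Since the root cause was an expired client secret, one way to surface credential problems explicitly is to request an Entra ID token for the service principal before opening the ODBC connection; an invalid secret then fails with a clear AADSTS error rather than a generic timeout. A sketch that only builds the request (tenant and client values are placeholders, and the scope shown is an assumption based on the one commonly used for SQL endpoints):

```python
from urllib.parse import urlencode

def build_token_request(tenant_id, client_id, client_secret):
    """Build the URL and form body for an OAuth2 client-credentials token request.
    All values passed in are placeholders; the scope targets the SQL resource."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://database.windows.net/.default",
    })
    return url, body

# POSTing this form (e.g. with urllib.request) returns an access token on
# success; an outdated secret instead yields an explicit AADSTS error about an
# invalid client secret, which is far more actionable than "login timeout".
```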

 

Thanks anyway for the support!

 

Regards,

Tim
