I have quite a few dynamic SQL statements being executed via sp_executesql, and I would like to limit the number of distributed requests when they are submitted.
It appears MAXDOP doesn't work, so... suggestions?
Thank you for using the Fabric community.
I understand that you are trying to work with MAXDOP. Could you please clarify your question so that I can guide you better?
Also, could you please provide screenshots of your error?
No, I'm not trying to work with MAXDOP.
Since Fabric, as it stands today, shares a connection pool to pass 'Distributed Transactions' to the backend processors, I would like to manage the throttling of those calls per thread. I'm using 20-thread FOREACH loops that each need to be able to limit how many connections they use. That is a simple MAXDOP setting in SQL Server; not in Fabric, so...
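For illustration only: Fabric pipelines don't expose this, but if the parallel submission were driven from code, a counting semaphore is the usual way to cap how many batches hold a connection at once. This is a minimal Python sketch; `run_throttled` and `submit_batch` are hypothetical names, and `submit_batch` merely simulates submitting one sp_executesql batch.

```python
import threading
import time

def run_throttled(batches, max_concurrent, work_fn):
    """Run work_fn over batches on worker threads, allowing at most
    max_concurrent of them to be inside work_fn at the same time."""
    gate = threading.BoundedSemaphore(max_concurrent)
    lock = threading.Lock()
    results = []
    state = {"active": 0, "peak": 0}  # track concurrency to show the cap holds

    def worker(item):
        with gate:  # blocks until one of the max_concurrent slots frees up
            with lock:
                state["active"] += 1
                state["peak"] = max(state["peak"], state["active"])
            out = work_fn(item)
            with lock:
                state["active"] -= 1
                results.append(out)

    threads = [threading.Thread(target=worker, args=(b,)) for b in batches]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results, state["peak"]

# Hypothetical stand-in for submitting one sp_executesql batch.
def submit_batch(batch_id):
    time.sleep(0.01)  # simulate backend work
    return batch_id

results, peak = run_throttled(range(12), 3, submit_batch)
```

The semaphore gives each FOREACH-style loop its own concurrency ceiling, which is the effect the post is asking MAXDOP to provide.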
Thanks for using the Fabric community and reporting this.
I have reached out to the internal team for help on this and will update you once I hear from them.
Appreciate your patience.
Apologies for the delay in replying from our side.
Sorry for the inconvenience; MAXDOP is currently not a Fabric (or Synapse) concept, and there are no plans to support request limiting at this time.
We would appreciate it if you could share this as an idea on our feedback channel, where it will be open for the user community to upvote and comment on. This allows our product teams to prioritize your request against the existing feature backlog and gives insight into the potential impact of implementing the suggested feature.
A bit more context. I have two parallel FOREACH loops that load TXT files into individual tables while creating the datatype logic tables in the 'temp' schema. Each one can either update or insert (similar to MERGE). This causes a couple of issues: 1) "Error while trying to execute an external computation", and 2) "Error while updating runtime physical metadata information".
I also keep track of the records landed, staged, and loaded into the warehouse star schema. That causes issues as well, since re-runs update the counts, which triggers error 2) above.
My thought is that with something like MAXDOP I could ensure some threads run single-threaded while others, like mass inserts, run with up to 20. The updates also need to be sequential, unless someone knows how to get around the snapshot-isolation issues with updates.
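One way to keep updates strictly sequential while other threads continue inserting in parallel is a single-writer queue: every thread enqueues its update, and one dedicated worker applies them one at a time. A minimal Python sketch under that assumption (`serialize_updates` and `apply_fn` are hypothetical names; `apply_fn` stands in for executing one UPDATE statement):

```python
import queue
import threading

def serialize_updates(update_stmts, apply_fn):
    """Push all UPDATE work through one writer thread so updates run
    one at a time, while other threads stay free for parallel inserts."""
    q = queue.Queue()
    applied = []

    def writer():
        while True:
            stmt = q.get()
            if stmt is None:   # sentinel: no more updates
                break
            applied.append(apply_fn(stmt))

    t = threading.Thread(target=writer)
    t.start()
    # In the real pipeline, the FOREACH threads would be the producers.
    for s in update_stmts:
        q.put(s)
    q.put(None)
    t.join()
    return applied

serialized = serialize_updates(["u1", "u2", "u3"], lambda s: s.upper())
```

Because only the writer thread ever issues updates, two concurrent transactions can never conflict on the same row, which sidesteps the snapshot-isolation problem without any engine-level setting.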
Any ideas would be appreciated.
Hi @jamesneal @jamesneal6927, as @v-cboorla-msft mentioned, we do not have plans to support/expose a MAXDOP-like feature. However, I am really interested in learning more about your loading scenario. If you are running into errors because of how you are parallelizing, it would be interesting to learn more. How are you kicking off this job? The only way I can see you running into errors is if your two FOREACH loops are updating the same tables at the same time, so I am not sure how MAXDOP alone solves your problem. Are you using TRANSACTIONS?
@steve-msft - I'm inserting into/updating the same sys_log table with the BEGIN/END/ERROR data from ALL of these parallel threads (as stated before). By throttling those commands from ALL the other parallel file loads, I should be able to manage demand on that single table. Unless you have a better idea of how to properly manage a log table in this isolation environment.
You shouldn't need to throttle or limit the inserts in any way. They will not cause any blocking (either for other DML or for SELECTs).
For updates and deletes, based on what you've said about your workload, you'd want to keep those in transactions that are as small as possible and implement try/catch logic for when two concurrent transactions that both contain updates/deletes conflict.
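The try/catch advice above amounts to a retry loop with backoff around each small update transaction. An illustrative Python sketch, not Fabric code; `UpdateConflictError` and `run_with_retry` are hypothetical names standing in for catching the engine's snapshot-conflict error:

```python
import random
import time

class UpdateConflictError(Exception):
    """Hypothetical stand-in for a snapshot-isolation update conflict."""

def run_with_retry(do_update, max_attempts=5, base_delay=0.01):
    """Retry a small update transaction that loses a write-write
    conflict, backing off with jitter between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return do_update()
        except UpdateConflictError:
            if attempt == max_attempts:
                raise  # give up after max_attempts
            # Exponential backoff with jitter to de-synchronize retries.
            time.sleep(base_delay * (2 ** attempt) * random.random())

# Simulated update that conflicts twice before succeeding.
calls = {"n": 0}
def flaky_update():
    calls["n"] += 1
    if calls["n"] < 3:
        raise UpdateConflictError()
    return "committed"

outcome = run_with_retry(flaky_update)  # "committed" on the third attempt
```

Keeping each transaction small (as suggested above) shortens the conflict window, so most retries succeed on the first or second attempt.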