marcoG
Resolver I

On-premises SQL Server copy task remains in queued state indefinitely

Copying from an on-premises SQL Server has been released, but it doesn't work.

1 ACCEPTED SOLUTION
marcoG
Resolver I

I finally solved it myself.
For those who use proxies: the proxy configuration below must be added to the files

- C:\Program Files\On-premises data gateway\enterprisegatewayconfigurator.exe.config
- C:\Program Files\On-premises data gateway\Microsoft.PowerBI.EnterpriseGateway.exe.config
- C:\Program Files\Local Data Gateway\m\Microsoft.Mashup.Container.NetFX45.exe.config

Proxy configuration:

<system.net>
  <defaultProxy useDefaultCredentials="true">
    <proxy
      autoDetect="false"
      proxyaddress="http://192.168.1.10:3128"
      bypassonlocal="true"
      usesystemdefault="false"
    />
  </defaultProxy>
</system.net>

 

The two integration runtime files must also be updated with the same configuration:

- C:\Program Files\On-premises data gateway\FabricIntegrationRuntime\5.0\Shared\Fabrichost.exe.config
- C:\Program Files\On-premises data gateway\FabricIntegrationRuntime\5.0\Shared\Fabricworker.exe.config

Now it finally works.
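For context, here is a minimal sketch of how one of those .exe.config files might look after the edit. This assumes the file already has the standard `<configuration>` root (all of these files do); any sections already present are left untouched, and the proxy address 192.168.1.10:3128 is just the example from this thread and must be replaced with your own proxy:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <!-- ...existing sections (e.g. <runtime>, <appSettings>) remain unchanged... -->

  <!-- Added: route the gateway's outbound HTTP(S) traffic through the proxy,
       authenticating with the gateway service account's default credentials. -->
  <system.net>
    <defaultProxy useDefaultCredentials="true">
      <proxy
        autoDetect="false"
        proxyaddress="http://192.168.1.10:3128"
        bypassonlocal="true"
        usesystemdefault="false"
      />
    </defaultProxy>
  </system.net>
</configuration>
```

After saving the changes, restart the on-premises data gateway service so the gateway processes reload their configuration.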


TomT131
Frequent Visitor

Hi,

The issue still persists in mid-May. A SQL Server 2019 on-prem copy activity is stuck on "In Progress".
The gateway is the latest as of 13.05.2024 (3000.218.9).
Hoping this will be resolved soon, as we're trying to run a POC of Fabric as a viable alternative to our current setup. Thanks

tsunloan
Helper I

I experienced this same issue initially.


My pipeline was configured to use an Azure data lake gen 2 storage account for staging and the connection I used was configured to authenticate with an organisational account or service principal (can't remember which).

 

After re-configuring it to authenticate using an account key instead, the task proceeded beyond the queued state, and I finally reached the next big blocker that prevents the copy activity from working: the configured on-prem gateway is bypassed when connecting to the staging storage account.

I downloaded the latest version of the gateway (3000.218.9), but the copy still queues endlessly. I also tried enabling the gateway for the destination (ADLS Gen2) connection with a SAS token, but nothing changed. With the Azure Synapse integration runtime I don't have any of these problems.

JimThurston
New Member

I have the same issue. Whenever I use any data source that goes through an on-premises data gateway, the pipeline gets stuck in 'Queued' with no helpful error messages. This occurs whether I'm using an ODBC connection on the on-prem server or SQL Server. The destination is Azure SQL. The process gets stuck even when selecting only one row (which shows correctly in the preview, and the auto-mapping works fine).

 

Using the same connection in Dataflow Gen2 works fine, but I was hoping to use pipelines for this task.

 

Any ideas appreciated!

re84uk
Frequent Visitor

Hi

 

I have the same problem: attempting to copy data from an on-prem SQL Server to a lakehouse via data pipelines just sits indefinitely at the status "In Progress".

v-cboorla-msft
Community Support

Hi @marcoG 

 

Thanks for using Microsoft Fabric Community.

As I understand it, you are facing an issue with an on-premises SQL Server copy task getting stuck in the queued state within Microsoft Fabric.

Could you please share a screenshot of the error message you are seeing, or some additional information about the activity you are using in the Microsoft Fabric portal for the queued task?

 

Thanks.

I don't get any errors... it stays that way:

marcoG_0-1711355125802.png

 

 

The connection tests are OK...

 

 

 

Hi @marcoG 

 

I successfully replicated the reported scenario and was able to run the copy activity without encountering any difficulties. For your reference, I have attached the screenshots.

vcboorlamsft_0-1711361124556.pngvcboorlamsft_1-1711361134735.pngvcboorlamsft_2-1711361142045.pngvcboorlamsft_3-1711361148322.png

It appears to be an intermittent issue with the data pipeline run. To ensure reliable data processing, I recommend attempting the activity again after a brief period.

Try deleting the activity, refreshing the browser, adding the activity again, and running the pipeline; this might help.

You could also retry the data pipeline run; for further details, kindly refer to Retry a data pipeline run.

If the issue still persists please do let us know.

 

I hope this information helps.

 

Thanks.

I am experiencing the exact same issue.

 

JFTxJ_0-1711552175937.png

As suggested, I deleted my "Copy Data" activity and recreated it, but the same issue occurred. I also tried re-creating the Copy Data activity using the "Copy Assistant", but again I got the same result.

 

Any help would be greatly appreciated.

Hi @JFTxJ 

 

In order to understand your specific data copy scenario, could you please share some more details about your copy data activity? This will help me understand the issue and guide you better. If possible, could you provide information on the following:

 

Source and Sink Details: Can you elaborate on the types of data stores involved in the copy operation?

Data Transformation Specifications: Are there any transformations being applied to the data during the copy process?

Screenshots: If feasible, sharing screenshots of the source and sink configuration within your data copy activity would be highly beneficial for visualization purposes.

 

Thanks.

Absolutely! The Copy Data activity copies data from an on-prem SQL Server hosted on an AWS EC2 instance to a Lakehouse in the same workspace as the pipeline. The source is SQL Server 2019 Standard.

There are no transformations to the data at all; the purpose is a straight copy of the data from on-prem to the Fabric Lakehouse.

 

Here are the details of the source:

JFTxJ_0-1711627128575.png

Here are the details of the sink:

JFTxJ_2-1711627221391.png

Here are the mapping configs:

JFTxJ_3-1711627390585.png

And settings are defaults:

JFTxJ_4-1711627415888.png

 

Let me know if you need anything else.

 

Have you used the gateway too?

marcoG_0-1711362664528.png

 

Hi @marcoG 

 

Following up to see if you have a resolution yet or are still facing the issue. If you have found a resolution, please share it with the community, as it can be helpful to others.
If you have any question relating to the current thread, please let us know and we will try our best to help you.
If you have a question about a different issue, we request that you open a new thread.

 

If the issue still persists, please refer to: Set new firewall rules on the server running the gateway

vcboorlamsft_0-1711538561600.png

For more details, please refer to: Currently supported monthly updates to the on-premises data gateways

 

I hope this information helps.


Thanks.

It still does not work.

Hi @marcoG 

 

Could you please share screenshots of the error you are getting, along with the copy activity details? This will help me understand the issue and assist you better.

 

Thanks. 

As I wrote previously, I don't get any errors... it remains in the queued state indefinitely.

Hi @marcoG 

 

Monitor Pipeline Runs: Use the Azure Data Factory monitoring tools to view activity logs. These logs might provide detailed information about the issue, pinpointing specific errors or bottlenecks causing the queueing.

For details, please refer to: Monitor data pipeline runs.

 

Thanks.

Thank you for your general answers.
If I have a pipeline with only one activity that copies a table of 2 rows, it cannot take an infinite amount of time to complete, do you agree?
So what use is the monitoring hub to me, when it only tells me that the task is in progress?

marcoG_0-1711544037324.png

 
