I have created a gold_dev_lh, and my first test is just to manually add a file: Get data > Upload files, with a small CSV. It's just not working. The file just sits in the upload pane and nothing happens.
Upload files
Current uploads
File name    Lakehouse name    Progress
test.csv     gold_dev_lh       25 KB / 25 KB
Is this another bug?
I added a file to my silver lakehouse yesterday, so I know it can work. I just can't fathom why I can't add one to the gold lakehouse I just set up.
I haven't given up on schema-enabled lakehouses yet, but there seem to be way too many problems with them.
Hi @DebbieE , could you please confirm whether the issue has been resolved through the support ticket with Microsoft? If so, please share the resolution or any key insights in this thread to benefit other members of the community. We will proceed to close this thread as part of our follow-up process.
Should you require further assistance in the future, we encourage you to create a new thread in the Microsoft Fabric Community Forum. Our team will be happy to assist you.
Thank you for your cooperation.
Hi @DebbieE , hope you are doing well. May I know if you have raised the support ticket as suggested, or if your issue has been solved by the other suggestions given? If it has been solved either way, please share the insights here and mark the helpful response as "Accept as Solution" so others with similar issues may find the solution easily.
Thank you.
Hi @DebbieE , Thank you for reaching out to the Microsoft Community Forum.
This is unlikely to be a permissions or configuration issue, especially if both Lakehouses were created in the same workspace under the same user. It's almost certainly a UI glitch rather than a backend failure: the upload is probably completing, but the file isn't showing in the UI because of a rendering issue. To confirm whether the file actually uploaded, open a notebook within the same workspace and run the following code:
display(dbutils.fs.ls("Files/"))
If your file, like test.csv, appears in the output, then the upload succeeded and it's just the UI that failed to reflect it. From there, you can read the file and promote it to a Delta table. If the file doesn't appear, try uploading again. If your Fabric UI is set to another language, switch it to English, and use an incognito window in Chrome or Edge to bypass any cached UI state. After uploading, refresh the page immediately. These steps often resolve the display issue and allow the file to become visible.
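If the listing does show the file, promoting it to a Delta table only takes a couple of lines. A minimal sketch, assuming test.csv has a header row, the default Lakehouse is attached to the notebook, and the table name raw_test is just an illustrative placeholder:

# A minimal sketch: read the uploaded CSV from the attached Lakehouse's Files/ area.
df = spark.read.option("header", True).csv("Files/test.csv")

# saveAsTable writes a managed Delta table into the Lakehouse's Tables area.
# "raw_test" is a hypothetical name; adjust to your own conventions.
df.write.format("delta").mode("overwrite").saveAsTable("raw_test")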
If this helped solve the issue, please consider marking it “Accept as Solution” so others with similar queries may find it more easily. If not, please share the details, always happy to help.
Thank you.
I already tried this, but I get ModuleNotFoundError: No module named 'pyspark.dbutils'. I don't think this is available in Fabric; it's a Databricks feature, so I can't do that?
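For what it's worth, Fabric notebooks expose the same file-listing capability through mssparkutils (part of notebookutils) rather than pyspark.dbutils. A minimal sketch, assuming the default Lakehouse is attached to the notebook:

# mssparkutils ships with Fabric notebooks; it is the counterpart to Databricks' dbutils.
from notebookutils import mssparkutils

# List everything in the attached Lakehouse's Files/ area, with sizes in bytes.
for f in mssparkutils.fs.ls("Files/"):
    print(f.name, f.size)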
Tried again this morning and it still doesn't work. This is a bit ridiculous. Is this a known bug?
As an addition, I thought I would try to load a small CSV into silver to check that this works. Hours after trying again with gold, I can see this:
Files are currently being uploaded to Fabric from multiple Lakehouses. This will affect the upload time of files to dev_timesheet_silver_lh until other files have finished.
So... this isn't good. It's a tiny CSV file.
Hi @DebbieE , Thank you for reaching out to the Microsoft Community Forum.
Use this to check if the file actually uploaded:
from pyspark.sql.functions import input_file_name

# Read everything under Files/ and show which source file each row came from.
df = spark.read.csv("Files/")
df.select(input_file_name()).show(truncate=False)
If your file appears here, the upload succeeded and the UI just isn't updating. If not, it likely stalled in the upload queue. To work around the issue, try uploading through the OneLake Data Hub, which is often more reliable. You can also drop the file in OneDrive or SharePoint and create a shortcut in the Lakehouse; the shortcut syncs automatically and avoids the upload bottleneck.
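Once the shortcut is in place, a notebook reads through it exactly as it would a native path. A short sketch, using a hypothetical shortcut named onedrive_drop:

# "onedrive_drop" is a hypothetical shortcut name; a shortcut behaves like any other Files/ path.
df = spark.read.option("header", True).csv("Files/onedrive_drop/test.csv")
df.show()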
If you have access, check the Fabric Admin Portal for capacity usage; throttling is more likely when capacity is maxed out. Otherwise, wait a bit and retry the upload when fewer background jobs are running.
If the above didn't work for you, the best next step is to report the issue to Microsoft Support, as it may be a bug or a backend problem. Include all the troubleshooting steps you've already taken; this will help them understand the issue and may surface a resolution that isn't immediately obvious.
Below is the link to help create Microsoft Support ticket:
How to create a Fabric and Power BI Support ticket - Power BI | Microsoft Learn
If this helped solve the issue, please consider marking it “Accept as Solution” so others with similar queries may find it more easily. If not, please share the details, always happy to help.
Thank you.