Recently I ran into a problem where the C: drive on my Windows machine was almost full because Docker was consuming 57 GB of space. My secondary drive had 900 GB free, while my C: drive was only 165 GB in total. Unfortunately, the Docker installer for Windows does not offer any option to install on an alternate drive. Setting "data-root" in the configuration file did not help either; it made Docker Desktop hang, and I had to manually remove that entry to get Docker Desktop to come up again successfully.
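For reference, the entry I mean goes into Docker Desktop's daemon.json (Settings > Docker Engine); the path shown here is only an example, and in my case adding it caused the hang described above:
{
  "data-root": "D:\\docker-data"
}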
Fortunately, there is a solution to this problem. On Windows, Docker uses WSL to run its virtual machine, and a Linux distribution runs inside it. This virtual machine's virtual hard disk is stored in the "C:\Users\<Username>\AppData\Local\Docker\wsl\data" folder (you can check its size with the command shown after the listing below). There are two Linux distributions that run to support Docker on Windows:
PS C:\Windows\system32> wsl -l
Windows Subsystem for Linux Distributions:
docker-desktop (Default)
docker-desktop-data
PS C:\Windows\system32>
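Before moving anything, you can see how big the data distribution's virtual hard disk already is. This assumes the default Docker Desktop location mentioned above; the Length value is in bytes.
Get-Item "$env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx" | Select-Object Name, Length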
docker-desktop is the default distribution and takes relatively little space. docker-desktop-data is the one that takes a lot of space and grows over time. We need to move docker-desktop-data to the secondary drive, which is D: in my case. docker-desktop must be stopped before moving docker-desktop-data; docker-desktop-data itself does not need to be stopped. Run the following commands in a privileged PowerShell window, which you start with "Run as administrator". These calls hang if Docker Desktop is running, and since it is difficult to be sure Docker is not running, I recommend running the commands right after restarting your computer.
wsl --shutdown
wsl --export docker-desktop-data docker-desktop-data.tar
wsl --unregister docker-desktop-data
mkdir D:\docker-desktop-data
wsl --import docker-desktop-data D:\docker-desktop-data .\docker-desktop-data.tar --version 2
After running the above commands, you will notice that a VHD file has been created under the "D:\docker-desktop-data" folder. This is the virtual hard disk where Docker stores all its data. Now you can start Docker and keep using it with no data loss.
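To double-check that everything is in place, you can list the registered distributions and look at the new location; once Docker comes up cleanly, the exported .tar file in the current directory can be removed to reclaim space (the commands below assume the same paths as above):
wsl -l -v
Get-ChildItem D:\docker-desktop-data
Remove-Item .\docker-desktop-data.tar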
After doing that, I faced one problem: Docker was running slowly because I had moved it from an SSD to an HDD. Normally this would be bearable, but in my case builds started failing because I was using MySQL with Testcontainers, which gives a container 120 seconds to start; if the container does not start within 120 seconds, it is killed and a new container is started. The second container also gets killed after 120 seconds, and after three retries the build fails.
There was no provision in Testcontainers for increasing this timeout, so I had to fork Testcontainers and add one. After enhancing Testcontainers and increasing the timeout to 600 seconds, my build started working. I described that in my post:
https://blog.bigdatawithjasvant.com/2023/01/increasing-container-startup-timeout-of.html