Later thoughts: This article misuses the word "CDN". Here, it is used to refer to a web server that holds uploaded image files, which is how HYVOR saved and served images a while ago. However, we now use object storage, which is a far easier and cheaper option.
Today, we moved HYVOR from AWS to DigitalOcean. There were a few reasons for this.
Our AWS instance started stopping for no apparent reason. That happened twice in the last couple of days.
Many Hyvor Talk users had requested that we move our servers from the US to the EU for privacy reasons.
I really hate the AWS interface. The more I used it, the more I hated it.
All of our other servers are hosted on DigitalOcean. We didn't bother migrating the CDN earlier. This was the time!
So, we moved our CDN, which had 10GB of images saved on disk, to the DigitalOcean Frankfurt data center in about an hour. The process is simpler than you might think.
Make a Plan
First, I wrote down all the steps I had to do on a Notion page.
Install the application in the new Droplet
Attach a DigitalOcean volume to the Droplet
Pull files from the old server to the new server (using rsync)
Run the final rsync and change DNS records
I'll explain each step.
Setting up the New Server
DigitalOcean Droplets are easy to create and manage. Here, I chose to save the images in a DigitalOcean Volume. A Volume is like a hard drive that you can attach to a Droplet. The reason to choose Volumes is that we can increase the storage size without upgrading the main server. Usually, CDNs don't require a lot of CPU power; it's mostly about processing and serving each image once. We use Cloudflare for caching, so Cloudflare will serve cached images automatically without requiring our servers to process them again. So, we chose a Droplet with 1GB RAM ($5/month) and attached a 100GB Volume ($10/month) to it.
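By the way, a Volume attaches as a raw block device, so you have to format and mount it before your application can use it. Here's a minimal sketch, assuming a hypothetical Volume named cdn-images (DigitalOcean exposes Volumes under /dev/disk/by-id/):

# Format the Volume (first time only - this erases anything on it)
sudo mkfs.ext4 /dev/disk/by-id/scsi-0DO_Volume_cdn-images

# Create a mount point and mount the Volume
sudo mkdir -p /mnt/cdn-images
sudo mount -o discard,defaults /dev/disk/by-id/scsi-0DO_Volume_cdn-images /mnt/cdn-images

# Make the mount persist across reboots
echo '/dev/disk/by-id/scsi-0DO_Volume_cdn-images /mnt/cdn-images ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab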
Then, we installed our CDN application, which is written in PHP. It handles image uploading and delivery.
Server-to-Server SSH
This is the most interesting part for me.
If you don't love SSH, you haven't used it much.
I use Git Bash for SSH connections. You can use anything, like PuTTY or browser extensions. Log into both of your servers in two terminals.
Go to the new server's SSH session. Check the .ssh folder to see if you have any SSH keys generated (id_rsa and id_rsa.pub, or similar files). If you don't have any, use this command to generate one. You don't really need to add a passphrase.
ssh-keygen -t rsa
Next, display the public key (id_rsa.pub) and copy it from your terminal to your computer's clipboard. In Git Bash, you can select the text, right-click, and copy. Here's the command to display the public key:
cat ~/.ssh/id_rsa.pub
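If you're on Windows with Git Bash, you can also send the key straight to the clipboard instead of copying it manually (clip.exe ships with Windows):

clip < ~/.ssh/id_rsa.pub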
Now, you want to paste the copied key into the old server's authorized_keys file.
Switch to the other SSH session (the old server). Then, open the .ssh/authorized_keys file for editing. I prefer vi, but you can use vim or nano.
vi ~/.ssh/authorized_keys
Now, paste the copied key here. Make sure to add a line break if there are any other keys. Then, save the changes. Now you can SSH from the new server to the old server, which makes it possible to use rsync.
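Before moving on, it's worth testing the connection from the new server (here, user and old-host-ip are placeholders for your actual username and the old server's IP address):

ssh user@old-host-ip

If it logs you in without asking for a password, the key is set up correctly. Type exit to come back to the new server.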
Copying Files
There are two ways to copy files via SSH: scp (Secure Copy) and rsync. I first copied a small portion of data to the new server with both methods to test the speed. rsync was a lot faster than scp. Here's a good explanation of scp vs rsync. rsync definitely has more optimizations.
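If you want to run the same comparison yourself, here's a rough sketch (test-sample is a hypothetical small directory on the old server; time reports how long each transfer takes):

# Copy a small test directory with scp, timing the transfer
time scp -r user@old-host-ip:/path/in/old/host/test-sample /tmp/test-scp

# The same test with rsync (compression enabled)
time rsync -az user@old-host-ip:/path/in/old/host/test-sample/ /tmp/test-rsync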
Let's copy the content now. Run the following rsync command in the new server's SSH session.
rsync -azP user@old-host-ip:/path/in/old/host/ /path/in/new/host
Note the ending / in user@old-host-ip:/path/in/old/host/. It instructs rsync to copy the contents of that folder to the destination, not the folder itself (see the example after the flag list below).
a - archive mode; implies recursive copying (like -r) and also preserves permissions, timestamps, and symlinks
z - enables compression during transfer
P - shows progress and keeps partially transferred files, so an interrupted transfer can resume
user@old-host-ip:/path/in/old/host/ - source (since we have already set up SSH keys, rsync will connect to the old server via SSH without prompting for a password)
/path/in/new/host - destination (the folder where the content should be saved)
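To make the trailing-slash behavior concrete, here's a quick sketch with hypothetical paths:

# With the trailing slash: the contents of images/ land directly in /mnt/cdn-images
rsync -azP user@old-host-ip:/var/www/images/ /mnt/cdn-images

# Without it: rsync creates /mnt/cdn-images/images and copies into that
rsync -azP user@old-host-ip:/var/www/images /mnt/cdn-images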
In my case, it only took a few minutes to copy 10GB of images. But it will depend on your network speed.
Changing DNS
If you migrate a live website, you'll need to change DNS records too. Here's what I did.
Set up the complete application on the new server (we discussed this earlier).
Run the rsync command once.
Add a new DNS record (for example, new-cdn.example.com) so that you can check whether the application works fine on the new server. You can also check this by accessing the server via its IP address directly, but in our case, our application depended on the domain name.
After the tests are successful, run rsync again, because any files created on the old server after the last rsync should be copied to the new server. Thankfully, rsync only copies new and changed files, not everything. Make sure to use the same command we used earlier (if you want, you can preview the transfer first with a dry run; see the sketch after this list).
Right after running the rsync command, change the DNS records of your live site (in my case, I pointed cdn.hyvor.com to the new server's IP address).
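Here's the dry run mentioned above. The -n flag makes rsync list what it would transfer without actually copying anything; drop it to run the real sync. After updating DNS, dig can confirm the record has changed (cdn.example.com is a placeholder for your domain):

# Preview the final sync without copying anything
rsync -azPn user@old-host-ip:/path/in/old/host/ /path/in/new/host

# After changing DNS, check that the record points at the new server
dig +short cdn.example.com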
That's all. I thought it would be a daunting process to migrate large files from one server to another. But it's pretty simple if you know basic SSH.
If you have any questions, feel free to comment below.