Why did we migrate⌗
Running a software company in Türkiye has become quite expensive over the past few years. Skyrocketing inflation and a dramatically weakening Turkish lira against the US dollar have turned the cost of dollar-denominated infrastructure into a serious burden. A bill that seemed manageable two years ago lands very differently now that the exchange rate has increased severalfold.
Every month, we were paying DigitalOcean $1,432 for a droplet with 192 GB RAM, 32 vCPUs, a 600 GB SSD, two 1 TB block volumes, and backups enabled. The server was fine – but the price-to-performance ratio had stopped making sense.
Then we discovered the Hetzner AX162-R.
| | DigitalOcean | Hetzner AX162-R |
|---|---|---|
| CPU | 32 vCPU | AMD EPYC 9454P (48 cores/96 threads) |
| RAM | 192 GB | 256 GB DDR5 |
| Disk | 600 GB SSD + 2x1 TB volumes | 1.92 TB NVMe Gen4 RAID1 |
| Monthly cost | $1,432 | $233 |
| Savings | — | $1,199/month |
That's $14,388 saved per year – for a server that is objectively more powerful in every dimension. The decision was easy.

I have been a DigitalOcean customer for about 8 years. They have a great product and I have no complaints about reliability or developer experience. But looking at those numbers now, I can’t help but feel a little sad about all the extra money left on the table over the years. If you’re running steady-state workloads and not actively using DO’s ecosystem features, do yourself a favor and check out the dedicated server pricing before your next upgrade.
What were we running⌗
This was not a toy project. The stack includes:
- 30 MySQL databases (248 GB of data)
- 34 Nginx virtual hosts across multiple domains
- GitLab EE (42 GB backup)
- Neo4j (30 GB graph database)
- Supervisor managing dozens of background workers
- Gearman job queue
- Multiple live mobile apps serving hundreds of thousands of users
Old server: CentOS 7 – long past its end of life, but still running in production. New server: AlmaLinux 9.7 – a RHEL 9-compatible distribution and the natural successor to CentOS. This migration was an opportunity to finally escape an OS that had not received security updates in years.
Strategy: Zero Downtime⌗
The naive approach – change DNS, restart everything, hope for the best – was not acceptable. Instead, we designed a reasonable migration path consisting of six steps:
Step 1 – Full Stack Installation on New Server
Nginx (compiled from source with the same flags), PHP (via the Remi repo, with the same .ini config files as the old server), MySQL 8.0, Neo4j, GitLab EE, Node.js, Supervisor, and Gearman. Each service had to be configured to match the behavior of the old server before touching a single DNS record.
SSL certificates were handled by rsyncing the entire /etc/letsencrypt/ directory from the old server to the new one. After the migration was complete and all traffic flowed to the new server, we force-renewed all certificates at once:
certbot renew --force-renewal
Step 2 – Cloned Web Files with rsync
The entire /var/www/html directory (~65 GB, 1.5 million files) was cloned to the new server using rsync over SSH with the --checksum flag for integrity verification. We ran a final incremental sync just before the cutover to catch any files that changed after the initial clone.
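The exact rsync invocations aren't shown in the post; here is a minimal sketch of the two-pass sync, wrapped in Python so the command can be previewed before running (paths and flags other than --checksum are assumptions):

```python
import subprocess  # used only if you actually run the commands below

def build_rsync_cmd(src, dest, checksum=False):
    """Build an rsync-over-SSH command line. --checksum makes rsync
    compare file contents instead of the default size+mtime check."""
    cmd = ["rsync", "-az", "-e", "ssh"]
    if checksum:
        cmd.append("--checksum")
    cmd += [src, dest]
    return cmd

# First pass: full checksum-verified clone. Second pass (just before
# cutover): a fast incremental sync to pick up recently changed files.
initial = build_rsync_cmd("/var/www/html/", "root@NEW_SERVER_IP:/var/www/html/", checksum=True)
final = build_rsync_cmd("/var/www/html/", "root@NEW_SERVER_IP:/var/www/html/")
# subprocess.run(initial, check=True)  # uncomment to actually run
```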
Step 3 – MySQL Master to Slave Replication
Instead of taking the database offline for a dump-and-restore, we set up a live replica. The old server became the master, the new server a read-only slave. We used mydumper for the initial bulk load, then started replication from the exact binlog position recorded in the dump metadata. This kept both databases in sync in real time until the moment of cutover.
Step 4 – DNS TTL Reduction
We scripted the DigitalOcean DNS API to reduce the TTL of all A and AAAA records from 3600 to 300 seconds – without touching MX or TXT records (changing mail record TTLs can cause deliverability issues). After waiting an hour for the old TTL to expire globally, we were ready to cut over in under 5 minutes.
Step 5 – Converted old server nginx to reverse proxy
We wrote a Python script that parses the server {} blocks in all 34 Nginx site configurations, backs up the originals, and replaces them with proxy configurations pointing to the new server. This meant that during DNS propagation, any request still arriving at the old IP was silently forwarded. No user would notice any disruption.
Step 6 – DNS Cutover and Decommission
A single Python script hit the DigitalOcean API and flipped all the A records to the new server IP in a matter of seconds. The old server remained in cold standby for a week, then was shut down.
Key insight: at no point was there a window where service was unavailable. Traffic was always being served – either directly or through the proxy.

MySQL Migration⌗
This was the most complicated part of the entire operation.
Dumping data⌗
We used mydumper instead of the standard mysqldump – and it made a huge difference. By taking advantage of the new server's 48 CPU cores for parallel export and import, what would have taken days with single-threaded mysqldump was completed in hours. If you are migrating a large MySQL database and not using mydumper/myloader, you're doing it the hard way.
mydumper \
--threads 32 \
--compress \
--trx-consistency-only \
--skip-definer \
--chunk-filesize 256 \
-v 3 \
--outputdir /root/mydumper_backup/
The main dump metadata file recorded the binlog state at the time of the snapshot:
File: mysql-bin.000004
Position: 21834307
This will be the starting point of our replication.
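These coordinates can be pulled out of the metadata file programmatically; a small sketch matching the format shown above (the metadata layout varies between mydumper versions):

```python
import re

def parse_binlog_coords(metadata_text):
    """Extract the master binlog file and position from mydumper's
    metadata file, e.g. 'File: mysql-bin.000004' / 'Position: 21834307'."""
    file_m = re.search(r"File:\s*(\S+)", metadata_text)
    pos_m = re.search(r"Position:\s*(\d+)", metadata_text)
    if not (file_m and pos_m):
        raise ValueError("binlog coordinates not found in metadata")
    return file_m.group(1), int(pos_m.group(1))

log_file, log_pos = parse_binlog_coords("File: mysql-bin.000004\nPosition: 21834307\n")
# Feed these into CHANGE MASTER TO ... MASTER_LOG_FILE / MASTER_LOG_POS
```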
Transferring the dump to a new server⌗
Once the dump was complete, we transferred it to the new server with rsync over SSH. With 248 GB of data in compressed chunks, this was significantly faster than any alternative transfer method:
rsync -avz --progress /root/mydumper_backup/ root@NEW_SERVER:/root/mydumper_backup/
The --compress flag to mydumper paid off here – the compressed chunks moved much faster over the wire.
Loading data⌗
myloader \
--threads 32 \
--overwrite-tables \
--ignore-errors 1062 \
--skip-definer \
-v 3 \
--directory /root/mydumper_backup/
MySQL 5.7 to 8.0 problem⌗
Being stuck on CentOS 7 meant we were also stuck on MySQL 5.7 – an aging version that had been in production for years. Before migrating, we ran mysqlcheck --check-upgrade to verify that our data was compatible with MySQL 8.0. It came back clean, so we installed the latest MySQL 8.0 Community release on the new server. The performance improvement was immediately noticeable across all of our projects – query execution times dropped significantly thanks to MySQL 8.0's improved optimizer and InnoDB enhancements.
That said, the version change introduced a tricky problem.
After the import, the mysql.user table had the wrong column structure – 45 columns instead of the expected 51 – and the mysql.infoschema account was missing, breaking user authentication.
The fix:
systemctl stop mysqld
mysqld --upgrade=FORCE --user=mysql &
But it failed the first time:
ERROR: 'sys.innodb_buffer_stats_by_schema' is not VIEW
The sys schema objects had been imported as regular tables instead of views. Solution:
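The exact commands are omitted in the original; one common remedy for this error – an assumption on my part, not confirmed by the post – is to drop the broken sys schema so the forced upgrade can rebuild it from scratch:

```sql
-- The forced upgrade recreates sys if it is absent
DROP DATABASE sys;
```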
We then ran the upgrade again. Success.
Setting up MySQL Replication⌗
With both dumps imported, we configured the new server as a replica of the old server:
CHANGE MASTER TO
MASTER_HOST='OLD_SERVER_IP',
MASTER_USER='replicator',
MASTER_PASSWORD='...',
MASTER_PORT=3306,
MASTER_LOG_FILE='mysql-bin.000004',
MASTER_LOG_POS=21834307;
START SLAVE;
Almost immediately, replication stopped with error 1062 (duplicate key). This happened because our dump was taken in two passes – during the interval between them, rows were written to some tables, and now both the imported dump and the binlog replay were attempting to insert the same rows.
Solution:
SET GLOBAL slave_exec_mode = 'IDEMPOTENT';
START SLAVE;
IDEMPOTENT mode silently skips duplicate-key and missing-row errors. All important databases synced without errors, and within a few minutes Seconds_Behind_Master dropped to 0.
Testing before the cutover⌗
Before touching a single DNS record, we needed to verify that every service worked correctly on the new server. Tip: we temporarily edited the /etc/hosts file on our local machines to point our domain names at the new server's IP.
# /etc/hosts (local machine)
NEW_SERVER_IP yourdomain1.com
NEW_SERVER_IP yourdomain2.com
# ... and so on for all your domains
With that in place, our browsers and Postman hit the new server while the rest of the world was still on the old one. We tested our API endpoints, checked the admin panels, and verified that every service responded correctly. Only after this confirmation did we proceed to the cutover.
A Sneaky Super Privilege Problem⌗
Once master-slave replication was fully synchronized, we noticed that INSERT statements were succeeding on the new server when they should not have been – read_only = 1 was set, yet writes were going through.
The reason: all PHP application users had been granted the SUPER privilege, and in MySQL, SUPER bypasses read_only.
SHOW GRANTS FOR 'some_db_user'@'localhost';
-- Result: GRANT SELECT, INSERT, UPDATE, DELETE, ..., SUPER, ... ON *.*
We revoked it from all 24 application users:
REVOKE SUPER ON *.* FROM 'some_db_user'@'localhost';
-- repeated for all 24 users
FLUSH PRIVILEGES;
After this, read_only = 1 correctly blocked all writes from application users while still allowing replication to proceed.

DNS Preparation⌗
All domains were registered at GoDaddy, with nameservers pointing to DigitalOcean DNS. We scripted the TTL reduction against the DigitalOcean API, touching only A and AAAA records – not MX or TXT – since changing mail record TTLs could cause deliverability issues with Google Workspace.
# Only A and AAAA records
if record["type"] in ("A", "AAAA"):
    update_record_ttl(domain, record["id"], 300)
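Fleshed out, the TTL script can look something like this (endpoints per the DigitalOcean v2 API; the token placeholder, pagination, and error handling are simplified assumptions, not the original script):

```python
import json
import urllib.request

API = "https://api.digitalocean.com/v2"
TOKEN = "YOUR_DO_API_TOKEN"  # placeholder
DRY_RUN = True               # preview changes without applying them

def wants_ttl_update(record, new_ttl=300):
    """Lower TTL only on A/AAAA records; MX and TXT stay untouched."""
    return record["type"] in ("A", "AAAA") and record.get("ttl") != new_ttl

def do_request(path, payload=None, method="GET"):
    req = urllib.request.Request(
        API + path,
        data=json.dumps(payload).encode() if payload else None,
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method=method,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def lower_ttls(domain):
    records = do_request(f"/domains/{domain}/records?per_page=200")["domain_records"]
    for record in records:
        if wants_ttl_update(record):
            if DRY_RUN:
                print(f"would set TTL=300 on {record['type']} {record['name']}")
            else:
                do_request(f"/domains/{domain}/records/{record['id']}",
                           {"ttl": 300}, method="PUT")
```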
After waiting an hour for the old TTL to expire, we were ready.
Converting the old server's Nginx to a reverse proxy⌗
Instead of editing 34 config files by hand, we wrote a Python script that parses each server {} block in every configuration file, identifies the main content blocks, replaces them with a proxy configuration, and backs up the originals as .backup files.
server {
listen 443 ssl;
server_name yourdomain.com;
ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
location / {
proxy_pass https://NEW_SERVER_IP;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_ssl_verify off;
proxy_read_timeout 150;
}
}
Key: proxy_ssl_verify off – the new server's SSL certificate is valid for the domain, not the IP address. Disabling verification here is acceptable because we control both ends.
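The rewrite script itself isn't reproduced in the post; a heavily simplified sketch of the transformation it performs (the real script parsed configs more carefully and wrote .backup files first):

```python
PROXY_LOCATION = """    location / {
        proxy_pass https://NEW_SERVER_IP;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_ssl_verify off;
        proxy_read_timeout 150;
    }"""

# Directives worth carrying over from the original server {} block.
# "ssl_certificate" also matches ssl_certificate_key by prefix.
KEEP = ("listen", "server_name", "ssl_certificate", "include")

def to_proxy(server_block: str) -> str:
    """Rebuild a server {} block: keep listen/server_name/ssl lines,
    replace everything else with a single catch-all proxy location."""
    kept = [line for line in server_block.splitlines()
            if line.strip().startswith(KEEP)]
    return "server {\n" + "\n".join(kept) + "\n" + PROXY_LOCATION + "\n}\n"
```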
Cutover⌗
With replication at Seconds_Behind_Master: 0 and the reverse proxy ready, we executed the cutover in order:
1. New server: STOP SLAVE;
2. New server: SET GLOBAL read_only = 0;
3. New server: RESET SLAVE ALL;
4. New server: supervisorctl start all
5. Old server: nginx -t && systemctl reload nginx (proxy goes live)
6. Old server: supervisorctl stop all
7. Mac: python3 do_cutover.py (DNS: all A records to new server IP)
8. Wait: ~5 minutes for propagation
9. Old server: comment out all crontab entries
The DNS cutover script hit the DigitalOcean API and changed every A record to the new server IP in about 10 seconds.
One last thing after the cutover⌗
After the migration, we found that many GitLab project webhooks were still pointing to the old server IP. We wrote a script to scan all projects and update them in bulk via the GitLab API.
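That script isn't shown either; a sketch against the GitLab REST API (the host, token placeholder, and omitted pagination are assumptions):

```python
import json
import urllib.parse
import urllib.request

GITLAB = "https://gitlab.example.com/api/v4"  # assumed self-hosted URL
TOKEN = "YOUR_GITLAB_TOKEN"                   # placeholder
OLD_IP, NEW_IP = "OLD_SERVER_IP", "NEW_SERVER_IP"

def rewrite_hook_url(url, old=OLD_IP, new=NEW_IP):
    """Swap the old server address in a webhook URL for the new one."""
    return url.replace(old, new)

def api(path, payload=None, method="GET"):
    req = urllib.request.Request(
        GITLAB + path,
        data=urllib.parse.urlencode(payload).encode() if payload else None,
        headers={"PRIVATE-TOKEN": TOKEN},
        method=method,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def update_all_webhooks():
    for project in api("/projects?per_page=100"):  # pagination omitted
        for hook in api(f"/projects/{project['id']}/hooks"):
            new_url = rewrite_hook_url(hook["url"])
            if new_url != hook["url"]:
                api(f"/projects/{project['id']}/hooks/{hook['id']}",
                    {"url": new_url}, method="PUT")
```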
Final numbers⌗
We went from $1,432/month down to $233/month – saving $14,388 per year. And we got a more powerful machine:
- CPU: 32 vCPUs → 96 logical CPUs (AMD EPYC 9454P, 48 cores x 2 threads)
- RAM: 192 GB → 256 GB DDR5
- Storage: ~2.6 TB consolidated into a 1.92 TB NVMe RAID1 array
- Downtime: 0 minutes
The entire journey took about 24 hours. No users were affected.
Key takeaways⌗
MySQL replication is your best friend for zero-downtime migration. Set it up early, let it sync, then cut over with confidence.
Check your MySQL user privileges before migrating. The SUPER privilege bypasses read_only – if your app users have it, your replica is not actually read-only.
Script everything. DNS updates, nginx config rewrites, webhook updates – doing these by hand on 34+ sites would have taken hours and produced errors.
mydumper + myloader dramatically outperform mysqldump on large datasets. Parallel dump/restore with 32 threads reduced days of work to hours.
Cloud providers are expensive for steady-state workloads. If you're not using autoscaling or short-lived infrastructure, a dedicated server often delivers better performance at a fraction of the cost.
All scripts on GitHub⌗
All Python scripts used in this migration are open-source and available on GitHub:
GitHub project
All scripts support a DRY_RUN = True mode so you can safely preview changes before applying them.