Cloud
Docker Setup - Part 4: Setting up Redundant HTTP Services and Docker Build Speedup
Frederik Banke
December 30, 2017
3 min

Table Of Contents

  • What we will cover in this part
  • Setting up replication
  • Increasing PHP upload limit
  • Speeding up the build time for docker images
  • Updating WordPress
  • Next steps

The setup needs to scale better than it currently does. Right now http1 and http2 are defined as two separate services instead of one service with two replicas. The build process is also a bit slow, primarily because it needs to compile PHP for every build. I did not succeed with that part, but I did learn a lot about the Docker build cache; more about that later. I also managed to fix a few other things that bothered me, like the PHP file upload limit, and to update WordPress, since v4.9.1 was released after my previous deployment.

What we will cover in this part

  • Making the http1 and http2 services into a single replicated service
  • Increasing the PHP file upload limit
  • Understanding the Docker build cache better, to improve the build time
  • Updating WordPress to the newest version

You can download the full project here.

Setting up replication

The two web servers are currently defined as two distinct services, but this can be made more concise by using replication. Another benefit is that it is much easier to scale the service using replication than by adding additional service definitions to the yml file.

Previous setup with two service definitions

http1:
  image: 637345297332.dkr.ecr.eu-west-1.amazonaws.com/patch-httpd:latest
  build: httpd
  links:
    - php
  volumes:
    - wp-core:/var/www/datadriven-investment.com/:ro
    - wp-core:/var/www/broderi-info.dk/:ro
    - datadriven-investment-data:/var/www/datadriven-investment.com/wp-content:ro
    - broderi-info-data:/var/www/broderi-info.dk/wp-content:ro
http2:
  image: 637345297332.dkr.ecr.eu-west-1.amazonaws.com/patch-httpd:latest
  build: httpd
  links:
    - php
  volumes:
    - wp-core:/var/www/datadriven-investment.com/:ro
    - wp-core:/var/www/broderi-info.dk/:ro
    - datadriven-investment-data:/var/www/datadriven-investment.com/wp-content:ro
    - broderi-info-data:/var/www/broderi-info.dk/wp-content:ro

New setup

http:
  image: 637345297332.dkr.ecr.eu-west-1.amazonaws.com/patch-httpd:latest
  build: httpd
  links:
    - php
  deploy:
    replicas: 2
  volumes:
    - wp-core:/var/www/datadriven-investment.com/:ro
    - wp-core:/var/www/broderi-info.dk/:ro
    - datadriven-investment-data:/var/www/datadriven-investment.com/wp-content:ro
    - broderi-info-data:/var/www/broderi-info.dk/wp-content:ro

Instead of two services where only the name differs, there is now a single service with a deploy specification that requests two replicas. When deploying to the Docker swarm it will start two containers. Docker adds the IP addresses of both containers to the internal DNS, so resolving “http” uses a round-robin scheme that switches between the replicas. Our load balancer can use this to switch between the containers exactly like in the setup with two services.
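
As a rough sketch of how this behaves in swarm mode (the stack name "patch" is only an assumption for illustration, not necessarily the project's actual name):

# Deploy the stack; the http service starts with the two replicas from the yml file.
docker stack deploy -c docker-compose.yml patch

# Scaling up later is a single command; the DNS-based discovery picks up the new replicas.
docker service scale patch_http=3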

The previous nginx load balancer setup

upstream datadriven-investment-loadbalance {
    server http1;
    server http2;
}

It can now be changed to

upstream datadriven-investment-loadbalance {
    server http;
}

Switching between the containers is now handled by the DNS service instead of nginx. If we change the setup to three or more replicas it will automatically work because of the DNS service.

In my current setup I use docker-compose, but it ignores the replicas directive since that is only used by Docker in swarm mode. Since the service discovery by nginx is based on the DNS name, it still works fine in my local development setup where only a single instance is started. A side benefit is that we save resources in the development environment because only a single container is started.

Increasing PHP upload limit

The default file upload limit on the php:7.1-fpm-alpine Docker image is just 2 MB, which is a bit too low for me, so I wanted to raise the limit to 64 MB instead. The image loads all the .ini files in the folder /usr/local/etc/php/conf.d/, so we need to add a file to this folder that overrides the defaults.

uploads.ini

file_uploads = On
upload_max_filesize = 64M
post_max_size = 64M

This file is then copied to the correct folder using the ADD directive in the Dockerfile

ADD uploads.ini /usr/local/etc/php/conf.d/uploads.ini
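
For context, the relevant Dockerfile fragment could look roughly like this; the project's actual Dockerfile compiles PHP itself, so treat the base image line as illustrative:

FROM php:7.1-fpm-alpine

# PHP loads every .ini file in conf.d, so this file overrides the default limits.
ADD uploads.ini /usr/local/etc/php/conf.d/uploads.ini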

This changes the upload limit in the PHP service, but nginx also has an internal limit on request size, so we need to increase that limit in the load balancer and http service as well.

client_max_body_size 64m;

Both settings are described here.
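
As a sketch of where the nginx directive goes, it can be set at the http level so it applies to all server blocks; the surrounding configuration is illustrative, not the project's actual file:

http {
    # Allow request bodies up to 64 MB, matching the PHP upload limit.
    client_max_body_size 64m;

    # ... existing server and upstream configuration ...
}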

Speeding up the build time for docker images

The Bitbucket build pipeline takes around 5 minutes to build, which allows around 10 deploys each month on the free plan. Since most of the build time is spent compiling PHP in the php-fpm image, I thought it would be possible to cache this part to improve the build time. Bitbucket does have a caching mechanism, and Docker also has a cache that speeds up builds.

The correct way to speed up the build process is to reuse the Docker build cache between builds, but as described in this issue, support for it in Bitbucket Pipelines is currently a work in progress. It seems like it will be supported shortly, so I will not try to do a workaround.

The way the Docker build cache works is described here. In short, for every RUN command in the Dockerfile, Docker creates a cached step, also called a layer. If the RUN command does not change, Docker uses the cache instead of executing the command again.
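
To make the cache behaviour concrete, here is a small illustrative Dockerfile, not the project's actual one; the package names and the echo placeholder stand in for the real PHP compile step:

# Layers are cached from the top down, so slow, rarely changing steps go first.
FROM alpine:3.7

# Expensive step: as long as this RUN line (and everything above it) is
# unchanged, Docker reuses the cached layer instead of executing it again.
RUN apk add --no-cache build-base autoconf \
    && echo "pretend this compiles PHP and takes several minutes"

# Frequently changing files go last, so editing them only invalidates
# the layers from this point on.
COPY uploads.ini /usr/local/etc/php/conf.d/uploads.ini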

Updating WordPress

In my initial setup WordPress was at v4.9; since then v4.9.1 was released, fixing a few security problems, so I needed to update my version. The setup is already prepared for this: I just needed to download the newest tar.gz file from wordpress.org and overwrite the existing file in the folder for the fileserver.

When the new setup is deployed, the WordPress core is updated.
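
A minimal sketch of that update step; the folder and file names are assumptions for illustration, the actual paths in the project may differ:

# Fetch the newest WordPress release and overwrite the archive the fileserver serves.
curl -L -o fileserver/wordpress.tar.gz https://wordpress.org/latest.tar.gz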

Next steps

The http service is now replicated, but this service only serves static files and acts as a proxy to php-fpm, which is not a very resource-intensive job. Most of the load falls on the php-fpm service, which is not replicated yet. This bottleneck needs to be removed.

I noticed that the service I use for database backups is not working correctly; it does not do the automatic backups it is supposed to. It only performs a backup if I attach to the container and run the backup command manually. This needs to be fixed. I would also like to find a way to boot and run an image at a scheduled time. Right now the two backup services run continuously but are only needed when they do the backup; the rest of the time they just idle, wasting resources.

