To run WordPress we're going to need a hosting stack. In this case we're going to use the LEMP stack, which stands for Linux, Nginx (pronounced engine-x, hence the E), MySQL and PHP. On top of that, we'll add some additional tools to improve speed and security.
I based my complete installation on an excellent post by Dave Hilditch. If you need or want to know more details on the choice of software and what it does exactly, please go to his original post (link below). He did a lot of testing and actually has the in-depth knowledge I lack.
https://www.wpintense.com/2018/10/20/installing-the-fastest-wordpress-stack-ubuntu-18-mysql-8/
I did, however, modify some parameters because my server has less memory. For more information on those settings, go to Ashley Rich's tutorial on SpinupWP:
https://spinupwp.com/hosting-wordpress-yourself-nginx-php-mysql/
Log in to the Ubuntu EC2 instance with PuTTY as described in part 6.
Run the following commands one by one to install MySQL. Copy a single line and right-click in PuTTY to paste.
NOTE: Do not choose the default for the authentication mechanism when installing MySQL, use the LEGACY authentication mechanism to remain WordPress compatible.
sudo apt update && sudo apt upgrade
sudo wget -c https://dev.mysql.com/get/mysql-apt-config_0.8.14-1_all.deb
sudo dpkg -i mysql-apt-config_0.8.14-1_all.deb
sudo apt install software-properties-common
sudo add-apt-repository ppa:ondrej/php
sudo apt update && sudo apt upgrade
sudo apt install mysql-server -y # accept all defaults
Run the following commands one by one to install Nginx, PHP and Redis.
sudo apt -y install php7.3
sudo apt purge apache2 -y
sudo apt install -y nginx tmux curl php7.3-fpm php7.3-cli php7.3-curl php7.3-gd php7.3-intl php7.3-mysql php7.3-mbstring php7.3-zip php7.3-xml unzip php7.3-soap php7.3-redis redis
Install Fail2ban.
sudo apt install fail2ban
Install Let's Encrypt (Certbot).
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt update
sudo apt install python-certbot-nginx
Edit /etc/redis/redis.conf to prevent it from writing to disk. We are only interested in using it as an object cache in memory.
sudo nano /etc/redis/redis.conf
Press ctrl+w to search and look for maxmemory. Now set it to 100mb.
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
maxmemory 100mb
Now search for maxmemory-policy and set it to allkeys-lru so that Redis evicts the least recently used keys first when memory runs out.
# The default is:
maxmemory-policy allkeys-lru
Search for save 900 and comment out the 3 lines that start with save.
# like in the following example:
#
# save ""
#save 900 1
#save 300 10
#save 60 10000
Now press ctrl+x, y and enter to save and close.
Restart redis with this command:
sudo service redis-server restart
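Optionally, you can verify that Redis picked up the new settings with redis-cli, which was installed together with the redis package:
redis-cli config get maxmemory
redis-cli config get maxmemory-policy
# maxmemory is reported in bytes, so expect 104857600 for 100mb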
Go to Route 53 and add a Type A record with name blog pointing to the Ubuntu instance: Value <instance public IP>.
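If you prefer the command line over the console, a sketch of the same record with the AWS CLI would look roughly like this; the hosted zone ID below is a placeholder and <instance public IP> is the same address as above:
# hypothetical example: replace the hosted zone ID and the IP with your own values
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"blog.jodibooks.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"<instance public IP>"}]}}]}'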
We are lazy so we're not going to figure out all the settings. Luckily Dave has done all the tedious work and is kind enough to share his efforts. Let's download it:
cd ~
git clone https://github.com/dhilditch/wpintense-rocket-stack-ubuntu18-wordpress
sudo cp wpintense-rocket-stack-ubuntu18-wordpress/nginx/* /etc/nginx/ -R
sudo ln -s /etc/nginx/sites-available/jodibooks.conf /etc/nginx/sites-enabled/
sudo rm /etc/nginx/sites-enabled/default
His files use Nginx's fastcgi_cache feature, and for that to work we need to create a cache folder.
sudo mkdir /var/www/cache
sudo mkdir /var/www/cache/jodibooks
sudo chown www-data:www-data /var/www/cache/ -R
Open the jodibooks.conf file and update the cache folder.
sudo nano /etc/nginx/sites-available/jodibooks.conf
Change the top of the file to:
# This config file uses nginx fastcgi-cache
fastcgi_cache_path /var/www/cache/jodibooks levels=1:2 keys_zone=jodibooks:100m inactive=60m;
Now we're going to enter our own domain name as server_name and change the website folder to our own /var/www/jodibooks. We also change the log files to our name: jodibooks_access.log and jodibooks_error.log.
server {
    listen 80;
    listen [::]:80;
    server_name blog.jodibooks.com;
    root /var/www/jodibooks;
    index index.php index.htm index.html;
    access_log /var/log/nginx/jodibooks_access.log;
    error_log /var/log/nginx/jodibooks_error.log;
Now press ctrl+x, y and enter to save and close. Check that you made no errors and restart Nginx:
sudo ln -s /etc/nginx/sites-available/jodibooks.conf /etc/nginx/sites-enabled/
sudo nginx -t
sudo service nginx restart
When you go to your website blog.jodibooks.com you should get a 404 error, because there is no content in the web folder yet.
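You can also verify this from the PuTTY session with curl, assuming the DNS record from Route 53 has propagated:
curl -I http://blog.jodibooks.com
# expect "HTTP/1.1 404 Not Found" from nginx, since the web root is still empty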
Because our server has limited memory, we're going to change some additional settings. Search for each one of them and change the values to the ones listed below.
sudo nano /etc/nginx/nginx.conf
Set worker_processes 2; and the max number of connections per worker, worker_connections 1024;. This gives us a maximum of 2048 connections. Also set multi_accept on; to accept all new connections at once, as opposed to accepting one new connection at a time.
user www-data;
worker_processes 2;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 1024;
    multi_accept on;
}
Set keepalive_timeout 30; and server_tokens off;. The latter disables emitting the Nginx version number in error messages and response headers. Also set client_max_body_size 64m;, which is the maximum upload size for media library files.
http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 30;
    types_hash_max_size 2048;
    server_tokens off;
    client_max_body_size 64m;
In the Gzip section set gzip_proxied any;, gzip_comp_level 2; and the gzip_types ...; list shown below.
    ##
    # Gzip Settings
    ##
    gzip on;
    # gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 2;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
Press ctrl+x, y and enter to save and close. Check that you made no errors:
sudo nginx -t
Open and edit the fastcgi_params file.
sudo nano /etc/nginx/fastcgi_params
Check if the line below exists. If not, add it.
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
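A quick way to check is to grep for it; if the command prints nothing, the line is missing:
grep SCRIPT_FILENAME /etc/nginx/fastcgi_params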
Save and close, check if you didn't make an error and restart Nginx.
sudo nginx -t
sudo service nginx restart
WordPress uses a MySQL database which we have to create first. Log in to MySQL. The first time you have to enter a password. Generate a strong password and save it somewhere.
sudo mysql -u root -p
Create the database by running the following SQL one line at a time, including the ";". Again choose a strong password, this time for the WordPress database user. This user only has access to this database, not to other MySQL databases on this instance. For now that doesn't matter as we only run the WP database, but it might come in handy in the future: Appendix E.
Database user: wordpress
Database name: jodibooksWP
CREATE DATABASE jodibooksWP;
CREATE USER 'wordpress'@'localhost' IDENTIFIED WITH mysql_native_password BY 'CHOOSEASTRONGPASSWORD';
GRANT ALL PRIVILEGES ON jodibooksWP.* TO 'wordpress'@'localhost';
EXIT;
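As an optional check that the user and grants work, you can log back in as the new wordpress user and list its databases (you'll be prompted for the password you just chose):
mysql -u wordpress -p -e "SHOW DATABASES;"
You should see jodibooksWP in the output.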
Now that we have a database, we can tune and tweak the settings. Open mysqld.cnf to edit it and add the lines below to the end of the file.
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
innodb_buffer_pool_size = 200M
innodb_log_file_size = 100M
innodb_buffer_pool_instances = 8
innodb_io_capacity = 5000
max_binlog_size = 100M
expire_logs_days = 3
max_connections = 50
I based mine on these two posts; the first uses a server with 4 GB of RAM, the second 2 GB, whereas we have 1 GB:
Now press ctrl+x, y and enter to save and close.
Restart MySQL:
sudo service mysql restart
Now we're going to create a hidden file with our credentials, so we don't have to enter them every time (https://easyengine.io/tutorials/mysql/mycnf-preference/). Open the file in the nano editor.
sudo nano ~/.my.cnf
It should in the end look something like this:
[client]
socket=/var/run/mysqld/mysqld.sock
user=wordpress
password=<CHOOSEASTRONGPASSWORD>
Press ctrl+x, y and enter to save and close. Then restrict the file's permissions so only your user can read it:
sudo chmod 0600 ~/.my.cnf
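With the credentials file in place, the mysql client should now connect without asking for a password. A quick way to confirm (run it without sudo so your own ~/.my.cnf is used):
mysql -e "SHOW DATABASES;"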
After the blog is up and running for a few weeks (and regularly after that), run a tuning script to check for possible optimizations to the settings in mysqld.cnf (step 3).
cd ~
git clone https://github.com/BMDan/tuning-primer.sh
cd tuning-primer.sh
sudo ./tuning-primer.sh
Open the php.ini file to edit it.
sudo nano /etc/php/7.3/fpm/php.ini
Now search (ctrl+w) for max_execution_time, memory_limit, upload_max_filesize and post_max_size. Set them to the values below (the max sizes should be the same as entered in the Nginx config file earlier).
max_execution_time = 6000
memory_limit = 256M
upload_max_filesize = 64M
post_max_size = 64M
Search for ;max_input_vars = 1000, uncomment it and change the value to 5000.
max_input_vars = 5000
Search for opcache, uncomment the following lines and change the values if needed:
opcache.enable=1
opcache.memory_consumption=128
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=50000
opcache.revalidate_freq=60
Press ctrl+x, y and enter to save and close. Check for correct syntax:
sudo php-fpm7.3 -t
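If you want to double-check that these are the values PHP-FPM will actually use (the command-line php binary reads a different php.ini), you can dump the FPM configuration and grep for them; this is just an optional check:
sudo php-fpm7.3 -i | grep -E "max_execution_time|memory_limit|upload_max_filesize|post_max_size"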
Now open the www.conf file for editing.
sudo nano /etc/php/7.3/fpm/pool.d/www.conf
Search for pm = and set it to static. "The default is pm = dynamic. If you set pm = static, you can then set pm.max_children to control how many simultaneous PHP processes will be running the entire time your server is running."
; Note: This value is mandatory.
pm = static

; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'.
; This value sets the limit on the number of simultaneous requests that will be
; served. Equivalent to the ApacheMaxClients directive with mpm_prefork.
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
; CGI. The below defaults are based on a server without much resources. Don't
; forget to tweak pm.* to fit your needs.
; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand'
; Note: This value is mandatory.
pm.max_children = 2
Save and close the file, check for correct syntax and restart php.
sudo php-fpm7.3 -t
sudo service php7.3-fpm restart
Let's check if we indeed have 2 child processes running. Type top in the console and press shift+M to sort on memory usage. You can also press e to show the memory in megabytes. We should see a few occurrences of both nginx and php-fpm. Both will have one instance running under the root user (this is the main process that spawns each worker) and the remainder (2) should be running under the username you specified (www-data).
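Should you later want to tune pm.max_children, a rough rule of thumb is to divide the memory you can spare for PHP by the size of an average worker. The one-liner below is only an illustration of how to get that average; it isn't part of the original guide:
# average resident memory per php-fpm process, in MB
ps --no-headers -o rss -C php-fpm7.3 | awk '{sum+=$1; n++} END {printf "%.0f MB per php-fpm process\n", sum/n/1024}'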
To quote Dave Hilditch: "The basic install, if you’ve followed the installation above, automatically includes SSH/putty attacks and blocks those attacks based on IP addresses. I will write a separate article about configuring fail2ban as it can be complicated, but if you wish to get this set up, you should install the WP fail2ban plugin and follow their guide for adding their ‘jails’ and ‘filters’. Basically, fail2ban uses filter config files to spot dodgy traffic and then uses the jail config files to decide how long to ban them."
I could not find that article and haven't bothered since. So please share if you have a decent tutorial on fail2ban.
We already had a WordPress blog running, so we wanted to migrate the files and database to this new EC2 instance. That was in the end pretty easy, but there are some things you need to know. I'll explain in the "Import existing site" section. Installing a brand new site however is much easier, so let's start with that.
Download and install WordPress by running the following commands one line at a time.
sudo wget https://wordpress.org/latest.zip -P /var/www/
sudo unzip /var/www/latest.zip -d /var/www/
sudo mv /var/www/wordpress /var/www/jodibooks
sudo chown www-data:www-data /var/www/jodibooks -R
sudo rm /var/www/latest.zip
Enter your (sub)domain name in the browser and you should see the WordPress installation screen. There you'll be asked for the database name, the database username and the database password, so enter the ones from when you created the MySQL database:
Database user: wordpress
Database name: jodibooksWP
You should obviously change ‘CHOOSEASTRONGPASSWORD’, although with this config, and because we ran the secure mysql scripts, remote login to your MySQL server will be disallowed.
"For some weird reason, the WordPress installer fails miserably if your site starts out HTTPS. So, you have to install over HTTP and then convert to HTTPS." As I haven't done a new install, I'll advise you to head over to the original post here: https://www.wpintense.com/2018/10/20/installing-the-fastest-wordpress-stack-ubuntu-18-mysql-8/ and browse to the "Changing your site to use SSL" section.
Again this is nothing new. I found an excellent guide here:
My migration was a little bit different though, so I'll summarize to only show the parts of the guide I used and what I needed to change.
On your existing WordPress site, install and activate the Duplicator plugin.
Go to the plugin page and click Create New.
Give the package a name, 20200219_jodibooks, and click Next.
After scanning we get a notice about big files. There's not much we can do about it, so we mark the checkbox and press Build.
Give it some time. When the build is done, download both files to your local computer.
Now we need to upload these files to our Ubuntu instance. To do that we first need to make a folder in our home directory on the instance. So in a PuTTY SSH session to the Ubuntu instance type:
cd ~
mkdir downloads
Now we will copy the files to that folder using the SSH connection with a program called PSCP. It comes standard with the PuTTY Windows installer.
Open a command shell in Windows and check whether the PuTTY folder is in your %PATH%. Enter pscp and the result should be something like this.
PuTTY Secure Copy client
Release 0.73
Usage: pscp [options] [user@]host:source target
       pscp [options] source [source...] [user@]host:target
       pscp [options] -ls [user@]host:filespec
Options:
  -V        print version information and exit
  -pgpfp    print PGP key fingerprints and exit
  -p        preserve file attributes
  -q        quiet, don't show statistics
  -r        copy directories recursively
  -v        show verbose messages
  -load sessname  Load settings from saved session
  -P port   connect to specified port
  -l user   connect with specified username
  -pw passw login with specified password
  -1 -2     force use of particular SSH protocol version
  -4 -6     force use of IPv4 or IPv6
  -C        enable compression
  -i key    private key file for user authentication
  -noagent  disable use of Pageant
  -agent    enable use of Pageant
  -hostkey aa:bb:cc:...
            manually specify a host key (may be repeated)
  -batch    disable all interactive prompts
  -no-sanitise-stderr  don't strip control chars from standard error
  -proxycmd command
            use 'command' as local proxy
  -unsafe   allow server-side wildcards (DANGEROUS)
  -sftp     force use of SFTP protocol
  -scp      force use of SCP protocol
  -sshlog file
  -sshrawlog file
            log protocol details to a file
If not, add it temporarily and browse to the download folder:
set PATH="C:\Program Files\PuTTY";%PATH%
cd onedrive\downloads
Now we have to copy the files to our instance. The syntax is like this:
pscp -i <private key> <source> <destination>
In our case:
<private key> = C:\Users\Joepje\Documents\jodibooks-ubuntu-server-01.ppk
<source> = 20200219_jodibooks_8285e0759081a6fb1247_20200219151251_archive.zip and installer.php
<destination> = ubuntu@blog.jodibooks.com:downloads/
Note: remember to use quotes " " when there is a space in the path or filename somewhere.
pscp -i C:\Users\Joepje\Documents\jodibooks-ubuntu-server-01.ppk installer.php ubuntu@blog.jodibooks.com:downloads/
pscp -i C:\Users\Joepje\Documents\jodibooks-ubuntu-server-01.ppk 20200219_jodibooks_8285e0759081a6fb1247_20200219151251_archive.zip ubuntu@blog.jodibooks.com:downloads/
We can close the command terminal in Windows and focus on the PuTTY terminal. The files we just uploaded need to be moved to the web folder we configured in Nginx: /var/www/jodibooks.
sudo mv -v ~/downloads/* /var/www/jodibooks/
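The moved files are now owned by the ubuntu user while the Duplicator installer runs as www-data, so if the installer complains about permissions it may help to hand the web folder back to that user (the same command we used earlier):
sudo chown www-data:www-data /var/www/jodibooks -R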
Now we can open the installer by going to blog.jodibooks.com/installer.php in our browser.
I can't continue any further, because I don't want to overwrite my existing installation. On the next page you have to enter your MySQL host, database name, user and password; the host is localhost. After that it is all pretty straightforward. If in doubt check step 6 here.
When done, the plugin will clean up all temp files. Log in to your WordPress install and go to the Duplicator plugin. Delete the package and remove the plugin.
Go to the settings page in WordPress, change the WordPress Address and Site Address to https://<your domain> and Save changes.
We've already installed all the components, so we just need to run the script. It will scan the Nginx config file for the domain to get SSL certificates for. We already entered it, so the script will do most of the configuring automatically. We only need to enter some contact details, which are needed to create the certificate.
The script will also add all the necessary changes and additions to the Nginx config files, nice!
sudo certbot --nginx
The certificate needs to be renewed every 90 days. To do that we create a cronjob.
sudo crontab -e
Add line:
0 0 1 * * certbot renew
Save and close: ctrl+x, y, enter.
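Before relying on the cron job you can do a test run; the --dry-run flag goes through the whole renewal process against the Let's Encrypt staging environment without replacing the real certificate:
sudo certbot renew --dry-run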
Go to your blog through both http and https and check if both work. The former should be redirected to the latter automatically. And you should be able to log in.
With your blog running, we need to install some plugins to make the most out of all the things we configured so far.
First install the Nginx cache and Redis Object Cache plugins. You can use them to clear the cache if needed.
Optionally you can install Autoptimize. This plugin will help you minimize CSS and JS to serve your page even quicker.
Also optional: WP-Optimize. This plugin has a cleaning tool for your database, to keep it as small as possible. And it has an image compression function, making them faster to download.
Lastly we'll install WP Offload Media Lite. This plugin offloads our images to S3, thus not clogging up our EBS instance drive.
To get this plugin working, we need to give it credentials. We have to make a policy in IAM and attach it to the EC2-jodibooks-WordPress role. Open the role in IAM.
Now click Attach policies and then Create policy.
In the newly opened tab click JSON and paste the code below. Change the Resource to your own bucket. https://deliciousbrains.com/wp-offload-media/doc/custom-iam-policy-for-amazon-s3/
{"Version": "2012-10-17","Statement": [{"Sid": "VisualEditor0","Effect": "Allow","Action": ["s3:Put*","s3:Get*","s3:CreateBucket","s3:List*","s3:DeleteObject"],"Resource": ["arn:aws:s3:::jodibooks-public-cdn","arn:aws:s3:::jodibooks-public-cdn/*"]}]}
Enter a name for the policy, S3.Wordpress.Offload.Media.Lite, and a description, Allow offload plugin to use S3 as file storage.
Create the policy.
Go back to the EC2-jodibooks-WordPress role and add the policy.
Open the plugin and select S3 as the storage provider. Select to use IAM Roles.
Click Browse existing buckets and select the bucket jodibooks-public-cdn.
Set the following settings to ON and set the path to blog/wp-content/uploads/.
With that we have a fully working environment. What we still need to do is make sure everything will be backed up and monitored. That will be the topic of the next parts.