Click the big orange Launch instance button on the dashboard and choose Launch instance, or go to Instances through the menu on the left and click the blue Launch instance button at the top of the page.
Step 1: Type ubuntu 18.04 in the search bar, select AWS Marketplace and select the search result Ubuntu 18.04 LTS - Bionic.
Read the product details and optionally check pricing per instance type and click Continue.
Step 2: Choose the instance type. We've taken a t3.micro for our WordPress blog. More on this choice in Appendix D. Click Next... and resist the inviting blue button.
Step 3: You can basically ignore all settings again, except the subnet and T2/T3 Unlimited. Disable this option. With a t-type instance you get a certain amount of CPU credits per day; when you need more, you pay an additional fee. This option enables the instance to use an unlimited amount of additional credits, so disabling it protects you from unexpected charges.
Step 4: The default volume is fine. Just disable Delete on Termination and enable Encryption with the (default) aws/ebs key. Again resist the blue button and click Next... instead.
Update Feb 27: I had to increase the size, as the logging and WordPress folders were consuming more than I expected. I increased it to 16 GB, which can be done without restarting Ubuntu or the instance. Cool.
Increasing the volume size: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
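After enlarging the volume in the console, the guide above boils down to growing the partition and then the filesystem from inside the instance. A minimal sketch, assuming the root volume shows up as /dev/nvme0n1 with partition 1 (an assumption; t3 instances expose NVMe devices, but names vary, so check with lsblk first). The run helper only prints each command so you can review it; remove it to execute for real:

```shell
# Dry-run helper: prints each command instead of executing it.
# Remove "run" in front of the commands to apply them for real.
run() { echo "+ $*"; }

run lsblk                          # confirm the larger volume size is visible
run sudo growpart /dev/nvme0n1 1   # grow partition 1 to fill the volume
run sudo resize2fs /dev/nvme0n1p1  # grow the ext4 filesystem to fill the partition
run df -h /                        # verify the extra space is usable
```

Note that growpart and resize2fs are online operations on ext4, which is why no restart is needed.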
Step 5: Add some tags (optional). Resist the blue button again when you're done. I added tags for the following: MySQL, NGINX, PHP, PostFix, Redis, WordPress.
Step 6: Add or configure your security group. As we already made one earlier, we can select the Remote server management security group. And finally we can click the blue Review button.
Step 7: review all the details and click Launch. You can ignore the "Your instance configuration is not eligible for the free usage tier" remark.
To recap the choices we made: AMI Ubuntu 18.04 LTS - Bionic, security group Remote server management, T2/T3 Unlimited disabled, Delete on Termination: No, and tags for the name, apps, databases and tools.
After pressing Launch you have to select a key pair. We already made one for our Linux instance, so let's select that one, acknowledge that you have the private key and press Launch Instance.
An IAM role allows the EC2 to communicate with other AWS services on your behalf. Our Linux instance will send log data to CloudWatch, static files for our blog (mostly images) to an S3 bucket and backups of our database to another S3 bucket.
Select the Ubuntu instance, click Actions, Instance Settings and Attach/Replace IAM Role.
Select the IAM role we made for WordPress and click Apply.
The Ubuntu instance is up and running, so we can connect to it through SSH. This will all be command line, so no GUI like on the Windows instance.
Once you're connected, start by updating the OS:
sudo apt update && sudo apt upgrade -y
I'm not a firewall guru. To be honest, I know very little about firewalls, especially how to configure them in Linux. I used the sources below to set up mine.
We start by installing the firewall. In Ubuntu this is UFW, the Uncomplicated Firewall.
sudo apt update
sudo apt install ufw
Now the first thing we do is configure the default settings. We want to allow all outgoing traffic and deny all incoming traffic.
sudo ufw default allow outgoing
sudo ufw default deny incoming
All incoming ports are closed now, but we want to be able to SSH (PuTTY) into the instance. To do that we need to configure that in the firewall. We do that by allowing port 22 to accept traffic. Remember that we only allow SSH from our IP through the security group rules.
sudo ufw allow ssh
We also want our WordPress page to be accessible. To do that we have to allow traffic on port 80 and 443. There are multiple ways to do that, but we'll take the easy route.
sudo ufw allow 'Nginx Full'
Lastly we enable the firewall:
sudo ufw enable
You will be warned that enabling the firewall may disrupt existing SSH connections; just type y and hit Enter.
Check with the following command:
sudo ufw status verbose
This one has bothered me for weeks. After I set up the instance and installed all the software, I had a blazingly fast blog. But every night the instance would freeze and become unreachable. I tweaked the memory settings of MySQL, PHP and Redis, but nothing worked. Eventually I figured out that the instance had no swap space reserved. After configuring that, it runs beautifully.
Now, preferably you would use only the RAM of your instance, because that is the fastest option, but as we only have 1 GB, this is not enough. The second option would be to create swap space on the local SSD, ephemeral storage in AWS terms. A t3 instance, however, doesn't have that, so we are forced to forgo best practices and use an EBS volume.
AWS best practices regarding swap space: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-memory-swap-file/
AWS recommends creating swap space that is "2x the amount of RAM but never less than 32 MB". You can create a new volume specifically for swap, but I've added it to the main volume.
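As a sanity check on the numbers used below: with the 1 GB of RAM on a t3.micro, the 2x guideline works out to 2 GB, which dd can write as 64 blocks of 32 MB. A quick sketch of the arithmetic:

```shell
ram_mib=1024                    # a t3.micro has 1 GiB of RAM
swap_mib=$(( ram_mib * 2 ))     # "2x the amount of RAM" -> 2048 MiB
bs_mib=32                       # dd block size of 32 MiB
count=$(( swap_mib / bs_mib ))  # number of blocks dd has to write
echo "swap=${swap_mib} MiB in ${count} blocks of ${bs_mib} MiB"
```

These two numbers become the bs= and count= arguments of the dd command below.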
We will create a 2 GB swap space (64 blocks of 32 MB). In the command line type:
sudo dd if=/dev/zero of=/swapfile bs=32M count=64
Update the read and write permissions for the swap file:
sudo chmod 600 /swapfile
Set up a Linux swap area:
sudo mkswap /swapfile
Make the swap file available for immediate use by adding the swap file to swap space:
sudo swapon /swapfile
Verify that the procedure was successful:
sudo swapon -s
Enable the swap file at boot time by editing the /etc/fstab file. Open the file in the editor:
sudo nano /etc/fstab
Add the following new line at the end of the file (use the arrow keys to navigate and paste the line with the right mouse button), then press ctrl+x to exit, type y to save, and hit Enter to confirm:
/swapfile swap swap defaults 0 0
When starting a new SSH session, Ubuntu will display the amount of used memory, but you can also see it by using top or htop, which can be opened by simply typing htop in the command line.
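For a one-shot view that includes the swap file we just added, free works as well; the raw numbers come straight from the kernel via /proc/meminfo:

```shell
free -h                                      # RAM and swap usage, human-readable
grep -E 'MemTotal|SwapTotal' /proc/meminfo   # the raw totals from the kernel
```

If SwapTotal shows roughly 2 GiB, the swap file is active.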
With that we have our two instances up and running. Before we continue, we're going to make a backup and discuss the link between volumes, snapshots and AMIs.
A volume is the actual data (virtual drive) connected to the instance. In the Volumes screen you can see a list of your volumes and their state. We first have to give them a name, so we know to which server they belong.
A snapshot is a recording of the volume at a certain point in time. You can use them as backups to go back to that point in time when something goes wrong. You can make as many snapshots as you want, but as with volumes you pay for storage.
At the very least you should create a snapshot before and after each reconfiguration of your OS or applications, for example when you release a new application, install new software or update the OS. I have to admit that I often forget to make a snapshot before the last two. In the end I only keep a snapshot of the latest known working configuration.
You can start creating a snapshot in two ways:
Select the volume you want to take a snapshot of. If an instance has multiple volumes, you can also select the instance and AWS will make a snapshot of all attached volumes. Enter a description, for example the date or the specific configuration (application version, OS update, etc.), and add tags;
Click Create Snapshot and you'll see a screen which shows the snapshot ID. Click Close.
The snapshot is being created: Status is pending. Depending on the amount of changes since the last snapshot and the size of the volume this can take some time.
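The same snapshot can also be taken from the command line, which is handy once you want to script or schedule backups. A sketch using the AWS CLI; the volume ID is a placeholder, and the run helper only prints the commands so nothing is executed until you remove it:

```shell
# Dry-run helper: prints each command instead of executing it.
# Remove "run" in front of the commands to apply them for real.
run() { echo "+ $*"; }

# vol-0123456789abcdef0 is a placeholder; substitute your own volume ID.
run aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "Before OS update"
run aws ec2 describe-snapshots --owner-ids self   # check progress: pending -> completed
```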
To restore your instance from a snapshot, we need to make an image or AMI.
In the Snapshots screen select the snapshot, click Actions followed by Create Image.
Add a name and a description. Leave everything else as is and click Create.
When we now go to the AMI screen, we can see our image. The AMI doesn't need any storage as it links to the snapshot.
The image can be used to Launch an instance. This can be a replacement instance for an unhealthy one or a second instance in an auto-scaling group. Simply press Launch and you will see the familiar EC2 launch steps as discussed earlier.
Should you ever want to change the instance type (you want a smaller and cheaper one, you need a bigger one, or a completely different type), you could use an image to set up the new instance.
This seems like a lot of steps, and it is. If your database and user data are not stored on the instance, however, this is the best way to do it. It minimizes or even eliminates downtime for your users or visitors.
If however you have a database or user data on your instance (not recommended, but I'm guilty too), you have to prevent your users from entering new data while you move the data(base).
Anyway, if you're OK with a few minutes of downtime, you don't have to go through the 5 steps mentioned above, but can do the following:
Select the instance in the Instances screen and stop it. Click Actions, Instance State and Stop.
When the "Instance state" is stopped click Actions, Instance Settings and Change Instance Type.
Select the new instance type, start the instance again, and that's it.
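The stop, resize and start sequence can also be done with the AWS CLI. A sketch with a placeholder instance ID; as before, the run helper only prints the commands so you can review them before executing:

```shell
# Dry-run helper: prints each command instead of executing it.
# Remove "run" in front of the commands to apply them for real.
run() { echo "+ $*"; }

# i-0123456789abcdef0 is a placeholder; substitute your own instance ID.
run aws ec2 stop-instances --instance-ids i-0123456789abcdef0
run aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
run aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
    --instance-type '{"Value": "t3.small"}'
run aws ec2 start-instances --instance-ids i-0123456789abcdef0
```

The wait command blocks until the instance is fully stopped, since the type can only be changed on a stopped instance.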
It would be safe to say that a lot could be improved, but as I don't know what I don't know, for now we're done configuring our EC2 instances. There are however two things I do know that I want to use in the future.
The first thing will be auto scaling and load balancing to be prepared for peaks in user demand. More on that in Appendix E. The second thing is to use the lifecycle manager to automatically make snapshots and remove old ones.
In the next part we're going to set up our domain names and DNS settings. For that we will use the AWS Route 53 service.