First post - How this page was built


Welcome to my first post and a simple guide for anyone looking to host a static website with 99.9% availability, for free, and without having to maintain or configure a server. I will show you how to use AWS S3 for file storage, Cloudflare for CDN and SSL, and Gitlab for your build and deployment pipeline.

I used Jekyll to build my static website, but in the end you could use whatever static site generator you want.

Cloudflare has data centers in more than 190 cities around the world, and their free plan is more than enough for most websites. Even on the free tier you get 99.9% uptime, a free SSL certificate, DDoS protection, Brotli compression, automatic JS/CSS minification, hiding of your origin server IP, and proper cache settings.

Requirements

You will need the following basic requirements to get started:

  • Account at AWS
  • Account at Cloudflare, with your domain’s nameservers already migrated to it

I will not cover how to migrate your domain to Cloudflare; please follow their support guide here.

As soon as your domain shows up as active in Cloudflare and your AWS account is set up, we can get started.

1. Create S3 Bucket

Screenshot of AWS S3 overview page

Create a new S3 bucket by choosing a name and region. The bucket name should be the same as the domain of your website; in my case that is www.ccolic.ch.

The region does not matter, because your site will be cached by Cloudflare anyway.

Click “Next” and skip the next page; you don’t need to touch any of those settings. When you get to the “Set permissions” page, make sure you uncheck the option “Block all public access”, like this:

Screenshot of AWS S3 permissions settings

After you have created your bucket, go to the properties page and enable static website hosting. Make sure you select “Use this bucket to host a website” and set the index document to index.html

Screenshot of AWS S3 bucket properties page

Now your S3 bucket is set up and ready for static website hosting. Make sure to remember or copy your endpoint address; you will need it later when we configure Cloudflare.
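If you prefer the command line, the same bucket setup can be sketched with the AWS CLI. The bucket name and region below are placeholders; substitute your own domain and region:

```shell
# Placeholder values -- replace with your own domain and region
BUCKET=www.example.com
REGION=eu-central-1

# Create the bucket (outside us-east-1, a LocationConstraint is required)
aws s3api create-bucket --bucket "$BUCKET" --region "$REGION" \
  --create-bucket-configuration LocationConstraint="$REGION"

# Allow public objects (the CLI equivalent of unchecking "Block all public access")
aws s3api delete-public-access-block --bucket "$BUCKET"

# Enable static website hosting with index.html as the index document
aws s3 website "s3://$BUCKET/" --index-document index.html
```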

2. Create IAM user and policy

The next step is creating a new IAM user and configuring the permissions. The new user needs to be able to push new content into the new bucket.

But before we can create the user, we need to create a new policy first. Switch to Amazon IAM, go to Policies and create a new one.

Go to the “JSON” tab and paste the configuration below. Screenshot of AWS IAM new policy wizard

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": [
                "arn:aws:s3:::www.ccolic.ch",
                "arn:aws:s3:::www-dev.ccolic.ch"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObjectAcl",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::www.ccolic.ch/*",
                "arn:aws:s3:::www-dev.ccolic.ch/*"
            ]
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        }
    ]
}

Important: Make sure to change the bucket names to match your website name!

Now we can create a new user. Just make sure to select “Programmatic access”. Screenshot of AWS IAM new user wizard

In the next step we can attach the policy that we created earlier to this user, giving it access to the bucket. Screenshot of AWS IAM. Attaching a policy to new user

Now click “Next” until the user is created. Your Access key ID and Secret access key will be shown at the end. Again, make sure to copy them; you will need them later.
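The same user setup can also be sketched with the AWS CLI. The policy name, user name, and account ID below are placeholders, and `policy.json` is assumed to contain the JSON policy shown above:

```shell
# Create the policy from the JSON document shown above, saved as policy.json
aws iam create-policy --policy-name static-site-deploy \
  --policy-document file://policy.json

# Create the deployment user
aws iam create-user --user-name static-site-deploy

# Attach the policy (replace 123456789012 with your AWS account ID)
aws iam attach-user-policy --user-name static-site-deploy \
  --policy-arn arn:aws:iam::123456789012:policy/static-site-deploy

# Prints the Access key ID and Secret access key -- copy them now,
# the secret cannot be retrieved again later
aws iam create-access-key --user-name static-site-deploy
```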

With this we can now continue and start to configure Cloudflare.

3. Setup Cloudflare

Login to Cloudflare and go to the DNS management of your domain. Here we will need to create a CNAME record which points to the endpoint address of the S3 bucket that you created before.

Screenshot of Cloudflare DNS management and two CNAME records

I wanted my site to be reachable on www.ccolic.ch as well as ccolic.ch. That’s why I created two records.

Make sure that the CNAME records have the status “proxied”, which means they will be served over the Cloudflare CDN network.
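If you prefer to script this, a record like that can also be created through the Cloudflare API. The zone ID, API token, and S3 website endpoint below are placeholders; note that the endpoint hostname format varies by AWS region (some use `s3-website-REGION`, others `s3-website.REGION`):

```shell
# Placeholder zone ID, token, and S3 website endpoint -- substitute your own
curl -X POST "https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/dns_records" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{
    "type": "CNAME",
    "name": "www",
    "content": "www.example.com.s3-website.eu-central-1.amazonaws.com",
    "proxied": true
  }'
```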

Next, check the SSL settings of your Cloudflare site and make sure they are set to “Flexible”.

Screenshot of Cloudflare Site SSL Settings

This means that only the connection from the client to the CDN network is encrypted with SSL; the “backend” connection from Cloudflare to AWS is not. Using “Full” encryption will not work, because S3 static website endpoints don’t support HTTPS.

While you are here, go to your Profile and create a new API token, which you will need later in your Gitlab CI/CD pipeline:

Screenshot of Cloudflare API tokens overview

The API token will have the permissions to purge the Cloudflare cache for your whole zone.

Screenshot of Cloudflare API token detailed view

You may also want to set up a Page Rule like this, which forwards all HTTP requests to HTTPS. Screenshot of Cloudflare Site Page rules

A bit more advanced, but still really easy with Cloudflare: you can set up DNSSEC for your whole DNS zone. Screenshot of Cloudflare DNSSEC settings

You will need to enter all the details below at your registrar. The exact procedure varies a lot depending on who your registrar is; for me it was pretty straightforward and I pretty much just copied the values as shown.

After you have entered and saved the values, you will need to wait a couple of hours and then check back.
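One way to check whether DNSSEC is live is with `dig` (the domain below is a placeholder; replace it with your own):

```shell
# Check that your registrar has published the DS record
dig DS example.com +short

# Query a validating resolver; a validated answer carries the "ad" flag
dig +dnssec A www.example.com @1.1.1.1
```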

4. Build your static website with Jekyll

I will explain how you can install Jekyll and get it running quickly to generate a simple webpage.

First, you need to install a full Ruby development environment. The official Jekyll installation guide will help you.

Next, you can install the Jekyll and bundler Ruby gems:

gem install jekyll bundler

Create a new Jekyll site at ./myblog

jekyll new myblog

Change into your new directory

cd myblog

Build the site

bundle exec jekyll build

This will build your website into the folder _site, the contents of which we can now upload to the S3 bucket.
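Before uploading anything, you may want to preview the site locally. Jekyll ships a development server that rebuilds on file changes:

```shell
# Build and serve the site locally, rebuilding automatically on changes
bundle exec jekyll serve
# Then open http://127.0.0.1:4000/ in your browser
```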

5. Upload the website with awscli

Install the awscli from aws.amazon.com/cli/

When installed, run aws configure and enter the Access key ID and Secret access key we created in step 2. The other fields are optional and can be skipped.
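If you prefer not to use the interactive prompt, the AWS CLI also reads its credentials from environment variables. The values below are placeholders:

```shell
# Non-interactive alternative to `aws configure` (placeholder values)
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export AWS_DEFAULT_REGION=eu-central-1
```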

Now we can use the following command to sync the content of the _site folder to our new S3 bucket:

aws s3 sync _site/ s3://www.ccolic.ch --acl public-read --delete

Your website should now be reachable and you should see your generated Jekyll website.

I won’t go into further detail on how to set up Jekyll, there are a lot of tutorials and guides on the official Jekyll website and on the internet.

6. Gitlab CI/CD Build Pipeline

Finally, we will put everything that we did until now together in Gitlab. If you host your repository on Gitlab, you can make use of the CI/CD pipeline features to automatically build and deploy your website as soon as you push changes to the repo.

In my setup I have two Git branches in my repository: master and dev. Commits and builds of the dev branch will be deployed to my test website at www-dev.ccolic.ch.

When I merge the changes into the master branch, they will automatically be built, and once I confirm, they will be deployed to the “production” website at www.ccolic.ch.

You can use my .gitlab-ci.yml file as a template:

image: ruby:2.6

variables:
  GIT_STRATEGY: clone
  GIT_SUBMODULE_STRATEGY: recursive
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION

stages:
  - build
  - deploy

cache:
  paths:
  - vendor/

build-prod:
  stage: build
  artifacts:
    paths:
      - public
    expire_in: 1 week
  variables:
    JEKYLL_ENV: production
  before_script:
    - bundle install --path vendor/bundle
  script:
    - bundle exec jekyll build -d public --config "_config.yml"
    - ls -lart public
  only:
    - master

build-dev:
  stage: build
  artifacts:
    paths:
      - public
    expire_in: 1 week
  before_script:
    - bundle install --path vendor/bundle
  script:
    - bundle exec jekyll build -d public --config "_config.yml,_config.dev.yml"
    - ls -lart public
  only:
    - dev
  
deploy-prod:
  stage: deploy
  image: badsectorlabs/aws-compress-and-deploy
  variables:
    DESC: "Prod build, commit: $CI_COMMIT_SHA"
  script:
    - cd public 
    - echo [+] Syncing all files to $CF_DOMAIN_PROD
    - aws s3 sync . s3://$CF_DOMAIN_PROD --region $AWS_DEFAULT_REGION --acl public-read --delete
    - cd ..
    - echo [+] Purging Cloudflare cache
    - ./purge_cache.sh
  environment:
    name: master-prod
  only:
    - master
  when: manual

deploy-dev:
  stage: deploy
  image: badsectorlabs/aws-compress-and-deploy
  variables:
    DESC: "Dev build, commit: $CI_COMMIT_SHA"
  script:
    - cd public 
    - echo [+] Syncing all files to $CF_DOMAIN_DEV
    - aws s3 sync . s3://$CF_DOMAIN_DEV --region $AWS_DEFAULT_REGION --acl public-read --delete
    - cd ..
    - echo [+] Purging Cloudflare cache
    - ./purge_cache.sh
  environment:
    name: dev
  only:
    - dev

To summarize what it does:

  • All important variables are configured in the CI/CD variables, which means they are available as environment variables in the script.
  • The build stages use the image ruby:2.6 and will install all dependencies to the path vendor/bundle, which is part of the CI pipeline cache. This speeds up builds massively.
  • The deploy stages use the image badsectorlabs/aws-compress-and-deploy, which is a minimal image containing the awscli. Since all important settings are in the environment variables, we don’t need to configure anything further.
  • The deploy stages also call a small script which issues a POST request to the Cloudflare API to clear the cache for the appropriate website, making sure you immediately serve the new content.

purge_cache.sh, a small bash script which issues a POST request with curl to purge the Cloudflare cache for the whole zone:

#!/usr/bin/env bash

# Purge the entire Cloudflare cache for this zone, so both the prod
# and dev deploys immediately serve fresh content
curl -X POST "https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/purge_cache" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CF_API_KEY" \
  --data '{"purge_everything":true}'

For all of this to work, we need to enter all important values and credentials into the CI/CD variables:

Screenshot of Gitlab CI/CD settings and variables

These are all the variables that I have configured. Some of the more sensitive information I masked.

| Key | Value | Masked |
| --- | --- | --- |
| AWS_ACCESS_KEY_ID | The AWS Access Key ID from step 2 | yes |
| AWS_DEFAULT_REGION | The AWS region you selected when creating the bucket | no |
| AWS_SECRET_ACCESS_KEY | The AWS Secret Key from step 2 | yes |
| CF_API_EMAIL | Your Cloudflare Username/Email | no |
| CF_API_KEY | Your Cloudflare API key from step 3 | yes |
| CF_DOMAIN_DEV | Your development domain (www-dev.ccolic.ch) | no |
| CF_DOMAIN_PROD | Your production domain (www.ccolic.ch) | no |
| CF_ZONE_ID | Your Cloudflare Zone ID | yes |
