Building Hexo with AWS

A couple years ago, I set up a blog with Hexo. Like most blog authors, I promptly stopped adding to it. But still it persisted, living in AWS for the paltry sum of $1.23/month.

I’ve decided I’d like to be able to commit to it again - but keeping a machine (or even a VM) alive just to run Hexo periodically is a pain. It’s time to make this blog auto-build on a git commit. Turns out, it’s pretty easy to do that with AWS CodePipeline and CodeBuild.

There are already several blogs showing how to get this going (like this one). If you use AWS CodeCommit to hold your Hexo git repo, this is pretty easy…except I spent far too long sorting out a couple of fine details.

Making it build

I used AWS CodeBuild - just set up a build, point it at the CodeCommit repo, pick a branch (like master), and away it goes…mostly. CodeBuild uses a special file, buildspec.yml, to drive the build. Here’s mine:

```yaml
version: 0.2

phases:
  install:
    commands:
      - npm install -g hexo-cli hexo-deployer-aws-s3
      - npm install

  build:
    commands:
      - git clone https://github.com/HmyBmny/hexo-theme-concise.git themes/concise
      - hexo generate

  post_build:
    commands:
      - hexo deploy
```

The install phase is used to customize CodeBuild’s virtual machine to suit your build. CodeBuild also pulls in your repository - but oddly enough, how it arrives varies. Run CodeBuild manually and you get a full git checkout of the repo; run it via CodePipeline and you get a copy of the repository contents (but not a git checkout). Originally I used a git submodule to hold my preferred theme; that broke when I moved to CodePipeline, since a plain copy isn’t a git checkout and submodules can’t be initialized from it.

To compensate, I do an explicit git clone of my favorite theme in the build step, then run hexo generate to compile the blog. Just like a Makefile, but with more YAML.

Finally, post_build deploys the mess. Normally, you’d want to use this step to save your build artifacts (presumably to S3, or somewhere similar), and follow up with a CodeDeploy stage in your CodePipeline…but that’s kinda overkill for this.
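For hexo deploy to know where to put things, the blog’s _config.yml needs a deploy section for hexo-deployer-aws-s3. Mine looks roughly like this - the region is an illustrative value, and the exact keys the plugin supports are worth double-checking against its README:

```yaml
# _config.yml - deploy section for hexo-deployer-aws-s3 (sketch;
# region is a placeholder, bucket matches the policy shown later)
deploy:
  type: aws-s3
  bucket: fmepnet.org
  region: us-east-1
```

With this in place, hexo deploy in post_build pushes the generated site straight into the bucket.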

A note on S3 access

A CodeBuild project is (like all AWS things) associated with an ARN. When running the buildspec.yml, it acts under a pre-associated IAM role - look in the “Environment” config for a “Service role” (for me, it’s arn:aws:iam::849931445269:role/service-role/codebuild-fmepnet-blog-service-role, or something like that). You’ll need to grant that role access to write into your S3 bucket. I added this statement to my bucket policy to enable appropriate access:

```json
{
  "Sid": "Stmt1595998013517",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::849931445269:role/service-role/codebuild-fmepnet-blog-service-role"
  },
  "Action": [
    "s3:GetObject",
    "s3:PutObject*",
    "s3:List*",
    "s3:DeleteObject"
  ],
  "Resource": "arn:aws:s3:::fmepnet.org/*"
}
```
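For context, that snippet is a single statement; it sits inside the bucket policy’s Statement array, so the full policy document has roughly this shape:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1595998013517",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::849931445269:role/service-role/codebuild-fmepnet-blog-service-role"
      },
      "Action": ["s3:GetObject", "s3:PutObject*", "s3:List*", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::fmepnet.org/*"
    }
  ]
}
```

One caveat: s3:ListBucket applies to the bucket ARN itself (arn:aws:s3:::fmepnet.org, without the /*), so if the deployer needs to list the bucket, that may take a second statement scoped to the bucket ARN.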

And then, CodePipeline

Adding CodePipeline should be pretty easy - just point it at your existing CodeCommit repo (and branch) and the CodeBuild project that compiles it. Then all you need to do is push a commit; within seconds the pipeline will trigger, build your blog, and push it to S3.

Of course, if you use CloudFront as a CDN on top of your S3 blog, be prepared to wait for the cache to time out. Maybe at some point I’ll add cache invalidation for new posts, but that’s a note for another evening.
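If I ever do wire that up, the natural spot is probably one more post_build command. A sketch of what that might look like - the distribution ID here is a placeholder, and the CodeBuild service role would also need cloudfront:CreateInvalidation permission:

```yaml
# buildspec.yml post_build sketch - invalidate the CloudFront cache
# after deploying (EXXXXXXXXXXXX is a placeholder distribution ID)
post_build:
  commands:
    - hexo deploy
    - aws cloudfront create-invalidation --distribution-id EXXXXXXXXXXXX --paths "/*"
```

Invalidating "/*" is blunt but cheap at this blog’s scale; a fancier version would invalidate only the paths that changed.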