Streamlining Blog Deployment: Setting Up a CI/CD Pipeline with AWS for a Jekyll Blog
In this article, I’m diving into how I moved from manually pushing updates to my Jekyll-built blog to a slick, automated CI/CD pipeline. The change has made updates faster and less error-prone, thanks to a handful of AWS services: ECR, CodeBuild, and CodePipeline.
The Old Way: Manual Deployment Process
Deploying updates manually was a real chore and pretty error-prone. Here’s what it used to take:
- Building the Docker Image: For every little update, I had to manually build a new Docker image.
  ```bash
  # Choose a tag using semantic versioning (e.g. v1.0.2)
  docker build -t blog:$TAG .
  ```
- Tagging and Pushing the Image to AWS ECR: Next, I’d tag the image and push it to Amazon Elastic Container Registry (ECR).
  ```bash
  docker tag blog:$TAG 381491975528.dkr.ecr.us-east-1.amazonaws.com/blog:$TAG
  docker push 381491975528.dkr.ecr.us-east-1.amazonaws.com/blog:$TAG
  ```
- Updating the ECS Task Definition: Then, I’d register a new revision of the ECS task definition pointing at the new image.
- Deploying the Updated Task: Finally, I’d manually update the ECS service to run the new task definition revision, and ECS would start a new task and drain the one running the previous version (see the CLI sketch after this list).
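Here’s roughly what those last two steps looked like with the AWS CLI. This is a minimal sketch rather than my exact commands; blog, blog-cluster, and blog-service are placeholder names for the task definition family, cluster, and service:

```bash
# Export the current task definition; the read-only fields in the output
# (taskDefinitionArn, revision, status, ...) must be stripped before re-registering
aws ecs describe-task-definition --task-definition blog \
  --query 'taskDefinition' > task-def.json

# After editing the image URI (and removing the read-only fields) in task-def.json,
# register it as a new revision
aws ecs register-task-definition --cli-input-json file://task-def.json

# Point the service at the latest revision; ECS starts a new task
# and drains the one running the previous version
aws ecs update-service --cluster blog-cluster --service blog-service \
  --task-definition blog
```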
Automating the Process with AWS
Switching to an automated CI/CD pipeline has been a game-changer. Here’s how I set it up:
Build Specification File (buildspec.yml)
I kicked things off by setting up a buildspec.yml file, which tells AWS CodeBuild exactly what to do: log in to ECR, build and tag a new image, push it to ECR, and finally emit an artifact for the deployment stage.
```yaml
version: 0.2

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 381491975528.dkr.ecr.us-east-1.amazonaws.com
  build:
    commands:
      - echo Building the Docker image...
      - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - docker build -t ${REPOSITORY_URI}:$IMAGE_TAG .
      - docker tag ${REPOSITORY_URI}:$IMAGE_TAG ${REPOSITORY_URI}:latest
  post_build:
    commands:
      - echo Pushing the Docker image...
      - docker push ${REPOSITORY_URI}:$IMAGE_TAG
      - docker push ${REPOSITORY_URI}:latest
      - echo Writing image definitions file...
      - printf '[{"name":"blog-container","imageUri":"%s"}]' ${REPOSITORY_URI}:$IMAGE_TAG > imagedefinitions.json

artifacts:
  files:
    - imagedefinitions.json
```
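One detail worth calling out: REPOSITORY_URI isn’t defined in the buildspec itself. It has to be supplied to the build, typically as an environment variable on the CodeBuild project, pointing at the ECR repository:

```bash
# Environment variable set on the CodeBuild project, not in the buildspec
REPOSITORY_URI=381491975528.dkr.ecr.us-east-1.amazonaws.com/blog
```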
Setting Up AWS CodeBuild
I configured AWS CodeBuild with all the permissions it needs to handle everything from building images to storing them in ECR, and even logging the process in CloudWatch.
Here’s what was involved:
- Permissions: The CodeBuild service role was granted permissions to interact with the necessary AWS services (a trimmed-down example policy follows this list):
- ECR Access: To log in, push images, and manage repositories.
- S3 Access: To store build artifacts.
- CloudWatch Logs Access: For storing and viewing build logs.
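As an illustration, a minimal service-role policy covering those three areas could look like this. It’s a sketch rather than my exact policy; in a real setup the wildcard resources should be scoped down to the specific repository, bucket, and log group:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:GetBucketLocation"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
      "Resource": "*"
    }
  ]
}
```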
Integrating with AWS CodePipeline
AWS CodePipeline ties it all together. It watches for changes in my repo, kicks off builds with CodeBuild, and keeps my blog up to date without me lifting a finger.
Finally, I had to set up a deployment stage, which is pretty straightforward in the AWS console: you choose Amazon ECS as the deploy provider and point it at the target cluster and service. A sketch of the resulting stage configuration is below.
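Under the hood, the deploy stage the console generates amounts to something like the following. Again, blog-cluster and blog-service are placeholders, and FileName points at the imagedefinitions.json artifact produced by the build stage:

```json
{
  "name": "Deploy",
  "actions": [
    {
      "name": "DeployToECS",
      "actionTypeId": {
        "category": "Deploy",
        "owner": "AWS",
        "provider": "ECS",
        "version": "1"
      },
      "configuration": {
        "ClusterName": "blog-cluster",
        "ServiceName": "blog-service",
        "FileName": "imagedefinitions.json"
      },
      "inputArtifacts": [{ "name": "BuildArtifact" }]
    }
  ]
}
```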
About using an IaC tool
While setting up the CI/CD pipeline manually through the AWS Console has been effective, it would be more scalable and maintainable to define it with an infrastructure-as-code tool such as Terraform or the AWS Cloud Development Kit (CDK). I plan to revisit this setup to incorporate one of these tools and further streamline my deployment process.
Wrap-Up
I’m now one commit away from deploying a new version of the blog! Moving to a CI/CD pipeline means I spend less time managing deployments and more time writing content. It’s been a great improvement over the manual process, for sure.
Happy deploying!