Type: tutorial
# Running Composer and NPM scripts with deployment via SCP in GitLab CI/CD
This guide covers building the dependencies of a PHP project and compiling assets via an NPM script, using GitLab CI/CD.
While it is possible to create your own image with custom PHP and Node.js versions, for brevity we will use an existing Docker image that has both PHP and Node.js installed:

```yaml
image: tetraweb/php
```
The next step is to install the zip/unzip packages and make Composer available. We will place these in the `before_script` section:

```yaml
before_script:
  - apt-get update
  - apt-get install zip unzip
  - php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
  - php composer-setup.php
  - php -r "unlink('composer-setup.php');"
```
This will make sure we have all requirements ready. Next, we want to run `composer install` to fetch all PHP dependencies and `npm install` to load Node.js packages, then run the `npm` script. We need to append them to the `before_script` section:

```yaml
before_script:
  # ...
  - php composer.phar install
  - npm install
  - npm run deploy
```
In this particular case, the `npm deploy` script is a Gulp script that does the following:
- Compile CSS & JS
- Create sprites
- Copy various assets (images, fonts) around
- Replace some strings
All these operations will put all files into a `build` folder, which is ready to be deployed to a live server.
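The `deploy` entry is wired up in the project's `package.json`. A hypothetical minimal version (the Gulp task name `build` and the Gulp version are assumptions; your project's actual scripts will differ):

```json
{
  "scripts": {
    "deploy": "gulp build"
  },
  "devDependencies": {
    "gulp": "^3.9.0"
  }
}
```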
## How to transfer files to a live server
You have multiple options: `rsync`, `scp`, `sftp`, and so on. For now, we will use `scp`.
To make this work, you need to add a GitLab CI/CD variable (accessible at `gitlab.example/your-project-name/variables`). That variable will be called `STAGING_PRIVATE_KEY`, and it's the private SSH key of your server.
**Security tip:** Create a user that has access only to the folder that needs to be updated.
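If you don't have a dedicated key yet, you can generate one locally; a sketch (the file name `deploy_key` is just an example):

```shell
# Generate a passphrase-less RSA key pair intended only for CI use
ssh-keygen -t rsa -b 4096 -N "" -C "gitlab-ci-deploy" -f ./deploy_key

# The private half (deploy_key) is what goes into the STAGING_PRIVATE_KEY
# variable; the public half is appended to the deploy user's
# ~/.ssh/authorized_keys on the server.
cat ./deploy_key.pub
```

Since the key has no passphrase, it should belong to the restricted deploy user mentioned in the security tip above.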
After you create that variable, you need to make sure the key is added to the Docker container on run:
```yaml
before_script:
  # - ....
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - mkdir -p ~/.ssh
  - eval $(ssh-agent -s)
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
```
In order, this means that:

- We check if `ssh-agent` is available and install it if it's not.
- We create the `~/.ssh` folder.
- We start the `ssh-agent`.
- We disable host key checking (we don't want to be asked to accept the host when we first connect to a server, and since every job is effectively a first connect, we need this).
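The `which ssh-agent || ( ... )` line relies on a common shell idiom worth spelling out; a small sketch using `command -v`, the portable equivalent of `which` (the missing command name is made up):

```shell
# `a || b` runs b only when a fails, so the install branch is
# skipped on images that already ship the tool.
command -v sh >/dev/null && echo "present: install step skipped"
command -v some-made-up-tool >/dev/null 2>&1 || echo "missing: install step runs"
```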
And this is basically all you need in the `before_script` section.
## How to deploy
As we stated above, we need to deploy the `build` folder from the Docker image to our server. To do so, we create a new job:
```yaml
stage_deploy:
  artifacts:
    paths:
      - build/
  only:
    - dev
  script:
    - ssh-add <(echo "$STAGING_PRIVATE_KEY")
    - ssh -p22 server_user@server_host "mkdir htdocs/wp-content/themes/_tmp"
    - scp -P22 -r build/* server_user@server_host:htdocs/wp-content/themes/_tmp
    - ssh -p22 server_user@server_host "mv htdocs/wp-content/themes/live htdocs/wp-content/themes/_old && mv htdocs/wp-content/themes/_tmp htdocs/wp-content/themes/live"
    - ssh -p22 server_user@server_host "rm -rf htdocs/wp-content/themes/_old"
```
Here's the breakdown:

- `only: dev` means that this build will run only when something is pushed to the `dev` branch. You can remove this block completely and have everything run on every push (but that is probably not what you want).
- `ssh-add ...` adds the private key you created in the web UI to the Docker container.
- We connect via `ssh` and create a new `_tmp` folder.
- We connect via `scp` and upload the `build` folder (which was generated by the `npm` script) to the previously created `_tmp` folder.
- We connect again via `ssh` and move the `live` folder to `_old`, then move `_tmp` to `live`.
- We connect via `ssh` once more and remove the `_old` folder.
What's the deal with the artifacts? We simply tell GitLab CI/CD to keep the `build` directory (later on, you can download it as needed).
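If you'd rather not keep artifacts around forever, GitLab CI/CD also lets you set an expiry on them via `expire_in`; a sketch (the one-week duration is an arbitrary choice):

```yaml
stage_deploy:
  artifacts:
    paths:
      - build/
    # Delete the stored artifact automatically after a week
    expire_in: 1 week
```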
## Why we do it this way
If you're using this only for a staging server, you could do this in two steps:

```yaml
- ssh -p22 server_user@server_host "rm -rf htdocs/wp-content/themes/live/*"
- scp -P22 -r build/* server_user@server_host:htdocs/wp-content/themes/live
```
The problem is that there will be a small period of time when you won't have the app on your server.
Therefore, for a production environment we use additional steps to ensure that at any given time, a functional app is in place.
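The move-then-swap trick can be reproduced locally to see why it avoids that gap; a sketch where `demo/` stands in for the server's `htdocs/wp-content/themes` tree:

```shell
# Set up a fake "live" theme and a freshly uploaded "_tmp" build
mkdir -p demo/live demo/_tmp
echo "old version" > demo/live/index.php
echo "new version" > demo/_tmp/index.php

# The swap is two quick renames, so there is no window in which
# the live folder is missing or half-uploaded
mv demo/live demo/_old && mv demo/_tmp demo/live

cat demo/live/index.php   # → new version
rm -rf demo/_old
```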
## Where to go next
Since this was a WordPress project, I gave real-life code snippets. Some further ideas you can pursue:
- Having a slightly different script for the `master` branch will allow you to deploy to a production server from that branch and to a staging server from any other branch.
- Instead of pushing it live, you can push it to the official WordPress repository (creating an SVN commit, and so on).
- You could generate i18n text domains on the fly.
Our final `.gitlab-ci.yml` will look like this:
```yaml
image: tetraweb/php

before_script:
  - apt-get update
  - apt-get install zip unzip
  - php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
  - php composer-setup.php
  - php -r "unlink('composer-setup.php');"
  - php composer.phar install
  - npm install
  - npm run deploy
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - mkdir -p ~/.ssh
  - eval $(ssh-agent -s)
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

stage_deploy:
  artifacts:
    paths:
      - build/
  only:
    - dev
  script:
    - ssh-add <(echo "$STAGING_PRIVATE_KEY")
    - ssh -p22 server_user@server_host "mkdir htdocs/wp-content/themes/_tmp"
    - scp -P22 -r build/* server_user@server_host:htdocs/wp-content/themes/_tmp
    - ssh -p22 server_user@server_host "mv htdocs/wp-content/themes/live htdocs/wp-content/themes/_old && mv htdocs/wp-content/themes/_tmp htdocs/wp-content/themes/live"
    - ssh -p22 server_user@server_host "rm -rf htdocs/wp-content/themes/_old"
```