---
stage: Release
group: Release
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
type: tutorial
---
# Running Composer and npm scripts with deployment via SCP in GitLab CI/CD **(FREE)**
This guide covers building the dependencies of a PHP project and compiling assets with an npm script, using GitLab CI/CD.

While it is possible to create your own image with custom PHP and Node.js versions, for brevity we use an existing Docker image that has both PHP and Node.js installed:
```yaml
image: tetraweb/php
```
The next step is to install the zip/unzip packages and make Composer available. We place these commands in the `before_script` section:
```yaml
before_script:
  - apt-get update
  - apt-get install -y zip unzip
  - php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
  - php composer-setup.php
  - php -r "unlink('composer-setup.php');"
```
This makes sure we have all requirements ready. Next, run `composer install` to fetch all PHP dependencies and `npm install` to load the Node.js packages, then run the `npm` script. We need to append these commands to the `before_script` section:
```yaml
before_script:
  # ...
  - php composer.phar install
  - npm install
  - npm run deploy
```
In this particular case, the `npm deploy` script is a Gulp script that does the following:
- Compile CSS & JS
- Create sprites
- Copy various assets (images, fonts) around
- Replace some strings
All these operations put all files into a `build` folder, which is ready to be deployed to a live server.
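Because `composer install` and `npm install` run in every job, you can optionally cache the downloaded dependencies between pipeline runs. A minimal sketch, assuming the default `vendor/` and `node_modules/` directories are used (adjust the paths to your project):

```yaml
# Optional: cache Composer and npm dependencies between runs to speed up jobs.
# The cache key and paths below are one reasonable choice, not part of the original setup.
cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - vendor/
    - node_modules/
```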
## How to transfer files to a live server
You have multiple options: rsync, SCP, SFTP, and so on. For now, use SCP.
To make this work, you must add a GitLab CI/CD variable (accessible at `gitlab.example/your-project-name/variables`). Name this variable `STAGING_PRIVATE_KEY` and set it to the private SSH key of your server.
**Security tip:** Create a user that has access only to the folder that needs to be updated.
After you create that variable, make sure that key is added to the Docker container on run:
```yaml
before_script:
  # - ....
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - mkdir -p ~/.ssh
  - eval $(ssh-agent -s)
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
```
In order, this means that:

- We check if `ssh-agent` is available and install it if it's not.
- We create the `~/.ssh` folder.
- We start the `ssh-agent`.
- We disable host key checking, so we're not prompted to accept the host key the first time we connect to a server; since every job is effectively a first connection, we need this. The `[[ -f /.dockerenv ]]` check makes sure this only happens inside the Docker container.
And this is basically all you need in the `before_script` section.
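If you would rather not disable host checking entirely, a stricter alternative is to record the server's host key up front with `ssh-keyscan`. A minimal sketch, assuming your server is reachable as `server_host` on port 22:

```yaml
before_script:
  # ...
  - mkdir -p ~/.ssh
  # Pin the server's public host key so ssh can verify it on every connection
  # instead of skipping the check with StrictHostKeyChecking no.
  - ssh-keyscan -p 22 server_host >> ~/.ssh/known_hosts
```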
## How to deploy
As we stated above, we need to deploy the `build` folder from the Docker image to our server. To do so, we create a new job:
```yaml
stage_deploy:
  artifacts:
    paths:
      - build/
  only:
    - dev
  script:
    - ssh-add <(echo "$STAGING_PRIVATE_KEY")
    - ssh -p22 server_user@server_host "mkdir htdocs/wp-content/themes/_tmp"
    - scp -P22 -r build/* server_user@server_host:htdocs/wp-content/themes/_tmp
    - ssh -p22 server_user@server_host "mv htdocs/wp-content/themes/live htdocs/wp-content/themes/_old && mv htdocs/wp-content/themes/_tmp htdocs/wp-content/themes/live"
    - ssh -p22 server_user@server_host "rm -rf htdocs/wp-content/themes/_old"
```
Here's the breakdown:

- `only: dev` means that this job runs only when something is pushed to the `dev` branch. You can remove this block completely and have everything run on every push (but probably this is something you don't want).
- With `ssh-add ...` we add the private key you stored in the web UI to the Docker container.
- We connect via `ssh` and create a new `_tmp` folder.
- We connect via `scp` and upload the `build` folder (which was generated by the `npm` script) to our previously created `_tmp` folder.
- We connect again via `ssh` and move the `live` folder to an `_old` folder, then move `_tmp` to `live`.
- We connect via `ssh` once more and remove the `_old` folder.
What's the deal with the artifacts? We tell GitLab CI/CD to keep the `build` directory (later on, you can download it as needed).
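If you don't need to keep the uploaded `build` directory around indefinitely, you can also tell GitLab when to delete it with `expire_in`. A minimal sketch; the one-week value is just an illustration:

```yaml
stage_deploy:
  artifacts:
    paths:
      - build/
    # Optional: delete the stored artifacts automatically after a week.
    expire_in: 1 week
```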
## Why we do it this way
If you're using this only for a staging server, you could do this in two steps:
```yaml
- ssh -p22 server_user@server_host "rm -rf htdocs/wp-content/themes/live/*"
- scp -P22 -r build/* server_user@server_host:htdocs/wp-content/themes/live
```
The problem is that there's a small period of time when you don't have the app on your server.
Therefore, for a production environment we use additional steps to ensure that at any given time, a functional app is in place.
## Where to go next
Since this was a WordPress project, I gave real-life code snippets. Some further ideas you can pursue:
- Having a slightly different script for the default branch allows you to deploy to a production server from that branch and to a staging server from any other branch (a minimal sketch follows this list).
- Instead of pushing it live, you can push it to the official WordPress repository.
- You could generate i18n text domains on the fly.
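For example, the first idea could look roughly like the following sketch. It assumes your default branch is `main` and that you define a separate `PRODUCTION_PRIVATE_KEY` variable and a production host; none of these names come from the setup above:

```yaml
# Hypothetical sketch: deploy to staging from the dev branch and to production
# from the default branch. PRODUCTION_PRIVATE_KEY and production_user@production_host
# are placeholders you would define yourself.
stage_deploy:
  only:
    - dev
  script:
    - ssh-add <(echo "$STAGING_PRIVATE_KEY")
    - scp -P22 -r build/* server_user@server_host:htdocs/wp-content/themes/_tmp
    # ... same folder-swap steps as in the job above ...

production_deploy:
  only:
    - main
  script:
    - ssh-add <(echo "$PRODUCTION_PRIVATE_KEY")
    - scp -P22 -r build/* production_user@production_host:htdocs/wp-content/themes/_tmp
    # ... same folder-swap steps as in the job above ...
```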
Our final `.gitlab-ci.yml` looks like this:
```yaml
image: tetraweb/php

before_script:
  - apt-get update
  - apt-get install -y zip unzip
  - php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
  - php composer-setup.php
  - php -r "unlink('composer-setup.php');"
  - php composer.phar install
  - npm install
  - npm run deploy
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - mkdir -p ~/.ssh
  - eval $(ssh-agent -s)
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

stage_deploy:
  artifacts:
    paths:
      - build/
  only:
    - dev
  script:
    - ssh-add <(echo "$STAGING_PRIVATE_KEY")
    - ssh -p22 server_user@server_host "mkdir htdocs/wp-content/themes/_tmp"
    - scp -P22 -r build/* server_user@server_host:htdocs/wp-content/themes/_tmp
    - ssh -p22 server_user@server_host "mv htdocs/wp-content/themes/live htdocs/wp-content/themes/_old && mv htdocs/wp-content/themes/_tmp htdocs/wp-content/themes/live"
    - ssh -p22 server_user@server_host "rm -rf htdocs/wp-content/themes/_old"
```