[Breaking] Add HA support; switch to `Deployment` (#437)

# Changes

A big shoutout to @luhahn for all his work in #205 which served as the base for this PR.

## Documentation

- [x] After thinking about it for some time, I still prefer the distinct option (as started in #350), i.e. having a standalone "HA" doc under `docs/ha-setup.md`, to keep the README from growing even longer (it is already quite long).
      Most of the information below should go into it, with more details and explanations for the individual components.

## Chart deps

~~- Adds `meilisearch` as a chart dependency for a HA-ready issue indexer. Only works with >= Gitea 1.20~~
~~- Adds `redis` as a chart dependency for a HA-ready session and queue store.~~
- Adds `redis-cluster` as a chart dependency for a HA-ready session and queue store (alternative to `redis`). Only works with Gitea >= 1.19.2.
- Removes `memcached` in favor of `redis-cluster`
- Adds `postgresql-ha` as the default DB dependency, replacing `postgresql` (see the defaults sketch below)
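
In terms of defaults, the dependency toggles now look like this (values per the updated README parameter tables):

```yaml
postgresql-ha:
  enabled: true   # new default database backend
postgresql:
  enabled: false  # single-instance PostgreSQL is now opt-in
redis-cluster:
  enabled: true   # replaces memcached for cache, session and queue
```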

## Adds smart HA chart logic

The goal is to set smart config values that result in a HA-ready Gitea deployment if `replicaCount` > 1.

- If `replicaCount` > 1,
  - `gitea.config.session.PROVIDER` is automatically set to `redis-cluster`
  - `gitea.config.indexer.REPO_INDEXER_ENABLED` is automatically set to `false` unless `REPO_INDEXER_TYPE` is set to `elasticsearch` or `meilisearch`
  - `redis-cluster` is used for `[queue]`, `[cache]` and `[session]` (see the sketch below)
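
Roughly speaking, with `replicaCount` > 1 the chart then behaves as if the following had been set explicitly (a sketch; the actual `redis-cluster` connection string is assembled by the chart from the release name and namespace):

```yaml
gitea:
  config:
    session:
      PROVIDER: redis-cluster
    indexer:
      REPO_INDEXER_ENABLED: false
    cache:
      ENABLED: true
      ADAPTER: redis
      HOST: <redis-cluster connection string> # built via the chart's redis.dns helper
    queue:
      TYPE: redis
      CONN_STR: <redis-cluster connection string>
```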

Configuration of external instances of `meilisearch` and `minio` is documented in a new markdown doc (`docs/ha-setup.md`).

## Deployment vs Statefulset

Given all the discussions about this lately (#428), I think we could use both.
In the end, we do not have the requirement for a sequential pod scale-up/scale-down as it would happen with statefulsets.
On the other hand, we do not have actually stateless pods, as we are attaching a RWX volume to the deployment.
Yet because we do not have a leader-election requirement, spawning the pods as a deployment makes "Rolling Updates" easier and also signals users that there is no "leader election" logic and that each pod can be "destroyed" at any time without causing interruption.

Hence I think we should be able to switch from a statefulset to a deployment, even in the single-replica case.
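
With a `Deployment`, the rollout behavior becomes a regular update strategy. The chart defaults (as added to `values.yaml` in this PR) surge a full set of new pods before taking old ones down:

```yaml
strategy:
  type: "RollingUpdate"
  rollingUpdate:
    maxSurge: "100%"
    maxUnavailable: 0
```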

This change also brought up a templating/linting issue: the definition of `.Values.gitea.config.server.SSH_LISTEN_PORT` in `ssh-svc.yaml` just "luckily" worked so far due to naming-related lint processing. Due to the change from "statefulset" to "deployment", the processing order changed and caused a failure complaining about `config.server.SSH_LISTEN_PORT` not being defined yet.
The only way I could see to fix this was to "properly" define the value in `values.yaml` instead of conditionally defining it in `helpers.tpl`. Maybe there's a better way?
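
For reference, a sketch of the now statically defined defaults in `values.yaml` (values per the updated parameter table; `SSH_PORT` applies to the rootful image, `SSH_LISTEN_PORT` to the rootless one):

```yaml
gitea:
  config:
    server:
      SSH_PORT: 22          # rootful image
      SSH_LISTEN_PORT: 2222 # rootless image
```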

## Chart PVC Creation

I've adapted the automated PVC creation from another chart to be able to provide the `storageClassName`, as I couldn't get dynamic provisioning for EFS working with the current implementation.
In addition, the naming and approach within the Gitea chart for PV creation is a bit unusual, and aligning it might be beneficial.

This is a semi-unrelated change which will result in a breaking change for existing users, but this PR includes a lot of breaking changes already, so including another one might not make it much worse...

- New `persistence.mount`: whether to mount an existing PVC (via `persistence.claimName`)
- New `persistence.create`: whether to create a new PVC (see the example below)
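
For example, to keep using a pre-existing claim without letting the chart create one (the claim name is a placeholder):

```yaml
persistence:
  enabled: true
  create: false                  # do not create a new PVC
  mount: true                    # mount the existing claim
  claimName: MyAwesomeGiteaClaim # placeholder, as used in the README examples
```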

## Testing

As this PR does a lot of things, we need proper testing.
The helm chart can be installed from the Git branch via `helm-git` as follows:

```bash
helm repo add gitea-charts git+https://gitea.com/gitea/helm-chart@/?ref=deployment
helm install gitea gitea-charts/gitea --version 0.0.0
```
It is **highly recommended** to test the chart in a dedicated namespace.

I've tested this myself with both `redis` and `redis-cluster` and it seemed to work fine.
I only did some basic operations though, and we should do more in-depth testing before merging.

Exemplary `values.yaml` for testing (only needs a valid RWX storage class):

<details>

<summary>values.yaml</summary>

```yml
image:
  tag: "dev"
  pullPolicy: "Always"
  rootless: true

replicaCount: 2

persistence:
  enabled: true
  accessModes:
    - ReadWriteMany
  storageClass: FIXME

redis-cluster:
  enabled: false
  global:
    redis:
      password: gitea

gitea:
  config:
    indexer:
      ISSUE_INDEXER_ENABLED: true
      REPO_INDEXER_ENABLED: false
```
</details>

## Preferred setup

The preferred HA setup with respect to performance and stability might currently be as follows:

- Repos: RWX (e.g. EFS or Azure Files NFS)
- Issue indexer: Meilisearch (HA)
- Session and cache: Redis Cluster (HA)
- Attachments/Avatars: Minio (HA)

This will result in a ~ 10-pod HA setup overall.
All pods have very low resource requests.
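
A condensed `values.yaml` sketch for such a setup, assuming externally deployed Meilisearch and Minio instances (all endpoints are placeholders; see `docs/ha-setup.md` for complete examples):

```yaml
replicaCount: 2

persistence:
  enabled: true
  accessModes:
    - ReadWriteMany # RWX, e.g. EFS or Azure Files NFS

redis-cluster:
  enabled: true # HA-ready session, cache and queue

postgresql-ha:
  enabled: true # HA-ready database

gitea:
  config:
    indexer:
      ISSUE_INDEXER_TYPE: meilisearch
      ISSUE_INDEXER_CONN_STR: http://<meilisearch-host>:7700
      REPO_INDEXER_ENABLED: false
    attachment:
      STORAGE_TYPE: minio
    lfs:
      STORAGE_TYPE: minio
    picture:
      AVATAR_STORAGE_TYPE: minio
    storage:
      MINIO_ENDPOINT: <minio-host>:9000
```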

fix #98

Co-authored-by: pat-s <pat-s@noreply.gitea.io>
Reviewed-on: https://gitea.com/gitea/helm-chart/pulls/437
Co-authored-by: pat-s <patrick.schratz@gmail.com>
Co-committed-by: pat-s <patrick.schratz@gmail.com>


@ -1,9 +1,12 @@
dependencies:
- name: memcached
repository: oci://registry-1.docker.io/bitnamicharts
version: 6.3.14
- name: postgresql
repository: oci://registry-1.docker.io/bitnamicharts
version: 12.4.1
digest: sha256:02d4846bf416038a42658dbca8f8001d0e3ce967b00e990048f8d420065c33fd
generated: "2023-04-28T09:32:05.295167+02:00"
- name: postgresql-ha
repository: oci://registry-1.docker.io/bitnamicharts
version: 11.6.1
- name: redis-cluster
repository: oci://registry-1.docker.io/bitnamicharts
version: 8.4.4
digest: sha256:3b203051c9fb8df9e771a4d67c276190a1c63aae9bf980ef3676e2a51b2f56c7
generated: "2023-05-13T21:47:51.823348+02:00"


@ -34,13 +34,17 @@ maintainers:
# Bitnami charts are served from GitHub CDN - See https://github.com/bitnami/charts/issues/10539 for details
dependencies:
# OCI registry: https://blog.bitnami.com/2023/01/bitnami-helm-charts-available-as-oci.html (2023-01)
# Chart release date: 2023-04
- name: memcached
repository: oci://registry-1.docker.io/bitnamicharts
version: 6.3.14
condition: memcached.enabled
# Chart release date: 2023-04
- name: postgresql
repository: oci://registry-1.docker.io/bitnamicharts
version: 12.4.1
condition: postgresql.enabled
# Chart release date: 2023-05
- name: postgresql-ha
repository: oci://registry-1.docker.io/bitnamicharts
version: 11.6.1
condition: postgresql-ha.enabled
# Chart release date: 2023-04
- name: redis-cluster
repository: oci://registry-1.docker.io/bitnamicharts
version: 8.4.4
condition: redis-cluster.enabled

README.md

@ -4,7 +4,7 @@
- [Update and versioning policy](#update-and-versioning-policy)
- [Dependencies](#dependencies)
- [Installing](#installing)
- [Prerequisites](#prerequisites)
- [High Availability](#high-availability)
- [Configuration](#configuration)
- [Default Configuration](#default-configuration)
- [Additional _app.ini_ settings](#additional-appini-settings)
@ -24,11 +24,12 @@
- [Themes](#themes)
- [Parameters](#parameters)
- [Global](#global)
- [strategy](#strategy)
- [Image](#image)
- [Security](#security)
- [Service](#service)
- [Ingress](#ingress)
- [StatefulSet](#statefulset)
- [deployment](#deployment)
- [ServiceAccount](#serviceaccount)
- [Persistence](#persistence-1)
- [Init](#init)
@ -37,7 +38,8 @@
- [LivenessProbe](#livenessprobe)
- [ReadinessProbe](#readinessprobe)
- [StartupProbe](#startupprobe)
- [Memcached](#memcached)
- [redis-cluster](#redis-cluster)
- [PostgreSQL-ha](#postgresql-ha)
- [PostgreSQL](#postgresql)
- [Advanced](#advanced)
- [Contributing](#contributing)
@ -49,8 +51,8 @@ It is published under the MIT license.
## Introduction
This helm chart has taken some inspiration from [jfelten's helm chart](https://github.com/jfelten/gitea-helm-chart).
But takes a completely different approach in providing a database and cache with dependencies.
Additionally, this chart provides LDAP and admin user configuration with values, as well as being deployed as a statefulset to retain stored repositories.
Yet it takes a completely different approach in providing a database and cache with dependencies.
Additionally, this chart allows to provide LDAP and admin user configuration with values.
## Update and versioning policy
@ -75,8 +77,8 @@ This chart provides those dependencies, which can be enabled, or disabled via co
Dependencies:
- PostgreSQL ([configuration](#postgresql))
- Memcached ([configuration](#memcached))
- PostgreSQL HA ([configuration](#postgresql))
- Redis Cluster ([configuration](#cache))
## Installing
@ -88,11 +90,13 @@ helm install gitea gitea-charts/gitea
When upgrading, please refer to the [Upgrading](#upgrading) section at the bottom of this document for major and breaking changes.
## Prerequisites
## High Availability
- Kubernetes 1.12+
- Helm 3.0+
- PV provisioner for persistent data support
Since version 9.0.0 this chart has experimental support for running Gitea and its dependencies in a HA setup.
The setup is still experimental and care must be taken for production use as Gitea core is not yet officially HA-ready.
Deploying a HA-ready Gitea instance requires some effort including using HA-ready dependencies.
See the [HA Setup](docs/ha-setup.md) document for more details.
## Configuration
@ -116,12 +120,12 @@ All defaults can be overwritten in `gitea.config`.
INSTALL_LOCK is always set to true, since we want to configure Gitea with this helm chart and everything is taken care of.
_All default settings are made directly in the generated app.ini, not in the Values._
_All default settings are made directly in the generated `app.ini`, not in the Values._
#### Database defaults
If a builtIn database is enabled the database configuration is set automatically.
For example, PostgreSQL builtIn will appear in the app.ini as:
For example, PostgreSQL builtIn will appear in the `app.ini` as:
```ini
[database]
@ -132,18 +136,6 @@ PASSWD = gitea
USER = gitea
```
#### Memcached defaults
Memcached is handled the exact same way as database builtIn.
Once Memcached builtIn is enabled, this chart will generate the following part in the `app.ini`:
```ini
[cache]
ADAPTER = memcache
ENABLED = true
HOST = RELEASE-NAME-memcached.default.svc.cluster.local:11211
```
#### Server defaults
The server defaults are a bit more complex.
@ -192,8 +184,7 @@ gitea:
name: gitea-app-ini-plaintext
```
This would mount the two additional volumes (`oauth` and `some-additionals`)
from different sources to the init containerwhere the _app.ini_ gets updated.
This would mount the two additional volumes (`oauth` and `some-additionals`) from different sources to the init container where the _app.ini_ gets updated.
All files mounted that way will be read and converted to environment variables and then added to the _app.ini_ using [environment-to-ini](https://github.com/go-gitea/gitea/tree/main/contrib/environment-to-ini).
The key of such additional source represents the section inside the _app.ini_.
@ -237,6 +228,9 @@ We also support to directly interact with the generated _app.ini_.
To inject self defined variables into the _app.ini_ a certain format needs to be honored.
This is described in detail on the [env-to-ini](https://github.com/go-gitea/gitea/tree/main/contrib/environment-to-ini) page.
Prior to Gitea 1.20 and Chart 9.0.0 the helm chart had a custom prefix `ENV_TO_INI`.
After support for a custom prefix was removed in Gitea core, the prefix was changed to `GITEA`.
For example a database setting needs to have the following format:
```yaml
@ -259,7 +253,7 @@ Priority (highest to lowest) for defining app.ini variables:
### External Database
Any external Database listed in [https://docs.gitea.io/en-us/database-prep/](https://docs.gitea.io/en-us/database-prep/) can be used instead of the built-in PostgreSQL.
Any external database listed in [https://docs.gitea.io/en-us/database-prep/](https://docs.gitea.io/en-us/database-prep/) can be used instead of the built-in PostgreSQL.
In fact, it is **highly recommended** to use an external database to ensure a stable Gitea installation longterm.
If an external database is used, no matter which type, make sure to set `postgresql.enabled` to `false` to disable the use of the built-in PostgreSQL.
@ -345,34 +339,23 @@ More about this issue [here](https://gitea.com/gitea/helm-chart/issues/161).
### Cache
This helm chart can use a built in cache.
The default is Memcached from bitnami.
The cache handling is done via `redis-cluster` (via the `bitnami` chart) by default.
This deployment is HA-ready but can also be used for single-pod deployments.
By default, 6 replicas are deployed for a working `redis-cluster` deployment.
Many cloud providers offer a managed redis service, which can be used instead of the built-in `redis-cluster`.
```yaml
memcached:
redis-cluster:
enabled: true
```
If the built in cache should not be used simply configure the cache in `gitea.config`.
```yaml
gitea:
config:
cache:
ENABLED: true
ADAPTER: memory
INTERVAL: 60
HOST: 127.0.0.1:9090
```
### Persistence
Gitea will be deployed as a statefulset.
Gitea will be deployed as a deployment.
By simply enabling the persistence and setting the storage class according to your cluster everything else will be taken care of.
The following example will create a PVC as a part of the statefulset.
This PVC will not be deleted even if you uninstall the chart.
The following example will create a PVC as a part of the deployment.
Please note, that an empty storageClass in the persistence will result in kubernetes using your default storage class.
Please note, that an empty `storageClass` in the persistence will result in kubernetes using your default storage class.
If you want to use your own storage class define it as follows:
@ -382,14 +365,12 @@ persistence:
storageClass: myOwnStorageClass
```
When using PostgreSQL as dependency, this will also be deployed as a statefulset by default.
If you want to manage your own PVC you can simply pass the PVC name to the chart.
```yaml
persistence:
enabled: true
existingClaim: MyAwesomeGiteaClaim
claimName: MyAwesomeGiteaClaim
```
In case that persistence has been disabled it will simply use an empty dir volume.
@ -401,13 +382,13 @@ You can interact with the postgres settings as displayed in the following exampl
postgresql:
persistence:
enabled: true
existingClaim: MyAwesomeGiteaPostgresClaim
claimName: MyAwesomeGiteaPostgresClaim
```
### Admin User
This chart enables you to create a default admin user.
It is also possible to update the password for this user by upgrading or redeloying the chart.
It is also possible to update the password for this user by upgrading or redeploying the chart.
It is not possible to delete an admin user after it has been created.
This has to be done in the ui.
You cannot use `admin` as username.
@ -651,14 +632,22 @@ kubectl create secret generic gitea-themes --from-file={{FULL-PATH-TO-CSS}} --na
### Global
| Name | Description | Value |
| ------------------------- | ------------------------------------------------------------------------- | --------------- |
| `global.imageRegistry` | global image registry override | `""` |
| `global.imagePullSecrets` | global image pull secrets override; can be extended by `imagePullSecrets` | `[]` |
| `global.storageClass` | global storage class override | `""` |
| `global.hostAliases` | global hostAliases which will be added to the pod's hosts files | `[]` |
| `replicaCount` | number of replicas for the statefulset | `1` |
| `clusterDomain` | cluster domain | `cluster.local` |
| Name | Description | Value |
| ------------------------- | ------------------------------------------------------------------------- | ----- |
| `global.imageRegistry` | global image registry override | `""` |
| `global.imagePullSecrets` | global image pull secrets override; can be extended by `imagePullSecrets` | `[]` |
| `global.storageClass` | global storage class override | `""` |
| `global.hostAliases` | global hostAliases which will be added to the pod's hosts files | `[]` |
| `replicaCount` | number of replicas for the deployment | `1` |
### strategy
| Name | Description | Value |
| --------------------------------------- | -------------- | --------------- |
| `strategy.type` | strategy type | `RollingUpdate` |
| `strategy.rollingUpdate.maxSurge` | maxSurge | `100%` |
| `strategy.rollingUpdate.maxUnavailable` | maxUnavailable | `0` |
| `clusterDomain` | cluster domain | `cluster.local` |
### Image
@ -678,6 +667,7 @@ kubectl create secret generic gitea-themes --from-file={{FULL-PATH-TO-CSS}} --na
| `podSecurityContext.fsGroup` | Set the shared file system group for all containers in the pod. | `1000` |
| `containerSecurityContext` | Security context | `{}` |
| `securityContext` | Run init and Gitea containers as a specific securityContext | `{}` |
| `podDisruptionBudget` | Pod disruption budget | `{}` |
### Service
@ -685,7 +675,7 @@ kubectl create secret generic gitea-themes --from-file={{FULL-PATH-TO-CSS}} --na
| --------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| `service.http.type` | Kubernetes service type for web traffic | `ClusterIP` |
| `service.http.port` | Port number for web traffic | `3000` |
| `service.http.clusterIP` | ClusterIP setting for http autosetup for statefulset is None | `None` |
| `service.http.clusterIP` | ClusterIP setting for http autosetup for deployment is None | `None` |
| `service.http.loadBalancerIP` | LoadBalancer IP setting | `nil` |
| `service.http.nodePort` | NodePort for http service | `nil` |
| `service.http.externalTrafficPolicy` | If `service.http.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable source IP preservation | `nil` |
@ -696,7 +686,7 @@ kubectl create secret generic gitea-themes --from-file={{FULL-PATH-TO-CSS}} --na
| `service.http.annotations` | HTTP service annotations | `{}` |
| `service.ssh.type` | Kubernetes service type for ssh traffic | `ClusterIP` |
| `service.ssh.port` | Port number for ssh traffic | `22` |
| `service.ssh.clusterIP` | ClusterIP setting for ssh autosetup for statefulset is None | `None` |
| `service.ssh.clusterIP` | ClusterIP setting for ssh autosetup for deployment is None | `None` |
| `service.ssh.loadBalancerIP` | LoadBalancer IP setting | `nil` |
| `service.ssh.nodePort` | NodePort for ssh service | `nil` |
| `service.ssh.externalTrafficPolicy` | If `service.ssh.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable source IP preservation | `nil` |
@ -720,21 +710,22 @@ kubectl create secret generic gitea-themes --from-file={{FULL-PATH-TO-CSS}} --na
| `ingress.tls` | Ingress tls settings | `[]` |
| `ingress.apiVersion` | Specify APIVersion of ingress object. Mostly would only be used for argocd. | |
### StatefulSet
### deployment
| Name | Description | Value |
| ------------------------------------------- | ------------------------------------------------------ | ----- |
| `resources` | Kubernetes resources | `{}` |
| `schedulerName` | Use an alternate scheduler, e.g. "stork" | `""` |
| `nodeSelector` | NodeSelector for the statefulset | `{}` |
| `tolerations` | Tolerations for the statefulset | `[]` |
| `affinity` | Affinity for the statefulset | `{}` |
| `dnsConfig` | dnsConfig for the statefulset | `{}` |
| `priorityClassName` | priorityClassName for the statefulset | `""` |
| `statefulset.env` | Additional environment variables to pass to containers | `[]` |
| `statefulset.terminationGracePeriodSeconds` | How long to wait until forcefully kill the pod | `60` |
| `statefulset.labels` | Labels for the statefulset | `{}` |
| `statefulset.annotations` | Annotations for the Gitea StatefulSet to be created | `{}` |
| Name | Description | Value |
| ------------------------------------------ | ------------------------------------------------------ | ----- |
| `resources` | Kubernetes resources | `{}` |
| `schedulerName` | Use an alternate scheduler, e.g. "stork" | `""` |
| `nodeSelector` | NodeSelector for the deployment | `{}` |
| `tolerations` | Tolerations for the deployment | `[]` |
| `affinity` | Affinity for the deployment | `{}` |
| `topologySpreadConstraints` | TopologySpreadConstraints for the deployment | `[]` |
| `dnsConfig` | dnsConfig for the deployment | `{}` |
| `priorityClassName` | priorityClassName for the deployment | `""` |
| `deployment.env` | Additional environment variables to pass to containers | `[]` |
| `deployment.terminationGracePeriodSeconds` | How long to wait until forcefully kill the pod | `60` |
| `deployment.labels` | Labels for the deployment | `{}` |
| `deployment.annotations` | Annotations for the Gitea deployment to be created | `{}` |
### ServiceAccount
@ -749,20 +740,22 @@ kubectl create secret generic gitea-themes --from-file={{FULL-PATH-TO-CSS}} --na
### Persistence
| Name | Description | Value |
| ---------------------------- | ----------------------------------------------------------------------------------------------------- | ------------------- |
| `persistence.enabled` | Enable persistent storage | `true` |
| `persistence.existingClaim` | Use an existing claim to store repository information | `nil` |
| `persistence.size` | Size for persistence to store repo information | `10Gi` |
| `persistence.accessModes` | AccessMode for persistence | `["ReadWriteOnce"]` |
| `persistence.labels` | Labels for the persistence volume claim to be created | `{}` |
| `persistence.annotations` | Annotations for the persistence volume claim to be created | `{}` |
| `persistence.storageClass` | Name of the storage class to use | `nil` |
| `persistence.subPath` | Subdirectory of the volume to mount at | `nil` |
| `extraVolumes` | Additional volumes to mount to the Gitea statefulset | `[]` |
| `extraContainerVolumeMounts` | Mounts that are only mapped into the Gitea runtime/main container, to e.g. override custom templates. | `[]` |
| `extraInitVolumeMounts` | Mounts that are only mapped into the init-containers. Can be used for additional preconfiguration. | `[]` |
| `extraVolumeMounts` | **DEPRECATED** Additional volume mounts for init containers and the Gitea main container | `[]` |
| Name | Description | Value |
| ---------------------------- | ----------------------------------------------------------------------------------------------------- | ---------------------- |
| `persistence.enabled` | Enable persistent storage | `true` |
| `persistence.create` | Whether to create the persistentVolumeClaim for shared storage | `true` |
| `persistence.mount` | Whether the persistentVolumeClaim should be mounted (even if not created) | `true` |
| `persistence.claimName` | Use an existing claim to store repository information | `gitea-shared-storage` |
| `persistence.size` | Size for persistence to store repo information | `10Gi` |
| `persistence.accessModes` | AccessMode for persistence | `["ReadWriteOnce"]` |
| `persistence.labels` | Labels for the persistence volume claim to be created | `{}` |
| `persistence.annotations` | Annotations for the persistence volume claim to be created | `{}` |
| `persistence.storageClass` | Name of the storage class to use | `nil` |
| `persistence.subPath` | Subdirectory of the volume to mount at | `nil` |
| `extraVolumes` | Additional volumes to mount to the Gitea deployment | `[]` |
| `extraContainerVolumeMounts` | Mounts that are only mapped into the Gitea runtime/main container, to e.g. override custom templates. | `[]` |
| `extraInitVolumeMounts` | Mounts that are only mapped into the init-containers. Can be used for additional preconfiguration. | `[]` |
| `extraVolumeMounts` | **DEPRECATED** Additional volume mounts for init containers and the Gitea main container | `[]` |
### Init
@ -784,21 +777,22 @@ kubectl create secret generic gitea-themes --from-file={{FULL-PATH-TO-CSS}} --na
### Gitea
| Name | Description | Value |
| -------------------------------------- | ------------------------------------------------------------------------------------------------------------- | -------------------- |
| `gitea.admin.username` | Username for the Gitea admin user | `gitea_admin` |
| `gitea.admin.existingSecret` | Use an existing secret to store admin user credentials | `nil` |
| `gitea.admin.password` | Password for the Gitea admin user | `r8sA8CPHD9!bt6d` |
| `gitea.admin.email` | Email for the Gitea admin user | `gitea@local.domain` |
| `gitea.metrics.enabled` | Enable Gitea metrics | `false` |
| `gitea.metrics.serviceMonitor.enabled` | Enable Gitea metrics service monitor | `false` |
| `gitea.ldap` | LDAP configuration | `[]` |
| `gitea.oauth` | OAuth configuration | `[]` |
| `gitea.config` | Configuration for the Gitea server,ref: [config-cheat-sheet](https://docs.gitea.io/en-us/config-cheat-sheet/) | `{}` |
| `gitea.additionalConfigSources` | Additional configuration from secret or configmap | `[]` |
| `gitea.additionalConfigFromEnvs` | Additional configuration sources from environment variables | `[]` |
| `gitea.podAnnotations` | Annotations for the Gitea pod | `{}` |
| `gitea.ssh.logLevel` | Configure OpenSSH's log level. Only available for root-based Gitea image. | `INFO` |
| Name | Description | Value |
| -------------------------------------- | ------------------------------------------------------------------------- | -------------------- |
| `gitea.admin.username` | Username for the Gitea admin user | `gitea_admin` |
| `gitea.admin.existingSecret` | Use an existing secret to store admin user credentials | `nil` |
| `gitea.admin.password` | Password for the Gitea admin user | `r8sA8CPHD9!bt6d` |
| `gitea.admin.email` | Email for the Gitea admin user | `gitea@local.domain` |
| `gitea.metrics.enabled` | Enable Gitea metrics | `false` |
| `gitea.metrics.serviceMonitor.enabled` | Enable Gitea metrics service monitor | `false` |
| `gitea.ldap` | LDAP configuration | `[]` |
| `gitea.oauth` | OAuth configuration | `[]` |
| `gitea.config.server.SSH_PORT` | SSH port for rootful Gitea image | `22` |
| `gitea.config.server.SSH_LISTEN_PORT` | SSH port for rootless Gitea image | `2222` |
| `gitea.additionalConfigSources` | Additional configuration from secret or configmap | `[]` |
| `gitea.additionalConfigFromEnvs` | Additional configuration sources from environment variables | `[]` |
| `gitea.podAnnotations` | Annotations for the Gitea pod | `{}` |
| `gitea.ssh.logLevel` | Configure OpenSSH's log level. Only available for root-based Gitea image. | `INFO` |
### LivenessProbe
@ -836,18 +830,29 @@ kubectl create secret generic gitea-themes --from-file={{FULL-PATH-TO-CSS}} --na
| `gitea.startupProbe.successThreshold` | Success threshold for startup probe | `1` |
| `gitea.startupProbe.failureThreshold` | Failure threshold for startup probe | `10` |
### Memcached
### redis-cluster
| Name | Description | Value |
| ----------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `memcached.enabled` | Memcached is loaded as a dependency from [Bitnami](https://github.com/bitnami/charts/tree/master/bitnami/memcached) if enabled in the values. Complete Configuration can be taken from their website. | `true` |
| `memcached.service.ports.memcached` | Port for Memcached | `11211` |
| Name | Description | Value |
| ------------------------------------- | ---------------------------------------------------- | ------- |
| `redis-cluster.enabled` | Enable redis | `true` |
| `redis-cluster.global.redis.password` | Password for the "Gitea" user (overrides `password`) | `gitea` |
### PostgreSQL-ha
| Name | Description | Value |
| ---------------------------------------------------------------- | -------------------------------------------------------------------- | ------- |
| `postgresql-ha.enabled` | Enable PostgreSQL-ha | `true` |
| `postgresql-ha.global.postgresql-ha.auth.password` | Password for the `gitea` user (overrides `auth.password`) | `gitea` |
| `postgresql-ha.global.postgresql-ha.auth.database` | Name for a custom database to create (overrides `auth.database`) | `gitea` |
| `postgresql-ha.global.postgresql-ha.auth.username` | Name for a custom user to create (overrides `auth.username`) | `gitea` |
| `postgresql-ha.global.postgresql-ha.service.ports.postgresql-ha` | PostgreSQL-ha service port (overrides `service.ports.postgresql-ha`) | `5432` |
| `postgresql-ha.primary.persistence.size` | PVC Storage Request for PostgreSQL-ha volume | `10Gi` |
### PostgreSQL
| Name | Description | Value |
| ------------------------------------------------------- | ---------------------------------------------------------------- | ------- |
| `postgresql.enabled` | Enable PostgreSQL | `true` |
| `postgresql.enabled` | Enable PostgreSQL | `false` |
| `postgresql.global.postgresql.auth.password` | Password for the `gitea` user (overrides `auth.password`) | `gitea` |
| `postgresql.global.postgresql.auth.database` | Name for a custom database to create (overrides `auth.database`) | `gitea` |
| `postgresql.global.postgresql.auth.username` | Name for a custom user to create (overrides `auth.username`) | `gitea` |
@ -873,7 +878,72 @@ See [CONTRIBUTORS GUIDE](CONTRIBUTING.md) for details.
## Upgrading
This section lists major and breaking changes of each Helm Chart version.
Please read them carefully to upgrade successfully.
Please read them carefully to upgrade successfully, especially the change of the **default database backend**!
If you miss this, blindly upgrading may delete your Postgres instance and you may lose your data!
<details>
<summary>To 9.0.0</summary>
This chart release comes with many breaking changes while aiming for a HA-ready setup.
Please go through all of them carefully to perform a successful upgrade.
Here's a brief summary again, followed by more detailed migration instructions:
- Switch from `Statefulset` to `Deployment`
- Switch from `Memcached` to `redis-cluster` as the default session and queue provider
- Switch from `postgresql` to `postgresql-ha` as the default database provider
- A chart-internal PVC bootstrapping logic
- New `persistence.mount`: whether to mount an existing PVC (even if not creating it)
- New `persistence.create`: whether to create a new PVC
- Renamed `persistence.existingClaim` to `persistence.claimName`
While not required, we recommend starting with a RWX PV for new installations.
A RWX volume is required for installations aiming for HA.
If you want to stay with a pre-existing RWO PV, you need to set
- `persistence.mount=true`
- `persistence.create=false`
- `persistence.claimName` to the name of your existing PVC.
If you do not, Gitea will create a new PVC which will in turn create a new PV.
If this happened to you by accident, you can still recover your data by applying the settings from above in a subsequent run.
If you want to stay with a `memcache` instead of `redis-cluster`, you need to deploy `memcache` manually (e.g. from [bitnami](https://github.com/bitnami/charts/tree/main/bitnami/memcached)) and set
- `cache.HOST = "<memcache connection string>"`
- `cache.ADAPTER = "memcache"`
- `session.PROVIDER = "memcache"`
- `session.PROVIDER_CONFIG = "<memcache connection string>"`
- `queue.TYPE = "memcache"`
- `queue.CONN_STR = "<memcache connection string>"`
The `memcache` connection string has the scheme `memcache://<memcache service name>:<memcache service port>`, e.g. `gitea-memcached.gitea.svc.cluster.local:11211`.
The first item here (`<memcache service name>`) will be different compared to the example if you deploy `memcache` yourself.
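
In chart values, these settings map to `gitea.config` roughly as follows (a sketch using the example service name from above; adjust it to your own `memcache` deployment):

```yaml
gitea:
  config:
    cache:
      ADAPTER: memcache
      HOST: "gitea-memcached.gitea.svc.cluster.local:11211"
    session:
      PROVIDER: memcache
      PROVIDER_CONFIG: "gitea-memcached.gitea.svc.cluster.local:11211"
    queue:
      TYPE: memcache
      CONN_STR: "gitea-memcached.gitea.svc.cluster.local:11211"
```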
The above changes are motivated by the idea of tidying the dependencies while also having HA-ready ones at the same time.
The previous `memcache` default was not HA-ready, hence we decided to switch to `redis-cluster` by default.
<!-- markdownlint-disable-next-line -->
**Transitioning from a RWO to RWX Persistent Volume**
If you want to switch to a RWX volume and go for HA, you need to
1. Backup the data stored under `/data`
2. Let the chart create a new RWX PV (or do it statically yourself)
3. Restore the backup to the same location in the new PV
<!-- markdownlint-disable-next-line -->
**Transitioning from Postgres to Postgres HA**
If you are running with a non-HA PG DB from a previous chart release, you need to set
- `postgresql-ha.enabled=false`
- `postgresql.enabled=true`
This is needed to stay with your existing single-instance DB (as the HA-variant is the new default).
</details>
<details>

docs/ha-setup.md (new file)

@ -0,0 +1,175 @@
# High Availability
**Experimental**
All components (in-memory DB, volume/asset storage, code indexer) used by Gitea must be deployed in a HA-ready fashion to achieve a full HA-ready Gitea deployment.
The following document explains how to achieve this for all individual components.
The resulting Gitea deployment will consist of ~ 10 pods (depending on the chosen components and their replicas).
One should evaluate upfront whether a HA-deployment is required as switching between HA/non-HA comes with some effort.
For production instances, HA is always recommended to increase uptime and have a frictionless update process.
A general comment about chart dependencies and external services:
Instead of relying on chart dependencies, it is often better to rely on external (managed) instances (in-memory database, asset storage provider, database, etc.).
Many cloud providers offer such services, at least for databases or in-memory databases.
They might cost a bit more than using a self-hosted k8s variant but are usually easier to maintain and scale, if needed.
Also they can be centrally managed and are not linked to the Gitea helm chart or namespace.
Please consider using external services before you start with your Gitea HA setup; it will make your life (and the life of the Gitea maintainers) easier.
This helm chart tries to help as much as possible to simplify and assert the provisioning of a HA-ready Gitea instance by implementing smart conditionals if `replicaCount` is set to a value > 1.
Nevertheless, we cannot guarantee that every possible combination of Gitea settings works together perfectly in a HA setup.
As general advice, we recommend keeping a test environment on which to try out possible changes/upgrades before applying them to a production installation.
## Requirements for HA
Storage-wise, the HA-Gitea setup requires a RWX file-system which can be shared among the deployment-based replica pods.
In addition, the following components are required for full HA-readiness:
- A HA-ready issue (and optionally code) indexer: `elasticsearch` or `meilisearch`
- A HA-ready external object/asset storage (`minio`) (optional, assets can also be stored on the RWX file-system)
- A HA-ready cache (`redis-cluster`)
- A HA-ready DB
`postgresql.enabled`, which defaults to `true`, must be set to `false` for a HA setup.
The default `postgresql` chart dependency is not HA-ready (there's a dedicated `postgresql-ha` chart).
The following sections discuss each of the components in more detail.
Note that for each component discussed, the shown configuration only provides a (working) starting point, not necessarily the most optimal setup.
We will try to optimize this document over time as we gain more experience with HA setups from users.
## Indexers (Issues and code/repo)
The default code indexer `bleve` does not allow multiple connections and hence cannot be used in a HA setup.
Alternatives are `elasticsearch` and `meilisearch` (as of Gitea >= 1.19.2).
Unless you have an existing `elasticsearch` cluster, we recommend using `meilisearch` as it is faster and requires far fewer resources.
Unfortunately, `meilisearch` only supports the `ISSUE_INDEXER` and not the `REPO_INDEXER` yet ([tracking issue](https://github.com/go-gitea/gitea/pull/24149)).
This means that the `REPO_INDEXER` must still be disabled for a HA setup right now.
An alternative to the two options above for the `ISSUE_INDEXER` is `"db"`, however we recommend just going with `meilisearch` in this case and not bothering the DB with indexing.
To configure `meilisearch` within Gitea, do the following:
```yml
gitea:
config:
indexer:
ISSUE_INDEXER_CONN_STR: <http://meilisearch.<namespace>.svc.cluster.local:7700>
ISSUE_INDEXER_ENABLED: true
ISSUE_INDEXER_TYPE: meilisearch
REPO_INDEXER_ENABLED: false
# REPO_INDEXER_TYPE: meilisearch # not yet working
```
Unfortunately, `meilisearch` cannot be deployed in HA as of now.
Nevertheless, it allows for multiple concurrent Gitea requests, which is what is required in a HA setup.
Exemplary configuration for the [meilisearch-kubernetes](https://github.com/meilisearch/meilisearch-kubernetes/tree/main/charts/meilisearch) chart:
```yaml
persistence:
enabled: true
accessMode: ReadWriteOnce
size: 5Gi
```
## Cache, session and queue
A `redis` instance is required for the in-memory cache.
Two options exist:
- `redis`
- `redis-cluster`
The chart provides `redis-cluster` as a dependency as this one can be used for both HA and non-HA setups.
You're also welcome to go with `redis` if you prefer or already have a running instance.
It should be noted that `redis-cluster` support is only available starting with Gitea 1.19.2.
You can also configure an external (managed) `redis` instance to be used.
To do so, you need to set the following configuration values yourself (see the sketch after this list):
- `gitea.config.queue.TYPE`: `redis`
- `gitea.config.queue.CONN_STR`: `<your redis connection string>`
- `gitea.config.session.PROVIDER`: `redis`
- `gitea.config.session.PROVIDER_CONFIG`: `<your redis connection string>`
- `gitea.config.cache.ENABLED`: `true`
- `gitea.config.cache.ADAPTER`: `redis`
- `gitea.config.cache.HOST`: `<your redis connection string>`
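
Put together in values form, this could look like the following sketch (connection strings are placeholders for your managed `redis` instance):

```yaml
gitea:
  config:
    queue:
      TYPE: redis
      CONN_STR: <your redis connection string>
    session:
      PROVIDER: redis
      PROVIDER_CONFIG: <your redis connection string>
    cache:
      ENABLED: true
      ADAPTER: redis
      HOST: <your redis connection string>
```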
## Object and asset storage
Object/asset storage refers to the storage of attachments, avatars, LFS files, etc.
While most of these can be stored on the RWX file-system, it is recommended to use an external S3-compatible object storage for them, mainly for performance reasons.
By default the chart provisions a single RWO volume to store everything (repos, avatars, packages, etc.).
This volume cannot be mounted by multiple pods.
Hence, a RWX volume is required and (optionally) an external HA-ready object storage.
> **Note:** Double-check that the file permissions are set correctly on the RWX volume! That is, everything should be owned by the `git` user, which usually has `uid=1000` and `gid=1000`.
To use `minio` you need to deploy and configure an external `minio` instance yourself and explicitly define the `STORAGE_TYPE` values as shown below.
Note that `MINIO_BUCKET` here is just a name and does not refer to a S3 bucket.
It's the root access point for all objects belonging to the respective application, i.e., to Gitea in this case.
```yaml
gitea:
config:
attachment:
STORAGE_TYPE: minio
lfs:
STORAGE_TYPE: minio
picture:
AVATAR_STORAGE_TYPE: minio
"storage.packages":
STORAGE_TYPE: minio
storage:
MINIO_ENDPOINT: <minio-headless.<namespace>.svc.cluster.local:9000>
MINIO_LOCATION: <location>
MINIO_ACCESS_KEY_ID: <access key>
MINIO_SECRET_ACCESS_KEY: <secret key>
MINIO_BUCKET: <bucket name>
MINIO_USE_SSL: false
```
Exemplary configuration for the [bitnami minio](https://github.com/bitnami/charts/blob/main/bitnami/minio) chart:
```yaml
auth:
rootUser: minio
mode: distributed
replicaCount: 4
persistence:
enabled: true
size: 20Gi
accessModes:
- ReadWriteOnce
```
## Database
If you do not have an HA-ready DB, using a managed database service in the cloud might be the easiest and most robust solution.
Remember: disable the built-in `postgres` dependency and configure the database connection manually via `gitea.config.database`:
```yml
gitea:
database:
builtIn:
postgresql:
enabled: false
config:
database:
DB_TYPE: postgres
HOST: <host>
NAME: <name>
USER: <user>
```
## Known issues
- Currently Cron jobs are run on all replicas as no leader election is implemented.
See [https://github.com/go-gitea/gitea/issues/13791](https://github.com/go-gitea/gitea/issues/13791) for a discussion and possible solution.
- Running with multiple replicas slows down Gitea a bit, i.e. page loading time increases.


@ -2,6 +2,27 @@
{{/*
Expand the name of the chart.
*/}}
{{- /* multiple replicas assertions */ -}}
{{- if gt .Values.replicaCount 1.0 -}}
{{- fail "When using multiple replicas, a RWX file system is required" -}}
{{- if eq (get (.Values.persistence.accessModes 0) "ReadWriteOnce") -}}
{{- fail "When using multiple replicas, a RWX file system is required" -}}
{{- end }}
{{- if eq (get .Values.gitea.config.indexer "ISSUE_INDEXER_TYPE") "bleve" -}}
{{- fail "When using multiple replicas, the repo indexer must be set to 'meilisearch' or 'elasticsearch'" -}}
{{- end }}
{{- if and (eq .Values.gitea.config.indexer.REPO_INDEXER_TYPE "bleve") (eq .Values.gitea.config.indexer.REPO_INDEXER_ENABLED "true") -}}
{{- fail "When using multiple replicas, the repo indexer must be set to 'meilisearch' or 'elasticsearch'" -}}
{{- end }}
{{- if eq .Values.gitea.config.indexer.ISSUE_INDEXER_TYPE "bleve" -}}
{{- (printf "DEBUG: When using multiple replicas, the repo indexer must be set to 'meilisearch' or 'elasticsearch'") | fail -}}
{{- end }}
{{- end }}
{{- define "gitea.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
@ -95,8 +116,22 @@ app.kubernetes.io/instance: {{ .Release.Name }}
{{- printf "%s-postgresql.%s.svc.%s:%g" .Release.Name .Release.Namespace .Values.clusterDomain .Values.postgresql.global.postgresql.service.ports.postgresql -}}
{{- end -}}
{{- define "memcached.dns" -}}
{{- printf "%s-memcached.%s.svc.%s:%g" .Release.Name .Release.Namespace .Values.clusterDomain .Values.memcached.service.ports.memcached | trunc 63 | trimSuffix "-" -}}
{{- define "redis.dns" -}}
{{- if (index .Values "redis-cluster").enabled -}}
{{- printf "redis+cluster://:%s@%s-redis-cluster-headless.%s.svc.%s:%g/0?pool_size=100&idle_timeout=180s&" (index .Values "redis-cluster").global.redis.password .Release.Name .Release.Namespace .Values.clusterDomain (index .Values "redis-cluster").service.ports.redis -}}
{{- end -}}
{{- end -}}
{{- define "redis.port" -}}
{{- if (index .Values "redis-cluster").enabled -}}
{{ (index .Values "redis-cluster").service.ports.redis }}
{{- end -}}
{{- end -}}
{{- define "redis.servicename" -}}
{{- if (index .Values "redis-cluster").enabled -}}
{{- printf "%s-redis-cluster-headless.%s.svc.%s" .Release.Name .Release.Namespace .Values.clusterDomain -}}
{{- end -}}
{{- end -}}
{{- define "gitea.default_domain" -}}
@ -182,6 +217,7 @@ https
{{- else -}}
{{- (printf "Key %s cannot be on top level of configuration" $key) | fail -}}
{{- end -}}
{{- end }}
{{- end }}
@ -211,6 +247,18 @@ https
{{- if not (hasKey .Values.gitea.config "oauth2") -}}
{{- $_ := set .Values.gitea.config "oauth2" dict -}}
{{- end -}}
{{- if not (hasKey .Values.gitea.config "session") -}}
{{- $_ := set .Values.gitea.config "session" dict -}}
{{- end -}}
{{- if not (hasKey .Values.gitea.config "queue") -}}
{{- $_ := set .Values.gitea.config "queue" dict -}}
{{- end -}}
{{- if not (hasKey .Values.gitea.config "queue.issue_indexer") -}}
{{- $_ := set .Values.gitea.config "queue.issue_indexer" dict -}}
{{- end -}}
{{- if not (hasKey .Values.gitea.config "indexer") -}}
{{- $_ := set .Values.gitea.config "indexer" dict -}}
{{- end -}}
{{- end -}}
{{- define "gitea.inline_configuration.defaults" -}}
@ -226,13 +274,30 @@ https
{{- if not (hasKey .Values.gitea.config.metrics "ENABLED") -}}
{{- $_ := set .Values.gitea.config.metrics "ENABLED" .Values.gitea.metrics.enabled -}}
{{- end -}}
{{- if .Values.memcached.enabled -}}
{{- if (index .Values "redis-cluster").enabled -}}
{{- $_ := set .Values.gitea.config.cache "ENABLED" "true" -}}
{{- $_ := set .Values.gitea.config.cache "ADAPTER" "memcache" -}}
{{- $_ := set .Values.gitea.config.cache "ADAPTER" "redis" -}}
{{- if not (.Values.gitea.config.cache.HOST) -}}
{{- $_ := set .Values.gitea.config.cache "HOST" (include "memcached.dns" .) -}}
{{- $_ := set .Values.gitea.config.cache "HOST" (include "redis.dns" .) -}}
{{- end -}}
{{- end -}}
{{- /* redis queue */ -}}
{{- if (index .Values "redis-cluster").enabled -}}
{{- $_ := set .Values.gitea.config.queue "TYPE" "redis" -}}
{{- $_ := set .Values.gitea.config.queue "CONN_STR" (include "redis.dns" .) -}}
{{- end -}}
{{- /* multiple replicas */ -}}
{{- if gt .Values.replicaCount 1.0 -}}
{{- if not (get .Values.gitea.config.session "PROVIDER") -}}
{{- $_ := set .Values.gitea.config.session "PROVIDER" "redis" -}}
{{- end -}}
{{- if not (get .Values.gitea.config.session "PROVIDER_CONFIG") -}}
{{- $_ := set .Values.gitea.config.session "PROVIDER_CONFIG" (include "redis.dns" .) -}}
{{- end -}}
{{- if not .Values.gitea.config.indexer.ISSUE_INDEXER_TYPE -}}
{{- $_ := set .Values.gitea.config.indexer "ISSUE_INDEXER_TYPE" "db" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- define "gitea.inline_configuration.defaults.server" -}}


@ -16,6 +16,32 @@ metadata:
{{- include "gitea.labels" . | nindent 4 }}
type: Opaque
stringData:
assertions: |
{{- /* multiple replicas assertions */ -}}
{{- if gt .Values.replicaCount 1.0 -}}
{{- if .Values.gitea.config.cron.GIT_GC_REPOS -}}
{{- if .Values.gitea.config.cron.GIT_GC_REPOS.enabled -}}
{{- fail "Invoking the garbage collector via CRON is not yet supported when running with multiple replicas. Please set 'GIT_GC_REPOS.enabled = false'." -}}
{{- end }}
{{- end }}
{{- if eq (first .Values.persistence.accessModes) "ReadWriteOnce" -}}
{{- fail "When using multiple replicas, a RWX file system is required and gitea.persistence.accessModes[0] must be set to ReadWriteMany." -}}
{{- end }}
{{- if eq (get .Values.gitea.config.indexer "ISSUE_INDEXER_TYPE") "bleve" -}}
{{- fail "When using multiple replicas, the issue indexer (gitea.config.indexer.ISSUE_INDEXER_TYPE) must be set to a HA-ready provider such as 'meilisearch', 'elasticsearch' or 'db' (if the DB is HA-ready)." -}}
{{- end }}
{{- if .Values.gitea.config.indexer.REPO_INDEXER_TYPE -}}
{{- if eq (get .Values.gitea.config.indexer "REPO_INDEXER_TYPE") "bleve" -}}
{{- if .Values.gitea.config.indexer.REPO_INDEXER_ENABLED -}}
{{- if eq (get .Values.gitea.config.indexer "REPO_INDEXER_ENABLED") "true" -}}
{{- fail "When using multiple replicas, the repo indexer (gitea.config.indexer.REPO_INDEXER_TYPE) must be set to 'meilisearch' or 'elasticsearch' or disabled." -}}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
config_environment.sh: |-
#!/usr/bin/env bash
set -euo pipefail


@ -1,20 +1,27 @@
apiVersion: apps/v1
kind: StatefulSet
kind: Deployment
metadata:
name: {{ include "gitea.fullname" . }}
annotations:
{{- if .Values.statefulset.annotations }}
{{- toYaml .Values.statefulset.annotations | nindent 4 }}
{{- if .Values.deployment.annotations }}
{{- toYaml .Values.deployment.annotations | nindent 4 }}
{{- end }}
labels:
{{- include "gitea.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
strategy:
type: {{ .Values.strategy.type }}
{{- if eq .Values.strategy.type "RollingUpdate" }}
rollingUpdate:
maxUnavailable: {{ .Values.strategy.rollingUpdate.maxUnavailable }}
maxSurge: {{ .Values.strategy.rollingUpdate.maxSurge }}
{{- end }}
selector:
matchLabels:
{{- include "gitea.selectorLabels" . | nindent 6 }}
{{- if .Values.statefulset.labels }}
{{- toYaml .Values.statefulset.labels | nindent 6 }}
{{- if .Values.deployment.labels }}
{{- toYaml .Values.deployment.labels | nindent 6 }}
{{- end }}
serviceName: {{ include "gitea.fullname" . }}
template:
@ -32,8 +39,8 @@ spec:
{{- end }}
labels:
{{- include "gitea.labels" . | nindent 8 }}
{{- if .Values.statefulset.labels }}
{{- toYaml .Values.statefulset.labels | nindent 8 }}
{{- if .Values.deployment.labels }}
{{- toYaml .Values.deployment.labels | nindent 8 }}
{{- end }}
spec:
{{- if .Values.schedulerName }}
@ -62,8 +69,8 @@ spec:
value: /data
- name: GITEA_TEMP
value: /tmp/gitea
{{- if .Values.statefulset.env }}
{{- toYaml .Values.statefulset.env | nindent 12 }}
{{- if .Values.deployment.env }}
{{- toYaml .Values.deployment.env | nindent 12 }}
{{- end }}
{{- if .Values.signing.enabled }}
- name: GNUPGHOME
@ -97,8 +104,8 @@ spec:
value: /data
- name: GITEA_TEMP
value: /tmp/gitea
{{- if .Values.statefulset.env }}
{{- toYaml .Values.statefulset.env | nindent 12 }}
{{- if .Values.deployment.env }}
{{- toYaml .Values.deployment.env | nindent 12 }}
{{- end }}
{{- if .Values.gitea.additionalConfigFromEnvs }}
{{- toYaml .Values.gitea.additionalConfigFromEnvs | nindent 12 }}
@ -234,8 +241,8 @@ spec:
- name: GITEA_ADMIN_PASSWORD
value: {{ .Values.gitea.admin.password | quote }}
{{- end }}
{{- if .Values.statefulset.env }}
{{- toYaml .Values.statefulset.env | nindent 12 }}
{{- if .Values.deployment.env }}
{{- toYaml .Values.deployment.env | nindent 12 }}
{{- end }}
volumeMounts:
- name: init
@ -250,7 +257,7 @@ spec:
{{- include "gitea.init-additional-mounts" . | nindent 12 }}
resources:
{{- toYaml .Values.initContainers.resources | nindent 12 }}
terminationGracePeriodSeconds: {{ .Values.statefulset.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ .Values.deployment.terminationGracePeriodSeconds }}
containers:
- name: {{ .Chart.Name }}
image: "{{ include "gitea.image" . }}"
@ -283,8 +290,8 @@ spec:
- name: GNUPGHOME
value: {{ .Values.signing.gpgHome }}
{{- end }}
{{- if .Values.statefulset.env }}
{{- toYaml .Values.statefulset.env | nindent 12 }}
{{- if .Values.deployment.env }}
{{- toYaml .Values.deployment.env | nindent 12 }}
{{- end }}
ports:
- name: ssh
@ -340,6 +347,10 @@ spec:
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.topologySpreadConstraints }}
topologySpreadConstraints:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
@ -378,38 +389,13 @@ spec:
path: private.asc
defaultMode: 0100
{{- end }}
{{- if and .Values.persistence.enabled .Values.persistence.existingClaim }}
{{- if .Values.persistence.enabled }}
{{- if .Values.persistence.mount }}
- name: data
persistentVolumeClaim:
{{- with .Values.persistence.existingClaim }}
claimName: {{ tpl . $ }}
{{- end }}
claimName: {{ .Values.persistence.claimName }}
{{- end }}
{{- else if not .Values.persistence.enabled }}
- name: data
emptyDir: {}
{{- else if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
volumeClaimTemplates:
- metadata:
name: data
{{- with .Values.persistence.annotations }}
annotations:
{{- range $key, $value := . }}
{{ $key }}: {{ $value }}
{{- end }}
{{- end }}
{{- with .Values.persistence.labels }}
labels:
{{- range $key, $value := . }}
{{ $key }}: {{ $value }}
{{- end }}
{{- end }}
spec:
accessModes:
{{- range .Values.persistence.accessModes }}
- {{ . | quote }}
{{- end }}
{{- include "gitea.persistence.storageClass" . | indent 8 }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
{{- end }}


@ -61,6 +61,27 @@ stringData:
echo "Gitea migrate might fail due to database connection...This init-container will try again in a few seconds"
exit 1
}
{{- if include "redis.servicename" . }}
function test_redis_connection() {
local RETRY=0
local MAX=30
echo 'Wait for redis to become available...'
until [ "${RETRY}" -ge "${MAX}" ]; do
nc -vz -w2 {{ include "redis.servicename" . }} {{ include "redis.port" . }} && break
RETRY=$[${RETRY}+1]
echo "...not ready yet (${RETRY}/${MAX})"
done
if [ "${RETRY}" -ge "${MAX}" ]; then
echo "Redis not reachable after '${MAX}' attempts!"
exit 1
fi
}
test_redis_connection
{{- end }}
{{- if or .Values.gitea.admin.existingSecret (and .Values.gitea.admin.username .Values.gitea.admin.password) }}


@ -0,0 +1,17 @@
{{- if .Values.podDisruptionBudget -}}
{{- if .Capabilities.APIVersions.Has "policy/v1" }}
apiVersion: policy/v1
{{- else }}
apiVersion: policy/v1beta1
{{- end }}
kind: PodDisruptionBudget
metadata:
name: {{ include "gitea.fullname" . }}
labels:
{{- include "gitea.labels" . | nindent 4 }}
spec:
selector:
matchLabels:
{{- include "gitea.selectorLabels" . | nindent 6 }}
{{- toYaml .Values.podDisruptionBudget | nindent 2 }}
{{- end -}}

templates/gitea/pvc.yaml (new file)

@ -0,0 +1,24 @@
{{- if and .Values.persistence.enabled .Values.persistence.create }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ .Values.persistence.claimName }}
namespace: {{ $.Release.Namespace }}
annotations:
{{ .Values.persistence.annotations | toYaml | indent 4}}
spec:
accessModes:
{{- if gt .Values.replicaCount 1.0 }}
- ReadWriteMany
{{- else }}
{{- .Values.persistence.accessModes | toYaml | nindent 4 }}
{{- end }}
volumeMode: Filesystem
{{- if .Values.persistence.storageClass }}
storageClassName: {{ .Values.persistence.storageClass }}
{{- end }}
volumeName: ""
resources:
requests:
storage: {{ .Values.persistence.size }}
{{- end }}


@ -39,7 +39,9 @@ spec:
ports:
- name: ssh
port: {{ .Values.service.ssh.port }}
{{- if .Values.gitea.config.server.SSH_LISTEN_PORT }}
targetPort: {{ .Values.gitea.config.server.SSH_LISTEN_PORT }}
{{- end }}
protocol: TCP
{{- if .Values.service.ssh.nodePort }}
nodePort: {{ .Values.service.ssh.nodePort }}


@ -1,17 +1,17 @@
suite: Statefulset template (basic)
suite: deployment template (basic)
release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/statefulset.yaml
- templates/gitea/deployment.yaml
- templates/gitea/config.yaml
tests:
- it: renders a statefulset
template: templates/gitea/statefulset.yaml
- it: renders a deployment
template: templates/gitea/deployment.yaml
asserts:
- hasDocuments:
count: 1
- containsDocument:
kind: StatefulSet
kind: Deployment
apiVersion: apps/v1
name: gitea-unittests


@ -1,13 +1,13 @@
suite: Statefulset template (signing disabled)
suite: deployment template (signing disabled)
release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/statefulset.yaml
- templates/gitea/deployment.yaml
- templates/gitea/config.yaml
tests:
- it: skips gpg init container
template: templates/gitea/statefulset.yaml
template: templates/gitea/deployment.yaml
asserts:
- notContains:
path: spec.template.spec.initContainers
@ -15,7 +15,7 @@ tests:
content:
name: configure-gpg
- it: skips gpg env in `init-directories` init container
template: templates/gitea/statefulset.yaml
template: templates/gitea/deployment.yaml
set:
signing.enabled: false
asserts:
@ -25,14 +25,14 @@ tests:
name: GNUPGHOME
value: /data/git/.gnupg
- it: skips gpg env in runtime container
template: templates/gitea/statefulset.yaml
template: templates/gitea/deployment.yaml
asserts:
- notContains:
path: spec.template.spec.containers[0].env
content:
name: GNUPGHOME
- it: skips gpg volume spec
template: templates/gitea/statefulset.yaml
template: templates/gitea/deployment.yaml
asserts:
- notContains:
path: spec.template.spec.volumes


@ -1,13 +1,13 @@
suite: Statefulset template (signing enabled)
suite: deployment template (signing enabled)
release:
name: gitea-unittests
namespace: testing
templates:
- templates/gitea/statefulset.yaml
- templates/gitea/deployment.yaml
- templates/gitea/config.yaml
tests:
- it: adds gpg init container
template: templates/gitea/statefulset.yaml
template: templates/gitea/deployment.yaml
set:
signing:
enabled: true
@ -39,7 +39,7 @@ tests:
mountPath: /raw
readOnly: true
- it: adds gpg env in `init-directories` init container
template: templates/gitea/statefulset.yaml
template: templates/gitea/deployment.yaml
set:
signing.enabled: true
signing.existingSecret: "custom-gpg-secret"
@ -50,7 +50,7 @@ tests:
name: GNUPGHOME
value: /data/git/.gnupg
- it: adds gpg env in runtime container
template: templates/gitea/statefulset.yaml
template: templates/gitea/deployment.yaml
set:
signing.enabled: true
signing.existingSecret: "custom-gpg-secret"
@ -61,7 +61,7 @@ tests:
name: GNUPGHOME
value: /data/git/.gnupg
- it: adds gpg volume spec
template: templates/gitea/statefulset.yaml
template: templates/gitea/deployment.yaml
set:
signing:
enabled: true
@ -78,7 +78,7 @@ tests:
path: private.asc
defaultMode: 0100
- it: supports gpg volume spec with external reference
template: templates/gitea/statefulset.yaml
template: templates/gitea/deployment.yaml
set:
signing:
enabled: true


```diff
@@ -1,13 +1,13 @@
-suite: Statefulset template (SSH configuration)
+suite: deployment template (SSH configuration)
 release:
   name: gitea-unittests
   namespace: testing
 templates:
-  - templates/gitea/statefulset.yaml
+  - templates/gitea/deployment.yaml
   - templates/gitea/config.yaml
 tests:
   - it: supports defining SSH log level for root based image
-    template: templates/gitea/statefulset.yaml
+    template: templates/gitea/deployment.yaml
     set:
       image.rootless: false
     asserts:
@@ -17,7 +17,7 @@ tests:
            name: SSH_LOG_LEVEL
            value: "INFO"
   - it: supports overriding SSH log level
-    template: templates/gitea/statefulset.yaml
+    template: templates/gitea/deployment.yaml
     set:
       image.rootless: false
       gitea.ssh.logLevel: "DEBUG"
@@ -28,7 +28,7 @@ tests:
            name: SSH_LOG_LEVEL
            value: "DEBUG"
   - it: skips SSH_LOG_LEVEL for rootless image
-    template: templates/gitea/statefulset.yaml
+    template: templates/gitea/deployment.yaml
     set:
       image.rootless: true
       gitea.ssh.logLevel: "DEBUG" # explicitly defining a non-standard level here
```


```diff
@@ -4,24 +4,24 @@ release:
   namespace: testing
 templates:
   - templates/gitea/serviceaccount.yaml
-  - templates/gitea/statefulset.yaml
+  - templates/gitea/deployment.yaml
   - templates/gitea/config.yaml
 tests:
-  - it: does not modify the StatefulSet by default
-    template: templates/gitea/statefulset.yaml
+  - it: does not modify the deployment by default
+    template: templates/gitea/deployment.yaml
     asserts:
       - notExists:
           path: spec.serviceAccountName
-  - it: adds the reference to the StatefulSet with serviceAccount.create=true
-    template: templates/gitea/statefulset.yaml
+  - it: adds the reference to the deployment with serviceAccount.create=true
+    template: templates/gitea/deployment.yaml
     set:
       serviceAccount.create: true
     asserts:
       - equal:
           path: spec.template.spec.serviceAccountName
           value: gitea-unittests
-  - it: allows referencing an externally created ServiceAccount to the StatefulSet
-    template: templates/gitea/statefulset.yaml
+  - it: allows referencing an externally created ServiceAccount to the deployment
+    template: templates/gitea/deployment.yaml
     set:
       serviceAccount:
         create: false # explicitly set to define rendering behavior
```


```diff
@@ -20,9 +20,19 @@ global:
 #   hostnames:
 #   - example.com

-## @param replicaCount number of replicas for the statefulset
+## @param replicaCount number of replicas for the deployment
 replicaCount: 1

+## @section strategy
+## @param strategy.type strategy type
+## @param strategy.rollingUpdate.maxSurge maxSurge
+## @param strategy.rollingUpdate.maxUnavailable maxUnavailable
+strategy:
+  type: "RollingUpdate"
+  rollingUpdate:
+    maxSurge: "100%"
+    maxUnavailable: 0
+
 ## @param clusterDomain cluster domain
 clusterDomain: cluster.local
```
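
The new `strategy` block is passed through to the Deployment's rollout settings, so users can tune how aggressively pods are replaced during upgrades. As a minimal sketch (values-only, assuming the defaults above), a more conservative override could look like this:

```yaml
# override-values.yaml (sketch): trade rollout speed for fewer parallel pods
strategy:
  type: "RollingUpdate"
  rollingUpdate:
    maxSurge: "25%"    # spawn at most a quarter of the replicas extra
    maxUnavailable: 0  # never remove a running pod before its successor is up
```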
```diff
@@ -74,11 +84,16 @@ containerSecurityContext: {}
 ## @param securityContext Run init and Gitea containers as a specific securityContext
 securityContext: {}

+## @param podDisruptionBudget Pod disruption budget
+podDisruptionBudget: {}
+#  maxUnavailable: 1
+#  minAvailable: 1
+
 ## @section Service
 service:
   ## @param service.http.type Kubernetes service type for web traffic
   ## @param service.http.port Port number for web traffic
-  ## @param service.http.clusterIP ClusterIP setting for http autosetup for statefulset is None
+  ## @param service.http.clusterIP ClusterIP setting for http autosetup for deployment is None
   ## @param service.http.loadBalancerIP LoadBalancer IP setting
   ## @param service.http.nodePort NodePort for http service
   ## @param service.http.externalTrafficPolicy If `service.http.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable source IP preservation
```
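
With multiple replicas in play, the new (optional) `podDisruptionBudget` becomes useful. A minimal sketch, assuming `replicaCount` is at least 2:

```yaml
# sketch: keep at least one pod available during voluntary disruptions
# (node drains, cluster upgrades); pointless with a single replica
podDisruptionBudget:
  minAvailable: 1
```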
```diff
@@ -101,7 +116,7 @@ service:
     annotations: {}
   ## @param service.ssh.type Kubernetes service type for ssh traffic
   ## @param service.ssh.port Port number for ssh traffic
-  ## @param service.ssh.clusterIP ClusterIP setting for ssh autosetup for statefulset is None
+  ## @param service.ssh.clusterIP ClusterIP setting for ssh autosetup for deployment is None
   ## @param service.ssh.loadBalancerIP LoadBalancer IP setting
   ## @param service.ssh.nodePort NodePort for ssh service
   ## @param service.ssh.externalTrafficPolicy If `service.ssh.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable source IP preservation
```
```diff
@@ -155,7 +170,7 @@ ingress:
 #   If helm doesn't correctly detect your ingress API version you can set it here.
 #   apiVersion: networking.k8s.io/v1

-## @section StatefulSet
+## @section deployment
 #
 ## @param resources Kubernetes resources
 resources:
```
```diff
@@ -177,26 +192,29 @@ resources:
 ## @param schedulerName Use an alternate scheduler, e.g. "stork"
 schedulerName: ""

-## @param nodeSelector NodeSelector for the statefulset
+## @param nodeSelector NodeSelector for the deployment
 nodeSelector: {}

-## @param tolerations Tolerations for the statefulset
+## @param tolerations Tolerations for the deployment
 tolerations: []

-## @param affinity Affinity for the statefulset
+## @param affinity Affinity for the deployment
 affinity: {}

-## @param dnsConfig dnsConfig for the statefulset
+## @param topologySpreadConstraints TopologySpreadConstraints for the deployment
+topologySpreadConstraints: []
+
+## @param dnsConfig dnsConfig for the deployment
 dnsConfig: {}

-## @param priorityClassName priorityClassName for the statefulset
+## @param priorityClassName priorityClassName for the deployment
 priorityClassName: ""

-## @param statefulset.env Additional environment variables to pass to containers
-## @param statefulset.terminationGracePeriodSeconds How long to wait until forcefully kill the pod
-## @param statefulset.labels Labels for the statefulset
-## @param statefulset.annotations Annotations for the Gitea StatefulSet to be created
-statefulset:
+## @param deployment.env Additional environment variables to pass to containers
+## @param deployment.terminationGracePeriodSeconds How long to wait until forcefully kill the pod
+## @param deployment.labels Labels for the deployment
+## @param deployment.annotations Annotations for the Gitea deployment to be created
+deployment:
   env:
     []
     # - name: VARIABLE
```
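
The former `statefulset` values block keeps its shape under the new `deployment` key. A sketch of how it might be used (the variable name and label are purely illustrative):

```yaml
# sketch: pass extra runtime settings and metadata to the deployment
deployment:
  env:
    - name: EXAMPLE_VARIABLE   # hypothetical variable, for illustration only
      value: "example"
  terminationGracePeriodSeconds: 60
  labels:
    team: infra                # illustrative label
  annotations: {}
```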
```diff
@@ -218,14 +236,16 @@ serviceAccount:
   name: ""
   automountServiceAccountToken: false
   imagePullSecrets: []
   # - name: private-registry-access
   annotations: {}
   labels: {}

 ## @section Persistence
 #
 ## @param persistence.enabled Enable persistent storage
-## @param persistence.existingClaim Use an existing claim to store repository information
+## @param persistence.create Whether to create the persistentVolumeClaim for shared storage
+## @param persistence.mount Whether the persistentVolumeClaim should be mounted (even if not created)
+## @param persistence.claimName Use an existing claim to store repository information
 ## @param persistence.size Size for persistence to store repo information
 ## @param persistence.accessModes AccessMode for persistence
 ## @param persistence.labels Labels for the persistence volume claim to be created
```
```diff
@@ -234,7 +254,9 @@ serviceAccount:
 ## @param persistence.subPath Subdirectory of the volume to mount at
 persistence:
   enabled: true
-  existingClaim:
+  create: true
+  mount: true
+  claimName: gitea-shared-storage
   size: 10Gi
   accessModes:
     - ReadWriteOnce
```
```diff
@@ -243,7 +265,7 @@ persistence:
   storageClass:
   subPath:

-## @param extraVolumes Additional volumes to mount to the Gitea statefulset
+## @param extraVolumes Additional volumes to mount to the Gitea deployment
 extraVolumes: []
 # - name: postgres-ssl-vol
 #   secret:
```
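
With `create` and `mount` split apart, pointing the chart at a pre-provisioned RWX claim becomes a two-flag change. A sketch (the claim name is illustrative):

```yaml
# sketch: mount an externally managed PVC instead of creating one
persistence:
  enabled: true
  create: false                 # the chart does not create the PVC
  mount: true                   # but still mounts it into each pod
  claimName: gitea-efs-storage  # hypothetical pre-provisioned claim
```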
```diff
@@ -358,13 +380,14 @@ gitea:
   #     customProfileUrl:
   #     customEmailUrl:

   ## @param gitea.config Configuration for the Gitea server,ref: [config-cheat-sheet](https://docs.gitea.io/en-us/config-cheat-sheet/)
-  config: {}
-  #  APP_NAME: "Gitea: Git with a cup of tea"
-  #  RUN_MODE: dev
-  #
-  #  server:
-  #    SSH_PORT: 22
+  ## @param gitea.config.server.SSH_PORT SSH port for rootful Gitea image
+  ## @param gitea.config.server.SSH_LISTEN_PORT SSH port for rootless Gitea image
+  config:
+    #  APP_NAME: "Gitea: Git with a cup of tea"
+    #  RUN_MODE: dev
+    server:
+      SSH_PORT: 22 # rootful image
+      SSH_LISTEN_PORT: 2222 # rootless image
   #
   #  security:
   #    PASSWORD_COMPLEXITY: spec
```
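
Giving `gitea.config.server.SSH_PORT` and `SSH_LISTEN_PORT` concrete defaults here is what lets the SSH service template (see the `targetPort` fragment at the top of this diff) always resolve the value, independent of template processing order. A sketch of a custom override:

```yaml
# sketch: expose SSH on a non-standard advertised port
gitea:
  config:
    server:
      SSH_PORT: 2022         # port advertised in clone URLs (rootful image)
      SSH_LISTEN_PORT: 2222  # port the rootless container actually binds
```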
```diff
@@ -446,23 +469,37 @@ gitea:
     successThreshold: 1
     failureThreshold: 10

-## @section Memcached
-#
-## @param memcached.enabled Memcached is loaded as a dependency from [Bitnami](https://github.com/bitnami/charts/tree/master/bitnami/memcached) if enabled in the values. Complete Configuration can be taken from their website.
-## ref: https://hub.docker.com/r/bitnami/memcached/tags/
-## @param memcached.service.ports.memcached Port for Memcached
-memcached:
+## @section redis-cluster
+## @param redis-cluster.enabled Enable redis
+## @param redis-cluster.global.redis.password Password for the "gitea" user (overrides `password`)
+redis-cluster:
   enabled: true
-  # image:
-  #   registry: docker.io
-  #   repository: bitnami/memcached
-  #   tag: ""
-  #   digest: ""
-  #   pullPolicy: IfNotPresent
-  #   pullSecrets: []
-  service:
-    ports:
-      memcached: 11211
+  global:
+    redis:
+      password: gitea
+
+## @section postgresql-ha
+#
+## @param postgresql-ha.enabled Enable postgresql-ha
+## @param postgresql-ha.global.postgresql-ha.auth.password Password for the `gitea` user (overrides `auth.password`)
+## @param postgresql-ha.global.postgresql-ha.auth.database Name for a custom database to create (overrides `auth.database`)
+## @param postgresql-ha.global.postgresql-ha.auth.username Name for a custom user to create (overrides `auth.username`)
+## @param postgresql-ha.global.postgresql-ha.service.ports.postgresql-ha postgresql-ha service port (overrides `service.ports.postgresql-ha`)
+## @param postgresql-ha.primary.persistence.size PVC Storage Request for postgresql-ha volume
+postgresql-ha:
+  enabled: true
+  global:
+    postgresql-ha:
+      auth:
+        password: gitea
+        database: gitea
+        username: gitea
+      service:
+        ports:
+          postgresql-ha: 5432
+  primary:
+    persistence:
+      size: 10Gi

 ## @section PostgreSQL
 #
```
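
Taken together, the two new dependencies are what the HA defaults pivot on. A sketch of the values an HA install would set, per the dependency defaults above (the passwords are placeholders, not recommendations):

```yaml
# sketch: HA-ready values using the bundled dependencies
replicaCount: 3
redis-cluster:
  enabled: true
  global:
    redis:
      password: changeme      # placeholder, replace the chart default
postgresql-ha:
  enabled: true
  global:
    postgresql-ha:
      auth:
        username: gitea
        database: gitea
        password: changeme    # placeholder, replace the chart default
```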
```diff
@@ -473,7 +510,7 @@ memcached:
 ## @param postgresql.global.postgresql.service.ports.postgresql PostgreSQL service port (overrides `service.ports.postgresql`)
 ## @param postgresql.primary.persistence.size PVC Storage Request for PostgreSQL volume
 postgresql:
-  enabled: true
+  enabled: false
   global:
     postgresql:
       auth:
```
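
For completeness: single-replica users who do not want the HA database can presumably flip the two flags back. A sketch:

```yaml
# sketch: opt out of the HA database default, back to single-instance PostgreSQL
replicaCount: 1
postgresql:
  enabled: true
postgresql-ha:
  enabled: false
```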