feat(other): major improvement of deployment (#7925)

* feat(other): major improvement of deployment

Motivation
----------

Kubernetes:
* backend becomes a statefulset (exclusive volume mount)
  See: https://spacelift.io/blog/statefulset-vs-deployment
* implement neo4j backup with job

Docker:
* consistent targets across all dockerfiles
* remove redundant labels
* remove unnecessary build args
* remove obsolete networks
* remove development dependencies for production

Rebranding:
* add image tags for local tagging and pulling
* use Github's docker build workflows
* use Github container registry
* ONBUILD to simplify caller Dockerfiles
* docker compose for branding

Tooling:
* same node --version as in dockerfile

Docs:
* missing step in README.md

* refactor: remove submodules

It's better to keep them all in a separate repository

* improve kubernetes chart

* better image tag defaults
* split neo4j into its own chart (for re-use)
* use application defaults where possible

* optional resources for all pods

* remove obsolete key/value pair from secrets

* remove obsolete build argsand

and add labels for neo4j enterprise

* env vars for webapp

* allow to define redirect domains

Define a list of Domains that redirect to the domain of the project. The
idea is to provide the ability to redirect eg. www.domain.tld to
domain.tld

* remove maintenance part regarding database

* move backup job outside template folder

* name the ingress

* updated ingress

* handle empty case of middlewares

* try to default the ingress

* use quote

* restore todo-next-update

* fix docu check

* fix naming

* try using prod:migrate

* try using override config

* copy src folder

* try using base as image instead of build

* fix test build

* force build

* comment for the problem

* fix webapp tests (potentially)

---------

Co-authored-by: Ulf Gebhardt <ulf.gebhardt@webcraft-media.de>
This commit is contained in:
Robert Schäfer 2025-02-28 18:22:23 +01:00 committed by GitHub
parent 925c1d8e81
commit 628b57aa29
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
190 changed files with 1111 additions and 5913 deletions

View File

@ -30,8 +30,8 @@ jobs:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.1.7
- name: Remove old documentation files
run: rm -rf ./deployment/src/old/ ./CHANGELOG.md # workaround until https://github.com/gaurav-nelson/github-action-markdown-link-check/pull/183 has been done
- name: Remove uncheckable documentation files
run: rm -rf ./CHANGELOG.md # workaround until https://github.com/gaurav-nelson/github-action-markdown-link-check/pull/183 has been done
- name: Check Markdown Links
uses: gaurav-nelson/github-action-markdown-link-check@1b916f2cf6c36510a6059943104e3c42ce6c16bc # 1.0.15

91
.github/workflows/docker-push.yml vendored Normal file
View File

@ -0,0 +1,91 @@
name: docker-push
on: push
jobs:
build-and-push-images:
strategy:
matrix:
app:
- name: neo4j
context: neo4j
file: neo4j/Dockerfile
target: community
- name: backend-base
context: backend
file: backend/Dockerfile
target: base
- name: backend-build
context: backend
file: backend/Dockerfile
target: build
- name: backend
context: backend
file: backend/Dockerfile
target: production
- name: webapp-base
context: webapp
file: webapp/Dockerfile
target: base
- name: webapp-build
context: webapp
file: webapp/Dockerfile
target: build
- name: webapp
context: webapp
file: webapp/Dockerfile
target: production
- name: maintenance-base
context: webapp
file: webapp/Dockerfile.maintenance
target: base
- name: maintenance-build
context: webapp
file: webapp/Dockerfile.maintenance
target: build
- name: maintenance
context: webapp
file: webapp/Dockerfile.maintenance
target: production
runs-on: ubuntu-latest
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}/${{ matrix.app.name }}
permissions:
contents: read
packages: write
attestations: write
id-token: write
steps:
- name: Checkout repository
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4.1.7
- name: Log in to the Container registry
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@70b2cdc6480c1a8b86edf1777157f8f437de2166
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=schedule
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=ref,event=branch
type=ref,event=pr
type=sha
- name: Build and push Docker images
id: push
uses: docker/build-push-action@4f58ea79222b3b9dc2c8bbdd6debcef730109a75
with:
context: ${{ matrix.app.context }}
target: ${{ matrix.app.target }}
file: ${{ matrix.app.file }}
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}

View File

@ -112,7 +112,8 @@ jobs:
cp backend/.env.template backend/.env
- name: backend | docker compose
run: docker compose -f docker-compose.yml -f docker-compose.test.yml up --detach --no-deps neo4j backend
# doesn't work without the --build flag - this either means we should not load the cached images or cache the correct image
run: docker compose -f docker-compose.yml -f docker-compose.test.yml up --detach --no-deps neo4j backend --build
- name: backend | Initialize Database
run: docker compose exec -T backend yarn db:migrate init

View File

@ -77,7 +77,7 @@ jobs:
docker load < /tmp/images/neo4j.tar
docker load < /tmp/images/backend.tar
docker load < /tmp/images/webapp.tar
docker compose -f docker-compose.yml -f docker-compose.test.yml up --detach --no-deps webapp neo4j backend
docker compose -f docker-compose.yml -f docker-compose.test.yml up --detach --no-deps webapp neo4j backend --build
sleep 90s
- name: Full stack tests | run tests

View File

@ -94,7 +94,8 @@ jobs:
cp backend/.env.template backend/.env
- name: backend | docker compose
run: docker compose -f docker-compose.yml -f docker-compose.test.yml up --detach --no-deps webapp
# doesn't work without the --build flag - this either means we should not load the cached images or cache the correct image
run: docker compose -f docker-compose.yml -f docker-compose.test.yml up --detach --no-deps webapp --build
- name: webapp | Unit tests incl. coverage check
run: docker compose exec -T webapp yarn test

1
.tool-versions Normal file
View File

@ -0,0 +1 @@
nodejs 20.12.1

View File

@ -186,6 +186,9 @@ $ cp .env.template .env
# in folder backend/
$ cp .env.template .env
# in folder frontend/
$ cp .env.template .env
```
For Development:

View File

@ -1,103 +1,42 @@
##################################################################################
# BASE (Is pushed to DockerHub for rebranding) ###################################
##################################################################################
FROM node:20.12.1-alpine3.19 AS base
# ENVs
## DOCKER_WORKDIR would be a classical ARG, but that is not multi layer persistent - shame
ENV DOCKER_WORKDIR="/app"
## We Cannot do `$(date -u +'%Y-%m-%dT%H:%M:%SZ')` here so we use unix timestamp=0
ARG BBUILD_DATE="1970-01-01T00:00:00.00Z"
ENV BUILD_DATE=$BBUILD_DATE
## We cannot do $(yarn run version)-${BUILD_NUMBER} here so we default to 0.0.0-0
ARG BBUILD_VERSION="0.0.0-0"
ENV BUILD_VERSION=$BBUILD_VERSION
## We cannot do `$(git rev-parse --short HEAD)` here so we default to 0000000
ARG BBUILD_COMMIT="0000000"
ENV BUILD_COMMIT=$BBUILD_COMMIT
## SET NODE_ENV
ENV NODE_ENV="production"
## App relevant Envs
ENV PORT="4000"
# Labels
LABEL org.label-schema.build-date="${BUILD_DATE}"
LABEL org.label-schema.name="ocelot.social:backend"
LABEL org.label-schema.description="Backend of the Social Network Software ocelot.social"
LABEL org.label-schema.usage="https://github.com/Ocelot-Social-Community/Ocelot-Social/blob/master/README.md"
LABEL org.label-schema.url="https://ocelot.social"
LABEL org.label-schema.vcs-url="https://github.com/Ocelot-Social-Community/Ocelot-Social/tree/master/backend"
LABEL org.label-schema.vcs-ref="${BUILD_COMMIT}"
LABEL org.label-schema.vendor="ocelot.social Community"
LABEL org.label-schema.version="${BUILD_VERSION}"
LABEL org.label-schema.schema-version="1.0"
LABEL maintainer="devops@ocelot.social"
# Install Additional Software
## install: git
RUN apk --no-cache add git python3 make g++
# Settings
## Expose Container Port
ENV NODE_ENV="production"
ENV PORT="4000"
EXPOSE ${PORT}
RUN apk --no-cache add git python3 make g++ bash
RUN mkdir -p /app
WORKDIR /app
CMD ["/bin/bash", "-c", "yarn run start"]
## Workdir
RUN mkdir -p ${DOCKER_WORKDIR}
WORKDIR ${DOCKER_WORKDIR}
##################################################################################
# DEVELOPMENT (Connected to the local environment, to reload on demand) ##########
##################################################################################
FROM base AS development
CMD ["/bin/sh", "-c", "yarn install && yarn run dev"]
# We don't need to copy or build anything since we gonna bind to the
# local filesystem which will need a rebuild anyway
# Run command
# (for development we need to execute yarn install since the
# node_modules are on another volume and need updating)
CMD /bin/sh -c "yarn install && yarn run dev"
##################################################################################
# CODE (Does contain all code files and is pushed to DockerHub for rebranding) ###
##################################################################################
FROM base AS code
# copy everything, but do not build.
FROM base AS build
COPY . .
ONBUILD COPY ./branding/constants/ src/config/tmp
ONBUILD RUN tools/replace-constants.sh
ONBUILD COPY ./branding/email/ src/middleware/helpers/email/
ONBUILD RUN yarn install --production=false --frozen-lockfile --non-interactive
ONBUILD RUN yarn run build
ONBUILD RUN mkdir /build
ONBUILD RUN cp -r ./build /build
ONBUILD RUN cp -r ./public /build/build
ONBUILD RUN cp -r ./package.json yarn.lock /build
ONBUILD RUN cd /build && yarn install --production=true --frozen-lockfile --non-interactive
##################################################################################
# BUILD (Does contain all files and the compilate and is therefore bloated) ######
##################################################################################
FROM code AS build
# yarn install
RUN yarn install --production=false --frozen-lockfile --non-interactive
# yarn build
RUN /bin/sh -c "yarn run build"
##################################################################################
# TEST ###########################################################################
##################################################################################
FROM build AS test
# required for the migrations
# ONBUILD RUN cp -r ./src /src
CMD ["/bin/bash", "-c", "yarn run dev"]
# Run command
CMD /bin/sh -c "yarn run dev"
FROM build AS production_build
##################################################################################
# PRODUCTION (Does contain only "binary"- and static-files to reduce image size) #
##################################################################################
FROM base AS production
# Copy "binary"-files from build image
COPY --from=build ${DOCKER_WORKDIR}/build ./build
COPY --from=build ${DOCKER_WORKDIR}/node_modules ./node_modules
# Copy static files
# TODO - externalize the uploads so we can copy the whole folder
COPY --from=build ${DOCKER_WORKDIR}/public/img/ ./public/img/
COPY --from=build ${DOCKER_WORKDIR}/public/providers.json ./public/providers.json
# Copy package.json for script definitions (lock file should not be needed)
COPY --from=build ${DOCKER_WORKDIR}/package.json ./package.json
# Run command
CMD /bin/sh -c "yarn run start"
COPY --from=production_build /build .

View File

@ -179,7 +179,9 @@ describe('Filter Posts', () => {
})
})
describe('order events by event start ascending', () => {
// Does not work on months end
// eslint-disable-next-line jest/no-disabled-tests
describe.skip('order events by event start ascending', () => {
it('finds the events ordered accordingly', async () => {
const {
data: { Post: result },
@ -201,7 +203,9 @@ describe('Filter Posts', () => {
})
})
describe('filter events by event start date', () => {
// Does not work on months end
// eslint-disable-next-line jest/no-disabled-tests
describe.skip('filter events by event start date', () => {
it('finds only events after given date', async () => {
const {
data: { Post: result },

View File

@ -0,0 +1,7 @@
#!/bin/bash
# TODO: this is a hack, we should find a better way to share files between backend and webapp
[ -f src/config/tmp/emails.js ] && mv src/config/tmp/emails.js src/config/emails.ts
[ -f src/config/tmp/logos.js ] && mv src/config/tmp/logos.js src/config/logos.ts
[ -f src/config/tmp/metadata.js ] && mv src/config/tmp/metadata.js src/config/metadata.ts
exit 0

View File

@ -1,7 +0,0 @@
# branding folder used for "docker compose up" run in deployment folder
CONFIGURATION=stage.ocelot.social
# used in "scripts/clusters.backup-multiple-servers.sh"
BACKUP_CONFIGURATIONS="stage.ocelot.social stage.wir.social"
# if '<= 0' no backups will be deleted
BACKUP_SAVED_BACKUPS_NUMBER=7

View File

@ -1,27 +0,0 @@
# Docker
## Apple M1 Platform
***Attention:** For using Docker commands in Apple M1 environments!*
```bash
# set env variable for your shell
$ export DOCKER_DEFAULT_PLATFORM=linux/amd64
```
### Docker Compose Override File For Apple M1 Platform
For Docker compose `up` or `build` commands, you can use our Apple M1 override file that specifies the M1 platform:
```bash
# in main folder
# for production
$ docker compose -f docker-compose.yml -f docker-compose.apple-m1.override.yml up
# for production testing Docker images from DockerHub
$ docker compose -f docker-compose.ocelotsocial-branded.yml -f docker-compose.apple-m1.override.yml up
# only once: init admin user and create indexes and constraints in Neo4j database
$ docker compose exec backend /bin/sh -c "yarn prod:migrate init"
```

View File

@ -1,25 +0,0 @@
# Minikube
There are many Kubernetes providers, but if you're just getting started, Minikube is a tool that you can use to get your feet wet.
After you [installed Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/)
open your minikube dashboard:
```text
$ minikube dashboard
```
This will give you an overview. Some of the steps below need some timing to make resources available to other dependent deployments. Keeping an eye on the dashboard is a great way to check that.
Follow the installation instruction for [Kubernetes with Helm](./src/kubernetes/README.md).
If all the pods and services have settled and everything looks green in your
minikube dashboard, expose the services you want on your host system.
For example:
```text
$ minikube service webapp --namespace=ocelotsocialnetwork
# optionally
$ minikube service backend --namespace=ocelotsocialnetwork
```

View File

@ -1,138 +0,0 @@
# Ocelot.Social Deploy And Rebranding
[![Build Status Publish](https://github.com/Ocelot-Social-Community/Ocelot-Social-Deploy-Rebranding/actions/workflows/publish.yml/badge.svg)](https://github.com/Ocelot-Social-Community/Ocelot-Social-Deploy-Rebranding/actions)
[![MIT License](https://img.shields.io/badge/license-MIT-green.svg)](https://github.com/Ocelot-Social-Community/Ocelot-Social-Deploy-Rebranding/blob/master/LICENSE.md)
[![Discord Channel](https://img.shields.io/discord/489522408076738561.svg)](https://discord.gg/AJSX9DCSUA)
[![Open Source Helpers](https://www.codetriage.com/ocelot-social-community/ocelot-social-deploy-rebranding/badges/users.svg)](https://www.codetriage.com/ocelot-social-community/ocelot-social-deploy-rebranding)
This repository is an in use template to rebrand, configure, and deploy [ocelot.social](https://github.com/Ocelot-Social-Community/Ocelot-Social) networks.
The forked original repository is [stage.ocelot.social](https://github.com/Ocelot-Social-Community/stage.ocelot.social).
<!-- markdownlint-disable MD033 -->
<p style="text-align: center;">
<a href="https://ocelot.social" target="_blank"><img src="https://raw.githubusercontent.com/Ocelot-Social-Community/Ocelot-Social/master/webapp/static/img/custom/logo-squared.svg" alt="ocelot.social" width="40%" height="40%"></a>
</p>
<!-- markdownlint-enable MD033 -->
## Live demo
__Try out our deployed [development environment](https://stage.ocelot.social).__
Visit our staging networks:
- central staging network: [stage.ocelot.social](https://stage.ocelot.social)
<!-- - rebranded staging network: [rebrand.ocelot.social](https://stage.ocelot.social). -->
Logins:
| email | password | role |
| :--- | :--- | :--- |
| `user@example.org` | 1234 | user |
| `moderator@example.org` | 1234 | moderator |
| `admin@example.org` | 1234 | admin |
## Usage
Fork this repository to configure and rebrand it for your own [ocelot.social](https://github.com/Ocelot-Social-Community/Ocelot-Social) network.
### Package.Json And DockerHub Organisation
Write your own data into the main configuration file:
- [package.json](https://github.com/Ocelot-Social-Community/Ocelot-Social/blob/master/package.json)
Since all deployment methods described here depend on [Docker](https://docker.com) and [DockerHub](https://hub.docker.com), you need to create your own organisation on DockerHub and put its name in the [package.json](https://github.com/Ocelot-Social-Community/Ocelot-Social/blob/master/package.json) file as your `dockerOrganisation`.
### Configure And Branding
The next step is:
- [Set Environment Variables and Configurations](./deployment-values.md)
<!-- markdown-link-check-disable -->
- [Configure And Branding](./configurations/stage.ocelot.social/branding/README.md)
<!-- markdown-link-check-enable -->
### Optional: Locally Testing Configuration And Branding
Just in case you have Docker installed and run the following, you can check your branding locally:
```bash
# in main folder
$ docker-compose up
# fill the database with an initial admin
$ docker-compose exec backend yarn run prod:migrate init
```
The database is then initialised with the default administrator:
- E-mail: admin@example.org
- Password: 1234
For login or registration have a look in your browser at `http://localhost:3000/`.
For the maintenance page have a look in your browser at `http://localhost:5000/`.
### Push Changes To GitHub
Before merging these changes into the "master" branch on your GitHub fork repository, you need to configure the GitHub repository secrets. This is necessary to [publish](https://github.com/Ocelot-Social-Community/Ocelot-Social/blob/master/.github/workflows/publish.yml) the Docker images by pushing them via GitHub actions to repositories belonging to your DockerHub organisation.
First, go to your DockerHub profile under `Account Settings` and click on the `Security` tab. There you create an access token called `<your-organisation>-access-token` and copy the token to a safe place.
Secondly, in your GitHub repository, click on the 'Settings' tab and go to the 'Secrets' tab. There you create two secrets by clicking on `New repository secret`:
1. Named `DOCKERHUB_TOKEN` with the newly created DockerHub token (only the code, not the token name).
2. Named `DOCKERHUB_USERNAME` with your DockerHub username.
### Optional: Locally Testing Your DockerHub Images
Just in case you like to check your pushed Docker images in your organisation's DockerHub repositories locally:
- rename the file `docker-compose.ocelotsocial-branded.yml` with your network name
- in the file, rename the ocelot.social DockerHub organisation `ocelotsocialnetwork` to your organisations name
Remove any local Docker images if necessary and do the following:
```bash
# in main folder
$ docker-compose -f docker-compose.<your-organisation>-branded.yml up
# fill the database with an initial admin
$ docker-compose exec backend yarn run prod:migrate init
```
See the login details and browser addresses above.
### Deployment
Afterwards you can [deploy](./deployment.md) it on your server:
- [Kubernetes with Helm](./src/kubernetes/README.md)
## Developer Chat
Join our friendly open-source community on [Discord](https://discord.gg/AJSX9DCSUA) :heart_eyes_cat:
Just introduce yourself at `#introduce-yourself` and mention `@@Mentor` to get you onboard :neckbeard:
Check out the [contribution guideline](https://github.com/Ocelot-Social-Community/Ocelot-Social/blob/master/CONTRIBUTING.md), too!
We give write permissions to every developer who asks for it. Just text us on
[Discord](https://discord.gg/AJSX9DCSUA).
## Technology Stack
- [Docker](https://www.docker.com)
- [Kubernetes](https://kubernetes.io)
- [Helm](https://helm.sh)
<!--
## Attributions
Locale Icons made by [Freepik](http://www.freepik.com/) from [www.flaticon.com](https://www.flaticon.com/) is licensed by [CC 3.0 BY](http://creativecommons.org/licenses/by/3.0/).
Browser compatibility testing with [BrowserStack](https://www.browserstack.com/).
<img alt="BrowserStack Logo" src=".gitbook/assets/browserstack-logo.svg" width="256">
-->
## License
See the [LICENSE](https://github.com/Ocelot-Social-Community/Ocelot-Social/blob/master/LICENSE.md) file for license rights and limitations (MIT).
We need `DOCKER_BUILDKIT=0` for this to work.

View File

@ -1,3 +0,0 @@
/*
!/example
!.gitignore

@ -1 +0,0 @@
Subproject commit fdc2e52fa444b300e1c4736600bc0e9ae3314222

View File

@ -1,73 +0,0 @@
# Deployment Values
For each deployment, you need to set the environment variables and configurations.
Here is some specific information on how to set the values.
## Webapp
We have several configuration possibilities just in the frontend.
### Date Time
In file `branding/constants/dateTime.js`.
- `RELATIVE_DATETIME`
- `true` (default) or `false`
- `ABSOLUT_DATETIME_FORMAT`
- definition see [date-fns, format](https://date-fns.org/v3.3.1/docs/format):
- `P`: just localized date
- `Pp`: just localized date and time
## E-Mails
You need to set environment variables to send registration and invitation information or notifications to users, for example.
### SPF and DKIM
More and more e-mail providers require settings for authorization and verification of e-mail senders.
### SPF
Sometimes it is enough to create an SPF record in your DNS.
### DKIM
However, if you need DKIM authorization and verification, you must set the appropriate environment variables in: `.env`, `docker-compose.yml` or Helm script `values.yaml`:
```bash
SMTP_DKIM_DOMAINNAME=<your e-mail sender domain>
SMTP_DKIM_KEYSELECTOR=2017
SMTP_DKIM_PRIVATKEY="-----BEGIN RSA PRIVATE KEY-----\n<your base64 encoded privat key data>\n-----END RSA PRIVATE KEY-----\n"
```
You can find out how DKIM works here:
<https://www.ionos.com/digitalguide/e-mail/e-mail-security/dkim-domainkeys/>
To create the private and public DKIM key, see here:
<https://knowledge.ondmarc.redsift.com/en/articles/2141592-generating-2048-bits-dkim-public-and-private-keys-using-openssl-on-a-mac>
Information about the required PEM format can be found here:
<https://docs.progress.com/bundle/datadirect-hybrid-data-pipeline-installation-46/page/PEM-file-format.html>
## Neo4j Database
We have several configuration options for our Neo4j database.
### DBMS_DEFAULT_DATABASE Default Database Name to be Used
If you need to set the default database name in Neo4j to be used for all operations and terminal commands like our backup scripts, you must set the appropriate environment variable in: `.env`, `docker-compose.yml` or Helm script `values.yaml`:
```yaml
DBMS_DEFAULT_DATABASE: "graph.db"
```
The default value is `neo4j` if it is not set.
As example see files:
- `neo4j/.env.template`
- `deployment/docker-compose.yml`
- `deployment/configurations/stage.ocelot.social/kubernetes/values.yaml.template`

View File

@ -1,148 +0,0 @@
# Deployment
Before you start the deployment you have to do preparations.
## Deployment Preparations
Since all deployment methods described here depend on [Docker](https://docker.com) and [DockerHub](https://hub.docker.com), you need to create your own organisation on DockerHub and put its name in the [package.json](https://github.com/Ocelot-Social-Community/Ocelot-Social/blob/master/package.json) file as your `dockerOrganisation`.
Read more details in the [main README](https://github.com/Ocelot-Social-Community/Ocelot-Social/blob/master/README.md) under [Usage](https://github.com/Ocelot-Social-Community/Ocelot-Social/blob/master/README.md#usage).
## Deployment Methods
You have the following options for a deployment:
- [Kubernetes with Helm](./src/kubernetes/README.md)
## After Deployment
After the first deployment of the new network on your server, the database is initialized with the default administrator:
- E-mail: `admin@example.org`
- Password: `1234`
***ATTENTION:*** When you are logged in for the first time, please change your (the admin's) e-mail to an existing one and change your password to a secure one !!!
## Using the Scripts
To use most of the scripts you have to set the variable `CONFIGURATION` in your terminal by entering:
```bash
# in deployment folder
# set configuration name to folder name in 'configurations' folder (network name)
$ export CONFIGURATION=<your-configuration-name>
# to check this
$ echo $CONFIGURATION
```
### Secrets Encrypt/Decrypt
To encrypt and decrypt the secrets of your network in your terminal set a correct password in a (new) file `configurations/<your-configuration-name>/SECRET`.
If done please enter:
```bash
# in deployment folder
# encrypt secrets
$ scripts/secrets.encrypt.sh
# decrypt secrets
$ scripts/secrets.decrypt.sh
```
### Maintenance Mode On/Off
Activate or deactivate maintenance mode in your terminal:
```bash
# in deployment folder
# activate maintenance mode
$ scripts/cluster.maintenance.sh on
# deactivate maintenance mode
$ scripts/cluster.maintenance.sh off
```
### Backup Scripts
Save backups.
#### Single Backup
To save a local backup of the database and uploaded images:
```bash
# in deployment folder
# save backup
$ scripts/cluster.backup.sh
```
The backup will be saved into your network folders `backup` folder in a new folder with the date and time.
##### Default Database Name
To execute this script, it may be necessary to set the default database name in Neo4j.
In our deployments there are cases where the database is called `neo4j` (used by default) and in other cases `graph.db` (accidentally happened when we loaded the database into a new cluster).
In the new deployment with Helm, we set the default database name by the environment variable `NEO4J_dbms_default__database` in the Helm `values.yaml`.
See [Docker-specific configuration settings](https://neo4j.com/docs/operations-manual/4.4/docker/ref-settings/)
For more information see [Database Management Commands](../neo4j/README.md#database-management-commands).
#### Multiple Networks Backup
In order to save several network backups locally, you must define the configuration names of all networks in `.env`. The template for this is `deployment/.env.dist`:
```bash
# in the deployment folders '.env' set as example
BACKUP_CONFIGURATIONS="stage.ocelot.social stage.wir.social"
BACKUP_SAVED_BACKUPS_NUMBER=7
```
If `BACKUP_SAVED_BACKUPS_NUMBER <= 0` then no backups will be deleted.
To actually save all the backups run:
```bash
# in deployment folder
# save all backups listed in 'BACKUP_CONFIGURATIONS'
# delete all backups older then the 'BACKUP_SAVED_BACKUPS_NUMBER' newest ones
$ scripts/clusters.backup-multiple-servers.sh
```
The backups will be saved into your networks folders `backup` folder in a new folder with the date and time.
#### Automated Backups
⚠️ *Attention: Please check carefully whether really the oldest backups have been deleted. As shells on different systems behave differently with regard to the commands used in this script.*
Install automated backups by a [cron job](https://en.wikipedia.org/wiki/Cron).
Be aware of having the bash shell installed to run the script.
The environment variables for the automated backups are described above.
Installing a cron job by editing the cron table file:
```bash
# edit cron job table
$ crontab -e
```
In the editor add the line:
```bash
# in cron job table file
# set a cron job every night at 04am server time
# min hour day month weekday command
00 04 * * * /root/Ocelot-Social/deployment/scripts/clusters.backup-multiple-servers.sh >> /root/Ocelot-Social/deployment/backup-cron-job.log
```
This way the terminal output is written into a log file named `backup-cron-job.log` located in the deployment folder.
Be aware that the server datetime can differ from your local time.
Especially by the change between summer and winter time, because servers usually have UTC.
Find out the actual difference by running the command `date` on your server.

View File

@ -1,100 +0,0 @@
services:
########################################################
# WEBAPP ###############################################
########################################################
webapp:
# name the image to match our image to be tested from our DockerHub repository so that it can be pulled from there, otherwise it will be created locally from the 'dockerfile'
image: ocelotsocialnetwork/webapp-branded:latest
ports:
- 3000:3000
networks:
- test-network
depends_on:
- backend
environment:
- HOST=0.0.0.0
- GRAPHQL_URI=http://backend:4000
- MAPBOX_TOKEN="pk.eyJ1IjoiYnVzZmFrdG9yIiwiYSI6ImNraDNiM3JxcDBhaWQydG1uczhpZWtpOW4ifQ.7TNRTO-o9aK1Y6MyW_Nd4g"
# - WEBSOCKETS_URI=ws://backend:4000/graphql # is not working and not given in Docker YAML in main repo
- PUBLIC_REGISTRATION=true
- INVITE_REGISTRATION=true
- CATEGORIES_ACTIVE=true
########################################################
# BACKEND ##############################################
########################################################
backend:
# name the image to match our image to be tested from our DockerHub repository so that it can be pulled from there, otherwise it will be created locally from the 'dockerfile'
image: ocelotsocialnetwork/backend-branded:latest
networks:
- test-network
depends_on:
- neo4j
ports:
- 4000:4000
volumes:
- backend_uploads:/app/public/uploads
environment:
- NEO4J_URI=bolt://neo4j:7687
- GRAPHQL_URI=http://backend:4000
- CLIENT_URI=http://localhost:3000
- JWT_SECRET=b/&&7b78BF&fv/Vd
- MAPBOX_TOKEN=pk.eyJ1IjoiYnVzZmFrdG9yIiwiYSI6ImNraDNiM3JxcDBhaWQydG1uczhpZWtpOW4ifQ.7TNRTO-o9aK1Y6MyW_Nd4g
- PRIVATE_KEY_PASSPHRASE=a7dsf78sadg87ad87sfagsadg78
- EMAIL_SUPPORT=support@wir.social
- EMAIL_DEFAULT_SENDER=info@wir.social
# - PRODUCTION_DB_CLEAN_ALLOW=false # only true for production environments on staging servers
- PUBLIC_REGISTRATION=true
- INVITE_REGISTRATION=true
- CATEGORIES_ACTIVE=true
- SMTP_USERNAME=${SMTP_USERNAME}
- SMTP_PASSWORD=${SMTP_PASSWORD}
- SMTP_HOST=mailserver
- SMTP_PORT=25
- SMTP_IGNORE_TLS=true
########################################################
# MAINTENANCE ##########################################
########################################################
maintenance:
# name the image to match our image to be tested from our DockerHub repository so that it can be pulled from there, otherwise it will be created locally from the 'dockerfile'
image: ocelotsocialnetwork/maintenance-branded:latest
networks:
- test-network
ports:
- 3001:80
########################################################
# NEO4J ################################################
########################################################
neo4j:
# name the image to match our image to be tested from our DockerHub repository so that it can be pulled from there, otherwise it will be created locally from the 'dockerfile'
image: ocelotsocialnetwork/neo4j-community-branded:latest
networks:
- test-network
environment:
- NEO4J_AUTH=none
- NEO4J_dbms_security_procedures_unrestricted=algo.*,apoc.*
- NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
ports:
- 7687:7687
volumes:
- neo4j_data:/data
########################################################
# MAILSERVER TO FAKE SMTP ##############################
########################################################
mailserver:
image: djfarrelly/maildev
ports:
- 1080:80
networks:
- test-network
networks:
test-network:
volumes:
backend_uploads:
neo4j_data:

View File

@ -1,190 +0,0 @@
services:
webapp-base:
image: ocelotsocialnetwork/webapp:local-base
build:
dockerfile: ../webapp/Dockerfile
context: ../webapp
target: base
command: sleep 0
webapp-code:
image: ocelotsocialnetwork/webapp:local-code
build:
dockerfile: ../webapp/Dockerfile
context: ../webapp
target: code
command: sleep 0
webapp:
image: ocelotsocialnetwork/webapp-branded:local-${CONFIGURATION}
container_name: webapp-branded
build:
dockerfile: src/docker/webapp.Dockerfile
target: branded
context: .
args:
- CONFIGURATION=$CONFIGURATION
- APP_IMAGE_TAG_BASE=local-base
- APP_IMAGE_TAG_CODE=local-code
ports:
- 3000:3000
networks:
- test-network
depends_on:
- backend
- webapp-base
- webapp-code
env_file:
- .env
environment:
- HOST=0.0.0.0
- GRAPHQL_URI=http://backend:4000
- MAPBOX_TOKEN="pk.eyJ1IjoiYnVzZmFrdG9yIiwiYSI6ImNraDNiM3JxcDBhaWQydG1uczhpZWtpOW4ifQ.7TNRTO-o9aK1Y6MyW_Nd4g"
# - WEBSOCKETS_URI=ws://backend:4000/graphql # is not working and not given in Docker YAML in main repo
- PUBLIC_REGISTRATION=true
- INVITE_REGISTRATION=true
- CATEGORIES_ACTIVE=true
backend-base:
image: ocelotsocialnetwork/backend:local-base
build:
dockerfile: ../backend/Dockerfile
context: ../backend
target: base
command: sleep 0
backend-code:
image: ocelotsocialnetwork/backend:local-code
build:
dockerfile: ../backend/Dockerfile
context: ../backend
target: code
command: sleep 0
backend:
image: ocelotsocialnetwork/backend-branded:local-${CONFIGURATION}
container_name: backend-branded
build:
dockerfile: src/docker/backend.Dockerfile
target: branded
context: .
args:
- CONFIGURATION=$CONFIGURATION
- APP_IMAGE_TAG_BASE=local-base
- APP_IMAGE_TAG_CODE=local-code
networks:
- test-network
depends_on:
- neo4j
- backend-base
- backend-code
ports:
- 4000:4000
volumes:
- backend_uploads:/app/public/uploads
environment:
- NEO4J_URI=bolt://neo4j:7687
- GRAPHQL_URI=http://backend:4000
- CLIENT_URI=http://localhost:3000
- JWT_SECRET=b/&&7b78BF&fv/Vd
- MAPBOX_TOKEN=pk.eyJ1IjoiYnVzZmFrdG9yIiwiYSI6ImNraDNiM3JxcDBhaWQydG1uczhpZWtpOW4ifQ.7TNRTO-o9aK1Y6MyW_Nd4g
- PRIVATE_KEY_PASSPHRASE=a7dsf78sadg87ad87sfagsadg78
- EMAIL_SUPPORT=support@wir.social
- EMAIL_DEFAULT_SENDER=info@wir.social
- PUBLIC_REGISTRATION=true
- INVITE_REGISTRATION=true
- CATEGORIES_ACTIVE=true
- SMTP_USERNAME=${SMTP_USERNAME}
- SMTP_PASSWORD=${SMTP_PASSWORD}
- SMTP_HOST=mailserver
- SMTP_PORT=25
- SMTP_IGNORE_TLS=true
#- PRODUCTION_DB_CLEAN_ALLOW=true
- NODE_ENV=development
maintenance-base:
image: ocelotsocialnetwork/maintenance:local-base
build:
dockerfile: ../webapp/Dockerfile.maintenance
context: ../webapp
target: base
command: sleep 0
maintenance-code:
image: ocelotsocialnetwork/maintenance:local-code
build:
dockerfile: ../webapp/Dockerfile.maintenance
context: ../webapp
target: code
command: sleep 0
maintenance:
# name the image so that it cannot be found in a DockerHub repository, otherwise it will not be built locally from the 'dockerfile' but pulled from there
image: ocelotsocialnetwork/maintenance-branded:local-${CONFIGURATION}
container_name: maintenance-branded
build:
# TODO: Separate from webapp, this must be independent
dockerfile: src/docker/maintenance.Dockerfile
target: branded
context: .
args:
- CONFIGURATION=$CONFIGURATION
- APP_IMAGE_TAG_BASE=local-base
- APP_IMAGE_TAG_CODE=local-code
networks:
- test-network
depends_on:
- maintenance-base
- maintenance-code
ports:
- 3001:80
neo4j:
# Neo4j v3.5.14-community
# image: wollehuss/neo4j-community-branded:latest
# Neo4j 4.4-community
image: ocelotsocialnetwork/neo4j-community:latest
container_name: neo4j-branded
networks:
- test-network
ports:
- 7687:7687
# only for development
# - 7474:7474
- 7474:7474
volumes:
- neo4j_data:/data
environment:
# settings reference: https://neo4j.com/docs/operations-manual/4.4/docker/ref-settings/
# TODO: This sounds scary for a production environment
- NEO4J_AUTH=none
- NEO4J_dbms_security_procedures_unrestricted=algo.*,apoc.*
- NEO4J_dbms_allow__format__migration=true
- NEO4J_dbms_allow__upgrade=true
# TODO: clarify if that is the only thing needed to unlock the Enterprise version
# - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
# Uncomment following line for Neo4j Enterprise version instead of Community version
# TODO: clarify if that is the only thing needed to unlock the Enterprise version
# - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
# set the name of the database to be used
# - NEO4J_dbms_default__database=graph.db
# - NEO4J_dbms_default__database=neo4j
# TODO: Remove the playground from production
# bring the database in offline mode to export or load dumps
# command: ["tail", "-f", "/dev/null"]
mailserver:
image: djfarrelly/maildev
container_name: mailserver-branded
ports:
- 1080:80
networks:
- test-network
networks:
test-network:
volumes:
backend_uploads:
neo4j_data:

View File

@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@ -0,0 +1,24 @@
apiVersion: v2
name: ocelot-neo4j
description: A Helm chart for the neo4j database of ocelot-social
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "3.2.0"

View File

@ -0,0 +1,34 @@
apiVersion: batch/v1
kind: Job
metadata:
name: {{ .Release.Name }}-neo4j-backup
spec:
template:
spec:
restartPolicy: OnFailure
containers:
- name: container-{{ .Release.Name }}-neo4j-backup
image: "{{ .Values.neo4j.image.repository }}:{{ default .Values.global.image.tag .Values.neo4j.image.tag .Chart.AppVersion "latest" }}"
imagePullPolicy: {{ quote .Values.global.image.pullPolicy }}
command:
- neo4j-admin
- dump
- --to
- "/backups/neo4j-dump-{{ now | date "20060102150405" }}"
envFrom:
- configMapRef:
name: {{ .Release.Name }}-neo4j-env
- secretRef:
name: {{ .Release.Name }}-neo4j-secret-env
volumeMounts:
- mountPath: /data/
name: neo4j-data
- mountPath: /backups/
name: neo4j-backups
volumes:
- name: neo4j-data
persistentVolumeClaim:
claimName: {{ .Release.Name }}-neo4j-data
- name: neo4j-backups
persistentVolumeClaim:
claimName: {{ .Release.Name }}-neo4j-backups

View File

@ -0,0 +1,10 @@
{{- define "defaultTag" -}}
{{- .Values.global.image.tag | default .Chart.AppVersion }}
{{- end -}}
{{- define "resources" }}
{{- if . }}
resources:
{{ . | toYaml | indent 2 }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,6 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: {{ .Release.Name }}-neo4j-env
data:
{{ .Values.neo4j.env | toYaml | indent 2 }}

View File

@ -0,0 +1,22 @@
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ .Release.Name }}-neo4j-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.neo4j.storage }}
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ .Release.Name }}-neo4j-backups
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.neo4j.storageBackups }}

View File

@ -0,0 +1,6 @@
kind: Secret
apiVersion: v1
metadata:
name: {{ .Release.Name }}-neo4j-secret-env
stringData:
{{ .Values.secrets.neo4j.env | toYaml | indent 2 }}

View File

@ -0,0 +1,14 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Release.Name }}-neo4j
spec:
ports:
- name: {{ .Release.Name }}-bolt
port: 7687
targetPort: 7687
- name: {{ .Release.Name }}-http # for debugging only
port: 7474
targetPort: 7474
selector:
app: {{ .Release.Name }}-neo4j

View File

@ -0,0 +1,38 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ .Release.Name }}-neo4j
spec:
replicas: 1
selector:
matchLabels:
app: {{ .Release.Name }}-neo4j
template:
metadata:
name: neo4j
annotations:
backup.velero.io/backup-volumes: neo4j-data
labels:
app: {{ .Release.Name }}-neo4j
spec:
restartPolicy: Always
containers:
- name: container-{{ .Release.Name }}-neo4j
image: "{{ .Values.neo4j.image.repository }}:{{ .Values.neo4j.image.tag | default (include "defaultTag" .) }}"
imagePullPolicy: {{ quote .Values.global.image.pullPolicy }}
{{- include "resources" .Values.neo4j.resources | indent 8 }}
ports:
- containerPort: 7687
- containerPort: 7474
envFrom:
- configMapRef:
name: {{ .Release.Name }}-neo4j-env
- secretRef:
name: {{ .Release.Name }}-neo4j-secret-env
volumeMounts:
- mountPath: /data/
name: neo4j-data
volumes:
- name: neo4j-data
persistentVolumeClaim:
claimName: {{ .Release.Name }}-neo4j-data

View File

@ -0,0 +1,25 @@
underMaintenance: false
global:
image:
tag:
neo4j:
image:
repository: ghcr.io/ocelot-social-community/ocelot-social/neo4j
tag:
storage: "5Gi"
storageBackups: "10Gi"
env:
NEO4J_ACCEPT_LICENSE_AGREEMENT: "no"
NEO4J_AUTH: "none"
NEO4J_dbms_connector_bolt_thread__pool__max__size: "400"
NEO4J_dbms_memory_heap_initial__size: ""
NEO4J_dbms_memory_heap_max__size: ""
NEO4J_dbms_memory_pagecache_size: ""
NEO4J_dbms_security_procedures_unrestricted: "algo.*,apoc.*"
NEO4J_dbms_default__database: neo4j
NEO4J_apoc_import_file_enabled: "false"
NEO4J_dbms_allow__format__migration: "true"
NEO4J_dbms_allow__upgrade: "true"

View File

@ -0,0 +1,24 @@
apiVersion: v2
name: ocelot-social
description: A Helm chart for ocelot-social
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "3.2.0"

View File

@ -0,0 +1,10 @@
{{- define "defaultTag" -}}
{{- .Values.global.image.tag | default .Chart.AppVersion }}
{{- end -}}
{{- define "resources" }}
{{- if . }}
resources:
{{ . | toYaml | indent 2 }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,39 @@
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: {{ .Release.Name }}-letsencrypt-staging
spec:
acme:
# The ACME server URL
server: https://acme-staging-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: {{ quote .Values.secrets.acme_email }}
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: {{ .Release.Name }}-letsencrypt-staging
# Enable the HTTP-01 challenge provider
solvers:
- http01:
ingress:
class: traefik
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: {{ .Release.Name }}-letsencrypt-prod
spec:
acme:
# The ACME server URL
server: https://acme-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: {{ quote .Values.secrets.acme_email }}
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: {{ .Release.Name }}-letsencrypt-prod
# Enable the HTTP-01 challenge provider
solvers:
- http01:
ingress:
class: traefik

View File

@ -0,0 +1,6 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: {{ .Release.Name }}-backend-env
data:
{{ .Values.backend.env | toYaml | indent 2 }}

View File

@ -0,0 +1,10 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ .Release.Name }}-uploads
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.backend.storage }}

View File

@ -0,0 +1,7 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-backend-secret-env
type: Opaque
stringData:
{{ .Values.secrets.backend.env | toYaml | indent 2 }}

View File

@ -0,0 +1,11 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Release.Name }}-backend
spec:
ports:
- name: {{ .Release.Name }}-graphql
port: 4000
targetPort: 4000
selector:
app: {{ .Release.Name }}-backend

View File

@ -0,0 +1,52 @@
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: {{ .Release.Name }}-backend
spec:
selector:
matchLabels:
app: {{ .Release.Name }}-backend
template:
metadata:
annotations:
backup.velero.io/backup-volumes: uploads
labels:
app: {{ .Release.Name }}-backend
spec:
restartPolicy: Always
initContainers:
- name: {{ .Release.Name }}-backend-migrations
image: "{{ .Values.backend.image.repository }}:{{ .Values.backend.image.tag | default (include "defaultTag" .) }}"
imagePullPolicy: {{ quote .Values.global.image.pullPolicy }}
command: ["/bin/sh", "-c", "yarn prod:migrate up"]
{{- include "resources" .Values.backend.resources | indent 10 }}
envFrom:
- configMapRef:
name: {{ .Release.Name }}-backend-env
- secretRef:
name: {{ .Release.Name }}-backend-secret-env
containers:
- name: {{ .Release.Name }}-backend
image: "{{ .Values.backend.image.repository }}:{{ .Values.backend.image.tag | default (include "defaultTag" .) }}"
imagePullPolicy: {{ quote .Values.global.image.pullPolicy }}
{{- include "resources" .Values.backend.resources | indent 10 }}
env:
- name: GRAPHQL_URI
value: "http://{{ .Release.Name }}-backend:4000"
- name: CLIENT_URI
value: "https://{{ .Values.domain }}"
envFrom:
- configMapRef:
name: {{ .Release.Name }}-backend-env
- secretRef:
name: {{ .Release.Name }}-backend-secret-env
ports:
- containerPort: 4000
protocol: TCP
volumeMounts:
- mountPath: /app/public/uploads
name: uploads
volumes:
- name: uploads
persistentVolumeClaim:
claimName: {{ .Release.Name }}-uploads

View File

@ -0,0 +1,6 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}
data:
{{ .Values.configmap | toYaml | indent 2 }}

View File

@ -0,0 +1,65 @@
---
{{- define "joinRedirectMiddlewares" -}}
{{- $local := dict "first" true -}}
{{- range $k, $v := .Values.redirect_domains -}}{{- if not $local.first -}},{{- end -}}{{$.Release.Namespace}}-redirect-{{- $v | replace "." "-" -}}@kubernetescrd{{- $_ := set $local "first" false -}}{{- end -}}
{{- end -}}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}-ocelot
annotations:
cert-manager.io/issuer: {{ .Values.cert_manager.issuer | default (printf "%s-letsencrypt-staging" .Release.Name) }}
traefik.ingress.kubernetes.io/router.middlewares: {{ quote (include "joinRedirectMiddlewares" $)}}
spec:
tls:
- hosts:
- {{ quote .Values.domain }}
{{- range .Values.redirect_domains }}
- {{ quote . }}
{{- end }}
secretName: {{ .Release.Name }}-letsencrypt-tls
rules:
- host: {{ quote .Values.domain }}
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
{{- if .Values.underMaintenance }}
name: {{ .Release.Name }}-maintenance
port:
number: 80
{{- else }}
name: {{ .Release.Name }}-webapp
port:
number: 3000
{{- end }}
{{- range .Values.redirect_domains }}
- host: {{ quote . }} # the service must be defined, else the redirect is not working
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: {{ $.Release.Name }}-maintenance
port:
number: 80
{{- end }}
{{- range .Values.redirect_domains }}
---
# Redirect with domain replacement
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: redirect-{{ . | replace "." "-" }}
spec:
redirectRegex:
regex: ^https://{{ . }}(.*)
replacement: https://{{ $.Values.domain }}${1}
permanent: true
{{- end }}

View File

@ -0,0 +1,24 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ .Release.Name }}-maintenance
spec:
selector:
matchLabels:
app: {{ .Release.Name }}-maintenance
template:
metadata:
labels:
app: {{ .Release.Name }}-maintenance
spec:
restartPolicy: Always
containers:
- name: {{ .Release.Name }}-maintenance
image: "{{ .Values.maintenance.image.repository }}:{{ .Values.maintenance.image.tag | default (include "defaultTag" .) }}"
imagePullPolicy: {{ quote .Values.global.image.pullPolicy }}
{{- include "resources" .Values.maintenance.resources | indent 8 }}
env:
- name: HOST
value: 0.0.0.0
ports:
- containerPort: 80

View File

@ -0,0 +1,11 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Release.Name }}-maintenance
spec:
ports:
- name: {{ .Release.Name }}-http
port: 80
targetPort: 80
selector:
app: {{ .Release.Name }}-maintenance

View File

@ -0,0 +1,6 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: {{ .Release.Name }}-webapp-env
data:
{{ .Values.webapp.env | toYaml | indent 2 }}

View File

@ -0,0 +1,34 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ .Release.Name }}-webapp
spec:
replicas: 1
selector:
matchLabels:
app: {{ .Release.Name }}-webapp
template:
metadata:
labels:
app: {{ .Release.Name }}-webapp
spec:
restartPolicy: Always
containers:
- name: {{ .Release.Name }}-webapp
image: "{{ .Values.webapp.image.repository }}:{{ .Values.webapp.image.tag | default (include "defaultTag" .) }}"
imagePullPolicy: {{ quote .Values.global.image.pullPolicy }}
{{- include "resources" .Values.webapp.resources | indent 8 }}
ports:
- containerPort: 3000
env:
- name: WEBSOCKETS_URI
value: "wss://{{ .Values.domain }}/api/graphql"
- name: HOST
value: "0.0.0.0"
- name: GRAPHQL_URI
value: "http://{{ .Release.Name }}-backend:4000"
envFrom:
- configMapRef:
name: {{ .Release.Name }}-webapp-env
- secretRef:
name: {{ .Release.Name }}-webapp-secret-env

View File

@ -0,0 +1,7 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-webapp-secret-env
type: Opaque
stringData:
{{ .Values.secrets.webapp.env | toYaml | indent 2 }}

View File

@ -0,0 +1,11 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Release.Name }}-webapp
spec:
ports:
- name: {{ .Release.Name }}-http
port: 3000
targetPort: 3000
selector:
app: {{ .Release.Name }}-webapp

View File

@ -0,0 +1,27 @@
domain: stage.ocelot.social
redirect_domains: []
cert_manager:
issuer:
underMaintenance: false
global:
image:
pullPolicy: IfNotPresent
tag:
backend:
image:
repository: ghcr.io/ocelot-social-community/ocelot-social/backend
storage: "10Gi"
env:
NEO4J_URI: "bolt://ocelot-social-neo4j:7687"
webapp:
image:
repository: ghcr.io/ocelot-social-community/ocelot-social/webapp
maintenance:
image:
repository: ghcr.io/ocelot-social-community/ocelot-social/maintenance

View File

@ -0,0 +1,16 @@
releases:
- name: ocelot-social
namespace: ocelot-social
chart: ../charts/ocelot-social
values:
- ./values/ocelot.yaml
secrets:
- ./secrets/ocelot.yaml
- name: ocelot-neo4j
namespace: ocelot-social
chart: ../charts/ocelot-neo4j
values:
- ./values/ocelot.yaml
secrets:
- ./secrets/ocelot.yaml

View File

@ -0,0 +1,76 @@
secrets:
acme_email: ENC[AES256_GCM,data:o+2HnrEqa/uXJwqUwdYU14FiZYPfLcKqkQ==,iv:1ouUU4ewzRL4ZDnwJm6BTVg3a64iC5+I2v+AWIF8W2Q=,tag:7ytv959cVmgSmXMC7A8zxA==,type:str]
webapp:
env:
MAPBOX_TOKEN: ENC[AES256_GCM,data:7Ka4BvQh6NDw9NKUcgGjLwxNHOqhVrZEj/DcGnyv1nXQIG/2WWGGHazAFWUCFpCUmCSaTPSkyLHPFyGQtQ7VAON3AG3tHtv5JvcBb4KDYrjAIzxhAAiHMYFtVJs=,iv:X0YL2dW42TUidJdBlRKb4Vq86X1OzHqipNHTBxmE7ds=,tag:KDH9NwDy6ghqdkXeZxuHgg==,type:str]
backend:
env:
JWT_SECRET: ENC[AES256_GCM,data:8qGviTFMOv9QyoNVwnlFNZ2PmvedbKJM,iv:rmZgs8h2QVsokzMzdGdEcInBLv8AX3xFUjkGhTf3sF0=,tag:SUJpMaIGAb14yg8RxCVUtA==,type:str]
MAPBOX_TOKEN: ENC[AES256_GCM,data:qK6iTYKiWfkvXBodm8zVmfr5ACTTz1+7Pt7Q/hwgv3SYERyo5NyqfsvbVKuDAD90kTCNODpSwUApJE6do/Umedg4s8mrnHXCckIDbX5BztoeHJBehsUC54ELcrQ=,iv:b65yqfdoOX366UXt7HS6nhL8hlZn4l5hQfrhI6NXc+I=,tag:vF48V+TRS5g9ezXhzAJnPw==,type:str]
PRIVATE_KEY_PASSPHRASE: ENC[AES256_GCM,data:05WXBFKIk0BtfUYmkWSwAP+/Y7v18LUow4X/,iv:y7VyymcoRLr2CK96BiErXvKP2Gn/QhECBZyeP+wo8LA=,tag:Hg/fIGyIDMY8P3mWfVupCw==,type:str]
#ENC[AES256_GCM,data:llx+JN8fRqwrLd2ahkmPrhPwcGIkn695l3Ox8VEs9YAR+1wpz3yujA==,iv:4Ctez8zMeqo3cpCCUVy6ZP4T1Z/myPw/FTq+++YAYbc=,tag:al/J8DLqNz6CoLl+TgUdOw==,type:comment]
EMAIL_DEFAULT_SENDER: ENC[AES256_GCM,data:z1EyEokf/TNkFLhRzsCbHew/6T8=,iv:Satr1c8aZQE73ZolC6n+PO74r+Gj3un5Mj0DIYb3n14=,tag:iK6l0GXuhLauBtFXTmLyKQ==,type:str]
SMTP_HOST: ENC[AES256_GCM,data:r0qbaUBB3CSUHR76,iv:TJIx71HW1aBB0sCEd1TB/tTgPBxLR1sdGAEf0t7Qilg=,tag:arXYtwVbIXVaUJpyommokQ==,type:str]
SMTP_USERNAME: ENC[AES256_GCM,data:lZ05DvSu,iv:Tyu7poao1shqKGd/sjTCgGNHU1xgRpjwjMRd+ArGf6o=,tag:dKms4G683JvFzja7YOwYKg==,type:str]
SMTP_PASSWORD: ENC[AES256_GCM,data:c9rnPIaKHIh2LNIJON3ib1IsA09OWGchDxRPRpvrtJw=,iv:08Acxl74lJbYtEEU6crVIYRXwkER8t1XPrhBA2PwEio=,tag:F0xrrt2PkBUMEyp7a81ssw==,type:str]
SMTP_PORT: ENC[AES256_GCM,data:MGmv,iv:IFg6oEncN0ICEmw96XL4EuPKqEZ6KLwU5FJYkveMSpY=,tag:kIVXlt0o5TfhOtRVqU/c4w==,type:str]
SMTP_IGNORE_TLS: ENC[AES256_GCM,data:ORAIWtg=,iv:6X4V3RDeYHrFdBTjsb3Ji0KWsZ2meL8ilqHNGQbcV/M=,tag:R87FgoQwqpes+0ejcOlrPg==,type:str]
#ENC[AES256_GCM,data:wEE3/SPsZqy9LATseOZG7LsCbjG5gY4VUT/TzxhHLJqcYP5I,iv:gcOA0XiUGWq15G4zTRPZ0qZ/XYMTjr+9krbOx0dwpeY=,tag:jd8LTiVT7UQShqMR9zZUZA==,type:comment]
SMTP_SECURE: ENC[AES256_GCM,data:PowbGhU=,iv:a1dK5AVySu749vPQvX9OLfMuD+tZkLNtXTMr17+4KuA=,tag:fuJQ7c4RBl25If01MSAmug==,type:str]
SMTP_DKIM_PRIVATKEY: null
SMTP_DKIM_DOMAINNAME: null
SMTP_DKIM_KEYSELECTOR: null
NEO4J_USERNAME: null
NEO4J_PASSWORD: null
REDIS_PASSWORD: null
neo4j:
env:
NEO4J_USERNAME: ""
NEO4J_PASSWORD: ""
sops:
kms: []
gcp_kms: []
azure_kv: []
hc_vault: []
age:
- recipient: age1llp6k66265q3rzqemxpnq0x3562u20989vcjf65fl9s3hjhgcscq6mhnjw
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBRbjk3QXdyZU5yZnE0dElE
SW91VGIvSnovRmc4MCtiNDhET3RHQTFoakd3ClB4RlZUZXRwSTgvUTR3Q1AwUGJo
NEpySWVEOFE4ZmIzek03NzczeVhyY0EKLS0tIG9SZ2ZwQXdFSUVTbWxCQXpUeWd2
VDlsRlY2Z1RjWFZjcU9UeUpJZHJuSmMKTuy/s49nIwfRQyDyCGBWZPvyR9oNEXxV
6C0oVQXVTifkMvDet3dZWnOy6TeMkZBLD4BZHXSI+l6DkNdmIiwIpw==
-----END AGE ENCRYPTED FILE-----
- recipient: age1zycwtk6dkxj6vuqhj9jw7932ythky9p3att6df4z9qasyw8v5dxquejcmp
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBQaHd0YW83bS9NZ1RBSWl6
cU0vMStYT3QxOFhOYmdNMUpNaHBLOVJGUFJVCnRjbWswbDhzOStFZTdXSVhTemJx
TVo1YnpxMDZxd1NWMVpNYXlYbzZtaVkKLS0tIGhmaHZzc2hnYi9WSStpc2lkbkRP
MElZK25Nc0lZTXBtc1BOQUpCandFKzAKnareBqzmHiSY551Iw8zPNg6aJN2QM0iN
f05TgS58OSEzXL60/9wBEN+E4Y1VErwOYP9CH8MdiAv1iRwLYgSJ/Q==
-----END AGE ENCRYPTED FILE-----
- recipient: age15arcg8x6ltnsacwalvny0h2d4d4wkdmax328mw3v5vda9zm97uqshtavmr
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSAyUWtnd1JObWNZZzZtWndv
dVhLWlRSNDNacHdSMXJ1ejV2RC80elA2TG1rCmc1MTFSMlpYM3hsSDNwWUJ0R3NC
Y2RrT2pZQllyTkdpcEs2akF0cENpc0EKLS0tIDFxV1B6bzZZVFVlSk5qZWxDbEd4
MkpsL3phc0M0VXBuUGQ2dFZOZHlKS1EKEmCasI2+d4FBgiI4Ter8Gxbl87yrfBq+
xze5n0df0GKK6JsML/0m2Z7HoqtCAEsjEfm45GdfAaiqPVh7gJG8TQ==
-----END AGE ENCRYPTED FILE-----
- recipient: age1khw2eps099audp3uu5s9rk07qznllh5c8a43gv5dtpnq2a7lue6qrehn5s
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBRcDlPb1BlVUIwSEUvTjBx
KytIS0xQWjlzeEJPSDI5SEg5RmpXWFhKZVRvCm1XLzlMUmo1U1BZL2ZFS25GSkhY
V0tESW1hYTU0V01UQzEvNjZjMDk2WDAKLS0tIEl5TG84VE1UN0V3bk13cFU3bTUr
aGNFeXZZRmlJM041OHdTM0pmM3BBdGMKGvFgYY1jhKwciAOZKyw0hlFVNbOk7CM7
041g17JXNV1Wk6WgMZ4w8p54RKQVaWCT4wxChy6wNNdQ3IeKgqEU2w==
-----END AGE ENCRYPTED FILE-----
lastmodified: "2024-10-29T14:26:49Z"
mac: ENC[AES256_GCM,data:YXX7MEAK0wmuxLTmdr7q5uVd6DG6FhGUeE+EzbhWe/OovH6n+CjKZGklnEX+5ztDO0IgZh/T9Hx1CgFYuVbcOkvDoFBDwNpRA/QOQrM0p/+tRlMNCypC/Wh2xL0DhA4A/Qum2oyE/BDkt1Yy8N5wZDZn575+ZAjXEgAzlhpT5qk=,iv:ire3gkHTY6+0lgbV1Es6Lf8bcKTg4WKnq46M+b/VRcU=,tag:MkZULKcwROvIw/C0YtcUbA==,type:str]
pgp: []
unencrypted_suffix: _unencrypted
version: 3.9.0

View File

@ -0,0 +1,2 @@
cert_manager:
issuer: ocelot-social-letsencrypt-prod

View File

@ -1,74 +0,0 @@
#!/bin/bash
# for a branded version you should pass the following env variables:
# CONFIGURATION - your configuration folder name
# DOCKERHUB_ORGANISATION - your dockerhub organisation
# OCELOT_VERSION - specify the specific tag to build upon e.g. 2.4.0-300
# base setup
SCRIPT_PATH=$(realpath $0)
SCRIPT_DIR=$(dirname $SCRIPT_PATH)
# check CONFIGURATION
if [ -z ${CONFIGURATION} ]; then
echo "You must provide a `CONFIGURATION` via environment variable"
exit 1
fi
echo "Using CONFIGURATION=${CONFIGURATION}"
# check DOCKERHUB_BRAND_VARRIANT
if [ -z ${DOCKERHUB_BRAND_VARRIANT} ]; then
echo "You must provide a `DOCKERHUB_BRAND_VARRIANT` via environment variable"
exit 1
fi
echo "Using DOCKERHUB_BRAND_VARRIANT=${DOCKERHUB_BRAND_VARRIANT}"
# configuration
DOCKERHUB_ORGANISATION=${DOCKERHUB_ORGANISATION:-"ocelotsocialnetwork"}
OCELOT_VERSION=${OCELOT_VERSION:-$(node -p -e "require('${SCRIPT_DIR}/../../package.json').version")}
OCELOT_GITHUB_RUN_NUMBER=${OCELOT_GITHUB_RUN_NUMBER:-master}
OCELOT_VERSION_BUILD=${OCELOT_VERSION_BUILD:-${OCELOT_VERSION}-${OCELOT_GITHUB_RUN_NUMBER}}
BRANDED_VERSION=${BRANDED_VERSION:-${GITHUB_RUN_NUMBER:-"local"}}
BUILD_DATE=${BUILD_DATE:-$(date -u +'%Y-%m-%dT%H:%M:%SZ')}
BUILD_VERSION_BASE=${BRANDED_VERSION}-ocelot.social${OCELOT_VERSION}
BUILD_VERSION=${BRANDED_VERSION}-ocelot.social${OCELOT_VERSION_BUILD}
BUILD_COMMIT=${GITHUB_SHA:-"0000000"}
# backend
docker build --target branded \
-t "${DOCKERHUB_ORGANISATION}/backend-${DOCKERHUB_BRAND_VARRIANT}:latest" \
-t "${DOCKERHUB_ORGANISATION}/backend-${DOCKERHUB_BRAND_VARRIANT}:${OCELOT_VERSION}" \
-t "${DOCKERHUB_ORGANISATION}/backend-${DOCKERHUB_BRAND_VARRIANT}:${OCELOT_VERSION_BUILD}" \
-t "${DOCKERHUB_ORGANISATION}/backend-${DOCKERHUB_BRAND_VARRIANT}:${BUILD_VERSION_BASE}" \
-t "${DOCKERHUB_ORGANISATION}/backend-${DOCKERHUB_BRAND_VARRIANT}:${BUILD_VERSION}" \
-f "${SCRIPT_DIR}/../src/docker/backend.Dockerfile" \
--build-arg "CONFIGURATION=${CONFIGURATION}" \
--build-arg "APP_IMAGE_TAG_CODE=${OCELOT_VERSION}-code" \
--build-arg "APP_IMAGE_TAG_BASE=${OCELOT_VERSION}-base" \
"${SCRIPT_DIR}/../."
# webapp
docker build --target branded \
-t "${DOCKERHUB_ORGANISATION}/webapp-${DOCKERHUB_BRAND_VARRIANT}:latest" \
-t "${DOCKERHUB_ORGANISATION}/webapp-${DOCKERHUB_BRAND_VARRIANT}:${OCELOT_VERSION}" \
-t "${DOCKERHUB_ORGANISATION}/webapp-${DOCKERHUB_BRAND_VARRIANT}:${OCELOT_VERSION_BUILD}" \
-t "${DOCKERHUB_ORGANISATION}/webapp-${DOCKERHUB_BRAND_VARRIANT}:${BUILD_VERSION_BASE}" \
-t "${DOCKERHUB_ORGANISATION}/webapp-${DOCKERHUB_BRAND_VARRIANT}:${BUILD_VERSION}" \
-f "${SCRIPT_DIR}/../src/docker/webapp.Dockerfile" \
--build-arg "CONFIGURATION=${CONFIGURATION}" \
--build-arg "APP_IMAGE_TAG_CODE=${OCELOT_VERSION}-code" \
--build-arg "APP_IMAGE_TAG_BASE=${OCELOT_VERSION}-base" \
"${SCRIPT_DIR}/../."
# mainteance
docker build --target branded \
-t "${DOCKERHUB_ORGANISATION}/maintenance-${DOCKERHUB_BRAND_VARRIANT}:latest" \
-t "${DOCKERHUB_ORGANISATION}/maintenance-${DOCKERHUB_BRAND_VARRIANT}:${OCELOT_VERSION}" \
-t "${DOCKERHUB_ORGANISATION}/maintenance-${DOCKERHUB_BRAND_VARRIANT}:${OCELOT_VERSION_BUILD}" \
-t "${DOCKERHUB_ORGANISATION}/maintenance-${DOCKERHUB_BRAND_VARRIANT}:${BUILD_VERSION_BASE}" \
-t "${DOCKERHUB_ORGANISATION}/maintenance-${DOCKERHUB_BRAND_VARRIANT}:${BUILD_VERSION}" \
-f "${SCRIPT_DIR}/../src/docker/maintenance.Dockerfile" \
--build-arg "CONFIGURATION=${CONFIGURATION}" \
--build-arg "APP_IMAGE_TAG_CODE=${OCELOT_VERSION}-code" \
--build-arg "APP_IMAGE_TAG_BASE=${OCELOT_VERSION}-base" \
"${SCRIPT_DIR}/../."

View File

@ -1,51 +0,0 @@
#!/bin/bash
# for a branded version you should pass the following env variables:
# DOCKERHUB_ORGANISATION - your dockerhub organisation
# OCELOT_VERSION - specify the specific tag to build upon e.g. 2.4.0-300
# DOCKERHUB_USERNAME - your dockerhub username
# DOCKERHUB_TOKEN - your dockerhub access token
# base setup
SCRIPT_PATH=$(realpath $0)
SCRIPT_DIR=$(dirname $SCRIPT_PATH)
# check DOCKERHUB_BRAND_VARRIANT
if [ -z ${DOCKERHUB_BRAND_VARRIANT} ]; then
echo "You must provide a `DOCKERHUB_BRAND_VARRIANT` via environment variable"
exit 1
fi
echo "Using DOCKERHUB_BRAND_VARRIANT=${DOCKERHUB_BRAND_VARRIANT}"
# configuration
DOCKERHUB_ORGANISATION=${DOCKERHUB_ORGANISATION:-"ocelotsocialnetwork"}
OCELOT_VERSION=${OCELOT_VERSION:-$(node -p -e "require('${SCRIPT_DIR}/../../package.json').version")}
OCELOT_GITHUB_RUN_NUMBER=${OCELOT_GITHUB_RUN_NUMBER:-master}
OCELOT_VERSION_BUILD=${OCELOT_VERSION_BUILD:-${OCELOT_VERSION}-${OCELOT_GITHUB_RUN_NUMBER}}
BRANDED_VERSION=${BRANDED_VERSION:-${GITHUB_RUN_NUMBER:-"local"}}
BUILD_VERSION_BASE=${BRANDED_VERSION}-ocelot.social${OCELOT_VERSION}
BUILD_VERSION=${BRANDED_VERSION}-ocelot.social${OCELOT_VERSION_BUILD}
# login to dockerhub
echo "${DOCKERHUB_TOKEN}" | docker login -u "${DOCKERHUB_USERNAME}" --password-stdin
# push backend images
docker push ${DOCKERHUB_ORGANISATION}/backend-${DOCKERHUB_BRAND_VARRIANT}:latest
docker push ${DOCKERHUB_ORGANISATION}/backend-${DOCKERHUB_BRAND_VARRIANT}:${OCELOT_VERSION}
docker push ${DOCKERHUB_ORGANISATION}/backend-${DOCKERHUB_BRAND_VARRIANT}:${OCELOT_VERSION_BUILD}
docker push ${DOCKERHUB_ORGANISATION}/backend-${DOCKERHUB_BRAND_VARRIANT}:${BUILD_VERSION_BASE}
docker push ${DOCKERHUB_ORGANISATION}/backend-${DOCKERHUB_BRAND_VARRIANT}:${BUILD_VERSION}
# push webapp images
docker push ${DOCKERHUB_ORGANISATION}/webapp-${DOCKERHUB_BRAND_VARRIANT}:latest
docker push ${DOCKERHUB_ORGANISATION}/webapp-${DOCKERHUB_BRAND_VARRIANT}:${OCELOT_VERSION}
docker push ${DOCKERHUB_ORGANISATION}/webapp-${DOCKERHUB_BRAND_VARRIANT}:${OCELOT_VERSION_BUILD}
docker push ${DOCKERHUB_ORGANISATION}/webapp-${DOCKERHUB_BRAND_VARRIANT}:${BUILD_VERSION_BASE}
docker push ${DOCKERHUB_ORGANISATION}/webapp-${DOCKERHUB_BRAND_VARRIANT}:${BUILD_VERSION}
# push maintenance images
docker push ${DOCKERHUB_ORGANISATION}/maintenance-${DOCKERHUB_BRAND_VARRIANT}:latest
docker push ${DOCKERHUB_ORGANISATION}/maintenance-${DOCKERHUB_BRAND_VARRIANT}:${OCELOT_VERSION}
docker push ${DOCKERHUB_ORGANISATION}/maintenance-${DOCKERHUB_BRAND_VARRIANT}:${OCELOT_VERSION_BUILD}
docker push ${DOCKERHUB_ORGANISATION}/maintenance-${DOCKERHUB_BRAND_VARRIANT}:${BUILD_VERSION_BASE}
docker push ${DOCKERHUB_ORGANISATION}/maintenance-${DOCKERHUB_BRAND_VARRIANT}:${BUILD_VERSION}

View File

@ -1,22 +0,0 @@
#!/bin/bash
# time stamp
printf "Neo4J bash :\n "
date
# base setup
SCRIPT_PATH=$(realpath $0)
SCRIPT_DIR=$(dirname $SCRIPT_PATH)
# check CONFIGURATION
if [[ -z "$CONFIGURATION" ]]; then
echo "!!! You must provide a CONFIGURATION via environment variable !!!"
exit 1
fi
printf " Cluster: %s\n" $CONFIGURATION
# configuration
KUBECONFIG=${KUBECONFIG:-${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubeconfig.yaml}
kubectl --kubeconfig=${KUBECONFIG} -n default exec -it $(kubectl --kubeconfig=${KUBECONFIG} -n default get pods | grep ocelot-backend | awk '{ print $1 }') -- /bin/sh

View File

@ -1,46 +0,0 @@
#!/bin/bash
# time stamp
printf "Backup started at:\n "
date
# base setup
SCRIPT_PATH=$(realpath $0)
SCRIPT_DIR=$(dirname $SCRIPT_PATH)
# check CONFIGURATION
if [[ -z "$CONFIGURATION" ]]; then
echo "!!! You must provide a CONFIGURATION via environment variable !!!"
exit 1
fi
printf " Cluster: %s\n" $CONFIGURATION
# configuration
KUBECONFIG=${KUBECONFIG:-${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubeconfig.yaml}
BACKUP_DATE=$(date "+%F_%H-%M-%S")
BACKUP_FOLDER=${BACKUP_FOLDER:-${SCRIPT_DIR}/../configurations/${CONFIGURATION}/backup/${BACKUP_DATE}}
printf "Backup folder name: %s\n" $BACKUP_DATE
# create backup folder
mkdir -p ${BACKUP_FOLDER}
# cluster maintenance mode on && Neo4j maintenance mode on
${SCRIPT_DIR}/cluster.neo4j.sh maintenance on
# database backup
echo "Dumping database ..."
kubectl --kubeconfig=${KUBECONFIG} -n default exec -it \
$(kubectl --kubeconfig=${KUBECONFIG} -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') \
-- neo4j-admin dump --to=/var/lib/neo4j/$BACKUP_DATE-neo4j-dump
# copy neo4j backup to local drive
echo "Copying database to local file system ..."
kubectl --kubeconfig=${KUBECONFIG} cp \
default/$(kubectl --kubeconfig=${KUBECONFIG} -n default get pods | grep ocelot-neo4j |awk '{ print $1 }'):/var/lib/neo4j/$BACKUP_DATE-neo4j-dump $BACKUP_FOLDER/neo4j-dump
# copy image data
echo "Copying public uploads to local file system ..."
kubectl --kubeconfig=${KUBECONFIG} cp \
default/$(kubectl --kubeconfig=${KUBECONFIG} -n default get pods | grep ocelot-backend |awk '{ print $1 }'):/app/public/uploads $BACKUP_FOLDER/public-uploads
# Neo4j maintenance mode off && cluster maintenance mode off
${SCRIPT_DIR}/cluster.neo4j.sh maintenance off

View File

@ -1,22 +0,0 @@
#!/bin/bash
# time stamp
printf "Token :\n "
date
# base setup
SCRIPT_PATH=$(realpath $0)
SCRIPT_DIR=$(dirname $SCRIPT_PATH)
# check CONFIGURATION
if [[ -z "$CONFIGURATION" ]]; then
echo "!!! You must provide a CONFIGURATION via environment variable !!!"
exit 1
fi
printf " Cluster: %s\n" $CONFIGURATION
# configuration
KUBECONFIG=${KUBECONFIG:-${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubeconfig.yaml}
kubectl --kubeconfig=${KUBECONFIG} create token admin-user -n kubernetes-dashboard

View File

@ -1,37 +0,0 @@
#!/bin/bash
# time stamp
printf "Tunnel started at:\n "
date
# base setup
SCRIPT_PATH=$(realpath $0)
SCRIPT_DIR=$(dirname $SCRIPT_PATH)
# check CONFIGURATION
if [[ -z "$CONFIGURATION" ]]; then
echo "!!! You must provide a CONFIGURATION via environment variable !!!"
exit 1
fi
printf " Cluster: %s\n" $CONFIGURATION
# configuration
KUBECONFIG=${KUBECONFIG:-${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubeconfig.yaml}
kubectl --kubeconfig=${KUBECONFIG} get pods -n kubernetes-dashboard
#kubectl --kubeconfig=${KUBECONFIG} get -o json -n kubernetes-dashboard pod kubernetes-dashboard-kong-5ccb57895b-vxxmf
# export POD_NAME=$(kubectl --kubeconfig=${KUBECONFIG} get pods -n kubernetes-dashboard -l "app.kubernetes.io/name=kubernetes-dashboard-kong,app.kubernetes.io/instance=kubernetes-dashboard" -o jsonpath="{.items[0].metadata.name}")
export POD_NAME=kubernetes-dashboard-kong-5ccb57895b-fzqk6
# export POD_NAME=$(kubectl --kubeconfig=${KUBECONFIG} get pods -n kubernetes-dashboard -l "app.kubernetes.io/name=kubernetes-dashboard,app.kubernetes.io/instance=kubernetes-dashboard" -o jsonpath="{.items[0].metadata.name}")
echo $POD_NAME
kubectl --kubeconfig=${KUBECONFIG} -n kubernetes-dashboard port-forward $POD_NAME 8443:8443
# kubectl --kubeconfig=${KUBECONFIG} -n kubernetes-dashboard create token admin-user
# kubectl --kubeconfig=${KUBECONFIG} apply -f ${SCRIPT_DIR}/../scripts/admin-user.yml

View File

@ -1,56 +0,0 @@
#!/bin/bash
# base setup
SCRIPT_PATH=$(realpath $0)
SCRIPT_DIR=$(dirname $SCRIPT_PATH)
# check CONFIGURATION
if [ -z ${CONFIGURATION} ]; then
echo "You must provide a `CONFIGURATION` via environment variable"
exit 1
fi
echo "Using CONFIGURATION=${CONFIGURATION}"
# configuration
KUBECONFIG=${KUBECONFIG:-${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubeconfig.yaml}
VALUES=${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubernetes/values.yaml
DOCKERHUB_OCELOT_TAG=${DOCKERHUB_OCELOT_TAG:-"latest"}
## install Ingress-Nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install \
ingress-nginx ingress-nginx/ingress-nginx \
--kubeconfig=${KUBECONFIG} \
-f ${SCRIPT_DIR}/../src/kubernetes/nginx.values.yaml
## install Cert-Manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
cert-manager jetstack/cert-manager \
--kubeconfig=${KUBECONFIG} \
--namespace cert-manager \
--create-namespace \
--version v1.13.2 \
--set installCRDs=true
## install Ocelot with helm
helm install \
ocelot \
--kubeconfig=${KUBECONFIG} \
--values ${VALUES} \
--set appVersion="${DOCKERHUB_OCELOT_TAG}" \
${SCRIPT_DIR}/../src/kubernetes/ \
--timeout 10m
## set Neo4j database indexes, constrains, and initial admin account plus run migrate up
kubectl --kubeconfig=${KUBECONFIG} \
-n default \
exec -it \
$(kubectl --kubeconfig=${KUBECONFIG} -n default get pods | grep ocelot-backend | awk '{ print $1 }') -- \
/bin/sh -c "yarn prod:migrate init && yarn prod:migrate up"
# /bin/sh -c "node --experimental-repl-await build/src/db/clean.js && node --experimental-repl-await build/src/db/seed.js"
echo "!!! You must install a firewall or similar !!! (for DigitalOcean see: deployment/src/kubernetes/README.md)"

View File

@ -1,30 +0,0 @@
#!/bin/bash
# base setup
SCRIPT_PATH=$(realpath $0)
SCRIPT_DIR=$(dirname $SCRIPT_PATH)
# check CONFIGURATION
if [[ -z "$CONFIGURATION" ]]; then
echo "You must provide a `CONFIGURATION` via environment variable"
exit 1
fi
echo "Using CONFIGURATION=${CONFIGURATION}"
# configuration
KUBECONFIG=${KUBECONFIG:-${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubeconfig.yaml}
case $1 in
on)
echo "Network maintenance: on"
kubectl --kubeconfig=${KUBECONFIG} patch ingress ingress-ocelot-webapp --type merge --patch-file ${SCRIPT_DIR}/../src/kubernetes/patches/patch.ingress.maintenance.on.yaml
;;
off)
echo "Network maintenance: off"
kubectl --kubeconfig=${KUBECONFIG} patch ingress ingress-ocelot-webapp --type merge --patch-file ${SCRIPT_DIR}/../src/kubernetes/patches/patch.ingress.maintenance.off.yaml
;;
*)
echo -e "Run this script with first argument either 'on' or 'off'"
exit
;;
esac

View File

@ -1,22 +0,0 @@
#!/bin/bash
# time stamp
printf "Neo4J bash :\n "
date
# base setup
SCRIPT_PATH=$(realpath $0)
SCRIPT_DIR=$(dirname $SCRIPT_PATH)
# check CONFIGURATION
if [[ -z "$CONFIGURATION" ]]; then
echo "!!! You must provide a CONFIGURATION via environment variable !!!"
exit 1
fi
printf " Cluster: %s\n" $CONFIGURATION
# configuration
KUBECONFIG=${KUBECONFIG:-${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubeconfig.yaml}
kubectl --kubeconfig=${KUBECONFIG} -n default exec -it $(kubectl --kubeconfig=${KUBECONFIG} -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') -- bash

View File

@ -1,57 +0,0 @@
#!/bin/bash
# base setup
SCRIPT_PATH=$(realpath $0)
SCRIPT_DIR=$(dirname $SCRIPT_PATH)
# check CONFIGURATION
if [[ -z "$CONFIGURATION" ]]; then
echo "You must provide a `CONFIGURATION` via environment variable"
exit 1
fi
# configuration
KUBECONFIG=${KUBECONFIG:-${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubeconfig.yaml}
case $1 in
maintenance)
case $2 in
on)
# maintenance mode on
${SCRIPT_DIR}/cluster.maintenance.sh on
# set Neo4j in offline mode (maintenance)
echo "Neo4j maintenance: on"
kubectl --kubeconfig=${KUBECONFIG} get deployment ocelot-neo4j -o json \
| jq '.spec.template.spec.containers[] += {"command": ["tail", "-f", "/dev/null"]}' \
| kubectl --kubeconfig=${KUBECONFIG} apply -f -
# wait for the container to restart
echo "Wait 60s ..."
sleep 60
;;
off)
# set Neo4j in online mode
echo "Neo4j maintenance: off"
kubectl --kubeconfig=${KUBECONFIG} get deployment ocelot-neo4j -o json \
| jq 'del(.spec.template.spec.containers[].command)' \
| kubectl --kubeconfig=${KUBECONFIG} apply -f -
# wait for the container to restart
echo "Wait 60s ..."
sleep 60
# maintenance mode off
${SCRIPT_DIR}/cluster.maintenance.sh off
;;
*)
echo -e "Run this script with first argument either 'off' or 'on'"
exit
;;
esac
;;
*)
echo -e "Run this script with first argument 'maintenance'"
exit
;;
esac

View File

@ -1,18 +0,0 @@
#!/bin/bash
# base setup
SCRIPT_PATH=$(realpath $0)
SCRIPT_DIR=$(dirname $SCRIPT_PATH)
# check CONFIGURATION
if [ -z ${CONFIGURATION} ]; then
echo "You must provide a `CONFIGURATION` via environment variable"
exit 1
fi
echo "Using CONFIGURATION=${CONFIGURATION}"
# configuration
KUBECONFIG=${KUBECONFIG:-${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubeconfig.yaml}
# clean & seed
kubectl --kubeconfig=${KUBECONFIG} -n default exec -it $(kubectl --kubeconfig=${KUBECONFIG} -n default get pods | grep ocelot-backend | awk '{ print $1 }') -- /bin/sh -c "node --experimental-repl-await build/src/db/clean.js && node --experimental-repl-await build/src/db/seed.js"

View File

@ -1,24 +0,0 @@
#!/bin/bash
# base setup
SCRIPT_PATH=$(realpath $0)
SCRIPT_DIR=$(dirname $SCRIPT_PATH)
# check CONFIGURATION
if [ -z ${CONFIGURATION} ]; then
echo "You must provide a `CONFIGURATION` via environment variable"
exit 1
fi
echo "Using CONFIGURATION=${CONFIGURATION}"
# configuration
KUBECONFIG=${KUBECONFIG:-${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubeconfig.yaml}
VALUES=${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubernetes/values.yaml
DOCKERHUB_OCELOT_TAG=${DOCKERHUB_OCELOT_TAG:-"latest"}
# upgrade with helm
helm --kubeconfig=${KUBECONFIG} upgrade ocelot \
--values ${VALUES} \
--set appVersion="${DOCKERHUB_OCELOT_TAG}" \
${SCRIPT_DIR}/../src/kubernetes/ \
--timeout 10m

View File

@ -1,91 +0,0 @@
#!/bin/bash
# time stamp
printf "\n\nMultiple backups started at:\n "
date
# base setup
SCRIPT_PATH=$(realpath $0)
SCRIPT_DIR=$(dirname $SCRIPT_PATH)
# save old CONFIGURATION for later reset
export SAVE_CONFIGURATION=$CONFIGURATION
# export all variables in "../.env"
set -a
source ${SCRIPT_DIR}/../.env
set +a
# check BACKUP_CONFIGURATIONS
if [[ -z "$BACKUP_CONFIGURATIONS" ]]; then
#%! echo "You must provide a BACKUP_CONFIGURATIONS via environment variable"
printf "!!! You must provide a BACKUP_CONFIGURATIONS via environment variable !!!\n"
exit 1
fi
# check BACKUP_SAVED_BACKUPS_NUMBER
if [[ -z ${BACKUP_SAVED_BACKUPS_NUMBER} ]]; then
#%! echo "You must provide a BACKUP_SAVED_BACKUPS_NUMBER via environment variable"
printf "!!! You must provide a BACKUP_SAVED_BACKUPS_NUMBER via environment variable !!!\n"
exit 1
fi
# convert configurations to array
IFS=' ' read -a CONFIGURATIONS_ARRAY <<< "$BACKUP_CONFIGURATIONS"
# display the clusters
printf "Backup the clusters:\n"
for i in "${CONFIGURATIONS_ARRAY[@]}"
do
echo " $i"
done
# deleting backups?
if (( BACKUP_SAVED_BACKUPS_NUMBER >= 1 )); then
printf "Keep the last %d backups for all networks.\n" $BACKUP_SAVED_BACKUPS_NUMBER
else
echo "!!! ATTENTION: No backups are deleted !!!"
fi
echo "Cancel by ^C. You have 15 seconds"
# wait for the admin to react
sleep 15
printf "\n"
for i in "${CONFIGURATIONS_ARRAY[@]}"
do
export CONFIGURATION=$i
# individual cluster backup
${SCRIPT_DIR}/cluster.backup.sh
# deleting backups?
if (( BACKUP_SAVED_BACKUPS_NUMBER >= 1 )); then
# delete all oldest backups, but leave the last BACKUP_SAVED_BACKUPS_NUMBER
keep=$BACKUP_SAVED_BACKUPS_NUMBER
path="$SCRIPT_DIR/../configurations/$CONFIGURATION/backup/"
cd $path
printf "In\n '$path'\n remove:\n"
while [ `ls -1 | wc -l` -gt $keep ]; do
oldest=`ls -c1 | sort -n | head -1`
printf " %s\n" $oldest
rm -rf $oldest
done
printf "Keep the last %d backups:\n" $BACKUP_SAVED_BACKUPS_NUMBER
ls -c1 | sort -n | awk '{print " " $0}'
cd $SCRIPT_DIR
else
echo "!!! ATTENTION: No backups are deleted !!!"
fi
printf "\n"
done
# reset CONFIGURATION to old
export CONFIGURATION=$SAVE_CONFIGURATION
echo "Reset to CONFIGURATION=$CONFIGURATION"

View File

@ -1,20 +0,0 @@
#!/bin/bash
# generate a secret and store it in the SECRET file.
# Note that this overwrites the existing file
# base setup
SCRIPT_PATH=$(realpath $0)
SCRIPT_DIR=$(dirname $SCRIPT_PATH)
# check CONFIGURATION
if [ -z ${CONFIGURATION} ]; then
echo "You must provide a `CONFIGURATION` via environment variable"
exit 1
fi
echo "Using CONFIGURATION=${CONFIGURATION}"
# configuration
SECRET_FILE=${SCRIPT_DIR}/../configurations/${CONFIGURATION}/SECRET
openssl rand -base64 32 > ${SECRET_FILE}

View File

@ -1,50 +0,0 @@
#!/bin/bash
# decrypt secrets in the selected configuration
# Note that existing decrypted files will be overwritten
# base setup
SCRIPT_PATH=$(realpath $0)
SCRIPT_DIR=$(dirname $SCRIPT_PATH)
# check CONFIGURATION
if [ -z ${CONFIGURATION} ]; then
echo "You must provide a `CONFIGURATION` via environment variable"
exit 1
fi
echo "Using CONFIGURATION=${CONFIGURATION}"
# configuration
SECRET=${SECRET}
SECRET_FILE=${SCRIPT_DIR}/../configurations/${CONFIGURATION}/SECRET
FILES=(\
"${SCRIPT_DIR}/../configurations/${CONFIGURATION}/.env" \
"${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubeconfig.yaml" \
"${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubernetes/values.yaml" \
"${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubernetes/dns.values.yaml" \
)
# Load SECRET from file if it is not set explicitly
if [ -z ${SECRET} ] && [ -f "${SECRET_FILE}" ]; then
SECRET=$(<${SECRET_FILE})
fi
# exit when there is no SECRET set
if [ -z ${SECRET} ]; then
echo "No SECRET provided and no SECRET-File found."
exit 1
fi
# decrypt
for file in "${FILES[@]}"
do
if [ -f "${file}.enc" ]; then
#gpg --symmetric --batch --passphrase="${SECRET}" --cipher-algo AES256 --output ${file}.enc ${file}
gpg --quiet --batch --yes --decrypt --passphrase="${SECRET}" --output ${file} ${file}.enc
echo "Decrypted ${file}"
fi
done
echo "DONE"
# gpg --quiet --batch --yes --decrypt --passphrase="${SECRET}" \
# --output $HOME/secrets/my_secret.json my_secret.json.gpg

View File

@ -1,47 +0,0 @@
#!/bin/bash
# encrypt secrets in the selected configuration
# Note that existing encrypted files will be overwritten
# base setup
SCRIPT_PATH=$(realpath $0)
SCRIPT_DIR=$(dirname $SCRIPT_PATH)
# check CONFIGURATION
if [ -z ${CONFIGURATION} ]; then
echo "You must provide a `CONFIGURATION` via environment variable"
exit 1
fi
echo "Using CONFIGURATION=${CONFIGURATION}"
# configuration
SECRET=${SECRET}
SECRET_FILE=${SCRIPT_DIR}/../configurations/${CONFIGURATION}/SECRET
FILES=(\
"${SCRIPT_DIR}/../configurations/${CONFIGURATION}/.env" \
"${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubeconfig.yaml" \
"${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubernetes/values.yaml" \
"${SCRIPT_DIR}/../configurations/${CONFIGURATION}/kubernetes/dns.values.yaml" \
)
# Load SECRET from file if it is not set explicitly
if [ -z ${SECRET} ] && [ -f "${SECRET_FILE}" ]; then
SECRET=$(<${SECRET_FILE})
fi
# exit when there is no SECRET set
if [ -z ${SECRET} ]; then
echo "No SECRET provided and no SECRET-File found."
exit 1
fi
# encrypt
for file in "${FILES[@]}"
do
if [ -f "${file}" ]; then
gpg --symmetric --batch --yes --passphrase="${SECRET}" --cipher-algo AES256 --output ${file}.enc ${file}
echo "Encrypted ${file}"
fi
done
echo "DONE"

View File

@ -1,46 +0,0 @@
ARG APP_IMAGE=ocelotsocialnetwork/backend
ARG APP_IMAGE_TAG_BASE=latest-base
ARG APP_IMAGE_TAG_CODE=latest-code
ARG APP_IMAGE_BASE=${APP_IMAGE}:${APP_IMAGE_TAG_BASE}
ARG APP_IMAGE_CODE=${APP_IMAGE}:${APP_IMAGE_TAG_CODE}
##################################################################################
# CODE (branded) #################################################################
##################################################################################
FROM $APP_IMAGE_CODE AS code
ARG CONFIGURATION=example
# copy public constants and email templates into the Docker image to brand it
COPY configurations/${CONFIGURATION}/branding/constants/emails.ts src/config/
COPY configurations/${CONFIGURATION}/branding/constants/logos.ts src/config/
COPY configurations/${CONFIGURATION}/branding/constants/metadata.ts src/config/
COPY configurations/${CONFIGURATION}/branding/email/ src/middleware/helpers/email/
##################################################################################
# BUILD ##########################################################################
##################################################################################
FROM code AS build
# yarn install
RUN yarn install --production=false --frozen-lockfile --non-interactive
# yarn build
RUN yarn run build
##################################################################################
# BRANDED (Does contain only "binary"- and static-files to reduce image size) ####
##################################################################################
FROM $APP_IMAGE_BASE AS branded
# TODO - do all copying with one COPY command to have one layer
# Copy "binary"-files from build image
COPY --from=build ${DOCKER_WORKDIR}/build ./build
COPY --from=build ${DOCKER_WORKDIR}/node_modules ./node_modules
# TODO - externalize the uploads so we can copy the whole folder
COPY --from=build ${DOCKER_WORKDIR}/public/img/ ./public/img/
COPY --from=build ${DOCKER_WORKDIR}/public/providers.json ./build/public/providers.json
# Copy package.json for script definitions (lock file should not be needed)
COPY --from=build ${DOCKER_WORKDIR}/package.json ./package.json
# Run command
CMD /bin/sh -c "yarn run start"

View File

@ -1,44 +0,0 @@
ARG APP_IMAGE=ocelotsocialnetwork/maintenance
ARG APP_IMAGE_TAG_BASE=latest-base
ARG APP_IMAGE_TAG_CODE=latest-code
ARG APP_IMAGE_BASE=${APP_IMAGE}:${APP_IMAGE_TAG_BASE}
ARG APP_IMAGE_CODE=${APP_IMAGE}:${APP_IMAGE_TAG_CODE}
##################################################################################
# CODE (branded) #################################################################
##################################################################################
FROM $APP_IMAGE_CODE AS code
ARG CONFIGURATION=example
# copy public constants into the Docker image to brand it
COPY configurations/${CONFIGURATION}/branding/static/ static/
COPY configurations/${CONFIGURATION}/branding/constants/ constants/
RUN /bin/sh -c 'cd constants && for f in *.ts; do mv -- "$f" "${f%.ts}.js"; done'
# locales
COPY configurations/${CONFIGURATION}/branding/locales/*.json locales/tmp/
COPY src/tools/ tools/
RUN apk add --no-cache bash jq
RUN tools/merge-locales.sh
##################################################################################
# BUILD ##########################################################################
##################################################################################
FROM code AS build
# yarn install
## unnicely done in $APP_IMAGE_CODE at the moment, see main repo
# RUN yarn install --production=false --frozen-lockfile --non-interactive
# yarn generate
RUN yarn run generate
##################################################################################
# BRANDED ### TODO # TODO # TODO # TODO # TODO # TODO # TODO # TODO # TODO ####
##################################################################################
# FROM $APP_IMAGE_BASE AS branded
FROM nginx:alpine AS branded
COPY --from=build ./app/dist/ /usr/share/nginx/html/
RUN rm /etc/nginx/conf.d/default.conf
COPY --from=code ./app/maintenance/nginx/custom.conf /etc/nginx/conf.d/

View File

@ -1,61 +0,0 @@
ARG APP_IMAGE=ocelotsocialnetwork/webapp
ARG APP_IMAGE_TAG_BASE=latest-base
ARG APP_IMAGE_TAG_CODE=latest-code
ARG APP_IMAGE_BASE=${APP_IMAGE}:${APP_IMAGE_TAG_BASE}
ARG APP_IMAGE_CODE=${APP_IMAGE}:${APP_IMAGE_TAG_CODE}
##################################################################################
# CODE (branded) #################################################################
##################################################################################
FROM $APP_IMAGE_CODE AS code
ARG CONFIGURATION=example
# copy public constants into the Docker image to brand it
COPY configurations/${CONFIGURATION}/branding/static/ static/
COPY configurations/${CONFIGURATION}/branding/constants/ constants/
RUN /bin/sh -c 'cd constants && for f in *.ts; do mv -- "$f" "${f%.ts}.js"; done'
COPY configurations/${CONFIGURATION}/branding/locales/html/ locales/html/
COPY configurations/${CONFIGURATION}/branding/assets/styles/imports/ assets/styles/imports/
COPY configurations/${CONFIGURATION}/branding/assets/fonts/ assets/fonts/
# locales
COPY configurations/${CONFIGURATION}/branding/locales/*.json locales/tmp/
COPY src/tools/ tools/
RUN apk add --no-cache bash jq
RUN tools/merge-locales.sh
##################################################################################
# BUILD ##########################################################################
##################################################################################
FROM code AS build
# yarn install
RUN yarn install --production=false --frozen-lockfile --non-interactive
# yarn build
RUN yarn run build
##################################################################################
# BRANDED (Does contain only "binary"- and static-files to reduce image size) ####
##################################################################################
FROM $APP_IMAGE_BASE AS branded
# TODO - do all copying with one COPY command to have one layer
# Copy "binary"-files from build image
COPY --from=build ${DOCKER_WORKDIR}/.nuxt ./.nuxt
COPY --from=build ${DOCKER_WORKDIR}/node_modules ./node_modules
COPY --from=build ${DOCKER_WORKDIR}/nuxt.config.js ./nuxt.config.js
# Copy static files
# TODO - this seems not be needed anymore for the new rebranding
# TODO - this should be one Folder containign all stuff needed to be copied
COPY --from=build ${DOCKER_WORKDIR}/config/ ./config/
COPY --from=build ${DOCKER_WORKDIR}/constants ./constants
COPY --from=build ${DOCKER_WORKDIR}/static ./static
COPY --from=build ${DOCKER_WORKDIR}/locales ./locales
COPY --from=build ${DOCKER_WORKDIR}/assets/styles/imports ./assets/styles/imports
COPY --from=build ${DOCKER_WORKDIR}/assets/fonts ./assets/fonts
# Copy package.json for script definitions (lock file should not be needed)
COPY --from=build ${DOCKER_WORKDIR}/package.json ./package.json
# Run command
CMD /bin/sh -c "yarn run start"

View File

@ -1,308 +0,0 @@
# Kubernetes Backup Of Ocelot.Social
One of the most important tasks in managing a running [ocelot.social](https://github.com/Ocelot-Social-Community/Ocelot-Social) network is backing up the data, e.g. the Neo4j database and the stored image files.
## Manual Offline Backup
To prepare, [kubectl](https://kubernetes.io/docs/tasks/tools/) must be installed and ready to use so that you have access to Kubernetes on your server.
Check if the correct context is used by running the following commands:
```bash
# check context and set the correct one
$ kubectl config get-contexts
# if the wrong context is chosen use it
$ kubectl config use-context <your-context>
# if you like check additionally if all pods are running well
$ kubectl -n default get pods -o wide
```
The very first step is to put the website into **maintenance mode**.
### Set Maintenance Mode
There are two ways to put the network into maintenance mode:
- via Kubernetes Dashboard
- via `kubectl`
#### Maintenance Mode Via Kubernetes Dashboard
In the Kubernetes Dashboard, you can select `Ingresses` from the left side menu under `Service`.
After that, in the list that appears, you will find the entry `ingress-ocelot-webapp`, which has three dots on the right, where you can click to edit the entry.
You can scroll to the end of the YAML file, where you will find one or more `host` entries under `rules`, one for each domain of the network.
In all entries, change the value of the `serviceName` entry from ***ocelot-webapp*** to `ocelot-maintenance` and the value of the `servicePort` entry from ***3000*** to `80`.
First, check if your website is still online.
After you click `Update`, the new settings will be applied and you will find your website in maintenance mode.
#### Maintenance Mode Via `kubectl`
To put the network into maintenance mode, run the following commands in the terminal:
```bash
# list ingresses
$ kubectl get ingress -n default
# edit ingress
$ kubectl -n default edit ingress ingress-ocelot-webapp
```
Change the content of the YAML file for all domains to:
```yaml
spec:
rules:
- host: network-domain.social
http:
paths:
- backend:
# serviceName: ocelot-webapp
# servicePort: 3000
serviceName: ocelot-maintenance
servicePort: 80
```
First, check if your website is still online.
After you save the file, the new settings will be applied and you will find your website in maintenance mode.
### Neo4j Database Offline Backup
Before we can back up the database, we need to put it into **sleep mode**.
#### Set Neo4j To Sleep Mode
Again there are two ways to put the network into sleep mode:
- via Kubernetes Dashboard
- via `kubectl`
##### Sleep Mode Via Kubernetes Dashboard
In the Kubernetes Dashboard, you can select `Deployments` from the left side menu under `Workloads`.
After that, in the list that appears, you will find the entry `ocelot-neo4j`, which has three dots on the right, where you can click to edit the entry.
Scroll to the end of the YAML file where you will find the `spec.template.spec.containers` entry. Here you can insert the `command` entry directly after `imagePullPolicy` in a new line.
```yaml
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
command: ["tail", "-f", "/dev/null"]
```
After clicking `Update`, the new settings will be applied and you should check in the `Pods` menu item on the left side if the `ocelot-neo4j-<ID>` pod restarts.
##### Sleep Mode Via `kubectl`
To put Neo4j into sleep mode, run the following commands in the terminal:
```bash
# list deployments
$ kubectl get deployments -n default
# edit deployment
$ kubectl -n default edit deployment ocelot-neo4j
```
Scroll to the `spec.template.spec.containers` entry. Here you can insert the `command` entry directly after `imagePullPolicy` in a new line.
```yaml
image: <network-DockerHub-name>/neo4j-community-branded:latest
imagePullPolicy: Always
command: ["tail", "-f", "/dev/null"]
```
After pressing enter, the new settings will be applied and you should check if the `ocelot-neo4j-<ID>` pod restarts.
Use command:
```bash
# check if the old pod restarts
$ kubectl -n default get pods -o wide
```
#### Generate Offline Backup
The offline backup is generated via `kubectl`:
```bash
# check for the Neo4j pod
$ kubectl -n default get pods -o wide
# ls: see wish backup dumps are already there
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') -- ls
# bash: enter bash of Neo4j
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') -- bash
# generate Dump
neo4j% neo4j-admin dump --to=/var/lib/neo4j/$(date +%F)-neo4j-dump
# exit bash
neo4j% exit
# ls: see if the new backup dump is there
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') -- ls
```
If you need a specific database name, add the option `--database=<name>` to the command `neo4j-admin dump`.
To find out the default database name, see the [Neo4j readme](https://github.com/Ocelot-Social-Community/Ocelot-Social/blob/master/neo4j/README.md).
Lets copy the dump backup
```bash
# copy dump onto backup volume direct
$ kubectl cp default/$(kubectl -n default get pods | grep ocelot-neo4j |awk '{ print $1 }'):/var/lib/neo4j/$(date +%F)-neo4j-dump /Volumes/<volume-name>/$(date +%F)-neo4j-dump
```
#### Remove Sleep Mode From Neo4j
Again there are two ways to put the network into working mode:
- via Kubernetes Dashboard
- via `kubectl`
##### Remove Sleep Mode Via Kubernetes Dashboard
In the Kubernetes Dashboard, you can select `Deployments` from the left side menu under `Workloads`.
After that, in the list that appears, you will find the entry `ocelot-neo4j`, which has three dots on the right, where you can click to edit the entry.
Scroll to the `spec.template.spec.containers.command` entry and remove the whole `command` entry like:
```yaml
containers:
- name: container-ocelot-neo4j
image: 'senderfm/neo4j-community-branded:latest'
command:
- tail
- '-f'
- /dev/null
ports:
- containerPort: 7687
protocol: TCP
```
And get:
```yaml
containers:
- name: container-ocelot-neo4j
image: 'senderfm/neo4j-community-branded:latest'
ports:
- containerPort: 7687
protocol: TCP
```
After clicking `Update`, the new settings will be applied and you should check in the `Pods` menu item on the left side if the `ocelot-neo4j-<ID>` pod restarts.
##### Remove Sleep Mode Via `kubectl`
To put Neo4j into working mode, run the following commands in the terminal:
```bash
# list deployments
$ kubectl get deployments -n default
# edit deployment
$ kubectl -n default edit deployment ocelot-neo4j
```
Scroll to the `spec.template.spec.containers.command` entry and remove the whole `command` entry like:
```yaml
spec:
containers:
- command:
- tail
- -f
- /dev/null
envFrom:
- configMapRef:
name: configmap-ocelot-neo4j
```
And get:
```yaml
spec:
containers:
- envFrom:
- configMapRef:
name: configmap-ocelot-neo4j
```
After pressing enter, the new settings will be applied and you should check if the `ocelot-neo4j-<ID>` pod restarts.
Use command:
```bash
# check if the old pod restarts
$ kubectl -n default get pods -o wide
```
### Backend Backup
To back up the images from the backend volume, run commands:
```bash
# ls: backend/public/uploads
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-backend | awk '{ print $1 }') -- ls public/uploads
# copy all images from upload to backup volume direct
$ kubectl cp default/$(kubectl -n default get pods | grep ocelot-backend |awk '{ print $1 }'):/app/public/uploads /Volumes/<volume-name>/$(date +%F)-public-uploads
```
### Remove Maintenance Mode
There are two ways to put the network into working mode:
- via Kubernetes Dashboard
- via `kubectl`
#### Remove Maintenance Mode Via Kubernetes Dashboard
In the Kubernetes Dashboard, you can select `Ingresses` from the left side menu under `Service`.
After that, in the list that appears, you will find the entry `ingress-ocelot-webapp`, which has three dots on the right, where you can click to edit the entry.
You can scroll to the end of the YAML file, where you will find one or more `host` entries under `rules`, one for each domain of the network.
In all entries, change the value of the `serviceName` entry from ***ocelot-maintenance*** to `ocelot-webapp` and the value of the `servicePort` entry from ***80*** to `3000`.
First, check if your website is still in maintenance mode.
After you click `Update`, the new settings will be applied and you will find your website online again.
#### Remove Maintenance Mode Via `kubectl`
To put the network into working mode, run the following commands in the terminal:
```bash
# list ingresses
$ kubectl get ingress -n default
# edit ingress
$ kubectl -n default edit ingress ingress-ocelot-webapp
```
Change the content of the YAML file for all domains to:
```yaml
spec:
rules:
- host: network-domain.social
http:
paths:
- backend:
serviceName: ocelot-webapp
servicePort: 3000
# serviceName: ocelot-maintenance
# servicePort: 80
```
First, check if your website is still in maintenance mode.
After you save the file, the new settings will be applied and you will find your website online again.
XXX
```bash
# Dump: Create a Backup in Kubernetes: https://docs.human-connection.org/human-connection/deployment/volumes/neo4j-offline-backup#create-a-backup-in-kubernetes
```

View File

@ -1,39 +0,0 @@
type: application
apiVersion: v2
name: ocelot-social
version: "1.0.0"
# The appVersion defines which docker image is pulled.
# Having it set to latest will pull the latest build on dockerhub.
# You are free to define a specific version here tho.
# e.g. appVersion: "latest" or "1.0.2-3-ocelot.social1.0.2-79"
# Be aware that this requires all your apps to have the same docker image version available.
appVersion: "latest"
description: The Helm chart for ocelot.social
home: https://ocelot.social
sources:
- https://github.com/Ocelot-Social-Community/
- https://github.com/Ocelot-Social-Community/Ocelot-Social
- https://github.com/Ocelot-Social-Community/Ocelot-Social-Deploy-Rebranding
maintainers:
- name: Ulf Gebhardt
email: ulf.gebhardt@webcraft-media.de
url: https://www.webcraft-media.de/#!ulf_gebhardt
icon: https://github.com/Ocelot-Social-Community/Ocelot-Social/raw/master/webapp/static/img/custom/welcome.svg
deprecated: false
# Unused Fields
#dependencies: # A list of the chart requirements (optional)
# - name: ingress-nginx
# version: v1.10.0
# repository: https://kubernetes.github.io/ingress-nginx
# condition: (optional) A yaml path that resolves to a boolean, used for enabling/disabling charts (e.g. subchart1.enabled )
# tags: # (optional)
# - Tags can be used to group charts for enabling/disabling together
# import-values: # (optional)
# - ImportValues holds the mapping of source values to parent key to be imported. Each item can be a string or pair of child/parent sublist items.
# alias: (optional) Alias to be used for the chart. Useful when you have to add the same chart multiple times
#kubeVersion: A SemVer range of compatible Kubernetes versions (optional)
#keywords:
# - A list of keywords about this project (optional)
#annotations:
# example: A list of annotations keyed by name (optional).

View File

@ -1,145 +0,0 @@
# DigitalOcean
If you want to set up a [Kubernetes](https://kubernetes.io) cluster on [DigitalOcean](https://www.digitalocean.com), follow this guide.
## Create Account
Create an account with DigitalOcean.
## Add Project
On the left side you will see a menu. Click on `New Project`. Enter a name and click `Create Project`.
Skip moving resources, probably.
## Create Kubernetes Cluster
On the right top you find the button `Create`. Click on it and choose `Kubernetes - Create Kubernetes Cluster`.
- use the latest Kubernetes version
- choose your datacenter region
- name your node pool: e.g. `pool-<your-network-name>`
- `2 Basic nodes` with `2.5 GB RAM (total of 4 GB)`, `2 shared CPUs`, and `80 GB Disk` each is optimal for the beginning
- set your cluster name: e.g. `cluster-<your-network-name>`
- select your project
- no tags necessary
## Getting Started
After your cluster is set up see progress bar above click on `Getting started`. Please install the following management tools:
- [kubectl v1.24.1](https://kubernetes.io/docs/tasks/tools/)
- [doctl v1.78.0](https://github.com/digitalocean/doctl)
Install the tools as described on the tab or see the links here.
After the installation, click on `Continue`.
### Download Configuration File
Follow the steps to download the configuration file.
You can skip this step if necessary, as you can download the file later. You can then do this by clicking on `Kubernetes` in the left menu. In the menu to the right of the cluster name in the cluster list, click on `More` and select `Download Config`.
### Patch & Minor Version Upgrades
Skip `Patch & Minor Version Upgrades` for now.
### Install 1-Click Apps
You don't need a 1-click app. Our helmet script will install the required NGINXs.
Therefore, skip this step as well.
For a 1-click Kubernetes Dashboard or alternatives, follow the next steps.
## Install Kubernetes Dashboard
We recommend installing a Kubernetes Dashboard, as DigitalOcean no longer offers a pre-installed dashboard.
- 1-click-deployment of [Kubernetes Dashboard on DigitalOcean marketplace](https://marketplace.digitalocean.com/apps/kubernetes-dashboard)
There you will also find a section entitled `Getting Started`, which describes how you can log in from your local computer.
Very short description:
### In your DigitalOcean Account
For authentication, download the current cluster configuration file from DigitalOcean.
### In Terminal
Set the context of the cluster by command:
```bash
kubectl config use-context <context-name>
```
We seem to have two instances in our DigitalOcean cluster how we need to log into the Kubernetes Dashboard.
It looks like it depends on the Kubernetes Dashboard version, but we are not absolutely sure.
#### Login with `kubeconfig` File
Port-forward the Kubernetes Dashboard to your local machine:
```bash
# save pod name
$ export POD_NAME=$(kubectl get pods -n kubernetes-dashboard -l "app.kubernetes.io/name=kubernetes-dashboard,app.kubernetes.io/instance=kubernetes-dashboard" -o jsonpath="{.items[0].metadata.name}")
# forward port
$ kubectl -n kubernetes-dashboard port-forward $POD_NAME 8443:8443
```
Access the URL in your local web browser at `https://127.0.0.1:8443/`, and log in using your Kubernetes cluster credentials downloaded config file.
You may encounter a certificate warning, so make sure to override it.
#### Login with Admin Token
Port-forward the Kubernetes Dashboard to your local machine:
```bash
# create your access token
kubectl -n kubernetes-dashboard create token admin-user
# forward port
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
```
Access the URL in your local web browser at `https://127.0.0.1:8443/`, and log in using your access token.
You may encounter a certificate warning, so make sure to override it.
## Alternatives to Kubernetes Dashboard
DigitalOcean has a website about Kubernetes Dashboard and alternatives:
- <https://www.digitalocean.com/community/conceptual-articles/kubernetes-visualization-tools?mkt_tok=MTEzLURUTi0yNjYAAAGQ0YS-wbZaWn5th-m86-fM7vgiLvxNipWpAsUrgd2z4YgiMB0aRgCIDYEiC0Y2c0H9tBsICZQ5ORKgssOgeSjOKSEfN3i7xUpzqXbdZiYxNL2Q>
## DNS Configuration
There are the following two ways to set up the DNS.
### Manage DNS With A Different Domain Provider
If you have registered your domain or subdomain with another domain provider, add an `A` record there with one of the IP addresses from one of the cluster droplets in the DNS.
To find the correct IP address to set in the DNS `A` record, click `Droplets` in the left main menu.
A list of all your droplets will be displayed.
Take one of the IPs of perhaps two or more droplets in your cluster from the list and enter it into the `A` record.
### Manage DNS With DigitalOcean
***TODO:** How to configure the DigitalOcean DNS management service …*
To understand what makes sense to do when managing your DNS with DigitalOcean, you need to know how DNS works:
DNS means `Domain Name System`. It resolves domains like `example.com` into an IP like `123.123.123.123`.
DigitalOcean is not a domain registrar, but provides a DNS management service. If you use DigitalOcean's DNS management service, you can configure [your cluster](./README.md#dns) to always resolve the domain to the correct IP and automatically update it for that.
The IPs of the DigitalOcean machines are not necessarily stable, so the cluster's DNS service will update the DNS records managed by DigitalOcean to the new IP as needed.
***CAUTION:** If you are using an external DNS, you currently have to do this manually, which can cause downtime.*
## Deploy
Yeah, you're done here. Back to [Deployment with Helm for Kubernetes](./README.md).
## Backups On DigitalOcean
You can and should do [backups](./Backup.md) with Kubernetes for sure.
Additional to backup and copying the Neo4j database dump and the backend images you can do a volume snapshot on DigitalOcean at the moment you have the database in sleep mode.

View File

@ -1,350 +0,0 @@
# Kubernetes Helm Installation Of Ocelot.Social
Deploying [ocelot.social](https://github.com/Ocelot-Social-Community/Ocelot-Social) with [Helm](https://helm.sh) for [Kubernetes](https://kubernetes.io) is very straight forward. All you have to do is to change certain parameters, like domain names and API keys, then you just install our provided Helm chart to your cluster.
## Kubernetes Cloud Hosting
There are various ways to set up your own or a managed Kubernetes cluster. We will extend the following lists over time.
Please contact us if you are interested in options not listed below.
Managed Kubernetes:
- [DigitalOcean](./DigitalOcean.md)
## Configuration
You can customize the network server with your configuration by duplicate the `values.template.yaml` to a new `values.yaml` file and change it to your need. All included variables will be available as environment variables in your deployed kubernetes pods.
Besides the `values.template.yaml` file we provide a `nginx.values.template.yaml` and `dns.values.template.yaml` for a similar procedure. The new `nginx.values.yaml` is the configuration for the ingress-nginx Helm chart, while the `dns.values.yaml` file is for automatically updating the dns values on DigitalOcean and therefore optional.
## Installation
Due to the many limitations of Helm you still have to do several manual steps.
Those occur before you run the actual *ocelot.social* Helm chart.
Obviously it is expected of you to have `helm` and `kubectl` installed.
For the cert-manager you may need `cmctl`, see below.
For DigitalOcean you may also need `doctl`.
Install:
- [kubectl v1.24.1](https://kubernetes.io/docs/tasks/tools/)
- [doctl v1.78.0](https://docs.digitalocean.com/reference/doctl/how-to/install/)
- [cmctl v1.8.2](https://cert-manager.io/docs/usage/cmctl/#installation)
- [helm v3.9.0](https://helm.sh/docs/intro/install/)
### Cert Manager (https)
Please refer to [cert-manager.io docs](https://cert-manager.io/docs/installation/) for more details.
***ATTENTION:*** *Be with the Terminal in your repository in the folder of this README.*
We have three ways to install the cert-manager, purely via `kubectl`, via `cmctl`, or with `helm`.
We recommend using `helm` because then we do not mix the installation methods.
Please have a look here:
- [Installing with Helm](https://cert-manager.io/docs/installation/helm/#installing-with-helm)
Our Helm installation is optimized for cert-manager version `v1.13.1` and `kubectl` version `"v1.28.2`.
Please search here for cert-manager versions that are compatible with your `kubectl` version on the cluster and on the client: [cert-manager Supported Releases](https://cert-manager.io/docs/installation/supported-releases/#supported-releases).
***ATTENTION:*** *When uninstalling cert-manager, be sure to use the same method as for installation! Otherwise, we could end up in a broken state, see [Uninstall](https://cert-manager.io/docs/installation/kubectl/#uninstalling).*
<!-- #### 1. Add Helm repository and update
```bash
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
```
#### 2. Install Cert-Manager Helm chart
```bash
# option 1
# this can't be applied via kubectl to our cluster since the CRDs can't be installed properly this way ...
# $ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.crds.yaml
# option 2
# !!! untested for now for new deployment structure !!!
# in configuration/<deployment-name>
# kubeconfig.yaml set globaly
$ helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.13.2 \
--set installCRDs=true
# or kubeconfig.yaml in your repo, then adjust
$ helm install \
cert-manager jetstack/cert-manager \
--kubeconfig ./kubeconfig.yaml \
--namespace cert-manager \
--create-namespace \
--version v1.13.2 \
--set installCRDs=true
``` -->
### Ingress-Nginx
#### 1. Add Helm repository for `ingress-nginx` and update
```bash
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
```
#### 2. Install ingress-nginx
```bash
# in configuration/<deployment-name>
# kubeconfig.yaml set globaly
helm install ingress-nginx ingress-nginx/ingress-nginx -f ../../src/kubernetes/nginx.values.yaml
# or kubeconfig.yaml in your repo, then adjust
helm install \
ingress-nginx ingress-nginx/ingress-nginx -f ../../src/kubernetes/nginx.values.yaml \
--kubeconfig ./kubeconfig.yaml
```
### DigitalOcean Firewall
This is only necessary if you run DigitalOcean without load balancer ([see here for more info](https://stackoverflow.com/questions/54119399/expose-port-80-on-digital-oceans-managed-kubernetes-without-a-load-balancer/55968709)) .
#### 1. Authenticate towards DO with your local `doctl`
You will need a DO token for that.
```bash
# without doctl context
$ doctl auth init
# with doctl new context to be filled in
$ doctl auth init --context <new-context-name>
```
You will need an API token, which you can generate in the control panel at <https://cloud.digitalocean.com/account/api/tokens> .
#### 2. Generate DO firewall
Get the `CLUSTER_UUID` value from the dashboard or from the ID column via `doctl kubernetes cluster list`:
```bash
# need to apply access token by `doctl auth init` before
$ doctl kubernetes cluster list
```
Fill in the `CLUSTER_UUID` and `your-domain`. The latter with hyphens `-` instead of dots `.`:
```bash
# without doctl context
$ doctl compute firewall create \
--inbound-rules="protocol:tcp,ports:80,address:0.0.0.0/0,address:::/0 protocol:tcp,ports:443,address:0.0.0.0/0,address:::/0" \
--tag-names=k8s:<CLUSTER_UUID> \
--name=<your-domain>-http-https
# with doctl context to be filled in
$ doctl compute firewall create \
--inbound-rules="protocol:tcp,ports:80,address:0.0.0.0/0,address:::/0 protocol:tcp,ports:443,address:0.0.0.0/0,address:::/0" \
--tag-names=k8s:<CLUSTER_UUID> \
--name=<your-domain>-http-https --context <context-name>
```
To get informations about your success use this command. (Fill in the `ID` you got at creation.):
```bash
# without doctl context
$ doctl compute firewall get <ID>
# with doctl context to be filled in
$ doctl compute firewall get <ID> --context <context-name>
```
### DNS
***ATTENTION:** This seems not to work at all so we leave it away at the moment*
***TODO:** I thought this is necessary if we use the DigitalOcean DNS management service? See [Manage DNS With DigitalOcean](./DigitalOcean.md#manage-dns-with-digitalocean)*
This chart is only necessary (recommended is more precise) if you run DigitalOcean without load balancer.
You need to generate an access token with read + write for the `dns.values.yaml` at <https://cloud.digitalocean.com/account/api/tokens> and fill it in.
#### 1. Add Helm repository for `binami` and update
```bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
```
#### 2. Install DNS
```bash
# !!! untested for now for new deployment structure !!!
# kubeconfig.yaml set globaly
$ helm install dns bitnami/external-dns -f dns.values.yaml
# or kubeconfig.yaml in your repo, then adjust
$ helm --kubeconfig=/../kubeconfig.yaml install dns bitnami/external-dns -f dns.values.yaml
```
### Ocelot.Social
***Attention:** Before installing your own ocelot.social network, you need to create a DockerHub (account and) organization, put its name in the `package.json` file, and push your deployment and rebranding code to GitHub so that GitHub Actions can push your Docker images to DockerHub. This is because Kubernetes will pull these images to create PODs from them.*
All commands for ocelot need to be executed in the kubernetes folder. Therefore `cd deployment/kubernetes/` is expected to be run before every command. Furthermore the given commands will install ocelot into the default namespace. This can be modified to by attaching `--namespace not.default`.
#### Install
Only run once for the first time of installation:
```bash
# in configuration/<deployment-name>
# kubeconfig.yaml set globaly
helm install ocelot \
--values ./kubernetes/values.yaml \
--set appVersion="latest" \
../../src/kubernetes/ \
--timeout 10m
# or kubeconfig.yaml in your repo, then adjust
helm install ocelot \
--kubeconfig ./kubeconfig.yaml \
--values ./kubernetes/values.yaml \
--set appVersion="latest" \
../../src/kubernetes/ \
--timeout 10m
```
#### Upgrade & Update
Run for all upgrades and updates:
```bash
# !!! untested for now for new deployment structure !!!
# in configuration/<deployment-name>
# kubeconfig.yaml set globaly
helm upgrade ocelot \
--values ./kubernetes/values.yaml \
--set appVersion="latest" \
../../src/kubernetes/ \
--timeout 10m
# or kubeconfig.yaml in your repo, then adjust
helm upgrade ocelot \
--kubeconfig ./kubeconfig.yaml \
--values ./kubernetes/values.yaml \
--set appVersion="latest" \
../../src/kubernetes/ \
--timeout 10m
```
#### Rollback
Run for a rollback, in case something went wrong:
```bash
# !!! untested for now for new deployment structure !!!
# in configuration/<deployment-name>
# kubeconfig.yaml set globaly
helm rollback ocelot --timeout 10m
# or kubeconfig.yaml in your repo, then adjust
helm rollback ocelot \
--kubeconfig ./kubeconfig.yaml \
--timeout 10m
```
#### Uninstall
Be aware that if you uninstall ocelot the formerly bound volumes become unbound. Those volumes contain all data from uploads and database. You have to manually free their reference in order to bind them again when reinstalling. Once unbound from their former container references they should automatically be rebound (considering the sizes did not change)
```bash
# !!! untested for now for new deployment structure !!!
# in configuration/<deployment-name>
# kubeconfig.yaml set globaly
helm uninstall ocelot --timeout 10m
# or kubeconfig.yaml in your repo, then adjust
helm uninstall ocelot \
--kubeconfig ./kubeconfig.yaml \
--timeout 10m
```
## Backups
You can and should do [backups](./Backup.md) with Kubernetes for sure.
<!-- ## Error Reporting
We use [Sentry](https://github.com/getsentry/sentry) for error reporting in both
our backend and web frontend. You can either use a hosted or a self-hosted
instance. Just set the two `DSN` in your
[configmap](../templates/configmap.template.yaml) and update the `COMMIT`
during a deployment with your commit or the version of your release.
### Self-hosted Sentry
For data privacy it is recommended to set up your own instance of sentry.
If you are lucky enough to have a kubernetes cluster with the required hardware
support, try this [helm chart](https://github.com/helm/charts/tree/master/stable/sentry).
On our kubernetes cluster we get "mult-attach" errors for persistent volumes.
Apparently DigitalOcean's kubernetes clusters do not fulfill the requirements. -->
## Kubernetes Commands (Without Helm) To Deploy New Docker Images To A Kubernetes Cluster
### Deploy A Version
```bash
# !!! be aware of the correct kube context !!!
$ kubectl config get-contexts
# deploy version '$BUILD_VERSION'
# !!! 'latest' is not recommended on production !!!
# for easyness set env
$ export BUILD_VERSION=1.0.8-48-ocelot.social1.0.8-184 # example
# check this with
$ echo $BUILD_VERSION
1.0.8-48-ocelot.social1.0.8-184
# deploy actual version '$BUILD_VERSION' to Kubernetes cluster
$ kubectl -n default set image deployment/ocelot-webapp container-ocelot-webapp=ocelotsocialnetwork/webapp:$BUILD_VERSION
$ kubectl -n default rollout restart deployment/ocelot-webapp
$ kubectl -n default set image deployment/ocelot-backend container-ocelot-backend=ocelotsocialnetwork/backend:$BUILD_VERSION
$ kubectl -n default rollout restart deployment/ocelot-backend
$ kubectl -n default set image deployment/ocelot-maintenance container-ocelot-maintenance=ocelotsocialnetwork/maintenance:$BUILD_VERSION
$ kubectl -n default rollout restart deployment/ocelot-maintenance
$ kubectl -n default set image deployment/ocelot-neo4j container-ocelot-neo4j=ocelotsocialnetwork/neo4j-community:$BUILD_VERSION
$ kubectl -n default rollout restart deployment/ocelot-neo4j
# verify deployment and wait for the pods of each deployment to get ready for cleaning and seeding of the database
$ kubectl -n default rollout status deployment/ocelot-webapp --timeout=240s
$ kubectl -n default rollout status deployment/ocelot-maintenance --timeout=240s
$ kubectl -n default rollout status deployment/ocelot-backend --timeout=240s
$ kubectl -n default rollout status deployment/ocelot-neo4j --timeout=240s
```
### Staging Clean And Seed Neo4j Database
***ATTENTION:*** Cleaning and seeding of our Neo4j database is only possible in production if env `PRODUCTION_DB_CLEAN_ALLOW=true` is set in our deployment.
```bash
# !!! be aware of the correct kube context !!!
$ kubectl config get-contexts
# for staging: reset and seed Neo4j database via backend
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-backend | awk '{ print $1 }') -- /bin/sh -c "node --experimental-repl-await build/src/db/clean.js && node --experimental-repl-await build/src/db/seed.js"
# or alternatively
# for production: set Neo4j database indexes, constrains, and initial admin account plus run migrate up via backend
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-backend | awk '{ print $1 }') -- /bin/sh -c "yarn prod:migrate init && yarn prod:migrate up"
```

View File

@ -1,13 +0,0 @@
# please duplicate template file and rename to "nginx.values.yaml" and fill in your value
controller:
kind: DaemonSet
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
ingressClass: nginx
daemonset:
useHostPort: true
service:
type: ClusterIP
rbac:
create: true

View File

@ -1,12 +0,0 @@
spec:
rules:
- host:
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: ocelot-webapp
port:
number: 3000

View File

@ -1,12 +0,0 @@
spec:
rules:
- host:
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: ocelot-maintenance
port:
number: 80

View File

@ -1 +0,0 @@
You installed ocelot-social! Congrats <3

View File

@ -1,31 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: configmap-{{ .Release.Name }}-backend
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "configmap-backend"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
data:
PRODUCTION_DB_CLEAN_ALLOW: "{{ .Values.PRODUCTION_DB_CLEAN_ALLOW }}"
PUBLIC_REGISTRATION: "{{ .Values.PUBLIC_REGISTRATION }}"
INVITE_REGISTRATION: "{{ .Values.INVITE_REGISTRATION }}"
CATEGORIES_ACTIVE: "{{ .Values.CATEGORIES_ACTIVE }}"
CLIENT_URI: "{{ .Values.BACKEND.CLIENT_URI }}"
EMAIL_DEFAULT_SENDER: "{{ .Values.BACKEND.EMAIL_DEFAULT_SENDER }}"
SMTP_HOST: "{{ .Values.BACKEND.SMTP_HOST }}"
SMTP_PORT: "{{ .Values.BACKEND.SMTP_PORT }}"
SMTP_IGNORE_TLS: "{{ .Values.BACKEND.SMTP_IGNORE_TLS }}"
SMTP_SECURE: "{{ .Values.BACKEND.SMTP_SECURE }}"
SMTP_DKIM_DOMAINNAME: "{{ .Values.BACKEND.SMTP_DKIM_DOMAINNAME }}"
SMTP_DKIM_KEYSELECTOR: "{{ .Values.BACKEND.SMTP_DKIM_KEYSELECTOR }}"
GRAPHQL_URI: "http://{{ .Release.Name }}-backend:4000"
NEO4J_URI: "bolt://{{ .Release.Name }}-neo4j:7687"
#REDIS_DOMAIN: ---toBeSet(IP)---
#REDIS_PORT: "6379"
#SENTRY_DSN_WEBAPP: "---toBeSet---"
#SENTRY_DSN_BACKEND: "---toBeSet---"

View File

@ -1,62 +0,0 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ .Release.Name }}-backend
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "deployment-backend"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
replicas: 1
minReadySeconds: {{ .Values.BACKEND.MIN_READY_SECONDS }}
progressDeadlineSeconds: {{ .Values.BACKEND.PROGRESS_DEADLINE_SECONDS }}
revisionHistoryLimit: {{ .Values.BACKEND.REVISIONS_HISTORY_LIMIT }}
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ .Release.Name }}-backend
template:
metadata:
annotations:
backup.velero.io/backup-volumes: uploads
# make sure the pod is redeployed
rollme: {{ randAlphaNum 5 | quote }}
labels:
app: {{ .Release.Name }}-backend
spec:
containers:
- name: container-{{ .Release.Name }}-backend
image: "{{ .Values.BACKEND.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.BACKEND.DOCKER_IMAGE_PULL_POLICY }}
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-backend
- secretRef:
name: secret-{{ .Release.Name }}-backend
resources:
requests:
memory: {{ .Values.BACKEND.RESOURCE_REQUESTS_MEMORY | default "500M" | quote }}
limits:
memory: {{ .Values.BACKEND.RESOURCE_LIMITS_MEMORY | default "1G" | quote }}
ports:
- containerPort: 4000
protocol: TCP
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /app/public/uploads
name: uploads
dnsPolicy: ClusterFirst
schedulerName: default-scheduler
restartPolicy: {{ .Values.BACKEND.CONTAINER_RESTART_POLICY }}
terminationGracePeriodSeconds: {{ .Values.BACKEND.CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS }}
volumes:
- name: uploads
persistentVolumeClaim:
claimName: volume-claim-{{ .Release.Name }}-uploads

View File

@ -1,24 +0,0 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: volume-claim-{{ .Release.Name }}-uploads
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "volume-claim-backend"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
#dataSource:
# name: uploads-snapshot
# kind: VolumeSnapshot
# apiGroup: snapshot.storage.k8s.io
storageClassName: storage-{{ .Release.Name }}-persistent
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.BACKEND.STORAGE_UPLOADS }}

View File

@ -1,22 +0,0 @@
kind: Secret
apiVersion: v1
metadata:
name: secret-{{ .Release.Name }}-backend
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "secret-backend"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
stringData:
JWT_SECRET: "{{ .Values.BACKEND.JWT_SECRET }}"
MAPBOX_TOKEN: "{{ .Values.MAPBOX_TOKEN }}"
PRIVATE_KEY_PASSPHRASE: "{{ .Values.BACKEND.PRIVATE_KEY_PASSPHRASE }}"
SMTP_USERNAME: "{{ .Values.BACKEND.SMTP_USERNAME }}"
SMTP_PASSWORD: "{{ .Values.BACKEND.SMTP_PASSWORD }}"
SMTP_DKIM_PRIVATKEY: "{{ .Values.BACKEND.SMTP_DKIM_PRIVATKEY }}"
#NEO4J_USERNAME: ""
#NEO4J_PASSWORD: ""
#REDIS_PASSWORD: ---toBeSet---

View File

@ -1,20 +0,0 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Release.Name }}-backend
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "service-backend"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
ports:
- name: {{ .Release.Name }}-graphql
port: 4000
targetPort: 4000
protocol: TCP
selector:
app: {{ .Release.Name }}-backend

View File

@ -1,22 +0,0 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-production
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "letsencrypt-production"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: {{ .Values.LETSENCRYPT.EMAIL }}
privateKeySecretRef:
name: letsencrypt-production
solvers:
- http01:
ingress:
class: nginx

View File

@ -1,22 +0,0 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "letsencrypt-staging"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: {{ .Values.LETSENCRYPT.EMAIL }}
privateKeySecretRef:
name: letsencrypt-staging
solvers:
- http01:
ingress:
class: nginx

View File

@ -1,29 +0,0 @@
kind: Job
apiVersion: batch/v1
metadata:
name: job-{{ .Release.Name }}-db-init
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "job-db-init"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
annotations:
"helm.sh/hook": post-install
"helm.sh/hook-delete-policy": hook-succeeded, hook-failed
"helm.sh/hook-weight": "0"
spec:
template:
spec:
restartPolicy: Never
containers:
- name: job-{{ .Release.Name }}-db-init
image: "{{ .Values.BACKEND.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
command: ["/bin/sh", "-c", "yarn prod:migrate init"]
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-backend
- secretRef:
name: secret-{{ .Release.Name }}-backend

View File

@ -1,29 +0,0 @@
kind: Job
apiVersion: batch/v1
metadata:
name: job-{{ .Release.Name }}-db-migrate
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "job-db-migrate"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
annotations:
"helm.sh/hook": post-install, post-upgrade
"helm.sh/hook-delete-policy": hook-succeeded, hook-failed
"helm.sh/hook-weight": "5"
spec:
template:
spec:
restartPolicy: Never
containers:
- name: job-{{ .Release.Name }}-db-migrations
image: "{{ .Values.BACKEND.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
command: ["/bin/sh", "-c", "yarn prod:migrate up"]
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-backend
- secretRef:
name: secret-{{ .Release.Name }}-backend

View File

@ -1,14 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: configmap-{{ .Release.Name }}-maintenance
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "configmap-maintenance"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
data:
HOST: "0.0.0.0"

View File

@ -1,45 +0,0 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ .Release.Name }}-maintenance
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "deployment-maintenance"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
revisionHistoryLimit: {{ .Values.MAINTENANCE.REVISIONS_HISTORY_LIMIT }}
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ .Release.Name }}-maintenance
template:
metadata:
labels:
app: {{ .Release.Name }}-maintenance
# make sure the pod is redeployed
rollme: {{ randAlphaNum 5 | quote }}
spec:
containers:
- name: container-{{ .Release.Name }}-maintenance
image: "{{ .Values.MAINTENANCE.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.MAINTENANCE.DOCKER_IMAGE_PULL_POLICY }}
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-webapp
- secretRef:
name: secret-{{ .Release.Name }}-webapp
resources:
requests:
memory: {{ .Values.MAINTENANCE.RESOURCE_REQUESTS_MEMORY | default "500M" | quote }}
limits:
memory: {{ .Values.MAINTENANCE.RESOURCE_LIMITS_MEMORY | default "1G" | quote }}
ports:
- containerPort: 80
restartPolicy: {{ .Values.MAINTENANCE.CONTAINER_RESTART_POLICY }}
terminationGracePeriodSeconds: {{ .Values.MAINTENANCE.CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS }}

View File

@ -1,13 +0,0 @@
kind: Secret
apiVersion: v1
metadata:
name: secret-{{ .Release.Name }}-maintenance
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "secret-maintenance"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
stringData:

View File

@ -1,20 +0,0 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Release.Name }}-maintenance
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "service-maintenance"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
ports:
- name: {{ .Release.Name }}-http
port: 80
targetPort: 80
protocol: TCP
selector:
app: {{ .Release.Name }}-maintenance

View File

@ -1,24 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: configmap-{{ .Release.Name }}-neo4j
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "configmap-neo4j"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
data:
NEO4J_ACCEPT_LICENSE_AGREEMENT: "{{ .Values.NEO4J.ACCEPT_LICENSE_AGREEMENT }}"
NEO4J_AUTH: "{{ .Values.NEO4J.AUTH }}"
NEO4J_dbms_connector_bolt_thread__pool__max__size: "{{ .Values.NEO4J.DBMS_CONNECTOR_BOLT_THREAD_POOL_MAX_SIZE }}"
NEO4J_dbms_memory_heap_initial__size: "{{ .Values.NEO4J.DBMS_MEMORY_HEAP_INITIAL_SIZE }}"
NEO4J_dbms_memory_heap_max__size: "{{ .Values.NEO4J.DBMS_MEMORY_HEAP_MAX_SIZE }}"
NEO4J_dbms_memory_pagecache_size: "{{ .Values.NEO4J.DBMS_MEMORY_PAGECACHE_SIZE }}"
NEO4J_dbms_security_procedures_unrestricted: "{{ .Values.NEO4J.DBMS_SECURITY_PROCEDURES_UNRESTRICTED }}"
NEO4J_dbms_allow__format__migration: "true"
NEO4J_dbms_allow__upgrade: "true"
NEO4J_dbms_default__database: "{{ .Values.NEO4J.DBMS_DEFAULT_DATABASE }}"
NEO4J_apoc_import_file_enabled: "{{ .Values.NEO4J.APOC_IMPORT_FILE_ENABLED }}"

View File

@ -1,57 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}-neo4j
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "deployment-neo4j"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
replicas: 1
revisionHistoryLimit: {{ .Values.NEO4J.REVISIONS_HISTORY_LIMIT }}
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ .Release.Name }}-neo4j
template:
metadata:
name: neo4j
annotations:
backup.velero.io/backup-volumes: neo4j-data
# make sure the pod is redeployed
rollme: {{ randAlphaNum 5 | quote }}
labels:
app: {{ .Release.Name }}-neo4j
spec:
containers:
- name: container-{{ .Release.Name }}-neo4j
image: "{{ .Values.NEO4J.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.NEO4J.DOCKER_IMAGE_PULL_POLICY }}
ports:
- containerPort: 7687
- containerPort: 7474
resources:
requests:
memory: {{ .Values.NEO4J.RESOURCE_REQUESTS_MEMORY | default "1G" | quote }}
limits:
memory: {{ .Values.NEO4J.RESOURCE_LIMITS_MEMORY | default "1G" | quote }}
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-neo4j
- secretRef:
name: secret-{{ .Release.Name }}-neo4j
volumeMounts:
- mountPath: /data/
name: neo4j-data
volumes:
- name: neo4j-data
persistentVolumeClaim:
claimName: volume-claim-{{ .Release.Name }}-neo4j
restartPolicy: {{ .Values.NEO4J.CONTAINER_RESTART_POLICY }}
terminationGracePeriodSeconds: {{ .Values.NEO4J.CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS }}

View File

@ -1,19 +0,0 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: volume-claim-{{ .Release.Name }}-neo4j
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "volume-claim-neo4j"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
storageClassName: storage-{{ .Release.Name }}-persistent
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.NEO4J.STORAGE }}

View File

@ -1,15 +0,0 @@
kind: Secret
apiVersion: v1
metadata:
name: secret-{{ .Release.Name }}-neo4j
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "secret-neo4j"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
stringData:
NEO4J_USERNAME: ""
NEO4J_PASSWORD: ""

View File

@ -1,23 +0,0 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Release.Name }}-neo4j
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "service-neo4j"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
ports:
- name: {{ .Release.Name }}-bolt
port: 7687
targetPort: 7687
protocol: TCP
#- name: {{ .Release.Name }}-http
# port: 7474
# targetPort: 7474
selector:
app: {{ .Release.Name }}-neo4j

View File

@ -1,16 +0,0 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: storage-{{ .Release.Name }}-persistent
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "storage-persistent"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
provisioner: {{ .Values.STORAGE.PROVISIONER }}
reclaimPolicy: {{ .Values.STORAGE.RECLAIM_POLICY }}
volumeBindingMode: {{ .Values.STORAGE.VOLUME_BINDING_MODE }}
allowVolumeExpansion: {{ .Values.STORAGE.ALLOW_VOLUME_EXPANSION }}

View File

@ -1,20 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: configmap-{{ .Release.Name }}-webapp
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "configmap-webapp"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
data:
HOST: "0.0.0.0"
PUBLIC_REGISTRATION: "{{ .Values.PUBLIC_REGISTRATION }}"
INVITE_REGISTRATION: "{{ .Values.INVITE_REGISTRATION }}"
CATEGORIES_ACTIVE: "{{ .Values.CATEGORIES_ACTIVE }}"
COOKIE_EXPIRE_TIME: "{{ .Values.COOKIE_EXPIRE_TIME }}"
WEBSOCKETS_URI: "{{ .Values.WEBAPP.WEBSOCKETS_URI }}"
GRAPHQL_URI: "http://{{ .Release.Name }}-backend:4000"

View File

@ -1,49 +0,0 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ .Release.Name }}-webapp
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "deployment-webapp"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
replicas: {{ .Values.WEBAPP.REPLICAS }}
minReadySeconds: {{ .Values.WEBAPP.MIN_READY_SECONDS }}
progressDeadlineSeconds: {{ .Values.WEBAPP.PROGRESS_DEADLINE_SECONDS }}
revisionHistoryLimit: {{ .Values.WEBAPP.REVISIONS_HISTORY_LIMIT }}
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ .Release.Name }}-webapp
template:
metadata:
annotations:
# make sure the pod is redeployed
rollme: {{ randAlphaNum 5 | quote }}
labels:
app: {{ .Release.Name }}-webapp
spec:
containers:
- name: container-{{ .Release.Name }}-webapp
image: "{{ .Values.WEBAPP.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.WEBAPP.DOCKER_IMAGE_PULL_POLICY }}
ports:
- containerPort: 3000
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-webapp
- secretRef:
name: secret-{{ .Release.Name }}-webapp
resources:
requests:
memory: {{ .Values.WEBAPP.RESOURCE_REQUESTS_MEMORY | default "500M" | quote }}
limits:
memory: {{ .Values.WEBAPP.RESOURCE_LIMITS_MEMORY | default "1G" | quote }}
restartPolicy: {{ .Values.WEBAPP.CONTAINER_RESTART_POLICY }}
terminationGracePeriodSeconds: {{ .Values.WEBAPP.CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS }}

Some files were not shown because too many files have changed in this diff Show More