Compare commits

...

129 Commits

Author SHA1 Message Date
857adbf0a9
v3.13.1 2025-11-17 23:20:22 +01:00
13fa25cd85
v3.12.2
fix secrets
2025-09-15 22:26:45 +02:00
865770556f
v3.12.1
remove s3-fix branch reference

production secrets

more production secrets
2025-09-13 17:54:26 +02:00
c0e5466ce9
s3 fix 2025-09-13 00:56:15 +02:00
c3bd4f5237
reinclude PRIVATE_KEY_PASSPHRASE secret 2025-09-13 00:01:54 +02:00
e0e377cac5
v3.12.0 2025-09-12 23:48:03 +02:00
d0a13fba03
update settings accordingly 2025-09-11 20:54:56 +02:00
337de8be5b
fix config variables 2025-05-16 11:46:20 +02:00
c1f36d1d41
v3.6.1 2025-05-13 09:51:48 +02:00
d3db5966a7
v3.4.0
missing folder

missing folder

missing folder
2025-04-29 08:47:52 +02:00
3e89fecc0b
v3.3.0 2025-04-12 15:02:10 +02:00
ba9afc5702
fix smtp host 2024-12-13 05:09:15 +01:00
a3dfc10f2e
update smtp host 2024-12-13 04:57:54 +01:00
84f1a6c919
adjusted config for proper email settings 2024-12-13 03:18:39 +01:00
000ad414d5
remove backup gitignore 2024-12-11 06:07:23 +01:00
fb6551d42b
fixed typo 2024-12-11 03:31:32 +01:00
037fb19b95
corrected urls 2024-12-11 03:07:57 +01:00
814be7548c
Merge pull request #1 from wir-social/hetzner
Hetzner
2024-12-11 02:45:33 +01:00
a79eedc99e
removed scripts & patches 2024-12-08 06:08:40 +01:00
97a8c6d270
configure redirect_domains for staging 2024-12-08 06:06:52 +01:00
bcdc38693f
wir.social production with subdomain 2024-12-05 13:28:57 +01:00
8731547afb
wir.social domain, restore script 2024-12-04 14:13:49 +01:00
51076e21dd
fix production environment 2024-12-04 10:43:38 +01:00
70dbf09140
fix NEO4J connection 2024-12-04 09:36:26 +01:00
c8d60b1e23
fix helmfile 2024-12-04 08:55:26 +01:00
cb5f1fc606
corrected publish.yml 2024-12-04 08:26:51 +01:00
6d6db9969e
fix path 2024-12-04 08:10:54 +01:00
83cfc8f5b0
update image name for staging 2024-12-04 08:08:55 +01:00
173fc75b0d
update staging url 2024-12-04 07:58:56 +01:00
Robert Schäfer
1afdada9e3 update domain 2024-12-02 14:38:08 +01:00
Robert Schäfer
d11b17fb8b remove obsolete workflow 2024-12-02 14:26:08 +01:00
Robert Schäfer
f2bb437773 Merge remote-tracking branch 'staging/hetzner' into hetzner 2024-12-02 14:24:58 +01:00
Robert Schäfer
f604c1a1f5 update to latest ocelot-staging version 2024-12-02 14:24:13 +01:00
Robert Schäfer
c9a63e31df change wildcard domain to it4c.org 2024-11-09 17:11:05 +01:00
Robert Schäfer
224d445639 update build image 2024-11-06 17:25:17 +01:00
Robert Schäfer
282afc6b56 update build image, add webapp env 2024-11-05 13:14:36 +01:00
Robert Schäfer
a8a1311783 typos 2024-10-29 22:18:52 +01:00
Robert Schäfer
9ae9020b23 fix image tag generation 2024-10-29 21:56:36 +01:00
Robert Schäfer
2ecbf8e7e2 add docker label ocelot-version 2024-10-29 21:43:15 +01:00
Robert Schäfer
a90047a31a update OCELOT_VERSION 2024-10-29 21:23:01 +01:00
Robert Schäfer
be5bcf8faa refactor: no need to tag OCELOT_VERSION
Now we have the version in a file, it's not necessary to encode it in the docker tag.
2024-10-29 17:41:17 +01:00
Robert Schäfer
6652a02c87 deploy on any tag 2024-10-29 17:34:46 +01:00
Robert Schäfer
a6951cbac7 better naming of github image repos 2024-10-29 16:05:15 +01:00
Robert Schäfer
9672ebfe97 update to new ocelot helm chart 2024-10-29 15:29:06 +01:00
Robert Schäfer
8e2884ced6 fix docker-compose.yml 2024-10-28 22:11:54 +01:00
Robert Schäfer
6894b57008 tagging is actually unnecessary
and can be done later
2024-10-28 21:17:24 +01:00
Robert Schäfer
78e7f7b3b7 feat: use checked in OCELOT_VERSION
`workflow_dispatch` only works on the default branch which is inconvenient for development
2024-10-28 14:19:53 +01:00
Robert Schäfer
57e7615c25 feat: docker-compose.yml for branding 2024-10-28 10:53:42 +01:00
Robert Schäfer
e971592128 fix workflow
Robert Schäfer
5d0da1e282 obsolete code 2024-10-27 21:28:57 +01:00
Robert Schäfer
67cfcc9590 better image tagging in helmfile 2024-10-27 21:24:36 +01:00
Robert Schäfer
d2a56c4334 refactor: turn staging into default environment 2024-10-27 21:09:30 +01:00
Robert Schäfer
841bc4d66a update to new interfaces 2024-10-27 15:26:53 +01:00
Robert Schäfer
5b0e1ab07d fix oversights 2024-10-26 23:57:46 +02:00
Robert Schäfer
72ec5d4e2b undo maintenance mode 2024-10-26 22:32:04 +02:00
Robert Schäfer
0138939103 remove prometheus
prometheus should be installed centrally
2024-10-26 22:30:24 +02:00
Robert Schäfer
f066a4ea37 maintenance mode 2024-10-26 22:08:58 +02:00
Robert Schäfer
0fec341e82 chore: empty commit to test wei:pull github app 2024-10-26 20:36:32 +02:00
Robert Schäfer
0952f8fd36 refactor: kubernetes workflows
* use Github container registry to remove dependency on dockerhub
* use sops for secure encryption of secrets
* use ONBUILD in docker images for rebranding
* use helmfile for deploying various environments
2024-10-26 20:01:19 +02:00
Wolfgang Huß
3d5d678dd1
Merge pull request #4 from Ocelot-Social-Community/3-release-version-less
chore(other) release version-less
2023-11-29 13:06:19 +01:00
Wolfgang Huß
a930f11d8f Encrypt secrets - add domains 'ocelot.social', 'www.ocelot.social' 2023-11-29 12:54:27 +01:00
Wolfgang Huß
f7389c3917 Replace footer URLs with 2023-11-29 12:52:04 +01:00
Wolfgang Huß
4b427dc0a6 Add DKIM to 'values.yaml.template' 2023-11-29 12:46:34 +01:00
Wolfgang Huß
fdc2e52fa4 Encrypt secrets 2023-07-11 13:14:41 +02:00
Wolfgang Huß
e87806d1d6 Add 'filter.ts' to constant files 2023-07-11 13:11:43 +02:00
Wolfgang Huß
293de8b2df Add 'OCELOT_VERSION' as comment to '.env.dist' 2023-07-11 13:10:44 +02:00
350237c62d
rename all old files to ts, since they are used in the frontend 2023-07-09 10:42:48 +02:00
02ccccd38f
renamed js files to ts 2023-07-07 22:29:22 +02:00
3056eec040
new secrets 2023-04-20 14:56:12 +02:00
a7176d1e62
new secrets 2023-04-20 13:35:49 +02:00
1d41deefe0
properly set constants 2023-04-20 13:34:54 +02:00
2f03303773
delete deployment folder 2023-04-20 13:34:42 +02:00
246c5dc201
properly define external links 2023-04-20 11:46:51 +02:00
b4832b0d0f
new secrets 2023-04-20 11:06:48 +02:00
be3ac7ad29
new secrets 2023-04-18 00:23:17 +02:00
9af6810cf6
properly use DOCKERHUB_ORGANISATION in publish 2023-04-18 00:23:10 +02:00
bc3e036b95
corrected publish workflow 2023-04-17 16:48:45 +02:00
69885510ce
new secrets 2023-04-17 15:12:01 +02:00
3512403f6f
include DOCKERHUB_BRAND_VARRIANT in env.dist 2023-04-17 15:11:55 +02:00
53cf410e3d
new values encrypted 2023-04-17 14:56:25 +02:00
a38bb6ffb4
gitignore backup folder 2023-04-17 14:56:16 +02:00
b3f7838c26
resource limits 2023-04-13 09:02:20 +02:00
aab17d949f
fixed few more problems on publish 2023-03-21 17:41:40 +01:00
87a8b26991
publish problem 2023-03-21 15:26:32 +01:00
48a040cad3
set github ref or master as tag suffix 2023-03-21 12:18:42 +01:00
540bd503b9
new secrets 2023-03-21 10:55:04 +01:00
77abadc844
publish workflow include ocelot build run 2023-03-21 10:54:57 +01:00
e80a9efd95
new .env.dist, new secrets 2023-03-20 23:39:48 +01:00
02bae448b6
missing quotations on json payload 2023-03-20 23:29:52 +01:00
e7ab20db5e
properly propagate ocelot refs throughout code checkout & workflows 2023-03-20 22:59:56 +01:00
57519c2011
missing secret 2023-03-20 22:21:26 +01:00
4469edf32b
provide an .env.dist example 2023-03-20 21:39:02 +01:00
2968f894ea
newly encrypted values 2023-03-20 21:38:52 +01:00
4b8f347214
use specific github refs & dockerhub tags 2023-03-20 21:38:41 +01:00
dcf018554e
wait for 4 minutes till seeding the database 2023-03-20 20:52:30 +01:00
0167a6a7ee
newly encrypted files 2023-03-20 12:53:19 +01:00
b88c0bc48f
tag release secret 2023-03-20 12:52:16 +01:00
4da6c0fda2
use github.token 2023-03-20 12:18:00 +01:00
d077256f9f
properly reference SECRET, include secret in upload to dockerhub env 2023-03-20 12:16:52 +01:00
a31104f26f
fix publish workflow 2023-03-20 11:50:04 +01:00
2a3538fe12
tag version on github 2023-03-20 11:48:21 +01:00
7d2297a98c
adjusted trigger name and include publish workflow 2023-03-20 11:35:09 +01:00
07cafff7f4
moved example brand into stage.ocelot.social 2023-03-20 11:00:42 +01:00
9053fec28b
update deploy script 2023-03-15 13:49:49 +01:00
f0298469e6
update secrets 2023-03-15 13:40:26 +01:00
592f475767
newly encrypted values 2023-03-15 13:35:19 +01:00
86e1ebe65c
Merge pull request #1 from Ocelot-Social-Community/add-license-1
Create LICENSE
2023-03-14 12:14:07 +01:00
06c2f4712d
Create LICENSE 2023-03-14 12:13:53 +01:00
4bc17766dd
remove license 2023-03-14 12:13:28 +01:00
3a99cf1706
new secrets enable cluster upgrade 2023-03-14 02:57:40 +01:00
beb665eb13
new secrets, test reseed 2023-03-14 02:51:20 +01:00
6750040466
more secrets 2023-03-14 02:43:00 +01:00
689c2c7476
newly encrypted secrets 2023-03-14 02:41:06 +01:00
d293f55512
debug ls 2023-03-14 02:38:16 +01:00
f4d0fdb2d8
use github workspace variable for path 2023-03-14 02:31:53 +01:00
5be30f393c
don't expose .env contents, relative paths for scripts 2023-03-14 02:29:11 +01:00
52083f90d3
missing bracket 2023-03-14 02:24:18 +01:00
5e7fc098f8
reference env in configuration as well 2023-03-14 02:23:45 +01:00
e81234aa5b
use .env 2023-03-14 02:20:04 +01:00
d3b7b445b3
no quotes 2023-03-14 02:18:39 +01:00
9a284a0f65
doublequote ref 2023-03-14 02:10:37 +01:00
775ae335db
fetch in depth to get tags 2023-03-14 02:07:01 +01:00
55f1cddb35
initial draft of deploy script, newly encrypted secrets 2023-03-14 02:01:41 +01:00
4726942368
encrypted .env, gitignore .env 2023-03-14 01:49:42 +01:00
75395fba16
removed .env 2023-03-14 01:47:24 +01:00
0336c79be1
include .env file 2023-03-14 01:37:30 +01:00
283519dab4
update 2023-03-13 13:05:06 +01:00
5b031fab36
first commit of encrypted values 2023-03-13 11:37:35 +01:00
63c1d6ce94
Initial commit 2023-03-13 11:29:27 +01:00
149 changed files with 645 additions and 4504 deletions

.env Normal file
@@ -0,0 +1,2 @@
OCELOT_VERSION=sha-592a8af

@@ -1,22 +0,0 @@
# GITHUB_OCELOT_REF affects the publish workflow
# GITHUB_OCELOT_REF is a ref (branch, tag, hash) of the ocelot repository
# if this value is not set the github ref just built in the triggering workflow is used.
# if this workflow is triggered by push to master instead of a build-trigger,
# the `master` branch of the ocelot repo is used.
# if you set it to `GITHUB_OCELOT_REF=master` unnecessary builds can occur.
# It is recommended not to set it rather than setting it to `master`
#GITHUB_OCELOT_REF=b2.4.0-351
# DOCKERHUB_OCELOT_TAG applies to the deploy workflow
# DOCKERHUB_OCELOT_TAG is a dockerhub tag for the configured (values.yaml) docker images
# if this value is not set the version just built in the triggering workflow is used.
# using `DOCKERHUB_OCELOT_TAG=latest` is the default behaviour of the Kubernetes Chart,
# but it's inaccurate if two workflows are running at the same time.
# It is recommended not to set it rather than setting it to `latest`
#DOCKERHUB_OCELOT_TAG=12-ocelot.social2.4.0
# DOCKERHUB_BRAND_VARRIANT defines the name of the branded image uploaded to dockerhub.
DOCKERHUB_BRAND_VARRIANT=stage-ocelot-social
# DOCKERHUB_ORGANISATION defines which dockerhub organisation images will be uploaded to
# DOCKERHUB_ORGANISATION=ocelotsocialnetwork

.env.enc
Binary file not shown.

@@ -1,57 +0,0 @@
name: deploy
on:
repository_dispatch:
types: [trigger-ocelot-brand-build-success]
jobs:
deploy:
# see example https://github.com/do-community/example-doctl-action
# see example https://github.com/do-community/example-doctl-action/blob/main/.github/workflows/workflow.yaml
name: Deploy defined version to cluster
runs-on: ubuntu-latest
env:
SECRET: ${{ secrets.SECRET }}
CONFIGURATION: "this"
GITHUB_OCELOT_REF_JUST_BUILT: ${{ github.event.client_payload.ocelot_ref }}
DOCKERHUB_OCELOT_TAG_JUST_BUILT: ${{ github.event.client_payload.BUILD_VERSION }}
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Decrypt .env
run: gpg --quiet --batch --yes --decrypt --passphrase="${{ env.SECRET }}" --output .env .env.enc
- name: Load .env
uses: aarcangeli/load-dotenv@v1.0.0
with:
quiet: true
- name: Set GITHUB_OCELOT_REF
run: |
if [ -z ${GITHUB_OCELOT_REF} ]; then
echo "GITHUB_OCELOT_REF=${GITHUB_OCELOT_REF_JUST_BUILT}" >> $GITHUB_ENV
fi
shell: bash
- name: Checkout Ocelot code
uses: actions/checkout@v3
with:
repository: 'Ocelot-Social-Community/Ocelot-Social'
ref: ${{ env.GITHUB_OCELOT_REF }}
path: 'ocelot/'
fetch-depth: 0
- name: Checkout code
uses: actions/checkout@v3
with:
path: "ocelot/deployment/configurations/${{ env.CONFIGURATION }}"
- name: Set DOCKERHUB_OCELOT_TAG
run: |
if [ -z ${DOCKERHUB_OCELOT_TAG} ]; then
echo "DOCKERHUB_OCELOT_TAG=${DOCKERHUB_OCELOT_TAG_JUST_BUILT}" >> $GITHUB_ENV
fi
shell: bash
- name: Decrypt all secrets
run: ocelot/deployment/scripts/secrets.decrypt.sh
- name: Upgrade Cluster
run: ocelot/deployment/scripts/cluster.upgrade.sh
#- name: Sleep for 4 minutes
# run: sleep 240s
#- name: Reset and seed Neo4j database
# run: ocelot/deployment/scripts/cluster.reseed.sh
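The removed workflow above decrypts `.env.enc` with a shared passphrase. For reference, such an archive is typically produced with symmetric gpg encryption along these lines (a sketch — the encrypting command itself is not checked in; `$SECRET` is the same passphrase stored in the `SECRET` repository secret):
```bash
# Sketch: counterpart of the decrypt step above (assumed invocation).
# Depending on your GnuPG version you may also need --pinentry-mode loopback.
gpg --quiet --batch --yes --symmetric --cipher-algo AES256 \
    --passphrase="$SECRET" --output .env.enc .env
```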

@@ -1,267 +1,87 @@
name: publish
on:
repository_dispatch:
types: [trigger-ocelot-build-success]
push:
branches:
- master
on: push
jobs:
build_branded:
name: Docker Build Branded
build-and-push-images:
strategy:
matrix:
app:
- name: backend
file: docker/backend.Dockerfile
- name: webapp
file: docker/webapp.Dockerfile
- name: maintenance
file: docker/maintenance.Dockerfile
runs-on: ubuntu-latest
env:
SECRET: ${{ secrets.SECRET }}
CONFIGURATION: "this"
GITHUB_OCELOT_REF_JUST_BUILT: ${{ github.event.client_payload.ref }}
OCELOT_GITHUB_RUN_NUMBER: ${{ github.event.client_payload.GITHUB_RUN_NUMBER }}
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}/${{ matrix.app.name }}
permissions:
contents: read
packages: write
attestations: write
id-token: write
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Decrypt .env
run: gpg --quiet --batch --yes --decrypt --passphrase="${{ env.SECRET }}" --output .env .env.enc
- name: Load .env
uses: aarcangeli/load-dotenv@v1.0.0
- name: Checkout repository
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4.1.7
- name: Log in to the Container registry
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
with:
quiet: true
- name: Set GITHUB_OCELOT_REF
run: |
if [ -z ${GITHUB_OCELOT_REF} ]; then
echo "GITHUB_OCELOT_REF=${GITHUB_OCELOT_REF_JUST_BUILT}" >> $GITHUB_ENV
fi
shell: bash
- name: Set DOCKERHUB_ORGANISATION
run: |
if [ -z ${DOCKERHUB_ORGANISATION} ]; then
echo "DOCKERHUB_ORGANISATION=ocelotsocialnetwork" >> $GITHUB_ENV
fi
- name: Checkout Ocelot code
uses: actions/checkout@v3
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Read $OCELOT_VERSION from file
run: cat .env >> $GITHUB_ENV
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@70b2cdc6480c1a8b86edf1777157f8f437de2166
with:
repository: 'Ocelot-Social-Community/Ocelot-Social'
ref: ${{ env.GITHUB_OCELOT_REF }}
path: 'ocelot/'
fetch-depth: 0
- name: Set OCELOT_GITHUB_RUN_NUMBER
run: |
if [ -z ${OCELOT_GITHUB_RUN_NUMBER} ]; then
echo "OCELOT_GITHUB_RUN_NUMBER=${GITHUB_OCELOT_REF}" >> $GITHUB_ENV
fi
if [ -z ${OCELOT_GITHUB_RUN_NUMBER} ]; then
echo "OCELOT_GITHUB_RUN_NUMBER=master" >> $GITHUB_ENV
fi
shell: bash
- name: Checkout Branded Repo code
uses: actions/checkout@v3
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=schedule
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=ref,event=branch
type=ref,event=pr
type=sha
labels: |
ocelot-version=${{ env.OCELOT_VERSION }}
- name: Build and push Docker images
id: push
uses: docker/build-push-action@4f58ea79222b3b9dc2c8bbdd6debcef730109a75
with:
ref: 'master'
path: "ocelot/deployment/configurations/${{ env.CONFIGURATION }}"
fetch-depth: 0
- name: Build branded images
run: |
ocelot/deployment/scripts/branded-images.build.sh
docker save "${DOCKERHUB_ORGANISATION}/backend-${DOCKERHUB_BRAND_VARRIANT}" > /tmp/backend-branded.tar
docker save "${DOCKERHUB_ORGANISATION}/webapp-${DOCKERHUB_BRAND_VARRIANT}" > /tmp/webapp-branded.tar
docker save "${DOCKERHUB_ORGANISATION}/maintenance-${DOCKERHUB_BRAND_VARRIANT}" > /tmp/maintenance-branded.tar
file: ${{ matrix.app.file }}
context: ${{ matrix.app.context || '.' }}
push: true
build-args: |
OCELOT_VERSION=${{ env.OCELOT_VERSION }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
- name: Upload Artifact (Backend)
uses: actions/upload-artifact@v2
with:
name: docker-backend-branded
path: /tmp/backend-branded.tar
- name: Upload Artifact (Webapp)
uses: actions/upload-artifact@v2
with:
name: docker-webapp-branded
path: /tmp/webapp-branded.tar
- name: Upload Artifact (Maintenance)
uses: actions/upload-artifact@v2
with:
name: docker-maintenance-branded
path: /tmp/maintenance-branded.tar
upload_to_dockerhub:
name: Upload to Dockerhub
deploy-to-kubernetes:
runs-on: ubuntu-latest
needs: [build_branded]
env:
SECRET: ${{ secrets.SECRET }}
DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }}
GITHUB_OCELOT_REF_JUST_BUILT: ${{ github.event.client_payload.ref }}
if: ${{ startsWith(github.ref, 'refs/tags/') }}
needs: build-and-push-images
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Decrypt .env
run: gpg --quiet --batch --yes --decrypt --passphrase="${{ env.SECRET }}" --output .env .env.enc
- name: Load .env
uses: aarcangeli/load-dotenv@v1.0.0
with:
quiet: true
- name: Set GITHUB_OCELOT_REF
run: |
if [ -z ${GITHUB_OCELOT_REF} ]; then
echo "GITHUB_OCELOT_REF=${GITHUB_OCELOT_REF_JUST_BUILT}" >> $GITHUB_ENV
fi
shell: bash
- name: Checkout Ocelot code
uses: actions/checkout@v3
with:
repository: 'Ocelot-Social-Community/Ocelot-Social'
ref: ${{ env.GITHUB_OCELOT_REF }}
path: 'ocelot/'
fetch-depth: 0
- name: Download Docker Image (Backend)
uses: actions/download-artifact@v2
with:
name: docker-backend-branded
path: /tmp
- name: Load Docker Image
run: docker load < /tmp/backend-branded.tar
- name: Download Docker Image (Webapp)
uses: actions/download-artifact@v2
with:
name: docker-webapp-branded
path: /tmp
- name: Load Docker Image
run: docker load < /tmp/webapp-branded.tar
- name: Download Docker Image (Maintenance)
uses: actions/download-artifact@v2
with:
name: docker-maintenance-branded
path: /tmp
- name: Load Docker Image
run: docker load < /tmp/maintenance-branded.tar
- name: Upload to dockerhub
run: ocelot/deployment/scripts/branded-images.upload.sh
github_tag:
name: Tag latest version on Github
runs-on: ubuntu-latest
needs: [upload_to_dockerhub]
env:
SECRET: ${{ secrets.SECRET }}
GITHUB_OCELOT_REF_JUST_BUILT: ${{ github.event.client_payload.ref }}
OCELOT_GITHUB_RUN_NUMBER: ${{ github.event.client_payload.GITHUB_RUN_NUMBER }}
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Decrypt .env
run: gpg --quiet --batch --yes --decrypt --passphrase="${{ env.SECRET }}" --output .env .env.enc
- name: Load .env
uses: aarcangeli/load-dotenv@v1.0.0
with:
quiet: true
- name: Set GITHUB_OCELOT_REF
run: |
if [ -z ${GITHUB_OCELOT_REF} ]; then
echo "GITHUB_OCELOT_REF=${GITHUB_OCELOT_REF_JUST_BUILT}" >> $GITHUB_ENV
fi
shell: bash
- name: Checkout Ocelot code
uses: actions/checkout@v3
with:
repository: 'Ocelot-Social-Community/Ocelot-Social'
ref: ${{ env.GITHUB_OCELOT_REF }}
path: 'ocelot/'
fetch-depth: 0
- name: Set OCELOT_GITHUB_RUN_NUMBER
run: |
if [ -z ${OCELOT_GITHUB_RUN_NUMBER} ]; then
echo "OCELOT_GITHUB_RUN_NUMBER=${GITHUB_OCELOT_REF}" >> $GITHUB_ENV
fi
if [ -z ${OCELOT_GITHUB_RUN_NUMBER} ]; then
echo "OCELOT_GITHUB_RUN_NUMBER=master" >> $GITHUB_ENV
fi
shell: bash
- name: Setup env
run: |
echo "OCELOT_VERSION=$(node -p -e "require('./ocelot/package.json').version")" >> $GITHUB_ENV
echo "BRANDED_VERSION=${GITHUB_RUN_NUMBER}" >> $GITHUB_ENV
echo "BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_ENV
echo "BUILD_COMMIT=${GITHUB_SHA}" >> $GITHUB_ENV
- run: echo "BUILD_VERSION=${BRANDED_VERSION}-ocelot.social${OCELOT_VERSION}-${OCELOT_GITHUB_RUN_NUMBER}" >> $GITHUB_ENV
- name: package-version-to-git-tag + build number
uses: pkgdeps/git-tag-action@v2
with:
github_token: ${{ github.token }} #${{ secrets.GITHUB_TOKEN }}
github_repo: ${{ github.repository }}
version: ${{ env.BUILD_VERSION }}
git_commit_sha: ${{ github.sha }}
git_tag_prefix: "b"
#- name: Generate changelog
# run: |
# yarn install
# yarn auto-changelog --latest-version ${{ env.VERSION }} --unreleased-only
- name: package-version-to-git-release
continue-on-error: true # Will fail if tag exists
id: create_release
uses: actions/create-release@v1
- uses: mdgreenwald/mozilla-sops-action@d9714e521cbaecdae64a89d2fdd576dd2aa97056 # v1.6.0
- uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4.1.7
- run: |
mkdir -p ~/.config/sops/age
echo $SOPS_KEY | base64 --decode > ~/.config/sops/age/keys.txt
env:
GITHUB_TOKEN: ${{ github.token }} #${{ secrets.GITHUB_TOKEN }} # This token is provided by Actions, you do not need to create your own token
SOPS_KEY: ${{ secrets.SOPS_KEY }}
- run: |
mkdir -p ~/.kube
sops decrypt ./helmfile/secrets/kubeconfig > ~/.kube/config
chmod 600 ~/.kube/config
- uses: helmfile/helmfile-action@80fbb6408b98822310f94d8d1321a2cacf87f78f #v1.9.2
with:
tag_name: ${{ env.BUILD_VERSION }}
release_name: ${{ env.BUILD_VERSION }}
#body_path: ./CHANGELOG.md
draft: false
prerelease: false
# TODO correct version
build_trigger:
name: Trigger successful brand build
runs-on: ubuntu-latest
needs: [github_tag]
env:
SECRET: ${{ secrets.SECRET }}
GITHUB_OCELOT_REF_JUST_BUILT: ${{ github.event.client_payload.ref }}
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Decrypt .env
run: gpg --quiet --batch --yes --decrypt --passphrase="${{ env.SECRET }}" --output .env .env.enc
- name: Load .env
uses: aarcangeli/load-dotenv@v1.0.0
with:
quiet: true
- name: Set GITHUB_OCELOT_REF
run: |
if [ -z ${GITHUB_OCELOT_REF} ]; then
echo "GITHUB_OCELOT_REF=${GITHUB_OCELOT_REF_JUST_BUILT}" >> $GITHUB_ENV
fi
shell: bash
- name: Checkout Ocelot code
uses: actions/checkout@v3
with:
repository: 'Ocelot-Social-Community/Ocelot-Social'
ref: ${{ env.GITHUB_OCELOT_REF }}
path: 'ocelot/'
fetch-depth: 0
- name: Set OCELOT_GITHUB_RUN_NUMBER
run: |
if [ -z ${OCELOT_GITHUB_RUN_NUMBER} ]; then
echo "OCELOT_GITHUB_RUN_NUMBER=${GITHUB_OCELOT_REF}" >> $GITHUB_ENV
fi
if [ -z ${OCELOT_GITHUB_RUN_NUMBER} ]; then
echo "OCELOT_GITHUB_RUN_NUMBER=master" >> $GITHUB_ENV
fi
shell: bash
- name: Setup env
run: |
echo "OCELOT_VERSION=$(node -p -e "require('./ocelot/package.json').version")" >> $GITHUB_ENV
echo "BRANDED_VERSION=${GITHUB_RUN_NUMBER}" >> $GITHUB_ENV
echo "BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_ENV
echo "BUILD_COMMIT=${GITHUB_SHA}" >> $GITHUB_ENV
- run: echo "BUILD_VERSION=${BRANDED_VERSION}-ocelot.social${OCELOT_VERSION}-${OCELOT_GITHUB_RUN_NUMBER}" >> $GITHUB_ENV
- name: Repository Dispatch
uses: peter-evans/repository-dispatch@v2
with:
token: ${{ github.token }}
event-type: trigger-ocelot-brand-build-success
repository: ${{ github.repository }}
client-payload: '{"ref": "${{ github.ref }}", "sha": "${{ github.sha }}", "ref_ocelot": "${{ github.event.client_payload.ref }}", "sha_ocelot": "${{ github.event.client_payload.sha }}", "OCELOT_VERSION": "${{ env.OCELOT_VERSION }}", "BRANDED_VERSION": "${{ env.BRANDED_VERSION }}", "BUILD_DATE": "${{ env.BUILD_DATE }}", "BUILD_COMMIT": "${{ env.BUILD_COMMIT }}", "BUILD_VERSION": "${{ env.BUILD_VERSION }}"}'
helmfile-args: apply
helmfile-workdirectory: ./helmfile
helm-plugins: >
https://github.com/databus23/helm-diff,
https://github.com/jkroepke/helm-secrets,
https://github.com/aslafy-z/helm-git
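The `SOPS_KEY` secret above is expected to hold a base64-encoded age private key. A minimal sketch of producing such a key pair with the standard `age` tooling (the secret name is taken from the workflow; the rest is an assumption):
```bash
# Sketch: create an age key pair for sops.
age-keygen -o keys.txt        # prints the matching public key (age1...) on stderr
# Store the encoded private key as the SOPS_KEY GitHub Actions secret:
base64 -w0 keys.txt
# The public key must be listed as a recipient in .sops.yaml (see below).
```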

.gitignore vendored
@@ -1,4 +1 @@
*.yaml
SECRET
.env
/backup
.DS_Store

.sops.yaml Normal file
@@ -0,0 +1,17 @@
creation_rules:
- age: >-
age1al36hkk8can83zpxq8qyy07gpv83hdw9vchfly5f264kanz405as283a00,
age1llp6k66265q3rzqemxpnq0x3562u20989vcjf65fl9s3hjhgcscq6mhnjw,
age1zycwtk6dkxj6vuqhj9jw7932ythky9p3att6df4z9qasyw8v5dxquejcmp,
age15arcg8x6ltnsacwalvny0h2d4d4wkdmax328mw3v5vda9zm97uqshtavmr,
age1khw2eps099audp3uu5s9rk07qznllh5c8a43gv5dtpnq2a7lue6qrehn5s,
age1f6mzqe0cejajzt0c7nwdjz4xvs4hjct9d8hrgj60e7unzyfd7prsn0npe5,
age1t0ufylv5xfwhmcamu4gpwtay4wcuyqgzlkht4t04s9qjl8xjks9skxrt02
# age1al36hkk8can83zpxq8qyy07gpv83hdw9vchfly5f264kanz405as283a00 SOPS_KEY github secret
# age1llp6k66265q3rzqemxpnq0x3562u20989vcjf65fl9s3hjhgcscq6mhnjw @roschaefer
# age1zycwtk6dkxj6vuqhj9jw7932ythky9p3att6df4z9qasyw8v5dxquejcmp @mahula
# age15arcg8x6ltnsacwalvny0h2d4d4wkdmax328mw3v5vda9zm97uqshtavmr @Elweyn
# age1khw2eps099audp3uu5s9rk07qznllh5c8a43gv5dtpnq2a7lue6qrehn5s @ulfgebhardt
# age1f6mzqe0cejajzt0c7nwdjz4xvs4hjct9d8hrgj60e7unzyfd7prsn0npe5 @Tirokk
# age1t0ufylv5xfwhmcamu4gpwtay4wcuyqgzlkht4t04s9qjl8xjks9skxrt02 @Bettelstab
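With these creation rules in place, encrypting and decrypting works roughly like this (a sketch; the file path is the one used by the publish workflow, any path covered by the rules works):
```bash
# Sketch: encrypt in place for all age recipients listed in .sops.yaml
sops --encrypt --in-place ./helmfile/secrets/kubeconfig
# Decrypt to stdout; requires a matching private key in ~/.config/sops/age/keys.txt
sops --decrypt ./helmfile/secrets/kubeconfig
```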

LICENSE Normal file
@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2023 Ocelot.Social Community
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

@@ -1,11 +0,0 @@
# LICENSE
MIT License
Copyright \(c\) 2022 by the [Ocelot.Social Community](https://github.com/Ocelot-Social-Community)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files \(the "Software"\), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

@@ -1,7 +1,7 @@
# Wir.Social Deploys And Rebrands Ocelot.Social
[![Build Status Publish](https://github.com/wir-social/wir-social/actions/workflows/publish.yml/badge.svg)](https://github.com/wir-social/wir-social/actions)
[![MIT License](https://img.shields.io/badge/license-MIT-green.svg)](https://github.com/wir-social/wir-social/blob/LICENSE.md)
[![Build Status Publish](https://github.com/IT4Change/wir.social/actions/workflows/publish.yml/badge.svg)](https://github.com/IT4Change/wir.social/actions)
[![MIT License](https://img.shields.io/badge/license-MIT-green.svg)](https://github.com/IT4Change/wir.social/blob/LICENSE.md)
[![Discord Channel](https://img.shields.io/discord/489522408076738561.svg)](https://discord.gg/AJSX9DCSUA)
[![Open Source Helpers](https://www.codetriage.com/ocelot-social-community/ocelot-social-deploy-rebranding/badges/users.svg)](https://www.codetriage.com/ocelot-social-community/ocelot-social-deploy-rebranding)

@@ -1,32 +0,0 @@
# Todo For Next Update
When you take over this deploy-and-rebrand repo for your network, you have to take note of the following changes and tasks …
## This Latest Version >= 1.1.0 with 'ocelotDockerVersionTag' 1.1.0-205
### Deployment/Rebranding PR chore: 🍰 Release v1.1.0 - Implement Categories Again #63
- You have to add the `CATEGORIES_ACTIVE` from the `deployment/kubernetes/values.template.yaml` to your `deployment/kubernetes/values.yaml` and set it to your preferred value.
- Make sure the correct categories are in your Neo4j database on the server.
## Version >= 1.0.9 with 'ocelotDockerVersionTag' 1.0.9-199
### Deployment/Rebranding PR chore: 🍰 Implement PRODUCTION_DB_CLEAN_ALLOW for Staging Production Environments #56
- Copy `PRODUCTION_DB_CLEAN_ALLOW` from `deployment/kubernetes/values.template.yaml` to `values.yaml` and set it to `false` for production environments; set it to `true` only for selected staging test servers.
### Deployment/Rebranding PR chore: [WIP] 🍰 Refine docs, first step #46
- Commit: `Update cert-manager apiVersion "cert-manager.io/v1alpha2" to "cert-manager.io/v1"`
- Check for `kubectl` and `helm` versions.
## Version >= 1.0.8 with 'ocelotDockerVersionTag' 1.0.8-182
### PR feat: 🍰 Configure Cookie Expire Time #43
- You have to add the `COOKIE_EXPIRE_TIME` from the `deployment/kubernetes/values.template.yaml` to your `deployment/kubernetes/values.yaml` and set it to your preferred value.
- Correct the `locale` cookie expiration time in the data privacy statement.
## Version 1.0.7 with 'ocelotDockerVersionTag' 1.0.7-171
- No information.

@@ -0,0 +1,5 @@
// this file is duplicated in `backend/src/constants/group.js` and `webapp/constants/group.js`
export const NAME_LENGTH_MIN = 3
export const NAME_LENGTH_MAX = 50
export const DESCRIPTION_WITHOUT_HTML_LENGTH_MIN = 10 // with removed HTML tags
export const SHOW_GROUP_BUTTON_IN_HEADER = true

@@ -1,12 +1,13 @@
export default {
MENU: [
// {
// name: 'Beiträge',
// path: '/#',
// nameIdent: 'nameIdent',
// path: '/',
// },
// {
// name: 'Über Yunite',
// url: 'https://yunite.org',
// nameIdent: 'nameIdent',
// url: 'https://ocelot.social',
// target: '_blank',
// },
],
}

@@ -3,7 +3,11 @@
import { defaultPageParamsPages } from '~/components/utils/InternalPages.js'
const ORGANIZATION = defaultPageParamsPages.ORGANIZATION.overwrite({
// externalLink: 'null', // if string is defined and not empty it's dominating
// if defined it's dominating
// externalLink: {
// url: 'https://ocelot.social',
// target: '_blank',
// },
internalPage: {
// footerIdent: 'site.made', // localized string identifier, if undefined default is used
@@ -16,7 +20,11 @@ const ORGANIZATION = defaultPageParamsPages.ORGANIZATION.overwrite({
},
})
const DONATE = defaultPageParamsPages.DONATE.overwrite({
externalLink: 'https://webcraft-media.de/donate-for-wir-social.html', // if string is defined and not empty it's dominating
// if defined it's dominating
externalLink: {
url: 'https://webcraft-media.de/donate-for-wir-social.html',
target: '_blank',
},
internalPage: {
// footerIdent: 'site.donate', // localized string identifier, if undefined default is used
@@ -29,7 +37,11 @@ const DONATE = defaultPageParamsPages.DONATE.overwrite({
},
})
const IMPRINT = defaultPageParamsPages.IMPRINT.overwrite({
externalLink: 'https://www.webcraft-media.de/#!impressum', // if string is defined and not empty it's dominating
// if defined it's dominating
externalLink: {
url: 'https://www.webcraft-media.de/#!impressum',
target: '_blank',
},
internalPage: {
// footerIdent: 'site.imprint', // localized string identifier, if undefined default is used
@@ -42,7 +54,7 @@ const IMPRINT = defaultPageParamsPages.IMPRINT.overwrite({
},
})
const TERMS_AND_CONDITIONS = defaultPageParamsPages.TERMS_AND_CONDITIONS.overwrite({
// externalLink: null, // if string is defined and not empty it's dominating
// externalLink: null, // if defined it's dominating
internalPage: {
// footerIdent: 'site.termsAndConditions', // localized string identifier, if undefined default is used
@@ -55,7 +67,7 @@ const TERMS_AND_CONDITIONS = defaultPageParamsPages.TERMS_AND_CONDITIONS.overwrite({
},
})
const CODE_OF_CONDUCT = defaultPageParamsPages.CODE_OF_CONDUCT.overwrite({
// externalLink: null, // if string is defined and not empty it's dominating
// externalLink: null, // if defined it's dominating
internalPage: {
// footerIdent: 'site.code-of-conduct', // localized string identifier, if undefined default is used
@@ -68,7 +80,11 @@ const CODE_OF_CONDUCT = defaultPageParamsPages.CODE_OF_CONDUCT.overwrite({
},
})
const DATA_PRIVACY = defaultPageParamsPages.DATA_PRIVACY.overwrite({
externalLink: 'https://www.webcraft-media.de/#!datenschutz', // if string is defined and not empty it's dominating
// if defined it's dominating
externalLink: {
url: 'https://www.webcraft-media.de/#!datenschutz',
target: '_blank',
},
internalPage: {
// footerIdent: 'site.data-privacy', // localized string identifier, if undefined default is used
@@ -81,7 +97,7 @@ const DATA_PRIVACY = defaultPageParamsPages.DATA_PRIVACY.overwrite({
},
})
const FAQ = defaultPageParamsPages.FAQ.overwrite({
// externalLink: null, // if string is defined and not empty it's dominating
// externalLink: null, // if defined it's dominating
internalPage: {
// footerIdent: 'site.faq', // localized string identifier, if undefined default is used
@@ -94,7 +110,11 @@ const FAQ = defaultPageParamsPages.FAQ.overwrite({
},
})
const SUPPORT = defaultPageParamsPages.SUPPORT.overwrite({
externalLink: 'https://www.webcraft-media.de/#!contact', // if string is defined and not empty it's dominating
// if defined it's dominating
externalLink: {
url: 'https://www.webcraft-media.de/#!contact',
target: '_blank',
},
internalPage: {
// footerIdent: 'site.support', // localized string identifier, if undefined default is used

@@ -3,6 +3,19 @@
export default {
LOGO_HEADER_PATH: '/img/custom/logo-horizontal.svg',
LOGO_HEADER_WIDTH: '130px',
LOGO_HEADER_CLICK: {
// externalLink: {
// url: 'https://ocelot.social',
// target: '_blank',
// },
externalLink: null,
internalPath: {
to: {
name: 'index',
},
scrollTo: '.main-navigation',
},
},
LOGO_SIGNUP_PATH: '/img/custom/logo-squared.svg',
LOGO_WELCOME_PATH: '/img/custom/logo-squared.svg',
LOGO_LOGOUT_PATH: '/img/custom/logo-squared.svg',

@@ -1,25 +0,0 @@
# Minikube
There are many Kubernetes providers, but if you're just getting started, Minikube is a tool that you can use to get your feet wet.
After you have [installed Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/),
open your Minikube dashboard:
```text
$ minikube dashboard
```
This will give you an overview. Some of the steps below need some time before their resources become available to other dependent deployments. Keeping an eye on the dashboard is a great way to check that.
Follow the installation instruction for [Kubernetes with Helm](./kubernetes/README.md).
If all the pods and services have settled and everything looks green in your
minikube dashboard, expose the services you want on your host system.
For example:
```text
$ minikube service webapp --namespace=ocelotsocialnetwork
# optionally
$ minikube service backend --namespace=ocelotsocialnetwork
```

@@ -1,23 +0,0 @@
# Deployment
Before you start the deployment, you have to do some preparation.
## Deployment Preparations
Since all deployment methods described here depend on [Docker](https://docker.com) and [DockerHub](https://hub.docker.com), you need to create your own organisation on DockerHub and put its name in the [package.json](/package.json) file as your `dockerOrganisation`.
Read more details in the [main README](/README.md) under [Usage](/README.md#usage).
## Deployment Methods
You have the following options for a deployment:
- [Kubernetes with Helm](./kubernetes/README.md)
## After Deployment
After the first deployment of the new network on your server, the database is initialized with the default administrator:
- E-mail: admin@example.org
- Password: 1234
***ATTENTION:*** When you are logged in for the first time, please change your (the admin's) e-mail to an existing one and change your password to a secure one !!!

@@ -1,3 +0,0 @@
/dns.values.yaml
/nginx.values.yaml
/values.yaml

@@ -1,305 +0,0 @@
# Kubernetes Backup Of Ocelot.Social
One of the most important tasks in managing a running [ocelot.social](https://github.com/Ocelot-Social-Community/Ocelot-Social) network is backing up the data, e.g. the Neo4j database and the stored image files.
## Manual Offline Backup
To prepare, [kubectl](https://kubernetes.io/docs/tasks/tools/) must be installed and ready to use so that you have access to Kubernetes on your server.
Check if the correct context is used by running the following commands:
```bash
# check context and set the correct one
$ kubectl config get-contexts
# if the wrong context is chosen, switch to the correct one
$ kubectl config use-context <your-context>
# optionally, check whether all pods are running well
$ kubectl -n default get pods -o wide
```
The very first step is to put the website into **maintenance mode**.
### Set Maintenance Mode
There are two ways to put the network into maintenance mode:
- via Kubernetes Dashboard
- via `kubectl`
#### Maintenance Mode Via Kubernetes Dashboard
In the Kubernetes Dashboard, you can select `Ingresses` from the left side menu under `Service`.
After that, in the list that appears, you will find the entry `ingress-ocelot-webapp`, which has three dots on the right, where you can click to edit the entry.
You can scroll to the end of the YAML file, where you will find one or more `host` entries under `rules`, one for each domain of the network.
In all entries, change the value of the `serviceName` entry from ***ocelot-webapp*** to `ocelot-maintenance` and the value of the `servicePort` entry from ***3000*** to `80`.
First, check if your website is still online.
After you click `Update`, the new settings will be applied and you will find your website in maintenance mode.
#### Maintenance Mode Via `kubectl`
To put the network into maintenance mode, run the following commands in the terminal:
```bash
# list ingresses
$ kubectl get ingress -n default
# edit ingress
$ kubectl -n default edit ingress ingress-ocelot-webapp
```
Change the content of the YAML file for all domains to:
```yaml
spec:
rules:
- host: network-domain.social
http:
paths:
- backend:
# serviceName: ocelot-webapp
# servicePort: 3000
serviceName: ocelot-maintenance
servicePort: 80
```
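Editing the ingress interactively is error-prone. The same switch can be scripted with `kubectl patch` (a sketch against the legacy ingress schema shown above; on `networking.k8s.io/v1` the fields are `service.name` and `service.port.number`, and the rule/path indices must match your ingress):
```bash
# Sketch: point the first rule's first path at the maintenance service.
$ kubectl -n default patch ingress ingress-ocelot-webapp --type=json -p='[
  {"op": "replace", "path": "/spec/rules/0/http/paths/0/backend/serviceName", "value": "ocelot-maintenance"},
  {"op": "replace", "path": "/spec/rules/0/http/paths/0/backend/servicePort", "value": 80}
]'
```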
First, check if your website is still online.
After you save the file, the new settings will be applied and you will find your website in maintenance mode.
### Neo4j Database Offline Backup
Before we can back up the database, we need to put it into **sleep mode**.
#### Set Neo4j To Sleep Mode
Again there are two ways to put the database into sleep mode:
- via Kubernetes Dashboard
- via `kubectl`
##### Sleep Mode Via Kubernetes Dashboard
In the Kubernetes Dashboard, you can select `Deployments` from the left side menu under `Workloads`.
After that, in the list that appears, you will find the entry `ocelot-neo4j`, which has three dots on the right, where you can click to edit the entry.
Scroll to the end of the YAML file where you will find the `spec.template.spec.containers` entry. Here you can insert the `command` entry directly after `imagePullPolicy` on a new line.
```yaml
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
command: ["tail", "-f", "/dev/null"]
```
After clicking `Update`, the new settings will be applied and you should check in the `Pods` menu item on the left side if the `ocelot-neo4j-<ID>` pod restarts.
##### Sleep Mode Via `kubectl`
To put Neo4j into sleep mode, run the following commands in the terminal:
```bash
# list deployments
$ kubectl get deployments -n default
# edit deployment
$ kubectl -n default edit deployment ocelot-neo4j
```
Scroll to the `spec.template.spec.containers` entry. Here you can insert the `command` entry directly after `imagePullPolicy` on a new line.
```yaml
image: <network-DockerHub-name>/neo4j-community-branded:latest
imagePullPolicy: Always
command: ["tail", "-f", "/dev/null"]
```
After pressing enter, the new settings will be applied and you should check if the `ocelot-neo4j-<ID>` pod restarts.
Use command:
```bash
# check if the old pod restarts
$ kubectl -n default get pods -o wide
```
#### Generate Offline Backup
The offline backup is generated via `kubectl`:
```bash
# check for the Neo4j pod
$ kubectl -n default get pods -o wide
# ls: see which backup dumps are already there
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') -- ls
# bash: enter bash of Neo4j
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') -- bash
# generate Dump
neo4j% neo4j-admin dump --to=/var/lib/neo4j/$(date +%F)-neo4j-dump
# exit bash
neo4j% exit
# ls: see if the new backup dump is there
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') -- ls
```
Let's copy the dump backup:
```bash
# copy the dump directly onto the backup volume
$ kubectl cp default/$(kubectl -n default get pods | grep ocelot-neo4j |awk '{ print $1 }'):/var/lib/neo4j/$(date +%F)-neo4j-dump /Volumes/<volume-name>/$(date +%F)-neo4j-dump
```
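To restore such a dump later, the direction is reversed (a hedged sketch; `neo4j-admin load` flags differ between Neo4j versions, `--force` overwrites the existing database, and Neo4j must still be in sleep mode):
```bash
# Sketch: copy a dump back into the pod ...
$ kubectl cp /Volumes/<volume-name>/<date>-neo4j-dump default/$(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }'):/var/lib/neo4j/<date>-neo4j-dump
# ... and load it, overwriting the current database
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') -- neo4j-admin load --from=/var/lib/neo4j/<date>-neo4j-dump --force
```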
#### Remove Sleep Mode From Neo4j
Again there are two ways to put Neo4j back into working mode:
- via Kubernetes Dashboard
- via `kubectl`
##### Remove Sleep Mode Via Kubernetes Dashboard
In the Kubernetes Dashboard, you can select `Deployments` from the left side menu under `Workloads`.
After that, in the list that appears, you will find the entry `ocelot-neo4j`, which has three dots on the right, where you can click to edit the entry.
Scroll to the `spec.template.spec.containers.command` entry and remove the whole `command` entry, as shown:
```yaml
containers:
- name: container-ocelot-neo4j
image: 'senderfm/neo4j-community-branded:latest'
command:
- tail
- '-f'
- /dev/null
ports:
- containerPort: 7687
protocol: TCP
```
And get:
```yaml
containers:
- name: container-ocelot-neo4j
image: 'senderfm/neo4j-community-branded:latest'
ports:
- containerPort: 7687
protocol: TCP
```
After clicking `Update`, the new settings will be applied and you should check in the `Pods` menu item on the left side if the `ocelot-neo4j-<ID>` pod restarts.
##### Remove Sleep Mode Via `kubectl`
To put Neo4j into working mode, run the following commands in the terminal:
```bash
# list deployments
$ kubectl get deployments -n default
# edit deployment
$ kubectl -n default edit deployment ocelot-neo4j
```
Scroll to the `spec.template.spec.containers.command` entry and remove the whole `command` entry, as shown:
```yaml
spec:
containers:
- command:
- tail
- -f
- /dev/null
envFrom:
- configMapRef:
name: configmap-ocelot-neo4j
```
And get:
```yaml
spec:
containers:
- envFrom:
- configMapRef:
name: configmap-ocelot-neo4j
```
After pressing enter, the new settings will be applied and you should check if the `ocelot-neo4j-<ID>` pod restarts.
Use command:
```bash
# check if the old pod restarts
$ kubectl -n default get pods -o wide
```
### Backend Backup
To back up the images from the backend volume, run commands:
```bash
# ls: backend/public/uploads
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-backend | awk '{ print $1 }') -- ls public/uploads
# copy all images from uploads directly onto the backup volume
$ kubectl cp default/$(kubectl -n default get pods | grep ocelot-backend |awk '{ print $1 }'):/app/public/uploads /Volumes/<volume-name>/$(date +%F)-public-uploads
```
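Restoring the uploads works the other way around (a sketch):
```bash
# Sketch: copy backed-up uploads from the backup volume back into the backend pod
$ kubectl cp /Volumes/<volume-name>/<date>-public-uploads default/$(kubectl -n default get pods | grep ocelot-backend | awk '{ print $1 }'):/app/public/uploads
```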
### Remove Maintenance Mode
There are two ways to put the network into working mode:
- via Kubernetes Dashboard
- via `kubectl`
#### Remove Maintenance Mode Via Kubernetes Dashboard
In the Kubernetes Dashboard, you can select `Ingresses` from the left side menu under `Service`.
After that, in the list that appears, you will find the entry `ingress-ocelot-webapp`, which has three dots on the right, where you can click to edit the entry.
You can scroll to the end of the YAML file, where you will find one or more `host` entries under `rules`, one for each domain of the network.
In all entries, change the value of the `serviceName` entry from ***ocelot-maintenance*** to `ocelot-webapp` and the value of the `servicePort` entry from ***80*** to `3000`.
First, check if your website is still in maintenance mode.
After you click `Update`, the new settings will be applied and you will find your website online again.
#### Remove Maintenance Mode Via `kubectl`
To put the network into working mode, run the following commands in the terminal:
```bash
# list ingresses
$ kubectl get ingress -n default
# edit ingress
$ kubectl -n default edit ingress ingress-ocelot-webapp
```
Change the content of the YAML file for all domains to:
```yaml
spec:
rules:
- host: network-domain.social
http:
paths:
- backend:
serviceName: ocelot-webapp
servicePort: 3000
# serviceName: ocelot-maintenance
# servicePort: 80
```
First, check if your website is still in maintenance mode.
After you save the file, the new settings will be applied and you will find your website online again.
XXX
```bash
# Dump: Create a Backup in Kubernetes: https://docs.human-connection.org/human-connection/deployment/volumes/neo4j-offline-backup#create-a-backup-in-kubernetes
```

@@ -1,39 +0,0 @@
type: application
apiVersion: v2
name: ocelot-social
version: "1.0.0"
# The appVersion defines which docker image is pulled.
# Having it set to latest will pull the latest build on dockerhub.
# You are free to define a specific version here tho.
# e.g. appVersion: "latest" or "1.0.2-3-ocelot.social1.0.2-79"
# Be aware that this requires all your apps to have the same docker image version available.
appVersion: "latest"
description: The Helm chart for ocelot.social
home: https://ocelot.social
sources:
- https://github.com/Ocelot-Social-Community/
- https://github.com/Ocelot-Social-Community/Ocelot-Social
- https://github.com/Ocelot-Social-Community/Ocelot-Social-Deploy-Rebranding
maintainers:
- name: Ulf Gebhardt
email: ulf.gebhardt@webcraft-media.de
url: https://www.webcraft-media.de/#!ulf_gebhardt
icon: https://github.com/Ocelot-Social-Community/Ocelot-Social/raw/master/webapp/static/img/custom/welcome.svg
deprecated: false
# Unused Fields
#dependencies: # A list of the chart requirements (optional)
# - name: ingress-nginx
# version: v1.10.0
# repository: https://kubernetes.github.io/ingress-nginx
# condition: (optional) A yaml path that resolves to a boolean, used for enabling/disabling charts (e.g. subchart1.enabled )
# tags: # (optional)
# - Tags can be used to group charts for enabling/disabling together
# import-values: # (optional)
# - ImportValues holds the mapping of source values to parent key to be imported. Each item can be a string or pair of child/parent sublist items.
# alias: (optional) Alias to be used for the chart. Useful when you have to add the same chart multiple times
#kubeVersion: A SemVer range of compatible Kubernetes versions (optional)
#keywords:
# - A list of keywords about this project (optional)
#annotations:
# example: A list of annotations keyed by name (optional).

@@ -1,84 +0,0 @@
# DigitalOcean
If you want to set up a [Kubernetes](https://kubernetes.io) cluster on [DigitalOcean](https://www.digitalocean.com), follow this guide.
## Create Account
Create an account with DigitalOcean.
## Add Project
On the left side you will see a menu. Click on `New Project`. Enter a name and click `Create Project`.
You can probably skip moving resources.
## Create Kubernetes Cluster
On the right top you find the button `Create`. Click on it and choose `Kubernetes - Create Kubernetes Cluster`.
- use the latest Kubernetes version
- choose your datacenter region
- name your node pool: e.g. `pool-<your-network-name>`
- `2 Basic nodes` with `2.5 GB RAM (total of 4 GB)`, `2 shared CPUs`, and `80 GB Disk` each is optimal for the beginning
- set your cluster name: e.g. `cluster-<your-network-name>`
- select your project
- no tags necessary
## Getting Started
After your cluster is set up (see the progress bar above), click on `Getting started`. Please install the following management tools:
- [kubectl v1.24.1](https://kubernetes.io/docs/tasks/tools/)
- [doctl v1.78.0](https://github.com/digitalocean/doctl)
Install the tools as described on the tab or see the links here.
After the installation, click on `Continue`.
### Download Configuration File
Follow the steps to download the configuration file.
You can skip this step if necessary, as you can download the file later. You can then do this by clicking on `Kubernetes` in the left menu. In the menu to the right of the cluster name in the cluster list, click on `More` and select `Download Config`.
### Patch & Minor Version Upgrades
Skip `Patch & Minor Version Upgrades` for now.
### Install 1-Click Apps
You don't need a 1-click app. Our Helm setup will install the required NGINX.
Therefore, skip this step as well.
## DNS Configuration
There are the following two ways to set up the DNS.
### Manage DNS With A Different Domain Provider
If you have registered your domain or subdomain with another domain provider, add an `A` record there with one of the IP addresses from one of the cluster droplets in the DNS.
To find the correct IP address to set in the DNS `A` record, click `Droplets` in the left main menu.
A list of all your droplets will be displayed.
Take the IP of one of the (perhaps two or more) droplets in your cluster from the list and enter it into the `A` record.
### Manage DNS With DigitalOcean
***TODO:** How to configure the DigitalOcean DNS management service …*
To understand what makes sense to do when managing your DNS with DigitalOcean, you need to know how DNS works:
DNS means `Domain Name System`. It resolves domains like `example.com` into an IP like `123.123.123.123`.
DigitalOcean is not a domain registrar, but provides a DNS management service. If you use DigitalOcean's DNS management service, you can configure [your cluster](/deployment/kubernetes/README.md#dns) to always resolve the domain to the correct IP and automatically update it for that.
The IPs of the DigitalOcean machines are not necessarily stable, so the cluster's DNS service will update the DNS records managed by DigitalOcean to the new IP as needed.
***CAUTION:** If you are using an external DNS, you currently have to do this manually, which can cause downtime.*
## Deploy
Yeah, you're done here. Back to [Deployment with Helm for Kubernetes](/deployment/kubernetes/README.md).
## Backups On DigitalOcean
You can and should do [backups](/deployment/kubernetes/Backup.md) with Kubernetes for sure.
Additional to backup and copying the Neo4j database dump and the backend images you can do a volume snapshot on DigitalOcean at the moment you have the database in sleep mode.

@@ -1,299 +0,0 @@
# Kubernetes Helm Installation Of Ocelot.Social
Deploying [ocelot.social](https://github.com/Ocelot-Social-Community/Ocelot-Social) with [Helm](https://helm.sh) for [Kubernetes](https://kubernetes.io) is very straightforward. All you have to do is change certain parameters, like domain names and API keys; then you just install our provided Helm chart to your cluster.
## Kubernetes Cloud Hosting
There are various ways to set up your own or a managed Kubernetes cluster. We will extend the following lists over time.
Please contact us if you are interested in options not listed below.
Managed Kubernetes:
- [DigitalOcean](/deployment/kubernetes/DigitalOcean.md)
## Configuration
You can customize the network server with your own configuration by duplicating the `values.template.yaml` to a new `values.yaml` file and changing it to your needs. All included variables will be available as environment variables in your deployed Kubernetes pods.
Besides the `values.template.yaml` file we provide a `nginx.values.template.yaml` and `dns.values.template.yaml` for a similar procedure. The new `nginx.values.yaml` is the configuration for the ingress-nginx Helm chart, while the `dns.values.yaml` file is for automatically updating the DNS records on DigitalOcean and is therefore optional.
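For example (a minimal sketch, assuming the template files sit next to this README):
```bash
$ cd deployment/kubernetes
$ cp values.template.yaml values.yaml
$ cp nginx.values.template.yaml nginx.values.yaml
# optional, only when DigitalOcean manages your DNS
$ cp dns.values.template.yaml dns.values.yaml
```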
## Installation
Due to the many limitations of Helm you still have to do several manual steps.
Those occur before you run the actual *ocelot.social* Helm chart.
Obviously you are expected to have `helm` and `kubectl` installed.
For the cert-manager you may need `cmctl`, see below.
For DigitalOcean you may also need `doctl`.
Install:
- [kubectl v1.24.1](https://kubernetes.io/docs/tasks/tools/)
- [doctl v1.78.0](https://docs.digitalocean.com/reference/doctl/how-to/install/)
- [cmctl v1.8.2](https://cert-manager.io/docs/usage/cmctl/#installation)
- [helm v3.9.0](https://helm.sh/docs/intro/install/)
### Cert Manager (https)
Please refer to [cert-manager.io docs](https://cert-manager.io/docs/installation/) for more details.
***ATTENTION:*** *Be with the Terminal in your repository in the folder of this README.*
We have three ways to install cert-manager: purely via `kubectl`, via `cmctl`, or with `helm`.
We recommend using `helm` because then we do not mix the installation methods.
Please have a look here:
- [Installing with Helm](https://cert-manager.io/docs/installation/helm/#installing-with-helm)
Our Helm installation is optimized for cert-manager version `v1.9.1` and `kubectl` version `v1.24.2`.
Please search here for cert-manager versions that are compatible with your `kubectl` version on the cluster and on the client: [cert-manager Supported Releases](https://cert-manager.io/docs/installation/supported-releases/#supported-releases).
***ATTENTION:*** *When uninstalling cert-manager, be sure to use the same method as for installation! Otherwise, we could end up in a broken state, see [Uninstall](https://cert-manager.io/docs/installation/kubectl/#uninstalling).*
<!-- #### 1. Create Namespace
```bash
# kubeconfig.yaml set globally
$ kubectl create namespace cert-manager
# or kubeconfig.yaml in your repo, then adjust
$ kubectl --kubeconfig=/../kubeconfig.yaml create namespace cert-manager
```
#### 2. Add Helm repository and update
```bash
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
```
#### 3. Install Cert-Manager Helm chart
```bash
# option 1
# this can't be applied via kubectl to our cluster since the CRDs can't be installed properly this way ...
# $ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.crds.yaml
# option 2
# kubeconfig.yaml set globally
$ helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--version v1.9.1 \
--set installCRDs=true
# or kubeconfig.yaml in your repo, then adjust
$ helm --kubeconfig=/../kubeconfig.yaml \
install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--version v1.9.1 \
--set installCRDs=true
``` -->
### Ingress-Nginx
#### 1. Add Helm repository for `ingress-nginx` and update
```bash
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
```
#### 2. Install ingress-nginx
```bash
# kubeconfig.yaml set globally
$ helm install ingress-nginx ingress-nginx/ingress-nginx -f nginx.values.yaml
# or kubeconfig.yaml in your repo, then adjust
$ helm --kubeconfig=/../kubeconfig.yaml install ingress-nginx ingress-nginx/ingress-nginx -f nginx.values.yaml
```
### DigitalOcean Firewall
This is only necessary if you run DigitalOcean without a load balancer ([see here for more info](https://stackoverflow.com/questions/54119399/expose-port-80-on-digital-oceans-managed-kubernetes-without-a-load-balancer/55968709)).
#### 1. Authenticate towards DO with your local `doctl`
You will need a DO token for that.
```bash
# without doctl context
$ doctl auth init
# with doctl new context to be filled in
$ doctl auth init --context <new-context-name>
```
You will need an API token, which you can generate in the control panel at <https://cloud.digitalocean.com/account/api/tokens>.
#### 2. Generate DO firewall
Get the `CLUSTER_UUID` value from the dashboard or from the ID column via `doctl kubernetes cluster list`:
```bash
# need to apply access token by `doctl auth init` before
$ doctl kubernetes cluster list
```
Fill in the `CLUSTER_UUID` and `your-domain`, the latter with hyphens `-` instead of dots `.`:
```bash
# without doctl context
$ doctl compute firewall create \
--inbound-rules="protocol:tcp,ports:80,address:0.0.0.0/0,address:::/0 protocol:tcp,ports:443,address:0.0.0.0/0,address:::/0" \
--tag-names=k8s:<CLUSTER_UUID> \
--name=<your-domain>-http-https
# with doctl context to be filled in
$ doctl compute firewall create \
--inbound-rules="protocol:tcp,ports:80,address:0.0.0.0/0,address:::/0 protocol:tcp,ports:443,address:0.0.0.0/0,address:::/0" \
--tag-names=k8s:<CLUSTER_UUID> \
--name=<your-domain>-http-https --context <context-name>
```
To get information about the result, use this command (fill in the `ID` you got at creation):
```bash
# without doctl context
$ doctl compute firewall get <ID>
# with doctl context to be filled in
$ doctl compute firewall get <ID> --context <context-name>
```
### DNS
***TODO:** I thought this is necessary if we use the DigitalOcean DNS management service? See [Manage DNS With DigitalOcean](/deployment/kubernetes/DigitalOcean.md#manage-dns-with-digitalocean)*
This chart is only necessary (or rather, recommended) if you run DigitalOcean without a load balancer.
You need to generate an access token with read + write scope at <https://cloud.digitalocean.com/account/api/tokens> and fill it into your `dns.values.yaml`.
#### 1. Add Helm repository for `bitnami` and update
```bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
```
#### 2. Install DNS
```bash
# kubeconfig.yaml set globally
$ helm install dns bitnami/external-dns -f dns.values.yaml
# or kubeconfig.yaml in your repo, then adjust
$ helm --kubeconfig=/../kubeconfig.yaml install dns bitnami/external-dns -f dns.values.yaml
```
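To confirm that external-dns is actually reconciling your records, tailing its logs is usually enough; the deployment name below assumes the release name `dns` chosen above:
```bash
# watch external-dns create/update records at DigitalOcean
$ kubectl logs deployment/dns-external-dns --tail=20 --follow
```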
### Ocelot.Social
***Attention:** Before installing your own ocelot.social network, you need to create a DockerHub account and organization, put the organization's name in the `package.json` file, and push your deployment and rebranding code to GitHub so that GitHub Actions can push your Docker images to DockerHub. Kubernetes will pull these images to create the pods.*
All ocelot commands need to be executed in the Kubernetes folder, so `cd deployment/kubernetes/` is expected to be run before every command. Furthermore, the given commands install ocelot into the `default` namespace; this can be changed by attaching `--namespace not.default`.
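For example (the namespace name is illustrative):
```bash
# install into a dedicated namespace, creating it if missing
$ helm install ocelot ./ --namespace ocelot-social --create-namespace
```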
#### Install
Run this only once, for the initial installation:
```bash
# kubeconfig.yaml set globally
$ helm install ocelot ./
# or kubeconfig.yaml in your repo, then adjust
$ helm --kubeconfig=/../kubeconfig.yaml install ocelot ./
```
#### Upgrade & Update
Run for all upgrades and updates:
```bash
# kubeconfig.yaml set globally
$ helm upgrade ocelot ./
# or kubeconfig.yaml in your repo, then adjust
$ helm --kubeconfig=/../kubeconfig.yaml upgrade ocelot ./
```
#### Rollback
Run for a rollback, in case something went wrong:
```bash
# kubeconfig.yaml set globally
$ helm rollback ocelot
# or kubeconfig.yaml in your repo, then adjust
$ helm --kubeconfig=/../kubeconfig.yaml rollback ocelot
```
#### Uninstall
Be aware that if you uninstall ocelot, the formerly bound volumes become unbound. Those volumes contain all data from uploads and the database. You have to manually free their claim references in order to bind them again when reinstalling; once freed from their former references, they should automatically be rebound (provided the requested sizes did not change). A sketch of freeing a volume follows after the commands below.
```bash
# kubeconfig.yaml set globally
$ helm uninstall ocelot
# or kubeconfig.yaml in your repo, then adjust
$ helm --kubeconfig=/../kubeconfig.yaml uninstall ocelot
```
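A minimal sketch of freeing a released volume, using the standard `kubectl patch` approach; the PV name is a placeholder:
```bash
# list volumes; released ones show STATUS 'Released'
$ kubectl get pv
# clear the stale claim reference so the volume becomes 'Available' again
$ kubectl patch pv <pv-name> -p '{"spec":{"claimRef":null}}'
```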
## Backups
You can, and definitely should, do [backups](/deployment/kubernetes/Backup.md) of your Kubernetes deployment.
## Error Reporting
We use [Sentry](https://github.com/getsentry/sentry) for error reporting in both
our backend and web frontend. You can use either a hosted or a self-hosted
instance. Just set the two `DSN` values in your
[configmap](../templates/configmap.template.yaml) and update the `COMMIT`
during a deployment with your commit hash or the version of your release.
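To double-check that the DSNs made it into the rendered configmap, something like the following should work (the configmap name assumes the release name `ocelot` and the naming scheme from the chart templates):
```bash
# both DSN keys should show up with your values
$ kubectl -n default get configmap configmap-ocelot-backend -o yaml | grep SENTRY_DSN
```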
### Self-hosted Sentry
For data privacy, it is recommended to set up your own instance of Sentry.
If you are lucky enough to have a kubernetes cluster with the required hardware
support, try this [helm chart](https://github.com/helm/charts/tree/master/stable/sentry).
On our Kubernetes cluster we get "multi-attach" errors for persistent volumes.
Apparently DigitalOcean's Kubernetes clusters do not fulfill the requirements.
## Kubernetes Commands (Without Helm) To Deploy New Docker Images To A Kubernetes Cluster
### Deploy A Version
```bash
# !!! be aware of the correct kube context !!!
$ kubectl config get-contexts
# deploy version '$BUILD_VERSION'
# !!! 'latest' is not recommended in production !!!
# for convenience, set an env var
$ export BUILD_VERSION=1.0.8-48-ocelot.social1.0.8-184 # example
# check this with
$ echo $BUILD_VERSION
1.0.8-48-ocelot.social1.0.8-184
# deploy the version '$BUILD_VERSION' to the Kubernetes cluster
$ kubectl -n default set image deployment/ocelot-webapp container-ocelot-webapp=ocelotsocialnetwork/webapp:$BUILD_VERSION
$ kubectl -n default rollout restart deployment/ocelot-webapp
$ kubectl -n default set image deployment/ocelot-backend container-ocelot-backend=ocelotsocialnetwork/backend:$BUILD_VERSION
$ kubectl -n default rollout restart deployment/ocelot-backend
$ kubectl -n default set image deployment/ocelot-maintenance container-ocelot-maintenance=ocelotsocialnetwork/maintenance:$BUILD_VERSION
$ kubectl -n default rollout restart deployment/ocelot-maintenance
$ kubectl -n default set image deployment/ocelot-neo4j container-ocelot-neo4j=ocelotsocialnetwork/neo4j-community:$BUILD_VERSION
$ kubectl -n default rollout restart deployment/ocelot-neo4j
# verify deployment and wait for the pods of each deployment to get ready for cleaning and seeding of the database
$ kubectl -n default rollout status deployment/ocelot-webapp --timeout=240s
$ kubectl -n default rollout status deployment/ocelot-maintenance --timeout=240s
$ kubectl -n default rollout status deployment/ocelot-backend --timeout=240s
$ kubectl -n default rollout status deployment/ocelot-neo4j --timeout=240s
```
### Staging Clean And Seed Neo4j Database
***ATTENTION:*** Cleaning and seeding of our Neo4j database is only possible in production if the env `PRODUCTION_DB_CLEAN_ALLOW=true` is set in the deployment.
```bash
# !!! be aware of the correct kube context !!!
$ kubectl config get-contexts
# reset and seed Neo4j database via backend for staging
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-backend | awk '{ print $1 }') -- /bin/sh -c "node --experimental-repl-await dist/db/clean.js && node --experimental-repl-await dist/db/seed.js"
```
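Before running the clean script, it may be worth confirming that the flag is actually present in the backend configmap (again assuming the release name `ocelot` and the template's naming scheme):
```bash
# should print PRODUCTION_DB_CLEAN_ALLOW: "true" on staging
$ kubectl -n default get configmap configmap-ocelot-backend -o yaml | grep PRODUCTION_DB_CLEAN_ALLOW
```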

View File

@ -1,12 +0,0 @@
# please duplicate template file and rename to "dns.values.yaml" and fill in your value
provider: digitalocean
digitalocean:
# create the API token at https://cloud.digitalocean.com/account/api/tokens
# needs read + write
apiToken: "TODO"
domainFilters:
# domains you want external-dns to be able to edit
- TODO.TODO
rbac:
create: true

View File

@ -1,13 +0,0 @@
# please duplicate template file and rename to "nginx.values.yaml" and fill in your value
controller:
kind: DaemonSet
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
ingressClass: nginx
daemonset:
useHostPort: true
service:
type: ClusterIP
rbac:
create: true

View File

@ -1 +0,0 @@
You installed ocelot-social! Congrats <3

View File

@ -1,29 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: configmap-{{ .Release.Name }}-backend
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "configmap-backend"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
data:
PRODUCTION_DB_CLEAN_ALLOW: "{{ .Values.PRODUCTION_DB_CLEAN_ALLOW }}"
PUBLIC_REGISTRATION: "{{ .Values.PUBLIC_REGISTRATION }}"
INVITE_REGISTRATION: "{{ .Values.INVITE_REGISTRATION }}"
CATEGORIES_ACTIVE: "{{ .Values.CATEGORIES_ACTIVE }}"
CLIENT_URI: "{{ .Values.BACKEND.CLIENT_URI }}"
EMAIL_DEFAULT_SENDER: "{{ .Values.BACKEND.EMAIL_DEFAULT_SENDER }}"
SMTP_HOST: "{{ .Values.BACKEND.SMTP_HOST }}"
SMTP_PORT: "{{ .Values.BACKEND.SMTP_PORT }}"
SMTP_IGNORE_TLS: "{{ .Values.BACKEND.SMTP_IGNORE_TLS }}"
SMTP_SECURE: "{{ .Values.BACKEND.SMTP_SECURE }}"
GRAPHQL_URI: "http://{{ .Release.Name }}-backend:4000"
NEO4J_URI: "bolt://{{ .Release.Name }}-neo4j:7687"
#REDIS_DOMAIN: ---toBeSet(IP)---
#REDIS_PORT: "6379"
#SENTRY_DSN_WEBAPP: "---toBeSet---"
#SENTRY_DSN_BACKEND: "---toBeSet---"

View File

@ -1,57 +0,0 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ .Release.Name }}-backend
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "deployment-backend"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
replicas: 1
minReadySeconds: {{ .Values.BACKEND.MIN_READY_SECONDS }}
progressDeadlineSeconds: {{ .Values.BACKEND.PROGRESS_DEADLINE_SECONDS }}
revisionHistoryLimit: {{ .Values.BACKEND.REVISIONS_HISTORY_LIMIT }}
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ .Release.Name }}-backend
template:
metadata:
annotations:
backup.velero.io/backup-volumes: uploads
# make sure the pod is redeployed
rollme: {{ randAlphaNum 5 | quote }}
labels:
app: {{ .Release.Name }}-backend
spec:
containers:
- name: container-{{ .Release.Name }}-backend
image: "{{ .Values.BACKEND.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.BACKEND.DOCKER_IMAGE_PULL_POLICY }}
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-backend
- secretRef:
name: secret-{{ .Release.Name }}-backend
ports:
- containerPort: 4000
protocol: TCP
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /app/public/uploads
name: uploads
dnsPolicy: ClusterFirst
schedulerName: default-scheduler
restartPolicy: {{ .Values.BACKEND.CONTAINER_RESTART_POLICY }}
terminationGracePeriodSeconds: {{ .Values.BACKEND.CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS }}
volumes:
- name: uploads
persistentVolumeClaim:
claimName: volume-claim-{{ .Release.Name }}-uploads

View File

@ -1,24 +0,0 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: volume-claim-{{ .Release.Name }}-uploads
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "volume-claim-backend"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
#dataSource:
# name: uploads-snapshot
# kind: VolumeSnapshot
# apiGroup: snapshot.storage.k8s.io
storageClassName: storage-{{ .Release.Name }}-persistent
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.BACKEND.STORAGE_UPLOADS }}

View File

@ -1,21 +0,0 @@
kind: Secret
apiVersion: v1
metadata:
name: secret-{{ .Release.Name }}-backend
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "secret-backend"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
stringData:
JWT_SECRET: "{{ .Values.BACKEND.JWT_SECRET }}"
MAPBOX_TOKEN: "{{ .Values.BACKEND.MAPBOX_TOKEN }}"
PRIVATE_KEY_PASSPHRASE: "{{ .Values.BACKEND.PRIVATE_KEY_PASSPHRASE }}"
SMTP_USERNAME: "{{ .Values.BACKEND.SMTP_USERNAME }}"
SMTP_PASSWORD: "{{ .Values.BACKEND.SMTP_PASSWORD }}"
#NEO4J_USERNAME: ""
#NEO4J_PASSWORD: ""
#REDIS_PASSWORD: ---toBeSet---

View File

@ -1,20 +0,0 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Release.Name }}-backend
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "service-backend"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
ports:
- name: {{ .Release.Name }}-graphql
port: 4000
targetPort: 4000
protocol: TCP
selector:
app: {{ .Release.Name }}-backend

View File

@ -1,22 +0,0 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-production
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "letsencrypt-production"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: {{ .Values.LETSENCRYPT.EMAIL }}
privateKeySecretRef:
name: letsencrypt-production
solvers:
- http01:
ingress:
class: nginx

View File

@ -1,22 +0,0 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "letsencrypt-staging"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: {{ .Values.LETSENCRYPT.EMAIL }}
privateKeySecretRef:
name: letsencrypt-staging
solvers:
- http01:
ingress:
class: nginx

View File

@ -1,29 +0,0 @@
kind: Job
apiVersion: batch/v1
metadata:
name: job-{{ .Release.Name }}-db-init
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "job-db-init"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
annotations:
"helm.sh/hook": post-install
"helm.sh/hook-delete-policy": hook-succeeded, hook-failed
"helm.sh/hook-weight": "0"
spec:
template:
spec:
restartPolicy: Never
containers:
- name: job-{{ .Release.Name }}-db-init
image: "{{ .Values.BACKEND.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
command: ["/bin/sh", "-c", "yarn prod:migrate init"]
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-backend
- secretRef:
name: secret-{{ .Release.Name }}-backend

View File

@ -1,29 +0,0 @@
kind: Job
apiVersion: batch/v1
metadata:
name: job-{{ .Release.Name }}-db-migrate
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "job-db-migrate"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
annotations:
"helm.sh/hook": post-install, post-upgrade
"helm.sh/hook-delete-policy": hook-succeeded, hook-failed
"helm.sh/hook-weight": "5"
spec:
template:
spec:
restartPolicy: Never
containers:
- name: job-{{ .Release.Name }}-db-migrations
image: "{{ .Values.BACKEND.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
command: ["/bin/sh", "-c", "yarn prod:migrate up"]
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-backend
- secretRef:
name: secret-{{ .Release.Name }}-backend

View File

@ -1,14 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: configmap-{{ .Release.Name }}-maintenance
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "configmap-maintenance"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
data:
HOST: "0.0.0.0"

View File

@ -1,40 +0,0 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ .Release.Name }}-maintenance
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "deployment-maintenance"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
revisionHistoryLimit: {{ .Values.MAINTENANCE.REVISIONS_HISTORY_LIMIT }}
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ .Release.Name }}-maintenance
template:
metadata:
labels:
app: {{ .Release.Name }}-maintenance
# make sure the pod is redeployed
rollme: {{ randAlphaNum 5 | quote }}
spec:
containers:
- name: container-{{ .Release.Name }}-maintenance
image: "{{ .Values.MAINTENANCE.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.MAINTENANCE.DOCKER_IMAGE_PULL_POLICY }}
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-webapp
- secretRef:
name: secret-{{ .Release.Name }}-webapp
ports:
- containerPort: 80
restartPolicy: {{ .Values.MAINTENANCE.CONTAINER_RESTART_POLICY }}
terminationGracePeriodSeconds: {{ .Values.MAINTENANCE.CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS }}

View File

@ -1,13 +0,0 @@
kind: Secret
apiVersion: v1
metadata:
name: secret-{{ .Release.Name }}-maintenance
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "secret-maintenance"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
stringData:

View File

@ -1,20 +0,0 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Release.Name }}-maintenance
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "service-maintenance"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
ports:
- name: {{ .Release.Name }}-http
port: 80
targetPort: 80
protocol: TCP
selector:
app: {{ .Release.Name }}-maintenance

View File

@ -1,21 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: configmap-{{ .Release.Name }}-neo4j
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "configmap-neo4j"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
data:
NEO4J_ACCEPT_LICENSE_AGREEMENT: "{{ .Values.NEO4J.ACCEPT_LICENSE_AGREEMENT }}"
NEO4J_AUTH: "{{ .Values.NEO4J.AUTH }}"
NEO4J_dbms_connector_bolt_thread__pool__max__size: "{{ .Values.NEO4J.DBMS_CONNECTOR_BOLT_THREAD_POOL_MAX_SIZE }}"
NEO4J_dbms_memory_heap_initial__size: "{{ .Values.NEO4J.DBMS_MEMORY_HEAP_INITIAL_SIZE }}"
NEO4J_dbms_memory_heap_max__size: "{{ .Values.NEO4J.DBMS_MEMORY_HEAP_MAX_SIZE }}"
NEO4J_dbms_memory_pagecache_size: "{{ .Values.NEO4J.DBMS_MEMORY_PAGECACHE_SIZE }}"
NEO4J_dbms_security_procedures_unrestricted: "{{ .Values.NEO4J.DBMS_SECURITY_PROCEDURES_UNRESTRICTED }}"
NEO4J_apoc_import_file_enabled: "{{ .Values.NEO4J.APOC_IMPORT_FILE_ENABLED }}"

View File

@ -1,57 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}-neo4j
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "deployment-neo4j"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
replicas: 1
revisionHistoryLimit: {{ .Values.NEO4J.REVISIONS_HISTORY_LIMIT }}
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ .Release.Name }}-neo4j
template:
metadata:
name: neo4j
annotations:
backup.velero.io/backup-volumes: neo4j-data
# make sure the pod is redeployed
rollme: {{ randAlphaNum 5 | quote }}
labels:
app: {{ .Release.Name }}-neo4j
spec:
containers:
- name: container-{{ .Release.Name }}-neo4j
image: "{{ .Values.NEO4J.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.NEO4J.DOCKER_IMAGE_PULL_POLICY }}
ports:
- containerPort: 7687
- containerPort: 7474
resources:
requests:
memory: {{ .Values.NEO4J.RESOURCE_REQUESTS_MEMORY | default "1G" | quote }}
limits:
memory: {{ .Values.NEO4J.RESOURCE_LIMITS_MEMORY | default "1G" | quote }}
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-neo4j
- secretRef:
name: secret-{{ .Release.Name }}-neo4j
volumeMounts:
- mountPath: /data/
name: neo4j-data
volumes:
- name: neo4j-data
persistentVolumeClaim:
claimName: volume-claim-{{ .Release.Name }}-neo4j
restartPolicy: {{ .Values.NEO4J.CONTAINER_RESTART_POLICY }}
terminationGracePeriodSeconds: {{ .Values.NEO4J.CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS }}

View File

@ -1,19 +0,0 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: volume-claim-{{ .Release.Name }}-neo4j
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "volume-claim-neo4j"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
storageClassName: storage-{{ .Release.Name }}-persistent
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.NEO4J.STORAGE }}

View File

@ -1,15 +0,0 @@
kind: Secret
apiVersion: v1
metadata:
name: secret-{{ .Release.Name }}-neo4j
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "secret-neo4j"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
stringData:
NEO4J_USERNAME: ""
NEO4J_PASSWORD: ""

View File

@ -1,23 +0,0 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Release.Name }}-neo4j
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "service-neo4j"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
ports:
- name: {{ .Release.Name }}-bolt
port: 7687
targetPort: 7687
protocol: TCP
#- name: {{ .Release.Name }}-http
# port: 7474
# targetPort: 7474
selector:
app: {{ .Release.Name }}-neo4j

View File

@ -1,16 +0,0 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: storage-{{ .Release.Name }}-persistent
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "storage-persistent"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
provisioner: {{ .Values.STORAGE.PROVISIONER }}
reclaimPolicy: {{ .Values.STORAGE.RECLAIM_POLICY }}
volumeBindingMode: {{ .Values.STORAGE.VOLUME_BINDING_MODE }}
allowVolumeExpansion: {{ .Values.STORAGE.ALLOW_VOLUME_EXPANSION }}

View File

@ -1,20 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: configmap-{{ .Release.Name }}-webapp
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "configmap-webapp"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
data:
HOST: "0.0.0.0"
PUBLIC_REGISTRATION: "{{ .Values.PUBLIC_REGISTRATION }}"
INVITE_REGISTRATION: "{{ .Values.INVITE_REGISTRATION }}"
CATEGORIES_ACTIVE: "{{ .Values.CATEGORIES_ACTIVE }}"
COOKIE_EXPIRE_TIME: "{{ .Values.COOKIE_EXPIRE_TIME }}"
WEBSOCKETS_URI: "{{ .Values.WEBAPP.WEBSOCKETS_URI }}"
GRAPHQL_URI: "http://{{ .Release.Name }}-backend:4000"

View File

@ -1,44 +0,0 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ .Release.Name }}-webapp
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "deployment-webapp"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
replicas: {{ .Values.WEBAPP.REPLICAS }}
minReadySeconds: {{ .Values.WEBAPP.MIN_READY_SECONDS }}
progressDeadlineSeconds: {{ .Values.WEBAPP.PROGRESS_DEADLINE_SECONDS }}
revisionHistoryLimit: {{ .Values.WEBAPP.REVISIONS_HISTORY_LIMIT }}
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ .Release.Name }}-webapp
template:
metadata:
annotations:
# make sure the pod is redeployed
rollme: {{ randAlphaNum 5 | quote }}
labels:
app: {{ .Release.Name }}-webapp
spec:
containers:
- name: container-{{ .Release.Name }}-webapp
image: "{{ .Values.WEBAPP.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.WEBAPP.DOCKER_IMAGE_PULL_POLICY }}
ports:
- containerPort: 3000
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-webapp
- secretRef:
name: secret-{{ .Release.Name }}-webapp
restartPolicy: {{ .Values.WEBAPP.CONTAINER_RESTART_POLICY }}
terminationGracePeriodSeconds: {{ .Values.WEBAPP.CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS }}

View File

@ -1,36 +0,0 @@
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: ingress-{{ .Release.Name }}-webapp
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "ingress-webapp"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: {{ .Values.LETSENCRYPT.ISSUER }}
nginx.ingress.kubernetes.io/proxy-body-size: {{ .Values.NGINX.PROXY_BODY_SIZE }}
spec:
tls:
- hosts:
{{- range .Values.LETSENCRYPT.DOMAINS }}
- {{ . }}
{{- end }}
secretName: tls
rules:
{{- range .Values.LETSENCRYPT.DOMAINS }}
- host: {{ . }}
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: {{ $.Release.Name }}-webapp
port:
number: 3000
{{- end }}

View File

@ -1,13 +0,0 @@
kind: Secret
apiVersion: v1
metadata:
name: secret-{{ .Release.Name }}-webapp
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "secret-webapp"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
stringData:

View File

@ -1,20 +0,0 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Release.Name }}-webapp
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "service-webapp"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
ports:
- name: {{ .Release.Name }}-http
port: 3000
targetPort: 3000
protocol: TCP
selector:
app: {{ .Release.Name }}-webapp

View File

@ -1,120 +0,0 @@
# please duplicate this template file, rename it to "values.yaml" and fill in your values
# change all the below if needed
PRODUCTION_DB_CLEAN_ALLOW: false # only true for production environments on staging servers
PUBLIC_REGISTRATION: false
INVITE_REGISTRATION: false
COOKIE_EXPIRE_TIME: 730 # days (730 days, two years is the default in main code)
CATEGORIES_ACTIVE: false
BACKEND:
# change all the below if needed
# DOCKER_IMAGE_REPO - change that to your branded docker image
# label is appended based on .Chart.appVersion
DOCKER_IMAGE_REPO: "ocelotsocialnetwork/backend-branded"
CLIENT_URI: "https://staging.ocelot.social"
# create a new one for your network
JWT_SECRET: "b/&&7b78BF&fv/Vd"
MAPBOX_TOKEN: "pk.eyJ1IjoiYnVzZmFrdG9yIiwiYSI6ImNraDNiM3JxcDBhaWQydG1uczhpZWtpOW4ifQ.7TNRTO-o9aK1Y6MyW_Nd4g"
PRIVATE_KEY_PASSPHRASE: "a7dsf78sadg87ad87sfagsadg78"
# ocelot.social mail dummy
EMAIL_DEFAULT_SENDER: "devops@ocelot.social"
SMTP_HOST: "mail.ocelot.social"
SMTP_USERNAME: "devops@ocelot.social"
SMTP_PASSWORD: "devops@ocelot.social"
SMTP_PORT: "587"
SMTP_IGNORE_TLS: 'false'
SMTP_SECURE: 'false' # true for 465, false for other ports
# or
# SMTP_PORT: "465"
# SMTP_IGNORE_TLS: 'true'
# SMTP_SECURE: 'true' # true for 465, false for other ports
# most likely you don't need to change this
MIN_READY_SECONDS: "15"
PROGRESS_DEADLINE_SECONDS: "60"
REVISIONS_HISTORY_LIMIT: "25"
CONTAINER_RESTART_POLICY: "Always"
CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS: "30"
DOCKER_IMAGE_PULL_POLICY: "Always"
STORAGE_UPLOADS: "25Gi"
WEBAPP:
# change all the below if needed
# DOCKER_IMAGE_REPO - change that to your branded docker image
# label is appended based on .Chart.appVersion
DOCKER_IMAGE_REPO: "ocelotsocialnetwork/webapp-branded"
WEBSOCKETS_URI: "wss://staging.ocelot.social/api/graphql"
# Most likely you don't need to change this
REPLICAS: "2"
MIN_READY_SECONDS: "15"
PROGRESS_DEADLINE_SECONDS: "60"
REVISIONS_HISTORY_LIMIT: "25"
CONTAINER_RESTART_POLICY: "Always"
CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS: "30"
DOCKER_IMAGE_PULL_POLICY: "Always"
NEO4J:
# most likely you don't need to change this
REVISIONS_HISTORY_LIMIT: "25"
DOCKER_IMAGE_REPO: "ocelotsocialnetwork/neo4j-community-branded"
DOCKER_IMAGE_PULL_POLICY: "Always"
CONTAINER_RESTART_POLICY: "Always"
CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS: "30"
STORAGE: "5Gi"
# RESOURCE_REQUESTS_MEMORY configures the memory requested for the Neo4j container.
RESOURCE_REQUESTS_MEMORY: "2G"
# RESOURCE_LIMITS_MEMORY configures the memory limit for the Neo4j container.
RESOURCE_LIMITS_MEMORY: "4G"
# required for Neo4j Enterprise version
#ACCEPT_LICENSE_AGREEMENT: "yes"
ACCEPT_LICENSE_AGREEMENT: "no"
AUTH: "none"
#DBMS_CONNECTOR_BOLT_THREAD_POOL_MAX_SIZE: "10000" # hc value
DBMS_CONNECTOR_BOLT_THREAD_POOL_MAX_SIZE: "400" # default value
#DBMS_MEMORY_HEAP_INITIAL_SIZE: "500MB" # HC value
DBMS_MEMORY_HEAP_INITIAL_SIZE: "" # default
#DBMS_MEMORY_HEAP_MAX_SIZE: "500MB" # HC value
DBMS_MEMORY_HEAP_MAX_SIZE: "" # default
#DBMS_MEMORY_PAGECACHE_SIZE: "490M" # HC value
DBMS_MEMORY_PAGECACHE_SIZE: "" # default
#APOC_IMPORT_FILE_ENABLED: "true" # HC value
APOC_IMPORT_FILE_ENABLED: "false" # default
DBMS_SECURITY_PROCEDURES_UNRESTRICTED: "algo.*,apoc.*"
MAINTENANCE:
# change all the below if needed
# DOCKER_IMAGE_REPO - change that to your branded docker image
# label is appended based on .Chart.appVersion
DOCKER_IMAGE_REPO: "ocelotsocialnetwork/maintenance-branded"
# Most likely you don't need to change this
REVISIONS_HISTORY_LIMIT: "25"
CONTAINER_RESTART_POLICY: "Always"
CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS: "30"
DOCKER_IMAGE_PULL_POLICY: "Always"
LETSENCRYPT:
# change all the below if needed
# ISSUER is used by cert-manager to set up certificates with the given provider.
  # change it to "letsencrypt-production" once you are ready to have valid certificates.
  # Be aware that there is an issuing limit with letsencrypt, so a dry run with staging might be wise
ISSUER: "letsencrypt-staging"
EMAIL: "devops@ocelot.social"
DOMAINS:
- "staging.ocelot.social"
- "www.staging.ocelot.social"
NGINX:
# most likely you don't need to change this
PROXY_BODY_SIZE: "10m"
STORAGE:
# change all the below if needed
PROVISIONER: "dobs.csi.digitalocean.com"
# most likely you don't need to change this
RECLAIM_POLICY: "Retain"
VOLUME_BINDING_MODE: "Immediate"
ALLOW_VOLUME_EXPANSION: true

View File

@ -1,45 +0,0 @@
# Maintenance mode
> Despite our best efforts, systems sometimes require downtime for a variety of reasons.
Quote from [here](https://www.nrmitchi.com/2017/11/easy-maintenance-mode-in-kubernetes/)
We use our maintenance mode for manual database backup and restore. We also
bring the database into maintenance mode for manual database migrations.
## Deploy the service
We prepared a sample configuration, so you can simply run:
```sh
# in folder deployment/
$ kubectl apply -f ./ocelot-social/maintenance/
```
This will fire up a maintenance service.
## Bring application into maintenance mode
Now, if you want a controlled downtime and to bring your
application into maintenance mode, you can edit your global ingress server.
E.g. copy the file [`deployment/digital-ocean/https/templates/ingress.template.yaml`](../../digital-ocean/https/templates/ingress.template.yaml) to a new file `deployment/digital-ocean/https/ingress.yaml` and change the following:
```yaml
...
- host: develop-k8s.ocelot.social
http:
paths:
- path: /
backend:
# serviceName: web
serviceName: maintenance
# servicePort: 3000
servicePort: 80
```
Then run `$ kubectl apply -f deployment/digital-ocean/https/ingress.yaml`. If you
want to deactivate the maintenance server, just undo the edit and apply the
configuration again.

View File

@ -1,39 +0,0 @@
# DigitalOcean
As a start, read the [introduction into Kubernetes](https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes) by the folks at DigitalOcean. The following section should enable you to deploy ocelot.social to your Kubernetes cluster.
## Connect to your local cluster
1. Create a cluster at [DigitalOcean](https://www.digitalocean.com/).
2. Download the `***-kubeconfig.yaml` from the Web UI.
3. Move the file to the default location where kubectl expects it to be: `mv ***-kubeconfig.yaml ~/.kube/config`. Alternatively you can set the config on every command: `--kubeconfig ***-kubeconfig.yaml`
4. Now check if you can connect to the cluster and if it's your newly created one by running: `kubectl get nodes`
The output should look about like this:
```sh
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
nifty-driscoll-uu1w Ready <none> 69d v1.13.2
nifty-driscoll-uuiw Ready <none> 69d v1.13.2
nifty-driscoll-uusn Ready <none> 69d v1.13.2
```
If you followed the steps above and can see your nodes, you can continue.
DigitalOcean Kubernetes clusters don't have a graphical interface, so I suggest
setting up the [Kubernetes dashboard](./dashboard/README.md) as a next step.
Configuring [HTTPS](./https/README.md) is a bit tricky and therefore I suggest
doing this as a last step.
## Spaces
We are storing our images in the s3-compatible [DigitalOcean Spaces](https://www.digitalocean.com/docs/spaces/).
We still want to take backups of our images in case something happens to the images in the cloud. See these [instructions](https://www.digitalocean.com/docs/spaces/resources/s3cmd-usage/) about getting set up with `s3cmd` to take a copy of all images in a `Spaces` namespace, i.e. `ocelot-social-uploads`.
After configuring `s3cmd` with your credentials etc., you should be able to make a backup with this command:
```sh
s3cmd get --recursive --skip-existing s3://ocelot-social-uploads
```

View File

@ -1,55 +0,0 @@
# Install Kubernetes Dashboard
The kubernetes dashboard is optional but very helpful for debugging. If you want to install it, you have to do so only **once** per cluster:
```bash
# in folder deployment/digital-ocean/
$ kubectl apply -f dashboard/
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
```
### Login to your dashboard
Proxy the remote kubernetes dashboard to localhost:
```bash
$ kubectl proxy
```
Visit:
[http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/](http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/)
You should see a login screen.
To get your token for the dashboard you can run this command:
```bash
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
```
It should print something like:
```text
Name: admin-user-token-6gl6l
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=admin-user
kubernetes.io/service-account.uid=b16afba9-dfec-11e7-bbb9-901b0e532516
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZnbDZsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiMTZhZmJhOS1kZmVjLTExZTctYmJiOS05MDFiMGU1MzI1MTYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.M70CU3lbu3PP4OjhFms8PVL5pQKj-jj4RNSLA4YmQfTXpPUuxqXjiTf094_Rzr0fgN_IVX6gC4fiNUL5ynx9KU-lkPfk0HnX8scxfJNzypL039mpGt0bbe1IXKSIRaq_9VW59Xz-yBUhycYcKPO9RM2Qa1Ax29nqNVko4vLn1_1wPqJ6XSq3GYI8anTzV8Fku4jasUwjrws6Cn6_sPEGmL54sq5R4Z5afUtv-mItTmqZZdxnkRqcJLlg2Y8WbCPogErbsaCDJoABQ7ppaqHetwfM_0yMun6ABOQbIwwl8pspJhpplKwyo700OSpvTT9zlBsu-b35lzXGBRHzv5g_RA
```
Grab the token from above and paste it into the [login screen](http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/)
When you are logged in, you should see something like:
![Dashboard](./dashboard-screenshot.png)
Feel free to save the login token from above in your password manager. Unlike the `kubeconfig` file, this token does not expire.

View File

@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system

Binary file not shown. (Before: 178 KiB)

View File

@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system

View File

@ -1,126 +0,0 @@
## Create Letsencrypt Issuers and Ingress Services
Copy the configuration templates and change the file according to your needs.
```bash
# in folder deployment/digital-ocean/https/
cp templates/issuer.template.yaml ./issuer.yaml
cp templates/ingress.template.yaml ./ingress.yaml
```
At the very least, **change the email addresses** in `issuer.yaml`. You will surely also want
to _change the domain name_ in `ingress.yaml`.
Once you are done, apply the configuration:
```bash
# in folder deployment/digital-ocean/https/
$ kubectl apply -f .
```
{% hint style="info" %}
CAUTION: It seems that DigitalOcean's behaviour has changed and the load balancer is no longer created automatically.
Creating a load balancer costs money, too. Please refine the following documentation if required.
{% endhint %}
{% tabs %}
{% tab title="Without Load Balancer" %}
A solution without a load balancer can be found [here](../no-loadbalancer/README.md).
{% endtab %}
{% tab title="With DigitalOcean Load Balancer" %}
{% hint style="info" %}
CAUTION: It seems that DigitalOcean's behaviour has changed and the load balancer is no longer created automatically.
Please refine the following documentation if required.
{% endhint %}
Previously, your cluster would by now have had a load balancer assigned with an external IP
address. On DigitalOcean, this is how it should look:
![Screenshot of DigitalOcean dashboard showing external ip address](./ip-address.png)
If the load balancer isn't created automatically, you have to create it yourself on DigitalOcean under Networks.
In case you don't need a DigitalOcean load balancer (which costs money, by the way), have a look at the tab `Without Load Balancer`.
{% endtab %}
{% endtabs %}
Check that the ingress server is working correctly:
```bash
$ curl -kivL -H 'Host: <DOMAIN_NAME>' 'https://<IP_ADDRESS>'
<page HTML>
```
If the response looks good, configure your domain registrar for the new IP address and the domain.
Now let's get a valid HTTPS certificate. According to the tutorial above, check your tls certificate for staging:
```bash
$ kubectl -n ocelot-social describe certificate tls
<
...
Spec:
...
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-staging
...
Events:
<no errors>
>
$ kubectl -n ocelot-social describe secret tls
<
...
Annotations: ...
cert-manager.io/issuer-kind: ClusterIssuer
cert-manager.io/issuer-name: letsencrypt-staging
...
>
```
If everything looks good, update the cluster-issuer of your ingress. Change the annotation `cert-manager.io/cluster-issuer` in your `ingress.yaml` from `letsencrypt-staging` (for testing: it only issues dummy certificates, so Let's Encrypt will not block you for too many request cycles) to `letsencrypt-prod` (for production: it issues real certificates, and too many request cycles can get you blocked by Let's Encrypt for several days).
```bash
# in folder deployment/digital-ocean/https/
$ kubectl apply -f ingress.yaml
```
Take a minute and check whether the certificate has now been reissued by `letsencrypt-prod`, the cluster-issuer for production:
```bash
$ kubectl -n ocelot-social describe certificate tls
<
...
Spec:
...
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-prod
...
Events:
<no errors>
>
$ kubectl -n ocelot-social describe secret tls
<
...
Annotations: ...
cert-manager.io/issuer-kind: ClusterIssuer
cert-manager.io/issuer-name: letsencrypt-prod
...
>
```
In case the certificate is not newly created, delete the former secret to force a refresh:
```bash
$ kubectl -n ocelot-social delete secret tls
```
Now, HTTPS should be configured on your domain. Congrats!
For troubleshooting have a look at the cert-manager's [Troubleshooting](https://cert-manager.io/docs/faq/troubleshooting/) or [Troubleshooting Issuing ACME Certificates](https://cert-manager.io/docs/faq/acme/).

Binary file not shown. (Before: 141 KiB)

View File

@ -1,85 +0,0 @@
# Legacy data migration
This setup is **completely optional** and only required if you have data on a
server which is running our legacy code and you want to import that data. It
will import the uploads folder and migrate a dump of the legacy Mongo database
into our new Neo4J graph database.
## Configure Maintenance-Worker Pod
Create a configmap with the specific connection data of your legacy server:
```bash
$ kubectl create configmap maintenance-worker \
-n ocelot-social \
--from-literal=SSH_USERNAME=someuser \
--from-literal=SSH_HOST=yourhost \
--from-literal=MONGODB_USERNAME=hc-api \
--from-literal=MONGODB_PASSWORD=secretpassword \
--from-literal=MONGODB_AUTH_DB=hc_api \
--from-literal=MONGODB_DATABASE=hc_api \
--from-literal=UPLOADS_DIRECTORY=/var/www/api/uploads
```
Create a secret with your public and private ssh keys. As the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys) points out, you should be careful with your ssh keys. Anyone with access to your cluster will have access to your ssh keys. Better create a new pair with `ssh-keygen` and copy the public key to your legacy server with `ssh-copy-id`:
```bash
$ kubectl create secret generic ssh-keys \
-n ocelot-social \
--from-file=id_rsa=/path/to/.ssh/id_rsa \
--from-file=id_rsa.pub=/path/to/.ssh/id_rsa.pub \
--from-file=known_hosts=/path/to/.ssh/known_hosts
```
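A minimal sketch of creating such a dedicated key pair, as suggested above (file names, user, and host are placeholders matching the configmap example):
```bash
# create a fresh key pair just for the migration
$ ssh-keygen -t rsa -f ./legacy_id_rsa
# authorize it on the legacy server
$ ssh-copy-id -i ./legacy_id_rsa.pub someuser@yourhost
```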
## Deploy a Temporary Maintenance-Worker Pod
Bring the application into maintenance mode.
{% hint style="info" %} TODO: implement maintenance mode {% endhint %}
Then temporarily delete the backend and database deployments:
```bash
$ kubectl -n ocelot-social get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
backend 1/1 1 1 3d11h
neo4j 1/1 1 1 3d11h
webapp 2/2 2 2 73d
$ kubectl -n ocelot-social delete deployment neo4j
deployment.extensions "neo4j" deleted
$ kubectl -n ocelot-social delete deployment backend
deployment.extensions "backend" deleted
```
Deploy one-time develop-maintenance-worker pod:
```bash
# in deployment/legacy-migration/
$ kubectl apply -f maintenance-worker.yaml
pod/develop-maintenance-worker created
```
Import legacy database and uploads:
```bash
$ kubectl -n ocelot-social exec -it develop-maintenance-worker bash
$ import_legacy_db
$ import_legacy_uploads
$ exit
```
Delete the pod when you're done:
```bash
$ kubectl -n ocelot-social delete pod develop-maintenance-worker
```
Oh, and of course you have to get those deleted deployments back. One way of
doing it would be:
```bash
# in folder deployment/
$ kubectl apply -f human-connection/deployment-backend.yaml -f human-connection/deployment-neo4j.yaml
```

View File

@ -1,40 +0,0 @@
---
kind: Pod
apiVersion: v1
metadata:
name: develop-maintenance-worker
namespace: ocelot-social
spec:
containers:
- name: develop-maintenance-worker
image: ocelotsocialnetwork/develop-maintenance-worker:latest
imagePullPolicy: Always
resources:
requests:
memory: "2G"
limits:
memory: "8G"
envFrom:
- configMapRef:
name: maintenance-worker
- configMapRef:
name: configmap
volumeMounts:
- name: secret-volume
readOnly: false
mountPath: /root/.ssh
- name: uploads
mountPath: /uploads
- name: neo4j-data
mountPath: /data/
volumes:
- name: secret-volume
secret:
secretName: ssh-keys
defaultMode: 0400
- name: uploads
persistentVolumeClaim:
claimName: uploads-claim
- name: neo4j-data
persistentVolumeClaim:
claimName: neo4j-data-claim

View File

@ -1,2 +0,0 @@
.ssh/
ssh/

View File

@ -1,21 +0,0 @@
FROM ocelotsocialnetwork/develop-neo4j:latest
ENV NODE_ENV=maintenance
EXPOSE 7687 7474
ENV BUILD_DEPS="gettext" \
RUNTIME_DEPS="libintl"
RUN set -x && \
apk add --update $RUNTIME_DEPS && \
apk add --virtual build_deps $BUILD_DEPS && \
cp /usr/bin/envsubst /usr/local/bin/envsubst && \
apk del build_deps
RUN apk upgrade --update
RUN apk add --no-cache mongodb-tools openssh nodejs yarn rsync
COPY known_hosts /root/.ssh/known_hosts
COPY migration /migration
COPY ./binaries/* /usr/local/bin/

View File

@ -1,6 +0,0 @@
# SSH Access
# SSH_USERNAME='username'
# SSH_HOST='example.org'
# UPLOADS_DIRECTORY=/var/www/api/uploads
OUTPUT_DIRECTORY='/uploads/'

View File

@ -1,2 +0,0 @@
#!/usr/bin/env bash
tail -f /dev/null

View File

@ -1,12 +0,0 @@
#!/usr/bin/env bash
set -e
for var in "SSH_USERNAME" "SSH_HOST" "MONGODB_USERNAME" "MONGODB_PASSWORD" "MONGODB_DATABASE" "MONGODB_AUTH_DB"
do
if [[ -z "${!var}" ]]; then
echo "${var} is undefined"
exit 1
fi
done
/migration/mongo/export.sh
/migration/neo4j/import.sh

View File

@ -1,17 +0,0 @@
#!/usr/bin/env bash
set -e
# import .env config
set -o allexport
source $(dirname "$0")/.env
set +o allexport
for var in "SSH_USERNAME" "SSH_HOST" "UPLOADS_DIRECTORY"
do
if [[ -z "${!var}" ]]; then
echo "${var} is undefined"
exit 1
fi
done
rsync --archive --update --verbose ${SSH_USERNAME}@${SSH_HOST}:${UPLOADS_DIRECTORY}/ ${OUTPUT_DIRECTORY}

View File

@ -1,3 +0,0 @@
|1|GuOYlVEhTowidPs18zj9p5F2j3o=|sDHJYLz9Ftv11oXeGEjs7SpVyg0= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM5N29bI5CeKu1/RBPyM2fwyf7fuajOO+tyhKe1+CC2sZ1XNB5Ff6t6MtCLNRv2mUuvzTbW/HkisDiA5tuXUHOk=
|1|2KP9NV+Q5g2MrtjAeFSVcs8YeOI=|nf3h4wWVwC4xbBS1kzgzE2tBldk= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNhRK6BeIEUxXlS0z/pOfkUkSPfn33g4J1U3L+MyUQYHm+7agT08799ANJhnvELKE1tt4Vx80I9UR81oxzZcy3E=
|1|HonYIRNhKyroUHPKU1HSZw0+Qzs=|5T1btfwFBz2vNSldhqAIfTbfIgQ= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNhRK6BeIEUxXlS0z/pOfkUkSPfn33g4J1U3L+MyUQYHm+7agT08799ANJhnvELKE1tt4Vx80I9UR81oxzZcy3E=

View File

@ -1,17 +0,0 @@
# SSH Access
# SSH_USERNAME='username'
# SSH_HOST='example.org'
# Mongo DB on Remote Machine
# MONGODB_USERNAME='mongouser'
# MONGODB_PASSWORD='mongopassword'
# MONGODB_DATABASE='mongodatabase'
# MONGODB_AUTH_DB='admin'
# Export Settings
# On Windows this resolves to C:\Users\dornhoeschen\AppData\Local\Temp\mongo-export (MinGW)
EXPORT_PATH='/tmp/mongo-export/'
EXPORT_MONGOEXPORT_BIN='mongoexport'
MONGO_EXPORT_SPLIT_SIZE=6000
# On Windows use something like this
# EXPORT_MONGOEXPORT_BIN='C:\Program Files\MongoDB\Server\3.6\bin\mongoexport.exe'

View File

@ -1,53 +0,0 @@
#!/usr/bin/env bash
set -e
# import .env config
set -o allexport
source $(dirname "$0")/.env
set +o allexport
# Export collection function definition
function export_collection () {
"${EXPORT_MONGOEXPORT_BIN}" --db ${MONGODB_DATABASE} --host localhost -d ${MONGODB_DATABASE} --port 27018 --username ${MONGODB_USERNAME} --password ${MONGODB_PASSWORD} --authenticationDatabase ${MONGODB_AUTH_DB} --collection $1 --out "${EXPORT_PATH}$1.json"
mkdir -p ${EXPORT_PATH}splits/$1/
split -l ${MONGO_EXPORT_SPLIT_SIZE} -a 3 ${EXPORT_PATH}$1.json ${EXPORT_PATH}splits/$1/
}
# Export collection with query function definition
function export_collection_query () {
"${EXPORT_MONGOEXPORT_BIN}" --db ${MONGODB_DATABASE} --host localhost -d ${MONGODB_DATABASE} --port 27018 --username ${MONGODB_USERNAME} --password ${MONGODB_PASSWORD} --authenticationDatabase ${MONGODB_AUTH_DB} --collection $1 --out "${EXPORT_PATH}$1_$3.json" --query "$2"
mkdir -p ${EXPORT_PATH}splits/$1_$3/
split -l ${MONGO_EXPORT_SPLIT_SIZE} -a 3 ${EXPORT_PATH}$1_$3.json ${EXPORT_PATH}splits/$1_$3/
}
# Delete old export & ensure directory
rm -rf ${EXPORT_PATH}*
mkdir -p ${EXPORT_PATH}
# Open SSH Tunnel
ssh -4 -M -S my-ctrl-socket -fnNT -L 27018:localhost:27017 -l ${SSH_USERNAME} ${SSH_HOST}
# Export all Data from the Alpha to json and split them up
export_collection "badges"
export_collection "categories"
export_collection "comments"
export_collection_query "contributions" '{"type": "DELETED"}' "DELETED"
export_collection_query "contributions" '{"type": "post"}' "post"
# export_collection_query "contributions" '{"type": "cando"}' "cando"
export_collection "emotions"
# export_collection_query "follows" '{"foreignService": "organizations"}' "organizations"
export_collection_query "follows" '{"foreignService": "users"}' "users"
# export_collection "invites"
# export_collection "organizations"
# export_collection "pages"
# export_collection "projects"
# export_collection "settings"
export_collection "shouts"
# export_collection "status"
export_collection_query "users" '{"isVerified": true }' "verified"
# export_collection "userscandos"
# export_collection "usersettings"
# Close SSH Tunnel
ssh -S my-ctrl-socket -O check -l ${SSH_USERNAME} ${SSH_HOST}
ssh -S my-ctrl-socket -O exit -l ${SSH_USERNAME} ${SSH_HOST}

View File

@ -1,16 +0,0 @@
# Neo4J Settings
# NEO4J_USERNAME='neo4j'
# NEO4J_PASSWORD='letmein'
# Import Settings
# On Windows this resolves to C:\Users\dornhoeschen\AppData\Local\Temp\mongo-export (MinGW)
IMPORT_PATH='/tmp/mongo-export/'
IMPORT_CHUNK_PATH='/tmp/mongo-export/splits/'
IMPORT_CHUNK_PATH_CQL='/tmp/mongo-export/splits/'
# On Windows this path needs to be windows style since the cypher-shell runs native - note the forward slash
# IMPORT_CHUNK_PATH_CQL='C:/Users/dornhoeschen/AppData/Local/Temp/mongo-export/splits/'
IMPORT_CYPHERSHELL_BIN='cypher-shell'
# On Windows use something like this
# IMPORT_CYPHERSHELL_BIN='C:\Program Files\neo4j-community\bin\cypher-shell.bat'

View File

@ -1,52 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[?] image: {
[?] path: { // Path is incorrect in Nitro - is icon the correct name for this field?
[X] type: String,
[X] required: true
},
[ ] alt: { // If we use an image - should we not have an alt?
[ ] type: String,
[ ] required: true
}
},
[?] status: {
[X] type: String,
[X] enum: ['permanent', 'temporary'],
[ ] default: 'permanent', // Default value is missing in Nitro
[X] required: true
},
[?] type: {
[?] type: String, // in nitro this is a defined enum - seems good for now
[X] required: true
},
[X] id: {
[X] type: String,
[X] required: true
},
[?] createdAt: {
[?] type: Date, // Type is modeled as string in Nitro which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
},
[?] updatedAt: {
[?] type: Date, // Type is modeled as string in Nitro which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
}
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as badge
MERGE(b:Badge {id: badge._id["$oid"]})
ON CREATE SET
b.id = badge.key,
b.type = badge.type,
b.icon = replace(badge.image.path, 'https://api-alpha.human-connection.org', ''),
b.status = badge.status,
b.createdAt = badge.createdAt.`$date`,
b.updatedAt = badge.updatedAt.`$date`
;

View File

@ -1 +0,0 @@
MATCH (n:Badge) DETACH DELETE n;

View File

@ -1,129 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[X] title: {
[X] type: String,
[X] required: true
},
[?] slug: {
[X] type: String,
[ ] required: true, // Not required in Nitro
[ ] unique: true // Unique value is not enforced in Nitro?
},
[?] icon: { // Nitro adds required: true
[X] type: String,
[ ] unique: true // Unique value is not enforced in Nitro?
},
[?] createdAt: {
[?] type: Date, // Type is modeled as string in Nitro which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
},
[?] updatedAt: {
[?] type: Date, // Type is modeled as string in Nitro which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
}
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as category
MERGE(c:Category {id: category._id["$oid"]})
ON CREATE SET
c.name = category.title,
c.slug = category.slug,
c.icon = category.icon,
c.createdAt = category.createdAt.`$date`,
c.updatedAt = category.updatedAt.`$date`
;
// Transform icon names
MATCH (c:Category)
WHERE (c.icon = "categories-justforfun")
SET c.icon = 'smile'
SET c.slug = 'just-for-fun'
;
MATCH (c:Category)
WHERE (c.icon = "categories-luck")
SET c.icon = 'heart-o'
SET c.slug = 'happiness-values'
;
MATCH (c:Category)
WHERE (c.icon = "categories-health")
SET c.icon = 'medkit'
;
MATCH (c:Category)
WHERE (c.icon = "categories-environment")
SET c.icon = 'tree'
;
MATCH (c:Category)
WHERE (c.icon = "categories-animal-justice")
SET c.icon = 'paw'
SET c.slug = 'animal-protection'
;
MATCH (c:Category)
WHERE (c.icon = "categories-human-rights")
SET c.icon = 'balance-scale'
SET c.slug = 'human-rights-justice'
;
MATCH (c:Category)
WHERE (c.icon = "categories-education")
SET c.icon = 'graduation-cap'
;
MATCH (c:Category)
WHERE (c.icon = "categories-cooperation")
SET c.icon = 'users'
;
MATCH (c:Category)
WHERE (c.icon = "categories-politics")
SET c.icon = 'university'
;
MATCH (c:Category)
WHERE (c.icon = "categories-economy")
SET c.icon = 'money'
;
MATCH (c:Category)
WHERE (c.icon = "categories-technology")
SET c.icon = 'flash'
;
MATCH (c:Category)
WHERE (c.icon = "categories-internet")
SET c.icon = 'mouse-pointer'
SET c.slug = 'it-internet-data-privacy'
;
MATCH (c:Category)
WHERE (c.icon = "categories-art")
SET c.icon = 'paint-brush'
;
MATCH (c:Category)
WHERE (c.icon = "categories-freedom-of-speech")
SET c.icon = 'bullhorn'
SET c.slug = 'freedom-of-speech'
;
MATCH (c:Category)
WHERE (c.icon = "categories-sustainability")
SET c.icon = 'shopping-cart'
;
MATCH (c:Category)
WHERE (c.icon = "categories-peace")
SET c.icon = 'angellist'
SET c.slug = 'global-peace-nonviolence'
;

View File

@ -1 +0,0 @@
MATCH (n:Category) DETACH DELETE n;

View File

@ -1,67 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[?] userId: {
[X] type: String,
[ ] required: true, // Not required in Nitro
[-] index: true
},
[?] contributionId: {
[X] type: String,
[ ] required: true, // Not required in Nitro
[-] index: true
},
[X] content: {
[X] type: String,
[X] required: true
},
[?] contentExcerpt: { // Generated from content
[X] type: String,
[ ] required: true // Not required in Nitro
},
[ ] hasMore: { type: Boolean },
[ ] upvotes: {
[ ] type: Array,
[ ] default: []
},
[ ] upvoteCount: {
[ ] type: Number,
[ ] default: 0
},
[?] deleted: {
[X] type: Boolean,
[ ] default: false, // Default value is missing in Nitro
[-] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as comment
MERGE (c:Comment {id: comment._id["$oid"]})
ON CREATE SET
c.content = comment.content,
c.contentExcerpt = comment.contentExcerpt,
c.deleted = comment.deleted,
c.createdAt = comment.createdAt.`$date`,
c.updatedAt = comment.updatedAt.`$date`,
c.disabled = false
WITH c, comment, comment.contributionId as postId
MATCH (post:Post {id: postId})
WITH c, post, comment.userId as userId
MATCH (author:User {id: userId})
MERGE (c)-[:COMMENTS]->(post)
MERGE (author)-[:WROTE]->(c)
;
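// Note: the MATCH clauses above look up existing :Post and :User nodes rather than MERGE them,
// so the Comment node is still created but its COMMENTS/WROTE relationships are skipped when the
// post or author is missing; the import script therefore runs users and contributions before comments.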


@@ -1 +0,0 @@
MATCH (n:Comment) DETACH DELETE n;


@@ -1,156 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
[?] { // Modeled incorrectly as Post
[?] userId: {
[X] type: String,
[ ] required: true, // Not required in Nitro
[-] index: true
},
[ ] organizationId: {
[ ] type: String,
[-] index: true
},
[X] categoryIds: {
[X] type: Array,
[-] index: true
},
[X] title: {
[X] type: String,
[X] required: true
},
[?] slug: { // Generated from title
[X] type: String,
[ ] required: true, // Not required in Nitro
[?] unique: true, // Unique value is not enforced in Nitro?
[-] index: true
},
[ ] type: { // db.getCollection('contributions').distinct('type') -> 'DELETED', 'cando', 'post'
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] cando: {
[ ] difficulty: {
[ ] type: String,
[ ] enum: ['easy', 'medium', 'hard']
},
[ ] reasonTitle: { type: String },
[ ] reason: { type: String }
},
[X] content: {
[X] type: String,
[X] required: true
},
[?] contentExcerpt: { // Generated from content
[X] type: String,
[?] required: true // Not required in Nitro
},
[ ] hasMore: { type: Boolean },
[X] teaserImg: { type: String },
[ ] language: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] shoutCount: {
[ ] type: Number,
[ ] default: 0,
[-] index: true
},
[ ] meta: {
[ ] hasVideo: {
[ ] type: Boolean,
[ ] default: false
},
[ ] embedds: {
[ ] type: Object,
[ ] default: {}
}
},
[?] visibility: {
[X] type: String,
[X] enum: ['public', 'friends', 'private'],
[ ] default: 'public', // Default value is missing in Nitro
[-] index: true
},
[?] isEnabled: {
[X] type: Boolean,
[ ] default: true, // Default value is missing in Nitro
[-] index: true
},
[?] tags: { type: Array }, // ensure this is working properly
[ ] emotions: {
[ ] type: Object,
[-] index: true,
[ ] default: {
[ ] angry: {
[ ] count: 0,
[ ] percent: 0
},
[ ] cry: {
[ ] count: 0,
[ ] percent: 0
},
[ ] surprised: {
[ ] count: 0,
[ ] percent: 0
},
[ ] happy: {
[ ] count: 0,
[ ] percent: 0
},
[ ] funny: {
[ ] count: 0,
[ ] percent: 0
}
}
},
[?] deleted: { // This field is not always present in the alpha data
[?] type: Boolean,
[ ] default: false, // Default value is missing in Nitro
[-] index: true
},
[?] createdAt: {
[?] type: Date, // Type is modeled as string in Nitro, which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
},
[?] updatedAt: {
[?] type: Date, // Type is modeled as string in Nitro, which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as post
MERGE (p:Post {id: post._id["$oid"]})
ON CREATE SET
p.title = post.title,
p.slug = post.slug,
p.image = replace(post.teaserImg, 'https://api-alpha.human-connection.org', ''),
p.content = post.content,
p.contentExcerpt = post.contentExcerpt,
p.visibility = toLower(post.visibility),
p.createdAt = post.createdAt.`$date`,
p.updatedAt = post.updatedAt.`$date`,
p.deleted = COALESCE(post.deleted, false),
p.disabled = COALESCE(NOT post.isEnabled, false)
WITH p, post
MATCH (u:User {id: post.userId})
MERGE (u)-[:WROTE]->(p)
WITH p, post, post.categoryIds as categoryIds
UNWIND categoryIds AS categoryId
MATCH (c:Category {id: categoryId})
MERGE (p)-[:CATEGORIZED]->(c)
WITH p, post.tags AS tags
UNWIND tags AS tag
WITH p, apoc.text.replace(tag, '[^\\p{L}0-9]', '') as tagNoSpacesAllowed
CALL apoc.when(tagNoSpacesAllowed =~ '^((\\p{L}+[\\p{L}0-9]*)|([0-9]+\\p{L}+[\\p{L}0-9]*))$', 'RETURN tagNoSpacesAllowed', '', {tagNoSpacesAllowed: tagNoSpacesAllowed})
YIELD value as validated
WHERE validated.tagNoSpacesAllowed IS NOT NULL
MERGE (t:Tag { id: validated.tagNoSpacesAllowed, disabled: false, deleted: false })
MERGE (p)-[:TAGGED]->(t)
;
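// Illustrative inputs for the tag validation above (derived from the regexes, not from the data):
// 'Umwelt' and 'Agenda2030' pass; '2030Agenda' passes via the digits-then-letters alternative;
// '2030' (digits only) is rejected; 'Human Connection' is first stripped to 'HumanConnection', which passes.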


@@ -1,2 +0,0 @@
MATCH (n:Post) DETACH DELETE n;
MATCH (n:Tag) DETACH DELETE n;


@@ -1 +0,0 @@
MATCH (n) DETACH DELETE n;


@@ -1 +0,0 @@
MATCH (u:User)-[e:EMOTED]->(c:Post) DETACH DELETE e;


@@ -1,58 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[X] userId: {
[X] type: String,
[X] required: true,
[-] index: true
},
[X] contributionId: {
[X] type: String,
[X] required: true,
[-] index: true
},
[?] rated: {
[X] type: String,
[ ] required: true,
[?] enum: ['funny', 'happy', 'surprised', 'cry', 'angry']
},
[X] createdAt: {
[X] type: Date,
[X] default: Date.now
},
[X] updatedAt: {
[X] type: Date,
[X] default: Date.now
},
[-] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as emotion
MATCH (u:User {id: emotion.userId}),
(c:Post {id: emotion.contributionId})
MERGE (u)-[e:EMOTED {
id: emotion._id["$oid"],
emotion: emotion.rated,
createdAt: datetime(emotion.createdAt.`$date`),
updatedAt: datetime(emotion.updatedAt.`$date`)
}]->(c)
RETURN e;
/*
// Queries
// user sets an emotion:
// MERGE (u)-[e:EMOTED {id: ..., emotion: "funny", createdAt: ..., updatedAt: ...}]->(c)
// user removes an emotion:
// MATCH (u)-[e:EMOTED]->(c) DELETE e
// distribution of `emotion` values for one post
// MATCH (u:User)-[e:EMOTED]->(c:Post {id: "5a70bbc8508f5b000b443b1a"}) RETURN e.emotion,COUNT(e.emotion)
// distribution of `emotion` values for one user (advanced - "what's the primary emotion used by the user?")
// MATCH (u:User{id:"5a663b1ac64291000bf302a1"})-[e:EMOTED]->(c:Post) RETURN e.emotion,COUNT(e.emotion)
// distribution of `emotion` values for all posts created by one user (advanced - "how do others react to my contributions?")
// MATCH (u:User)-[e:EMOTED]->(c:Post)<-[w:WROTE]-(a:User{id:"5a663b1ac64291000bf302a1"}) RETURN e.emotion,COUNT(e.emotion)
// if we can filter the above on a variable timescale, that would be great (should be possible on the createdAt and updatedAt fields)
*/
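// A sketch for the timescale question above (hypothetical; it assumes createdAt was written with
// datetime(), as in the EMOTED import, and reuses the placeholder post id from the queries):
// MATCH (u:User)-[e:EMOTED]->(c:Post {id: "5a70bbc8508f5b000b443b1a"})
// WHERE e.createdAt >= datetime("2019-01-01T00:00:00Z") AND e.createdAt < datetime("2019-02-01T00:00:00Z")
// RETURN e.emotion, COUNT(e.emotion);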


@@ -1 +0,0 @@
MATCH (u1:User)-[f:FOLLOWS]->(u2:User) DETACH DELETE f;


@@ -1,36 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[?] userId: {
[-] type: String,
[ ] required: true,
[-] index: true
},
[?] foreignId: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[?] foreignService: { // db.getCollection('follows').distinct('foreignService') returns 'organizations' and 'users'
[ ] type: String,
[ ] required: true,
[ ] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
index:
[?] { userId: 1, foreignId: 1, foreignService: 1 },{ unique: true } // is the unique constraint modeled?
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as follow
MATCH (u1:User {id: follow.userId}), (u2:User {id: follow.foreignId})
MERGE (u1)-[:FOLLOWS]->(u2)
;
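// Note: only user-to-user follows are imported here; rows with foreignService = 'organizations'
// have a foreignId that matches no :User node, so they drop out of the MATCH. The import script
// also only feeds in the follows_users split.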


@@ -1,108 +0,0 @@
#!/usr/bin/env bash
set -e
# import .env config
set -o allexport
source "$(dirname "$0")/.env"
set +o allexport
# Delete collection function definition
function delete_collection () {
# Delete from Database
echo "Delete $2"
"${IMPORT_CYPHERSHELL_BIN}" < $(dirname "$0")/$1/delete.cql > /dev/null
# Delete index file
rm -f "${IMPORT_PATH}splits/$2.index"
}
# Import collection function definition
function import_collection () {
# index file of those chunks we have already imported
INDEX_FILE="${IMPORT_PATH}splits/$1.index"
# load index file
if [ -f "$INDEX_FILE" ]; then
readarray -t IMPORT_INDEX <$INDEX_FILE
else
declare -a IMPORT_INDEX
fi
# for each chunk import data
for chunk in "${IMPORT_PATH}splits/$1"/*
do
CHUNK_FILE_NAME=$(basename "${chunk}")
# does the index not contain the chunk file name?
if [[ ! " ${IMPORT_INDEX[@]} " =~ " ${CHUNK_FILE_NAME} " ]]; then
# calculate the path of the chunk
export IMPORT_CHUNK_PATH_CQL_FILE="${IMPORT_CHUNK_PATH_CQL}$1/${CHUNK_FILE_NAME}"
# load the neo4j command and replace file variable with actual path
NEO4J_COMMAND="$(envsubst '${IMPORT_CHUNK_PATH_CQL_FILE}' < "$(dirname "$0")/$2")"
# run the import of the chunk
echo "Import $1 ${CHUNK_FILE_NAME} (${chunk})"
echo "${NEO4J_COMMAND}" | "${IMPORT_CYPHERSHELL_BIN}" > /dev/null
# add file to array and file
IMPORT_INDEX+=("${CHUNK_FILE_NAME}")
echo "${CHUNK_FILE_NAME}" >> ${INDEX_FILE}
else
echo "Skipping $1 ${CHUNK_FILE_NAME} (${chunk})"
fi
done
}
# Time variable
SECONDS=0
# Delete all Neo4J Database content
echo "Deleting Database Contents"
delete_collection "badges" "badges"
delete_collection "categories" "categories"
delete_collection "users" "users"
delete_collection "follows" "follows_users"
delete_collection "contributions" "contributions_post"
delete_collection "contributions" "contributions_cando"
delete_collection "shouts" "shouts"
delete_collection "comments" "comments"
delete_collection "emotions" "emotions"
#delete_collection "invites"
#delete_collection "notifications"
#delete_collection "organizations"
#delete_collection "pages"
#delete_collection "projects"
#delete_collection "settings"
#delete_collection "status"
#delete_collection "systemnotifications"
#delete_collection "userscandos"
#delete_collection "usersettings"
echo "DONE"
# Import Data
echo "Start Importing Data"
import_collection "badges" "badges/badges.cql"
import_collection "categories" "categories/categories.cql"
import_collection "users_verified" "users/users.cql"
import_collection "follows_users" "follows/follows.cql"
#import_collection "follows_organizations" "follows/follows.cql"
import_collection "contributions_post" "contributions/contributions.cql"
#import_collection "contributions_cando" "contributions/contributions.cql"
#import_collection "contributions_DELETED" "contributions/contributions.cql"
import_collection "shouts" "shouts/shouts.cql"
import_collection "comments" "comments/comments.cql"
import_collection "emotions" "emotions/emotions.cql"
# import_collection "invites"
# import_collection "notifications"
# import_collection "organizations"
# import_collection "pages"
# import_collection "systemnotifications"
# import_collection "userscandos"
# import_collection "usersettings"
# contains only dummy data
# import_collection "projects"
# contains only alpha-specific data
# import_collection "status"
# import_collection "settings"
echo "DONE"
echo "Time elapsed: $SECONDS seconds"


@@ -1,39 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] email: {
[ ] type: String,
[ ] required: true,
[-] index: true,
[ ] unique: true
},
[ ] code: {
[ ] type: String,
[-] index: true,
[ ] required: true
},
[ ] role: {
[ ] type: String,
[ ] enum: ['admin', 'moderator', 'manager', 'editor', 'user'],
[ ] default: 'user'
},
[ ] invitedByUserId: { type: String },
[ ] language: { type: String },
[ ] badgeIds: [],
[ ] wasUsed: {
[ ] type: Boolean,
[-] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as invite;


@@ -1,48 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] userId: { // User this notification is sent to
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] type: {
[ ] type: String,
[ ] required: true,
[ ] enum: ['comment','comment-mention','contribution-mention','following-contribution']
},
[ ] relatedUserId: {
[ ] type: String,
[-] index: true
},
[ ] relatedContributionId: {
[ ] type: String,
[-] index: true
},
[ ] relatedOrganizationId: {
[ ] type: String,
[-] index: true
},
[ ] relatedCommentId: {type: String },
[ ] unseen: {
[ ] type: Boolean,
[ ] default: true,
[-] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as notification;


@@ -1,137 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] name: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] slug: {
[ ] type: String,
[ ] required: true,
[ ] unique: true,
[-] index: true
},
[ ] followersCounts: {
[ ] users: {
[ ] type: Number,
[ ] default: 0
},
[ ] organizations: {
[ ] type: Number,
[ ] default: 0
},
[ ] projects: {
[ ] type: Number,
[ ] default: 0
}
},
[ ] followingCounts: {
[ ] users: {
[ ] type: Number,
[ ] default: 0
},
[ ] organizations: {
[ ] type: Number,
[ ] default: 0
},
[ ] projects: {
[ ] type: Number,
[ ] default: 0
}
},
[ ] categoryIds: {
[ ] type: Array,
[ ] required: true,
[-] index: true
},
[ ] logo: { type: String },
[ ] coverImg: { type: String },
[ ] userId: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] description: {
[ ] type: String,
[ ] required: true
},
[ ] descriptionExcerpt: { type: String }, // will be generated automatically
[ ] publicEmail: { type: String },
[ ] url: { type: String },
[ ] type: {
[ ] type: String,
[-] index: true,
[ ] enum: ['ngo', 'npo', 'goodpurpose', 'ev', 'eva']
},
[ ] language: {
[ ] type: String,
[ ] required: true,
[ ] default: 'de',
[-] index: true
},
[ ] addresses: {
[ ] type: [{
[ ] street: {
[ ] type: String,
[ ] required: true
},
[ ] zipCode: {
[ ] type: String,
[ ] required: true
},
[ ] city: {
[ ] type: String,
[ ] required: true
},
[ ] country: {
[ ] type: String,
[ ] required: true
},
[ ] lat: {
[ ] type: Number,
[ ] required: true
},
[ ] lng: {
[ ] type: Number,
[ ] required: true
}
}],
[ ] default: []
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] isEnabled: {
[ ] type: Boolean,
[ ] default: false,
[-] index: true
},
[ ] reviewedBy: {
[ ] type: String,
[ ] default: null,
[-] index: true
},
[ ] tags: {
[ ] type: Array,
[-] index: true
},
[ ] deleted: {
[ ] type: Boolean,
[ ] default: false,
[-] index: true
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as organisation;


@@ -1,55 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] title: {
[ ] type: String,
[ ] required: true
},
[ ] slug: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] type: {
[ ] type: String,
[ ] required: true,
[ ] default: 'page'
},
[ ] key: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] content: {
[ ] type: String,
[ ] required: true
},
[ ] language: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] active: {
[ ] type: Boolean,
[ ] default: true,
[-] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
index:
[ ] { slug: 1, language: 1 },{ unique: true }
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as page;


@@ -1,44 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] name: {
[ ] type: String,
[ ] required: true
},
[ ] slug: { type: String },
[ ] followerIds: [],
[ ] categoryIds: { type: Array },
[ ] logo: { type: String },
[ ] userId: {
[ ] type: String,
[ ] required: true
},
[ ] description: {
[ ] type: String,
[ ] required: true
},
[ ] content: {
[ ] type: String,
[ ] required: true
},
[ ] addresses: {
[ ] type: Array,
[ ] default: []
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as project;

Some files were not shown because too many files have changed in this diff.