delete deployment folder

This commit is contained in:
Ulf Gebhardt 2023-04-20 13:34:42 +02:00
parent 246c5dc201
commit 2f03303773
Signed by: ulfgebhardt
GPG Key ID: DA6B843E748679C9
114 changed files with 0 additions and 3975 deletions

View File

@ -1,25 +0,0 @@
# Minikube
There are many Kubernetes providers, but if you're just getting started, Minikube is a tool that you can use to get your feet wet.
After you have [installed Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/),
open your Minikube dashboard:
```text
$ minikube dashboard
```
This will give you an overview. Some of the steps below need some timing to make resources available to other dependent deployments. Keeping an eye on the dashboard is a great way to check that.
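If you prefer the terminal over the dashboard, a quick way to watch the pods come up — a sketch, assuming the `ocelotsocialnetwork` namespace used in the examples below:
```bash
# watch pods until all of them are Running and ready
$ kubectl get pods --namespace=ocelotsocialnetwork --watch
```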
Follow the installation instruction for [Kubernetes with Helm](./kubernetes/README.md).
If all the pods and services have settled and everything looks green in your
minikube dashboard, expose the services you want on your host system.
For example:
```text
$ minikube service webapp --namespace=ocelotsocialnetwork
# optionally
$ minikube service backend --namespace=ocelotsocialnetwork
```

View File

@ -1,23 +0,0 @@
# Deployment
Before you start the deployment, you have to do some preparations.
## Deployment Preparations
Since all deployment methods described here depend on [Docker](https://docker.com) and [DockerHub](https://hub.docker.com), you need to create your own organisation on DockerHub and put its name in the [package.json](/package.json) file as your `dockerOrganisation`.
Read more details in the [main README](/README.md) under [Usage](/README.md#usage).
## Deployment Methods
You have the following options for a deployment:
- [Kubernetes with Helm](./kubernetes/README.md)
## After Deployment
After the first deployment of the new network on your server, the database is initialized with the default administrator:
- E-mail: admin@example.org
- Password: 1234
***ATTENTION:*** When you are logged in for the first time, please change your (the admin's) e-mail to an existing one and change your password to a secure one!!!

View File

@ -1,3 +0,0 @@
/dns.values.yaml
/nginx.values.yaml
/values.yaml

View File

@ -1,305 +0,0 @@
# Kubernetes Backup Of Ocelot.Social
One of the most important tasks in managing a running [ocelot.social](https://github.com/Ocelot-Social-Community/Ocelot-Social) network is backing up the data, e.g. the Neo4j database and the stored image files.
## Manual Offline Backup
To prepare, [kubectl](https://kubernetes.io/docs/tasks/tools/) must be installed and ready to use so that you have access to Kubernetes on your server.
Check if the correct context is used by running the following commands:
```bash
# check context and set the correct one
$ kubectl config get-contexts
# if the wrong context is chosen, switch to the correct one
$ kubectl config use-context <your-context>
# optionally, check if all pods are running well
$ kubectl -n default get pods -o wide
```
The very first step is to put the website into **maintenance mode**.
### Set Maintenance Mode
There are two ways to put the network into maintenance mode:
- via Kubernetes Dashboard
- via `kubectl`
#### Maintenance Mode Via Kubernetes Dashboard
In the Kubernetes Dashboard, you can select `Ingresses` from the left side menu under `Service`.
After that, in the list that appears, you will find the entry `ingress-ocelot-webapp`, which has three dots on the right, where you can click to edit the entry.
You can scroll to the end of the YAML file, where you will find one or more `host` entries under `rules`, one for each domain of the network.
In all entries, change the value of the `serviceName` entry from ***ocelot-webapp*** to `ocelot-maintenance` and the value of the `servicePort` entry from ***3000*** to `80`.
First, check if your website is still online.
After you click `Update`, the new settings will be applied and you will find your website in maintenance mode.
#### Maintenance Mode Via `kubectl`
To put the network into maintenance mode, run the following commands in the terminal:
```bash
# list ingresses
$ kubectl get ingress -n default
# edit ingress
$ kubectl -n default edit ingress ingress-ocelot-webapp
```
Change the content of the YAML file for all domains to:
```yaml
spec:
rules:
- host: network-domain.social
http:
paths:
- backend:
# serviceName: ocelot-webapp
# servicePort: 3000
serviceName: ocelot-maintenance
servicePort: 80
```
First, check if your website is still online.
After you save the file, the new settings will be applied and you will find your website in maintenance mode.
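The same switch can be done non-interactively; a hedged sketch using `kubectl patch`, assuming the ingress still uses the legacy `serviceName`/`servicePort` fields shown above and that the rule to change is the first one:
```bash
# point the first rule's backend at the maintenance service (sketch)
$ kubectl -n default patch ingress ingress-ocelot-webapp --type=json -p='[
  {"op": "replace", "path": "/spec/rules/0/http/paths/0/backend/serviceName", "value": "ocelot-maintenance"},
  {"op": "replace", "path": "/spec/rules/0/http/paths/0/backend/servicePort", "value": 80}
]'
# repeat with /spec/rules/1/... for every additional domain
```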
### Neo4j Database Offline Backup
Before we can back up the database, we need to put it into **sleep mode**.
#### Set Neo4j To Sleep Mode
Again, there are two ways to put Neo4j into sleep mode:
- via Kubernetes Dashboard
- via `kubectl`
##### Sleep Mode Via Kubernetes Dashboard
In the Kubernetes Dashboard, you can select `Deployments` from the left side menu under `Workloads`.
After that, in the list that appears, you will find the entry `ocelot-neo4j`, which has three dots on the right, where you can click to edit the entry.
Scroll to the end of the YAML file where you will find the `spec.template.spec.containers` entry. Here you can insert the `command` entry directly after `imagePullPolicy` on a new line.
```yaml
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
command: ["tail", "-f", "/dev/null"]
```
After clicking `Update`, the new settings will be applied and you should check in the `Pods` menu item on the left side if the `ocelot-neo4j-<ID>` pod restarts.
##### Sleep Mode Via `kubectl`
To put Neo4j into sleep mode, run the following commands in the terminal:
```bash
# list deployments
$ kubectl get deployments -n default
# edit deployment
$ kubectl -n default edit deployment ocelot-neo4j
```
Scroll to the `spec.template.spec.containers` entry. Here you can insert the `command` entry directly after `imagePullPolicy` on a new line.
```yaml
image: <network-DockerHub-name>/neo4j-community-branded:latest
imagePullPolicy: Always
command: ["tail", "-f", "/dev/null"]
```
After saving and closing the editor, the new settings will be applied and you should check if the `ocelot-neo4j-<ID>` pod restarts.
Use this command:
```bash
# check if the old pod restarts
$ kubectl -n default get pods -o wide
```
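The same edit can also be scripted; a sketch using `kubectl patch`, assuming the Neo4j container is the first (index 0) in the pod spec:
```bash
# override the container command so Neo4j only idles (sketch)
$ kubectl -n default patch deployment ocelot-neo4j --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/command", "value": ["tail", "-f", "/dev/null"]}
]'
```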
#### Generate Offline Backup
The offline backup is generated via `kubectl`:
```bash
# check for the Neo4j pod
$ kubectl -n default get pods -o wide
# ls: see which backup dumps are already there
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') -- ls
# bash: enter bash of Neo4j
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') -- bash
# generate Dump
neo4j% neo4j-admin dump --to=/var/lib/neo4j/$(date +%F)-neo4j-dump
# exit bash
neo4j% exit
# ls: see if the new backup dump is there
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') -- ls
```
Let's copy the dump backup:
```bash
# copy dump directly onto the backup volume
$ kubectl cp default/$(kubectl -n default get pods | grep ocelot-neo4j |awk '{ print $1 }'):/var/lib/neo4j/$(date +%F)-neo4j-dump /Volumes/<volume-name>/$(date +%F)-neo4j-dump
```
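For completeness, restoring such a dump is the reverse procedure — a sketch, assuming Neo4j is again in sleep mode and the `neo4j-admin` version matches the one that created the dump; `<date>` stands for the date of the dump you want to restore:
```bash
# copy the dump from the backup volume back into the pod
$ kubectl cp /Volumes/<volume-name>/<date>-neo4j-dump default/$(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }'):/var/lib/neo4j/<date>-neo4j-dump
# load the dump, overwriting the existing database
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-neo4j | awk '{ print $1 }') -- neo4j-admin load --from=/var/lib/neo4j/<date>-neo4j-dump --force
```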
#### Remove Sleep Mode From Neo4j
Again, there are two ways to put Neo4j back into working mode:
- via Kubernetes Dashboard
- via `kubectl`
##### Remove Sleep Mode Via Kubernetes Dashboard
In the Kubernetes Dashboard, you can select `Deployments` from the left side menu under `Workloads`.
After that, in the list that appears, you will find the entry `ocelot-neo4j`, which has three dots on the right, where you can click to edit the entry.
Scroll to the `spec.template.spec.containers.command` entry and remove the whole `command` entry, so that this:
```yaml
containers:
- name: container-ocelot-neo4j
image: 'senderfm/neo4j-community-branded:latest'
command:
- tail
- '-f'
- /dev/null
ports:
- containerPort: 7687
protocol: TCP
```
becomes:
```yaml
containers:
- name: container-ocelot-neo4j
image: 'senderfm/neo4j-community-branded:latest'
ports:
- containerPort: 7687
protocol: TCP
```
After clicking `Update`, the new settings will be applied and you should check in the `Pods` menu item on the left side if the `ocelot-neo4j-<ID>` pod restarts.
##### Remove Sleep Mode Via `kubectl`
To put Neo4j into working mode, run the following commands in the terminal:
```bash
# list deployments
$ kubectl get deployments -n default
# edit deployment
$ kubectl -n default edit deployment ocelot-neo4j
```
Scroll to the `spec.template.spec.containers.command` entry and remove the whole `command` entry, so that this:
```yaml
spec:
containers:
- command:
- tail
- -f
- /dev/null
envFrom:
- configMapRef:
name: configmap-ocelot-neo4j
```
becomes:
```yaml
spec:
containers:
- envFrom:
- configMapRef:
name: configmap-ocelot-neo4j
```
After saving and closing the editor, the new settings will be applied and you should check if the `ocelot-neo4j-<ID>` pod restarts.
Use this command:
```bash
# check if the old pod restarts
$ kubectl -n default get pods -o wide
```
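As a one-liner, removing the override again — a sketch, with the same container-index assumption as above:
```bash
# remove the command override so the regular entrypoint runs again (sketch)
$ kubectl -n default patch deployment ocelot-neo4j --type=json -p='[
  {"op": "remove", "path": "/spec/template/spec/containers/0/command"}
]'
```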
### Backend Backup
To back up the images from the backend volume, run these commands:
```bash
# ls: backend/public/uploads
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-backend | awk '{ print $1 }') -- ls public/uploads
# copy all images directly from uploads to the backup volume
$ kubectl cp default/$(kubectl -n default get pods | grep ocelot-backend |awk '{ print $1 }'):/app/public/uploads /Volumes/<volume-name>/$(date +%F)-public-uploads
```
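Restoring the uploads later is the mirrored `kubectl cp` — a sketch, assuming the backend pod is running and the folder layout is unchanged; `<date>` is the date of the backup to restore:
```bash
# copy a backed-up uploads folder back into the backend pod (sketch)
$ kubectl cp /Volumes/<volume-name>/<date>-public-uploads default/$(kubectl -n default get pods | grep ocelot-backend | awk '{ print $1 }'):/app/public/uploads
```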
### Remove Maintenance Mode
There are two ways to put the network into working mode:
- via Kubernetes Dashboard
- via `kubectl`
#### Remove Maintenance Mode Via Kubernetes Dashboard
In the Kubernetes Dashboard, you can select `Ingresses` from the left side menu under `Service`.
After that, in the list that appears, you will find the entry `ingress-ocelot-webapp`, which has three dots on the right, where you can click to edit the entry.
You can scroll to the end of the YAML file, where you will find one or more `host` entries under `rules`, one for each domain of the network.
In all entries, change the value of the `serviceName` entry from ***ocelot-maintenance*** to `ocelot-webapp` and the value of the `servicePort` entry from ***80*** to `3000`.
First, check if your website is still in maintenance mode.
After you click `Update`, the new settings will be applied and you will find your website online again.
#### Remove Maintenance Mode Via `kubectl`
To put the network into working mode, run the following commands in the terminal:
```bash
# list ingresses
$ kubectl get ingress -n default
# edit ingress
$ kubectl -n default edit ingress ingress-ocelot-webapp
```
Change the content of the YAML file for all domains to:
```yaml
spec:
rules:
- host: network-domain.social
http:
paths:
- backend:
serviceName: ocelot-webapp
servicePort: 3000
# serviceName: ocelot-maintenance
# servicePort: 80
```
First, check if your website is still in maintenance mode.
After you save the file, the new settings will be applied and you will find your website online again.
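And the corresponding `kubectl patch` one-liner — same assumptions as in the maintenance-mode section, just switching back:
```bash
# point the first rule's backend back at the webapp (sketch)
$ kubectl -n default patch ingress ingress-ocelot-webapp --type=json -p='[
  {"op": "replace", "path": "/spec/rules/0/http/paths/0/backend/serviceName", "value": "ocelot-webapp"},
  {"op": "replace", "path": "/spec/rules/0/http/paths/0/backend/servicePort", "value": 3000}
]'
```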
For reference, see also the former Human Connection documentation on creating a backup in Kubernetes: <https://docs.human-connection.org/human-connection/deployment/volumes/neo4j-offline-backup#create-a-backup-in-kubernetes>

View File

@ -1,39 +0,0 @@
type: application
apiVersion: v2
name: ocelot-social
version: "1.0.0"
# The appVersion defines which docker image is pulled.
# Having it set to latest will pull the latest build on dockerhub.
# You are free to define a specific version here, though.
# e.g. appVersion: "latest" or "1.0.2-3-ocelot.social1.0.2-79"
# Be aware that this requires all your apps to have the same docker image version available.
appVersion: "latest"
description: The Helm chart for ocelot.social
home: https://ocelot.social
sources:
- https://github.com/Ocelot-Social-Community/
- https://github.com/Ocelot-Social-Community/Ocelot-Social
- https://github.com/Ocelot-Social-Community/Ocelot-Social-Deploy-Rebranding
maintainers:
- name: Ulf Gebhardt
email: ulf.gebhardt@webcraft-media.de
url: https://www.webcraft-media.de/#!ulf_gebhardt
icon: https://github.com/Ocelot-Social-Community/Ocelot-Social/raw/master/webapp/static/img/custom/welcome.svg
deprecated: false
# Unused Fields
#dependencies: # A list of the chart requirements (optional)
# - name: ingress-nginx
# version: v1.10.0
# repository: https://kubernetes.github.io/ingress-nginx
# condition: (optional) A yaml path that resolves to a boolean, used for enabling/disabling charts (e.g. subchart1.enabled )
# tags: # (optional)
# - Tags can be used to group charts for enabling/disabling together
# import-values: # (optional)
# - ImportValues holds the mapping of source values to parent key to be imported. Each item can be a string or pair of child/parent sublist items.
# alias: (optional) Alias to be used for the chart. Useful when you have to add the same chart multiple times
#kubeVersion: A SemVer range of compatible Kubernetes versions (optional)
#keywords:
# - A list of keywords about this project (optional)
#annotations:
# example: A list of annotations keyed by name (optional).

View File

@ -1,84 +0,0 @@
# DigitalOcean
If you want to set up a [Kubernetes](https://kubernetes.io) cluster on [DigitalOcean](https://www.digitalocean.com), follow this guide.
## Create Account
Create an account with DigitalOcean.
## Add Project
On the left side you will see a menu. Click on `New Project`. Enter a name and click `Create Project`.
You can most likely skip the step of moving resources.
## Create Kubernetes Cluster
On the right top you find the button `Create`. Click on it and choose `Kubernetes - Create Kubernetes Cluster`.
- use the latest Kubernetes version
- choose your datacenter region
- name your node pool: e.g. `pool-<your-network-name>`
- `2 Basic nodes` with `2.5 GB RAM (total of 4 GB)`, `2 shared CPUs`, and `80 GB Disk` each is optimal for the beginning
- set your cluster name: e.g. `cluster-<your-network-name>`
- select your project
- no tags necessary
## Getting Started
After your cluster is set up (see the progress bar above), click on `Getting started`. Please install the following management tools:
- [kubectl v1.24.1](https://kubernetes.io/docs/tasks/tools/)
- [doctl v1.78.0](https://github.com/digitalocean/doctl)
Install the tools as described on the tab or see the links here.
After the installation, click on `Continue`.
### Download Configuration File
Follow the steps to download the configuration file.
You can skip this step if necessary, as you can download the file later. You can then do this by clicking on `Kubernetes` in the left menu. In the menu to the right of the cluster name in the cluster list, click on `More` and select `Download Config`.
### Patch & Minor Version Upgrades
Skip `Patch & Minor Version Upgrades` for now.
### Install 1-Click Apps
You don't need a 1-click app; our Helm chart will install the required NGINX components.
Therefore, skip this step as well.
## DNS Configuration
There are two ways to set up the DNS.
### Manage DNS With A Different Domain Provider
If you have registered your domain or subdomain with another domain provider, add an `A` record in its DNS pointing to the IP address of one of the cluster droplets.
To find the correct IP address to set in the DNS `A` record, click `Droplets` in the left main menu.
A list of all your droplets will be displayed.
Take the IP of one of the (possibly two or more) droplets in your cluster from the list and enter it into the `A` record.
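If you already have `doctl` set up (see the Helm installation guide), a quick way to list those IPs from the terminal — a sketch using standard `doctl` output columns:
```bash
# list droplet names and their public IPv4 addresses
$ doctl compute droplet list --format Name,PublicIPv4
```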
### Manage DNS With DigitalOcean
***TODO:** How to configure the DigitalOcean DNS management service …*
To understand what makes sense to do when managing your DNS with DigitalOcean, you need to know how DNS works:
DNS means `Domain Name System`. It resolves domains like `example.com` into an IP like `123.123.123.123`.
DigitalOcean is not a domain registrar, but provides a DNS management service. If you use DigitalOcean's DNS management service, you can configure [your cluster](/deployment/kubernetes/README.md#dns) so that the domain always resolves to the correct IP and is updated automatically.
The IPs of the DigitalOcean machines are not necessarily stable, so the cluster's DNS service will update the DNS records managed by DigitalOcean to the new IP as needed.
***CAUTION:** If you are using an external DNS, you currently have to do this manually, which can cause downtime.*
## Deploy
Yeah, you're done here. Back to [Deployment with Helm for Kubernetes](/deployment/kubernetes/README.md).
## Backups On DigitalOcean
You can and should do [backups](/deployment/kubernetes/Backup.md) with Kubernetes.
In addition to backing up and copying the Neo4j database dump and the backend images, you can take a volume snapshot on DigitalOcean while the database is in sleep mode.

View File

@ -1,299 +0,0 @@
# Kubernetes Helm Installation Of Ocelot.Social
Deploying [ocelot.social](https://github.com/Ocelot-Social-Community/Ocelot-Social) with [Helm](https://helm.sh) for [Kubernetes](https://kubernetes.io) is very straightforward. All you have to do is change certain parameters, like domain names and API keys; then you just install our provided Helm chart to your cluster.
## Kubernetes Cloud Hosting
There are various ways to set up your own or a managed Kubernetes cluster. We will extend the following lists over time.
Please contact us if you are interested in options not listed below.
Managed Kubernetes:
- [DigitalOcean](/deployment/kubernetes/DigitalOcean.md)
## Configuration
You can customize the network server with your own configuration by duplicating the `values.template.yaml` to a new `values.yaml` file and changing it to your needs. All included variables will be available as environment variables in your deployed Kubernetes pods.
Besides the `values.template.yaml` file we provide a `nginx.values.template.yaml` and `dns.values.template.yaml` for a similar procedure. The new `nginx.values.yaml` is the configuration for the ingress-nginx Helm chart, while the `dns.values.yaml` file is for automatically updating the DNS records on DigitalOcean and is therefore optional.
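In practice the preparation boils down to three copies, using the file names given above:
```bash
# in folder deployment/kubernetes/
$ cp values.template.yaml values.yaml
$ cp nginx.values.template.yaml nginx.values.yaml
$ cp dns.values.template.yaml dns.values.yaml   # optional, only for DigitalOcean DNS
```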
## Installation
Due to the many limitations of Helm you still have to do several manual steps.
Those occur before you run the actual *ocelot.social* Helm chart.
Obviously it is expected of you to have `helm` and `kubectl` installed.
For the cert-manager you may need `cmctl`, see below.
For DigitalOcean you may also need `doctl`.
Install:
- [kubectl v1.24.1](https://kubernetes.io/docs/tasks/tools/)
- [doctl v1.78.0](https://docs.digitalocean.com/reference/doctl/how-to/install/)
- [cmctl v1.8.2](https://cert-manager.io/docs/usage/cmctl/#installation)
- [helm v3.9.0](https://helm.sh/docs/intro/install/)
### Cert Manager (https)
Please refer to [cert-manager.io docs](https://cert-manager.io/docs/installation/) for more details.
***ATTENTION:*** *Make sure your terminal is in the folder of this README inside your repository.*
There are three ways to install cert-manager: purely via `kubectl`, via `cmctl`, or with `helm`.
We recommend using `helm` so that installation methods are not mixed.
Please have a look here:
- [Installing with Helm](https://cert-manager.io/docs/installation/helm/#installing-with-helm)
Our Helm installation is optimized for cert-manager version `v1.9.1` and `kubectl` version `v1.24.2`.
Please search here for cert-manager versions that are compatible with your `kubectl` version on the cluster and on the client: [cert-manager Supported Releases](https://cert-manager.io/docs/installation/supported-releases/#supported-releases).
***ATTENTION:*** *When uninstalling cert-manager, be sure to use the same method as for installation! Otherwise, we could end up in a broken state, see [Uninstall](https://cert-manager.io/docs/installation/kubectl/#uninstalling).*
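For orientation, the Helm route from the linked documentation boils down to the following — a sketch, pinned to the `v1.9.1` version this setup is optimized for:
```bash
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.9.1 \
  --set installCRDs=true
```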
<!-- #### 1. Create Namespace
```bash
# kubeconfig.yaml set globally
$ kubectl create namespace cert-manager
# or kubeconfig.yaml in your repo, then adjust
$ kubectl --kubeconfig=/../kubeconfig.yaml create namespace cert-manager
```
#### 2. Add Helm repository and update
```bash
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
```
#### 3. Install Cert-Manager Helm chart
```bash
# option 1
# this can't be applied via kubectl to our cluster since the CRDs can't be installed properly this way ...
# $ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.crds.yaml
# option 2
# kubeconfig.yaml set globally
$ helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--version v1.9.1 \
--set installCRDs=true
# or kubeconfig.yaml in your repo, then adjust
$ helm --kubeconfig=/../kubeconfig.yaml \
install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--version v1.9.1 \
--set installCRDs=true
``` -->
### Ingress-Nginx
#### 1. Add Helm repository for `ingress-nginx` and update
```bash
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
```
#### 2. Install ingress-nginx
```bash
# kubeconfig.yaml set globally
$ helm install ingress-nginx ingress-nginx/ingress-nginx -f nginx.values.yaml
# or kubeconfig.yaml in your repo, then adjust
$ helm --kubeconfig=/../kubeconfig.yaml install ingress-nginx ingress-nginx/ingress-nginx -f nginx.values.yaml
```
### DigitalOcean Firewall
This is only necessary if you run DigitalOcean without a load balancer ([see here for more info](https://stackoverflow.com/questions/54119399/expose-port-80-on-digital-oceans-managed-kubernetes-without-a-load-balancer/55968709)).
#### 1. Authenticate towards DO with your local `doctl`
You will need a DO token for that.
```bash
# without doctl context
$ doctl auth init
# with doctl new context to be filled in
$ doctl auth init --context <new-context-name>
```
You will need an API token, which you can generate in the control panel at <https://cloud.digitalocean.com/account/api/tokens>.
#### 2. Generate DO firewall
Get the `CLUSTER_UUID` value from the dashboard or from the ID column via `doctl kubernetes cluster list`:
```bash
# need to apply access token by `doctl auth init` before
$ doctl kubernetes cluster list
```
Fill in the `CLUSTER_UUID` and `your-domain`. The latter with hyphens `-` instead of dots `.`:
```bash
# without doctl context
$ doctl compute firewall create \
--inbound-rules="protocol:tcp,ports:80,address:0.0.0.0/0,address:::/0 protocol:tcp,ports:443,address:0.0.0.0/0,address:::/0" \
--tag-names=k8s:<CLUSTER_UUID> \
--name=<your-domain>-http-https
# with doctl context to be filled in
$ doctl compute firewall create \
--inbound-rules="protocol:tcp,ports:80,address:0.0.0.0/0,address:::/0 protocol:tcp,ports:443,address:0.0.0.0/0,address:::/0" \
--tag-names=k8s:<CLUSTER_UUID> \
--name=<your-domain>-http-https --context <context-name>
```
To check the result, use the following command (fill in the `ID` you got at creation):
```bash
# without doctl context
$ doctl compute firewall get <ID>
# with doctl context to be filled in
$ doctl compute firewall get <ID> --context <context-name>
```
### DNS
***TODO:** I thought this is necessary if we use the DigitalOcean DNS management service? See [Manage DNS With DigitalOcean](/deployment/kubernetes/DigitalOcean.md#manage-dns-with-digitalocean)*
This chart is only necessary (more precisely: recommended) if you run DigitalOcean without a load balancer.
You need to generate an access token with read + write for the `dns.values.yaml` at <https://cloud.digitalocean.com/account/api/tokens> and fill it in.
#### 1. Add Helm repository for `bitnami` and update
```bash
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
```
#### 2. Install DNS
```bash
# kubeconfig.yaml set globally
$ helm install dns bitnami/external-dns -f dns.values.yaml
# or kubeconfig.yaml in your repo, then adjust
$ helm --kubeconfig=/../kubeconfig.yaml install dns bitnami/external-dns -f dns.values.yaml
```
### Ocelot.Social
***Attention:** Before installing your own ocelot.social network, you need to create a DockerHub (account and) organization, put its name in the `package.json` file, and push your deployment and rebranding code to GitHub so that GitHub Actions can push your Docker images to DockerHub. This is because Kubernetes will pull these images to create pods from them.*
All commands for ocelot need to be executed in the Kubernetes folder, so `cd deployment/kubernetes/` is expected to have been run before every command. Furthermore, the given commands will install ocelot into the default namespace; this can be modified by appending `--namespace not.default`.
#### Install
Run this only once, for the initial installation:
```bash
# kubeconfig.yaml set globally
$ helm install ocelot ./
# or kubeconfig.yaml in your repo, then adjust
$ helm --kubeconfig=/../kubeconfig.yaml install ocelot ./
```
#### Upgrade & Update
Run for all upgrades and updates:
```bash
# kubeconfig.yaml set globally
$ helm upgrade ocelot ./
# or kubeconfig.yaml in your repo, then adjust
$ helm --kubeconfig=/../kubeconfig.yaml upgrade ocelot ./
```
#### Rollback
Run for a rollback, in case something went wrong:
```bash
# kubeconfig.yaml set globally
$ helm rollback ocelot
# or kubeconfig.yaml in your repo, then adjust
$ helm --kubeconfig=/../kubeconfig.yaml rollback ocelot
```
#### Uninstall
Be aware that if you uninstall ocelot, the formerly bound volumes become unbound. Those volumes contain all data from uploads and the database. You have to manually free their reference in order to bind them again when reinstalling. Once unbound from their former container references, they should automatically be rebound (provided the sizes did not change); see the sketch after the commands below.
```bash
# kubeconfig.yaml set globally
$ helm uninstall ocelot
# or kubeconfig.yaml in your repo, then adjust
$ helm --kubeconfig=/../kubeconfig.yaml uninstall ocelot
```
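To free a volume's former reference manually, one common approach is to clear the `claimRef` on the released `PersistentVolume` — a sketch, to be used with care:
```bash
# list persistent volumes; 'Released' means the old claim reference is still set
$ kubectl get pv
# clear the stale claim reference so the volume becomes 'Available' again (sketch)
$ kubectl patch pv <volume-name> -p '{"spec":{"claimRef":null}}'
```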
## Backups
You can and should do [backups](/deployment/kubernetes/Backup.md) with Kubernetes.
## Error Reporting
We use [Sentry](https://github.com/getsentry/sentry) for error reporting in both
our backend and web frontend. You can either use a hosted or a self-hosted
instance. Just set the two `DSN` values in your
[configmap](../templates/configmap.template.yaml) and update the `COMMIT`
during a deployment with your commit or the version of your release.
### Self-hosted Sentry
For data privacy it is recommended to set up your own instance of Sentry.
If you are lucky enough to have a Kubernetes cluster with the required hardware
support, try this [helm chart](https://github.com/helm/charts/tree/master/stable/sentry).
On our Kubernetes cluster we get "multi-attach" errors for persistent volumes.
Apparently DigitalOcean's Kubernetes clusters do not fulfill the requirements.
## Kubernetes Commands (Without Helm) To Deploy New Docker Images To A Kubernetes Cluster
### Deploy A Version
```bash
# !!! be aware of the correct kube context !!!
$ kubectl config get-contexts
# deploy version '$BUILD_VERSION'
# !!! 'latest' is not recommended on production !!!
# for convenience, set an env variable
$ export BUILD_VERSION=1.0.8-48-ocelot.social1.0.8-184 # example
# check this with
$ echo $BUILD_VERSION
1.0.8-48-ocelot.social1.0.8-184
# deploy actual version '$BUILD_VERSION' to Kubernetes cluster
$ kubectl -n default set image deployment/ocelot-webapp container-ocelot-webapp=ocelotsocialnetwork/webapp:$BUILD_VERSION
$ kubectl -n default rollout restart deployment/ocelot-webapp
$ kubectl -n default set image deployment/ocelot-backend container-ocelot-backend=ocelotsocialnetwork/backend:$BUILD_VERSION
$ kubectl -n default rollout restart deployment/ocelot-backend
$ kubectl -n default set image deployment/ocelot-maintenance container-ocelot-maintenance=ocelotsocialnetwork/maintenance:$BUILD_VERSION
$ kubectl -n default rollout restart deployment/ocelot-maintenance
$ kubectl -n default set image deployment/ocelot-neo4j container-ocelot-neo4j=ocelotsocialnetwork/neo4j-community:$BUILD_VERSION
$ kubectl -n default rollout restart deployment/ocelot-neo4j
# verify deployment and wait for the pods of each deployment to get ready for cleaning and seeding of the database
$ kubectl -n default rollout status deployment/ocelot-webapp --timeout=240s
$ kubectl -n default rollout status deployment/ocelot-maintenance --timeout=240s
$ kubectl -n default rollout status deployment/ocelot-backend --timeout=240s
$ kubectl -n default rollout status deployment/ocelot-neo4j --timeout=240s
```
### Staging Clean And Seed Neo4j Database
***ATTENTION:*** Cleaning and seeding of our Neo4j database is only possible in production if env `PRODUCTION_DB_CLEAN_ALLOW=true` is set in our deployment.
```bash
# !!! be aware of the correct kube context !!!
$ kubectl config get-contexts
# reset and seed Neo4j database via backend for staging
$ kubectl -n default exec -it $(kubectl -n default get pods | grep ocelot-backend | awk '{ print $1 }') -- /bin/sh -c "node --experimental-repl-await dist/db/clean.js && node --experimental-repl-await dist/db/seed.js"
```

View File

@ -1,12 +0,0 @@
# please duplicate this template file, rename it to "dns.values.yaml" and fill in your values
provider: digitalocean
digitalocean:
# create the API token at https://cloud.digitalocean.com/account/api/tokens
# needs read + write
apiToken: "TODO"
domainFilters:
# domains you want external-dns to be able to edit
- TODO.TODO
rbac:
create: true

View File

@ -1,13 +0,0 @@
# please duplicate this template file, rename it to "nginx.values.yaml" and fill in your values
controller:
kind: DaemonSet
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
ingressClass: nginx
daemonset:
useHostPort: true
service:
type: ClusterIP
rbac:
create: true

View File

@ -1 +0,0 @@
You installed ocelot-social! Congrats <3

View File

@ -1,29 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: configmap-{{ .Release.Name }}-backend
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "configmap-backend"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
data:
PRODUCTION_DB_CLEAN_ALLOW: "{{ .Values.PRODUCTION_DB_CLEAN_ALLOW }}"
PUBLIC_REGISTRATION: "{{ .Values.PUBLIC_REGISTRATION }}"
INVITE_REGISTRATION: "{{ .Values.INVITE_REGISTRATION }}"
CATEGORIES_ACTIVE: "{{ .Values.CATEGORIES_ACTIVE }}"
CLIENT_URI: "{{ .Values.BACKEND.CLIENT_URI }}"
EMAIL_DEFAULT_SENDER: "{{ .Values.BACKEND.EMAIL_DEFAULT_SENDER }}"
SMTP_HOST: "{{ .Values.BACKEND.SMTP_HOST }}"
SMTP_PORT: "{{ .Values.BACKEND.SMTP_PORT }}"
SMTP_IGNORE_TLS: "{{ .Values.BACKEND.SMTP_IGNORE_TLS }}"
SMTP_SECURE: "{{ .Values.BACKEND.SMTP_SECURE }}"
GRAPHQL_URI: "http://{{ .Release.Name }}-backend:4000"
NEO4J_URI: "bolt://{{ .Release.Name }}-neo4j:7687"
#REDIS_DOMAIN: ---toBeSet(IP)---
#REDIS_PORT: "6379"
#SENTRY_DSN_WEBAPP: "---toBeSet---"
#SENTRY_DSN_BACKEND: "---toBeSet---"

View File

@ -1,57 +0,0 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ .Release.Name }}-backend
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "deployment-backend"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
replicas: 1
minReadySeconds: {{ .Values.BACKEND.MIN_READY_SECONDS }}
progressDeadlineSeconds: {{ .Values.BACKEND.PROGRESS_DEADLINE_SECONDS }}
revisionHistoryLimit: {{ .Values.BACKEND.REVISIONS_HISTORY_LIMIT }}
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ .Release.Name }}-backend
template:
metadata:
annotations:
backup.velero.io/backup-volumes: uploads
# make sure the pod is redeployed
rollme: {{ randAlphaNum 5 | quote }}
labels:
app: {{ .Release.Name }}-backend
spec:
containers:
- name: container-{{ .Release.Name }}-backend
image: "{{ .Values.BACKEND.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.BACKEND.DOCKER_IMAGE_PULL_POLICY }}
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-backend
- secretRef:
name: secret-{{ .Release.Name }}-backend
ports:
- containerPort: 4000
protocol: TCP
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /app/public/uploads
name: uploads
dnsPolicy: ClusterFirst
schedulerName: default-scheduler
restartPolicy: {{ .Values.BACKEND.CONTAINER_RESTART_POLICY }}
terminationGracePeriodSeconds: {{ .Values.BACKEND.CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS }}
volumes:
- name: uploads
persistentVolumeClaim:
claimName: volume-claim-{{ .Release.Name }}-uploads

View File

@ -1,24 +0,0 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: volume-claim-{{ .Release.Name }}-uploads
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "volume-claim-backend"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
#dataSource:
# name: uploads-snapshot
# kind: VolumeSnapshot
# apiGroup: snapshot.storage.k8s.io
storageClassName: storage-{{ .Release.Name }}-persistent
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.BACKEND.STORAGE_UPLOADS }}

View File

@ -1,21 +0,0 @@
kind: Secret
apiVersion: v1
metadata:
name: secret-{{ .Release.Name }}-backend
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "secret-backend"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
stringData:
JWT_SECRET: "{{ .Values.BACKEND.JWT_SECRET }}"
MAPBOX_TOKEN: "{{ .Values.BACKEND.MAPBOX_TOKEN }}"
PRIVATE_KEY_PASSPHRASE: "{{ .Values.BACKEND.PRIVATE_KEY_PASSPHRASE }}"
SMTP_USERNAME: "{{ .Values.BACKEND.SMTP_USERNAME }}"
SMTP_PASSWORD: "{{ .Values.BACKEND.SMTP_PASSWORD }}"
#NEO4J_USERNAME: ""
#NEO4J_PASSWORD: ""
#REDIS_PASSWORD: ---toBeSet---

View File

@ -1,20 +0,0 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Release.Name }}-backend
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "service-backend"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
ports:
- name: {{ .Release.Name }}-graphql
port: 4000
targetPort: 4000
protocol: TCP
selector:
app: {{ .Release.Name }}-backend

View File

@ -1,22 +0,0 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-production
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "letsencrypt-production"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: {{ .Values.LETSENCRYPT.EMAIL }}
privateKeySecretRef:
name: letsencrypt-production
solvers:
- http01:
ingress:
class: nginx

View File

@ -1,22 +0,0 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "letsencrypt-staging"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: {{ .Values.LETSENCRYPT.EMAIL }}
privateKeySecretRef:
name: letsencrypt-staging
solvers:
- http01:
ingress:
class: nginx

View File

@ -1,29 +0,0 @@
kind: Job
apiVersion: batch/v1
metadata:
name: job-{{ .Release.Name }}-db-init
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "job-db-init"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
annotations:
"helm.sh/hook": post-install
"helm.sh/hook-delete-policy": hook-succeeded, hook-failed
"helm.sh/hook-weight": "0"
spec:
template:
spec:
restartPolicy: Never
containers:
- name: job-{{ .Release.Name }}-db-init
image: "{{ .Values.BACKEND.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
command: ["/bin/sh", "-c", "yarn prod:migrate init"]
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-backend
- secretRef:
name: secret-{{ .Release.Name }}-backend

View File

@ -1,29 +0,0 @@
kind: Job
apiVersion: batch/v1
metadata:
name: job-{{ .Release.Name }}-db-migrate
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "job-db-migrate"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
annotations:
"helm.sh/hook": post-install, post-upgrade
"helm.sh/hook-delete-policy": hook-succeeded, hook-failed
"helm.sh/hook-weight": "5"
spec:
template:
spec:
restartPolicy: Never
containers:
- name: job-{{ .Release.Name }}-db-migrations
image: "{{ .Values.BACKEND.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
command: ["/bin/sh", "-c", "yarn prod:migrate up"]
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-backend
- secretRef:
name: secret-{{ .Release.Name }}-backend

View File

@ -1,14 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: configmap-{{ .Release.Name }}-maintenance
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "configmap-maintenance"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
data:
HOST: "0.0.0.0"

View File

@ -1,40 +0,0 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ .Release.Name }}-maintenance
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "deployment-maintenance"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
revisionHistoryLimit: {{ .Values.MAINTENANCE.REVISIONS_HISTORY_LIMIT }}
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ .Release.Name }}-maintenance
template:
metadata:
labels:
app: {{ .Release.Name }}-maintenance
# make sure the pod is redeployed
rollme: {{ randAlphaNum 5 | quote }}
spec:
containers:
- name: container-{{ .Release.Name }}-maintenance
image: "{{ .Values.MAINTENANCE.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.MAINTENANCE.DOCKER_IMAGE_PULL_POLICY }}
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-webapp
- secretRef:
name: secret-{{ .Release.Name }}-webapp
ports:
- containerPort: 80
restartPolicy: {{ .Values.MAINTENANCE.CONTAINER_RESTART_POLICY }}
terminationGracePeriodSeconds: {{ .Values.MAINTENANCE.CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS }}

View File

@ -1,13 +0,0 @@
kind: Secret
apiVersion: v1
metadata:
name: secret-{{ .Release.Name }}-maintenance
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "secret-maintenance"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
stringData:

View File

@ -1,20 +0,0 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Release.Name }}-maintenance
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "service-maintenance"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
ports:
- name: {{ .Release.Name }}-http
port: 80
targetPort: 80
protocol: TCP
selector:
app: {{ .Release.Name }}-maintenance

View File

@ -1,21 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: configmap-{{ .Release.Name }}-neo4j
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "configmap-neo4j"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
data:
NEO4J_ACCEPT_LICENSE_AGREEMENT: "{{ .Values.NEO4J.ACCEPT_LICENSE_AGREEMENT }}"
NEO4J_AUTH: "{{ .Values.NEO4J.AUTH }}"
NEO4J_dbms_connector_bolt_thread__pool__max__size: "{{ .Values.NEO4J.DBMS_CONNECTOR_BOLT_THREAD_POOL_MAX_SIZE }}"
NEO4J_dbms_memory_heap_initial__size: "{{ .Values.NEO4J.DBMS_MEMORY_HEAP_INITIAL_SIZE }}"
NEO4J_dbms_memory_heap_max__size: "{{ .Values.NEO4J.DBMS_MEMORY_HEAP_MAX_SIZE }}"
NEO4J_dbms_memory_pagecache_size: "{{ .Values.NEO4J.DBMS_MEMORY_PAGECACHE_SIZE }}"
NEO4J_dbms_security_procedures_unrestricted: "{{ .Values.NEO4J.DBMS_SECURITY_PROCEDURES_UNRESTRICTED }}"
NEO4J_apoc_import_file_enabled: "{{ .Values.NEO4J.APOC_IMPORT_FILE_ENABLED }}"

View File

@ -1,57 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}-neo4j
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "deployment-neo4j"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
replicas: 1
revisionHistoryLimit: {{ .Values.NEO4J.REVISIONS_HISTORY_LIMIT }}
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ .Release.Name }}-neo4j
template:
metadata:
name: neo4j
annotations:
backup.velero.io/backup-volumes: neo4j-data
# make sure the pod is redeployed
rollme: {{ randAlphaNum 5 | quote }}
labels:
app: {{ .Release.Name }}-neo4j
spec:
containers:
- name: container-{{ .Release.Name }}-neo4j
image: "{{ .Values.NEO4J.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.NEO4J.DOCKER_IMAGE_PULL_POLICY }}
ports:
- containerPort: 7687
- containerPort: 7474
resources:
requests:
memory: {{ .Values.NEO4J.RESOURCE_REQUESTS_MEMORY | default "1G" | quote }}
limits:
memory: {{ .Values.NEO4J.RESOURCE_LIMITS_MEMORY | default "1G" | quote }}
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-neo4j
- secretRef:
name: secret-{{ .Release.Name }}-neo4j
volumeMounts:
- mountPath: /data/
name: neo4j-data
volumes:
- name: neo4j-data
persistentVolumeClaim:
claimName: volume-claim-{{ .Release.Name }}-neo4j
restartPolicy: {{ .Values.NEO4J.CONTAINER_RESTART_POLICY }}
terminationGracePeriodSeconds: {{ .Values.NEO4J.CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS }}

View File

@ -1,19 +0,0 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: volume-claim-{{ .Release.Name }}-neo4j
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "volume-claim-neo4j"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
storageClassName: storage-{{ .Release.Name }}-persistent
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.NEO4J.STORAGE }}

View File

@ -1,15 +0,0 @@
kind: Secret
apiVersion: v1
metadata:
name: secret-{{ .Release.Name }}-neo4j
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "secret-neo4j"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
stringData:
NEO4J_USERNAME: ""
NEO4J_PASSWORD: ""

View File

@ -1,23 +0,0 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Release.Name }}-neo4j
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "service-neo4j"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
ports:
- name: {{ .Release.Name }}-bolt
port: 7687
targetPort: 7687
protocol: TCP
#- name: {{ .Release.Name }}-http
# port: 7474
# targetPort: 7474
selector:
app: {{ .Release.Name }}-neo4j

View File

@ -1,16 +0,0 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: storage-{{ .Release.Name }}-persistent
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "storage-persistent"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
provisioner: {{ .Values.STORAGE.PROVISIONER }}
reclaimPolicy: {{ .Values.STORAGE.RECLAIM_POLICY }}
volumeBindingMode: {{ .Values.STORAGE.VOLUME_BINDING_MODE }}
allowVolumeExpansion: {{ .Values.STORAGE.ALLOW_VOLUME_EXPANSION }}

View File

@ -1,20 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: configmap-{{ .Release.Name }}-webapp
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "configmap-webapp"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
data:
HOST: "0.0.0.0"
PUBLIC_REGISTRATION: "{{ .Values.PUBLIC_REGISTRATION }}"
INVITE_REGISTRATION: "{{ .Values.INVITE_REGISTRATION }}"
CATEGORIES_ACTIVE: "{{ .Values.CATEGORIES_ACTIVE }}"
COOKIE_EXPIRE_TIME: "{{ .Values.COOKIE_EXPIRE_TIME }}"
WEBSOCKETS_URI: "{{ .Values.WEBAPP.WEBSOCKETS_URI }}"
GRAPHQL_URI: "http://{{ .Release.Name }}-backend:4000"

View File

@ -1,44 +0,0 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ .Release.Name }}-webapp
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "deployment-webapp"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
replicas: {{ .Values.WEBAPP.REPLICAS }}
minReadySeconds: {{ .Values.WEBAPP.MIN_READY_SECONDS }}
progressDeadlineSeconds: {{ .Values.WEBAPP.PROGRESS_DEADLINE_SECONDS }}
revisionHistoryLimit: {{ .Values.WEBAPP.REVISIONS_HISTORY_LIMIT }}
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
app: {{ .Release.Name }}-webapp
template:
metadata:
annotations:
# make sure the pod is redeployed
rollme: {{ randAlphaNum 5 | quote }}
labels:
app: {{ .Release.Name }}-webapp
spec:
containers:
- name: container-{{ .Release.Name }}-webapp
image: "{{ .Values.WEBAPP.DOCKER_IMAGE_REPO }}:{{ .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.WEBAPP.DOCKER_IMAGE_PULL_POLICY }}
ports:
- containerPort: 3000
envFrom:
- configMapRef:
name: configmap-{{ .Release.Name }}-webapp
- secretRef:
name: secret-{{ .Release.Name }}-webapp
restartPolicy: {{ .Values.WEBAPP.CONTAINER_RESTART_POLICY }}
terminationGracePeriodSeconds: {{ .Values.WEBAPP.CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS }}

View File

@ -1,36 +0,0 @@
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: ingress-{{ .Release.Name }}-webapp
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "ingress-webapp"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: {{ .Values.LETSENCRYPT.ISSUER }}
nginx.ingress.kubernetes.io/proxy-body-size: {{ .Values.NGINX.PROXY_BODY_SIZE }}
spec:
tls:
- hosts:
{{- range .Values.LETSENCRYPT.DOMAINS }}
- {{ . }}
{{- end }}
secretName: tls
rules:
{{- range .Values.LETSENCRYPT.DOMAINS }}
- host: {{ . }}
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: {{ $.Release.Name }}-webapp
port:
number: 3000
{{- end }}

View File

@ -1,13 +0,0 @@
kind: Secret
apiVersion: v1
metadata:
name: secret-{{ .Release.Name }}-webapp
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "secret-webapp"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
stringData:

View File

@ -1,20 +0,0 @@
kind: Service
apiVersion: v1
metadata:
name: {{ .Release.Name }}-webapp
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "service-webapp"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
ports:
- name: {{ .Release.Name }}-http
port: 3000
targetPort: 3000
protocol: TCP
selector:
app: {{ .Release.Name }}-webapp

View File

@ -1,120 +0,0 @@
# please duplicate this template file, rename it to "values.yaml" and fill in your values
# change all the below if needed
PRODUCTION_DB_CLEAN_ALLOW: false # only true for production environments on staging servers
PUBLIC_REGISTRATION: false
INVITE_REGISTRATION: false
COOKIE_EXPIRE_TIME: 730 # days (730 days, two years is the default in main code)
CATEGORIES_ACTIVE: false
BACKEND:
# change all the below if needed
# DOCKER_IMAGE_REPO - change that to your branded docker image
# label is appended based on .Chart.appVersion
DOCKER_IMAGE_REPO: "ocelotsocialnetwork/backend-branded"
CLIENT_URI: "https://staging.ocelot.social"
# create a new one for your network
JWT_SECRET: "b/&&7b78BF&fv/Vd"
MAPBOX_TOKEN: "pk.eyJ1IjoiYnVzZmFrdG9yIiwiYSI6ImNraDNiM3JxcDBhaWQydG1uczhpZWtpOW4ifQ.7TNRTO-o9aK1Y6MyW_Nd4g"
PRIVATE_KEY_PASSPHRASE: "a7dsf78sadg87ad87sfagsadg78"
# ocelot.social mail dummy
EMAIL_DEFAULT_SENDER: "devops@ocelot.social"
SMTP_HOST: "mail.ocelot.social"
SMTP_USERNAME: "devops@ocelot.social"
SMTP_PASSWORD: "devops@ocelot.social"
SMTP_PORT: "587"
SMTP_IGNORE_TLS: 'false'
SMTP_SECURE: 'false' # true for 465, false for other ports
# or
# SMTP_PORT: "465"
# SMTP_IGNORE_TLS: 'true'
# SMTP_SECURE: 'true' # true for 465, false for other ports
# most likely you don't need to change this
MIN_READY_SECONDS: "15"
PROGRESS_DEADLINE_SECONDS: "60"
REVISIONS_HISTORY_LIMIT: "25"
CONTAINER_RESTART_POLICY: "Always"
CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS: "30"
DOCKER_IMAGE_PULL_POLICY: "Always"
STORAGE_UPLOADS: "25Gi"
WEBAPP:
# change all the below if needed
# DOCKER_IMAGE_REPO - change that to your branded docker image
# label is appended based on .Chart.appVersion
DOCKER_IMAGE_REPO: "ocelotsocialnetwork/webapp-branded"
WEBSOCKETS_URI: "wss://staging.ocelot.social/api/graphql"
# Most likely you don't need to change this
REPLICAS: "2"
MIN_READY_SECONDS: "15"
PROGRESS_DEADLINE_SECONDS: "60"
REVISIONS_HISTORY_LIMIT: "25"
CONTAINER_RESTART_POLICY: "Always"
CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS: "30"
DOCKER_IMAGE_PULL_POLICY: "Always"
NEO4J:
# most likely you don't need to change this
REVISIONS_HISTORY_LIMIT: "25"
DOCKER_IMAGE_REPO: "ocelotsocialnetwork/neo4j-community-branded"
DOCKER_IMAGE_PULL_POLICY: "Always"
CONTAINER_RESTART_POLICY: "Always"
CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS: "30"
STORAGE: "5Gi"
# RESOURCE_REQUESTS_MEMORY configures the memory available for requests.
RESOURCE_REQUESTS_MEMORY: "2G"
# RESOURCE_LIMITS_MEMORY configures the memory limits available.
RESOURCE_LIMITS_MEMORY: "4G"
# required for Neo4j Enterprise version
#ACCEPT_LICENSE_AGREEMENT: "yes"
ACCEPT_LICENSE_AGREEMENT: "no"
AUTH: "none"
#DBMS_CONNECTOR_BOLT_THREAD_POOL_MAX_SIZE: "10000" # hc value
DBMS_CONNECTOR_BOLT_THREAD_POOL_MAX_SIZE: "400" # default value
#DBMS_MEMORY_HEAP_INITIAL_SIZE: "500MB" # HC value
DBMS_MEMORY_HEAP_INITIAL_SIZE: "" # default
#DBMS_MEMORY_HEAP_MAX_SIZE: "500MB" # HC value
DBMS_MEMORY_HEAP_MAX_SIZE: "" # default
#DBMS_MEMORY_PAGECACHE_SIZE: "490M" # HC value
DBMS_MEMORY_PAGECACHE_SIZE: "" # default
#APOC_IMPORT_FILE_ENABLED: "true" # HC value
APOC_IMPORT_FILE_ENABLED: "false" # default
DBMS_SECURITY_PROCEDURES_UNRESTRICTED: "algo.*,apoc.*"
MAINTENANCE:
# change all the below if needed
# DOCKER_IMAGE_REPO - change that to your branded docker image
# label is appended based on .Chart.appVersion
DOCKER_IMAGE_REPO: "ocelotsocialnetwork/maintenance-branded"
# Most likely you don't need to change this
REVISIONS_HISTORY_LIMIT: "25"
CONTAINER_RESTART_POLICY: "Always"
CONTAINER_TERMINATION_GRACE_PERIOD_SECONDS: "30"
DOCKER_IMAGE_PULL_POLICY: "Always"
LETSENCRYPT:
# change all the below if needed
# ISSUER is used by cert-manager to set up certificates with the given provider.
# change it to "letsencrypt-production" once you are ready to have valid certificates.
# Be aware that there is an issuing limit with Let's Encrypt, so a dry run with staging might be wise
ISSUER: "letsencrypt-staging"
EMAIL: "devops@ocelot.social"
DOMAINS:
- "staging.ocelot.social"
- "www.staging.ocelot.social"
NGINX:
# most likely you don't need to change this
PROXY_BODY_SIZE: "10m"
STORAGE:
# change all the below if needed
PROVISIONER: "dobs.csi.digitalocean.com"
# most likely you don't need to change this
RECLAIM_POLICY: "Retain"
VOLUME_BINDING_MODE: "Immediate"
ALLOW_VOLUME_EXPANSION: true

View File

@ -1,45 +0,0 @@
# Maintenance mode
> Despite our best efforts, systems sometimes require downtime for a variety of reasons.
Quote from [here](https://www.nrmitchi.com/2017/11/easy-maintenance-mode-in-kubernetes/)
We use our maintenance mode for manual database backup and restore. We also
bring the database into maintenance mode for manual database migrations.
## Deploy the service
We prepared a sample configuration, so you can simply run:
```sh
# in folder deployment/
$ kubectl apply -f ./ocelot-social/maintenance/
```
This will fire up a maintenance service.
## Bring application into maintenance mode
Now, if you want a controlled downtime and to bring your
application into maintenance mode, you can edit your global ingress server.
E.g. copy the file [`deployment/digital-ocean/https/templates/ingress.template.yaml`](../../digital-ocean/https/templates/ingress.template.yaml) to a new file `deployment/digital-ocean/https/ingress.yaml` and change the following:
```yaml
...
- host: develop-k8s.ocelot.social
http:
paths:
- path: /
backend:
# serviceName: web
serviceName: maintenance
# servicePort: 3000
servicePort: 80
```
Then run `$ kubectl apply -f deployment/digital-ocean/https/ingress.yaml`. If you
want to deactivate the maintenance server, just undo the edit and apply the
configuration again.
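A sketch of that deactivation, assuming the file from above:
```sh
# restore `serviceName: web` and `servicePort: 3000` in ingress.yaml, then:
$ kubectl apply -f deployment/digital-ocean/https/ingress.yaml
```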

View File

@ -1,39 +0,0 @@
# DigitalOcean
As a start, read the [introduction to Kubernetes](https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes) by the folks at DigitalOcean. The following section should enable you to deploy ocelot.social to your Kubernetes cluster.
## Connect to your local cluster
1. Create a cluster at [DigitalOcean](https://www.digitalocean.com/).
2. Download the `***-kubeconfig.yaml` from the Web UI.
3. Move the file to the default location where kubectl expects it to be: `mv ***-kubeconfig.yaml ~/.kube/config`. Alternatively you can set the config on every command: `--kubeconfig ***-kubeconfig.yaml`
4. Now check if you can connect to the cluster and if it's your newly created one by running: `kubectl get nodes`
The output should look about like this:
```sh
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
nifty-driscoll-uu1w Ready <none> 69d v1.13.2
nifty-driscoll-uuiw Ready <none> 69d v1.13.2
nifty-driscoll-uusn Ready <none> 69d v1.13.2
```
If you followed the steps above and can see your nodes, you can continue.
DigitalOcean Kubernetes clusters don't have a graphical interface, so I suggest
setting up the [Kubernetes dashboard](./dashboard/README.md) as a next step.
Configuring [HTTPS](./https/README.md) is a bit tricky, therefore I suggest
doing this as a last step.
## Spaces
We are storing our images in the s3-compatible [DigitalOcean Spaces](https://www.digitalocean.com/docs/spaces/).
We still want to take backups of our images in case something happens to the images in the cloud. See these [instructions](https://www.digitalocean.com/docs/spaces/resources/s3cmd-usage/) about getting set up with `s3cmd` to take a copy of all images in a `Spaces` namespace, i.e. `ocelot-social-uploads`.
After configuring `s3cmd` with your credentials etc., you should be able to make a backup with this command:
```sh
s3cmd get --recursive --skip-existing s3://ocelot-social-uploads
```
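Restoring works in the opposite direction; a minimal sketch, assuming the same bucket name and a local copy in `./ocelot-social-uploads/`:
```sh
s3cmd sync ./ocelot-social-uploads/ s3://ocelot-social-uploads/
```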

View File

@ -1,55 +0,0 @@
# Install Kubernetes Dashboard
The Kubernetes dashboard is optional but very helpful for debugging. If you want to install it, you have to do so only **once** per cluster:
```bash
# in folder deployment/digital-ocean/
$ kubectl apply -f dashboard/
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
```
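Before you proceed, you may want to check that the dashboard pods came up (the `kubernetes-dashboard` namespace is created by the `recommended.yaml` manifest above):
```bash
$ kubectl -n kubernetes-dashboard get pods
```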
### Login to your dashboard
Proxy the remote Kubernetes dashboard to localhost:
```bash
$ kubectl proxy
```
Visit:
[http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/](http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/)
You should see a login screen.
To get your token for the dashboard you can run this command:
```bash
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
```
It should print something like:
```text
Name: admin-user-token-6gl6l
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=admin-user
kubernetes.io/service-account.uid=b16afba9-dfec-11e7-bbb9-901b0e532516
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZnbDZsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiMTZhZmJhOS1kZmVjLTExZTctYmJiOS05MDFiMGU1MzI1MTYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.M70CU3lbu3PP4OjhFms8PVL5pQKj-jj4RNSLA4YmQfTXpPUuxqXjiTf094_Rzr0fgN_IVX6gC4fiNUL5ynx9KU-lkPfk0HnX8scxfJNzypL039mpGt0bbe1IXKSIRaq_9VW59Xz-yBUhycYcKPO9RM2Qa1Ax29nqNVko4vLn1_1wPqJ6XSq3GYI8anTzV8Fku4jasUwjrws6Cn6_sPEGmL54sq5R4Z5afUtv-mItTmqZZdxnkRqcJLlg2Y8WbCPogErbsaCDJoABQ7ppaqHetwfM_0yMun6ABOQbIwwl8pspJhpplKwyo700OSpvTT9zlBsu-b35lzXGBRHzv5g_RA
```
Grab the token from above and paste it into the [login screen](http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/)
When you are logged in, you should see something like:
![Dashboard](./dashboard-screenshot.png)
Feel free to save the login token from above in your password manager. Unlike the `kubeconfig` file, this token does not expire.

View File

@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system

Binary file not shown. (deleted image, 178 KiB)

View File

@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system

View File

@ -1,126 +0,0 @@
## Create Letsencrypt Issuers and Ingress Services
Copy the configuration templates and change the files according to your needs.
```bash
# in folder deployment/digital-ocean/https/
cp templates/issuer.template.yaml ./issuer.yaml
cp templates/ingress.template.yaml ./ingress.yaml
```
At the very least, **change the email addresses** in `issuer.yaml`. You will certainly also want
to _change the domain name_ in `ingress.yaml`.
Once you are done, apply the configuration:
```bash
# in folder deployment/digital-ocean/https/
$ kubectl apply -f .
```
{% hint style="info" %}
CAUTION: It seems that the behaviour of DigitalOcean has changed and the load balancer is not created automatically anymore.
Creating a load balancer also costs money. Please refine the following documentation if required.
{% endhint %}
{% tabs %}
{% tab title="Without Load Balancer" %}
You can find a solution without a load balancer [here](../no-loadbalancer/README.md).
{% endtab %}
{% tab title="With DigitalOcean Load Balancer" %}
{% hint style="info" %}
CAUTION: It seems that the behaviour of DigitalOcean has changed and the load balancer is not created automatically anymore.
Please refine the following documentation if required.
{% endhint %}
In earlier days, your cluster would by now have a load balancer with an external IP
address assigned. On DigitalOcean, this is how it should look:
![Screenshot of DigitalOcean dashboard showing external ip address](./ip-address.png)
If the load balancer isn't created automatically, you have to create it yourself on DigitalOcean under Networks.
In case you don't need a DigitalOcean load balancer (which costs money, by the way), have a look at the tab `Without Load Balancer`.
{% endtab %}
{% endtabs %}
Check that the ingress server is working correctly:
```bash
$ curl -kivL -H 'Host: <DOMAIN_NAME>' 'https://<IP_ADDRESS>'
<page HTML>
```
If the response looks good, configure your domain registrar for the new IP address and the domain.
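Once DNS has propagated, you can verify that the domain resolves to the load balancer's IP; for example:
```bash
$ dig +short <DOMAIN_NAME>
<IP_ADDRESS>
```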
Now let's get a valid HTTPS certificate. According to the tutorial above, check your TLS certificate for staging:
```bash
$ kubectl -n ocelot-social describe certificate tls
<
...
Spec:
...
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-staging
...
Events:
<no errors>
>
$ kubectl -n ocelot-social describe secret tls
<
...
Annotations: ...
cert-manager.io/issuer-kind: ClusterIssuer
cert-manager.io/issuer-name: letsencrypt-staging
...
>
```
If everything looks good, update the cluster-issuer of your ingress. In your ingress configuration in `ingress.yaml`, change the annotation `cert-manager.io/cluster-issuer` from `letsencrypt-staging` (for testing: you only get a dummy certificate, but Let's Encrypt will not block you for sending too many requests) to `letsencrypt-prod` (for production: you get a real certificate, but too many request cycles can get you blocked by Let's Encrypt for several days).
```bash
# in folder deployment/digital-ocean/https/
$ kubectl apply -f ingress.yaml
```
Take a minute and check whether the certificate has now been regenerated by `letsencrypt-prod`, the cluster-issuer for production:
```bash
$ kubectl -n ocelot-social describe certificate tls
<
...
Spec:
...
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-prod
...
Events:
<no errors>
>
$ kubectl -n ocelot-social describe secret tls
<
...
Annotations: ...
cert-manager.io/issuer-kind: ClusterIssuer
cert-manager.io/issuer-name: letsencrypt-prod
...
>
```
In case the certificate is not regenerated, delete the former secret to force a refresh:
```bash
$ kubectl -n ocelot-social delete secret tls
```
Now, HTTPS should be configured on your domain. Congrats!
For troubleshooting, have a look at cert-manager's [Troubleshooting](https://cert-manager.io/docs/faq/troubleshooting/) or [Troubleshooting Issuing ACME Certificates](https://cert-manager.io/docs/faq/acme/) guides.
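If certificates get stuck, these commands are often useful for digging deeper (they assume the cert-manager CRDs are installed; the resource names follow cert-manager's documentation):
```bash
$ kubectl get clusterissuer
$ kubectl -n ocelot-social get certificaterequests,orders,challenges
```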

Binary file not shown. (deleted image, 141 KiB)

View File

@ -1,85 +0,0 @@
# Legacy data migration
This setup is **completely optional** and only required if you have data on a
server which is running our legacy code and you want to import that data. It
will import the uploads folder and migrate a dump of the legacy Mongo database
into our new Neo4J graph database.
## Configure Maintenance-Worker Pod
Create a configmap with the specific connection data of your legacy server:
```bash
$ kubectl create configmap maintenance-worker \
-n ocelot-social \
--from-literal=SSH_USERNAME=someuser \
--from-literal=SSH_HOST=yourhost \
--from-literal=MONGODB_USERNAME=hc-api \
--from-literal=MONGODB_PASSWORD=secretpassword \
--from-literal=MONGODB_AUTH_DB=hc_api \
--from-literal=MONGODB_DATABASE=hc_api \
--from-literal=UPLOADS_DIRECTORY=/var/www/api/uploads
```
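You can double-check that the values landed as intended; note that this prints the literals in plain text:
```bash
$ kubectl -n ocelot-social get configmap maintenance-worker -o yaml
```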
Create a secret with your public and private SSH keys. As the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys) points out, you should be careful with your SSH keys: anyone with access to your cluster will also have access to them. It is better to create a new pair with `ssh-keygen` and copy the public key to your legacy server with `ssh-copy-id`:
```bash
$ kubectl create secret generic ssh-keys \
-n ocelot-social \
--from-file=id_rsa=/path/to/.ssh/id_rsa \
--from-file=id_rsa.pub=/path/to/.ssh/id_rsa.pub \
--from-file=known_hosts=/path/to/.ssh/known_hosts
```
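In case you need it, a minimal sketch of generating the dedicated key pair mentioned above and copying it to the legacy server (the file name is an assumption; user and host match the configmap example):
```bash
$ ssh-keygen -t rsa -f ./migration_rsa -N ''
$ ssh-copy-id -i ./migration_rsa.pub someuser@yourhost
```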
## Deploy a Temporary Maintenance-Worker Pod
Bring the application into maintenance mode.
{% hint style="info" %} TODO: implement maintenance mode {% endhint %}
Then temporarily delete the backend and database deployments:
```bash
$ kubectl -n ocelot-social get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
backend 1/1 1 1 3d11h
neo4j 1/1 1 1 3d11h
webapp 2/2 2 2 73d
$ kubectl -n ocelot-social delete deployment neo4j
deployment.extensions "neo4j" deleted
$ kubectl -n ocelot-social delete deployment backend
deployment.extensions "backend" deleted
```
Deploy the one-time develop-maintenance-worker pod:
```bash
# in deployment/legacy-migration/
$ kubectl apply -f maintenance-worker.yaml
pod/develop-maintenance-worker created
```
Import legacy database and uploads:
```bash
$ kubectl -n ocelot-social exec -it develop-maintenance-worker bash
$ import_legacy_db
$ import_legacy_uploads
$ exit
```
Delete the pod when you're done:
```bash
$ kubectl -n ocelot-social delete pod develop-maintenance-worker
```
Oh, and of course you have to get those deleted deployments back. One way of
doing it would be:
```bash
# in folder deployment/
$ kubectl apply -f human-connection/deployment-backend.yaml -f human-connection/deployment-neo4j.yaml
```
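Afterwards, verify that everything is back:
```bash
$ kubectl -n ocelot-social get deployments
```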

View File

@ -1,40 +0,0 @@
---
kind: Pod
apiVersion: v1
metadata:
name: develop-maintenance-worker
namespace: ocelot-social
spec:
containers:
- name: develop-maintenance-worker
image: ocelotsocialnetwork/develop-maintenance-worker:latest
imagePullPolicy: Always
resources:
requests:
memory: "2G"
limits:
memory: "8G"
envFrom:
- configMapRef:
name: maintenance-worker
- configMapRef:
name: configmap
volumeMounts:
- name: secret-volume
readOnly: false
mountPath: /root/.ssh
- name: uploads
mountPath: /uploads
- name: neo4j-data
mountPath: /data/
volumes:
- name: secret-volume
secret:
secretName: ssh-keys
defaultMode: 0400
- name: uploads
persistentVolumeClaim:
claimName: uploads-claim
- name: neo4j-data
persistentVolumeClaim:
claimName: neo4j-data-claim

View File

@ -1,2 +0,0 @@
.ssh/
ssh/

View File

@ -1,21 +0,0 @@
FROM ocelotsocialnetwork/develop-neo4j:latest
ENV NODE_ENV=maintenance
EXPOSE 7687 7474
ENV BUILD_DEPS="gettext" \
RUNTIME_DEPS="libintl"
RUN set -x && \
apk add --update $RUNTIME_DEPS && \
apk add --virtual build_deps $BUILD_DEPS && \
cp /usr/bin/envsubst /usr/local/bin/envsubst && \
apk del build_deps
RUN apk upgrade --update
RUN apk add --no-cache mongodb-tools openssh nodejs yarn rsync
COPY known_hosts /root/.ssh/known_hosts
COPY migration /migration
COPY ./binaries/* /usr/local/bin/

View File

@ -1,6 +0,0 @@
# SSH Access
# SSH_USERNAME='username'
# SSH_HOST='example.org'
# UPLOADS_DIRECTORY=/var/www/api/uploads
OUTPUT_DIRECTORY='/uploads/'

View File

@ -1,2 +0,0 @@
#!/usr/bin/env bash
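# Keep the container alive so that we can exec into the pod.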
tail -f /dev/null

View File

@ -1,12 +0,0 @@
#!/usr/bin/env bash
set -e
for var in "SSH_USERNAME" "SSH_HOST" "MONGODB_USERNAME" "MONGODB_PASSWORD" "MONGODB_DATABASE" "MONGODB_AUTH_DB"
do
if [[ -z "${!var}" ]]; then
echo "${var} is undefined"
exit 1
fi
done
/migration/mongo/export.sh
/migration/neo4j/import.sh

View File

@ -1,17 +0,0 @@
#!/usr/bin/env bash
set -e
# import .env config
set -o allexport
source $(dirname "$0")/.env
set +o allexport
for var in "SSH_USERNAME" "SSH_HOST" "UPLOADS_DIRECTORY"
do
if [[ -z "${!var}" ]]; then
echo "${var} is undefined"
exit 1
fi
done
rsync --archive --update --verbose ${SSH_USERNAME}@${SSH_HOST}:${UPLOADS_DIRECTORY}/ ${OUTPUT_DIRECTORY}

View File

@ -1,3 +0,0 @@
|1|GuOYlVEhTowidPs18zj9p5F2j3o=|sDHJYLz9Ftv11oXeGEjs7SpVyg0= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM5N29bI5CeKu1/RBPyM2fwyf7fuajOO+tyhKe1+CC2sZ1XNB5Ff6t6MtCLNRv2mUuvzTbW/HkisDiA5tuXUHOk=
|1|2KP9NV+Q5g2MrtjAeFSVcs8YeOI=|nf3h4wWVwC4xbBS1kzgzE2tBldk= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNhRK6BeIEUxXlS0z/pOfkUkSPfn33g4J1U3L+MyUQYHm+7agT08799ANJhnvELKE1tt4Vx80I9UR81oxzZcy3E=
|1|HonYIRNhKyroUHPKU1HSZw0+Qzs=|5T1btfwFBz2vNSldhqAIfTbfIgQ= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNhRK6BeIEUxXlS0z/pOfkUkSPfn33g4J1U3L+MyUQYHm+7agT08799ANJhnvELKE1tt4Vx80I9UR81oxzZcy3E=

View File

@ -1,17 +0,0 @@
# SSH Access
# SSH_USERNAME='username'
# SSH_HOST='example.org'
# Mongo DB on Remote Machine
# MONGODB_USERNAME='mongouser'
# MONGODB_PASSWORD='mongopassword'
# MONGODB_DATABASE='mongodatabase'
# MONGODB_AUTH_DB='admin'
# Export Settings
# On Windows this resolves to C:\Users\dornhoeschen\AppData\Local\Temp\mongo-export (MinGW)
EXPORT_PATH='/tmp/mongo-export/'
EXPORT_MONGOEXPORT_BIN='mongoexport'
MONGO_EXPORT_SPLIT_SIZE=6000
# On Windows use something like this
# EXPORT_MONGOEXPORT_BIN='C:\Program Files\MongoDB\Server\3.6\bin\mongoexport.exe'

View File

@ -1,53 +0,0 @@
#!/usr/bin/env bash
set -e
# import .env config
set -o allexport
source $(dirname "$0")/.env
set +o allexport
# Export collection function definition
function export_collection () {
"${EXPORT_MONGOEXPORT_BIN}" --db ${MONGODB_DATABASE} --host localhost -d ${MONGODB_DATABASE} --port 27018 --username ${MONGODB_USERNAME} --password ${MONGODB_PASSWORD} --authenticationDatabase ${MONGODB_AUTH_DB} --collection $1 --out "${EXPORT_PATH}$1.json"
mkdir -p ${EXPORT_PATH}splits/$1/
split -l ${MONGO_EXPORT_SPLIT_SIZE} -a 3 ${EXPORT_PATH}$1.json ${EXPORT_PATH}splits/$1/
}
# Export collection with query function definition
function export_collection_query () {
"${EXPORT_MONGOEXPORT_BIN}" --db ${MONGODB_DATABASE} --host localhost -d ${MONGODB_DATABASE} --port 27018 --username ${MONGODB_USERNAME} --password ${MONGODB_PASSWORD} --authenticationDatabase ${MONGODB_AUTH_DB} --collection $1 --out "${EXPORT_PATH}$1_$3.json" --query "$2"
mkdir -p ${EXPORT_PATH}splits/$1_$3/
split -l ${MONGO_EXPORT_SPLIT_SIZE} -a 3 ${EXPORT_PATH}$1_$3.json ${EXPORT_PATH}splits/$1_$3/
}
# Delete old export & ensure directory
rm -rf ${EXPORT_PATH}*
mkdir -p ${EXPORT_PATH}
# Open SSH Tunnel
ssh -4 -M -S my-ctrl-socket -fnNT -L 27018:localhost:27017 -l ${SSH_USERNAME} ${SSH_HOST}
# Export all Data from the Alpha to json and split them up
export_collection "badges"
export_collection "categories"
export_collection "comments"
export_collection_query "contributions" '{"type": "DELETED"}' "DELETED"
export_collection_query "contributions" '{"type": "post"}' "post"
# export_collection_query "contributions" '{"type": "cando"}' "cando"
export_collection "emotions"
# export_collection_query "follows" '{"foreignService": "organizations"}' "organizations"
export_collection_query "follows" '{"foreignService": "users"}' "users"
# export_collection "invites"
# export_collection "organizations"
# export_collection "pages"
# export_collection "projects"
# export_collection "settings"
export_collection "shouts"
# export_collection "status"
export_collection_query "users" '{"isVerified": true }' "verified"
# export_collection "userscandos"
# export_collection "usersettings"
# Close SSH Tunnel
ssh -S my-ctrl-socket -O check -l ${SSH_USERNAME} ${SSH_HOST}
ssh -S my-ctrl-socket -O exit -l ${SSH_USERNAME} ${SSH_HOST}

View File

@ -1,16 +0,0 @@
# Neo4J Settings
# NEO4J_USERNAME='neo4j'
# NEO4J_PASSWORD='letmein'
# Import Settings
# On Windows this resolves to C:\Users\dornhoeschen\AppData\Local\Temp\mongo-export (MinGW)
IMPORT_PATH='/tmp/mongo-export/'
IMPORT_CHUNK_PATH='/tmp/mongo-export/splits/'
IMPORT_CHUNK_PATH_CQL='/tmp/mongo-export/splits/'
# On Windows this path needs to be windows style since the cypher-shell runs native - note the forward slash
# IMPORT_CHUNK_PATH_CQL='C:/Users/dornhoeschen/AppData/Local/Temp/mongo-export/splits/'
IMPORT_CYPHERSHELL_BIN='cypher-shell'
# On Windows use something like this
# IMPORT_CYPHERSHELL_BIN='C:\Program Files\neo4j-community\bin\cypher-shell.bat'

View File

@ -1,52 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[?] image: {
[?] path: { // Path is incorrect in Nitro - is icon the correct name for this field?
[X] type: String,
[X] required: true
},
[ ] alt: { // If we use an image - should we not have an alt?
[ ] type: String,
[ ] required: true
}
},
[?] status: {
[X] type: String,
[X] enum: ['permanent', 'temporary'],
[ ] default: 'permanent', // Default value is missing in Nitro
[X] required: true
},
[?] type: {
[?] type: String, // in nitro this is a defined enum - seems good for now
[X] required: true
},
[X] id: {
[X] type: String,
[X] required: true
},
[?] createdAt: {
[?] type: Date, // Type is modeled as string in Nitro which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
},
[?] updatedAt: {
[?] type: Date, // Type is modeled as string in Nitro which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
}
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as badge
MERGE(b:Badge {id: badge._id["$oid"]})
ON CREATE SET
b.id = badge.key,
b.type = badge.type,
b.icon = replace(badge.image.path, 'https://api-alpha.human-connection.org', ''),
b.status = badge.status,
b.createdAt = badge.createdAt.`$date`,
b.updatedAt = badge.updatedAt.`$date`
;

View File

@ -1 +0,0 @@
MATCH (n:Badge) DETACH DELETE n;

View File

@ -1,129 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[X] title: {
[X] type: String,
[X] required: true
},
[?] slug: {
[X] type: String,
[ ] required: true, // Not required in Nitro
[ ] unique: true // Unique value is not enforced in Nitro?
},
[?] icon: { // Nitro adds required: true
[X] type: String,
[ ] unique: true // Unique value is not enforced in Nitro?
},
[?] createdAt: {
[?] type: Date, // Type is modeled as string in Nitro which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
},
[?] updatedAt: {
[?] type: Date, // Type is modeled as string in Nitro which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
}
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as category
MERGE(c:Category {id: category._id["$oid"]})
ON CREATE SET
c.name = category.title,
c.slug = category.slug,
c.icon = category.icon,
c.createdAt = category.createdAt.`$date`,
c.updatedAt = category.updatedAt.`$date`
;
// Transform icon names
MATCH (c:Category)
WHERE (c.icon = "categories-justforfun")
SET c.icon = 'smile'
SET c.slug = 'just-for-fun'
;
MATCH (c:Category)
WHERE (c.icon = "categories-luck")
SET c.icon = 'heart-o'
SET c.slug = 'happiness-values'
;
MATCH (c:Category)
WHERE (c.icon = "categories-health")
SET c.icon = 'medkit'
;
MATCH (c:Category)
WHERE (c.icon = "categories-environment")
SET c.icon = 'tree'
;
MATCH (c:Category)
WHERE (c.icon = "categories-animal-justice")
SET c.icon = 'paw'
SET c.slug = 'animal-protection'
;
MATCH (c:Category)
WHERE (c.icon = "categories-human-rights")
SET c.icon = 'balance-scale'
SET c.slug = 'human-rights-justice'
;
MATCH (c:Category)
WHERE (c.icon = "categories-education")
SET c.icon = 'graduation-cap'
;
MATCH (c:Category)
WHERE (c.icon = "categories-cooperation")
SET c.icon = 'users'
;
MATCH (c:Category)
WHERE (c.icon = "categories-politics")
SET c.icon = 'university'
;
MATCH (c:Category)
WHERE (c.icon = "categories-economy")
SET c.icon = 'money'
;
MATCH (c:Category)
WHERE (c.icon = "categories-technology")
SET c.icon = 'flash'
;
MATCH (c:Category)
WHERE (c.icon = "categories-internet")
SET c.icon = 'mouse-pointer'
SET c.slug = 'it-internet-data-privacy'
;
MATCH (c:Category)
WHERE (c.icon = "categories-art")
SET c.icon = 'paint-brush'
;
MATCH (c:Category)
WHERE (c.icon = "categories-freedom-of-speech")
SET c.icon = 'bullhorn'
SET c.slug = 'freedom-of-speech'
;
MATCH (c:Category)
WHERE (c.icon = "categories-sustainability")
SET c.icon = 'shopping-cart'
;
MATCH (c:Category)
WHERE (c.icon = "categories-peace")
SET c.icon = 'angellist'
SET c.slug = 'global-peace-nonviolence'
;

View File

@ -1 +0,0 @@
MATCH (n:Category) DETACH DELETE n;

View File

@ -1,67 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[?] userId: {
[X] type: String,
[ ] required: true, // Not required in Nitro
[-] index: true
},
[?] contributionId: {
[X] type: String,
[ ] required: true, // Not required in Nitro
[-] index: true
},
[X] content: {
[X] type: String,
[X] required: true
},
[?] contentExcerpt: { // Generated from content
[X] type: String,
[ ] required: true // Not required in Nitro
},
[ ] hasMore: { type: Boolean },
[ ] upvotes: {
[ ] type: Array,
[ ] default: []
},
[ ] upvoteCount: {
[ ] type: Number,
[ ] default: 0
},
[?] deleted: {
[X] type: Boolean,
[ ] default: false, // Default value is missing in Nitro
[-] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as comment
MERGE (c:Comment {id: comment._id["$oid"]})
ON CREATE SET
c.content = comment.content,
c.contentExcerpt = comment.contentExcerpt,
c.deleted = comment.deleted,
c.createdAt = comment.createdAt.`$date`,
c.updatedAt = comment.updatedAt.`$date`,
c.disabled = false
WITH c, comment, comment.contributionId as postId
MATCH (post:Post {id: postId})
WITH c, post, comment.userId as userId
MATCH (author:User {id: userId})
MERGE (c)-[:COMMENTS]->(post)
MERGE (author)-[:WROTE]->(c)
;

View File

@ -1 +0,0 @@
MATCH (n:Comment) DETACH DELETE n;

View File

@ -1,156 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
[?] { // Modeled incorrectly as Post
[?] userId: {
[X] type: String,
[ ] required: true, // Not required in Nitro
[-] index: true
},
[ ] organizationId: {
[ ] type: String,
[-] index: true
},
[X] categoryIds: {
[X] type: Array,
[-] index: true
},
[X] title: {
[X] type: String,
[X] required: true
},
[?] slug: { // Generated from title
[X] type: String,
[ ] required: true, // Not required in Nitro
[?] unique: true, // Unique value is not enforced in Nitro?
[-] index: true
},
[ ] type: { // db.getCollection('contributions').distinct('type') -> 'DELETED', 'cando', 'post'
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] cando: {
[ ] difficulty: {
[ ] type: String,
[ ] enum: ['easy', 'medium', 'hard']
},
[ ] reasonTitle: { type: String },
[ ] reason: { type: String }
},
[X] content: {
[X] type: String,
[X] required: true
},
[?] contentExcerpt: { // Generated from content
[X] type: String,
[?] required: true // Not required in Nitro
},
[ ] hasMore: { type: Boolean },
[X] teaserImg: { type: String },
[ ] language: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] shoutCount: {
[ ] type: Number,
[ ] default: 0,
[-] index: true
},
[ ] meta: {
[ ] hasVideo: {
[ ] type: Boolean,
[ ] default: false
},
[ ] embedds: {
[ ] type: Object,
[ ] default: {}
}
},
[?] visibility: {
[X] type: String,
[X] enum: ['public', 'friends', 'private'],
[ ] default: 'public', // Default value is missing in Nitro
[-] index: true
},
[?] isEnabled: {
[X] type: Boolean,
[ ] default: true, // Default value is missing in Nitro
[-] index: true
},
[?] tags: { type: Array }, // ensure this is working properly
[ ] emotions: {
[ ] type: Object,
[-] index: true,
[ ] default: {
[ ] angry: {
[ ] count: 0,
[ ] percent: 0
[ ] },
[ ] cry: {
[ ] count: 0,
[ ] percent: 0
[ ] },
[ ] surprised: {
[ ] count: 0,
[ ] percent: 0
},
[ ] happy: {
[ ] count: 0,
[ ] percent: 0
},
[ ] funny: {
[ ] count: 0,
[ ] percent: 0
}
}
},
[?] deleted: { // This field is not always present in the alpha-data
[?] type: Boolean,
[ ] default: false, // Default value is missing in Nitro
[-] index: true
},
[?] createdAt: {
[?] type: Date, // Type is modeled as string in Nitro which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
},
[?] updatedAt: {
[?] type: Date, // Type is modeled as string in Nitro which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as post
MERGE (p:Post {id: post._id["$oid"]})
ON CREATE SET
p.title = post.title,
p.slug = post.slug,
p.image = replace(post.teaserImg, 'https://api-alpha.human-connection.org', ''),
p.content = post.content,
p.contentExcerpt = post.contentExcerpt,
p.visibility = toLower(post.visibility),
p.createdAt = post.createdAt.`$date`,
p.updatedAt = post.updatedAt.`$date`,
p.deleted = COALESCE(post.deleted, false),
p.disabled = COALESCE(NOT post.isEnabled, false)
WITH p, post
MATCH (u:User {id: post.userId})
MERGE (u)-[:WROTE]->(p)
WITH p, post, post.categoryIds as categoryIds
UNWIND categoryIds AS categoryId
MATCH (c:Category {id: categoryId})
MERGE (p)-[:CATEGORIZED]->(c)
WITH p, post.tags AS tags
UNWIND tags AS tag
WITH apoc.text.replace(tag, '[^\\p{L}0-9]', '') as tagNoSpacesAllowed
CALL apoc.when(tagNoSpacesAllowed =~ '^((\\p{L}+[\\p{L}0-9]*)|([0-9]+\\p{L}+[\\p{L}0-9]*))$', 'RETURN tagNoSpacesAllowed', '', {tagNoSpacesAllowed: tagNoSpacesAllowed})
YIELD value as validated
WHERE validated.tagNoSpacesAllowed IS NOT NULL
MERGE (t:Tag { id: validated.tagNoSpacesAllowed, disabled: false, deleted: false })
MERGE (p)-[:TAGGED]->(t)
;

View File

@ -1,2 +0,0 @@
MATCH (n:Post) DETACH DELETE n;
MATCH (n:Tag) DETACH DELETE n;

View File

@ -1 +0,0 @@
MATCH (n) DETACH DELETE n;

View File

@ -1 +0,0 @@
MATCH (u:User)-[e:EMOTED]->(c:Post) DETACH DELETE e;

View File

@ -1,58 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[X] userId: {
[X] type: String,
[X] required: true,
[-] index: true
},
[X] contributionId: {
[X] type: String,
[X] required: true,
[-] index: true
},
[?] rated: {
[X] type: String,
[ ] required: true,
[?] enum: ['funny', 'happy', 'surprised', 'cry', 'angry']
},
[X] createdAt: {
[X] type: Date,
[X] default: Date.now
},
[X] updatedAt: {
[X] type: Date,
[X] default: Date.now
},
[-] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as emotion
MATCH (u:User {id: emotion.userId}),
(c:Post {id: emotion.contributionId})
MERGE (u)-[e:EMOTED {
id: emotion._id["$oid"],
emotion: emotion.rated,
createdAt: datetime(emotion.createdAt.`$date`),
updatedAt: datetime(emotion.updatedAt.`$date`)
}]->(c)
RETURN e;
/*
// Queries
// user sets an emotion emotion:
// MERGE (u)-[e:EMOTED {id: ..., emotion: "funny", createdAt: ..., updatedAt: ...}]->(c)
// user removes emotion
// MATCH (u)-[e:EMOTED]->(c) DELETE e
// contribution distributions over every `emotion` property value for one post
// MATCH (u:User)-[e:EMOTED]->(c:Post {id: "5a70bbc8508f5b000b443b1a"}) RETURN e.emotion,COUNT(e.emotion)
// contribution distributions over every `emotion` property value for one user (advanced - "whats the primary emotion used by the user?")
// MATCH (u:User{id:"5a663b1ac64291000bf302a1"})-[e:EMOTED]->(c:Post) RETURN e.emotion,COUNT(e.emotion)
// contribution distributions over every `emotion` property value for all posts created by one user (advanced - "how do others react to my contributions?")
// MATCH (u:User)-[e:EMOTED]->(c:Post)<-[w:WROTE]-(a:User{id:"5a663b1ac64291000bf302a1"}) RETURN e.emotion,COUNT(e.emotion)
// if we can filter the above on a variable timescale that would be great (should be possible on createdAt and updatedAt fields)
*/

View File

@ -1 +0,0 @@
MATCH (u1:User)-[f:FOLLOWS]->(u2:User) DETACH DELETE f;

View File

@ -1,36 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[?] userId: {
[-] type: String,
[ ] required: true,
[-] index: true
},
[?] foreignId: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[?] foreignService: { // db.getCollection('follows').distinct('foreignService') returns 'organizations' and 'users'
[ ] type: String,
[ ] required: true,
[ ] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
index:
[?] { userId: 1, foreignId: 1, foreignService: 1 },{ unique: true } // is the unique constraint modeled?
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as follow
MATCH (u1:User {id: follow.userId}), (u2:User {id: follow.foreignId})
MERGE (u1)-[:FOLLOWS]->(u2)
;

View File

@ -1,108 +0,0 @@
#!/usr/bin/env bash
set -e
# import .env config
set -o allexport
source $(dirname "$0")/.env
set +o allexport
# Delete collection function definition
function delete_collection () {
# Delete from Database
echo "Delete $2"
"${IMPORT_CYPHERSHELL_BIN}" < $(dirname "$0")/$1/delete.cql > /dev/null
# Delete index file
rm -f "${IMPORT_PATH}splits/$2.index"
}
# Import collection function definition
function import_collection () {
# index file of those chunks we have already imported
INDEX_FILE="${IMPORT_PATH}splits/$1.index"
# load index file
if [ -f "$INDEX_FILE" ]; then
readarray -t IMPORT_INDEX <$INDEX_FILE
else
declare -a IMPORT_INDEX
fi
# for each chunk import data
for chunk in ${IMPORT_PATH}splits/$1/*
do
CHUNK_FILE_NAME=$(basename "${chunk}")
# does the index not contain the chunk file name?
if [[ ! " ${IMPORT_INDEX[@]} " =~ " ${CHUNK_FILE_NAME} " ]]; then
# calculate the path of the chunk
export IMPORT_CHUNK_PATH_CQL_FILE="${IMPORT_CHUNK_PATH_CQL}$1/${CHUNK_FILE_NAME}"
# load the neo4j command and replace file variable with actual path
NEO4J_COMMAND="$(envsubst '${IMPORT_CHUNK_PATH_CQL_FILE}' < $(dirname "$0")/$2)"
# run the import of the chunk
echo "Import $1 ${CHUNK_FILE_NAME} (${chunk})"
echo "${NEO4J_COMMAND}" | "${IMPORT_CYPHERSHELL_BIN}" > /dev/null
# add file to array and file
IMPORT_INDEX+=("${CHUNK_FILE_NAME}")
echo "${CHUNK_FILE_NAME}" >> ${INDEX_FILE}
else
echo "Skipping $1 ${CHUNK_FILE_NAME} (${chunk})"
fi
done
}
# Time variable
SECONDS=0
# Delete all Neo4J Database content
echo "Deleting Database Contents"
delete_collection "badges" "badges"
delete_collection "categories" "categories"
delete_collection "users" "users"
delete_collection "follows" "follows_users"
delete_collection "contributions" "contributions_post"
delete_collection "contributions" "contributions_cando"
delete_collection "shouts" "shouts"
delete_collection "comments" "comments"
delete_collection "emotions" "emotions"
#delete_collection "invites"
#delete_collection "notifications"
#delete_collection "organizations"
#delete_collection "pages"
#delete_collection "projects"
#delete_collection "settings"
#delete_collection "status"
#delete_collection "systemnotifications"
#delete_collection "userscandos"
#delete_collection "usersettings"
echo "DONE"
# Import Data
echo "Start Importing Data"
import_collection "badges" "badges/badges.cql"
import_collection "categories" "categories/categories.cql"
import_collection "users_verified" "users/users.cql"
import_collection "follows_users" "follows/follows.cql"
#import_collection "follows_organizations" "follows/follows.cql"
import_collection "contributions_post" "contributions/contributions.cql"
#import_collection "contributions_cando" "contributions/contributions.cql"
#import_collection "contributions_DELETED" "contributions/contributions.cql"
import_collection "shouts" "shouts/shouts.cql"
import_collection "comments" "comments/comments.cql"
import_collection "emotions" "emotions/emotions.cql"
# import_collection "invites"
# import_collection "notifications"
# import_collection "organizations"
# import_collection "pages"
# import_collection "systemnotifications"
# import_collection "userscandos"
# import_collection "usersettings"
# only contains dummy data
# import_collection "projects"
# only contains alpha-specific data
# import_collection "status"
# import_collection "settings"
echo "DONE"
echo "Time elapsed: $SECONDS seconds"

View File

@ -1,39 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] email: {
[ ] type: String,
[ ] required: true,
[-] index: true,
[ ] unique: true
},
[ ] code: {
[ ] type: String,
[-] index: true,
[ ] required: true
},
[ ] role: {
[ ] type: String,
[ ] enum: ['admin', 'moderator', 'manager', 'editor', 'user'],
[ ] default: 'user'
},
[ ] invitedByUserId: { type: String },
[ ] language: { type: String },
[ ] badgeIds: [],
[ ] wasUsed: {
[ ] type: Boolean,
[-] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as invite;

View File

@ -1,48 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] userId: { // User this notification is sent to
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] type: {
[ ] type: String,
[ ] required: true,
[ ] enum: ['comment','comment-mention','contribution-mention','following-contribution']
},
[ ] relatedUserId: {
[ ] type: String,
[-] index: true
},
[ ] relatedContributionId: {
[ ] type: String,
[-] index: true
},
[ ] relatedOrganizationId: {
[ ] type: String,
[-] index: true
},
[ ] relatedCommentId: {type: String },
[ ] unseen: {
[ ] type: Boolean,
[ ] default: true,
[-] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as notification;

View File

@ -1,137 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] name: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] slug: {
[ ] type: String,
[ ] required: true,
[ ] unique: true,
[-] index: true
},
[ ] followersCounts: {
[ ] users: {
[ ] type: Number,
[ ] default: 0
},
[ ] organizations: {
[ ] type: Number,
[ ] default: 0
},
[ ] projects: {
[ ] type: Number,
[ ] default: 0
}
},
[ ] followingCounts: {
[ ] users: {
[ ] type: Number,
[ ] default: 0
},
[ ] organizations: {
[ ] type: Number,
[ ] default: 0
},
[ ] projects: {
[ ] type: Number,
[ ] default: 0
}
},
[ ] categoryIds: {
[ ] type: Array,
[ ] required: true,
[-] index: true
},
[ ] logo: { type: String },
[ ] coverImg: { type: String },
[ ] userId: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] description: {
[ ] type: String,
[ ] required: true
},
[ ] descriptionExcerpt: { type: String }, // will be generated automatically
[ ] publicEmail: { type: String },
[ ] url: { type: String },
[ ] type: {
[ ] type: String,
[-] index: true,
[ ] enum: ['ngo', 'npo', 'goodpurpose', 'ev', 'eva']
},
[ ] language: {
[ ] type: String,
[ ] required: true,
[ ] default: 'de',
[-] index: true
},
[ ] addresses: {
[ ] type: [{
[ ] street: {
[ ] type: String,
[ ] required: true
},
[ ] zipCode: {
[ ] type: String,
[ ] required: true
},
[ ] city: {
[ ] type: String,
[ ] required: true
},
[ ] country: {
[ ] type: String,
[ ] required: true
},
[ ] lat: {
[ ] type: Number,
[ ] required: true
},
[ ] lng: {
[ ] type: Number,
[ ] required: true
}
}],
[ ] default: []
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] isEnabled: {
[ ] type: Boolean,
[ ] default: false,
[-] index: true
},
[ ] reviewedBy: {
[ ] type: String,
[ ] default: null,
[-] index: true
},
[ ] tags: {
[ ] type: Array,
[-] index: true
},
[ ] deleted: {
[ ] type: Boolean,
[ ] default: false,
[-] index: true
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as organisation;

View File

@ -1,55 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] title: {
[ ] type: String,
[ ] required: true
},
[ ] slug: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] type: {
[ ] type: String,
[ ] required: true,
[ ] default: 'page'
},
[ ] key: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] content: {
[ ] type: String,
[ ] required: true
},
[ ] language: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] active: {
[ ] type: Boolean,
[ ] default: true,
[-] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
index:
[ ] { slug: 1, language: 1 },{ unique: true }
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as page;

View File

@ -1,44 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] name: {
[ ] type: String,
[ ] required: true
},
[ ] slug: { type: String },
[ ] followerIds: [],
[ ] categoryIds: { type: Array },
[ ] logo: { type: String },
[ ] userId: {
[ ] type: String,
[ ] required: true
},
[ ] description: {
[ ] type: String,
[ ] required: true
},
[ ] content: {
[ ] type: String,
[ ] required: true
},
[ ] addresses: {
[ ] type: Array,
[ ] default: []
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as project;

View File

@ -1,36 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] key: {
[ ] type: String,
[ ] default: 'system',
[-] index: true,
[ ] unique: true
},
[ ] invites: {
[ ] userCanInvite: {
[ ] type: Boolean,
[ ] required: true,
[ ] default: false
},
[ ] maxInvitesByUser: {
[ ] type: Number,
[ ] required: true,
[ ] default: 1
},
[ ] onlyUserWithBadgesCanInvite: {
[ ] type: Array,
[ ] default: []
}
},
[ ] maintenance: false
}, {
[ ] timestamps: true
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as setting;

View File

@ -1 +0,0 @@
// this is just a relation between users and contributions - no need to delete

View File

@ -1,36 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[?] userId: {
[X] type: String,
[ ] required: true, // Not required in Nitro
[-] index: true
},
[?] foreignId: {
[X] type: String,
[ ] required: true, // Not required in Nitro
[-] index: true
},
[?] foreignService: { // db.getCollection('shouts').distinct('foreignService') returns 'contributions'
[X] type: String,
[ ] required: true, // Not required in Nitro
[-] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
index:
[?] { userId: 1, foreignId: 1 },{ unique: true } // is the unique constraint modeled?
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as shout
MATCH (u:User {id: shout.userId}), (p:Post {id: shout.foreignId})
MERGE (u)-[:SHOUTED]->(p)
;

View File

@ -1,19 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] maintenance: {
[ ] type: Boolean,
[ ] default: false
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
}
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as status;

View File

@ -1,61 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] type: {
[ ] type: String,
[ ] default: 'info',
[ ] required: true,
[-] index: true
},
[ ] title: {
[ ] type: String,
[ ] required: true
},
[ ] content: {
[ ] type: String,
[ ] required: true
},
[ ] slot: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] language: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] permanent: {
[ ] type: Boolean,
[ ] default: false
},
[ ] requireConfirmation: {
[ ] type: Boolean,
[ ] default: false
},
[ ] active: {
[ ] type: Boolean,
[ ] default: true,
[-] index: true
},
[ ] totalCount: {
[ ] type: Number,
[ ] default: 0
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as systemnotification;

View File

@ -1,2 +0,0 @@
MATCH (n:User) DETACH DELETE n;
MATCH (e:EmailAddress) DETACH DELETE e;

View File

@ -1,124 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[?] email: {
[X] type: String,
[-] index: true,
[X] required: true,
[?] unique: true // unique constraint missing in Nitro
},
[?] password: { // Not required in Alpha -> verify if always present
[X] type: String
},
[X] name: { type: String },
[X] slug: {
[X] type: String,
[-] index: true
},
[ ] gender: { type: String },
[ ] followersCounts: {
[ ] users: {
[ ] type: Number,
[ ] default: 0
},
[ ] organizations: {
[ ] type: Number,
[ ] default: 0
},
[ ] projects: {
[ ] type: Number,
[ ] default: 0
}
},
[ ] followingCounts: {
[ ] users: {
[ ] type: Number,
[ ] default: 0
},
[ ] organizations: {
[ ] type: Number,
[ ] default: 0
},
[ ] projects: {
[ ] type: Number,
[ ] default: 0
}
},
[ ] timezone: { type: String },
[X] avatar: { type: String },
[X] coverImg: { type: String },
[ ] doiToken: { type: String },
[ ] confirmedAt: { type: Date },
[?] badgeIds: [], // Verify this is working properly
[?] deletedAt: { type: Date }, // The Date of deletion is not saved in Nitro
[?] createdAt: {
[?] type: Date, // Modeled as String in Nitro
[ ] default: Date.now // Default value is missing in Nitro
},
[?] updatedAt: {
[?] type: Date, // Modeled as String in Nitro
[ ] default: Date.now // Default value is missing in Nitro
},
[ ] lastActiveAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] isVerified: { type: Boolean },
[?] role: {
[X] type: String,
[-] index: true,
[?] enum: ['admin', 'moderator', 'manager', 'editor', 'user'], // missing roles manager & editor in Nitro
[ ] default: 'user' // Default value is missing in Nitro
},
[ ] verifyToken: { type: String },
[ ] verifyShortToken: { type: String },
[ ] verifyExpires: { type: Date },
[ ] verifyChanges: { type: Object },
[ ] resetToken: { type: String },
[ ] resetShortToken: { type: String },
[ ] resetExpires: { type: Date },
[X] wasSeeded: { type: Boolean },
[X] wasInvited: { type: Boolean },
[ ] language: {
[ ] type: String,
[ ] default: 'en'
},
[ ] termsAndConditionsAccepted: { type: Date }, // we display the terms and conditions on registration
[ ] systemNotificationsSeen: {
[ ] type: Array,
[ ] default: []
}
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as user
MERGE(u:User {id: user._id["$oid"]})
ON CREATE SET
u.name = user.name,
u.slug = COALESCE(user.slug, apoc.text.random(20, "[A-Za-z]")),
u.email = user.email,
u.encryptedPassword = user.password,
u.avatar = replace(user.avatar, 'https://api-alpha.human-connection.org', ''),
u.coverImg = replace(user.coverImg, 'https://api-alpha.human-connection.org', ''),
u.wasInvited = user.wasInvited,
u.wasSeeded = user.wasSeeded,
u.role = toLower(user.role),
u.createdAt = user.createdAt.`$date`,
u.updatedAt = user.updatedAt.`$date`,
u.deleted = user.deletedAt IS NOT NULL,
u.disabled = false
MERGE (e:EmailAddress {
email: user.email,
createdAt: toString(datetime()),
verifiedAt: toString(datetime())
})
MERGE (e)-[:BELONGS_TO]->(u)
MERGE (u)-[:PRIMARY_EMAIL]->(e)
WITH u, user, user.badgeIds AS badgeIds
UNWIND badgeIds AS badgeId
MATCH (b:Badge {id: badgeId})
MERGE (b)-[:REWARDED]->(u)
;

View File

@ -1,35 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] userId: {
[ ] type: String,
[ ] required: true
},
[ ] contributionId: {
[ ] type: String,
[ ] required: true
},
[ ] done: {
[ ] type: Boolean,
[ ] default: false
},
[ ] doneAt: { type: Date },
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
index:
[ ] { userId: 1, contributionId: 1 },{ unique: true }
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as usercando;

View File

@ -1,43 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] userId: {
[ ] type: String,
[ ] required: true,
[ ] unique: true
},
[ ] blacklist: {
[ ] type: Array,
[ ] default: []
},
[ ] uiLanguage: {
[ ] type: String,
[ ] required: true
},
[ ] contentLanguages: {
[ ] type: Array,
[ ] default: []
},
[ ] filter: {
[ ] categoryIds: {
[ ] type: Array,
[ ] index: true
},
[ ] emotions: {
[ ] type: Array,
[ ] index: true
}
},
[ ] hideUsersWithoutTermsOfUseSigniture: {type: Boolean},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
}
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as usersetting;

View File

@ -1,40 +0,0 @@
{{- if .Values.developmentMailserverDomain }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}-mailserver
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/name: ocelot-social
app.kubernetes.io/version: {{ .Chart.AppVersion }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
replicas: 1
minReadySeconds: 15
progressDeadlineSeconds: 60
selector:
matchLabels:
ocelot.social/selector: deployment-mailserver
template:
metadata:
labels:
ocelot.social/selector: deployment-mailserver
name: mailserver
spec:
containers:
- name: mailserver
image: djfarrelly/maildev
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: 80
- containerPort: 25
envFrom:
- configMapRef:
name: {{ .Release.Name }}-configmap
- secretRef:
name: {{ .Release.Name }}-secrets
restartPolicy: Always
terminationGracePeriodSeconds: 30
status: {}
{{- end}}

View File

@ -1,18 +0,0 @@
# Development Mail Server
You can deploy a fake SMTP server which captures all sent mails and displays
them in a web interface. The [sample configuration](../templates/configmap.template.yaml)
assumes such a dummy server in the `SMTP_HOST` configuration and points to
a cluster-internal SMTP server.
To deploy the SMTP server just uncomment the relevant code in the
[ingress server configuration](../../https/templates/ingress.template.yaml) and
run the following:
```bash
# in folder deployment/ocelot-social
$ kubectl apply -f mailserver/
```
You might need to refresh the TLS secret to enable HTTPS on the publicly
available web interface.
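One way to force that refresh, mirroring the HTTPS guide (the namespace and secret name are assumptions based on the rest of this documentation):
```bash
$ kubectl -n ocelot-social delete secret tls
```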

View File

@ -1,22 +0,0 @@
{{- if .Values.developmentMailserverDomain }}
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-mailserver
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/name: ocelot-social
app.kubernetes.io/version: {{ .Chart.AppVersion }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
ports:
- name: web
port: 80
targetPort: 80
- name: smtp
port: 25
targetPort: 25
selector:
ocelot.social/selector: deployment-mailserver
{{- end}}

View File

@ -1,42 +0,0 @@
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: ingress-{{ .Release.Name }}-webapp
labels:
app.kubernetes.io/name: "{{ .Chart.Name }}"
app.kubernetes.io/instance: "{{ .Release.Name }}"
app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
app.kubernetes.io/component: "ingress webapp"
app.kubernetes.io/part-of: "{{ .Chart.Name }}"
app.kubernetes.io/managed-by: "{{ .Release.Service }}"
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: {{ .Values.LETSENCRYPT.ISSUER }}
nginx.ingress.kubernetes.io/proxy-body-size: {{ .Values.NGINX.PROXY_BODY_SIZE }}
spec:
tls:
- hosts:
- {{ .Values.LETSENCRYPT.DOMAIN }}
secretName: tls
rules:
- host: {{ .Values.LETSENCRYPT.DOMAIN }}
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: {{ .Release.Name }}-webapp
port:
number: 3000
#{{- if .Values.developmentMailserverDomain }}
# - host: {{ .Values.developmentMailserverDomain }}
# http:
# paths:
# - path: /
# backend:
# serviceName: {{ .Release.Name }}-mailserver
# servicePort: 80
#{{- end }}

Some files were not shown because too many files have changed in this diff.