removed all deployment scripts (moved to rebranding repo)

Ulf Gebhardt 2021-02-25 19:06:34 +01:00
parent 8cc490f1db
commit 208b893bef
No known key found for this signature in database
GPG Key ID: 81308EFE29ABFEBD
121 changed files with 0 additions and 3721 deletions

View File

@ -1,6 +0,0 @@
secrets.yaml
configmap.yaml
**/secrets.yaml
**/configmap.yaml
**/staging-values.yaml
**/production-values.yaml

View File

@ -1,62 +0,0 @@
# ocelot.social | Deployment Configuration
We have tested a couple of different ways to deploy an instance of ocelot.social, with [Kubernetes](https://kubernetes.io/) and via [Helm](https://helm.sh/docs/). In order to manage your own
network, you have to [install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/), [install Helm](https://helm.sh/docs/intro/install/) (optional, but the preferred way),
and set up a Kubernetes cluster. Since there are many different options for hosting your cluster, we won't go into specifics here.
We have tested two different Kubernetes providers: [Minikube](./minikube/README.md)
and [Digital Ocean](./digital-ocean/README.md).
Check out the specific documentation for your provider. After that, choose whether you want to go with the recommended deploy option, [Helm](./helm/README.md), or use Kubernetes directly to apply the configuration for [ocelot.social](./ocelot-social/README.md).
## Initialise Database For Production After Deployment
After the first deployment of the new network on your server, the database must be initialized to start your network. This involves setting up a default administrator with the following data:
- E-mail: admin@example.org
- Password: 1234
{% hint style="danger" %}
TODO: When you log in for the first time, change your (the admin's) e-mail address to an existing one and change your password to a secure one!
{% endhint %}
Run the following command in the Docker container of one of the backends:
{% tabs %}
{% tab title="Kubernetes For Docker" %}
```bash
# with an explicit backend pod name
$ kubectl -n ocelot-social exec -it <backend-name> -- yarn prod:migrate init
# or
# if you have only one backend, grep it
$ kubectl -n ocelot-social exec -it $(kubectl -n ocelot-social get pods | grep backend | awk '{ print $1 }') -- yarn prod:migrate init
# or
# open a shell in your backend and run the command there
$ kubectl -n ocelot-social exec -it $(kubectl -n ocelot-social get pods | grep backend | awk '{ print $1 }') -- sh
backend: $ yarn prod:migrate init
backend: $ exit
```
{% endtab %}
{% tab title="Docker-Compose Running Local" %}
```bash
# exec in backend
$ docker-compose exec backend yarn run db:migrate init
```
{% endtab %}
{% tab title="Running Local" %}
```bash
# exec in folder backend/
$ yarn run db:migrate init
```
{% endtab %}
{% endtabs %}

View File

@ -1,39 +0,0 @@
# Digital Ocean
As a start, read the [introduction into Kubernetes](https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes) by the folks at Digital Ocean. The following section should enable you to deploy ocelot.social to your Kubernetes cluster.
## Connect to your local cluster
1. Create a cluster at [Digital Ocean](https://www.digitalocean.com/).
2. Download the `***-kubeconfig.yaml` from the Web UI.
3. Move the file to the default location where kubectl expects it to be: `mv ***-kubeconfig.yaml ~/.kube/config`. Alternatively you can set the config on every command: `--kubeconfig ***-kubeconfig.yaml`
4. Now check that you can connect to the cluster and that it's your newly created one by running: `kubectl get nodes`
The output should look about like this:
```sh
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
nifty-driscoll-uu1w Ready <none> 69d v1.13.2
nifty-driscoll-uuiw Ready <none> 69d v1.13.2
nifty-driscoll-uusn Ready <none> 69d v1.13.2
```
If you got the steps above right and can see your nodes, you can continue.
Digital Ocean Kubernetes clusters don't have a graphical interface, so we suggest
setting up the [Kubernetes dashboard](./dashboard/README.md) as a next step.
Configuring [HTTPS](./https/README.md) is a bit tricky, so we suggest
doing that as a last step.
## Spaces
We are storing our images in the s3-compatible [DigitalOcean Spaces](https://www.digitalocean.com/docs/spaces/).
We still want to take backups of our images in case something happens to them in the cloud. See these [instructions](https://www.digitalocean.com/docs/spaces/resources/s3cmd-usage/) about getting set up with `s3cmd` to take a copy of all images in a `Spaces` namespace, i.e. `ocelot-social-uploads`.
After configuring `s3cmd` with your credentials, you should be able to make a backup with this command:
```sh
s3cmd get --recursive --skip-existing s3://ocelot-social-uploads
```
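If you have not configured `s3cmd` yet, it ships with an interactive setup that writes `~/.s3cfg`; you will need your Spaces access key, secret, and region endpoint at hand:
```sh
s3cmd --configure
```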

View File

@ -1,55 +0,0 @@
# Install Kubernetes Dashboard
The Kubernetes dashboard is optional but very helpful for debugging. If you want to install it, you have to do so only **once** per cluster:
```bash
# in folder deployment/digital-ocean/
$ kubectl apply -f dashboard/
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
```
### Login to your dashboard
Proxy the remote kubernetes dashboard to localhost:
```bash
$ kubectl proxy
```
Visit:
[http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/](http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/)
You should see a login screen.
To get your token for the dashboard you can run this command:
```bash
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
```
It should print something like:
```text
Name: admin-user-token-6gl6l
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=admin-user
kubernetes.io/service-account.uid=b16afba9-dfec-11e7-bbb9-901b0e532516
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZnbDZsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiMTZhZmJhOS1kZmVjLTExZTctYmJiOS05MDFiMGU1MzI1MTYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.M70CU3lbu3PP4OjhFms8PVL5pQKj-jj4RNSLA4YmQfTXpPUuxqXjiTf094_Rzr0fgN_IVX6gC4fiNUL5ynx9KU-lkPfk0HnX8scxfJNzypL039mpGt0bbe1IXKSIRaq_9VW59Xz-yBUhycYcKPO9RM2Qa1Ax29nqNVko4vLn1_1wPqJ6XSq3GYI8anTzV8Fku4jasUwjrws6Cn6_sPEGmL54sq5R4Z5afUtv-mItTmqZZdxnkRqcJLlg2Y8WbCPogErbsaCDJoABQ7ppaqHetwfM_0yMun6ABOQbIwwl8pspJhpplKwyo700OSpvTT9zlBsu-b35lzXGBRHzv5g_RA
```
Grab the token from above and paste it into the [login screen](http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/).
When you are logged in, you should see something like:
![Dashboard](./dashboard-screenshot.png)
Feel free to save the login token from above in your password manager. Unlike the `kubeconfig` file, this token does not expire.

View File

@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

Binary file not shown.


View File

@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

View File

@ -1,2 +0,0 @@
ingress.yaml
issuer.yaml

View File

@ -1,164 +0,0 @@
# Setup Ingress and HTTPS
{% tabs %}
{% tab title="Helm 3" %}
## Via Helm 3
Follow [this quick start guide](https://cert-manager.io/docs/) and install cert-manager via Helm 3:
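A minimal sketch of the Helm 3 route, assuming cert-manager v1.1.0 to match the manifest version used below (check the quick start for the current version):
```bash
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --version v1.1.0 --set installCRDs=true
```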
## Or Via Kubernetes Directly
```bash
$ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.1.0/cert-manager.yaml
```
{% endtab %}
{% tab title="Helm 2" %}
{% hint style="info" %}
CAUTION: Tiller, which Helm 2 depends on, was [removed](https://helm.sh/docs/faq/#removal-of-tiller) in Helm 3 because of safety issues. So we recommend Helm 3.
{% endhint %}
Follow [this quick start guide](https://docs.cert-manager.io/en/latest/tutorials/acme/quick-start/index.html) and install cert-manager via Helm 2 and Tiller:
[This resource was also helpful](https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html#installing-with-helm)
```bash
$ kubectl create serviceaccount tiller --namespace=kube-system
$ kubectl create clusterrolebinding tiller-admin --serviceaccount=kube-system:tiller --clusterrole=cluster-admin
$ helm init --service-account=tiller
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
$ helm install --name cert-manager --namespace cert-manager --version v0.11.0 jetstack/cert-manager
```
{% endtab %}
{% endtabs %}
## Create Letsencrypt Issuers and Ingress Services
Copy the configuration templates and change the file according to your needs.
```bash
# in folder deployment/digital-ocean/https/
cp templates/issuer.template.yaml ./issuer.yaml
cp templates/ingress.template.yaml ./ingress.yaml
```
At a minimum, **change the email addresses** in `issuer.yaml`. You will most likely also want
to _change the domain name_ in `ingress.yaml`.
Once you are done, apply the configuration:
```bash
# in folder deployment/digital-ocean/https/
$ kubectl apply -f .
```
{% hint style="info" %}
CAUTION: It seems that the behaviour of Digital Ocean has changed and the load balancer is not created automatically anymore.
Creating a load balancer costs money. Please refine the following documentation if required.
{% endhint %}
{% tabs %}
{% tab title="Without Load Balancer" %}
You can find a solution without a load balancer [here](../no-loadbalancer/README.md).
{% endtab %}
{% tab title="With Digital Ocean Load Balancer" %}
{% hint style="info" %}
CAUTION: It seems that the behaviour of Digital Ocean has changed and the load balancer is not created automatically anymore.
Please refine the following documentation if required.
{% endhint %}
By now, your cluster should have a load balancer with an external IP
address assigned. On Digital Ocean, this is how it should look:
![Screenshot of Digital Ocean dashboard showing external ip address](./ip-address.png)
If the load balancer isn't created automatically, you have to create it yourself on Digital Ocean under Networks.
In case you don't need a Digital Ocean load balancer (which costs money, by the way), have a look at the tab `Without Load Balancer`.
{% endtab %}
{% endtabs %}
Check that the ingress server is working correctly:
```bash
$ curl -kivL -H 'Host: <DOMAIN_NAME>' 'https://<IP_ADDRESS>'
<page HTML>
```
If the response looks good, configure your domain registrar for the new IP address and the domain.
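Once the record has propagated, you can verify it with `dig` (a sketch with a hypothetical IP address):
```bash
$ dig +short develop-k8s.ocelot.social
203.0.113.10
```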
Now let's get a valid HTTPS certificate. According to the tutorial above, check your tls certificate for staging:
```bash
$ kubectl -n ocelot-social describe certificate tls
<
...
Spec:
...
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-staging
...
Events:
<no errors>
>
$ kubectl -n ocelot-social describe secret tls
<
...
Annotations: ...
cert-manager.io/issuer-kind: ClusterIssuer
cert-manager.io/issuer-name: letsencrypt-staging
...
>
```
If everything looks good, update the cluster issuer of your ingress: in `ingress.yaml`, change the annotation `cert-manager.io/cluster-issuer` from `letsencrypt-staging` (used for testing with a dummy certificate, so Let's Encrypt won't block you for too many request cycles) to `letsencrypt-prod` (used for production with a real certificate; Let's Encrypt may block you for several days if you go through too many request cycles).
```bash
# in folder deployment/digital-ocean/https/
$ kubectl apply -f ingress.yaml
```
Give it a minute, then check whether the certificate has been newly generated by `letsencrypt-prod`, the cluster issuer for production:
```bash
$ kubectl -n ocelot-social describe certificate tls
<
...
Spec:
...
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-prod
...
Events:
<no errors>
>
$ kubectl -n ocelot-social describe secret tls
<
...
Annotations: ...
cert-manager.io/issuer-kind: ClusterIssuer
cert-manager.io/issuer-name: letsencrypt-prod
...
>
```
If the certificate is not newly created, delete the former secret to force a refresh:
```bash
$ kubectl -n ocelot-social delete secret tls
```
Now, HTTPS should be configured for your domain. Congrats!
For troubleshooting, have a look at cert-manager's [Troubleshooting](https://cert-manager.io/docs/faq/troubleshooting/) or [Troubleshooting Issuing ACME Certificates](https://cert-manager.io/docs/faq/acme/) documentation.
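Beyond those pages, two commands often help to localize a problem (a sketch; it assumes cert-manager runs in the `cert-manager` namespace as installed above):
```bash
# follow the cert-manager controller logs
$ kubectl -n cert-manager logs deploy/cert-manager -f
# inspect pending ACME orders and challenges
$ kubectl -n ocelot-social get orders,challenges
```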

Binary file not shown.


View File

@ -1,6 +0,0 @@
kind: Namespace
apiVersion: v1
metadata:
  name: ocelot-social
  labels:
    name: ocelot-social

View File

@ -1,32 +0,0 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: ocelot-social
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # cert-manager.io/issuer: "letsencrypt-staging" # in case you use issuers instead of cluster-issuers
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
spec:
  rules:
  - host: develop-k8s.ocelot.social
    http:
      paths:
      - backend:
          serviceName: web
          servicePort: 3000
        path: /
  # uncomment if you have installed the mailservice
  # - host: mail.ocelot.social
  #   http:
  #     paths:
  #     - backend:
  #         serviceName: mailserver
  #         servicePort: 80
  #       path: /
  # uncomment to activate SSL via port 443 if you have installed the certificate, probably via cert-manager
  # tls:
  # - hosts:
  #   - develop-k8s.ocelot.social
  #   secretName: tls

View File

@ -1,70 +0,0 @@
---
# used during installation as a first setup for testing purposes; note 'server: https://acme-staging-v02…'
# !!! replace the e-mail for expiring certificates, see below !!!
# !!! create the used secret, see below !!!
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: ocelot-social
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: user@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging-issuer-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx
---
# used after installation for production; note 'server: https://acme-v02…'
# !!! replace the e-mail for expiring certificates, see below !!!
# !!! create the used secret, see below !!!
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: ocelot-social
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: user@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-prod-issuer-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx
---
# fill in your letsencrypt-staging-issuer-account-key
# generate base64: $ echo -n '<your data>' | base64
apiVersion: v1
data:
  tls.key: <your base 64 data>
kind: Secret
metadata:
  name: letsencrypt-staging-issuer-account-key
  namespace: ocelot-social
type: Opaque
---
# fill in your letsencrypt-prod-issuer-account-key
# generate base64: $ echo -n '<your data>' | base64
apiVersion: v1
data:
  tls.key: <your base 64 data>
kind: Secret
metadata:
  name: letsencrypt-prod-issuer-account-key
  namespace: ocelot-social
type: Opaque

View File

@ -1,2 +0,0 @@
mydns.values.yaml
myingress.values.yaml

View File

@ -1,9 +0,0 @@
# Solution Without A Load Balancer
## Expose Port 80 On Digital Ocean's Managed Kubernetes Without A Load Balancer
Follow [this solution](https://stackoverflow.com/questions/54119399/expose-port-80-on-digital-oceans-managed-kubernetes-without-a-load-balancer/55968709) and install a second firewall and nginx, and set up external DNS via Helm 3.
{% hint style="info" %}
CAUTION: Some of the Helm charts are already deprecated, so investigate the appropriate charts and fill in the correct commands here.
{% endhint %}
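As a sketch of how the two values files in this folder might be applied (the chart names and repositories here are assumptions; verify them against the currently maintained charts):
```bash
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
$ helm install my-ingress ingress-nginx/ingress-nginx -f myingress.values.yaml
$ helm install my-dns bitnami/external-dns -f mydns.values.yaml
```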

View File

@ -1,11 +0,0 @@
---
provider: digitalocean
digitalocean:
  # create the API token at https://cloud.digitalocean.com/account/api/tokens
  # needs read + write
  apiToken: "DIGITALOCEAN_API_TOKEN"
domainFilters:
# domains you want external-dns to be able to edit
- example.com
rbac:
  create: true

View File

@ -1,11 +0,0 @@
---
controller:
  kind: DaemonSet
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  daemonset:
    useHostPort: true
  service:
    type: ClusterIP
rbac:
  create: true

View File

@ -1,22 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@ -1,5 +0,0 @@
apiVersion: v1
appVersion: "0.3.1"
description: A Helm chart for ocelot.social
name: ocelot-social
version: 0.1.0

View File

@ -1,72 +0,0 @@
# Helm installation of Human Connection
Deploying Human Connection with Helm is very straightforward. All you have to
do is change certain parameters, like domain names and API keys; then you
just install our provided Helm chart to your cluster.
## Configuration
You can customize the network by changing the configuration in `values.yaml`; all variables will be available as
environment variables in your deployed Kubernetes pods.
You probably want to change this variable to your actual domain:
```yaml
# in folder /deployment/helm
CLIENT_URI: "https://develop-k8s.ocelot.social"
```
If you want to edit secrets, you have to `base64`-encode them. See the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually). You can also use `helm-secrets`, but we have yet to test it.
```bash
# example how to base64 a string:
$ echo -n 'admin' | base64
YWRtaW4=
```
Those secrets get `base64`-decoded and are available as environment variables in
your deployed Kubernetes pods.
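You can verify an encoded value by decoding it again, for example:
```bash
$ echo -n 'YWRtaW4=' | base64 --decode
admin
```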
## HTTPS
If you set up HTTPS when you install the app, it will automatically take care of the certificates for you.
First check that you are using Helm v3; this is important since it removes the need for Tiller. See the [FAQ](https://helm.sh/docs/faq/#removal-of-tiller).
```bash
$ helm version
# output should look similar to this:
#version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
```
Apply the cert-manager CRDs before installing (or the installation will fail):
```bash
$ kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.13.0/deploy/manifests/00-crds.yaml
```
Next, create the `cert-manager` namespace
```bash
$ kubectl create namespace cert-manager
```
Add the `jetstack` repo and update
```bash
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
```
Install cert-manager
```bash
$ helm install cert-manager --namespace cert-manager --version v0.13.0 jetstack/cert-manager
```
## Deploy
Once you are satisfied with the configuration, you can install the app.
```bash
# in folder /deployment/helm/human-connection
$ helm install develop ./ --namespace human-connection
```
Here `develop` is the release name (in this case, for our develop server) and `human-connection` is the namespace; customize both for your needs. The release name can be anything you want; just keep in mind that it is used in the templates to prepend the `CLIENT_URI` and in other places.
This will set up everything you need for the network, including `deployments` and their `pods`, `services`, `ingress`, `volumes` (PersistentVolumes), `PersistentVolumeClaims`, and even `ClusterIssuers` for HTTPS certificates.
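Later, to roll out configuration changes to the same release, a sketch using the same release name and namespace as above:
```bash
# in folder /deployment/helm/human-connection
$ helm upgrade develop ./ --namespace human-connection
```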

View File

@ -1,20 +0,0 @@
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: {{ .Values.supportEmail }}
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx

View File

@ -1,20 +0,0 @@
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: {{ .Values.supportEmail }}
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: nginx

View File

@ -1,58 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-backend
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  replicas: 1
  minReadySeconds: 15
  progressDeadlineSeconds: 60
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: "100%"
  selector:
    matchLabels:
      ocelot.social/selector: deployment-backend
  template:
    metadata:
      name: deployment-backend
      annotations:
        backup.velero.io/backup-volumes: uploads
      labels:
        ocelot.social/commit: {{ .Values.commit }}
        ocelot.social/selector: deployment-backend
    spec:
      containers:
      - name: backend
        image: "{{ .Values.backendImage }}:{{ .Chart.AppVersion }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        envFrom:
        - configMapRef:
            name: {{ .Release.Name }}-configmap
        - secretRef:
            name: {{ .Release.Name }}-secrets
        ports:
        - containerPort: 4000
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /develop-backend/public/uploads
          name: uploads
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: uploads
        persistentVolumeClaim:
          claimName: uploads-claim
status: {}

View File

@ -1,40 +0,0 @@
{{- if .Values.developmentMailserverDomain }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-mailserver
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  replicas: 1
  minReadySeconds: 15
  progressDeadlineSeconds: 60
  selector:
    matchLabels:
      ocelot.social/selector: deployment-mailserver
  template:
    metadata:
      labels:
        ocelot.social/selector: deployment-mailserver
      name: mailserver
    spec:
      containers:
      - name: mailserver
        image: djfarrelly/maildev
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - containerPort: 80
        - containerPort: 25
        envFrom:
        - configMapRef:
            name: {{ .Release.Name }}-configmap
        - secretRef:
            name: {{ .Release.Name }}-secrets
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
status: {}
{{- end}}

View File

@ -1,32 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-maintenance
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  selector:
    matchLabels:
      ocelot.social/selector: deployment-maintenance
  template:
    metadata:
      labels:
        ocelot.social/commit: {{ .Values.commit }}
        ocelot.social/selector: deployment-maintenance
      name: maintenance
    spec:
      containers:
      - name: maintenance
        env:
        - name: HOST
          value: 0.0.0.0
        image: "{{ .Values.maintenanceImage }}:{{ .Chart.AppVersion }}"
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      restartPolicy: Always
      terminationGracePeriodSeconds: 30

View File

@ -1,52 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-neo4j
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: "100%"
  selector:
    matchLabels:
      ocelot.social/selector: deployment-neo4j
  template:
    metadata:
      name: neo4j
      annotations:
        backup.velero.io/backup-volumes: neo4j-data
      labels:
        ocelot.social/commit: {{ .Values.commit }}
        ocelot.social/selector: deployment-neo4j
    spec:
      containers:
      - name: neo4j
        image: "{{ .Values.neo4jImage }}:{{ .Chart.AppVersion }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - containerPort: 7687
        - containerPort: 7474
        resources:
          requests:
            memory: {{ .Values.neo4jResourceRequestsMemory | default "1G" | quote }}
          limits:
            memory: {{ .Values.neo4jResourceLimitsMemory | default "1G" | quote }}
        envFrom:
        - configMapRef:
            name: {{ .Release.Name }}-configmap
        volumeMounts:
        - mountPath: /data/
          name: neo4j-data
      volumes:
      - name: neo4j-data
        persistentVolumeClaim:
          claimName: neo4j-data-claim
      restartPolicy: Always
      terminationGracePeriodSeconds: 30

View File

@ -1,43 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-webapp
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  replicas: 2
  minReadySeconds: 15
  progressDeadlineSeconds: 60
  selector:
    matchLabels:
      ocelot.social/selector: deployment-webapp
  template:
    metadata:
      name: webapp
      labels:
        ocelot.social/commit: {{ .Values.commit }}
        ocelot.social/selector: deployment-webapp
    spec:
      containers:
      - name: webapp
        image: "{{ .Values.webappImage }}:{{ .Chart.AppVersion }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        envFrom:
        - configMapRef:
            name: {{ .Release.Name }}-configmap
        - secretRef:
            name: {{ .Release.Name }}-secrets
        env:
        - name: HOST
          value: 0.0.0.0
        ports:
        - containerPort: 3000
        resources: {}
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
status: {}

View File

@ -1,36 +0,0 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: {{ .Values.letsencryptIssuer }}
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
spec:
  tls:
  - hosts:
    - {{ .Values.domain }}
    secretName: tls
  rules:
  - host: {{ .Values.domain }}
    http:
      paths:
      - path: /
        backend:
          serviceName: {{ .Release.Name }}-webapp
          servicePort: 3000
{{- if .Values.developmentMailserverDomain }}
  - host: {{ .Values.developmentMailserverDomain }}
    http:
      paths:
      - path: /
        backend:
          serviceName: {{ .Release.Name }}-mailserver
          servicePort: 80
{{- end }}

View File

@ -1,29 +0,0 @@
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-db-migrations
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
  annotations:
    "helm.sh/hook": post-upgrade
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
spec:
  template:
    metadata:
      name: {{ .Release.Name }}
    spec:
      restartPolicy: Never
      containers:
      - name: db-migrations-job
        image: "{{ .Values.backendImage }}:latest"
        command: ["/bin/sh", "-c", "{{ .Values.dbMigrations }}"]
        envFrom:
        - configMapRef:
            name: {{ .Release.Name }}-configmap
        - secretRef:
            name: {{ .Release.Name }}-secrets

View File

@ -1,17 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-backend
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  ports:
  - name: graphql
    port: 4000
    targetPort: 4000
  selector:
    ocelot.social/selector: deployment-backend

View File

@ -1,22 +0,0 @@
{{- if .Values.developmentMailserverDomain }}
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-mailserver
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  ports:
  - name: web
    port: 80
    targetPort: 80
  - name: smtp
    port: 25
    targetPort: 25
  selector:
    ocelot.social/selector: deployment-mailserver
{{- end}}

View File

@ -1,17 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-maintenance
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  ports:
  - name: web
    port: 80
    targetPort: 80
  selector:
    ocelot.social/selector: deployment-maintenance

View File

@ -1,20 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-neo4j
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  ports:
  - name: bolt
    port: 7687
    targetPort: 7687
  - name: web
    port: 7474
    targetPort: 7474
  selector:
    ocelot.social/selector: deployment-neo4j

View File

@ -1,18 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-webapp
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  ports:
  - name: {{ .Release.Name }}-webapp
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    ocelot.social/selector: deployment-webapp

View File

@ -1,10 +0,0 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: neo4j-data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.neo4jStorage }}

View File

@ -1,16 +0,0 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: uploads-claim
spec:
  dataSource:
    name: uploads-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: {{ .Values.uploadsStorage }}

View File

@ -1,53 +0,0 @@
# domain is the user-facing domain.
domain: develop-docker.ocelot.social
# commit is the latest github commit deployed.
commit: 889a7cdd24dda04a139b2b77d626e984d6db6781
# dbInitialization runs the database initialization in a post-install hook.
dbInitialization: "yarn prod:migrate init"
# dbMigrations runs the database migrations in a post-upgrade hook.
dbMigrations: "yarn prod:migrate up"
# backendImage is the docker image for the backend deployment.
backendImage: ocelotsocialnetwork/backend
# maintenanceImage is the docker image for the maintenance deployment.
maintenanceImage: ocelotsocialnetwork/maintenance
# neo4jImage is the docker image for the neo4j deployment.
neo4jImage: ocelotsocialnetwork/neo4j
# webappImage is the docker image for the webapp deployment.
webappImage: ocelotsocialnetwork/webapp
# image configures the pullPolicy related to the docker images.
image:
  # pullPolicy indicates when, if ever, pods pull a new image from docker hub.
  pullPolicy: IfNotPresent
# letsencryptIssuer is used by cert-manager to set up certificates with the given provider.
letsencryptIssuer: "letsencrypt-prod"
# neo4jConfig changes or adds to the default neo4j config.
neo4jConfig:
  # acceptLicenseAgreement is used to agree to the license agreement for neo4j's enterprise edition.
  acceptLicenseAgreement: "yes"
  # apocImportFileEnabled enables the import of files to neo4j using the plugin apoc.
  apocImportFileEnabled: "true"
  # dbmsMemoryHeapInitialSize configures the initial heap size. By default, it is calculated based on available system resources. (valid units are `k`, `K`, `m`, `M`, `g`, `G`)
  dbmsMemoryHeapInitialSize: "500M"
  # dbmsMemoryHeapMaxSize configures the maximum heap size. By default, it is calculated based on available system resources. (valid units are `k`, `K`, `m`, `M`, `g`, `G`)
  dbmsMemoryHeapMaxSize: "500M"
  # dbmsMemoryPagecacheSize configures the amount of memory to use for mapping the store files, in bytes (or 'k', 'm', and 'g')
  dbmsMemoryPagecacheSize: "490M"
# neo4jResourceLimitsMemory configures the memory limits available.
neo4jResourceLimitsMemory: "2G"
# neo4jResourceRequestsMemory configures the memory requested.
neo4jResourceRequestsMemory: "1G"
# supportEmail is used for letsencrypt certs.
supportEmail: "devops@ocelot.social"
# smtpHost is the host for the mailserver.
smtpHost: "mail.ocelot.social"
# smtpPort is the port to be used for the mailserver.
smtpPort: "25"
# jwtSecret is used to encode/decode a user's JWT for authentication.
jwtSecret: "Yi8mJjdiNzhCRiZmdi9WZA=="
# privateKeyPassphrase is used for ActivityPub.
privateKeyPassphrase: "YTdkc2Y3OHNhZGc4N2FkODdzZmFnc2FkZzc4"
# mapboxToken is used for the Mapbox API, geolocalization.
mapboxToken: "cGsuZXlKMUlqb2lhSFZ0WVc0dFkyOXVibVZqZEdsdmJpSXNJbUVpT2lKamFqbDBjbkJ1Ykdvd2VUVmxNM1Z3WjJsek5UTnVkM1p0SW4wLktaOEtLOWw3MG9talhiRWtrYkhHc1E="
uploadsStorage: "25Gi"
neo4jStorage: "5Gi"
developmentMailserverDomain: mail.ocelot.social

View File

@ -1,85 +0,0 @@
# Legacy data migration
This setup is **completely optional** and only required if you have data on a
server which is running our legacy code and you want to import that data. It
will import the uploads folder and migrate a dump of the legacy Mongo database
into our new Neo4J graph database.
## Configure Maintenance-Worker Pod
Create a configmap with the specific connection data of your legacy server:
```bash
$ kubectl create configmap maintenance-worker \
-n ocelot-social \
--from-literal=SSH_USERNAME=someuser \
--from-literal=SSH_HOST=yourhost \
--from-literal=MONGODB_USERNAME=hc-api \
--from-literal=MONGODB_PASSWORD=secretpassword \
--from-literal=MONGODB_AUTH_DB=hc_api \
--from-literal=MONGODB_DATABASE=hc_api \
--from-literal=UPLOADS_DIRECTORY=/var/www/api/uploads
```
Create a secret with your public and private SSH keys. As the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys) points out, you should be careful with your SSH keys: anyone with access to your cluster will have access to them. Better to create a new pair with `ssh-keygen` and copy the public key to your legacy server with `ssh-copy-id`:
```bash
$ kubectl create secret generic ssh-keys \
-n ocelot-social \
--from-file=id_rsa=/path/to/.ssh/id_rsa \
--from-file=id_rsa.pub=/path/to/.ssh/id_rsa.pub \
--from-file=known_hosts=/path/to/.ssh/known_hosts
```
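Generating such a dedicated key pair might look like this (a sketch; the file names and host are placeholders):
```bash
$ ssh-keygen -t rsa -b 4096 -f ./migration_id_rsa -N ''
$ ssh-copy-id -i ./migration_id_rsa.pub someuser@yourhost
```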
## Deploy a Temporary Maintenance-Worker Pod
Bring the application into maintenance mode.
{% hint style="info" %} TODO: implement maintenance mode {% endhint %}
Then temporarily delete the backend and database deployments:
```bash
$ kubectl -n ocelot-social get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
backend 1/1 1 1 3d11h
neo4j 1/1 1 1 3d11h
webapp 2/2 2 2 73d
$ kubectl -n ocelot-social delete deployment neo4j
deployment.extensions "neo4j" deleted
$ kubectl -n ocelot-social delete deployment backend
deployment.extensions "backend" deleted
```
Deploy a one-time develop-maintenance-worker pod:
```bash
# in deployment/legacy-migration/
$ kubectl apply -f maintenance-worker.yaml
pod/develop-maintenance-worker created
```
Import legacy database and uploads:
```bash
$ kubectl -n ocelot-social exec -it develop-maintenance-worker -- bash
$ import_legacy_db
$ import_legacy_uploads
$ exit
```
Delete the pod when you're done:
```bash
$ kubectl -n ocelot-social delete pod develop-maintenance-worker
```
Oh, and of course you have to get those deleted deployments back. One way of
doing it would be:
```bash
# in folder deployment/
$ kubectl apply -f human-connection/deployment-backend.yaml -f human-connection/deployment-neo4j.yaml
```

View File

@ -1,40 +0,0 @@
---
kind: Pod
apiVersion: v1
metadata:
  name: develop-maintenance-worker
  namespace: ocelot-social
spec:
  containers:
  - name: develop-maintenance-worker
    image: ocelotsocialnetwork/develop-maintenance-worker:latest
    imagePullPolicy: Always
    resources:
      requests:
        memory: "2G"
      limits:
        memory: "8G"
    envFrom:
    - configMapRef:
        name: maintenance-worker
    - configMapRef:
        name: configmap
    volumeMounts:
    - name: secret-volume
      readOnly: false
      mountPath: /root/.ssh
    - name: uploads
      mountPath: /uploads
    - name: neo4j-data
      mountPath: /data/
  volumes:
  - name: secret-volume
    secret:
      secretName: ssh-keys
      defaultMode: 0400
  - name: uploads
    persistentVolumeClaim:
      claimName: uploads-claim
  - name: neo4j-data
    persistentVolumeClaim:
      claimName: neo4j-data-claim

View File

@ -1,2 +0,0 @@
.ssh/
ssh/

View File

@ -1,21 +0,0 @@
FROM ocelotsocialnetwork/develop-neo4j:latest
ENV NODE_ENV=maintenance
EXPOSE 7687 7474
ENV BUILD_DEPS="gettext" \
RUNTIME_DEPS="libintl"
RUN set -x && \
apk add --update $RUNTIME_DEPS && \
apk add --virtual build_deps $BUILD_DEPS && \
cp /usr/bin/envsubst /usr/local/bin/envsubst && \
apk del build_deps
RUN apk upgrade --update
RUN apk add --no-cache mongodb-tools openssh nodejs yarn rsync
COPY known_hosts /root/.ssh/known_hosts
COPY migration /migration
COPY ./binaries/* /usr/local/bin/

View File

@ -1,6 +0,0 @@
# SSH Access
# SSH_USERNAME='username'
# SSH_HOST='example.org'
# UPLOADS_DIRECTORY=/var/www/api/uploads
OUTPUT_DIRECTORY='/uploads/'

View File

@ -1,2 +0,0 @@
#!/usr/bin/env bash
tail -f /dev/null

View File

@ -1,12 +0,0 @@
#!/usr/bin/env bash
set -e
for var in "SSH_USERNAME" "SSH_HOST" "MONGODB_USERNAME" "MONGODB_PASSWORD" "MONGODB_DATABASE" "MONGODB_AUTH_DB"
do
  if [[ -z "${!var}" ]]; then
    echo "${var} is undefined"
    exit 1
  fi
done
/migration/mongo/export.sh
/migration/neo4j/import.sh

View File

@ -1,17 +0,0 @@
#!/usr/bin/env bash
set -e
# import .env config
set -o allexport
source $(dirname "$0")/.env
set +o allexport
for var in "SSH_USERNAME" "SSH_HOST" "UPLOADS_DIRECTORY"
do
  if [[ -z "${!var}" ]]; then
    echo "${var} is undefined"
    exit 1
  fi
done
rsync --archive --update --verbose ${SSH_USERNAME}@${SSH_HOST}:${UPLOADS_DIRECTORY}/ ${OUTPUT_DIRECTORY}

View File

@ -1,3 +0,0 @@
|1|GuOYlVEhTowidPs18zj9p5F2j3o=|sDHJYLz9Ftv11oXeGEjs7SpVyg0= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM5N29bI5CeKu1/RBPyM2fwyf7fuajOO+tyhKe1+CC2sZ1XNB5Ff6t6MtCLNRv2mUuvzTbW/HkisDiA5tuXUHOk=
|1|2KP9NV+Q5g2MrtjAeFSVcs8YeOI=|nf3h4wWVwC4xbBS1kzgzE2tBldk= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNhRK6BeIEUxXlS0z/pOfkUkSPfn33g4J1U3L+MyUQYHm+7agT08799ANJhnvELKE1tt4Vx80I9UR81oxzZcy3E=
|1|HonYIRNhKyroUHPKU1HSZw0+Qzs=|5T1btfwFBz2vNSldhqAIfTbfIgQ= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNhRK6BeIEUxXlS0z/pOfkUkSPfn33g4J1U3L+MyUQYHm+7agT08799ANJhnvELKE1tt4Vx80I9UR81oxzZcy3E=

View File

@ -1,17 +0,0 @@
# SSH Access
# SSH_USERNAME='username'
# SSH_HOST='example.org'
# Mongo DB on Remote Machine
# MONGODB_USERNAME='mongouser'
# MONGODB_PASSWORD='mongopassword'
# MONGODB_DATABASE='mongodatabase'
# MONGODB_AUTH_DB='admin'
# Export Settings
# On Windows this resolves to C:\Users\dornhoeschen\AppData\Local\Temp\mongo-export (MinGW)
EXPORT_PATH='/tmp/mongo-export/'
EXPORT_MONGOEXPORT_BIN='mongoexport'
MONGO_EXPORT_SPLIT_SIZE=6000
# On Windows use something like this
# EXPORT_MONGOEXPORT_BIN='C:\Program Files\MongoDB\Server\3.6\bin\mongoexport.exe'

View File

@ -1,53 +0,0 @@
#!/usr/bin/env bash
set -e
# import .env config
set -o allexport
source $(dirname "$0")/.env
set +o allexport
# Export collection function definition
function export_collection () {
  "${EXPORT_MONGOEXPORT_BIN}" --host localhost --port 27018 --db ${MONGODB_DATABASE} --username ${MONGODB_USERNAME} --password ${MONGODB_PASSWORD} --authenticationDatabase ${MONGODB_AUTH_DB} --collection $1 --out "${EXPORT_PATH}$1.json"
  mkdir -p ${EXPORT_PATH}splits/$1/
  split -l ${MONGO_EXPORT_SPLIT_SIZE} -a 3 ${EXPORT_PATH}$1.json ${EXPORT_PATH}splits/$1/
}
# Export collection with query function definition
function export_collection_query () {
  "${EXPORT_MONGOEXPORT_BIN}" --host localhost --port 27018 --db ${MONGODB_DATABASE} --username ${MONGODB_USERNAME} --password ${MONGODB_PASSWORD} --authenticationDatabase ${MONGODB_AUTH_DB} --collection $1 --out "${EXPORT_PATH}$1_$3.json" --query "$2"
  mkdir -p ${EXPORT_PATH}splits/$1_$3/
  split -l ${MONGO_EXPORT_SPLIT_SIZE} -a 3 ${EXPORT_PATH}$1_$3.json ${EXPORT_PATH}splits/$1_$3/
}
# Delete old export & ensure directory
rm -rf ${EXPORT_PATH}*
mkdir -p ${EXPORT_PATH}
# Open SSH Tunnel
ssh -4 -M -S my-ctrl-socket -fnNT -L 27018:localhost:27017 -l ${SSH_USERNAME} ${SSH_HOST}
# Export all Data from the Alpha to json and split them up
export_collection "badges"
export_collection "categories"
export_collection "comments"
export_collection_query "contributions" '{"type": "DELETED"}' "DELETED"
export_collection_query "contributions" '{"type": "post"}' "post"
# export_collection_query "contributions" '{"type": "cando"}' "cando"
export_collection "emotions"
# export_collection_query "follows" '{"foreignService": "organizations"}' "organizations"
export_collection_query "follows" '{"foreignService": "users"}' "users"
# export_collection "invites"
# export_collection "organizations"
# export_collection "pages"
# export_collection "projects"
# export_collection "settings"
export_collection "shouts"
# export_collection "status"
export_collection_query "users" '{"isVerified": true }' "verified"
# export_collection "userscandos"
# export_collection "usersettings"
# Close SSH Tunnel
ssh -S my-ctrl-socket -O check -l ${SSH_USERNAME} ${SSH_HOST}
ssh -S my-ctrl-socket -O exit -l ${SSH_USERNAME} ${SSH_HOST}

View File

@ -1,16 +0,0 @@
# Neo4J Settings
# NEO4J_USERNAME='neo4j'
# NEO4J_PASSWORD='letmein'
# Import Settings
# On Windows this resolves to C:\Users\dornhoeschen\AppData\Local\Temp\mongo-export (MinGW)
IMPORT_PATH='/tmp/mongo-export/'
IMPORT_CHUNK_PATH='/tmp/mongo-export/splits/'
IMPORT_CHUNK_PATH_CQL='/tmp/mongo-export/splits/'
# On Windows this path needs to be windows style since the cypher-shell runs native - note the forward slash
# IMPORT_CHUNK_PATH_CQL='C:/Users/dornhoeschen/AppData/Local/Temp/mongo-export/splits/'
IMPORT_CYPHERSHELL_BIN='cypher-shell'
# On Windows use something like this
# IMPORT_CYPHERSHELL_BIN='C:\Program Files\neo4j-community\bin\cypher-shell.bat'

View File

@ -1,52 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[?] image: {
[?] path: { // Path is incorrect in Nitro - is icon the correct name for this field?
[X] type: String,
[X] required: true
},
[ ] alt: { // If we use an image - should we not have an alt?
[ ] type: String,
[ ] required: true
}
},
[?] status: {
[X] type: String,
[X] enum: ['permanent', 'temporary'],
[ ] default: 'permanent', // Default value is missing in Nitro
[X] required: true
},
[?] type: {
[?] type: String, // in nitro this is a defined enum - seems good for now
[X] required: true
},
[X] id: {
[X] type: String,
[X] required: true
},
[?] createdAt: {
[?] type: Date, // Type is modeled as string in Nitro which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
},
[?] updatedAt: {
[?] type: Date, // Type is modeled as string in Nitro which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
}
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as badge
MERGE(b:Badge {id: badge._id["$oid"]})
ON CREATE SET
b.id = badge.key,
b.type = badge.type,
b.icon = replace(badge.image.path, 'https://api-alpha.human-connection.org', ''),
b.status = badge.status,
b.createdAt = badge.createdAt.`$date`,
b.updatedAt = badge.updatedAt.`$date`
;

View File

@ -1 +0,0 @@
MATCH (n:Badge) DETACH DELETE n;

View File

@ -1,129 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[X] title: {
[X] type: String,
[X] required: true
},
[?] slug: {
[X] type: String,
[ ] required: true, // Not required in Nitro
[ ] unique: true // Unique value is not enforced in Nitro?
},
[?] icon: { // Nitro adds required: true
[X] type: String,
[ ] unique: true // Unique value is not enforced in Nitro?
},
[?] createdAt: {
[?] type: Date, // Type is modeled as string in Nitro which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
},
[?] updatedAt: {
[?] type: Date, // Type is modeled as string in Nitro which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
}
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as category
MERGE(c:Category {id: category._id["$oid"]})
ON CREATE SET
c.name = category.title,
c.slug = category.slug,
c.icon = category.icon,
c.createdAt = category.createdAt.`$date`,
c.updatedAt = category.updatedAt.`$date`
;
// Transform icon names
MATCH (c:Category)
WHERE (c.icon = "categories-justforfun")
SET c.icon = 'smile'
SET c.slug = 'just-for-fun'
;
MATCH (c:Category)
WHERE (c.icon = "categories-luck")
SET c.icon = 'heart-o'
SET c.slug = 'happiness-values'
;
MATCH (c:Category)
WHERE (c.icon = "categories-health")
SET c.icon = 'medkit'
;
MATCH (c:Category)
WHERE (c.icon = "categories-environment")
SET c.icon = 'tree'
;
MATCH (c:Category)
WHERE (c.icon = "categories-animal-justice")
SET c.icon = 'paw'
SET c.slug = 'animal-protection'
;
MATCH (c:Category)
WHERE (c.icon = "categories-human-rights")
SET c.icon = 'balance-scale'
SET c.slug = 'human-rights-justice'
;
MATCH (c:Category)
WHERE (c.icon = "categories-education")
SET c.icon = 'graduation-cap'
;
MATCH (c:Category)
WHERE (c.icon = "categories-cooperation")
SET c.icon = 'users'
;
MATCH (c:Category)
WHERE (c.icon = "categories-politics")
SET c.icon = 'university'
;
MATCH (c:Category)
WHERE (c.icon = "categories-economy")
SET c.icon = 'money'
;
MATCH (c:Category)
WHERE (c.icon = "categories-technology")
SET c.icon = 'flash'
;
MATCH (c:Category)
WHERE (c.icon = "categories-internet")
SET c.icon = 'mouse-pointer'
SET c.slug = 'it-internet-data-privacy'
;
MATCH (c:Category)
WHERE (c.icon = "categories-art")
SET c.icon = 'paint-brush'
;
MATCH (c:Category)
WHERE (c.icon = "categories-freedom-of-speech")
SET c.icon = 'bullhorn'
SET c.slug = 'freedom-of-speech'
;
MATCH (c:Category)
WHERE (c.icon = "categories-sustainability")
SET c.icon = 'shopping-cart'
;
MATCH (c:Category)
WHERE (c.icon = "categories-peace")
SET c.icon = 'angellist'
SET c.slug = 'global-peace-nonviolence'
;

View File

@ -1 +0,0 @@
MATCH (n:Category) DETACH DELETE n;

View File

@ -1,67 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[?] userId: {
[X] type: String,
[ ] required: true, // Not required in Nitro
[-] index: true
},
[?] contributionId: {
[X] type: String,
[ ] required: true, // Not required in Nitro
[-] index: true
},
[X] content: {
[X] type: String,
[X] required: true
},
[?] contentExcerpt: { // Generated from content
[X] type: String,
[ ] required: true // Not required in Nitro
},
[ ] hasMore: { type: Boolean },
[ ] upvotes: {
[ ] type: Array,
[ ] default: []
},
[ ] upvoteCount: {
[ ] type: Number,
[ ] default: 0
},
[?] deleted: {
[X] type: Boolean,
[ ] default: false, // Default value is missing in Nitro
[-] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as comment
MERGE (c:Comment {id: comment._id["$oid"]})
ON CREATE SET
c.content = comment.content,
c.contentExcerpt = comment.contentExcerpt,
c.deleted = comment.deleted,
c.createdAt = comment.createdAt.`$date`,
c.updatedAt = comment.updatedAt.`$date`,
c.disabled = false
WITH c, comment, comment.contributionId as postId
MATCH (post:Post {id: postId})
WITH c, post, comment.userId as userId
MATCH (author:User {id: userId})
MERGE (c)-[:COMMENTS]->(post)
MERGE (author)-[:WROTE]->(c)
;

View File

@ -1 +0,0 @@
MATCH (n:Comment) DETACH DELETE n;

View File

@ -1,156 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
[?] { // Modeled incorrectly as Post
[?] userId: {
[X] type: String,
[ ] required: true, // Not required in Nitro
[-] index: true
},
[ ] organizationId: {
[ ] type: String,
[-] index: true
},
[X] categoryIds: {
[X] type: Array,
[-] index: true
},
[X] title: {
[X] type: String,
[X] required: true
},
[?] slug: { // Generated from title
[X] type: String,
[ ] required: true, // Not required in Nitro
[?] unique: true, // Unique value is not enforced in Nitro?
[-] index: true
},
[ ] type: { // db.getCollection('contributions').distinct('type') -> 'DELETED', 'cando', 'post'
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] cando: {
[ ] difficulty: {
[ ] type: String,
[ ] enum: ['easy', 'medium', 'hard']
},
[ ] reasonTitle: { type: String },
[ ] reason: { type: String }
},
[X] content: {
[X] type: String,
[X] required: true
},
[?] contentExcerpt: { // Generated from content
[X] type: String,
[?] required: true // Not required in Nitro
},
[ ] hasMore: { type: Boolean },
[X] teaserImg: { type: String },
[ ] language: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] shoutCount: {
[ ] type: Number,
[ ] default: 0,
[-] index: true
},
[ ] meta: {
[ ] hasVideo: {
[ ] type: Boolean,
[ ] default: false
},
[ ] embedds: {
[ ] type: Object,
[ ] default: {}
}
},
[?] visibility: {
[X] type: String,
[X] enum: ['public', 'friends', 'private'],
[ ] default: 'public', // Default value is missing in Nitro
[-] index: true
},
[?] isEnabled: {
[X] type: Boolean,
[ ] default: true, // Default value is missing in Nitro
[-] index: true
},
[?] tags: { type: Array }, // ensure this is working properly
[ ] emotions: {
[ ] type: Object,
[-] index: true,
[ ] default: {
[ ] angry: {
[ ] count: 0,
[ ] percent: 0
[ ] },
[ ] cry: {
[ ] count: 0,
[ ] percent: 0
[ ] },
[ ] surprised: {
[ ] count: 0,
[ ] percent: 0
},
[ ] happy: {
[ ] count: 0,
[ ] percent: 0
},
[ ] funny: {
[ ] count: 0,
[ ] percent: 0
}
}
},
[?] deleted: { // This field is not always present in the alpha-data
[?] type: Boolean,
[ ] default: false, // Default value is missing in Nitro
[-] index: true
},
[?] createdAt: {
[?] type: Date, // Type is modeled as string in Nitro which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
},
[?] updatedAt: {
[?] type: Date, // Type is modeled as string in Nitro which is incorrect
[ ] default: Date.now // Default value is missing in Nitro
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as post
MERGE (p:Post {id: post._id["$oid"]})
ON CREATE SET
p.title = post.title,
p.slug = post.slug,
p.image = replace(post.teaserImg, 'https://api-alpha.human-connection.org', ''),
p.content = post.content,
p.contentExcerpt = post.contentExcerpt,
p.visibility = toLower(post.visibility),
p.createdAt = post.createdAt.`$date`,
p.updatedAt = post.updatedAt.`$date`,
p.deleted = COALESCE(post.deleted, false),
p.disabled = COALESCE(NOT post.isEnabled, false)
WITH p, post
MATCH (u:User {id: post.userId})
MERGE (u)-[:WROTE]->(p)
WITH p, post, post.categoryIds as categoryIds
UNWIND categoryIds AS categoryId
MATCH (c:Category {id: categoryId})
MERGE (p)-[:CATEGORIZED]->(c)
WITH p, post.tags AS tags
UNWIND tags AS tag
WITH apoc.text.replace(tag, '[^\\p{L}0-9]', '') as tagNoSpacesAllowed
CALL apoc.when(tagNoSpacesAllowed =~ '^((\\p{L}+[\\p{L}0-9]*)|([0-9]+\\p{L}+[\\p{L}0-9]*))$', 'RETURN tagNoSpacesAllowed', '', {tagNoSpacesAllowed: tagNoSpacesAllowed})
YIELD value as validated
WHERE validated.tagNoSpacesAllowed IS NOT NULL
MERGE (t:Tag { id: validated.tagNoSpacesAllowed, disabled: false, deleted: false })
MERGE (p)-[:TAGGED]->(t)
;

View File

@ -1,2 +0,0 @@
MATCH (n:Post) DETACH DELETE n;
MATCH (n:Tag) DETACH DELETE n;

View File

@ -1 +0,0 @@
MATCH (n) DETACH DELETE n;

View File

@ -1 +0,0 @@
MATCH (u:User)-[e:EMOTED]->(c:Post) DETACH DELETE e;

View File

@ -1,58 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[X] userId: {
[X] type: String,
[X] required: true,
[-] index: true
},
[X] contributionId: {
[X] type: String,
[X] required: true,
[-] index: true
},
[?] rated: {
[X] type: String,
[ ] required: true,
[?] enum: ['funny', 'happy', 'surprised', 'cry', 'angry']
},
[X] createdAt: {
[X] type: Date,
[X] default: Date.now
},
[X] updatedAt: {
[X] type: Date,
[X] default: Date.now
},
[-] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as emotion
MATCH (u:User {id: emotion.userId}),
(c:Post {id: emotion.contributionId})
MERGE (u)-[e:EMOTED {
id: emotion._id["$oid"],
emotion: emotion.rated,
createdAt: datetime(emotion.createdAt.`$date`),
updatedAt: datetime(emotion.updatedAt.`$date`)
}]->(c)
RETURN e;
/*
// Queries
// user sets an emotion emotion:
// MERGE (u)-[e:EMOTED {id: ..., emotion: "funny", createdAt: ..., updatedAt: ...}]->(c)
// user removes emotion
// MATCH (u)-[e:EMOTED]->(c) DELETE e
// contribution distributions over every `emotion` property value for one post
// MATCH (u:User)-[e:EMOTED]->(c:Post {id: "5a70bbc8508f5b000b443b1a"}) RETURN e.emotion,COUNT(e.emotion)
// contribution distributions over every `emotion` property value for one user (advanced - "whats the primary emotion used by the user?")
// MATCH (u:User{id:"5a663b1ac64291000bf302a1"})-[e:EMOTED]->(c:Post) RETURN e.emotion,COUNT(e.emotion)
// contribution distributions over every `emotion` property value for all posts created by one user (advanced - "how do others react to my contributions?")
// MATCH (u:User)-[e:EMOTED]->(c:Post)<-[w:WROTE]-(a:User{id:"5a663b1ac64291000bf302a1"}) RETURN e.emotion,COUNT(e.emotion)
// if we can filter the above on a variable timescale that would be great (should be possible on createdAt and updatedAt fields)
*/

View File

@ -1 +0,0 @@
MATCH (u1:User)-[f:FOLLOWS]->(u2:User) DETACH DELETE f;

View File

@ -1,36 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[?] userId: {
[-] type: String,
[ ] required: true,
[-] index: true
},
[?] foreignId: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[?] foreignService: { // db.getCollection('follows').distinct('foreignService') returns 'organizations' and 'users'
[ ] type: String,
[ ] required: true,
[ ] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
index:
[?] { userId: 1, foreignId: 1, foreignService: 1 },{ unique: true } // is the unique constraint modeled?
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as follow
MATCH (u1:User {id: follow.userId}), (u2:User {id: follow.foreignId})
MERGE (u1)-[:FOLLOWS]->(u2)
;

View File

@ -1,108 +0,0 @@
#!/usr/bin/env bash
set -e
# import .env config
set -o allexport
source $(dirname "$0")/.env
set +o allexport
# Delete collection function definition
function delete_collection () {
# Delete from Database
echo "Delete $2"
"${IMPORT_CYPHERSHELL_BIN}" < $(dirname "$0")/$1/delete.cql > /dev/null
# Delete index file
rm -f "${IMPORT_PATH}splits/$2.index"
}
# Import collection function definition
function import_collection () {
# index file of those chunks we have already imported
INDEX_FILE="${IMPORT_PATH}splits/$1.index"
# load index file
if [ -f "$INDEX_FILE" ]; then
readarray -t IMPORT_INDEX <$INDEX_FILE
else
declare -a IMPORT_INDEX
fi
# for each chunk import data
for chunk in ${IMPORT_PATH}splits/$1/*
do
CHUNK_FILE_NAME=$(basename "${chunk}")
# does the index not contain the chunk file name?
if [[ ! " ${IMPORT_INDEX[@]} " =~ " ${CHUNK_FILE_NAME} " ]]; then
# calculate the path of the chunk
export IMPORT_CHUNK_PATH_CQL_FILE="${IMPORT_CHUNK_PATH_CQL}$1/${CHUNK_FILE_NAME}"
# load the neo4j command and replace file variable with actual path
NEO4J_COMMAND="$(envsubst '${IMPORT_CHUNK_PATH_CQL_FILE}' < $(dirname "$0")/$2)"
# run the import of the chunk
echo "Import $1 ${CHUNK_FILE_NAME} (${chunk})"
echo "${NEO4J_COMMAND}" | "${IMPORT_CYPHERSHELL_BIN}" > /dev/null
# add file to array and file
IMPORT_INDEX+=("${CHUNK_FILE_NAME}")
echo "${CHUNK_FILE_NAME}" >> ${INDEX_FILE}
else
echo "Skipping $1 ${CHUNK_FILE_NAME} (${chunk})"
fi
done
}
# Time variable
SECONDS=0
# Delete all Neo4J Database content
echo "Deleting Database Contents"
delete_collection "badges" "badges"
delete_collection "categories" "categories"
delete_collection "users" "users"
delete_collection "follows" "follows_users"
delete_collection "contributions" "contributions_post"
delete_collection "contributions" "contributions_cando"
delete_collection "shouts" "shouts"
delete_collection "comments" "comments"
delete_collection "emotions" "emotions"
#delete_collection "invites"
#delete_collection "notifications"
#delete_collection "organizations"
#delete_collection "pages"
#delete_collection "projects"
#delete_collection "settings"
#delete_collection "status"
#delete_collection "systemnotifications"
#delete_collection "userscandos"
#delete_collection "usersettings"
echo "DONE"
# Import Data
echo "Start Importing Data"
import_collection "badges" "badges/badges.cql"
import_collection "categories" "categories/categories.cql"
import_collection "users_verified" "users/users.cql"
import_collection "follows_users" "follows/follows.cql"
#import_collection "follows_organizations" "follows/follows.cql"
import_collection "contributions_post" "contributions/contributions.cql"
#import_collection "contributions_cando" "contributions/contributions.cql"
#import_collection "contributions_DELETED" "contributions/contributions.cql"
import_collection "shouts" "shouts/shouts.cql"
import_collection "comments" "comments/comments.cql"
import_collection "emotions" "emotions/emotions.cql"
# import_collection "invites"
# import_collection "notifications"
# import_collection "organizations"
# import_collection "pages"
# import_collection "systemnotifications"
# import_collection "userscandos"
# import_collection "usersettings"
# only contains dummy data
# import_collection "projects"
# only contains alpha-specific data
# import_collection "status"
# import_collection "settings"
echo "DONE"
echo "Time elapsed: $SECONDS seconds"

View File

@ -1,39 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] email: {
[ ] type: String,
[ ] required: true,
[-] index: true,
[ ] unique: true
},
[ ] code: {
[ ] type: String,
[-] index: true,
[ ] required: true
},
[ ] role: {
[ ] type: String,
[ ] enum: ['admin', 'moderator', 'manager', 'editor', 'user'],
[ ] default: 'user'
},
[ ] invitedByUserId: { type: String },
[ ] language: { type: String },
[ ] badgeIds: [],
[ ] wasUsed: {
[ ] type: Boolean,
[-] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as invite;

View File

@ -1,48 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] userId: { // User this notification is sent to
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] type: {
[ ] type: String,
[ ] required: true,
[ ] enum: ['comment','comment-mention','contribution-mention','following-contribution']
},
[ ] relatedUserId: {
[ ] type: String,
[-] index: true
},
[ ] relatedContributionId: {
[ ] type: String,
[-] index: true
},
[ ] relatedOrganizationId: {
[ ] type: String,
[-] index: true
},
[ ] relatedCommentId: {type: String },
[ ] unseen: {
[ ] type: Boolean,
[ ] default: true,
[-] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as notification;

View File

@ -1,137 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] name: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] slug: {
[ ] type: String,
[ ] required: true,
[ ] unique: true,
[-] index: true
},
[ ] followersCounts: {
[ ] users: {
[ ] type: Number,
[ ] default: 0
},
[ ] organizations: {
[ ] type: Number,
[ ] default: 0
},
[ ] projects: {
[ ] type: Number,
[ ] default: 0
}
},
[ ] followingCounts: {
[ ] users: {
[ ] type: Number,
[ ] default: 0
},
[ ] organizations: {
[ ] type: Number,
[ ] default: 0
},
[ ] projects: {
[ ] type: Number,
[ ] default: 0
}
},
[ ] categoryIds: {
[ ] type: Array,
[ ] required: true,
[-] index: true
},
[ ] logo: { type: String },
[ ] coverImg: { type: String },
[ ] userId: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] description: {
[ ] type: String,
[ ] required: true
},
[ ] descriptionExcerpt: { type: String }, // will be generated automatically
[ ] publicEmail: { type: String },
[ ] url: { type: String },
[ ] type: {
[ ] type: String,
[-] index: true,
[ ] enum: ['ngo', 'npo', 'goodpurpose', 'ev', 'eva']
},
[ ] language: {
[ ] type: String,
[ ] required: true,
[ ] default: 'de',
[-] index: true
},
[ ] addresses: {
[ ] type: [{
[ ] street: {
[ ] type: String,
[ ] required: true
},
[ ] zipCode: {
[ ] type: String,
[ ] required: true
},
[ ] city: {
[ ] type: String,
[ ] required: true
},
[ ] country: {
[ ] type: String,
[ ] required: true
},
[ ] lat: {
[ ] type: Number,
[ ] required: true
},
[ ] lng: {
[ ] type: Number,
[ ] required: true
}
}],
[ ] default: []
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] isEnabled: {
[ ] type: Boolean,
[ ] default: false,
[-] index: true
},
[ ] reviewedBy: {
[ ] type: String,
[ ] default: null,
[-] index: true
},
[ ] tags: {
[ ] type: Array,
[-] index: true
},
[ ] deleted: {
[ ] type: Boolean,
[ ] default: false,
[-] index: true
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as organisation;

View File

@ -1,55 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] title: {
[ ] type: String,
[ ] required: true
},
[ ] slug: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] type: {
[ ] type: String,
[ ] required: true,
[ ] default: 'page'
},
[ ] key: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] content: {
[ ] type: String,
[ ] required: true
},
[ ] language: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] active: {
[ ] type: Boolean,
[ ] default: true,
[-] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
index:
[ ] { slug: 1, language: 1 },{ unique: true }
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as page;

View File

@ -1,44 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] name: {
[ ] type: String,
[ ] required: true
},
[ ] slug: { type: String },
[ ] followerIds: [],
[ ] categoryIds: { type: Array },
[ ] logo: { type: String },
[ ] userId: {
[ ] type: String,
[ ] required: true
},
[ ] description: {
[ ] type: String,
[ ] required: true
},
[ ] content: {
[ ] type: String,
[ ] required: true
},
[ ] addresses: {
[ ] type: Array,
[ ] default: []
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as project;

View File

@ -1,36 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] key: {
[ ] type: String,
[ ] default: 'system',
[-] index: true,
[ ] unique: true
},
[ ] invites: {
[ ] userCanInvite: {
[ ] type: Boolean,
[ ] required: true,
[ ] default: false
},
[ ] maxInvitesByUser: {
[ ] type: Number,
[ ] required: true,
[ ] default: 1
},
[ ] onlyUserWithBadgesCanInvite: {
[ ] type: Array,
[ ] default: []
}
},
[ ] maintenance: false
}, {
[ ] timestamps: true
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as setting;

View File

@ -1 +0,0 @@
// this is just a relation between users and contributions - no need to delete

View File

@ -1,36 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[?] userId: {
[X] type: String,
[ ] required: true, // Not required in Nitro
[-] index: true
},
[?] foreignId: {
[X] type: String,
[ ] required: true, // Not required in Nitro
[-] index: true
},
[?] foreignService: { // db.getCollection('shouts').distinct('foreignService') returns 'contributions'
[X] type: String,
[ ] required: true, // Not required in Nitro
[-] index: true
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
index:
[?] { userId: 1, foreignId: 1 },{ unique: true } // is the unique constraint modeled?
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as shout
MATCH (u:User {id: shout.userId}), (p:Post {id: shout.foreignId})
MERGE (u)-[:SHOUTED]->(p)
;

View File

@ -1,19 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] maintenance: {
[ ] type: Boolean,
[ ] default: false
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
}
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as status;

View File

@ -1,61 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] type: {
[ ] type: String,
[ ] default: 'info',
[ ] required: true,
[-] index: true
},
[ ] title: {
[ ] type: String,
[ ] required: true
},
[ ] content: {
[ ] type: String,
[ ] required: true
},
[ ] slot: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] language: {
[ ] type: String,
[ ] required: true,
[-] index: true
},
[ ] permanent: {
[ ] type: Boolean,
[ ] default: false
},
[ ] requireConfirmation: {
[ ] type: Boolean,
[ ] default: false
},
[ ] active: {
[ ] type: Boolean,
[ ] default: true,
[-] index: true
},
[ ] totalCount: {
[ ] type: Number,
[ ] default: 0
},
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as systemnotification;

View File

@ -1,2 +0,0 @@
MATCH (n:User) DETACH DELETE n;
MATCH (e:EmailAddress) DETACH DELETE e;

View File

@ -1,124 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[?] email: {
[X] type: String,
[-] index: true,
[X] required: true,
[?] unique: true // unique constraint missing in Nitro
},
[?] password: { // Not required in Alpha -> verify if always present
[X] type: String
},
[X] name: { type: String },
[X] slug: {
[X] type: String,
[-] index: true
},
[ ] gender: { type: String },
[ ] followersCounts: {
[ ] users: {
[ ] type: Number,
[ ] default: 0
},
[ ] organizations: {
[ ] type: Number,
[ ] default: 0
},
[ ] projects: {
[ ] type: Number,
[ ] default: 0
}
},
[ ] followingCounts: {
[ ] users: {
[ ] type: Number,
[ ] default: 0
},
[ ] organizations: {
[ ] type: Number,
[ ] default: 0
},
[ ] projects: {
[ ] type: Number,
[ ] default: 0
}
},
[ ] timezone: { type: String },
[X] avatar: { type: String },
[X] coverImg: { type: String },
[ ] doiToken: { type: String },
[ ] confirmedAt: { type: Date },
[?] badgeIds: [], // Verify this is working properly
[?] deletedAt: { type: Date }, // The Date of deletion is not saved in Nitro
[?] createdAt: {
[?] type: Date, // Modeled as String in Nitro
[ ] default: Date.now // Default value is missing in Nitro
},
[?] updatedAt: {
[?] type: Date, // Modeled as String in Nitro
[ ] default: Date.now // Default value is missing in Nitro
},
[ ] lastActiveAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] isVerified: { type: Boolean },
[?] role: {
[X] type: String,
[-] index: true,
[?] enum: ['admin', 'moderator', 'manager', 'editor', 'user'], // missing roles manager & editor in Nitro
[ ] default: 'user' // Default value is missing in Nitro
},
[ ] verifyToken: { type: String },
[ ] verifyShortToken: { type: String },
[ ] verifyExpires: { type: Date },
[ ] verifyChanges: { type: Object },
[ ] resetToken: { type: String },
[ ] resetShortToken: { type: String },
[ ] resetExpires: { type: Date },
[X] wasSeeded: { type: Boolean },
[X] wasInvited: { type: Boolean },
[ ] language: {
[ ] type: String,
[ ] default: 'en'
},
[ ] termsAndConditionsAccepted: { type: Date }, // we display the terms and conditions on registration
[ ] systemNotificationsSeen: {
[ ] type: Array,
[ ] default: []
}
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as user
MERGE(u:User {id: user._id["$oid"]})
ON CREATE SET
u.name = user.name,
u.slug = COALESCE(user.slug, apoc.text.random(20, "[A-Za-z]")),
u.email = user.email,
u.encryptedPassword = user.password,
u.avatar = replace(user.avatar, 'https://api-alpha.human-connection.org', ''),
u.coverImg = replace(user.coverImg, 'https://api-alpha.human-connection.org', ''),
u.wasInvited = user.wasInvited,
u.wasSeeded = user.wasSeeded,
u.role = toLower(user.role),
u.createdAt = user.createdAt.`$date`,
u.updatedAt = user.updatedAt.`$date`,
u.deleted = user.deletedAt IS NOT NULL,
u.disabled = false
MERGE (e:EmailAddress {
email: user.email,
createdAt: toString(datetime()),
verifiedAt: toString(datetime())
})
MERGE (e)-[:BELONGS_TO]->(u)
MERGE (u)-[:PRIMARY_EMAIL]->(e)
WITH u, user, user.badgeIds AS badgeIds
UNWIND badgeIds AS badgeId
MATCH (b:Badge {id: badgeId})
MERGE (b)-[:REWARDED]->(u)
;

View File

@ -1,35 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] userId: {
[ ] type: String,
[ ] required: true
},
[ ] contributionId: {
[ ] type: String,
[ ] required: true
},
[ ] done: {
[ ] type: Boolean,
[ ] default: false
},
[ ] doneAt: { type: Date },
[ ] createdAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
},
[ ] wasSeeded: { type: Boolean }
}
index:
[ ] { userId: 1, contributionId: 1 },{ unique: true }
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as usercando;

View File

@ -1,43 +0,0 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
[ ] userId: {
[ ] type: String,
[ ] required: true,
[ ] unique: true
},
[ ] blacklist: {
[ ] type: Array,
[ ] default: []
},
[ ] uiLanguage: {
[ ] type: String,
[ ] required: true
},
[ ] contentLanguages: {
[ ] type: Array,
[ ] default: []
},
[ ] filter: {
[ ] categoryIds: {
[ ] type: Array,
[ ] index: true
},
[ ] emotions: {
[ ] type: Array,
[ ] index: true
}
},
[ ] hideUsersWithoutTermsOfUseSigniture: {type: Boolean},
[ ] updatedAt: {
[ ] type: Date,
[ ] default: Date.now
}
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as usersetting;

View File

@ -1,25 +0,0 @@
# Minikube
There are many Kubernetes providers, but if you're just getting started, Minikube is a tool that you can use to get your feet wet.
After you have [installed Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/),
open your Minikube dashboard:
```text
$ minikube dashboard
```
This will give you an overview. Some of the steps below take a while before their resources become available to dependent deployments; keeping an eye on the dashboard is a great way to check on that.
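If you prefer the terminal over the dashboard, you can watch the pods directly with `kubectl` (a sketch, assuming the `ocelotsocialnetwork` namespace used below):
```text
# watch all pods in the namespace until they settle into the Running state
$ kubectl get pods --namespace=ocelotsocialnetwork --watch
```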
Follow the installation instructions for [ocelot.social](../ocelot-social/README.md).
If all the pods and services have settled and everything looks green in your
Minikube dashboard, expose the services you want on your host system.
For example:
```text
$ minikube service webapp --namespace=ocelotsocialnetwork
# optionally
$ minikube service backend --namespace=ocelotsocialnetwork
```
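To see which services exist and which URLs Minikube has assigned to them, something like the following should work:
```text
$ kubectl get services --namespace=ocelotsocialnetwork
$ minikube service list
```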

View File

@ -1,43 +0,0 @@
# Metrics
You can optionally set up [Prometheus](https://prometheus.io/) and
[Grafana](https://grafana.com/) for metrics.
We follow [this tutorial](https://medium.com/@chris_linguine/how-to-monitor-your-kubernetes-cluster-with-prometheus-and-grafana-2d5704187fc8):
```bash
kubectl proxy # proxy to your kubernetes dashboard
helm repo list
# If using helm v3, the stable repository is not set, so you need to manually add it.
helm repo add stable https://kubernetes-charts.storage.googleapis.com
# Create a monitoring namespace for your cluster
kubectl create namespace monitoring
helm --namespace monitoring install prometheus stable/prometheus
kubectl -n monitoring get pods # look for 'server'
kubectl port-forward -n monitoring <PROMETHEUS_SERVER_ID> 9090
# You can now see your prometheus server on: http://localhost:9090
# Make sure you are in folder `deployment/`
kubectl apply -f monitoring/grafana/config.yml
helm --namespace monitoring install grafana stable/grafana -f monitoring/grafana/values.yml
# Get the admin password for grafana from your kubernetes dashboard.
kubectl --namespace monitoring port-forward <POD_NAME> 3000
# You can now see your grafana dashboard on: http://localhost:3000
# Login with user 'admin' and the password you just looked up.
# In your dashboard import this dashboard:
# https://grafana.com/grafana/dashboards/1860
# Enter ID 1860 and choose "Prometheus" as the data source.
# You got metrics!
```
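Instead of copying the pod names by hand, you can look them up inline. This is just a sketch; the name fragments `prometheus-server` and `grafana` are assumptions based on the release names used above:
```bash
# forward the prometheus server without copying the pod name manually
kubectl -n monitoring port-forward $(kubectl -n monitoring get pods | grep prometheus-server | awk '{ print $1 }') 9090
# likewise for grafana
kubectl -n monitoring port-forward $(kubectl -n monitoring get pods | grep grafana | awk '{ print $1 }') 3000
```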
Now you should see something like this:
![Grafana dashboard](./grafana/metrics.png)
You can set up a Grafana dashboard by visiting https://grafana.com/dashboards, finding one that is suitable, and copying its ID.
Then go to the left-hand menu in your local Grafana instance and choose `Dashboard` > `Manage` > `Import`.
Paste in the ID, click `Load`, select `Prometheus` as the data source, and click `Import`.
Right after installing Prometheus and Grafana the data will not be available
immediately, so wait a couple of minutes and reload.

View File

@ -1,16 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-grafana-datasource
namespace: monitoring
labels:
grafana_datasource: '1'
data:
datasource.yaml: |-
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
access: proxy
orgId: 1
url: http://prometheus-server.monitoring.svc.cluster.local

Binary file not shown.


View File

@ -1,4 +0,0 @@
sidecar:
datasources:
enabled: true
label: grafana_datasource

View File

@ -1,6 +0,0 @@
kind: Namespace
apiVersion: v1
metadata:
name: ocelot-social
labels:
name: ocelot-social

View File

@ -1,71 +0,0 @@
# Kubernetes Configuration For ocelot.social
Deploying *ocelot.social* with Kubernetes is straightforward. All you have to
do is change certain parameters, such as domain names and API keys, and then
apply our provided configuration files to your cluster.
## Configuration
Change into the `./deployment` directory and copy our provided templates:
```bash
# in folder deployment/ocelot-social/
$ cp templates/secrets.template.yaml ./secrets.yaml
$ cp templates/configmap.template.yaml ./configmap.yaml
```
Change the `configmap.yaml` in the `./deployment/ocelot-social` directory as needed; all variables will be available as
environment variables in your deployed Kubernetes pods.
You probably want to change this environment variable to your actual domain:
```yaml
# in configmap.yaml
CLIENT_URI: "https://develop-k8s.ocelot.social"
```
If you want to edit secrets, you have to `base64` encode them. See [Kubernetes Documentation](https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually).
```bash
# example of how to base64-encode a string:
$ echo -n 'admin' | base64
YWRtaW4=
```
Those secrets get `base64` decoded and are available as environment variables in
your deployed Kubernetes pods.
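For illustration, an encoded value ends up in your `secrets.yaml` roughly like this. This is a minimal sketch: the key `SOME_SECRET` is hypothetical, while the secret name `ocelot-social` matches the `secretRef` used by the deployments:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ocelot-social
  namespace: ocelot-social
type: Opaque
data:
  SOME_SECRET: YWRtaW4= # base64 of 'admin', see above
```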
## Create A Namespace
```bash
# in folder deployment/
$ kubectl apply -f namespace.yaml
```
If you have a [Kubernetes Dashboard](../digital-ocean/dashboard/README.md)
deployed, you should switch to the namespace `ocelot-social` in order to
monitor the state of your deployments.
## Create Persistent Volumes
While the deployments and services can easily be restored simply by deleting
and re-applying the Kubernetes configurations, certain data is not that
easily recovered. Therefore we have separated persistent volumes from deployments
and services; see the [dedicated section](../volumes/README.md) on volumes. Create those
persistent volumes once before you apply the configuration.
## Apply The Configuration
Before you apply the configuration, think about the size of the droplet(s) you need.
For example, the requirements for Neo4j v3.5.14 are listed [here](https://neo4j.com/docs/operations-manual/3.5/installation/requirements/).
Tips on configuring pod resources can be found [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
```bash
# in folder deployment/
$ kubectl apply -f ocelot-social/
```
This can take a while because Kubernetes has to download the Docker images from Docker Hub. Sit
back, relax, and keep an eye on your Kubernetes dashboard. Wait until all
pods turn green and no longer show the warning `Waiting: ContainerCreating`.
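The rollout can also be followed from the command line; a sketch, with the deployment name `backend` taken from the manifests below:
```bash
# block until the backend deployment has rolled out completely
$ kubectl -n ocelot-social rollout status deployment/backend
# or watch all pods in the namespace
$ kubectl -n ocelot-social get pods --watch
```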

View File

@ -1,62 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
ocelot.social/commit: COMMIT
ocelot.social/selector: deployment-ocelot-social-backend
name: backend
namespace: ocelot-social
spec:
minReadySeconds: 15
progressDeadlineSeconds: 60
replicas: 1
revisionHistoryLimit: 2147483647
selector:
matchLabels:
ocelot.social/selector: deployment-ocelot-social-backend
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 100%
type: RollingUpdate
template:
metadata:
annotations:
backup.velero.io/backup-volumes: uploads
creationTimestamp: null
labels:
ocelot.social/commit: COMMIT
ocelot.social/selector: deployment-ocelot-social-backend
name: backend
spec:
containers:
- envFrom:
- configMapRef:
name: configmap
- secretRef:
name: ocelot-social
image: ocelotsocialnetwork/develop-backend:latest # for develop
# image: ocelotsocialnetwork/develop-backend:0.6.3 # for production or staging
imagePullPolicy: Always # for develop or staging
# imagePullPolicy: IfNotPresent # for production
name: backend
ports:
- containerPort: 4000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /develop-backend/public/uploads
name: uploads
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: uploads
persistentVolumeClaim:
claimName: uploads-claim
status: {}

View File

@ -1,65 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
ocelot.social/selector: deployment-ocelot-social-neo4j
name: neo4j
namespace: ocelot-social
spec:
progressDeadlineSeconds: 2147483647
replicas: 1
revisionHistoryLimit: 2147483647
selector:
matchLabels:
ocelot.social/selector: deployment-ocelot-social-neo4j
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 100%
type: RollingUpdate
template:
metadata:
annotations:
backup.velero.io/backup-volumes: neo4j-data
creationTimestamp: null
labels:
ocelot.social/selector: deployment-ocelot-social-neo4j
name: neo4j
spec:
containers:
- envFrom:
- configMapRef:
name: configmap
image: ocelotsocialnetwork/develop-neo4j:latest # for develop
# image: ocelotsocialnetwork/develop-neo4j:0.6.3 # for production or staging
imagePullPolicy: Always # for develop or staging
# imagePullPolicy: IfNotPresent # for production
name: neo4j
ports:
- containerPort: 7687
protocol: TCP
- containerPort: 7474
protocol: TCP
resources:
# see description and add cpu https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
# see requirements for Neo4j v3.5.14 https://neo4j.com/docs/operations-manual/3.5/installation/requirements/
limits:
memory: 2G
requests:
memory: 2G
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /data/
name: neo4j-data
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: neo4j-data
persistentVolumeClaim:
claimName: neo4j-data-claim
status: {}

View File

@ -1,54 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
ocelot.social/commit: COMMIT
ocelot.social/selector: deployment-ocelot-social-webapp
name: web
namespace: ocelot-social
spec:
minReadySeconds: 15
progressDeadlineSeconds: 60
replicas: 2
revisionHistoryLimit: 2147483647
selector:
matchLabels:
ocelot.social/selector: deployment-ocelot-social-webapp
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
ocelot.social/commit: COMMIT
ocelot.social/selector: deployment-ocelot-social-webapp
name: web
spec:
containers:
- env:
- name: HOST
value: 0.0.0.0
envFrom:
- configMapRef:
name: configmap
- secretRef:
name: ocelot-social
image: ocelotsocialnetwork/webapp:latest
imagePullPolicy: Always
name: web
ports:
- containerPort: 3000
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status: {}

View File

@ -1,16 +0,0 @@
# Error reporting
We use [Sentry](https://github.com/getsentry/sentry) for error reporting in both
our backend and web frontend. You can use either a hosted or a self-hosted
instance. Just set the two `DSN`s in your
[configmap](../templates/configmap.template.yaml) and, during a deployment, update the `COMMIT`
with your commit hash or the version of your release.
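As an illustration, the `COMMIT` placeholder in the deployment manifests can be stamped just before applying them. This is a sketch, not an official deploy script; adjust the file glob to your checkout:
```bash
# substitute the COMMIT placeholder with the current git commit hash
$ sed -i "s/COMMIT/$(git rev-parse HEAD)/g" ocelot-social/deployment-*.yaml
```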
## Self-hosted Sentry
For data privacy it is recommended to set up your own instance of Sentry.
If you are lucky enough to have a Kubernetes cluster with the required hardware
support, try this [helm chart](https://github.com/helm/charts/tree/master/stable/sentry).
On our Kubernetes cluster we get "multi-attach" errors for persistent volumes;
apparently Digital Ocean's Kubernetes clusters do not fulfill the requirements.

View File

@ -1,18 +0,0 @@
# Development Mail Server
You can deploy a fake SMTP server which captures all sent mails and displays
them in a web interface. The [sample configuration](../templates/configmap.template.yaml)
assumes such a dummy server in its `SMTP_HOST` setting and points to
a cluster-internal SMTP server.
To deploy the SMTP server just uncomment the relevant code in the
[ingress server configuration](../../https/templates/ingress.template.yaml) and
run the following:
```bash
# in folder deployment/ocelot-social
$ kubectl apply -f mailserver/
```
You might need to refresh the TLS secret to enable HTTPS on the publicly
available web interface.
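Refreshing usually means deleting the stale secret so that your certificate issuer recreates it. A sketch; the secret name `tls-secret` is an assumption, check your ingress configuration for the actual `secretName`:
```bash
$ kubectl -n ocelot-social delete secret tls-secret
```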

Some files were not shown because too many files have changed in this diff.