remains from the deployment folder of the ocelot-social repository
This commit is contained in:
parent 8e1708284b
commit dfc07d88af
deployment/old/Maintenance.md (new file, 45 lines)
@@ -0,0 +1,45 @@
# Maintenance mode

> Despite our best efforts, systems sometimes require downtime for a variety of reasons.

Quote from [here](https://www.nrmitchi.com/2017/11/easy-maintenance-mode-in-kubernetes/)

We use our maintenance mode for manual database backups and restores. We also
bring the application into maintenance mode for manual database migrations.

## Deploy the service

We prepared a sample configuration, so you can simply run:

```sh
# in folder deployment/
$ kubectl apply -f ./ocelot-social/maintenance/
```

This will fire up a maintenance service.
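To verify that the service came up, a quick check like this should do. A sketch, assuming the resources were created in your current namespace:

```sh
# list the maintenance deployment and service, if they exist
$ kubectl get deployments,services | grep maintenance
```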
## Bring the application into maintenance mode

If you want a controlled downtime and need to bring your application into
maintenance mode, you can edit your global ingress server.

E.g. copy the file [`deployment/digital-ocean/https/templates/ingress.template.yaml`](../../digital-ocean/https/templates/ingress.template.yaml) to a new file `deployment/digital-ocean/https/ingress.yaml` and change the following:

```yaml
...
  - host: develop-k8s.ocelot.social
    http:
      paths:
        - path: /
          backend:
            # serviceName: web
            serviceName: maintenance
            # servicePort: 3000
            servicePort: 80
```

Then run `$ kubectl apply -f deployment/digital-ocean/https/ingress.yaml`. If you
want to deactivate the maintenance server, just undo the edit and apply the
configuration again.
deployment/old/digital-ocean/README.md (new file, 39 lines)
@@ -0,0 +1,39 @@
# Digital Ocean

As a start, read the [introduction to Kubernetes](https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes) by the folks at Digital Ocean. The following section should enable you to deploy ocelot.social to your Kubernetes cluster.

## Connect to your cluster

1. Create a cluster at [Digital Ocean](https://www.digitalocean.com/).
2. Download the `***-kubeconfig.yaml` from the Web UI.
3. Move the file to the default location where kubectl expects it to be: `mv ***-kubeconfig.yaml ~/.kube/config`. Alternatively, you can pass the config on every command: `--kubeconfig ***-kubeconfig.yaml`
4. Now check whether you can connect to the cluster and whether it is your newly created one by running: `kubectl get nodes`

The output should look roughly like this:

```sh
$ kubectl get nodes
NAME                  STATUS   ROLES    AGE   VERSION
nifty-driscoll-uu1w   Ready    <none>   69d   v1.13.2
nifty-driscoll-uuiw   Ready    <none>   69d   v1.13.2
nifty-driscoll-uusn   Ready    <none>   69d   v1.13.2
```

If you followed the steps above and can see your nodes, you can continue.
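If you prefer not to overwrite `~/.kube/config`, pointing the `KUBECONFIG` environment variable at the downloaded file is another option. A sketch, reusing the placeholder filename from above:

```sh
# valid for the current shell session only
$ export KUBECONFIG=$PWD/***-kubeconfig.yaml
$ kubectl get nodes
```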
Digital Ocean Kubernetes clusters don't have a graphical interface, so I suggest
setting up the [Kubernetes dashboard](./dashboard/README.md) as a next step.
Configuring [HTTPS](./https/README.md) is a bit tricky, therefore I suggest
doing this as a last step.

## Spaces

We are storing our images in the s3-compatible [DigitalOcean Spaces](https://www.digitalocean.com/docs/spaces/).

We still want to take backups of our images in case something happens to the images in the cloud. See these [instructions](https://www.digitalocean.com/docs/spaces/resources/s3cmd-usage/) about getting set up with `s3cmd` to take a copy of all images in a `Spaces` namespace, i.e. `ocelot-social-uploads`.

After configuring `s3cmd` with your credentials etc., you should be able to make a backup with this command:

```sh
s3cmd get --recursive --skip-existing s3://ocelot-social-uploads
```
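For reference, pointing `s3cmd` at Spaces instead of AWS happens in `~/.s3cfg`. A minimal sketch; the region `fra1` and both keys are placeholder assumptions, so use your Space's region and the credentials from the Digital Ocean control panel:

```ini
; ~/.s3cfg - Spaces-specific settings (placeholders, adjust to your setup)
access_key = <YOUR_SPACES_ACCESS_KEY>
secret_key = <YOUR_SPACES_SECRET_KEY>
host_base = fra1.digitaloceanspaces.com
host_bucket = %(bucket)s.fra1.digitaloceanspaces.com
```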
deployment/old/digital-ocean/dashboard/README.md (new file, 55 lines)
@@ -0,0 +1,55 @@
# Install Kubernetes Dashboard

The Kubernetes dashboard is optional but very helpful for debugging. If you want to install it, you have to do so only **once** per cluster:

```bash
# in folder deployment/digital-ocean/
$ kubectl apply -f dashboard/
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
```

## Login to your dashboard

Proxy the remote Kubernetes dashboard to localhost:

```bash
$ kubectl proxy
```

Visit:

[http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/](http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/)

You should see a login screen.

To get your token for the dashboard you can run this command:

```bash
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
```

It should print something like:

```text
Name:         admin-user-token-6gl6l
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=admin-user
              kubernetes.io/service-account.uid=b16afba9-dfec-11e7-bbb9-901b0e532516

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZnbDZsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiMTZhZmJhOS1kZmVjLTExZTctYmJiOS05MDFiMGU1MzI1MTYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.M70CU3lbu3PP4OjhFms8PVL5pQKj-jj4RNSLA4YmQfTXpPUuxqXjiTf094_Rzr0fgN_IVX6gC4fiNUL5ynx9KU-lkPfk0HnX8scxfJNzypL039mpGt0bbe1IXKSIRaq_9VW59Xz-yBUhycYcKPO9RM2Qa1Ax29nqNVko4vLn1_1wPqJ6XSq3GYI8anTzV8Fku4jasUwjrws6Cn6_sPEGmL54sq5R4Z5afUtv-mItTmqZZdxnkRqcJLlg2Y8WbCPogErbsaCDJoABQ7ppaqHetwfM_0yMun6ABOQbIwwl8pspJhpplKwyo700OSpvTT9zlBsu-b35lzXGBRHzv5g_RA
```
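If you only want the token value itself, a `jsonpath` query saves the manual copying. A sketch, reusing the same secret lookup as above:

```bash
# print only the decoded token of the admin-user secret
$ kubectl -n kube-system get secret \
    $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') \
    -o jsonpath='{.data.token}' | base64 --decode
```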
Grab the token from above and paste it into the [login screen](http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/).

When you are logged in, you should see something like:

![dashboard](./dashboard-screenshot.png)

Feel free to save the login token from above in your password manager. Unlike the `kubeconfig` file, this token does not expire.
deployment/old/digital-ocean/dashboard/admin-user.yaml (new file, 5 lines)
@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
deployment/old/digital-ocean/dashboard/dashboard-screenshot.png (new binary file, 178 KiB, not shown)
deployment/old/digital-ocean/dashboard/role-binding.yaml (new file, 12 lines)
@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kube-system
deployment/old/digital-ocean/https/README.md (new file, 126 lines)
@@ -0,0 +1,126 @@
## Create Letsencrypt Issuers and Ingress Services

Copy the configuration templates and change the files according to your needs.

```bash
# in folder deployment/digital-ocean/https/
cp templates/issuer.template.yaml ./issuer.yaml
cp templates/ingress.template.yaml ./ingress.yaml
```

At the very least, **change the email addresses** in `issuer.yaml`. You will surely also want
to _change the domain name_ in `ingress.yaml`.

Once you are done, apply the configuration:

```bash
# in folder deployment/digital-ocean/https/
$ kubectl apply -f .
```

{% hint style="info" %}
CAUTION: It seems that the behaviour of Digital Ocean has changed and the load balancer is not created automatically anymore.
Creating a load balancer costs money. Please refine the following documentation if required.
{% endhint %}

{% tabs %}
{% tab title="Without Load Balancer" %}

A solution without a load balancer can be found [here](../no-loadbalancer/README.md).

{% endtab %}
{% tab title="With Digital Ocean Load Balancer" %}

{% hint style="info" %}
CAUTION: It seems that the behaviour of Digital Ocean has changed and the load balancer is not created automatically anymore.
Please refine the following documentation if required.
{% endhint %}

In earlier days, your cluster would by now have a load balancer with an external IP
address assigned. On Digital Ocean, this is how it should look:

![external ip address](./ip-address.png)

If the load balancer isn't created automatically, you have to create it yourself on Digital Ocean under Networks.
In case you don't need a Digital Ocean load balancer (which costs money, by the way), have a look at the tab `Without Load Balancer`.

{% endtab %}
{% endtabs %}

Check that the ingress server is working correctly:

```bash
$ curl -kivL -H 'Host: <DOMAIN_NAME>' 'https://<IP_ADDRESS>'
<page HTML>
```

If the response looks good, configure your domain registrar for the new IP address and the domain.
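At the registrar this is typically just an A record. A sketch, using the placeholders from the curl command above:

```text
# example DNS record at your registrar (values are placeholders)
<DOMAIN_NAME>.    A    <IP_ADDRESS>
```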
Now let's get a valid HTTPS certificate. According to the tutorial above, check your TLS certificate for staging:

```bash
$ kubectl -n ocelot-social describe certificate tls
<
...
Spec:
  ...
  Issuer Ref:
    Group:  cert-manager.io
    Kind:   ClusterIssuer
    Name:   letsencrypt-staging
...
Events:
  <no errors>
>
$ kubectl -n ocelot-social describe secret tls
<
...
Annotations: ...
             cert-manager.io/issuer-kind: ClusterIssuer
             cert-manager.io/issuer-name: letsencrypt-staging
...
>
```

If everything looks good, update the cluster-issuer of your ingress. Change the annotation `cert-manager.io/cluster-issuer` from `letsencrypt-staging` (for testing with a dummy certificate – no blocking by Let's Encrypt for too many request cycles) to `letsencrypt-prod` (for production with a real certificate – Let's Encrypt may block you for several days if you run too many request cycles) in your ingress configuration in `ingress.yaml`.
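The relevant lines in `ingress.yaml` look roughly like this; a sketch of the metadata section, assuming your ingress is otherwise unchanged:

```yaml
metadata:
  annotations:
    # switch this value from letsencrypt-staging to letsencrypt-prod
    cert-manager.io/cluster-issuer: letsencrypt-prod
```

Then apply the updated ingress: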
```bash
# in folder deployment/digital-ocean/https/
$ kubectl apply -f ingress.yaml
```

Take a minute and have a look whether the certificate has now been newly generated by `letsencrypt-prod`, the cluster-issuer for production:

```bash
$ kubectl -n ocelot-social describe certificate tls
<
...
Spec:
  ...
  Issuer Ref:
    Group:  cert-manager.io
    Kind:   ClusterIssuer
    Name:   letsencrypt-prod
...
Events:
  <no errors>
>
$ kubectl -n ocelot-social describe secret tls
<
...
Annotations: ...
             cert-manager.io/issuer-kind: ClusterIssuer
             cert-manager.io/issuer-name: letsencrypt-prod
...
>
```

In case the certificate is not newly created, delete the former secret to force a refresh:

```bash
$ kubectl -n ocelot-social delete secret tls
```

Now HTTPS should be configured on your domain. Congrats!

For troubleshooting, have a look at cert-manager's [Troubleshooting](https://cert-manager.io/docs/faq/troubleshooting/) or [Troubleshooting Issuing ACME Certificates](https://cert-manager.io/docs/faq/acme/).
deployment/old/digital-ocean/https/ip-address.png (new binary file, 141 KiB, not shown)
deployment/old/legacy-migration/README.md (new file, 85 lines)
@@ -0,0 +1,85 @@
# Legacy data migration

This setup is **completely optional** and only required if you have data on a
server which is running our legacy code and you want to import that data. It
will import the uploads folder and migrate a dump of the legacy Mongo database
into our new Neo4J graph database.

## Configure Maintenance-Worker Pod

Create a configmap with the specific connection data of your legacy server:

```bash
$ kubectl create configmap maintenance-worker \
  -n ocelot-social \
  --from-literal=SSH_USERNAME=someuser \
  --from-literal=SSH_HOST=yourhost \
  --from-literal=MONGODB_USERNAME=hc-api \
  --from-literal=MONGODB_PASSWORD=secretpassword \
  --from-literal=MONGODB_AUTH_DB=hc_api \
  --from-literal=MONGODB_DATABASE=hc_api \
  --from-literal=UPLOADS_DIRECTORY=/var/www/api/uploads
```

Create a secret with your public and private ssh keys. As the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys) points out, you should be careful with your ssh keys: anyone with access to your cluster will have access to them. Better create a new pair with `ssh-keygen` and copy the public key to your legacy server with `ssh-copy-id`.
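A minimal sketch of that key setup; the key path, user, and host are placeholders:

```bash
# generate a dedicated key pair for the migration (no passphrase here; add one if you prefer)
$ ssh-keygen -t rsa -f ~/.ssh/id_rsa_migration -N ''
# authorize it on the legacy server
$ ssh-copy-id -i ~/.ssh/id_rsa_migration.pub someuser@yourhost
```

Then create the secret from the key files: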
```bash
$ kubectl create secret generic ssh-keys \
  -n ocelot-social \
  --from-file=id_rsa=/path/to/.ssh/id_rsa \
  --from-file=id_rsa.pub=/path/to/.ssh/id_rsa.pub \
  --from-file=known_hosts=/path/to/.ssh/known_hosts
```

## Deploy a Temporary Maintenance-Worker Pod

Bring the application into maintenance mode.

{% hint style="info" %} TODO: implement maintenance mode {% endhint %}

Then temporarily delete the backend and database deployments:

```bash
$ kubectl -n ocelot-social get deployments
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
backend   1/1     1            1           3d11h
neo4j     1/1     1            1           3d11h
webapp    2/2     2            2           73d
$ kubectl -n ocelot-social delete deployment neo4j
deployment.extensions "neo4j" deleted
$ kubectl -n ocelot-social delete deployment backend
deployment.extensions "backend" deleted
```

Deploy the one-time develop-maintenance-worker pod:

```bash
# in deployment/legacy-migration/
$ kubectl apply -f maintenance-worker.yaml
pod/develop-maintenance-worker created
```

Import the legacy database and uploads:

```bash
$ kubectl -n ocelot-social exec -it develop-maintenance-worker bash
$ import_legacy_db
$ import_legacy_uploads
$ exit
```

Delete the pod when you're done:

```bash
$ kubectl -n ocelot-social delete pod develop-maintenance-worker
```

Oh, and of course you have to get those deleted deployments back. One way of
doing it would be:

```bash
# in folder deployment/
$ kubectl apply -f human-connection/deployment-backend.yaml -f human-connection/deployment-neo4j.yaml
```
deployment/old/legacy-migration/maintenance-worker.yaml (new file, 40 lines)
@@ -0,0 +1,40 @@
---
kind: Pod
apiVersion: v1
metadata:
  name: develop-maintenance-worker
  namespace: ocelot-social
spec:
  containers:
    - name: develop-maintenance-worker
      image: ocelotsocialnetwork/develop-maintenance-worker:latest
      imagePullPolicy: Always
      resources:
        requests:
          memory: "2G"
        limits:
          memory: "8G"
      envFrom:
        - configMapRef:
            name: maintenance-worker
        - configMapRef:
            name: configmap
      volumeMounts:
        - name: secret-volume
          readOnly: false
          mountPath: /root/.ssh
        - name: uploads
          mountPath: /uploads
        - name: neo4j-data
          mountPath: /data/
  volumes:
    - name: secret-volume
      secret:
        secretName: ssh-keys
        defaultMode: 0400
    - name: uploads
      persistentVolumeClaim:
        claimName: uploads-claim
    - name: neo4j-data
      persistentVolumeClaim:
        claimName: neo4j-data-claim
@@ -0,0 +1 @@
.ssh/
deployment/old/legacy-migration/maintenance-worker/.gitignore (new vendored file, 2 lines)
@@ -0,0 +1,2 @@
.ssh/
ssh/
@@ -0,0 +1,21 @@
FROM ocelotsocialnetwork/develop-neo4j:latest

ENV NODE_ENV=maintenance
EXPOSE 7687 7474

ENV BUILD_DEPS="gettext" \
    RUNTIME_DEPS="libintl"

RUN set -x && \
    apk add --update $RUNTIME_DEPS && \
    apk add --virtual build_deps $BUILD_DEPS && \
    cp /usr/bin/envsubst /usr/local/bin/envsubst && \
    apk del build_deps


RUN apk upgrade --update
RUN apk add --no-cache mongodb-tools openssh nodejs yarn rsync

COPY known_hosts /root/.ssh/known_hosts
COPY migration /migration
COPY ./binaries/* /usr/local/bin/
@@ -0,0 +1,6 @@
# SSH Access
# SSH_USERNAME='username'
# SSH_HOST='example.org'

# UPLOADS_DIRECTORY=/var/www/api/uploads
OUTPUT_DIRECTORY='/uploads/'
deployment/old/legacy-migration/maintenance-worker/binaries/idle (new executable file, 2 lines)
@@ -0,0 +1,2 @@
#!/usr/bin/env bash
tail -f /dev/null
deployment/old/legacy-migration/maintenance-worker/binaries/import_legacy_db (new executable file, 12 lines)
@@ -0,0 +1,12 @@
#!/usr/bin/env bash
set -e
for var in "SSH_USERNAME" "SSH_HOST" "MONGODB_USERNAME" "MONGODB_PASSWORD" "MONGODB_DATABASE" "MONGODB_AUTH_DB"
do
  if [[ -z "${!var}" ]]; then
    echo "${var} is undefined"
    exit 1
  fi
done

/migration/mongo/export.sh
/migration/neo4j/import.sh
@@ -0,0 +1,17 @@
#!/usr/bin/env bash
set -e

# import .env config
set -o allexport
source $(dirname "$0")/.env
set +o allexport

for var in "SSH_USERNAME" "SSH_HOST" "UPLOADS_DIRECTORY"
do
  if [[ -z "${!var}" ]]; then
    echo "${var} is undefined"
    exit 1
  fi
done

rsync --archive --update --verbose ${SSH_USERNAME}@${SSH_HOST}:${UPLOADS_DIRECTORY}/ ${OUTPUT_DIRECTORY}
@@ -0,0 +1,3 @@
|1|GuOYlVEhTowidPs18zj9p5F2j3o=|sDHJYLz9Ftv11oXeGEjs7SpVyg0= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM5N29bI5CeKu1/RBPyM2fwyf7fuajOO+tyhKe1+CC2sZ1XNB5Ff6t6MtCLNRv2mUuvzTbW/HkisDiA5tuXUHOk=
|1|2KP9NV+Q5g2MrtjAeFSVcs8YeOI=|nf3h4wWVwC4xbBS1kzgzE2tBldk= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNhRK6BeIEUxXlS0z/pOfkUkSPfn33g4J1U3L+MyUQYHm+7agT08799ANJhnvELKE1tt4Vx80I9UR81oxzZcy3E=
|1|HonYIRNhKyroUHPKU1HSZw0+Qzs=|5T1btfwFBz2vNSldhqAIfTbfIgQ= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNhRK6BeIEUxXlS0z/pOfkUkSPfn33g4J1U3L+MyUQYHm+7agT08799ANJhnvELKE1tt4Vx80I9UR81oxzZcy3E=
@@ -0,0 +1,17 @@
# SSH Access
# SSH_USERNAME='username'
# SSH_HOST='example.org'

# Mongo DB on Remote Machine
# MONGODB_USERNAME='mongouser'
# MONGODB_PASSWORD='mongopassword'
# MONGODB_DATABASE='mongodatabase'
# MONGODB_AUTH_DB='admin'

# Export Settings
# On Windows this resolves to C:\Users\dornhoeschen\AppData\Local\Temp\mongo-export (MinGW)
EXPORT_PATH='/tmp/mongo-export/'
EXPORT_MONGOEXPORT_BIN='mongoexport'
MONGO_EXPORT_SPLIT_SIZE=6000
# On Windows use something like this
# EXPORT_MONGOEXPORT_BIN='C:\Program Files\MongoDB\Server\3.6\bin\mongoexport.exe'
deployment/old/legacy-migration/maintenance-worker/migration/mongo/export.sh (new executable file, 53 lines)
@@ -0,0 +1,53 @@
#!/usr/bin/env bash
set -e

# import .env config
set -o allexport
source $(dirname "$0")/.env
set +o allexport

# Export collection function definition
function export_collection () {
  "${EXPORT_MONGOEXPORT_BIN}" --db ${MONGODB_DATABASE} --host localhost -d ${MONGODB_DATABASE} --port 27018 --username ${MONGODB_USERNAME} --password ${MONGODB_PASSWORD} --authenticationDatabase ${MONGODB_AUTH_DB} --collection $1 --out "${EXPORT_PATH}$1.json"
  mkdir -p ${EXPORT_PATH}splits/$1/
  split -l ${MONGO_EXPORT_SPLIT_SIZE} -a 3 ${EXPORT_PATH}$1.json ${EXPORT_PATH}splits/$1/
}

# Export collection with query function definition
function export_collection_query () {
  "${EXPORT_MONGOEXPORT_BIN}" --db ${MONGODB_DATABASE} --host localhost -d ${MONGODB_DATABASE} --port 27018 --username ${MONGODB_USERNAME} --password ${MONGODB_PASSWORD} --authenticationDatabase ${MONGODB_AUTH_DB} --collection $1 --out "${EXPORT_PATH}$1_$3.json" --query "$2"
  mkdir -p ${EXPORT_PATH}splits/$1_$3/
  split -l ${MONGO_EXPORT_SPLIT_SIZE} -a 3 ${EXPORT_PATH}$1_$3.json ${EXPORT_PATH}splits/$1_$3/
}

# Delete old export & ensure directory
rm -rf ${EXPORT_PATH}*
mkdir -p ${EXPORT_PATH}

# Open SSH Tunnel
ssh -4 -M -S my-ctrl-socket -fnNT -L 27018:localhost:27017 -l ${SSH_USERNAME} ${SSH_HOST}

# Export all data from the Alpha to JSON and split it up
export_collection "badges"
export_collection "categories"
export_collection "comments"
export_collection_query "contributions" '{"type": "DELETED"}' "DELETED"
export_collection_query "contributions" '{"type": "post"}' "post"
# export_collection_query "contributions" '{"type": "cando"}' "cando"
export_collection "emotions"
# export_collection_query "follows" '{"foreignService": "organizations"}' "organizations"
export_collection_query "follows" '{"foreignService": "users"}' "users"
# export_collection "invites"
# export_collection "organizations"
# export_collection "pages"
# export_collection "projects"
# export_collection "settings"
export_collection "shouts"
# export_collection "status"
export_collection_query "users" '{"isVerified": true }' "verified"
# export_collection "userscandos"
# export_collection "usersettings"

# Close SSH Tunnel
ssh -S my-ctrl-socket -O check -l ${SSH_USERNAME} ${SSH_HOST}
ssh -S my-ctrl-socket -O exit -l ${SSH_USERNAME} ${SSH_HOST}
@@ -0,0 +1,16 @@
# Neo4J Settings
# NEO4J_USERNAME='neo4j'
# NEO4J_PASSWORD='letmein'

# Import Settings
# On Windows this resolves to C:\Users\dornhoeschen\AppData\Local\Temp\mongo-export (MinGW)
IMPORT_PATH='/tmp/mongo-export/'
IMPORT_CHUNK_PATH='/tmp/mongo-export/splits/'

IMPORT_CHUNK_PATH_CQL='/tmp/mongo-export/splits/'
# On Windows this path needs to be windows style since the cypher-shell runs native - note the forward slash
# IMPORT_CHUNK_PATH_CQL='C:/Users/dornhoeschen/AppData/Local/Temp/mongo-export/splits/'

IMPORT_CYPHERSHELL_BIN='cypher-shell'
# On Windows use something like this
# IMPORT_CYPHERSHELL_BIN='C:\Program Files\neo4j-community\bin\cypher-shell.bat'
@@ -0,0 +1,52 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
  [?] image: {
    [?] path: { // Path is incorrect in Nitro - is icon the correct name for this field?
      [X] type: String,
      [X] required: true
    },
    [ ] alt: { // If we use an image - should we not have an alt?
      [ ] type: String,
      [ ] required: true
    }
  },
  [?] status: {
    [X] type: String,
    [X] enum: ['permanent', 'temporary'],
    [ ] default: 'permanent', // Default value is missing in Nitro
    [X] required: true
  },
  [?] type: {
    [?] type: String, // in Nitro this is a defined enum - seems good for now
    [X] required: true
  },
  [X] id: {
    [X] type: String,
    [X] required: true
  },
  [?] createdAt: {
    [?] type: Date, // Type is modeled as string in Nitro which is incorrect
    [ ] default: Date.now // Default value is missing in Nitro
  },
  [?] updatedAt: {
    [?] type: Date, // Type is modeled as string in Nitro which is incorrect
    [ ] default: Date.now // Default value is missing in Nitro
  }
}
*/

CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as badge
MERGE (b:Badge {id: badge._id["$oid"]})
ON CREATE SET
  b.id = badge.key,
  b.type = badge.type,
  b.icon = replace(badge.image.path, 'https://api-alpha.human-connection.org', ''),
  b.status = badge.status,
  b.createdAt = badge.createdAt.`$date`,
  b.updatedAt = badge.updatedAt.`$date`
;
@@ -0,0 +1 @@
MATCH (n:Badge) DETACH DELETE n;
@@ -0,0 +1,129 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
  [X] title: {
    [X] type: String,
    [X] required: true
  },
  [?] slug: {
    [X] type: String,
    [ ] required: true, // Not required in Nitro
    [ ] unique: true // Unique value is not enforced in Nitro?
  },
  [?] icon: { // Nitro adds required: true
    [X] type: String,
    [ ] unique: true // Unique value is not enforced in Nitro?
  },
  [?] createdAt: {
    [?] type: Date, // Type is modeled as string in Nitro which is incorrect
    [ ] default: Date.now // Default value is missing in Nitro
  },
  [?] updatedAt: {
    [?] type: Date, // Type is modeled as string in Nitro which is incorrect
    [ ] default: Date.now // Default value is missing in Nitro
  }
}
*/

CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as category
MERGE (c:Category {id: category._id["$oid"]})
ON CREATE SET
  c.name = category.title,
  c.slug = category.slug,
  c.icon = category.icon,
  c.createdAt = category.createdAt.`$date`,
  c.updatedAt = category.updatedAt.`$date`
;

// Transform icon names
MATCH (c:Category)
WHERE (c.icon = "categories-justforfun")
SET c.icon = 'smile'
SET c.slug = 'just-for-fun'
;

MATCH (c:Category)
WHERE (c.icon = "categories-luck")
SET c.icon = 'heart-o'
SET c.slug = 'happiness-values'
;

MATCH (c:Category)
WHERE (c.icon = "categories-health")
SET c.icon = 'medkit'
;

MATCH (c:Category)
WHERE (c.icon = "categories-environment")
SET c.icon = 'tree'
;

MATCH (c:Category)
WHERE (c.icon = "categories-animal-justice")
SET c.icon = 'paw'
SET c.slug = 'animal-protection'
;

MATCH (c:Category)
WHERE (c.icon = "categories-human-rights")
SET c.icon = 'balance-scale'
SET c.slug = 'human-rights-justice'
;

MATCH (c:Category)
WHERE (c.icon = "categories-education")
SET c.icon = 'graduation-cap'
;

MATCH (c:Category)
WHERE (c.icon = "categories-cooperation")
SET c.icon = 'users'
;

MATCH (c:Category)
WHERE (c.icon = "categories-politics")
SET c.icon = 'university'
;

MATCH (c:Category)
WHERE (c.icon = "categories-economy")
SET c.icon = 'money'
;

MATCH (c:Category)
WHERE (c.icon = "categories-technology")
SET c.icon = 'flash'
;

MATCH (c:Category)
WHERE (c.icon = "categories-internet")
SET c.icon = 'mouse-pointer'
SET c.slug = 'it-internet-data-privacy'
;

MATCH (c:Category)
WHERE (c.icon = "categories-art")
SET c.icon = 'paint-brush'
;

MATCH (c:Category)
WHERE (c.icon = "categories-freedom-of-speech")
SET c.icon = 'bullhorn'
SET c.slug = 'freedom-of-speech'
;

MATCH (c:Category)
WHERE (c.icon = "categories-sustainability")
SET c.icon = 'shopping-cart'
;

MATCH (c:Category)
WHERE (c.icon = "categories-peace")
SET c.icon = 'angellist'
SET c.slug = 'global-peace-nonviolence'
;
@@ -0,0 +1 @@
MATCH (n:Category) DETACH DELETE n;
@@ -0,0 +1,67 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
  [?] userId: {
    [X] type: String,
    [ ] required: true, // Not required in Nitro
    [-] index: true
  },
  [?] contributionId: {
    [X] type: String,
    [ ] required: true, // Not required in Nitro
    [-] index: true
  },
  [X] content: {
    [X] type: String,
    [X] required: true
  },
  [?] contentExcerpt: { // Generated from content
    [X] type: String,
    [ ] required: true // Not required in Nitro
  },
  [ ] hasMore: { type: Boolean },
  [ ] upvotes: {
    [ ] type: Array,
    [ ] default: []
  },
  [ ] upvoteCount: {
    [ ] type: Number,
    [ ] default: 0
  },
  [?] deleted: {
    [X] type: Boolean,
    [ ] default: false, // Default value is missing in Nitro
    [-] index: true
  },
  [ ] createdAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] updatedAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] wasSeeded: { type: Boolean }
}
*/

CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as comment
MERGE (c:Comment {id: comment._id["$oid"]})
ON CREATE SET
  c.content = comment.content,
  c.contentExcerpt = comment.contentExcerpt,
  c.deleted = comment.deleted,
  c.createdAt = comment.createdAt.`$date`,
  c.updatedAt = comment.updatedAt.`$date`,
  c.disabled = false
WITH c, comment, comment.contributionId as postId
MATCH (post:Post {id: postId})
WITH c, post, comment.userId as userId
MATCH (author:User {id: userId})
MERGE (c)-[:COMMENTS]->(post)
MERGE (author)-[:WROTE]->(c)
;
@@ -0,0 +1 @@
MATCH (n:Comment) DETACH DELETE n;
@@ -0,0 +1,156 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
[?] { // Modeled incorrectly as Post
  [?] userId: {
    [X] type: String,
    [ ] required: true, // Not required in Nitro
    [-] index: true
  },
  [ ] organizationId: {
    [ ] type: String,
    [-] index: true
  },
  [X] categoryIds: {
    [X] type: Array,
    [-] index: true
  },
  [X] title: {
    [X] type: String,
    [X] required: true
  },
  [?] slug: { // Generated from title
    [X] type: String,
    [ ] required: true, // Not required in Nitro
    [?] unique: true, // Unique value is not enforced in Nitro?
    [-] index: true
  },
  [ ] type: { // db.getCollection('contributions').distinct('type') -> 'DELETED', 'cando', 'post'
    [ ] type: String,
    [ ] required: true,
    [-] index: true
  },
  [ ] cando: {
    [ ] difficulty: {
      [ ] type: String,
      [ ] enum: ['easy', 'medium', 'hard']
    },
    [ ] reasonTitle: { type: String },
    [ ] reason: { type: String }
  },
  [X] content: {
    [X] type: String,
    [X] required: true
  },
  [?] contentExcerpt: { // Generated from content
    [X] type: String,
    [?] required: true // Not required in Nitro
  },
  [ ] hasMore: { type: Boolean },
  [X] teaserImg: { type: String },
  [ ] language: {
    [ ] type: String,
    [ ] required: true,
    [-] index: true
  },
  [ ] shoutCount: {
    [ ] type: Number,
    [ ] default: 0,
    [-] index: true
  },
  [ ] meta: {
    [ ] hasVideo: {
      [ ] type: Boolean,
      [ ] default: false
    },
    [ ] embedds: {
      [ ] type: Object,
      [ ] default: {}
    }
  },
  [?] visibility: {
    [X] type: String,
    [X] enum: ['public', 'friends', 'private'],
    [ ] default: 'public', // Default value is missing in Nitro
    [-] index: true
  },
  [?] isEnabled: {
    [X] type: Boolean,
    [ ] default: true, // Default value is missing in Nitro
    [-] index: true
  },
  [?] tags: { type: Array }, // ensure this is working properly
  [ ] emotions: {
    [ ] type: Object,
    [-] index: true,
    [ ] default: {
      [ ] angry: {
        [ ] count: 0,
        [ ] percent: 0
      [ ] },
      [ ] cry: {
        [ ] count: 0,
        [ ] percent: 0
      [ ] },
      [ ] surprised: {
        [ ] count: 0,
        [ ] percent: 0
      },
      [ ] happy: {
        [ ] count: 0,
        [ ] percent: 0
      },
      [ ] funny: {
        [ ] count: 0,
        [ ] percent: 0
      }
    }
  },
  [?] deleted: { // This field is not always present in the alpha-data
    [?] type: Boolean,
    [ ] default: false, // Default value is missing in Nitro
    [-] index: true
  },
  [?] createdAt: {
    [?] type: Date, // Type is modeled as string in Nitro which is incorrect
    [ ] default: Date.now // Default value is missing in Nitro
  },
  [?] updatedAt: {
    [?] type: Date, // Type is modeled as string in Nitro which is incorrect
    [ ] default: Date.now // Default value is missing in Nitro
  },
  [ ] wasSeeded: { type: Boolean }
}
*/
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as post
MERGE (p:Post {id: post._id["$oid"]})
ON CREATE SET
  p.title = post.title,
  p.slug = post.slug,
  p.image = replace(post.teaserImg, 'https://api-alpha.human-connection.org', ''),
  p.content = post.content,
  p.contentExcerpt = post.contentExcerpt,
  p.visibility = toLower(post.visibility),
  p.createdAt = post.createdAt.`$date`,
  p.updatedAt = post.updatedAt.`$date`,
  p.deleted = COALESCE(post.deleted, false),
  p.disabled = COALESCE(NOT post.isEnabled, false)
WITH p, post
MATCH (u:User {id: post.userId})
MERGE (u)-[:WROTE]->(p)
WITH p, post, post.categoryIds as categoryIds
UNWIND categoryIds AS categoryId
MATCH (c:Category {id: categoryId})
MERGE (p)-[:CATEGORIZED]->(c)
WITH p, post.tags AS tags
UNWIND tags AS tag
WITH apoc.text.replace(tag, '[^\\p{L}0-9]', '') as tagNoSpacesAllowed
CALL apoc.when(tagNoSpacesAllowed =~ '^((\\p{L}+[\\p{L}0-9]*)|([0-9]+\\p{L}+[\\p{L}0-9]*))$', 'RETURN tagNoSpacesAllowed', '', {tagNoSpacesAllowed: tagNoSpacesAllowed})
YIELD value as validated
WHERE validated.tagNoSpacesAllowed IS NOT NULL
MERGE (t:Tag { id: validated.tagNoSpacesAllowed, disabled: false, deleted: false })
MERGE (p)-[:TAGGED]->(t)
;
@@ -0,0 +1,2 @@
MATCH (n:Post) DETACH DELETE n;
MATCH (n:Tag) DETACH DELETE n;
@@ -0,0 +1 @@
MATCH (n) DETACH DELETE n;
@@ -0,0 +1 @@
MATCH (u:User)-[e:EMOTED]->(c:Post) DETACH DELETE e;
@@ -0,0 +1,58 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
  [X] userId: {
    [X] type: String,
    [X] required: true,
    [-] index: true
  },
  [X] contributionId: {
    [X] type: String,
    [X] required: true,
    [-] index: true
  },
  [?] rated: {
    [X] type: String,
    [ ] required: true,
    [?] enum: ['funny', 'happy', 'surprised', 'cry', 'angry']
  },
  [X] createdAt: {
    [X] type: Date,
    [X] default: Date.now
  },
  [X] updatedAt: {
    [X] type: Date,
    [X] default: Date.now
  },
  [-] wasSeeded: { type: Boolean }
}
*/

CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as emotion
MATCH (u:User {id: emotion.userId}),
      (c:Post {id: emotion.contributionId})
MERGE (u)-[e:EMOTED {
  id: emotion._id["$oid"],
  emotion: emotion.rated,
  createdAt: datetime(emotion.createdAt.`$date`),
  updatedAt: datetime(emotion.updatedAt.`$date`)
}]->(c)
RETURN e;
/*
// Queries
// user sets an emotion:
// MERGE (u)-[e:EMOTED {id: ..., emotion: "funny", createdAt: ..., updatedAt: ...}]->(c)
// user removes an emotion:
// MATCH (u)-[e:EMOTED]->(c) DELETE e
// contribution distributions over every `emotion` property value for one post
// MATCH (u:User)-[e:EMOTED]->(c:Post {id: "5a70bbc8508f5b000b443b1a"}) RETURN e.emotion,COUNT(e.emotion)
// contribution distributions over every `emotion` property value for one user (advanced - "what's the primary emotion used by the user?")
// MATCH (u:User{id:"5a663b1ac64291000bf302a1"})-[e:EMOTED]->(c:Post) RETURN e.emotion,COUNT(e.emotion)
// contribution distributions over every `emotion` property value for all posts created by one user (advanced - "how do others react to my contributions?")
// MATCH (u:User)-[e:EMOTED]->(c:Post)<-[w:WROTE]-(a:User{id:"5a663b1ac64291000bf302a1"}) RETURN e.emotion,COUNT(e.emotion)
// if we can filter the above on a variable timescale that would be great (should be possible on the createdAt and updatedAt fields)
*/
@@ -0,0 +1 @@
MATCH (u1:User)-[f:FOLLOWS]->(u2:User) DETACH DELETE f;
@@ -0,0 +1,36 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
  [?] userId: {
    [-] type: String,
    [ ] required: true,
    [-] index: true
  },
  [?] foreignId: {
    [ ] type: String,
    [ ] required: true,
    [-] index: true
  },
  [?] foreignService: { // db.getCollection('follows').distinct('foreignService') returns 'organizations' and 'users'
    [ ] type: String,
    [ ] required: true,
    [ ] index: true
  },
  [ ] createdAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] wasSeeded: { type: Boolean }
}
index:
  [?] { userId: 1, foreignId: 1, foreignService: 1 },{ unique: true } // is the unique constraint modeled?
*/

CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as follow
MATCH (u1:User {id: follow.userId}), (u2:User {id: follow.foreignId})
MERGE (u1)-[:FOLLOWS]->(u2)
;
deployment/old/legacy-migration/maintenance-worker/migration/neo4j/import.sh (new executable file, 108 lines)
@@ -0,0 +1,108 @@
#!/usr/bin/env bash
set -e

# import .env config
set -o allexport
source $(dirname "$0")/.env
set +o allexport

# Delete collection function definition
function delete_collection () {
  # Delete from Database
  echo "Delete $2"
  "${IMPORT_CYPHERSHELL_BIN}" < $(dirname "$0")/$1/delete.cql > /dev/null
  # Delete index file
  rm -f "${IMPORT_PATH}splits/$2.index"
}

# Import collection function definition
function import_collection () {
  # index file of those chunks we have already imported
  INDEX_FILE="${IMPORT_PATH}splits/$1.index"
  # load index file
  if [ -f "$INDEX_FILE" ]; then
    readarray -t IMPORT_INDEX <$INDEX_FILE
  else
    declare -a IMPORT_INDEX
  fi
  # for each chunk import data
  for chunk in ${IMPORT_PATH}splits/$1/*
  do
    CHUNK_FILE_NAME=$(basename "${chunk}")
    # does the index not contain the chunk file name?
    if [[ ! " ${IMPORT_INDEX[@]} " =~ " ${CHUNK_FILE_NAME} " ]]; then
      # calculate the path of the chunk
      export IMPORT_CHUNK_PATH_CQL_FILE="${IMPORT_CHUNK_PATH_CQL}$1/${CHUNK_FILE_NAME}"
      # load the neo4j command and replace the file variable with the actual path
      NEO4J_COMMAND="$(envsubst '${IMPORT_CHUNK_PATH_CQL_FILE}' < $(dirname "$0")/$2)"
      # run the import of the chunk
      echo "Import $1 ${CHUNK_FILE_NAME} (${chunk})"
      echo "${NEO4J_COMMAND}" | "${IMPORT_CYPHERSHELL_BIN}" > /dev/null
      # add file to array and file
      IMPORT_INDEX+=("${CHUNK_FILE_NAME}")
      echo "${CHUNK_FILE_NAME}" >> ${INDEX_FILE}
    else
      echo "Skipping $1 ${CHUNK_FILE_NAME} (${chunk})"
    fi
  done
}

# Time variable
SECONDS=0

# Delete all Neo4J database content
echo "Deleting Database Contents"
delete_collection "badges" "badges"
delete_collection "categories" "categories"
delete_collection "users" "users"
delete_collection "follows" "follows_users"
delete_collection "contributions" "contributions_post"
delete_collection "contributions" "contributions_cando"
delete_collection "shouts" "shouts"
delete_collection "comments" "comments"
delete_collection "emotions" "emotions"

#delete_collection "invites"
#delete_collection "notifications"
#delete_collection "organizations"
#delete_collection "pages"
#delete_collection "projects"
#delete_collection "settings"
#delete_collection "status"
#delete_collection "systemnotifications"
#delete_collection "userscandos"
#delete_collection "usersettings"
echo "DONE"

# Import Data
echo "Start Importing Data"
import_collection "badges" "badges/badges.cql"
import_collection "categories" "categories/categories.cql"
import_collection "users_verified" "users/users.cql"
import_collection "follows_users" "follows/follows.cql"
#import_collection "follows_organizations" "follows/follows.cql"
import_collection "contributions_post" "contributions/contributions.cql"
#import_collection "contributions_cando" "contributions/contributions.cql"
#import_collection "contributions_DELETED" "contributions/contributions.cql"
import_collection "shouts" "shouts/shouts.cql"
import_collection "comments" "comments/comments.cql"
import_collection "emotions" "emotions/emotions.cql"

# import_collection "invites"
# import_collection "notifications"
# import_collection "organizations"
# import_collection "pages"
# import_collection "systemnotifications"
# import_collection "userscandos"
# import_collection "usersettings"

# does only contain dummy data
# import_collection "projects"

# does only contain alpha-specific data
# import_collection "status"
# import_collection "settings"

echo "DONE"

echo "Time elapsed: $SECONDS seconds"
@@ -0,0 +1,39 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
  [ ] email: {
    [ ] type: String,
    [ ] required: true,
    [-] index: true,
    [ ] unique: true
  },
  [ ] code: {
    [ ] type: String,
    [-] index: true,
    [ ] required: true
  },
  [ ] role: {
    [ ] type: String,
    [ ] enum: ['admin', 'moderator', 'manager', 'editor', 'user'],
    [ ] default: 'user'
  },
  [ ] invitedByUserId: { type: String },
  [ ] language: { type: String },
  [ ] badgeIds: [],
  [ ] wasUsed: {
    [ ] type: Boolean,
    [-] index: true
  },
  [ ] createdAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] wasSeeded: { type: Boolean }
}
*/

CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as invite;
@@ -0,0 +1,48 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
  [ ] userId: { // User this notification is sent to
    [ ] type: String,
    [ ] required: true,
    [-] index: true
  },
  [ ] type: {
    [ ] type: String,
    [ ] required: true,
    [ ] enum: ['comment','comment-mention','contribution-mention','following-contribution']
  },
  [ ] relatedUserId: {
    [ ] type: String,
    [-] index: true
  },
  [ ] relatedContributionId: {
    [ ] type: String,
    [-] index: true
  },
  [ ] relatedOrganizationId: {
    [ ] type: String,
    [-] index: true
  },
  [ ] relatedCommentId: { type: String },
  [ ] unseen: {
    [ ] type: Boolean,
    [ ] default: true,
    [-] index: true
  },
  [ ] createdAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] updatedAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] wasSeeded: { type: Boolean }
}
*/

CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as notification;
@@ -0,0 +1,137 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
  [ ] name: {
    [ ] type: String,
    [ ] required: true,
    [-] index: true
  },
  [ ] slug: {
    [ ] type: String,
    [ ] required: true,
    [ ] unique: true,
    [-] index: true
  },
  [ ] followersCounts: {
    [ ] users: {
      [ ] type: Number,
      [ ] default: 0
    },
    [ ] organizations: {
      [ ] type: Number,
      [ ] default: 0
    },
    [ ] projects: {
      [ ] type: Number,
      [ ] default: 0
    }
  },
  [ ] followingCounts: {
    [ ] users: {
      [ ] type: Number,
      [ ] default: 0
    },
    [ ] organizations: {
      [ ] type: Number,
      [ ] default: 0
    },
    [ ] projects: {
      [ ] type: Number,
      [ ] default: 0
    }
  },
  [ ] categoryIds: {
    [ ] type: Array,
    [ ] required: true,
    [-] index: true
  },
  [ ] logo: { type: String },
  [ ] coverImg: { type: String },
  [ ] userId: {
    [ ] type: String,
    [ ] required: true,
    [-] index: true
  },
  [ ] description: {
    [ ] type: String,
    [ ] required: true
  },
  [ ] descriptionExcerpt: { type: String }, // will be generated automatically
  [ ] publicEmail: { type: String },
  [ ] url: { type: String },
  [ ] type: {
    [ ] type: String,
    [-] index: true,
    [ ] enum: ['ngo', 'npo', 'goodpurpose', 'ev', 'eva']
  },
  [ ] language: {
    [ ] type: String,
    [ ] required: true,
    [ ] default: 'de',
    [-] index: true
  },
  [ ] addresses: {
    [ ] type: [{
      [ ] street: {
        [ ] type: String,
        [ ] required: true
      },
      [ ] zipCode: {
        [ ] type: String,
        [ ] required: true
      },
      [ ] city: {
        [ ] type: String,
        [ ] required: true
      },
      [ ] country: {
        [ ] type: String,
        [ ] required: true
      },
      [ ] lat: {
        [ ] type: Number,
        [ ] required: true
      },
      [ ] lng: {
        [ ] type: Number,
        [ ] required: true
      }
    }],
    [ ] default: []
  },
  [ ] createdAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] updatedAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] isEnabled: {
    [ ] type: Boolean,
    [ ] default: false,
    [-] index: true
  },
  [ ] reviewedBy: {
    [ ] type: String,
    [ ] default: null,
    [-] index: true
  },
  [ ] tags: {
    [ ] type: Array,
    [-] index: true
  },
  [ ] deleted: {
    [ ] type: Boolean,
    [ ] default: false,
    [-] index: true
  },
  [ ] wasSeeded: { type: Boolean }
}
*/

CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as organisation;
@ -0,0 +1,55 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
  [ ] title: {
    [ ] type: String,
    [ ] required: true
  },
  [ ] slug: {
    [ ] type: String,
    [ ] required: true,
    [-] index: true
  },
  [ ] type: {
    [ ] type: String,
    [ ] required: true,
    [ ] default: 'page'
  },
  [ ] key: {
    [ ] type: String,
    [ ] required: true,
    [-] index: true
  },
  [ ] content: {
    [ ] type: String,
    [ ] required: true
  },
  [ ] language: {
    [ ] type: String,
    [ ] required: true,
    [-] index: true
  },
  [ ] active: {
    [ ] type: Boolean,
    [ ] default: true,
    [-] index: true
  },
  [ ] createdAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] updatedAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] wasSeeded: { type: Boolean }
}

index:
[ ] { slug: 1, language: 1 },{ unique: true }
*/

CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as page;
@ -0,0 +1,44 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
  [ ] name: {
    [ ] type: String,
    [ ] required: true
  },
  [ ] slug: { type: String },
  [ ] followerIds: [],
  [ ] categoryIds: { type: Array },
  [ ] logo: { type: String },
  [ ] userId: {
    [ ] type: String,
    [ ] required: true
  },
  [ ] description: {
    [ ] type: String,
    [ ] required: true
  },
  [ ] content: {
    [ ] type: String,
    [ ] required: true
  },
  [ ] addresses: {
    [ ] type: Array,
    [ ] default: []
  },
  [ ] createdAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] updatedAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] wasSeeded: { type: Boolean }
}
*/

CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as project;
@ -0,0 +1,36 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
  [ ] key: {
    [ ] type: String,
    [ ] default: 'system',
    [-] index: true,
    [ ] unique: true
  },
  [ ] invites: {
    [ ] userCanInvite: {
      [ ] type: Boolean,
      [ ] required: true,
      [ ] default: false
    },
    [ ] maxInvitesByUser: {
      [ ] type: Number,
      [ ] required: true,
      [ ] default: 1
    },
    [ ] onlyUserWithBadgesCanInvite: {
      [ ] type: Array,
      [ ] default: []
    }
  },
  [ ] maintenance: false
}, {
  [ ] timestamps: true
}
*/

CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as setting;
@ -0,0 +1 @@
// this is just a relation between users and contributions - no need to delete
@ -0,0 +1,36 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
  [?] userId: {
    [X] type: String,
    [ ] required: true, // Not required in Nitro
    [-] index: true
  },
  [?] foreignId: {
    [X] type: String,
    [ ] required: true, // Not required in Nitro
    [-] index: true
  },
  [?] foreignService: { // db.getCollection('shots').distinct('foreignService') returns 'contributions'
    [X] type: String,
    [ ] required: true, // Not required in Nitro
    [-] index: true
  },
  [ ] createdAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] wasSeeded: { type: Boolean }
}

index:
[?] { userId: 1, foreignId: 1 },{ unique: true } // is the unique constraint modeled?
*/

CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as shout
MATCH (u:User {id: shout.userId}), (p:Post {id: shout.foreignId})
MERGE (u)-[:SHOUTED]->(p)
;
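
Regarding the open question above: `MERGE` creates at most one `SHOUTED` relationship per user/post pair, so the compound unique index is covered operationally. A quick sanity check after the import could look like this (a sketch, run e.g. via `cypher-shell` inside the Neo4J pod):

```sh
# count user/post pairs with more than one SHOUTED relationship - expected: 0
$ cypher-shell "MATCH (u:User)-[r:SHOUTED]->(p:Post) WITH u, p, count(r) AS c WHERE c > 1 RETURN count(*);"
```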
@ -0,0 +1,19 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
  [ ] maintenance: {
    [ ] type: Boolean,
    [ ] default: false
  },
  [ ] updatedAt: {
    [ ] type: Date,
    [ ] default: Date.now
  }
}
*/

CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as status;
@ -0,0 +1,61 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
  [ ] type: {
    [ ] type: String,
    [ ] default: 'info',
    [ ] required: true,
    [-] index: true
  },
  [ ] title: {
    [ ] type: String,
    [ ] required: true
  },
  [ ] content: {
    [ ] type: String,
    [ ] required: true
  },
  [ ] slot: {
    [ ] type: String,
    [ ] required: true,
    [-] index: true
  },
  [ ] language: {
    [ ] type: String,
    [ ] required: true,
    [-] index: true
  },
  [ ] permanent: {
    [ ] type: Boolean,
    [ ] default: false
  },
  [ ] requireConfirmation: {
    [ ] type: Boolean,
    [ ] default: false
  },
  [ ] active: {
    [ ] type: Boolean,
    [ ] default: true,
    [-] index: true
  },
  [ ] totalCount: {
    [ ] type: Number,
    [ ] default: 0
  },
  [ ] createdAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] updatedAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] wasSeeded: { type: Boolean }
}
*/

CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as systemnotification;
@ -0,0 +1,2 @@
MATCH (n:User) DETACH DELETE n;
MATCH (e:EmailAddress) DETACH DELETE e;
@ -0,0 +1,124 @@
|
|||||||
|
/*
|
||||||
|
// Alpha Model
|
||||||
|
// [ ] Not modeled in Nitro
|
||||||
|
// [X] Modeled in Nitro
|
||||||
|
// [-] Omitted in Nitro
|
||||||
|
// [?] Unclear / has work to be done for Nitro
|
||||||
|
{
|
||||||
|
[?] email: {
|
||||||
|
[X] type: String,
|
||||||
|
[-] index: true,
|
||||||
|
[X] required: true,
|
||||||
|
[?] unique: true //unique constrain missing in Nitro
|
||||||
|
},
|
||||||
|
[?] password: { // Not required in Alpha -> verify if always present
|
||||||
|
[X] type: String
|
||||||
|
},
|
||||||
|
[X] name: { type: String },
|
||||||
|
[X] slug: {
|
||||||
|
[X] type: String,
|
||||||
|
[-] index: true
|
||||||
|
},
|
||||||
|
[ ] gender: { type: String },
|
||||||
|
[ ] followersCounts: {
|
||||||
|
[ ] users: {
|
||||||
|
[ ] type: Number,
|
||||||
|
[ ] default: 0
|
||||||
|
},
|
||||||
|
[ ] organizations: {
|
||||||
|
[ ] type: Number,
|
||||||
|
[ ] default: 0
|
||||||
|
},
|
||||||
|
[ ] projects: {
|
||||||
|
[ ] type: Number,
|
||||||
|
[ ] default: 0
|
||||||
|
}
|
||||||
|
},
|
||||||
|
[ ] followingCounts: {
|
||||||
|
[ ] users: {
|
||||||
|
[ ] type: Number,
|
||||||
|
[ ] default: 0
|
||||||
|
},
|
||||||
|
[ ] organizations: {
|
||||||
|
[ ] type: Number,
|
||||||
|
[ ] default: 0
|
||||||
|
},
|
||||||
|
[ ] projects: {
|
||||||
|
[ ] type: Number,
|
||||||
|
[ ] default: 0
|
||||||
|
}
|
||||||
|
},
|
||||||
|
[ ] timezone: { type: String },
|
||||||
|
[X] avatar: { type: String },
|
||||||
|
[X] coverImg: { type: String },
|
||||||
|
[ ] doiToken: { type: String },
|
||||||
|
[ ] confirmedAt: { type: Date },
|
||||||
|
[?] badgeIds: [], // Verify this is working properly
|
||||||
|
[?] deletedAt: { type: Date }, // The Date of deletion is not saved in Nitro
|
||||||
|
[?] createdAt: {
|
||||||
|
[?] type: Date, // Modeled as String in Nitro
|
||||||
|
[ ] default: Date.now // Default value is missing in Nitro
|
||||||
|
},
|
||||||
|
[?] updatedAt: {
|
||||||
|
[?] type: Date, // Modeled as String in Nitro
|
||||||
|
[ ] default: Date.now // Default value is missing in Nitro
|
||||||
|
},
|
||||||
|
[ ] lastActiveAt: {
|
||||||
|
[ ] type: Date,
|
||||||
|
[ ] default: Date.now
|
||||||
|
},
|
||||||
|
[ ] isVerified: { type: Boolean },
|
||||||
|
[?] role: {
|
||||||
|
[X] type: String,
|
||||||
|
[-] index: true,
|
||||||
|
[?] enum: ['admin', 'moderator', 'manager', 'editor', 'user'], // missing roles manager & editor in Nitro
|
||||||
|
[ ] default: 'user' // Default value is missing in Nitro
|
||||||
|
},
|
||||||
|
[ ] verifyToken: { type: String },
|
||||||
|
[ ] verifyShortToken: { type: String },
|
||||||
|
[ ] verifyExpires: { type: Date },
|
||||||
|
[ ] verifyChanges: { type: Object },
|
||||||
|
[ ] resetToken: { type: String },
|
||||||
|
[ ] resetShortToken: { type: String },
|
||||||
|
[ ] resetExpires: { type: Date },
|
||||||
|
[X] wasSeeded: { type: Boolean },
|
||||||
|
[X] wasInvited: { type: Boolean },
|
||||||
|
[ ] language: {
|
||||||
|
[ ] type: String,
|
||||||
|
[ ] default: 'en'
|
||||||
|
},
|
||||||
|
[ ] termsAndConditionsAccepted: { type: Date }, // we display the terms and conditions on registration
|
||||||
|
[ ] systemNotificationsSeen: {
|
||||||
|
[ ] type: Array,
|
||||||
|
[ ] default: []
|
||||||
|
}
|
||||||
|
}
|
||||||
|
*/
|
||||||
|
CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as user
|
||||||
|
MERGE(u:User {id: user._id["$oid"]})
|
||||||
|
ON CREATE SET
|
||||||
|
u.name = user.name,
|
||||||
|
u.slug = COALESCE(user.slug, apoc.text.random(20, "[A-Za-z]")),
|
||||||
|
u.email = user.email,
|
||||||
|
u.encryptedPassword = user.password,
|
||||||
|
u.avatar = replace(user.avatar, 'https://api-alpha.human-connection.org', ''),
|
||||||
|
u.coverImg = replace(user.coverImg, 'https://api-alpha.human-connection.org', ''),
|
||||||
|
u.wasInvited = user.wasInvited,
|
||||||
|
u.wasSeeded = user.wasSeeded,
|
||||||
|
u.role = toLower(user.role),
|
||||||
|
u.createdAt = user.createdAt.`$date`,
|
||||||
|
u.updatedAt = user.updatedAt.`$date`,
|
||||||
|
u.deleted = user.deletedAt IS NOT NULL,
|
||||||
|
u.disabled = false
|
||||||
|
MERGE (e:EmailAddress {
|
||||||
|
email: user.email,
|
||||||
|
createdAt: toString(datetime()),
|
||||||
|
verifiedAt: toString(datetime())
|
||||||
|
})
|
||||||
|
MERGE (e)-[:BELONGS_TO]->(u)
|
||||||
|
MERGE (u)-[:PRIMARY_EMAIL]->(e)
|
||||||
|
WITH u, user, user.badgeIds AS badgeIds
|
||||||
|
UNWIND badgeIds AS badgeId
|
||||||
|
MATCH (b:Badge {id: badgeId})
|
||||||
|
MERGE (b)-[:REWARDED]->(u)
|
||||||
|
;
|
||||||
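
The `[?] unique: true` note above flags a unique e-mail constraint that is missing in Nitro. A hedged sketch of how it could be added after the import, run via `cypher-shell` inside the Neo4J pod (Neo4j 3.5 constraint syntax assumed):

```sh
# create the missing unique constraint on email addresses and verify it
$ cypher-shell "CREATE CONSTRAINT ON (e:EmailAddress) ASSERT e.email IS UNIQUE;"
$ cypher-shell "CALL db.constraints();"
```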
@ -0,0 +1,35 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
  [ ] userId: {
    [ ] type: String,
    [ ] required: true
  },
  [ ] contributionId: {
    [ ] type: String,
    [ ] required: true
  },
  [ ] done: {
    [ ] type: Boolean,
    [ ] default: false
  },
  [ ] doneAt: { type: Date },
  [ ] createdAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] updatedAt: {
    [ ] type: Date,
    [ ] default: Date.now
  },
  [ ] wasSeeded: { type: Boolean }
}

index:
[ ] { userId: 1, contributionId: 1 },{ unique: true }
*/

CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as usercando;
@ -0,0 +1,43 @@
/*
// Alpha Model
// [ ] Not modeled in Nitro
// [X] Modeled in Nitro
// [-] Omitted in Nitro
// [?] Unclear / has work to be done for Nitro
{
  [ ] userId: {
    [ ] type: String,
    [ ] required: true,
    [ ] unique: true
  },
  [ ] blacklist: {
    [ ] type: Array,
    [ ] default: []
  },
  [ ] uiLanguage: {
    [ ] type: String,
    [ ] required: true
  },
  [ ] contentLanguages: {
    [ ] type: Array,
    [ ] default: []
  },
  [ ] filter: {
    [ ] categoryIds: {
      [ ] type: Array,
      [ ] index: true
    },
    [ ] emotions: {
      [ ] type: Array,
      [ ] index: true
    }
  },
  [ ] hideUsersWithoutTermsOfUseSigniture: { type: Boolean },
  [ ] updatedAt: {
    [ ] type: Date,
    [ ] default: Date.now
  }
}
*/

CALL apoc.load.json("file:${IMPORT_CHUNK_PATH_CQL_FILE}") YIELD value as usersetting;
40
deployment/old/mailserver/Deployment.yaml
Normal file
@ -0,0 +1,40 @@
{{- if .Values.developmentMailserverDomain }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-mailserver
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  replicas: 1
  minReadySeconds: 15
  progressDeadlineSeconds: 60
  selector:
    matchLabels:
      ocelot.social/selector: deployment-mailserver
  template:
    metadata:
      labels:
        ocelot.social/selector: deployment-mailserver
      name: mailserver
    spec:
      containers:
        - name: mailserver
          image: djfarrelly/maildev
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: 80
            - containerPort: 25
          envFrom:
            - configMapRef:
                name: {{ .Release.Name }}-configmap
            - secretRef:
                name: {{ .Release.Name }}-secrets
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
status: {}
{{- end}}
18
deployment/old/mailserver/README.md
Normal file
@ -0,0 +1,18 @@
# Development Mail Server

You can deploy a fake SMTP server which captures all sent mails and displays
them in a web interface. The [sample configuration](../templates/configmap.template.yaml)
assumes such a dummy server in the `SMTP_HOST` configuration and points to
a cluster-internal SMTP server.

To deploy the SMTP server just uncomment the relevant code in the
[ingress server configuration](../../https/templates/ingress.template.yaml) and
run the following:

```bash
# in folder deployment/ocelot-social
$ kubectl apply -f mailserver/
```

You might need to refresh the TLS secret to enable HTTPS on the publicly
available web interface.
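
If you do not want to expose the web interface publicly at all, a port-forward to the mail service is usually enough during development (a sketch; the service name is assumed to follow the Helm release naming used above):

```bash
# forward the maildev web UI to your machine, then open http://localhost:1080
$ kubectl port-forward service/<RELEASE_NAME>-mailserver 1080:80
```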
22
deployment/old/mailserver/Service.yaml
Normal file
@ -0,0 +1,22 @@
{{- if .Values.developmentMailserverDomain }}
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-mailserver
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  ports:
    - name: web
      port: 80
      targetPort: 80
    - name: smtp
      port: 25
      targetPort: 25
  selector:
    ocelot.social/selector: deployment-mailserver
{{- end}}
42
deployment/old/mailserver/ingress.yaml
Normal file
@ -0,0 +1,42 @@
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-{{ .Release.Name }}-webapp
  labels:
    app.kubernetes.io/name: "{{ .Chart.Name }}"
    app.kubernetes.io/instance: "{{ .Release.Name }}"
    app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
    app.kubernetes.io/component: "ingress webapp"
    app.kubernetes.io/part-of: "{{ .Chart.Name }}"
    app.kubernetes.io/managed-by: "{{ .Release.Service }}"
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: {{ .Values.LETSENCRYPT.ISSUER }}
    nginx.ingress.kubernetes.io/proxy-body-size: {{ .Values.NGINX.PROXY_BODY_SIZE }}
spec:
  tls:
    - hosts:
        - {{ .Values.LETSENCRYPT.DOMAIN }}
      secretName: tls
  rules:
    - host: {{ .Values.LETSENCRYPT.DOMAIN }}
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: {{ .Release.Name }}-webapp
                port:
                  number: 3000

#{{- if .Values.developmentMailserverDomain }}
#    - host: {{ .Values.developmentMailserverDomain }}
#      http:
#        paths:
#          - path: /
#            backend:
#              serviceName: {{ .Release.Name }}-mailserver
#              servicePort: 80
#{{- end }}
43
deployment/old/monitoring/README.md
Normal file
@ -0,0 +1,43 @@
# Metrics

You can optionally set up [prometheus](https://prometheus.io/) and
[grafana](https://grafana.com/) for metrics.

We follow [this tutorial](https://medium.com/@chris_linguine/how-to-monitor-your-kubernetes-cluster-with-prometheus-and-grafana-2d5704187fc8):

```bash
kubectl proxy # proxy to your kubernetes dashboard

helm repo list
# If using helm v3, the stable repository is not set, so you need to manually add it.
helm repo add stable https://kubernetes-charts.storage.googleapis.com
# Create a monitoring namespace for your cluster
kubectl create namespace monitoring
helm --namespace monitoring install prometheus stable/prometheus
kubectl -n monitoring get pods # look for 'server'
kubectl port-forward -n monitoring <PROMETHEUS_SERVER_ID> 9090
# You can now see your prometheus server on: http://localhost:9090

# Make sure you are in folder `deployment/`
kubectl apply -f monitoring/grafana/config.yml
helm --namespace monitoring install grafana stable/grafana -f monitoring/grafana/values.yml
# Get the admin password for grafana from your kubernetes dashboard.
kubectl --namespace monitoring port-forward <POD_NAME> 3000
# You can now see your grafana dashboard on: http://localhost:3000
# Login with user 'admin' and the password you just looked up.
# In your dashboard import this dashboard:
# https://grafana.com/grafana/dashboards/1860
# Enter ID 1860 and choose "Prometheus" as datasource.
# You got metrics!
```

Now you should see something like this:

![Grafana metrics](./grafana/metrics.png)

You can set up a grafana dashboard by visiting https://grafana.com/dashboards, finding one that is suitable and copying its id.
You then go to the left-hand menu in localhost and choose `Dashboard` > `Manage` > `Import`.
Paste in the id, click `Load`, select `Prometheus` for the data source, and click `Import`.

When you just installed prometheus and grafana, the data will not be available
immediately, so wait for a couple of minutes and reload.
16
deployment/old/monitoring/grafana/config.yml
Normal file
@ -0,0 +1,16 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-grafana-datasource
  namespace: monitoring
  labels:
    grafana_datasource: '1'
data:
  datasource.yaml: |-
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        orgId: 1
        url: http://prometheus-server.monitoring.svc.cluster.local
BIN
deployment/old/monitoring/grafana/metrics.png
Normal file
Binary file not shown.
4
deployment/old/monitoring/grafana/values.yml
Normal file
@ -0,0 +1,4 @@
sidecar:
  datasources:
    enabled: true
    label: grafana_datasource
37
deployment/old/volumes/README.md
Normal file
@ -0,0 +1,37 @@
# Persistent Volumes

At the moment, the application needs two persistent volumes:

* The `/data/` folder where `neo4j` stores its database and
* the folder `/develop-backend/public/uploads` where the backend stores uploads, in case you don't use Digital Ocean Spaces (or an AWS S3 bucket) for this purpose.

As a matter of precaution, the persistent volume claims that set up these volumes
live in a separate folder. You don't want to accidentally lose all the data in
your database by running

```sh
kubectl delete -f ocelot-social/
```

or do you?

## Create Persistent Volume Claims

Run the following:

```sh
# in folder deployment/
$ kubectl apply -f volumes
persistentvolumeclaim/neo4j-data-claim created
persistentvolumeclaim/uploads-claim created
```
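
You can verify that both claims are bound before you continue (a sketch; assumes the claims live in the `ocelot-social` namespace as in the manifests below):

```sh
$ kubectl -n ocelot-social get pvc
```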

## Backup And Restore

We tested a couple of options for disaster recovery in kubernetes. First,
there is the [offline backup strategy](./neo4j-offline-backup/README.md) of the
community edition of Neo4J, which you can also run on a local installation.
Kubernetes also offers so-called [volume snapshots](./volume-snapshots/README.md).
Changing the [reclaim policy](./reclaim-policy/README.md) of your persistent
volumes might be an additional safety measure. Finally, there is also a
kubernetes specific disaster recovery tool called [Velero](./velero/README.md).
12
deployment/old/volumes/neo4j-data.yaml
Normal file
@ -0,0 +1,12 @@
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: neo4j-data-claim
  namespace: ocelot-social
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "10Gi" # see requirements for Neo4j v3.5.14 https://neo4j.com/docs/operations-manual/3.5/installation/requirements/
88
deployment/old/volumes/neo4j-offline-backup/README.md
Normal file
@ -0,0 +1,88 @@
# Backup (offline)

This tutorial explains how to carry out an offline backup of your Neo4J
database in a kubernetes cluster.

An offline backup requires the Neo4J database to be stopped. Read
[the docs](https://neo4j.com/docs/operations-manual/current/tools/dump-load/).
Neo4J also offers online backups, but those are available in the enterprise
edition only.

The tricky part is to stop the Neo4J database *without* stopping the container.
Neo4J's docker container image starts `neo4j` by default, so we have to override
this command with something that keeps the container spinning but does not
terminate it.

## Stop and Restart Neo4J Database in Kubernetes

[This tutorial](http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/)
explains how to keep a docker container running. For kubernetes, the way to
override the docker image `CMD` is explained [here](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#define-a-command-and-arguments-when-you-create-a-pod).

So, all we have to do is edit the kubernetes deployment of our Neo4J database
and set a custom `command` every time we have to carry out tasks like backup,
restore, seed etc.

First bring the application into [maintenance mode](https://github.com/Ocelot-Social-Community/Ocelot-Social/blob/master/deployment/ocelot-social/maintenance/README.md) to ensure there are no
database connections left and nobody can access the application.

Run the following:

```sh
$ kubectl -n ocelot-social edit deployment develop-neo4j
```

Add the following `command` to the container entry under
`spec.template.spec.containers`:

```yaml
command: ["tail", "-f", "/dev/null"]
```

and write the file which will update the deployment.

The command `tail -f /dev/null` is the equivalent of *sleep forever*. It is a
hack to keep the container busy and to prevent its shutdown. It will also
override the default `neo4j` command and the kubernetes pod will not start the
database.

Now perform your tasks!

When you're done, edit the deployment again and remove the `command`. Write the
file and trigger an update of the deployment.
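
Editing the deployment by hand works, but the same override can also be applied and reverted non-interactively, e.g. with a JSON patch (a sketch, assuming the Neo4J container is the first container in the pod spec):

```sh
# stop the database: override the container command with *sleep forever*
$ kubectl -n ocelot-social patch deployment develop-neo4j --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/command", "value": ["tail", "-f", "/dev/null"]}]'

# ...perform your backup / restore / seed tasks...

# restart the database: remove the override again
$ kubectl -n ocelot-social patch deployment develop-neo4j --type=json \
  -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]'
```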

## Create a Backup in Kubernetes

First stop your Neo4J database, see above. Then:

```sh
$ kubectl -n ocelot-social get pods
# Copy the ID of the pod running Neo4J.
$ kubectl -n ocelot-social exec -it <POD-ID> bash
# Once you're in the pod, dump the db to a file e.g. `/root/neo4j-backup`.
> neo4j-admin dump --to=/root/neo4j-backup
> exit
# Download the file from the pod to your computer.
$ kubectl cp ocelot-social/<POD-ID>:/root/neo4j-backup ./neo4j-backup
```

Revert your changes to deployment `develop-neo4j` which will restart the database.

## Restore a Backup in Kubernetes

First stop your Neo4J database. Then:

```sh
$ kubectl -n ocelot-social get pods
# Copy the ID of the pod running Neo4J.
# Then upload your local backup to the pod. Note that once the pod gets deleted,
# e.g. if you change the deployment, the backup file is gone with it.
$ kubectl cp ./neo4j-backup ocelot-social/<POD-ID>:/root/
$ kubectl -n ocelot-social exec -it <POD-ID> bash
# Once you're in the pod, restore the backup and overwrite the default database
# called `graph.db` with `--force`.
# This will delete all existing data in database `graph.db`!
> neo4j-admin load --from=/root/neo4j-backup --force
> exit
```

Revert your changes to deployment `develop-neo4j` which will restart the database.
59
deployment/old/volumes/neo4j-online-backup/README.md
Normal file
@ -0,0 +1,59 @@
# Backup (online)

## Online backups are only available with Neo4j Enterprise and a license, see https://neo4j.com/licensing/ for the different licenses available

This tutorial explains how to carry out an online backup of your Neo4J
database in a kubernetes cluster.

One of the benefits of doing an online backup is that the Neo4j database does not need to be stopped, so there is no downtime. Read [the docs](https://neo4j.com/docs/operations-manual/current/backup/performing/).

To use Neo4j Enterprise you must add this line to your configmap (if you use one) or to the env of your deployment `develop-neo4j`:

```yaml
NEO4J_ACCEPT_LICENSE_AGREEMENT: "yes"
```

## Create a Backup in Kubernetes

```sh
# Backup the database with one command: this will get the develop-neo4j pod, exec into it, and run the backup command
$ kubectl -n=human-connection exec -it $(kubectl -n=human-connection get pods | grep develop-neo4j | awk '{ print $1 }') -- neo4j-admin backup --backup-dir=/var/lib/neo4j --name=neo4j-backup
# Download the file from the pod to your computer.
$ kubectl cp human-connection/$(kubectl -n=human-connection get pods | grep develop-neo4j | awk '{ print $1 }'):/var/lib/neo4j/neo4j-backup ./neo4j-backup/
```

You should now have a backup of the database locally. If you want, you can simulate disaster recovery by exec-ing into the develop-neo4j pod, deleting all data and restoring from backup.

## Disaster where database data is gone somehow

```sh
$ kubectl -n=human-connection exec -it $(kubectl -n=human-connection get pods | grep develop-neo4j | awk '{ print $1 }') bash
# Enter cypher-shell
$ cypher-shell
# Delete all data
> MATCH (n) DETACH DELETE (n);

> exit
```

## Restore a backup in Kubernetes

Restoration must be done while the database is not running; see [our docs](https://docs.human-connection.org/human-connection/deployment/volumes/neo4j-offline-backup#stop-and-restart-neo-4-j-database-in-kubernetes) for how to stop the database but keep the container running.

After you have stopped the database and still have the pod running, you can restore the database by running these commands:

```sh
$ kubectl -n ocelot-social get pods
# Copy the ID of the pod running Neo4J.
# Then upload your local backup to the pod. Note that once the pod gets deleted,
# e.g. if you change the deployment, the backup file is gone with it.
$ kubectl cp ./neo4j-backup/ ocelot-social/<POD-ID>:/root/
$ kubectl -n ocelot-social exec -it <POD-ID> bash
# Once you're in the pod, restore the backup and overwrite the default database
# called `graph.db` with `--force`.
# This will delete all existing data in database `graph.db`!
> neo4j-admin restore --from=/root/neo4j-backup --force
> exit
```

Revert your changes to deployment `develop-neo4j` which will restart the database.
12
deployment/old/volumes/uploads.yaml
Normal file
@ -0,0 +1,12 @@
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: uploads-claim
  namespace: ocelot-social
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "10Gi"
112
deployment/old/volumes/velero/README.md
Normal file
@ -0,0 +1,112 @@
# Velero

{% hint style="danger" %}
I tried Velero and it did not work reliably all the time. Sometimes the
kubernetes cluster crashes during recovery or data is not fully recovered.

Feel free to test it out and update this documentation once you feel that it's
working reliably. It is very likely that Digital Ocean had some bugs when I
tried out the steps below.
{% endhint %}

We use [velero](https://github.com/heptio/velero) for on-premise backups. We
tested version `v0.11.0`; you can find the
documentation [here](https://heptio.github.io/velero/v0.11.0/).

Our kubernetes configuration adds some annotations to pods. The annotations
define the important persistent volumes that need to be backed up. Velero will
pick them up and store the volumes in the same cluster but in another namespace,
`velero`.
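
To check which volumes are actually marked for backup, you can inspect the restic annotation on the running pods (a sketch; `backup.velero.io/backup-volumes` is the annotation key the restic integration of velero `v0.11.0` reads):

```sh
$ kubectl -n human-connection get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.metadata.annotations.backup\.velero\.io/backup-volumes}{"\n"}{end}'
```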

## Prerequisites

You have to install the binary `velero` on your computer and get a tarball of
the latest release. We use `v0.11.0`, so visit the
[release](https://github.com/heptio/velero/releases/tag/v0.11.0) page and
download and extract e.g. [velero-v0.11.0-linux-amd64.tar.gz](https://github.com/heptio/velero/releases/download/v0.11.0/velero-v0.11.0-linux-amd64.tar.gz).

## Setup Velero Namespace

Follow their [getting started](https://heptio.github.io/velero/v0.11.0/get-started)
instructions to setup the Velero namespace. We use
[Minio](https://docs.min.io/docs/deploy-minio-on-kubernetes) and
[restic](https://github.com/restic/restic), so check out Velero's instructions
how to setup [restic](https://heptio.github.io/velero/v0.11.0/restic):

```sh
# run from the extracted folder of the tarball
$ kubectl apply -f config/common/00-prereqs.yaml
$ kubectl apply -f config/minio/
```

Once completed, you should see the namespace in your kubernetes dashboard.

## Manually Create an On-Premise Backup

When you create your deployments for Human Connection the required annotations
should already be in place. So when you create a backup of namespace
`human-connection`:

```sh
$ velero backup create hc-backup --include-namespaces=human-connection
```

That should backup your persistent volumes, too. When you enter:

```sh
$ velero backup describe hc-backup --details
```

You should see the persistent volumes at the end of the log:

```sh
....

Restic Backups:
  Completed:
    human-connection/develop-backend-5b6dd96d6b-q77n6: uploads
    human-connection/develop-neo4j-686d768598-z2vhh: neo4j-data
```

## Simulate a Disaster

Feel free to try out if you lose any data when you simulate a disaster and try
to restore the namespace from the backup:

```sh
$ kubectl delete namespace human-connection
```

Wait until the wrongdoing has completed, then:

```sh
$ velero restore create --from-backup hc-backup
```

Now, I keep my fingers crossed that everything comes back again. If not, I feel
very sorry for you.

## Schedule a Regular Backup

Check out the [docs](https://heptio.github.io/velero/v0.11.0/get-started). You
can create a regular schedule e.g. with:

```sh
$ velero schedule create hc-weekly-backup --schedule="@weekly" --include-namespaces=human-connection
```

Inspect the created backups:

```sh
$ velero schedule get
NAME               STATUS    CREATED                          SCHEDULE   BACKUP TTL   LAST BACKUP   SELECTOR
hc-weekly-backup   Enabled   2019-05-08 17:51:31 +0200 CEST   @weekly    720h0m0s     6s ago        <none>

$ velero backup get
NAME                              STATUS      CREATED                          EXPIRES   STORAGE LOCATION   SELECTOR
hc-weekly-backup-20190508155132   Completed   2019-05-08 17:51:32 +0200 CEST   29d       default            <none>

$ velero backup describe hc-weekly-backup-20190508155132 --details
# see if the persistent volumes are backed up
```
50
deployment/old/volumes/volume-snapshots/README.md
Normal file
@ -0,0 +1,50 @@
# Kubernetes Volume Snapshots

It is possible to backup persistent volumes through volume snapshots. This is
especially handy if you don't want to stop the database to create an [offline
backup](../neo4j-offline-backup/README.md), which would mean downtime.

Kubernetes announced this feature in a [blog post](https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/). Please make yourself familiar with it before you continue.

## Create a Volume Snapshot

There is an example in this folder of how you can e.g. create a volume snapshot for
the persistent volume claim `neo4j-data-claim`:

```sh
# in folder deployment/volumes/volume-snapshots/
kubectl apply -f snapshot.yaml
```
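
Whether the snapshot was created can also be checked on the command line (a sketch; assumes the alpha `VolumeSnapshot` CRDs are installed in the cluster):

```sh
kubectl -n ocelot-social get volumesnapshot
```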

If you are on Digital Ocean the volume snapshot should show up in the Web UI:

![A volume snapshot in Digital Ocean's Web UI](./digital-ocean-volume-snapshot.png)

## Provision a Volume based on a Snapshot

Edit your persistent volume claim configuration and add a `dataSource` pointing
to your volume snapshot. [The blog post](https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/) has an example in section "Provision a new volume from a snapshot with
Kubernetes".

There is also an example in this folder of how the configuration could look.
If you apply the configuration, a new persistent volume claim will be provisioned
with the data from the volume snapshot:

```sh
# in folder deployment/volumes/volume-snapshots/
kubectl apply -f neo4j-data.yaml
```

## Data Consistency Warning

Note that volume snapshots do not guarantee data consistency. Quote from the
[blog post](https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/):

> Please note that the alpha release of Kubernetes Snapshot does not provide
> any consistency guarantees. You have to prepare your application (pause
> application, freeze filesystem etc.) before taking the snapshot for data
> consistency.

In the case of Neo4J this probably means that the enterprise edition is required, which
supports [online backups](https://neo4j.com/docs/operations-manual/current/backup/).
Binary file not shown.
18
deployment/old/volumes/volume-snapshots/neo4j-data.yaml
Normal file
@ -0,0 +1,18 @@
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: neo4j-data-claim
  namespace: ocelot-social
  labels:
    app: ocelot-social
spec:
  dataSource:
    name: neo4j-data-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
10
deployment/old/volumes/volume-snapshots/snapshot.yaml
Normal file
@ -0,0 +1,10 @@
---
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: uploads-snapshot
  namespace: ocelot-social
spec:
  source:
    name: uploads-claim
    kind: PersistentVolumeClaim