Merging deployment to master

Robert Schäfer 2019-03-20 21:07:57 +01:00
commit 4fe89e88ac
21 changed files with 721 additions and 0 deletions

deployment/.gitignore
@@ -0,0 +1,3 @@
secrets.yaml
*/secrets.yaml
kubeconfig.yaml

deployment/.travis.yml
@@ -0,0 +1,25 @@
language: generic
before_install:
- openssl aes-256-cbc -K $encrypted_87342d90efbe_key -iv $encrypted_87342d90efbe_iv
  -in kubeconfig.yaml.enc -out kubeconfig.yaml -d
install:
- curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
- chmod +x ./kubectl
- sudo mv ./kubectl /usr/local/bin/kubectl
- mkdir ${HOME}/.kube
- cp kubeconfig.yaml ${HOME}/.kube/config
script:
- kubectl get nodes
deploy:
  provider: script
  # TODO: fix downtime
  # instead of deleting all pods, update the deployment and make a rollout
  # TODO: fix multiple access error on volumes
  # this happens if more than two pods access a volume
  script: kubectl --namespace=human-connection delete pods --all
  on:
    branch: master
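A note on the first TODO above: the deployment manifests in this commit template
a `COMMIT` environment variable (`<BACKEND_COMMIT>` / `<WEBAPP_COMMIT>`), so one
way to avoid the delete-all-pods downtime would be to patch that variable and
let kubernetes perform a rolling update. A sketch, assuming Travis'
`$TRAVIS_COMMIT` is a suitable value:

```sh
# changing the pod template triggers a gradual rollout instead of downtime
$ kubectl --namespace=human-connection set env deployment/nitro-web COMMIT=$TRAVIS_COMMIT
# block until the rollout has finished (or failed)
$ kubectl --namespace=human-connection rollout status deployment/nitro-web
```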

deployment/README.md
@@ -0,0 +1,225 @@
# Human-Connection Nitro | Deployment Configuration
[![Build Status](https://travis-ci.com/Human-Connection/Nitro-Deployment.svg?branch=master)](https://travis-ci.com/Human-Connection/Nitro-Deployment)
Todos:
- [x] check labels and selectors if they all are correct
- [x] configure NGINX from yml
- [x] configure Let's Encrypt cert-manager from yml
- [x] configure ingress from yml
- [x] configure persistent & shared storage between nodes
- [x] reproduce setup locally
## Minikube
There are many Kubernetes distributions, but if you're just getting started,
Minikube is a tool that you can use to get your feet wet.
[Install Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/)
Open the minikube dashboard:
```
$ minikube dashboard
```
This will give you an overview.
Some of the steps below need some time until their resources become available to
other, dependent deployments. Keeping an eye on the dashboard is a great way to
check on that.
Follow the [installation instructions](#installation-with-kubernetes) below.
If all the pods and services have settled and everything looks green in your
minikube dashboard, expose the `nitro-web` service on your host system with:
```shell
$ minikube service nitro-web --namespace=human-connection
```
## Digital Ocean
1. First, create a cluster on Digital Ocean.
2. Download the config.yaml once the process has finished.
3. Put the config file where you can find it later (preferably in your home
   directory under `~/.kube/`).
4. In an open terminal you can set the config for the active session:
   `export KUBECONFIG=~/.kube/THE-NAME-OF-YOUR-CLUSTER-kubeconfig.yaml`. You can
   make this change permanent by adding the line to your `.bashrc` or
   `~/.config/fish/config.fish`, depending on your shell (see the sketch after
   this list). Otherwise you would have to add
   `--kubeconfig ~/.kube/THE-NAME-OF-YOUR-CLUSTER-kubeconfig.yaml` to every
   `kubectl` command you run.
5. Now check that you can connect to the cluster and that it is your newly
   created one by running `kubectl get nodes`.
If you got the steps above right and can see your nodes, you can continue.
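A sketch of how to persist the `KUBECONFIG` setting for bash, assuming the file
name from step 4:
```sh
$ echo 'export KUBECONFIG=~/.kube/THE-NAME-OF-YOUR-CLUSTER-kubeconfig.yaml' >> ~/.bashrc
$ source ~/.bashrc
```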
First, install the kubernetes dashboard:
```sh
$ kubectl apply -f dashboard/
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
```
Get your token on the command line:
```sh
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
```
It should print something like:
```
Name: admin-user-token-6gl6l
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=admin-user
kubernetes.io/service-account.uid=b16afba9-dfec-11e7-bbb9-901b0e532516
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZnbDZsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiMTZhZmJhOS1kZmVjLTExZTctYmJiOS05MDFiMGU1MzI1MTYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.M70CU3lbu3PP4OjhFms8PVL5pQKj-jj4RNSLA4YmQfTXpPUuxqXjiTf094_Rzr0fgN_IVX6gC4fiNUL5ynx9KU-lkPfk0HnX8scxfJNzypL039mpGt0bbe1IXKSIRaq_9VW59Xz-yBUhycYcKPO9RM2Qa1Ax29nqNVko4vLn1_1wPqJ6XSq3GYI8anTzV8Fku4jasUwjrws6Cn6_sPEGmL54sq5R4Z5afUtv-mItTmqZZdxnkRqcJLlg2Y8WbCPogErbsaCDJoABQ7ppaqHetwfM_0yMun6ABOQbIwwl8pspJhpplKwyo700OSpvTT9zlBsu-b35lzXGBRHzv5g_RA
```
Proxy localhost to the remote kubernetes dashboard:
```sh
$ kubectl proxy
```
Grab the token from above and paste it into the login screen at [http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/](http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/)
## Installation with kubernetes
There are a few prerequisites, e.g. you have to change some secrets according to
your own setup.
### Edit secrets
```sh
$ cp secrets.template.yaml human-connection/secrets.yaml
```
Change all secrets as needed.
If you want to edit secrets, you have to `base64`-encode them. See the [kubernetes
documentation](https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually).
```shell
# example of how to base64-encode a string:
$ echo -n 'admin' | base64
YWRtaW4=
```
Those secrets get `base64`-decoded inside a kubernetes pod.
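To double-check that a value decodes to what you expect:
```sh
$ echo 'YWRtaW4=' | base64 --decode
admin
```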
### Create a namespace
```shell
$ kubectl apply -f namespace-human-connection.yaml
```
Switch to the namespace `human-connection` in your kubernetes dashboard.
### Run the configuration
```shell
$ kubectl apply -f human-connection/
```
This can take a while because kubernetes will download the docker images.
Sit back, relax, and have a look at your kubernetes dashboard.
Wait until all pods turn green and no longer show the warning
`Waiting: ContainerCreating`.
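If you prefer the command line over the dashboard, you can watch the pods come
up with:
```sh
$ kubectl --namespace=human-connection get pods --watch
```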
#### Setup Ingress and HTTPS
Follow [this quick start guide](https://docs.cert-manager.io/en/latest/tutorials/acme/quick-start/index.html)
and install cert-manager via helm and tiller:
```
$ kubectl create serviceaccount tiller --namespace=kube-system
$ kubectl create clusterrolebinding tiller-admin --serviceaccount=kube-system:tiller --clusterrole=cluster-admin
$ helm init --service-account=tiller
$ helm repo update
$ helm install stable/nginx-ingress
$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.6/deploy/manifests/00-crds.yaml
$ helm install --name cert-manager --namespace cert-manager stable/cert-manager
```
Create letsencrypt issuers. *Change the email address* in these files before
running this command.
```sh
$ kubectl apply -f human-connection/https/
```
Create an ingress service in namespace `human-connection`. *Change the domain
name* according to your needs:
```sh
$ kubectl apply -f human-connection/ingress/
```
Check that the ingress server is working correctly:
```sh
$ curl -kivL -H 'Host: <DOMAIN_NAME>' 'https://<IP_ADDRESS>'
```
If the response looks good, point the domain at the new IP address with your
domain registrar.
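Before moving on, you can verify that the DNS change has propagated, e.g. with
`dig` if it is installed:
```sh
$ dig +short <DOMAIN_NAME>
<IP_ADDRESS>
```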
Now let's get a valid HTTPS certificate. According to the tutorial above, check
your tls certificate for staging:
```sh
$ kubectl describe --namespace=human-connection certificate tls
$ kubectl describe --namespace=human-connection secret tls
```
If everything looks good, update the issuer of your ingress. Change the
annotation `certmanager.k8s.io/issuer` from `letsencrypt-staging` to
`letsencrypt-prod` in your ingress configuration in
`human-connection/ingress/ingress.yaml`.
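The relevant part of the ingress metadata then looks like this:
```yaml
annotations:
  kubernetes.io/ingress.class: "nginx"
  certmanager.k8s.io/issuer: "letsencrypt-prod"
  certmanager.k8s.io/acme-challenge-type: http01
```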
```sh
$ kubectl apply -f human-connection/ingress/ingress.yaml
```
Delete the former secret to force a refresh:
```
$ kubectl --namespace=human-connection delete secret tls
```
Now, HTTPS should be configured on your domain. Congrats.
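To confirm which issuer signed the live certificate, one quick check using
`openssl`:
```sh
$ echo | openssl s_client -connect <DOMAIN_NAME>:443 -servername <DOMAIN_NAME> 2>/dev/null \
    | openssl x509 -noout -issuer
```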
#### Legacy data migration
This setup is completely optional and only required if you have data on a server
running our legacy code and want to import that data. It will import the uploads
folder and migrate a dump of mongodb into neo4j.
##### Prepare migration of Human Connection legacy server
Create a configmap with the specific connection data of your legacy server:
```sh
$ kubectl create configmap db-migration-worker \
--namespace=human-connection \
--from-literal=SSH_USERNAME=someuser \
--from-literal=SSH_HOST=yourhost \
--from-literal=MONGODB_USERNAME=hc-api \
--from-literal=MONGODB_PASSWORD=secretpassword \
--from-literal=MONGODB_AUTH_DB=hc_api \
--from-literal=MONGODB_DATABASE=hc_api \
--from-literal=UPLOADS_DIRECTORY=/var/www/api/uploads \
--from-literal=NEO4J_URI=bolt://localhost:7687
```
Create a secret with your public and private ssh keys. As the
[kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys)
points out, you should be careful with your ssh keys: anyone with access to your
cluster will also have access to them. It is better to create a new pair with
`ssh-keygen` and copy the public key to your legacy server with `ssh-copy-id`:
```sh
$ kubectl create secret generic ssh-keys \
--namespace=human-connection \
--from-file=id_rsa=/path/to/.ssh/id_rsa \
--from-file=id_rsa.pub=/path/to/.ssh/id_rsa.pub \
--from-file=known_hosts=/path/to/.ssh/known_hosts
```
##### Migrate legacy database
Patch the existing deployments to use a multi-container setup:
```bash
cd legacy-migration
kubectl apply -f volume-claim-mongo-export.yaml
kubectl patch --namespace=human-connection deployment nitro-backend --patch "$(cat deployment-backend.yaml)"
kubectl patch --namespace=human-connection deployment nitro-neo4j --patch "$(cat deployment-neo4j.yaml)"
cd ..
```
Run the migration:
```shell
$ kubectl --namespace=human-connection get pods
# replace the pod IDs in the commands below with the ones from your output
$ kubectl --namespace=human-connection exec -it nitro-neo4j-65bbdb597c-nc2lv migrate
$ kubectl --namespace=human-connection exec -it nitro-backend-c6cc5ff69-8h96z sync_uploads
```
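If you don't want to copy the pod IDs by hand, you can look them up via the
selector labels used throughout this setup, e.g. for neo4j:
```sh
$ kubectl --namespace=human-connection get pods \
    -l human-connection.org/selector=deployment-human-connection-neo4j \
    -o jsonpath='{.items[0].metadata.name}'
```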

@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kube-system

@@ -0,0 +1,39 @@
---
kind: Pod
apiVersion: v1
metadata:
  name: nitro-db-migration-worker
  namespace: human-connection
spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: ssh-keys
        defaultMode: 0400
    - name: mongo-export
      persistentVolumeClaim:
        claimName: mongo-export-claim
  containers:
    - name: nitro-db-migration-worker
      image: humanconnection/db-migration-worker:latest
      envFrom:
        - configMapRef:
            name: db-migration-worker
      volumeMounts:
        - name: secret-volume
          readOnly: false
          mountPath: /root/.ssh
        - name: mongo-export
          mountPath: /mongo-export/
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-export-claim
  namespace: human-connection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

@@ -0,0 +1,15 @@
---
apiVersion: v1
kind: ConfigMap
data:
  GRAPHQL_PORT: "4000"
  GRAPHQL_URI: "https://nitro-staging.human-connection.org/api"
  MOCK: "false"
  NEO4J_URI: "bolt://nitro-neo4j.human-connection:7687"
  NEO4J_USER: "neo4j"
  NEO4J_AUTH: none
  CLIENT_URI: "https://nitro-staging.human-connection.org"
  MAPBOX_TOKEN: pk.eyJ1IjoiaHVtYW4tY29ubmVjdGlvbiIsImEiOiJjajl0cnBubGoweTVlM3VwZ2lzNTNud3ZtIn0.KZ8KK9l70omjXbEkkbHGsQ
metadata:
  name: configmap
  namespace: human-connection

@@ -0,0 +1,83 @@
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nitro-backend
  namespace: human-connection
spec:
  replicas: 1
  minReadySeconds: 15
  progressDeadlineSeconds: 60
  selector:
    matchLabels:
      human-connection.org/selector: deployment-human-connection-backend
  template:
    metadata:
      labels:
        human-connection.org/selector: deployment-human-connection-backend
      name: "nitro-backend"
    spec:
      containers:
        - name: nitro-backend
          image: humanconnection/nitro-backend:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 4000
          env:
            - name: COMMIT
              value: <BACKEND_COMMIT>
            - name: MOCK
              value: "false"
            - name: CLIENT_URI
              valueFrom:
                configMapKeyRef:
                  name: configmap
                  key: CLIENT_URI
            - name: GRAPHQL_PORT
              valueFrom:
                configMapKeyRef:
                  name: configmap
                  key: GRAPHQL_PORT
            - name: GRAPHQL_URI
              valueFrom:
                configMapKeyRef:
                  name: configmap
                  key: GRAPHQL_URI
            - name: MAPBOX_TOKEN
              valueFrom:
                configMapKeyRef:
                  name: configmap
                  key: MAPBOX_TOKEN
            - name: JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: human-connection
                  key: JWT_SECRET
                  optional: false
            - name: NEO4J_URI
              valueFrom:
                configMapKeyRef:
                  name: configmap
                  key: NEO4J_URI
          volumeMounts:
            - mountPath: /nitro-backend/public/uploads
              name: uploads
      volumes:
        - name: uploads
          persistentVolumeClaim:
            claimName: uploads-claim
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
status: {}
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: uploads-claim
  namespace: human-connection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

@@ -0,0 +1,72 @@
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nitro-neo4j
  namespace: human-connection
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      human-connection.org/selector: deployment-human-connection-neo4j
  template:
    metadata:
      labels:
        human-connection.org/selector: deployment-human-connection-neo4j
      name: nitro-neo4j
    spec:
      containers:
        - name: nitro-neo4j
          image: humanconnection/neo4j:latest
          imagePullPolicy: Always
          env:
            - name: COMMIT
              value: <BACKEND_COMMIT>
            - name: NEO4J_apoc_import_file_enabled
              value: "true"
            - name: NEO4J_dbms_memory_pagecache_size
              value: 1G
            - name: NEO4J_dbms_memory_heap_max__size
              value: 1G
            - name: NEO4J_AUTH
              value: none
            - name: NEO4J_URI
              valueFrom:
                configMapKeyRef:
                  name: configmap
                  key: NEO4J_URI
            - name: NEO4J_USER
              valueFrom:
                configMapKeyRef:
                  name: configmap
                  key: NEO4J_USER
            - name: NEO4J_AUTH
              valueFrom:
                configMapKeyRef:
                  name: configmap
                  key: NEO4J_AUTH
          ports:
            - containerPort: 7687
            - containerPort: 7474
          volumeMounts:
            - mountPath: /data/
              name: neo4j-data
      volumes:
        - name: neo4j-data
          persistentVolumeClaim:
            claimName: neo4j-data-claim
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: neo4j-data-claim
  namespace: human-connection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

@@ -0,0 +1,49 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nitro-web
  namespace: human-connection
spec:
  replicas: 2
  minReadySeconds: 15
  progressDeadlineSeconds: 60
  selector:
    matchLabels:
      human-connection.org/selector: deployment-human-connection-web
  template:
    metadata:
      labels:
        human-connection.org/selector: deployment-human-connection-web
      name: nitro-web
    spec:
      containers:
        - name: web
          env:
            - name: COMMIT
              value: <WEBAPP_COMMIT>
            - name: HOST
              value: 0.0.0.0
            - name: BACKEND_URL
              valueFrom:
                configMapKeyRef:
                  name: configmap
                  key: GRAPHQL_URI
            - name: MAPBOX_TOKEN
              valueFrom:
                configMapKeyRef:
                  name: configmap
                  key: MAPBOX_TOKEN
            - name: JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: human-connection
                  key: JWT_SECRET
                  optional: false
          image: humanconnection/nitro-web:latest
          ports:
            - containerPort: 3000
          resources: {}
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
status: {}

@@ -0,0 +1,34 @@
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
  namespace: human-connection
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: user@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    http01: {}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-prod
  namespace: human-connection
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: user@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    http01: {}

@@ -0,0 +1,22 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: human-connection
  annotations:
    kubernetes.io/ingress.class: "nginx"
    certmanager.k8s.io/issuer: "letsencrypt-staging"
    certmanager.k8s.io/acme-challenge-type: http01
spec:
  tls:
    - hosts:
        - nitro-master.human-connection.org
      secretName: tls
  rules:
    - host: nitro-master.human-connection.org
      http:
        paths:
          - path: /
            backend:
              serviceName: nitro-web
              servicePort: 3000

@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
  name: nitro-backend
  namespace: human-connection
  labels:
    human-connection.org/selector: deployment-human-connection-backend
spec:
  ports:
    - name: web
      port: 4000
      targetPort: 4000
  selector:
    human-connection.org/selector: deployment-human-connection-backend

@@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
  name: nitro-neo4j
  namespace: human-connection
  labels:
    human-connection.org/selector: deployment-human-connection-neo4j
spec:
  ports:
    - name: bolt
      port: 7687
      targetPort: 7687
    - name: web
      port: 7474
      targetPort: 7474
  selector:
    human-connection.org/selector: deployment-human-connection-neo4j

@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
  name: nitro-web
  namespace: human-connection
  labels:
    human-connection.org/selector: deployment-human-connection-web
spec:
  ports:
    - name: web
      port: 3000
      targetPort: 3000
  selector:
    human-connection.org/selector: deployment-human-connection-web

Binary file not shown.
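The binary file is presumably the `kubeconfig.yaml.enc` that `.travis.yml`
decrypts in `before_install`. Such a file is typically produced with the Travis
CLI, which also appends the matching `openssl` line to `.travis.yml`:

```sh
$ travis encrypt-file kubeconfig.yaml --add
```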

@@ -0,0 +1,27 @@
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nitro-backend
  namespace: human-connection
spec:
  template:
    spec:
      containers:
        - name: nitro-db-migration-worker
          image: humanconnection/db-migration-worker:latest
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: db-migration-worker
          volumeMounts:
            - name: secret-volume
              readOnly: false
              mountPath: /root/.ssh
            - name: uploads
              mountPath: /uploads/
      volumes:
        - name: secret-volume
          secret:
            secretName: ssh-keys
            defaultMode: 0400

@@ -0,0 +1,39 @@
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nitro-neo4j
  namespace: human-connection
spec:
  template:
    spec:
      containers:
        - name: nitro-db-migration-worker
          image: humanconnection/db-migration-worker:latest
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: db-migration-worker
          env:
            - name: COMMIT
              value: <BACKEND_COMMIT>
            - name: NEO4J_URI
              value: bolt://localhost:7687
          volumeMounts:
            - name: secret-volume
              readOnly: false
              mountPath: /root/.ssh
            - name: mongo-export
              mountPath: /mongo-export/
        - name: nitro-neo4j
          volumeMounts:
            - mountPath: /mongo-export/
              name: mongo-export
      volumes:
        - name: secret-volume
          secret:
            secretName: ssh-keys
            defaultMode: 0400
        - name: mongo-export
          persistentVolumeClaim:
            claimName: mongo-export-claim

@@ -0,0 +1,12 @@
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-export-claim
  namespace: human-connection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

@@ -0,0 +1,6 @@
kind: Namespace
apiVersion: v1
metadata:
  name: human-connection
  labels:
    name: human-connection

@@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
data:
  JWT_SECRET: "Yi8mJjdiNzhCRiZmdi9WZA=="
  MONGODB_PASSWORD: "TU9OR09EQl9QQVNTV09SRA=="
metadata:
  name: human-connection
  namespace: human-connection