Merge pull request #5 from Human-Connection/digital_ocean

Fix certain configuration for Digital Ocean
Robert Schäfer 2019-02-02 21:22:47 +01:00 committed by GitHub
commit 672770ad18
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
24 changed files with 150 additions and 138 deletions

.gitignore (1 change)

@@ -1 +0,0 @@
*secrets*.yml

README.md (119 changes)

@@ -1,22 +1,20 @@
# Human-Connection Nitro | Deployment Configuration
> Currently the deployment is not primetime ready as you still have to do some manual work. That we need to change, the following list gives some glimpse of the missing steps.
## Todo`s
- [ ] check labels and selectors if they all are correct
- [ ] configure NGINX from yml
Todos:
- [x] check labels and selectors if they all are correct
- [x] configure NGINX from yml
- [ ] configure Let's Encrypt cert-manager from yml
- [ ] configure ingress form yml
- [ ] configure persistent & shared storage between nodes
- [x] configure ingress from yml
- [x] configure persistent & shared storage between nodes
- [x] reproduce setup locally
## Install Minikube, kubectl
There are many Kubernetes distributions, but if you're just getting started, Minikube is a tool that you can use to get your feet wet.
## Minikube
There are many Kubernetes distributions, but if you're just getting started,
Minikube is a tool that you can use to get your feet wet.
[Install Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/)
# Open minikube dashboard
Open minikube dashboard:
```
$ minikube dashboard
```
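If the local cluster is not running yet, the dashboard command will fail. A typical first session might look like this (assuming a supported hypervisor, e.g. VirtualBox, is installed):
```sh
# start a local single-node cluster, then open its dashboard in the browser
$ minikube start
$ minikube dashboard
```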
@@ -25,64 +23,95 @@ Some of the steps below take some time until resources become available to
other dependent deployments. Keeping an eye on the dashboard is a great way to
check that.
## Create a namespace locally
```shell
$ kubectl create -f namespace-staging.yml
```
Switch to the namespace `staging` in your kubernetes dashboard.
Follow the [installation instructions](#installation-with-kubernetes) below.
If all the pods and services have settled and everything looks green in your
minikube dashboard, expose the `nitro-web` service on your host system with:
## Setup config maps
```shell
$ cp db-migration-worker.template.yml config/db-migration-worker.yml
# edit all variables according to the setup of the remote legacy server
$ kubectl apply -f config/
$ minikube service nitro-web --namespace=staging
```
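`minikube service` opens the service in your browser; if you just need the URL, e.g. for `curl`, the `--url` flag prints it instead:
```sh
# print the reachable URL of the exposed service instead of opening a browser
$ minikube service nitro-web --namespace=staging --url
```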
## Setup secrets and deploy them
## Digital Ocean
First, install the kubernetes dashboard:
```sh
$ kubectl apply -f dashboard/
```
Proxy localhost to the remote kubernetes dashboard:
```sh
$ kubectl proxy
```
Get your token on the command line:
```sh
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
```
It should print something like:
```
Name: admin-user-token-6gl6l
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=admin-user
kubernetes.io/service-account.uid=b16afba9-dfec-11e7-bbb9-901b0e532516
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZnbDZsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiMTZhZmJhOS1kZmVjLTExZTctYmJiOS05MDFiMGU1MzI1MTYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.M70CU3lbu3PP4OjhFms8PVL5pQKj-jj4RNSLA4YmQfTXpPUuxqXjiTf094_Rzr0fgN_IVX6gC4fiNUL5ynx9KU-lkPfk0HnX8scxfJNzypL039mpGt0bbe1IXKSIRaq_9VW59Xz-yBUhycYcKPO9RM2Qa1Ax29nqNVko4vLn1_1wPqJ6XSq3GYI8anTzV8Fku4jasUwjrws6Cn6_sPEGmL54sq5R4Z5afUtv-mItTmqZZdxnkRqcJLlg2Y8WbCPogErbsaCDJoABQ7ppaqHetwfM_0yMun6ABOQbIwwl8pspJhpplKwyo700OSpvTT9zlBsu-b35lzXGBRHzv5g_RA
```
Grab the token and paste it into the login screen at [http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/](http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/)
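Alternatively, once the secret's name is known from the output above, the token can be printed directly; the secret name below is taken from the sample output and will differ in your cluster:
```sh
# print only the decoded token of the admin-user service account
$ kubectl -n kube-system get secret admin-user-token-6gl6l \
    -o jsonpath='{.data.token}' | base64 --decode
```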
## Installation with kubernetes
There are a few prerequisites, e.g. you have to change some secrets according
to your own setup.
#### Setup config maps
```shell
$ cp configmap-db-migration-worker.template.yaml staging/configmap-db-migration-worker.yaml
```
Edit all variables according to the setup of the remote legacy server.
#### Setup secrets and deploy them
```sh
$ cp secrets.template.yaml staging/secrets.yaml
```
Change all secrets as needed.
If you want to edit secrets, you have to `base64` encode them. See [kubernetes
documentation](https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually).
```shell
# example how to base64 a string:
$ echo -n 'admin' | base64
YWRtaW4=
$ cp secrets.yml.template secrets.yml
# change all variables as needed and deploy them
$ kubectl apply -f secrets.yml
```
Those secrets get `base64`-decoded in a kubernetes pod.
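To double-check a value after deployment, you can read it back from the cluster and decode it; the secret and key names here are placeholders for whatever your `secrets.yaml` defines:
```sh
# read one key of a deployed secret and decode it (names are examples)
$ kubectl get secret my-secret --namespace=staging \
    -o jsonpath='{.data.password}' | base64 --decode
```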
## Create volumes
#### Create a namespace locally
```shell
$ kubectl apply -f volumes/
$ kubectl create -f namespace-staging.yaml
```
Switch to the namespace `staging` in your kubernetes dashboard.
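You can verify the namespace on the command line as well, and optionally make it the default for subsequent `kubectl` calls:
```sh
# confirm the namespace exists
$ kubectl get namespaces
# use it as the default namespace of the current context
$ kubectl config set-context $(kubectl config current-context) --namespace=staging
```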
## Expose the services
### Run the configuration
```shell
$ kubectl apply -f services/
$ kubectl apply -f staging/
```
Wait until persistent volumes and services become available.
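Progress can be checked from the command line, too:
```sh
# claims should reach STATUS "Bound", services should list cluster IPs
$ kubectl get persistentvolumeclaims,services --namespace=staging
```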
## Create deployments
```shell
$ kubectl apply -f deployments/
```
This can take a while because kubernetes will download the docker images.
Sit back, relax, and have a look at your kubernetes dashboard.
Wait until all pods turn green and no longer show the
`Waiting: ContainerCreating` status.
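The same information is available via `kubectl`; `--watch` keeps streaming status changes until you interrupt it:
```sh
# watch pod status until all pods are Running
$ kubectl get pods --namespace=staging --watch
```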
## Access the services
```shell
$ minikube service nitro-web --namespace=staging
```
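On a remote cluster such as Digital Ocean there is no `minikube service`; a quick way to reach a service without an ingress is port-forwarding. The ports below are assumptions, substitute whatever `nitro-web` actually listens on:
```sh
# forward local port 3000 to the web service (port numbers are examples)
$ kubectl port-forward service/nitro-web 3000:3000 --namespace=staging
```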
## Provision db-migration-worker
Copy your private ssh key and the `.known-hosts` file of your remote legacy server.
### Provision db-migration-worker
Copy your private ssh key and the `.known-hosts` file of your remote legacy
server.
```shell
# check the corresponding db-migration-worker pod

config/.gitignore (1 change)

@@ -1 +0,0 @@
db-migration-worker.yml

(unnamed file)

@@ -1,9 +0,0 @@
apiVersion: v1
kind: ConfigMap
data:
GRAPHQL_PORT: "4000"
GRAPHQL_URI: "http://nitro-backend.staging:4000"
MOCK: "false"
metadata:
name: staging-backend
namespace: staging

(unnamed file)

@@ -1,9 +0,0 @@
apiVersion: v1
kind: ConfigMap
data:
NEO4J_URI: "bolt://neo4j.staging:7687"
NEO4J_USER: "neo4j"
NEO4J_AUTH: none
metadata:
name: staging-neo4j
namespace: staging

(unnamed file)

@@ -1,8 +0,0 @@
apiVersion: v1
kind: ConfigMap
data:
CLIENT_URI: "https://nitro-staging.human-connection.org"
MAPBOX_TOKEN: pk.eyJ1IjoiaHVtYW4tY29ubmVjdGlvbiIsImEiOiJjajl0cnBubGoweTVlM3VwZ2lzNTNud3ZtIn0.KZ8KK9l70omjXbEkkbHGsQ
metadata:
name: staging-web
namespace: staging

(unnamed file)

@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system

(unnamed file)

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
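These two manifests, presumably the contents of the `dashboard/` directory applied in the README above, create the `admin-user` service account and bind it to the built-in `cluster-admin` role. Afterwards, both objects can be checked with:
```sh
# verify the service account and its cluster-wide binding exist
$ kubectl -n kube-system get serviceaccount admin-user
$ kubectl get clusterrolebinding admin-user
```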

staging/.gitignore (new file, 2 changes)

@@ -0,0 +1,2 @@
configmap-db-migration-worker.yaml
secrets.yaml

staging/configmaps.yaml (new file, 29 changes)

@@ -0,0 +1,29 @@
---
apiVersion: v1
kind: ConfigMap
data:
GRAPHQL_PORT: "4000"
GRAPHQL_URI: "http://nitro-backend.staging:4000"
MOCK: "false"
metadata:
name: staging-backend
namespace: staging
---
apiVersion: v1
kind: ConfigMap
data:
NEO4J_URI: "bolt://nitro-neo4j.staging:7687"
NEO4J_USER: "neo4j"
NEO4J_AUTH: none
metadata:
name: staging-neo4j
namespace: staging
---
apiVersion: v1
kind: ConfigMap
data:
CLIENT_URI: "https://nitro-staging.human-connection.org"
MAPBOX_TOKEN: pk.eyJ1IjoiaHVtYW4tY29ubmVjdGlvbiIsImEiOiJjajl0cnBubGoweTVlM3VwZ2lzNTNud3ZtIn0.KZ8KK9l70omjXbEkkbHGsQ
metadata:
name: staging-web
namespace: staging
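Once applied via `kubectl apply -f staging/`, these values can be inspected in the cluster, for example:
```sh
# show the deployed configmap including its data keys
$ kubectl get configmap staging-backend --namespace=staging -o yaml
```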

(unnamed file)

@@ -76,19 +76,6 @@
restartPolicy: Always
terminationGracePeriodSeconds: 30
status: {}
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: ssh-keys-volume
namespace: staging
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Mi
hostPath:
path: /data/pv0001/
---
kind: PersistentVolumeClaim
apiVersion: v1
@@ -100,4 +87,6 @@
- ReadWriteOnce
resources:
requests:
storage: 1Mi
# waaay too much
# unfortunately Digital Ocean's volumes start at 1Gi
storage: 1Gi

(unnamed file)

@@ -1,7 +1,7 @@
apiVersion: v1
kind: Service
metadata:
name: neo4j
name: nitro-neo4j
namespace: staging
labels:
workload.user.cattle.io/workloadselector: deployment-staging-neo4j

(unnamed file)

@@ -0,0 +1,12 @@
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mongo-export-claim
namespace: staging
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi

(unnamed file)

@@ -0,0 +1,12 @@
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: uploads-claim
namespace: staging
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi

(unnamed file)

@@ -1,25 +0,0 @@
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mongo-export-volume
namespace: staging
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 1Gi
hostPath:
path: /data/shared/mongo-exports/
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mongo-export-claim
namespace: staging
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi

(unnamed file)

@@ -1,25 +0,0 @@
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: uploads-volume
namespace: staging
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 8Gi
hostPath:
path: /data/shared/uploads/
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: uploads-claim
namespace: staging
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 8Gi