diff --git a/README.md b/README.md
index 6ab975a07..366598bd1 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@
 Todos:
 - [x] check labels and selectors if they all are correct
 - [x] configure NGINX from yml
-- [ ] configure Let's Encrypt cert-manager from yml
+- [x] configure Let's Encrypt cert-manager from yml
 - [x] configure ingress from yml
 - [x] configure persistent & shared storage between nodes
 - [x] reproduce setup locally
@@ -28,7 +28,7 @@
 If all the pods and services have settled and everything looks green in your
 minikube dashboard, expose the `nitro-web` service on your host system with:
 ```shell
-$ minikube service nitro-web --namespace=staging
+$ minikube service nitro-web --namespace=human-connection
 ```
 
 ## Digital Ocean
@@ -36,6 +36,8 @@ $ minikube service nitro-web --namespace=staging
 First, install kubernetes dashboard:
 ```sh
 $ kubectl apply -f dashboard/
+$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
+
 ```
 Proxy localhost to the remote kubernetes dashboard:
 ```sh
@@ -70,16 +72,10 @@ Grab the token and paste it into the login screen at [http://localhost:8001/api/
 
 You have to do some prerequisites e.g. change some secrets according to your
 own setup.
 
-#### Setup config maps
-```shell
-$ cp configmap-db-migration-worker.template.yaml staging/configmap-db-migration-worker.yaml
-```
-Edit all variables according to the setup of the remote legacy server.
-
-#### Setup secrets and deploy themn
+### Edit secrets
 ```sh
-$ cp secrets.template.yaml staging/secrets.yaml
+$ cp secrets.template.yaml human-connection/secrets.yaml
 ```
 Change all secrets as needed.
 
@@ -92,16 +88,16 @@ YWRtaW4=
 ```
 Those secrets get `base64` decoded in a kubernetes pod.
 
-#### Create a namespace locally
+### Create a namespace
 ```shell
-$ kubectl create -f namespace-staging.yaml
+$ kubectl apply -f namespace-human-connection.yaml
 ```
-Switch to the namespace `staging` in your kubernetes dashboard.
+Switch to the namespace `human-connection` in your kubernetes dashboard.
 
 ### Run the configuration
 ```shell
-$ kubectl apply -f staging/
+$ kubectl apply -f human-connection/
 ```
 
 This can take a while because kubernetes will download the docker images.
@@ -109,6 +105,58 @@ Sit back and relax and have a look into your kubernetes dashboard.
 Wait until all pods turn green and they don't show a warning
 `Waiting: ContainerCreating` anymore.
 
+#### Setup Ingress and HTTPS
+
+Follow [this quick start guide](https://docs.cert-manager.io/en/latest/tutorials/acme/quick-start/index.html)
+and install cert-manager via helm and tiller:
+```
+$ kubectl create serviceaccount tiller --namespace=kube-system
+$ kubectl create clusterrolebinding tiller-admin --serviceaccount=kube-system:tiller --clusterrole=cluster-admin
+$ helm init --service-account=tiller
+$ helm repo update
+$ helm install stable/nginx-ingress
+$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.6/deploy/manifests/00-crds.yaml
+$ helm install --name cert-manager --namespace cert-manager stable/cert-manager
+```
+
+Create the Let's Encrypt issuers. *Change the email address* in these files before
+running this command.
+```sh
+$ kubectl apply -f human-connection/https/
+```
+Create an ingress service in namespace `human-connection`. *Change the domain
+name* according to your needs:
+```sh
+$ kubectl apply -f human-connection/ingress/
+```
+Check that the ingress server is working correctly:
+```sh
+$ curl -kivL -H 'Host: ' 'https://'
+```
+If the response looks good, configure your domain registrar for the new IP
+address and the domain.
+
+Now let's get a valid HTTPS certificate. According to the tutorial above, check
+your tls certificate for staging:
+```sh
+$ kubectl describe --namespace=human-connection certificate tls
+$ kubectl describe --namespace=human-connection secret tls
+```
+
+If everything looks good, update the issuer of your ingress. Change the
+annotation `certmanager.k8s.io/issuer` from `letsencrypt-staging` to
+`letsencrypt-prod` in your ingress configuration in
+`human-connection/ingress/ingress.yaml`.
+
+```sh
+$ kubectl apply -f human-connection/ingress/ingress.yaml
+```
+Delete the former secret to force a refresh:
+```
+$ kubectl --namespace=human-connection delete secret tls
+```
+Now, HTTPS should be configured on your domain. Congrats.
+
 #### Legacy data migration
 
 This setup is completely optional and only required if you have data on a server
@@ -119,7 +167,7 @@ import the uploads folder and migrate a dump of mongodb into neo4j.
 Create a configmap with the specific connection data of your legacy server:
 ```sh
 $ kubectl create configmap db-migration-worker \
-  --namespace=staging \
+  --namespace=human-connection \
   --from-literal=SSH_USERNAME=someuser \
   --from-literal=SSH_HOST=yourhost \
   --from-literal=MONGODB_USERNAME=hc-api \
@@ -127,36 +175,37 @@
   --from-literal=MONGODB_AUTH_DB=hc_api \
   --from-literal=MONGODB_DATABASE=hc_api \
   --from-literal=UPLOADS_DIRECTORY=/var/www/api/uploads \
-  --from-literal=NEO4J_URI=bolt://neo4j:7687
-
+  --from-literal=NEO4J_URI=bolt://localhost:7687
 ```
-Create a secret with your public and private ssh keys:
+
+Create a secret with your public and private ssh keys. As the
+[kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys)
+points out, you should be careful with your ssh keys. Anyone with access to your
+cluster will have access to your ssh keys. Better create a new pair with
+`ssh-keygen` and copy the public key to your legacy server with `ssh-copy-id`:
+
 ```sh
 $ kubectl create secret generic ssh-keys \
-  --namespace=staging \
+  --namespace=human-connection \
   --from-file=id_rsa=/path/to/.ssh/id_rsa \
   --from-file=id_rsa.pub=/path/to/.ssh/id_rsa.pub \
   --from-file=known_hosts=/path/to/.ssh/known_hosts
 ```
-As the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys)
-points out, you should be careful with your ssh keys. Anyone with access to your
-cluster will have access to your ssh keys. Better create a new pair with
-`ssh-keygen` and copy the public key to your legacy server with `ssh-copy-id`.
 
 ##### Migrate legacy database
 
 Patch the existing deployments to use a multi-container setup:
 ```bash
 cd legacy-migration
 kubectl apply -f volume-claim-mongo-export.yaml
-kubectl patch --namespace=staging deployment nitro-backend --patch "$(cat deployment-backend.yaml)"
-kubectl patch --namespace=staging deployment nitro-neo4j --patch "$(cat deployment-neo4j.yaml)"
+kubectl patch --namespace=human-connection deployment nitro-backend --patch "$(cat deployment-backend.yaml)"
+kubectl patch --namespace=human-connection deployment nitro-neo4j --patch "$(cat deployment-neo4j.yaml)"
 cd ..
 ```
 Run the migration:
 ```shell
-$ kubectl --namespace=staging get pods
+$ kubectl --namespace=human-connection get pods
 # change below
-$ kubectl --namespace=staging exec -it nitro-neo4j-65bbdb597c-nc2lv migrate
-$ kubectl --namespace=staging exec -it nitro-backend-c6cc5ff69-8h96z sync_uploads
+$ kubectl --namespace=human-connection exec -it nitro-neo4j-65bbdb597c-nc2lv migrate
+$ kubectl --namespace=human-connection exec -it nitro-backend-c6cc5ff69-8h96z sync_uploads
 ```
diff --git a/db-migration-worker.yaml b/db-migration-worker.yaml
index e0b520e58..55743e360 100644
--- a/db-migration-worker.yaml
+++ b/db-migration-worker.yaml
@@ -3,7 +3,7 @@
 apiVersion: v1
 metadata:
   name: nitro-db-migration-worker
-  namespace: staging
+  namespace: human-connection
 spec:
   volumes:
   - name: secret-volume
@@ -30,7 +30,7 @@
 apiVersion: v1
 metadata:
   name: mongo-export-claim
-  namespace: staging
+  namespace: human-connection
 spec:
   accessModes:
   - ReadWriteOnce
diff --git a/human-connection/configmap.yaml b/human-connection/configmap.yaml
new file mode 100644
index 000000000..50ae17e23
--- /dev/null
+++ b/human-connection/configmap.yaml
@@ -0,0 +1,15 @@
+---
+ apiVersion: v1
+ kind: ConfigMap
+ data:
+   GRAPHQL_PORT: "4000"
+   GRAPHQL_URI: "http://nitro-backend.human-connection:4000"
+   MOCK: "false"
+   NEO4J_URI: "bolt://nitro-neo4j.human-connection:7687"
+   NEO4J_USER: "neo4j"
+   NEO4J_AUTH: none
+   CLIENT_URI: "https://nitro-human-connection.human-connection.org"
+   MAPBOX_TOKEN: pk.eyJ1IjoiaHVtYW4tY29ubmVjdGlvbiIsImEiOiJjajl0cnBubGoweTVlM3VwZ2lzNTNud3ZtIn0.KZ8KK9l70omjXbEkkbHGsQ
+ metadata:
+   name: configmap
+   namespace: human-connection
diff --git a/staging/deployment-backend.yaml b/human-connection/deployment-backend.yaml
similarity index 79%
rename from staging/deployment-backend.yaml
rename to human-connection/deployment-backend.yaml
index 4c2832a71..8f8c6bf51 100644
--- a/staging/deployment-backend.yaml
+++ b/human-connection/deployment-backend.yaml
@@ -3,18 +3,18 @@ kind: Deployment
 metadata:
   name: nitro-backend
-  namespace: staging
+  namespace: human-connection
 spec:
-  replicas: 2
+  replicas: 1
   minReadySeconds: 15
   progressDeadlineSeconds: 60
   selector:
     matchLabels:
-      workload.user.cattle.io/workloadselector: deployment-staging-backend
+      human-connection.org/selector: deployment-human-connection-backend
   template:
     metadata:
       labels:
-        workload.user.cattle.io/workloadselector: deployment-staging-backend
+        human-connection.org/selector: deployment-human-connection-backend
       name: "nitro-backend"
     spec:
       containers:
@@ -31,33 +31,33 @@
       - name: CLIENT_URI
         valueFrom:
           configMapKeyRef:
-            name: staging-web
+            name: configmap
            key: CLIENT_URI
       - name: GRAPHQL_PORT
         valueFrom:
           configMapKeyRef:
-            name: staging-backend
+            name: configmap
            key: GRAPHQL_PORT
       - name: GRAPHQL_URI
         valueFrom:
           configMapKeyRef:
-            name: staging-backend
+            name: configmap
            key: GRAPHQL_URI
       - name: MAPBOX_TOKEN
         valueFrom:
           configMapKeyRef:
-            name: staging-web
+            name: configmap
            key: MAPBOX_TOKEN
       - name: JWT_SECRET
         valueFrom:
           secretKeyRef:
-            name: staging
+            name: secret
            key: JWT_SECRET
            optional: false
       - name: NEO4J_URI
         valueFrom:
           configMapKeyRef:
-            name: staging-neo4j
+            name: configmap
            key: NEO4J_URI
       volumeMounts:
       - mountPath: /nitro-backend/public/uploads
@@ -74,10 +74,10 @@
 apiVersion: v1
 metadata:
   name: uploads-claim
-  namespace: staging
+  namespace: human-connection
 spec:
   accessModes:
   - ReadWriteOnce
   resources:
     requests:
-      storage: 10Gi
+      storage: 2Gi
diff --git a/staging/deployment-neo4j.yaml b/human-connection/deployment-neo4j.yaml
similarity index 82%
rename from staging/deployment-neo4j.yaml
rename to human-connection/deployment-neo4j.yaml
index d9aeab542..5ef5204a2 100644
--- a/staging/deployment-neo4j.yaml
+++ b/human-connection/deployment-neo4j.yaml
@@ -3,17 +3,17 @@ kind: Deployment
 metadata:
   name: nitro-neo4j
-  namespace: staging
+  namespace: human-connection
 spec:
   replicas: 1
   strategy: {}
   selector:
     matchLabels:
-      workload.user.cattle.io/workloadselector: deployment-staging-neo4j
+      human-connection.org/selector: deployment-human-connection-neo4j
   template:
     metadata:
       labels:
-        workload.user.cattle.io/workloadselector: deployment-staging-neo4j
+        human-connection.org/selector: deployment-human-connection-neo4j
       name: nitro-neo4j
     spec:
       containers:
@@ -34,17 +34,17 @@
       - name: NEO4J_URI
         valueFrom:
           configMapKeyRef:
-            name: staging-neo4j
+            name: configmap
            key: NEO4J_URI
       - name: NEO4J_USER
         valueFrom:
           configMapKeyRef:
-            name: staging-neo4j
+            name: configmap
            key: NEO4J_USER
       - name: NEO4J_AUTH
         valueFrom:
           configMapKeyRef:
-            name: staging-neo4j
+            name: configmap
            key: NEO4J_AUTH
       ports:
       - containerPort: 7687
@@ -63,10 +63,10 @@
 apiVersion: v1
 metadata:
   name: neo4j-data-claim
-  namespace: staging
+  namespace: human-connection
 spec:
   accessModes:
   - ReadWriteOnce
   resources:
     requests:
-      storage: 4Gi
+      storage: 1Gi
diff --git a/staging/deployment-web.yaml b/human-connection/deployment-web.yaml
similarity index 78%
rename from staging/deployment-web.yaml
rename to human-connection/deployment-web.yaml
index de9651528..a3dafe766 100644
--- a/staging/deployment-web.yaml
+++ b/human-connection/deployment-web.yaml
@@ -2,18 +2,18 @@ apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: nitro-web
-  namespace: staging
+  namespace: human-connection
 spec:
   replicas: 2
   minReadySeconds: 15
   progressDeadlineSeconds: 60
   selector:
     matchLabels:
-      workload.user.cattle.io/workloadselector: deployment-staging-web
+      human-connection.org/selector: deployment-human-connection-web
   template:
     metadata:
       labels:
-        workload.user.cattle.io/workloadselector: deployment-staging-web
+        human-connection.org/selector: deployment-human-connection-web
       name: nitro-web
     spec:
       containers:
@@ -26,17 +26,17 @@ spec:
       - name: BACKEND_URL
         valueFrom:
           configMapKeyRef:
-            name: staging-backend
+            name: configmap
            key: GRAPHQL_URI
       - name: MAPBOX_TOKEN
         valueFrom:
           configMapKeyRef:
-            name: staging-web
+            name: configmap
            key: MAPBOX_TOKEN
       - name: JWT_SECRET
         valueFrom:
           secretKeyRef:
-            name: staging
+            name: secret
            key: JWT_SECRET
            optional: false
       image: humanconnection/nitro-web:latest
diff --git a/human-connection/https/issuer.yaml b/human-connection/https/issuer.yaml
new file mode 100644
index 000000000..8cb554fc6
--- /dev/null
+++ b/human-connection/https/issuer.yaml
@@ -0,0 +1,34 @@
+---
+ apiVersion: certmanager.k8s.io/v1alpha1
+ kind: Issuer
+ metadata:
+   name: letsencrypt-staging
+   namespace: human-connection
+ spec:
+   acme:
+     # The ACME server URL
+     server: https://acme-staging-v02.api.letsencrypt.org/directory
+     # Email address used for ACME registration
+     email: user@example.com
+     # Name of a secret used to store the ACME account private key
+     privateKeySecretRef:
+       name: letsencrypt-staging
+     # Enable the HTTP-01 challenge provider
+     http01: {}
+---
+ apiVersion: certmanager.k8s.io/v1alpha1
+ kind: Issuer
+ metadata:
+   name: letsencrypt-prod
+   namespace: human-connection
+ spec:
+   acme:
+     # The ACME server URL
+     server: https://acme-v02.api.letsencrypt.org/directory
+     # Email address used for ACME registration
+     email: user@example.com
+     # Name of a secret used to store the ACME account private key
+     privateKeySecretRef:
+       name: letsencrypt-prod
+     # Enable the HTTP-01 challenge provider
+     http01: {}
diff --git a/human-connection/ingress/ingress.yaml b/human-connection/ingress/ingress.yaml
new file mode 100644
index 000000000..52e358196
--- /dev/null
+++ b/human-connection/ingress/ingress.yaml
@@ -0,0 +1,22 @@
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: ingress
+  namespace: human-connection
+  annotations:
+    kubernetes.io/ingress.class: "nginx"
+    certmanager.k8s.io/issuer: "letsencrypt-staging"
+    certmanager.k8s.io/acme-challenge-type: http01
+spec:
+  tls:
+  - hosts:
+    - nitro-master.human-connection.org
+    secretName: tls
+  rules:
+  - host: nitro-master.human-connection.org
+    http:
+      paths:
+      - path: /
+        backend:
+          serviceName: nitro-web
+          servicePort: 3000
diff --git a/human-connection/service-backend.yaml b/human-connection/service-backend.yaml
new file mode 100644
index 000000000..52e4621b2
--- /dev/null
+++ b/human-connection/service-backend.yaml
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: nitro-backend
+  namespace: human-connection
+  labels:
+    human-connection.org/selector: deployment-human-connection-backend
+spec:
+  ports:
+  - name: web
+    port: 4000
+    targetPort: 4000
+  selector:
+    human-connection.org/selector: deployment-human-connection-backend
diff --git a/staging/service-neo4j.yaml b/human-connection/service-neo4j.yaml
similarity index 53%
rename from staging/service-neo4j.yaml
rename to human-connection/service-neo4j.yaml
index d6c7a95b4..ebe7c5208 100644
--- a/staging/service-neo4j.yaml
+++ b/human-connection/service-neo4j.yaml
@@ -2,9 +2,9 @@ apiVersion: v1
 kind: Service
 metadata:
   name: nitro-neo4j
-  namespace: staging
+  namespace: human-connection
   labels:
-    workload.user.cattle.io/workloadselector: deployment-staging-neo4j
+    human-connection.org/selector: deployment-human-connection-neo4j
 spec:
   ports:
   - name: bolt
@@ -14,4 +14,4 @@ spec:
     port: 7474
     targetPort: 7474
   selector:
-    workload.user.cattle.io/workloadselector: deployment-staging-neo4j
+    human-connection.org/selector: deployment-human-connection-neo4j
diff --git a/human-connection/service-web.yaml b/human-connection/service-web.yaml
new file mode 100644
index 000000000..548b874c2
--- /dev/null
+++ b/human-connection/service-web.yaml
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: nitro-web
+  namespace: human-connection
+  labels:
+    human-connection.org/selector: deployment-human-connection-web
+spec:
+  ports:
+  - name: web
+    port: 3000
+    targetPort: 3000
+  selector:
+    human-connection.org/selector: deployment-human-connection-web
diff --git a/legacy-migration/deployment-backend.yaml b/legacy-migration/deployment-backend.yaml
index e29730cae..1adeb0665 100644
--- a/legacy-migration/deployment-backend.yaml
+++ b/legacy-migration/deployment-backend.yaml
@@ -3,7 +3,7 @@
 kind: Deployment
 metadata:
   name: nitro-backend
-  namespace: staging
+  namespace: human-connection
 spec:
   template:
     spec:
diff --git a/legacy-migration/deployment-neo4j.yaml b/legacy-migration/deployment-neo4j.yaml
index 887c02f3a..2852b90cb 100644
--- a/legacy-migration/deployment-neo4j.yaml
+++ b/legacy-migration/deployment-neo4j.yaml
@@ -3,7 +3,7 @@
 kind: Deployment
 metadata:
   name: nitro-neo4j
-  namespace: staging
+  namespace: human-connection
 spec:
   template:
     spec:
diff --git a/legacy-migration/volume-claim-mongo-export.yaml b/legacy-migration/volume-claim-mongo-export.yaml
index 563a9cfe6..106ef4736 100644
--- a/legacy-migration/volume-claim-mongo-export.yaml
+++ b/legacy-migration/volume-claim-mongo-export.yaml
@@ -3,7 +3,7 @@
 apiVersion: v1
 metadata:
   name: mongo-export-claim
-  namespace: staging
+  namespace: human-connection
 spec:
   accessModes:
   - ReadWriteOnce
diff --git a/namespace-human-connection.yaml b/namespace-human-connection.yaml
new file mode 100644
index 000000000..0710da55b
--- /dev/null
+++ b/namespace-human-connection.yaml
@@ -0,0 +1,6 @@
+kind: Namespace
+apiVersion: v1
+metadata:
+  name: human-connection
+  labels:
+    name: human-connection
diff --git a/namespace-staging.yaml b/namespace-staging.yaml
deleted file mode 100644
index d63b4e0f9..000000000
--- a/namespace-staging.yaml
+++ /dev/null
@@ -1,6 +0,0 @@
-kind: Namespace
-apiVersion: v1
-metadata:
-  name: staging
-  labels:
-    name: staging
diff --git a/secrets.template.yaml b/secrets.template.yaml
index 755cd2d06..915a31be5 100644
--- a/secrets.template.yaml
+++ b/secrets.template.yaml
@@ -4,5 +4,5 @@ data:
   JWT_SECRET: "Yi8mJjdiNzhCRiZmdi9WZA=="
   MONGODB_PASSWORD: "TU9OR09EQl9QQVNTV09SRA=="
 metadata:
-  name: staging
-  namespace: staging
+  name: human-connection
+  namespace: human-connection
diff --git a/staging/configmaps.yaml b/staging/configmaps.yaml
deleted file mode 100644
index c07353141..000000000
--- a/staging/configmaps.yaml
+++ /dev/null
@@ -1,29 +0,0 @@
----
- apiVersion: v1
- kind: ConfigMap
- data:
-   GRAPHQL_PORT: "4000"
-   GRAPHQL_URI: "http://nitro-backend.staging:4000"
-   MOCK: "false"
- metadata:
-   name: staging-backend
-   namespace: staging
----
- apiVersion: v1
- kind: ConfigMap
- data:
-   NEO4J_URI: "bolt://nitro-neo4j.staging:7687"
-   NEO4J_USER: "neo4j"
-   NEO4J_AUTH: none
- metadata:
-   name: staging-neo4j
-   namespace: staging
----
- apiVersion: v1
- kind: ConfigMap
- data:
-   CLIENT_URI: "https://nitro-staging.human-connection.org"
-   MAPBOX_TOKEN: pk.eyJ1IjoiaHVtYW4tY29ubmVjdGlvbiIsImEiOiJjajl0cnBubGoweTVlM3VwZ2lzNTNud3ZtIn0.KZ8KK9l70omjXbEkkbHGsQ
- metadata:
-   name: staging-web
-   namespace: staging
diff --git a/staging/service-backend.yaml b/staging/service-backend.yaml
deleted file mode 100644
index 39cfca63a..000000000
--- a/staging/service-backend.yaml
+++ /dev/null
@@ -1,14 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: nitro-backend
-  namespace: staging
-  labels:
-    workload.user.cattle.io/workloadselector: deployment-staging-backend
-spec:
-  ports:
-  - name: web
-    port: 4000
-    targetPort: 4000
-  selector:
-    workload.user.cattle.io/workloadselector: deployment-staging-backend
diff --git a/staging/service-web.yaml b/staging/service-web.yaml
deleted file mode 100644
index ad2b9678b..000000000
--- a/staging/service-web.yaml
+++ /dev/null
@@ -1,16 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: nitro-web
-  namespace: staging
-  labels:
-    workload.user.cattle.io/workloadselector: deployment-staging-web
-spec:
-  ports:
-  - name: web
-    port: 3000
-    targetPort: 3000
-  selector:
-    workload.user.cattle.io/workloadselector: deployment-staging-web
-  type: LoadBalancer
-  externalTrafficPolicy: Cluster
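
The README hunks above mention that values in `secrets.yaml` are stored base64-encoded and get decoded inside the pod, with `YWRtaW4=` as the example value. A quick sketch of producing and verifying such a value (`admin` here is just the document's example, not a real secret):

```shell
# -n matters: a trailing newline would become part of the decoded secret.
echo -n 'admin' | base64            # prints: YWRtaW4=

# Verify what the pod will see after kubernetes decodes the value:
echo -n 'YWRtaW4=' | base64 --decode   # prints: admin
```

The same encoding applies to `JWT_SECRET` and `MONGODB_PASSWORD` in `secrets.template.yaml`.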