Change namespace

- from '--namespace=human-connection' to '-n ocelot-social'.
Wolfgang Huß 2020-12-08 08:50:30 +01:00
parent 68fa07aa28
commit cbdbe276cd
5 changed files with 20 additions and 20 deletions

@@ -49,8 +49,8 @@ If the response looks good, configure your domain registrar for the new IP addre
Now let's get a valid HTTPS certificate. According to the tutorial above, check your tls certificate for staging:
```bash
$ kubectl describe --namespace=human-connection certificate tls
$ kubectl describe --namespace=human-connection secret tls
$ kubectl describe -n ocelot-social certificate tls
$ kubectl describe -n ocelot-social secret tls
```
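What you are checking for is, in essence, a `Ready` condition with status `True` in the certificate's status; the exact output format depends on your cert-manager version, so the quick filter below is only a sketch:
```bash
# Show just the condition block of the certificate (hypothetical filter;
# adjust the pattern if your cert-manager version formats output differently).
$ kubectl describe -n ocelot-social certificate tls | grep -A 3 'Conditions:'
```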
If everything looks good, update the issuer of your ingress. Change the annotation `certmanager.k8s.io/issuer` from `letsencrypt-develop` to `letsencrypt-production` in your ingress configuration in `ingress.yaml`.
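If you prefer to make that edit from the shell instead of an editor, a one-liner along these lines works, assuming `letsencrypt-develop` only occurs in that annotation (a convenience sketch, not part of the original instructions):
```bash
# Swap the issuer in place, keeping a .bak copy of the previous file,
# then print the annotation line to double-check before applying.
$ sed -i.bak 's/letsencrypt-develop/letsencrypt-production/' ingress.yaml
$ grep 'certmanager.k8s.io/issuer' ingress.yaml
```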
@@ -63,7 +63,7 @@ $ kubectl apply -f ingress.yaml
Delete the former secret to force a refresh:
```text
$ kubectl --namespace=human-connection delete secret tls
$ kubectl -n ocelot-social delete secret tls
```
Now, HTTPS should be configured on your domain. Congrats.
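A quick external check that the production certificate is actually being served (the domain below is a placeholder, substitute your own):
```bash
# Print the HTTP status line plus the certificate subject and issuer that
# curl reports; a Let's Encrypt production issuer and no TLS warnings mean success.
$ curl -vI https://your-domain.example 2>&1 | grep -E 'HTTP/|subject:|issuer:'
```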

@@ -11,7 +11,7 @@ Create a configmap with the specific connection data of your legacy server:
```bash
$ kubectl create configmap maintenance-worker \
--namespace=human-connection \
-n ocelot-social \
--from-literal=SSH_USERNAME=someuser \
--from-literal=SSH_HOST=yourhost \
--from-literal=MONGODB_USERNAME=hc-api \
@@ -25,7 +25,7 @@ Create a secret with your public and private ssh keys. As the [kubernetes docume
```bash
$ kubectl create secret generic ssh-keys \
--namespace=human-connection \
-n ocelot-social \
--from-file=id_rsa=/path/to/.ssh/id_rsa \
--from-file=id_rsa.pub=/path/to/.ssh/id_rsa.pub \
--from-file=known_hosts=/path/to/.ssh/known_hosts
@@ -41,14 +41,14 @@ Bring the application into maintenance mode.
Then temporarily delete the backend and database deployments:
```bash
$ kubectl --namespace=human-connection get deployments
$ kubectl -n ocelot-social get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
develop-backend 1/1 1 1 3d11h
develop-neo4j 1/1 1 1 3d11h
develop-webapp 2/2 2 2 73d
$ kubectl --namespace=human-connection delete deployment develop-neo4j
$ kubectl -n ocelot-social delete deployment develop-neo4j
deployment.extensions "develop-neo4j" deleted
$ kubectl --namespace=human-connection delete deployment develop-backend
$ kubectl -n ocelot-social delete deployment develop-backend
deployment.extensions "develop-backend" deleted
```
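If you do not have the deployment manifests at hand, it can help to export the live specs before running the delete commands above, so the deployments are easy to recreate later (an optional precaution, not part of the original instructions; the file name is arbitrary):
```bash
# Export the current deployment specs so they can be restored later with
# `kubectl apply -f deleted-deployments.yaml`.
$ kubectl -n ocelot-social get deployment develop-backend develop-neo4j -o yaml > deleted-deployments.yaml
```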
@@ -63,7 +63,7 @@ pod/develop-maintenance-worker created
Import legacy database and uploads:
```bash
$ kubectl --namespace=human-connection exec -it develop-maintenance-worker bash
$ kubectl -n ocelot-social exec -it develop-maintenance-worker bash
$ import_legacy_db
$ import_legacy_uploads
$ exit
@@ -72,7 +72,7 @@ $ exit
Delete the pod when you're done:
```bash
$ kubectl --namespace=human-connection delete pod develop-maintenance-worker
$ kubectl -n ocelot-social delete pod develop-maintenance-worker
```
Oh, and of course you have to get those deleted deployments back. One way of

@@ -29,7 +29,7 @@ database connections left and nobody can access the application.
Run the following:
```sh
$ kubectl --namespace=human-connection edit deployment develop-neo4j
$ kubectl -n ocelot-social edit deployment develop-neo4j
```
Add the following to `spec.template.spec.containers`:
@@ -55,9 +55,9 @@ file and trigger an update of the deployment.
First stop your Neo4J database (see above). Then:
```sh
$ kubectl --namespace=human-connection get pods
$ kubectl -n ocelot-social get pods
# Copy the ID of the pod running Neo4J.
$ kubectl --namespace=human-connection exec -it <POD-ID> bash
$ kubectl -n ocelot-social exec -it <POD-ID> bash
# Once you're in the pod, dump the db to a file e.g. `/root/neo4j-backup`.
> neo4j-admin dump --to=/root/neo4j-backup
> exit
@@ -72,12 +72,12 @@ Revert your changes to deployment `develop-neo4j` which will restart the databas
First stop your Neo4J database. Then:
```sh
$ kubectl --namespace=human-connection get pods
$ kubectl -n ocelot-social get pods
# Copy the ID of the pod running Neo4J.
# Then upload your local backup to the pod. Note that once the pod gets deleted
# e.g. if you change the deployment, the backup file is gone with it.
$ kubectl cp ./neo4j-backup human-connection/<POD-ID>:/root/
$ kubectl --namespace=human-connection exec -it <POD-ID> bash
$ kubectl -n ocelot-social exec -it <POD-ID> bash
# Once you're in the pod, restore the backup and overwrite the default database
# called `graph.db` with `--force`.
# This will delete all existing data in database `graph.db`!
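# (Added sketch, not part of this commit: with the standard neo4j-admin tooling
#  the restore alluded to above typically looks like the following; the exact
#  command used in these docs may differ.)
> neo4j-admin load --from=/root/neo4j-backup --database=graph.db --force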

@@ -43,12 +43,12 @@ Restoration must be done while the database is not running, see [our docs](https
After you have stopped the database and have the pod running, you can restore the database by running these commands:
```sh
$ kubectl --namespace=human-connection get pods
$ kubectl -n ocelot-social get pods
# Copy the ID of the pod running Neo4J.
# Then upload your local backup to the pod. Note that once the pod gets deleted
# e.g. if you change the deployment, the backup file is gone with it.
$ kubectl cp ./neo4j-backup/ human-connection/<POD-ID>:/root/
$ kubectl --namespace=human-connection exec -it <POD-ID> bash
$ kubectl -n ocelot-social exec -it <POD-ID> bash
# Once you're in the pod, restore the backup and overwrite the default database
# called `graph.db` with `--force`.
# This will delete all existing data in database `graph.db`!

@@ -1,6 +1,6 @@
#!/usr/bin/env bash
sed -i "s/<COMMIT>/${TRAVIS_COMMIT}/g" $TRAVIS_BUILD_DIR/scripts/patches/patch-deployment.yaml
sed -i "s/<COMMIT>/${TRAVIS_COMMIT}/g" $TRAVIS_BUILD_DIR/scripts/patches/patch-configmap.yaml
kubectl --namespace=human-connection patch configmap develop-configmap -p "$(cat $TRAVIS_BUILD_DIR/scripts/patches/patch-configmap.yaml)"
kubectl --namespace=human-connection patch deployment develop-backend -p "$(cat $TRAVIS_BUILD_DIR/scripts/patches/patch-deployment.yaml)"
kubectl --namespace=human-connection patch deployment develop-webapp -p "$(cat $TRAVIS_BUILD_DIR/scripts/patches/patch-deployment.yaml)"
kubectl -n ocelot-social patch configmap develop-configmap -p "$(cat $TRAVIS_BUILD_DIR/scripts/patches/patch-configmap.yaml)"
kubectl -n ocelot-social patch deployment develop-backend -p "$(cat $TRAVIS_BUILD_DIR/scripts/patches/patch-deployment.yaml)"
kubectl -n ocelot-social patch deployment develop-webapp -p "$(cat $TRAVIS_BUILD_DIR/scripts/patches/patch-deployment.yaml)"