Mirror of https://github.com/IT4Change/Ocelot-Social.git, synced 2025-12-13 07:45:56 +00:00

Merge pull request #4 from Human-Connection/add_db_migration_worker_deployment

Add db migration worker deployment
This commit is contained in: commit 15581de701

3  .gitignore  vendored
@@ -1 +1,2 @@
*secrets*.yaml
secrets.yaml
*/secrets.yaml
195  README.md

@@ -1,63 +1,162 @@
# Human-Connection Nitro | Deployment Configuration

> Currently the deployment is not ready for prime time, as some manual work is still required. This needs to change; the following list gives a glimpse of the missing steps.

Todos:
- [x] check labels and selectors if they all are correct
- [x] configure NGINX from yml
- [ ] configure Let's Encrypt cert-manager from yml
- [x] configure ingress from yml
- [x] configure persistent & shared storage between nodes
- [x] reproduce setup locally

## Todos
- [ ] check labels and selectors if they all are correct
- [ ] configure NGINX from yaml
- [ ] configure Let's Encrypt cert-manager from yaml
- [ ] configure ingress from yaml
- [ ] configure persistent & shared storage between nodes
- [ ] reproduce setup locally

> The dummy directory has some load balancer configurations that did not work properly on Digital Ocean but could be used as a starting point for getting it right.
## Install Minikube, kubectl

There are many Kubernetes distributions, but if you're just getting started, Minikube is a tool that you can use to get your feet wet.

## Minikube

There are many Kubernetes distributions, but if you're just getting started,
Minikube is a tool that you can use to get your feet wet.

[Install Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/)
## Create a namespace locally

```shell
kubectl create -f namespace-staging.json
```

Open minikube dashboard:

```shell
$ minikube dashboard
```

This will give you an overview.
Some of the steps below need some time to make resources available to other
dependent deployments. Keeping an eye on the dashboard is a great way to check
that.
## Apply the config map to the staging namespace

```shell
cd ./staging
kubectl apply -f neo4j-configmap.yaml -f backend-configmap.yaml -f web-configmap.yaml
```

## Setup secrets and deploy them
```shell
cd ./staging
cp secrets.yaml.template secrets.yaml
# change all vars as needed and deploy it afterwards
kubectl apply -f secrets.yaml
```
## Deploy the app

```shell
cd ./staging
kubectl apply -f neo4j-deployment.yaml -f backend-deployment.yaml -f web-deployment.yaml
```

This can take a while.
Sit back, relax, and have a look at your minikube dashboard:

```shell
minikube dashboard
```

Wait until all pods turn green and no longer show the warning `Waiting: ContainerCreating`.
## Expose the services

Follow the [installation instructions](#installation-with-kubernetes) below.
If all the pods and services have settled and everything looks green in your
minikube dashboard, expose the `nitro-web` service on your host system with:

```shell
kubectl expose deployment nitro-backend --namespace=staging --type=LoadBalancer --port=4000
kubectl expose deployment nitro-web --namespace=staging --type=LoadBalancer --port=3000
minikube service nitro-web --namespace=staging
```
## Access the service

## Digital Ocean

First, install the kubernetes dashboard:

```sh
$ kubectl apply -f dashboard/
```

Proxy localhost to the remote kubernetes dashboard:

```sh
$ kubectl proxy
```

Get your token on the command line:

```sh
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
```
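The command substitution above only picks the secret's name out of the `kubectl get secret` listing. The grep/awk part can be tried in isolation on sample text (the listing below is illustrative, not from a real cluster):

```shell
# Print the first column of the line matching "admin-user"
# from a sample "kubectl get secret" listing:
printf '%s\n' \
  'default-token-abc12     kubernetes.io/service-account-token   3   5d' \
  'admin-user-token-6gl6l  kubernetes.io/service-account-token   3   1d' \
  | grep admin-user | awk '{print $1}'
# prints: admin-user-token-6gl6l
```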
It should print something like:

```
Name:         admin-user-token-6gl6l
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=admin-user
              kubernetes.io/service-account.uid=b16afba9-dfec-11e7-bbb9-901b0e532516

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZnbDZsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiMTZhZmJhOS1kZmVjLTExZTctYmJiOS05MDFiMGU1MzI1MTYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.M70CU3lbu3PP4OjhFms8PVL5pQKj-jj4RNSLA4YmQfTXpPUuxqXjiTf094_Rzr0fgN_IVX6gC4fiNUL5ynx9KU-lkPfk0HnX8scxfJNzypL039mpGt0bbe1IXKSIRaq_9VW59Xz-yBUhycYcKPO9RM2Qa1Ax29nqNVko4vLn1_1wPqJ6XSq3GYI8anTzV8Fku4jasUwjrws6Cn6_sPEGmL54sq5R4Z5afUtv-mItTmqZZdxnkRqcJLlg2Y8WbCPogErbsaCDJoABQ7ppaqHetwfM_0yMun6ABOQbIwwl8pspJhpplKwyo700OSpvTT9zlBsu-b35lzXGBRHzv5g_RA
```

Grab the token and paste it into the login screen at [http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/](http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/)
## Installation with kubernetes

You have to complete some prerequisites, e.g. change some secrets according to
your own setup.

#### Setup config maps

```shell
minikube service nitro-backend --namespace=staging
minikube service nitro-web --namespace=staging
cp configmap-db-migration-worker.template.yaml staging/configmap-db-migration-worker.yaml
```

Edit all variables according to the setup of the remote legacy server.

#### Setup secrets and deploy them
```sh
$ cp secrets.template.yaml staging/secrets.yaml
```

Change all secrets as needed.

If you want to edit secrets, you have to `base64` encode them. See the [kubernetes
documentation](https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually).

```shell
# example how to base64-encode a string:
$ echo -n 'admin' | base64
YWRtaW4=
```

Those secrets get `base64` decoded in a kubernetes pod.
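To double-check a value before deploying, the encoding can be reversed on the command line:

```shell
# Round-trip check: decode what you just encoded
echo -n 'YWRtaW4=' | base64 --decode
# prints: admin
```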
#### Create a namespace locally

```shell
$ kubectl create -f namespace-staging.yaml
```

Switch to the namespace `staging` in your kubernetes dashboard.
### Run the configuration

```shell
$ kubectl apply -f staging/
```

This can take a while because kubernetes will download the docker images.
Sit back, relax, and have a look at your kubernetes dashboard.
Wait until all pods turn green and no longer show the warning
`Waiting: ContainerCreating`.
#### Legacy data migration

This setup is completely optional and only required if you have data on a server
which is running our legacy code and you want to import that data. It will
import the uploads folder and migrate a dump of mongodb into neo4j.

##### Prepare migration of the Human Connection legacy server

Create a configmap with the specific connection data of your legacy server:
```sh
$ kubectl create configmap db-migration-worker \
  --namespace=staging \
  --from-literal=SSH_USERNAME=someuser \
  --from-literal=SSH_HOST=yourhost \
  --from-literal=MONGODB_USERNAME=hc-api \
  --from-literal=MONGODB_PASSWORD=secretpassword \
  --from-literal=MONGODB_AUTH_DB=hc_api \
  --from-literal=MONGODB_DATABASE=hc_api \
  --from-literal=UPLOADS_DIRECTORY=/var/www/api/uploads \
  --from-literal=NEO4J_URI=bolt://neo4j:7687
```
Create a secret with your public and private ssh keys:

```sh
$ kubectl create secret generic ssh-keys \
  --namespace=staging \
  --from-file=id_rsa=/path/to/.ssh/id_rsa \
  --from-file=id_rsa.pub=/path/to/.ssh/id_rsa.pub \
  --from-file=known_hosts=/path/to/.ssh/known_hosts
```
As the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys)
points out, you should be careful with your ssh keys: anyone with access to your
cluster will have access to them. Better create a new pair with
`ssh-keygen` and copy the public key to your legacy server with `ssh-copy-id`.
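A minimal sketch of that advice — the key path, comment, and user/host are examples, not values from this repository:

```shell
# Generate a dedicated key pair for the migration worker
# instead of reusing your personal keys:
ssh-keygen -t rsa -b 4096 -f ./migration_rsa -N '' -C 'db-migration-worker'
# Then authorize only this public key on the legacy server, e.g.:
#   ssh-copy-id -i ./migration_rsa.pub someuser@yourhost
```

This keeps the secret you store in the cluster revocable without touching your own keys.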
##### Migrate legacy database

Patch the existing deployments to use a multi-container setup:

```bash
cd legacy-migration
kubectl apply -f volume-claim-mongo-export.yaml
kubectl patch --namespace=staging deployment nitro-backend --patch "$(cat deployment-backend.yaml)"
kubectl patch --namespace=staging deployment nitro-neo4j --patch "$(cat deployment-neo4j.yaml)"
cd ..
```
Run the migration:

```shell
$ kubectl --namespace=staging get pods
# change the pod IDs below to your actual pod names
$ kubectl --namespace=staging exec -it nitro-neo4j-65bbdb597c-nc2lv migrate
$ kubectl --namespace=staging exec -it nitro-backend-c6cc5ff69-8h96z sync_uploads
```
5  dashboard/admin-user.yaml  Normal file

@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
12  dashboard/role-binding.yaml  Normal file

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
39  db-migration-worker.yaml  Normal file

@@ -0,0 +1,39 @@
---
kind: Pod
apiVersion: v1
metadata:
  name: nitro-db-migration-worker
  namespace: staging
spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: ssh-keys
        defaultMode: 0400
    - name: mongo-export
      persistentVolumeClaim:
        claimName: mongo-export-claim
  containers:
    - name: nitro-db-migration-worker
      image: humanconnection/db-migration-worker:latest
      envFrom:
        - configMapRef:
            name: db-migration-worker
      volumeMounts:
        - name: secret-volume
          readOnly: false
          mountPath: /root/.ssh
        - name: mongo-export
          mountPath: /mongo-export/
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-export-claim
  namespace: staging
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
@@ -1,13 +0,0 @@
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: nitro-backend
  name: nitro-backend
  namespace: staging
spec:
  ports:
    - port: 4000
      targetPort: 4000
  selector:
    k8s-app: nitro-backend
@@ -1,12 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: sample-load-balancer
  namespace: staging
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      name: http
@@ -1,15 +0,0 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: backend-ingress
  namespace: staging
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: backend
              servicePort: 4000
@@ -1,22 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: staging
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
@@ -1,13 +0,0 @@
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: nitro-web
  name: nitro-web
  namespace: staging
spec:
  ports:
    - port: 3000
      targetPort: 3000
  selector:
    k8s-app: nitro-web
27  legacy-migration/deployment-backend.yaml  Normal file

@@ -0,0 +1,27 @@
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nitro-backend
  namespace: staging
spec:
  template:
    spec:
      containers:
        - name: nitro-db-migration-worker
          image: humanconnection/db-migration-worker:latest
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: db-migration-worker
          volumeMounts:
            - name: secret-volume
              readOnly: false
              mountPath: /root/.ssh
            - name: uploads
              mountPath: /uploads/
      volumes:
        - name: secret-volume
          secret:
            secretName: ssh-keys
            defaultMode: 0400
39  legacy-migration/deployment-neo4j.yaml  Normal file

@@ -0,0 +1,39 @@
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nitro-neo4j
  namespace: staging
spec:
  template:
    spec:
      containers:
        - name: nitro-db-migration-worker
          image: humanconnection/db-migration-worker:latest
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: db-migration-worker
          env:
            - name: COMMIT
              value: <BACKEND_COMMIT>
            - name: NEO4J_URI
              value: bolt://localhost:7687
          volumeMounts:
            - name: secret-volume
              readOnly: false
              mountPath: /root/.ssh
            - name: mongo-export
              mountPath: /mongo-export/
        - name: nitro-neo4j
          volumeMounts:
            - mountPath: /mongo-export/
              name: mongo-export
      volumes:
        - name: secret-volume
          secret:
            secretName: ssh-keys
            defaultMode: 0400
        - name: mongo-export
          persistentVolumeClaim:
            claimName: mongo-export-claim
12  legacy-migration/volume-claim-mongo-export.yaml  Normal file

@@ -0,0 +1,12 @@
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-export-claim
  namespace: staging
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
@@ -1,10 +0,0 @@
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "staging",
    "labels": {
      "name": "staging"
    }
  }
}
6  namespace-staging.yaml  Normal file

@@ -0,0 +1,6 @@
kind: Namespace
apiVersion: v1
metadata:
  name: staging
  labels:
    name: staging
8  secrets.template.yaml  Normal file

@@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
data:
  JWT_SECRET: "Yi8mJjdiNzhCRiZmdi9WZA=="
  MONGODB_PASSWORD: "TU9OR09EQl9QQVNTV09SRA=="
metadata:
  name: staging
  namespace: staging
@@ -1,9 +0,0 @@
apiVersion: v1
kind: ConfigMap
data:
  GRAPHQL_PORT: "4000"
  GRAPHQL_URI: "https://api-nitro-staging.human-connection.org"
  MOCK: "false"
metadata:
  name: staging-backend
  namespace: staging
@@ -1,67 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nitro-backend
  namespace: staging
spec:
  replicas: 2
  minReadySeconds: 15
  progressDeadlineSeconds: 60
  # strategy:
  #   rollingUpdate:
  #     maxSurge: 1
  #     maxUnavailable: 0
  #   type: RollingUpdate
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: deployment-staging-backend
  template:
    metadata:
      labels:
        workload.user.cattle.io/workloadselector: deployment-staging-backend
      name: "nitro-backend"
    spec:
      containers:
        - env:
            - name: MOCK
              value: "false"
            - name: CLIENT_URI
              valueFrom:
                configMapKeyRef:
                  name: staging-web
                  key: CLIENT_URI
            - name: GRAPHQL_PORT
              valueFrom:
                configMapKeyRef:
                  name: staging-backend
                  key: GRAPHQL_PORT
            - name: GRAPHQL_URI
              valueFrom:
                configMapKeyRef:
                  name: staging-backend
                  key: GRAPHQL_URI
            - name: MAPBOX_TOKEN
              valueFrom:
                configMapKeyRef:
                  name: staging-web
                  key: MAPBOX_TOKEN
            - name: JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: staging
                  key: JWT_SECRET
                  optional: false
            - name: NEO4J_URI
              valueFrom:
                configMapKeyRef:
                  name: staging-neo4j
                  key: NEO4J_URI
          image: humanconnection/nitro-backend:latest
          name: nitro-backend
          ports:
            - containerPort: 4000
          resources: {}
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
status: {}
29  staging/configmaps.yaml  Normal file

@@ -0,0 +1,29 @@
---
apiVersion: v1
kind: ConfigMap
data:
  GRAPHQL_PORT: "4000"
  GRAPHQL_URI: "http://nitro-backend.staging:4000"
  MOCK: "false"
metadata:
  name: staging-backend
  namespace: staging
---
apiVersion: v1
kind: ConfigMap
data:
  NEO4J_URI: "bolt://nitro-neo4j.staging:7687"
  NEO4J_USER: "neo4j"
  NEO4J_AUTH: none
metadata:
  name: staging-neo4j
  namespace: staging
---
apiVersion: v1
kind: ConfigMap
data:
  CLIENT_URI: "https://nitro-staging.human-connection.org"
  MAPBOX_TOKEN: pk.eyJ1IjoiaHVtYW4tY29ubmVjdGlvbiIsImEiOiJjajl0cnBubGoweTVlM3VwZ2lzNTNud3ZtIn0.KZ8KK9l70omjXbEkkbHGsQ
metadata:
  name: staging-web
  namespace: staging
83  staging/deployment-backend.yaml  Normal file

@@ -0,0 +1,83 @@
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nitro-backend
  namespace: staging
spec:
  replicas: 2
  minReadySeconds: 15
  progressDeadlineSeconds: 60
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: deployment-staging-backend
  template:
    metadata:
      labels:
        workload.user.cattle.io/workloadselector: deployment-staging-backend
      name: "nitro-backend"
    spec:
      containers:
        - name: nitro-backend
          image: humanconnection/nitro-backend:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 4000
          env:
            - name: COMMIT
              value: <BACKEND_COMMIT>
            - name: MOCK
              value: "false"
            - name: CLIENT_URI
              valueFrom:
                configMapKeyRef:
                  name: staging-web
                  key: CLIENT_URI
            - name: GRAPHQL_PORT
              valueFrom:
                configMapKeyRef:
                  name: staging-backend
                  key: GRAPHQL_PORT
            - name: GRAPHQL_URI
              valueFrom:
                configMapKeyRef:
                  name: staging-backend
                  key: GRAPHQL_URI
            - name: MAPBOX_TOKEN
              valueFrom:
                configMapKeyRef:
                  name: staging-web
                  key: MAPBOX_TOKEN
            - name: JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: staging
                  key: JWT_SECRET
                  optional: false
            - name: NEO4J_URI
              valueFrom:
                configMapKeyRef:
                  name: staging-neo4j
                  key: NEO4J_URI
          volumeMounts:
            - mountPath: /nitro-backend/public/uploads
              name: uploads
      volumes:
        - name: uploads
          persistentVolumeClaim:
            claimName: uploads-claim
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
status: {}
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: uploads-claim
  namespace: staging
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
72  staging/deployment-neo4j.yaml  Normal file

@@ -0,0 +1,72 @@
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nitro-neo4j
  namespace: staging
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: deployment-staging-neo4j
  template:
    metadata:
      labels:
        workload.user.cattle.io/workloadselector: deployment-staging-neo4j
      name: nitro-neo4j
    spec:
      containers:
        - name: nitro-neo4j
          image: humanconnection/neo4j:latest
          imagePullPolicy: Always
          env:
            - name: COMMIT
              value: <BACKEND_COMMIT>
            - name: NEO4J_apoc_import_file_enabled
              value: "true"
            - name: NEO4J_dbms_memory_pagecache_size
              value: 1G
            - name: NEO4J_dbms_memory_heap_max__size
              value: 1G
            - name: NEO4J_AUTH
              value: none
            - name: NEO4J_URI
              valueFrom:
                configMapKeyRef:
                  name: staging-neo4j
                  key: NEO4J_URI
            - name: NEO4J_USER
              valueFrom:
                configMapKeyRef:
                  name: staging-neo4j
                  key: NEO4J_USER
            - name: NEO4J_AUTH
              valueFrom:
                configMapKeyRef:
                  name: staging-neo4j
                  key: NEO4J_AUTH
          ports:
            - containerPort: 7687
            - containerPort: 7474
          volumeMounts:
            - mountPath: /data/
              name: neo4j-data
      volumes:
        - name: neo4j-data
          persistentVolumeClaim:
            claimName: neo4j-data-claim
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: neo4j-data-claim
  namespace: staging
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
@@ -7,11 +7,6 @@ spec:
  replicas: 2
  minReadySeconds: 15
  progressDeadlineSeconds: 60
  # strategy:
  #   rollingUpdate:
  #     maxSurge: 1
  #     maxUnavailable: 0
  #   type: RollingUpdate
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: deployment-staging-web
@@ -22,7 +17,10 @@ spec:
      name: nitro-web
    spec:
      containers:
        - env:
        - name: web
          env:
            - name: COMMIT
              value: <WEBAPP_COMMIT>
            - name: HOST
              value: 0.0.0.0
            - name: BACKEND_URL
@@ -42,7 +40,6 @@ spec:
                  key: JWT_SECRET
                  optional: false
          image: humanconnection/nitro-web:latest
          name: web
          ports:
            - containerPort: 3000
          resources: {}
@@ -1,260 +0,0 @@
apiVersion: v1
|
||||
items:
|
||||
- apiVersion: extensions/v1beta1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: backend
|
||||
namespace: staging
|
||||
spec:
|
||||
minReadySeconds: 15
|
||||
progressDeadlineSeconds: 60
|
||||
replicas: 1
|
||||
revisionHistoryLimit: 10
|
||||
selector:
|
||||
matchLabels:
|
||||
cattle.io/creator: norman
|
||||
workload.user.cattle.io/workloadselector: deployment-staging-backend
|
||||
strategy:
|
||||
rollingUpdate:
|
||||
maxSurge: 1
|
||||
maxUnavailable: 0
|
||||
type: RollingUpdate
|
||||
template:
|
||||
spec:
|
||||
containers:
|
||||
- env:
|
||||
- name: MOCK
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
key: MOCK
|
||||
name: staging-backend
|
||||
optional: false
|
||||
- name: NEO4J_URI
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
key: NEO4J_URI
|
||||
name: staging-neo4j
|
||||
optional: false
|
||||
- name: JWT_SECRET
|
||||
valueFrom:
|
||||
secretKeyRef:
|
||||
key: JWT_SECRET
|
||||
name: staging
|
||||
optional: false
|
||||
- name: NEO4J_AUTH
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
key: NEO4J_AUTH
|
||||
name: staging-neo4j
|
||||
optional: false
|
||||
- name: CLIENT_URI
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
key: CLIENT_URI
|
||||
name: staging-web
|
||||
optional: false
|
||||
- name: GRAPHQL_PORT
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
key: GRAPHQL_PORT
|
||||
name: staging-backend
|
||||
optional: false
|
||||
- name: GRAPHQL_URI
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
key: GRAPHQL_URI
|
||||
name: staging-backend
|
||||
optional: false
|
||||
image: humanconnection/nitro-backend:latest
|
||||
imagePullPolicy: Always
|
||||
name: backend
|
||||
resources: {}
|
||||
tty: true
|
||||
restartPolicy: Always
|
||||
terminationGracePeriodSeconds: 30
|
||||
#- apiVersion: extensions/v1beta1
|
||||
# kind: Deployment
|
||||
# metadata:
|
||||
# annotations:
|
||||
# deployment.kubernetes.io/revision: "2"
|
||||
# field.cattle.io/creatorId: user-x8jr4
|
||||
# field.cattle.io/publicEndpoints: '[{"nodeName":"c-2kbhr:m-bmgq4","addresses":["104.248.30.130"],"port":7687,"protocol":"TCP","podName":"staging:neo4j-2-6589cbc4d5-q4bxl","allNodes":false},{"nodeName":"c-2kbhr:m-bmgq4","addresses":["104.248.30.130"],"port":7474,"protocol":"TCP","podName":"staging:neo4j-2-6589cbc4d5-q4bxl","allNodes":false},{"nodeName":"c-2kbhr:m-bmgq4","addresses":["104.248.30.130"],"port":7473,"protocol":"TCP","podName":"staging:neo4j-2-6589cbc4d5-q4bxl","allNodes":false}]'
|
||||
# creationTimestamp: 2018-12-10T19:07:58Z
|
||||
# generation: 8
|
||||
# labels:
|
||||
# cattle.io/creator: norman
|
||||
# workload.user.cattle.io/workloadselector: deployment-staging-neo4j-2
|
||||
# name: neo4j-2
|
||||
# namespace: staging
|
||||
# resourceVersion: "2380945"
|
||||
# selfLink: /apis/extensions/v1beta1/namespaces/staging/deployments/neo4j-2
|
||||
# uid: e80460f6-fcae-11e8-943a-c6c288d5f6fa
|
||||
# spec:
|
||||
# progressDeadlineSeconds: 600
|
||||
# replicas: 1
|
||||
# revisionHistoryLimit: 10
|
||||
# selector:
|
||||
# matchLabels:
|
||||
# workload.user.cattle.io/workloadselector: deployment-staging-neo4j-2
|
||||
# strategy:
|
||||
# rollingUpdate:
|
||||
# maxSurge: 1
|
||||
# maxUnavailable: 0
|
||||
# type: RollingUpdate
|
||||
# template:
|
||||
# metadata:
|
||||
# annotations:
|
||||
# cattle.io/timestamp: 2018-12-11T11:11:09Z
|
||||
# field.cattle.io/ports: '[[{"containerPort":7687,"dnsName":"neo4j-2-hostport","hostPort":7687,"kind":"HostPort","name":"7687tcp76870","protocol":"TCP","sourcePort":7687},{"containerPort":7474,"dnsName":"neo4j-2-hostport","hostPort":7474,"kind":"HostPort","name":"7474tcp74740","protocol":"TCP","sourcePort":7474},{"containerPort":7473,"dnsName":"neo4j-2-hostport","hostPort":7473,"kind":"HostPort","name":"7473tcp74730","protocol":"TCP","sourcePort":7473}]]'
|
||||
# creationTimestamp: null
|
||||
# labels:
|
||||
# workload.user.cattle.io/workloadselector: deployment-staging-neo4j-2
|
||||
# spec:
|
||||
# containers:
|
||||
# - env:
|
||||
# - name: NEO4J_AUTH
|
||||
# value: none
|
||||
# image: humanconnection/neo4j:latest
|
||||
# imagePullPolicy: IfNotPresent
|
||||
# name: neo4j-2
|
||||
# ports:
|
||||
# - containerPort: 7687
|
||||
# hostPort: 7687
|
||||
# name: 7687tcp76870
|
||||
# protocol: TCP
|
||||
# - containerPort: 7474
|
||||
# hostPort: 7474
|
||||
# name: 7474tcp74740
|
||||
# protocol: TCP
|
||||
# - containerPort: 7473
|
||||
# hostPort: 7473
|
||||
# name: 7473tcp74730
|
||||
# protocol: TCP
|
||||
# resources: {}
|
||||
# securityContext:
|
||||
# allowPrivilegeEscalation: false
|
||||
# capabilities: {}
|
||||
# privileged: false
|
||||
# readOnlyRootFilesystem: false
|
||||
# runAsNonRoot: false
|
||||
# stdin: true
|
||||
# terminationMessagePath: /dev/termination-log
|
||||
# terminationMessagePolicy: File
|
||||
# tty: true
|
||||
# dnsPolicy: ClusterFirst
|
||||
# restartPolicy: Always
|
||||
# schedulerName: default-scheduler
|
||||
# securityContext: {}
|
||||
# terminationGracePeriodSeconds: 30
|
||||
# status:
|
||||
# availableReplicas: 1
|
||||
# conditions:
|
||||
# - lastTransitionTime: 2018-12-10T19:07:58Z
|
||||
# lastUpdateTime: 2018-12-11T11:11:18Z
|
||||
# message: ReplicaSet "neo4j-2-6589cbc4d5" has successfully progressed.
|
||||
# reason: NewReplicaSetAvailable
|
||||
# status: "True"
|
||||
# type: Progressing
|
||||
# - lastTransitionTime: 2018-12-11T12:12:41Z
|
||||
# lastUpdateTime: 2018-12-11T12:12:41Z
|
||||
# message: Deployment has minimum availability.
|
||||
# reason: MinimumReplicasAvailable
|
||||
# status: "True"
|
||||
# type: Available
|
||||
# observedGeneration: 8
#    readyReplicas: 1
#    replicas: 1
#    updatedReplicas: 1
##- apiVersion: extensions/v1beta1
#  kind: Deployment
#  metadata:
#    annotations:
#      deployment.kubernetes.io/revision: "15"
#      field.cattle.io/creatorId: user-x8jr4
#      field.cattle.io/publicEndpoints: '[{"addresses":["68.183.211.116"],"port":31726,"protocol":"TCP","serviceName":"staging:web-nodeport","allNodes":true},{"addresses":["104.248.25.205"],"port":80,"protocol":"HTTP","serviceName":"staging:ingress-ef72b2ceebfff95d50b0537c0e9e98d8","ingressName":"staging:web","hostname":"web.staging.104.248.25.205.xip.io","allNodes":true}]'
#    creationTimestamp: 2018-11-30T13:56:41Z
#    generation: 56
#    labels:
#      cattle.io/creator: norman
#      workload.user.cattle.io/workloadselector: deployment-staging-web
#    name: web
#    namespace: staging
#    resourceVersion: "2401610"
#    selfLink: /apis/extensions/v1beta1/namespaces/staging/deployments/web
#    uid: c3870196-f4a7-11e8-943a-c6c288d5f6fa
#  spec:
#    progressDeadlineSeconds: 600
#    replicas: 1
#    revisionHistoryLimit: 10
#    selector:
#      matchLabels:
#        workload.user.cattle.io/workloadselector: deployment-staging-web
#    strategy:
#      rollingUpdate:
#        maxSurge: 1
#        maxUnavailable: 0
#      type: RollingUpdate
#    template:
#      metadata:
#        labels:
#          workload.user.cattle.io/workloadselector: deployment-staging-web
#      spec:
#        containers:
#        - env:
#          - name: HOST
#            value: 0.0.0.0
#          - name: JWT_SECRET
#            valueFrom:
#              secretKeyRef:
#                key: JWT_SECRET
#                name: jwt-secret
#                optional: false
#          - name: BACKEND_URL
#            valueFrom:
#              configMapKeyRef:
#                key: GRAPHQL_URI
#                name: staging-configs
#                optional: false
#          image: humanconnection/nitro-web:latest
#          imagePullPolicy: Always
#          name: web
#          ports:
#          - containerPort: 3000
#            name: 3000tcp01
#            protocol: TCP
#          resources: {}
#          securityContext:
#            allowPrivilegeEscalation: false
#            capabilities: {}
#            privileged: false
#            readOnlyRootFilesystem: false
#            runAsNonRoot: false
#          stdin: true
#          terminationMessagePath: /dev/termination-log
#          terminationMessagePolicy: File
#          tty: true
#        dnsPolicy: ClusterFirst
#        restartPolicy: Always
#        schedulerName: default-scheduler
#        securityContext: {}
#        terminationGracePeriodSeconds: 30
#  status:
#    availableReplicas: 1
#    conditions:
#    - lastTransitionTime: 2018-11-30T14:53:36Z
#      lastUpdateTime: 2018-12-11T11:17:34Z
#      message: ReplicaSet "web-5864d6db9c" has successfully progressed.
#      reason: NewReplicaSetAvailable
#      status: "True"
#      type: Progressing
#    - lastTransitionTime: 2018-12-11T11:23:17Z
#      lastUpdateTime: 2018-12-11T11:23:17Z
#      message: Deployment has minimum availability.
#      reason: MinimumReplicasAvailable
#      status: "True"
#      type: Available
#    observedGeneration: 56
#    readyReplicas: 1
#    replicas: 1
#    updatedReplicas: 1
kind: List
@@ -1,9 +0,0 @@
apiVersion: v1
kind: ConfigMap
data:
  NEO4J_URI: "bolt://neo4j:7687"
  NEO4J_USER: "neo4j"
  NEO4J_AUTH: none
metadata:
  name: staging-neo4j
  namespace: staging
@@ -1,50 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nitro-neo4j
  namespace: staging
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: deployment-staging-neo4j
  template:
    metadata:
      labels:
        workload.user.cattle.io/workloadselector: deployment-staging-neo4j
      name: "nitro-neo4j"
    spec:
      containers:
      - env:
        - name: NEO4J_dbms_memory_pagecache_size
          value: 1G
        - name: NEO4J_dbms_memory_heap_max__size
          value: 1G
        - name: NEO4J_AUTH
          value: none
        - name: NEO4J_URI
          valueFrom:
            configMapKeyRef:
              name: staging-neo4j
              key: NEO4J_URI
        - name: NEO4J_USER
          valueFrom:
            configMapKeyRef:
              name: staging-neo4j
              key: NEO4J_USER
        - name: NEO4J_AUTH
          valueFrom:
            configMapKeyRef:
              name: staging-neo4j
              key: NEO4J_AUTH
        image: humanconnection/neo4j:latest
        name: nitro-neo4j
        ports:
        - containerPort: 7687
        - containerPort: 7474
        # - containerPort: 7473
        resources: {}
        imagePullPolicy: IfNotPresent
      restartPolicy: Always
status: {}
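The deleted Deployment above leans on the Neo4j Docker image's env-var naming convention: the `NEO4J_` prefix is dropped, a double underscore becomes a literal underscore, and each remaining single underscore becomes a dot, so `NEO4J_dbms_memory_heap_max__size` configures `dbms.memory.heap.max_size`. A minimal sketch of that mapping (the helper name is ours, not part of the image):

```python
def neo4j_env_to_setting(name: str) -> str:
    """Map a NEO4J_* env var to its neo4j.conf setting name.

    Mirrors the official Neo4j Docker image convention:
    drop the NEO4J_ prefix, '__' -> '_', single '_' -> '.'.
    """
    body = name[len("NEO4J_"):]
    # Protect literal underscores (written as '__') before turning
    # the remaining single underscores into dots.
    placeholder = "\x00"
    return (body.replace("__", placeholder)
                .replace("_", ".")
                .replace(placeholder, "_"))

print(neo4j_env_to_setting("NEO4J_dbms_memory_heap_max__size"))  # dbms.memory.heap.max_size
```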
@@ -1,22 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/ipAddresses: "null"
    field.cattle.io/targetDnsRecordIds: "null"
    field.cattle.io/targetWorkloadIds: '["deployment:staging:nitro-neo4j"]'
  labels:
    cattle.io/creator: norman
  name: neo4j
  namespace: staging
spec:
  clusterIP: None
  ports:
  - name: default
    port: 42
    protocol: TCP
    targetPort: 42
  selector:
    workloadID_neo4j: "true"
  sessionAffinity: None
  type: ClusterIP
@@ -1,7 +0,0 @@
apiVersion: v1
kind: Secret
data:
  JWT_SECRET: "aHVtYW5jb25uZWN0aW9uLWRlcGxveW1lbnQ="
metadata:
  name: staging
  namespace: staging
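Values under `data:` in a Secret are base64-encoded, not encrypted; anyone with read access to the manifest can recover the plaintext, which is why files matching `*secrets*.yaml` are git-ignored in this repo. For example:

```python
import base64

# The JWT_SECRET value from the (deleted) staging Secret above.
encoded = "aHVtYW5jb25uZWN0aW9uLWRlcGxveW1lbnQ="
plaintext = base64.b64decode(encoded).decode("utf-8")
print(plaintext)  # humanconnection-deployment

# Re-encoding round-trips to the same manifest value.
assert base64.b64encode(plaintext.encode("utf-8")).decode("ascii") == encoded
```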
14 staging/service-backend.yaml Normal file
@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
  name: nitro-backend
  namespace: staging
  labels:
    workload.user.cattle.io/workloadselector: deployment-staging-backend
spec:
  ports:
  - name: web
    port: 4000
    targetPort: 4000
  selector:
    workload.user.cattle.io/workloadselector: deployment-staging-backend
17 staging/service-neo4j.yaml Normal file
@@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
  name: nitro-neo4j
  namespace: staging
  labels:
    workload.user.cattle.io/workloadselector: deployment-staging-neo4j
spec:
  ports:
  - name: bolt
    port: 7687
    targetPort: 7687
  - name: web
    port: 7474
    targetPort: 7474
  selector:
    workload.user.cattle.io/workloadselector: deployment-staging-neo4j
16 staging/service-web.yaml Normal file
@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: nitro-web
  namespace: staging
  labels:
    workload.user.cattle.io/workloadselector: deployment-staging-web
spec:
  ports:
  - name: web
    port: 3000
    targetPort: 3000
  selector:
    workload.user.cattle.io/workloadselector: deployment-staging-web
  type: LoadBalancer
  externalTrafficPolicy: Cluster
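The new Services find their Pods purely by label equality: a Pod backs a Service exactly when every key/value pair in the Service's `spec.selector` appears among the Pod's labels, extra Pod labels notwithstanding. A minimal sketch of that matching rule (the function name is ours, for illustration only):

```python
def service_selects(selector: dict, pod_labels: dict) -> bool:
    """Equality-based selector matching as Kubernetes Services use it:
    every selector key must be present in the pod's labels with the
    exact same value; additional pod labels are ignored."""
    return all(pod_labels.get(key) == value for key, value in selector.items())

web_selector = {"workload.user.cattle.io/workloadselector": "deployment-staging-web"}
web_pod = {
    "workload.user.cattle.io/workloadselector": "deployment-staging-web",
    "pod-template-hash": "5864d6db9c",  # extra label, ignored by the selector
}
print(service_selects(web_selector, web_pod))  # True
```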
@@ -1,8 +0,0 @@
apiVersion: v1
kind: ConfigMap
data:
  CLIENT_URI: "https://nitro-staging.human-connection.org"
  MAPBOX_TOKEN: pk.eyJ1IjoiaHVtYW4tY29ubmVjdGlvbiIsImEiOiJjajl0cnBubGoweTVlM3VwZ2lzNTNud3ZtIn0.KZ8KK9l70omjXbEkkbHGsQ
metadata:
  name: staging-web
  namespace: staging