mirror of https://github.com/Ocelot-Social-Community/Ocelot-Social.git
synced 2025-12-13 07:46:06 +00:00
commit 4d6c888253

README.md (105 lines changed)
@@ -3,7 +3,7 @@
 Todos:
 - [x] check labels and selectors if they all are correct
 - [x] configure NGINX from yml
-- [ ] configure Let's Encrypt cert-manager from yml
+- [x] configure Let's Encrypt cert-manager from yml
 - [x] configure ingress from yml
 - [x] configure persistent & shared storage between nodes
 - [x] reproduce setup locally
@@ -28,7 +28,7 @@ If all the pods and services have settled and everything looks green in your
 minikube dashboard, expose the `nitro-web` service on your host system with:

 ```shell
-$ minikube service nitro-web --namespace=staging
+$ minikube service nitro-web --namespace=human-connection
 ```

 ## Digital Ocean
@@ -36,6 +36,8 @@ $ minikube service nitro-web --namespace=staging
 First, install kubernetes dashboard:
 ```sh
-$ kubectl apply -f dashboard/
+$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
+
 ```
 Proxy localhost to the remote kubernetes dashboard:
 ```sh
@@ -70,16 +72,10 @@ Grab the token and paste it into the login screen at [http://localhost:8001/api/
 You have to do some prerequisites e.g. change some secrets according to your
 own setup.

-#### Setup config maps
-```shell
-$ cp configmap-db-migration-worker.template.yaml staging/configmap-db-migration-worker.yaml
-```
-Edit all variables according to the setup of the remote legacy server.
-
-#### Setup secrets and deploy themn
+### Edit secrets

 ```sh
-$ cp secrets.template.yaml staging/secrets.yaml
+$ cp secrets.template.yaml human-connection/secrets.yaml
 ```
 Change all secrets as needed.
@@ -92,16 +88,16 @@ YWRtaW4=
 ```
 Those secrets get `base64` decoded in a kubernetes pod.

-#### Create a namespace locally
+### Create a namespace
 ```shell
-$ kubectl create -f namespace-staging.yaml
+$ kubectl apply -f namespace-human-connection.yaml
 ```
-Switch to the namespace `staging` in your kubernetes dashboard.
+Switch to the namespace `human-connection` in your kubernetes dashboard.


 ### Run the configuration
 ```shell
-$ kubectl apply -f staging/
+$ kubectl apply -f human-connection/
 ```

 This can take a while because kubernetes will download the docker images.
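The `YWRtaW4=` shown in the hunk context above is plain base64, not encryption. A minimal sketch (assuming GNU coreutils `base64`) of how values for `secrets.yaml` are produced and checked:

```shell
# Encode a value for secrets.yaml; printf '%s' avoids a trailing newline
# sneaking into the encoded secret.
printf '%s' 'admin' | base64
# → YWRtaW4=

# Decode it again to verify what the pod will actually see.
printf '%s' 'YWRtaW4=' | base64 -d
# → admin
```

Anyone with read access to the manifests can decode these values, which is why the secrets files are copied from templates rather than committed.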
@@ -109,6 +105,58 @@ Sit back and relax and have a look into your kubernetes dashboard.
 Wait until all pods turn green and they don't show a warning
 `Waiting: ContainerCreating` anymore.

+#### Setup Ingress and HTTPS
+
+Follow [this quick start guide](https://docs.cert-manager.io/en/latest/tutorials/acme/quick-start/index.html)
+and install cert-manager via helm and tiller:
+```
+$ kubectl create serviceaccount tiller --namespace=kube-system
+$ kubectl create clusterrolebinding tiller-admin --serviceaccount=kube-system:tiller --clusterrole=cluster-admin
+$ helm init --service-account=tiller
+$ helm repo update
+$ helm install stable/nginx-ingress
+$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.6/deploy/manifests/00-crds.yaml
+$ helm install --name cert-manager --namespace cert-manager stable/cert-manager
+```
+
+Create letsencrypt issuers. *Change the email address* in these files before
+running this command:
+```sh
+$ kubectl apply -f human-connection/https/
+```
+Create an ingress service in namespace `human-connection`. *Change the domain
+name* according to your needs:
+```sh
+$ kubectl apply -f human-connection/ingress/
+```
+Check that the ingress server is working correctly:
+```sh
+$ curl -kivL -H 'Host: <DOMAIN_NAME>' 'https://<IP_ADDRESS>'
+```
+If the response looks good, configure your domain registrar for the new IP
+address and the domain.
+
+Now let's get a valid HTTPS certificate. According to the tutorial above, check
+your tls certificate for staging:
+```sh
+$ kubectl describe --namespace=human-connection certificate tls
+$ kubectl describe --namespace=human-connection secret tls
+```
+
+If everything looks good, update the issuer of your ingress. Change the
+annotation `certmanager.k8s.io/issuer` from `letsencrypt-staging` to
+`letsencrypt-prod` in your ingress configuration in
+`human-connection/ingress/ingress.yaml`.
+
+```sh
+$ kubectl apply -f human-connection/ingress/ingress.yaml
+```
+Delete the former secret to force a refresh:
+```
+$ kubectl --namespace=human-connection delete secret tls
+```
+Now, HTTPS should be configured on your domain. Congrats.
+
 #### Legacy data migration

 This setup is completely optional and only required if you have data on a server
@@ -119,7 +167,7 @@ import the uploads folder and migrate a dump of mongodb into neo4j.
 Create a configmap with the specific connection data of your legacy server:
 ```sh
 $ kubectl create configmap db-migration-worker \
-  --namespace=staging \
+  --namespace=human-connection \
   --from-literal=SSH_USERNAME=someuser \
   --from-literal=SSH_HOST=yourhost \
   --from-literal=MONGODB_USERNAME=hc-api \
@@ -127,36 +175,37 @@ $ kubectl create configmap db-migration-worker \
   --from-literal=MONGODB_AUTH_DB=hc_api \
   --from-literal=MONGODB_DATABASE=hc_api \
   --from-literal=UPLOADS_DIRECTORY=/var/www/api/uploads \
-  --from-literal=NEO4J_URI=bolt://neo4j:7687
+  --from-literal=NEO4J_URI=bolt://localhost:7687
 ```
-Create a secret with your public and private ssh keys:
+
+Create a secret with your public and private ssh keys. As the
+[kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys)
+points out, you should be careful with your ssh keys. Anyone with access to your
+cluster will have access to your ssh keys. Better create a new pair with
+`ssh-keygen` and copy the public key to your legacy server with `ssh-copy-id`:
+
 ```sh
 $ kubectl create secret generic ssh-keys \
-  --namespace=staging \
+  --namespace=human-connection \
   --from-file=id_rsa=/path/to/.ssh/id_rsa \
   --from-file=id_rsa.pub=/path/to/.ssh/id_rsa.pub \
   --from-file=known_hosts=/path/to/.ssh/known_hosts
 ```
-As the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-pod-with-ssh-keys)
-points out, you should be careful with your ssh keys. Anyone with access to your
-cluster will have access to your ssh keys. Better create a new pair with
-`ssh-keygen` and copy the public key to your legacy server with `ssh-copy-id`.

 ##### Migrate legacy database
 Patch the existing deployments to use a multi-container setup:
 ```bash
 cd legacy-migration
 kubectl apply -f volume-claim-mongo-export.yaml
-kubectl patch --namespace=staging deployment nitro-backend --patch "$(cat deployment-backend.yaml)"
-kubectl patch --namespace=staging deployment nitro-neo4j --patch "$(cat deployment-neo4j.yaml)"
+kubectl patch --namespace=human-connection deployment nitro-backend --patch "$(cat deployment-backend.yaml)"
+kubectl patch --namespace=human-connection deployment nitro-neo4j --patch "$(cat deployment-neo4j.yaml)"
 cd ..
 ```

 Run the migration:
 ```shell
-$ kubectl --namespace=staging get pods
+$ kubectl --namespace=human-connection get pods
 # change <POD_IDs> below
-$ kubectl --namespace=staging exec -it nitro-neo4j-65bbdb597c-nc2lv migrate
-$ kubectl --namespace=staging exec -it nitro-backend-c6cc5ff69-8h96z sync_uploads
+$ kubectl --namespace=human-connection exec -it nitro-neo4j-65bbdb597c-nc2lv migrate
+$ kubectl --namespace=human-connection exec -it nitro-backend-c6cc5ff69-8h96z sync_uploads
 ```
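The `ssh-keygen` advice in the hunk above can be sketched as follows. This is a hypothetical example (the key path and comment are placeholders, and `someuser@yourhost` matches the configmap values only by convention):

```shell
# Create a dedicated key pair for the migration worker in a scratch directory,
# so the cluster never holds your personal SSH key.
keydir="$(mktemp -d)"
ssh-keygen -q -t rsa -b 4096 -N '' -C 'db-migration-worker' -f "$keydir/id_rsa"

# Then authorize the new public key on the legacy server and pin its host key
# (requires network access to the legacy server, so shown as comments):
#   ssh-copy-id -i "$keydir/id_rsa.pub" someuser@yourhost
#   ssh-keyscan yourhost > "$keydir/known_hosts"

ls "$keydir"   # id_rsa and id_rsa.pub, ready for `kubectl create secret`
```

The resulting files are what the `--from-file=` flags of the `ssh-keys` secret above point at.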
@@ -3,7 +3,7 @@
 apiVersion: v1
 metadata:
   name: nitro-db-migration-worker
-  namespace: staging
+  namespace: human-connection
 spec:
   volumes:
     - name: secret-volume
@@ -30,7 +30,7 @@
 apiVersion: v1
 metadata:
   name: mongo-export-claim
-  namespace: staging
+  namespace: human-connection
 spec:
   accessModes:
     - ReadWriteOnce
human-connection/configmap.yaml (new file, 15 lines)
@@ -0,0 +1,15 @@
+---
+apiVersion: v1
+kind: ConfigMap
+data:
+  GRAPHQL_PORT: "4000"
+  GRAPHQL_URI: "http://nitro-backend.human-connection:4000"
+  MOCK: "false"
+  NEO4J_URI: "bolt://nitro-neo4j.human-connection:7687"
+  NEO4J_USER: "neo4j"
+  NEO4J_AUTH: none
+  CLIENT_URI: "https://nitro-human-connection.human-connection.org"
+  MAPBOX_TOKEN: pk.eyJ1IjoiaHVtYW4tY29ubmVjdGlvbiIsImEiOiJjajl0cnBubGoweTVlM3VwZ2lzNTNud3ZtIn0.KZ8KK9l70omjXbEkkbHGsQ
+metadata:
+  name: configmap
+  namespace: human-connection
@@ -3,18 +3,18 @@
 kind: Deployment
 metadata:
   name: nitro-backend
-  namespace: staging
+  namespace: human-connection
 spec:
-  replicas: 2
+  replicas: 1
   minReadySeconds: 15
   progressDeadlineSeconds: 60
   selector:
     matchLabels:
-      workload.user.cattle.io/workloadselector: deployment-staging-backend
+      human-connection.org/selector: deployment-human-connection-backend
   template:
     metadata:
       labels:
-        workload.user.cattle.io/workloadselector: deployment-staging-backend
+        human-connection.org/selector: deployment-human-connection-backend
       name: "nitro-backend"
     spec:
       containers:
@@ -31,33 +31,33 @@
         - name: CLIENT_URI
           valueFrom:
             configMapKeyRef:
-              name: staging-web
+              name: configmap
               key: CLIENT_URI
         - name: GRAPHQL_PORT
          valueFrom:
            configMapKeyRef:
-              name: staging-backend
+              name: configmap
              key: GRAPHQL_PORT
         - name: GRAPHQL_URI
           valueFrom:
             configMapKeyRef:
-              name: staging-backend
+              name: configmap
               key: GRAPHQL_URI
         - name: MAPBOX_TOKEN
           valueFrom:
             configMapKeyRef:
-              name: staging-web
+              name: configmap
               key: MAPBOX_TOKEN
         - name: JWT_SECRET
           valueFrom:
             secretKeyRef:
-              name: staging
+              name: secret
               key: JWT_SECRET
               optional: false
         - name: NEO4J_URI
           valueFrom:
             configMapKeyRef:
-              name: staging-neo4j
+              name: configmap
               key: NEO4J_URI
       volumeMounts:
         - mountPath: /nitro-backend/public/uploads
@@ -74,10 +74,10 @@
 apiVersion: v1
 metadata:
   name: uploads-claim
-  namespace: staging
+  namespace: human-connection
 spec:
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
-      storage: 10Gi
+      storage: 2Gi
@@ -3,17 +3,17 @@
 kind: Deployment
 metadata:
   name: nitro-neo4j
-  namespace: staging
+  namespace: human-connection
 spec:
   replicas: 1
   strategy: {}
   selector:
     matchLabels:
-      workload.user.cattle.io/workloadselector: deployment-staging-neo4j
+      human-connection.org/selector: deployment-human-connection-neo4j
   template:
     metadata:
       labels:
-        workload.user.cattle.io/workloadselector: deployment-staging-neo4j
+        human-connection.org/selector: deployment-human-connection-neo4j
       name: nitro-neo4j
     spec:
       containers:
@@ -34,17 +34,17 @@
         - name: NEO4J_URI
           valueFrom:
             configMapKeyRef:
-              name: staging-neo4j
+              name: configmap
               key: NEO4J_URI
         - name: NEO4J_USER
           valueFrom:
             configMapKeyRef:
-              name: staging-neo4j
+              name: configmap
               key: NEO4J_USER
         - name: NEO4J_AUTH
           valueFrom:
             configMapKeyRef:
-              name: staging-neo4j
+              name: configmap
               key: NEO4J_AUTH
       ports:
         - containerPort: 7687
@@ -63,10 +63,10 @@
 apiVersion: v1
 metadata:
   name: neo4j-data-claim
-  namespace: staging
+  namespace: human-connection
 spec:
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
-      storage: 4Gi
+      storage: 1Gi
@@ -2,18 +2,18 @@ apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: nitro-web
-  namespace: staging
+  namespace: human-connection
 spec:
   replicas: 2
   minReadySeconds: 15
   progressDeadlineSeconds: 60
   selector:
     matchLabels:
-      workload.user.cattle.io/workloadselector: deployment-staging-web
+      human-connection.org/selector: deployment-human-connection-web
   template:
     metadata:
       labels:
-        workload.user.cattle.io/workloadselector: deployment-staging-web
+        human-connection.org/selector: deployment-human-connection-web
       name: nitro-web
     spec:
       containers:
@@ -26,17 +26,17 @@ spec:
         - name: BACKEND_URL
           valueFrom:
             configMapKeyRef:
-              name: staging-backend
+              name: configmap
               key: GRAPHQL_URI
         - name: MAPBOX_TOKEN
           valueFrom:
             configMapKeyRef:
-              name: staging-web
+              name: configmap
               key: MAPBOX_TOKEN
         - name: JWT_SECRET
           valueFrom:
             secretKeyRef:
-              name: staging
+              name: secret
               key: JWT_SECRET
               optional: false
       image: humanconnection/nitro-web:latest
human-connection/https/issuer.yaml (new file, 34 lines)
@@ -0,0 +1,34 @@
+---
+apiVersion: certmanager.k8s.io/v1alpha1
+kind: Issuer
+metadata:
+  name: letsencrypt-staging
+  namespace: human-connection
+spec:
+  acme:
+    # The ACME server URL
+    server: https://acme-staging-v02.api.letsencrypt.org/directory
+    # Email address used for ACME registration
+    email: user@example.com
+    # Name of a secret used to store the ACME account private key
+    privateKeySecretRef:
+      name: letsencrypt-staging
+    # Enable the HTTP-01 challenge provider
+    http01: {}
+---
+apiVersion: certmanager.k8s.io/v1alpha1
+kind: Issuer
+metadata:
+  name: letsencrypt-prod
+  namespace: human-connection
+spec:
+  acme:
+    # The ACME server URL
+    server: https://acme-v02.api.letsencrypt.org/directory
+    # Email address used for ACME registration
+    email: user@example.com
+    # Name of a secret used to store the ACME account private key
+    privateKeySecretRef:
+      name: letsencrypt-prod
+    # Enable the HTTP-01 challenge provider
+    http01: {}
human-connection/ingress/ingress.yaml (new file, 22 lines)
@@ -0,0 +1,22 @@
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: ingress
+  namespace: human-connection
+  annotations:
+    kubernetes.io/ingress.class: "nginx"
+    certmanager.k8s.io/issuer: "letsencrypt-staging"
+    certmanager.k8s.io/acme-challenge-type: http01
+spec:
+  tls:
+    - hosts:
+        - nitro-master.human-connection.org
+      secretName: tls
+  rules:
+    - host: nitro-master.human-connection.org
+      http:
+        paths:
+          - path: /
+            backend:
+              serviceName: nitro-web
+              servicePort: 3000
human-connection/service-backend.yaml (new file, 14 lines)
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: nitro-backend
+  namespace: human-connection
+  labels:
+    human-connection.org/selector: deployment-human-connection-backend
+spec:
+  ports:
+    - name: web
+      port: 4000
+      targetPort: 4000
+  selector:
+    human-connection.org/selector: deployment-human-connection-backend
@@ -2,9 +2,9 @@ apiVersion: v1
 kind: Service
 metadata:
   name: nitro-neo4j
-  namespace: staging
+  namespace: human-connection
   labels:
-    workload.user.cattle.io/workloadselector: deployment-staging-neo4j
+    human-connection.org/selector: deployment-human-connection-neo4j
 spec:
   ports:
     - name: bolt
@@ -14,4 +14,4 @@ spec:
       port: 7474
       targetPort: 7474
   selector:
-    workload.user.cattle.io/workloadselector: deployment-staging-neo4j
+    human-connection.org/selector: deployment-human-connection-neo4j
human-connection/service-web.yaml (new file, 14 lines)
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: nitro-web
+  namespace: human-connection
+  labels:
+    human-connection.org/selector: deployment-human-connection-web
+spec:
+  ports:
+    - name: web
+      port: 3000
+      targetPort: 3000
+  selector:
+    human-connection.org/selector: deployment-human-connection-web
@@ -3,7 +3,7 @@
 kind: Deployment
 metadata:
   name: nitro-backend
-  namespace: staging
+  namespace: human-connection
 spec:
   template:
     spec:
@@ -3,7 +3,7 @@
 kind: Deployment
 metadata:
   name: nitro-neo4j
-  namespace: staging
+  namespace: human-connection
 spec:
   template:
     spec:
@@ -3,7 +3,7 @@
 apiVersion: v1
 metadata:
   name: mongo-export-claim
-  namespace: staging
+  namespace: human-connection
 spec:
   accessModes:
     - ReadWriteOnce
namespace-human-connection.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
+kind: Namespace
+apiVersion: v1
+metadata:
+  name: human-connection
+  labels:
+    name: human-connection
@ -1,6 +0,0 @@
|
||||
kind: Namespace
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: staging
|
||||
labels:
|
||||
name: staging
|
||||
@@ -4,5 +4,5 @@ data:
   JWT_SECRET: "Yi8mJjdiNzhCRiZmdi9WZA=="
   MONGODB_PASSWORD: "TU9OR09EQl9QQVNTV09SRA=="
 metadata:
-  name: staging
-  namespace: staging
+  name: human-connection
+  namespace: human-connection
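When renaming this secret from `staging` to `human-connection`, the encoded values carry over unchanged; a quick round-trip sketch (assuming GNU coreutils `base64`) shows how to inspect them before editing:

```shell
# Decode an encoded value from the secret above, then re-encode it and
# confirm the round-trip matches — a cheap validity check before editing.
jwt='Yi8mJjdiNzhCRiZmdi9WZA=='
printf '%s' "$jwt" | base64 -d > /dev/null        # decodes without error
[ "$(printf '%s' "$jwt" | base64 -d | base64)" = "$jwt" ] && echo 'round-trip ok'
```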
@ -1,29 +0,0 @@
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
data:
|
||||
GRAPHQL_PORT: "4000"
|
||||
GRAPHQL_URI: "http://nitro-backend.staging:4000"
|
||||
MOCK: "false"
|
||||
metadata:
|
||||
name: staging-backend
|
||||
namespace: staging
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
data:
|
||||
NEO4J_URI: "bolt://nitro-neo4j.staging:7687"
|
||||
NEO4J_USER: "neo4j"
|
||||
NEO4J_AUTH: none
|
||||
metadata:
|
||||
name: staging-neo4j
|
||||
namespace: staging
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
data:
|
||||
CLIENT_URI: "https://nitro-staging.human-connection.org"
|
||||
MAPBOX_TOKEN: pk.eyJ1IjoiaHVtYW4tY29ubmVjdGlvbiIsImEiOiJjajl0cnBubGoweTVlM3VwZ2lzNTNud3ZtIn0.KZ8KK9l70omjXbEkkbHGsQ
|
||||
metadata:
|
||||
name: staging-web
|
||||
namespace: staging
|
||||
@ -1,14 +0,0 @@
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: nitro-backend
|
||||
namespace: staging
|
||||
labels:
|
||||
workload.user.cattle.io/workloadselector: deployment-staging-backend
|
||||
spec:
|
||||
ports:
|
||||
- name: web
|
||||
port: 4000
|
||||
targetPort: 4000
|
||||
selector:
|
||||
workload.user.cattle.io/workloadselector: deployment-staging-backend
|
||||
@ -1,16 +0,0 @@
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: nitro-web
|
||||
namespace: staging
|
||||
labels:
|
||||
workload.user.cattle.io/workloadselector: deployment-staging-web
|
||||
spec:
|
||||
ports:
|
||||
- name: web
|
||||
port: 3000
|
||||
targetPort: 3000
|
||||
selector:
|
||||
workload.user.cattle.io/workloadselector: deployment-staging-web
|
||||
type: LoadBalancer
|
||||
externalTrafficPolicy: Cluster
|
||||