mirror of
https://github.com/Ocelot-Social-Community/Ocelot-Social.git
synced 2025-12-13 07:46:06 +00:00
Merge branch 'master' of https://github.com/Ocelot-Social-Community/Ocelot-Social into 4012-default-admin
This commit is contained in:
commit
6ee23eda48
2
.github/ISSUE_TEMPLATE.md
vendored
@@ -1,5 +1,5 @@
<!--
Please take a look at the issue templates at https://github.com/Human-Connection/Human-Connection/issues/new/choose
Please take a look at the issue templates at https://github.com/Ocelot-Social-Community/Ocelot-Social/issues/new/choose
before submitting a new issue. Following one of the issue templates will ensure maintainers can route your request efficiently.

Thanks!
@@ -34,7 +34,7 @@ This Code of Conduct applies both within project spaces and in public spaces whe

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at developer@human-connection.org. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at devops@ocelot.social. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
@@ -6,7 +6,7 @@ Thank you so much for thinking of contributing to the Human Connection project!

Instructions for how to install all the necessary software and some code guidelines can be found in our [documentation](https://docs.human-connection.org/human-connection/).

To get you started we recommend that you join forces with a regular contributor. Please join [our discord instance](https://human-connection.org/discord) to chat with developers or just get in touch directly on an issue on either [Github](https://github.com/Human-Connection/Human-Connection/issues) or [Zenhub](https://app.zenhub.com/workspaces/human-connection-nitro-5c0154ecc699f60fc92cf11f/boards?repos=152252353):
To get you started we recommend that you join forces with a regular contributor. Please join [our discord instance](https://human-connection.org/discord) to chat with developers or just get in touch directly on an issue on either [Github](https://github.com/Ocelot-Social-Community/Ocelot-Social/issues) or [Zenhub](https://app.zenhub.com/workspaces/ocelotsocial-5fb21ff922cb410015dd6535/board?filterLogic=any&repos=301151089):

![Zenhub Board](../.gitbook/assets/zenhub-board.png)
@@ -14,7 +14,7 @@ We also have regular pair programming sessions that you are very welcome to join

## Development Flow

We operate in two week sprints that are planned, estimated and prioritised on [Zenhub](https://app.zenhub.com/workspaces/human-connection-nitro-5c0154ecc699f60fc92cf11f). All issues are also linked to and synced with [Github](https://github.com/Human-Connection/Human-Connection/issues). Look for the `good first issue` label if you're not sure where to start!
We operate in two week sprints that are planned, estimated and prioritised on [Zenhub](https://app.zenhub.com/workspaces/ocelotsocial-5fb21ff922cb410015dd6535/board?filterLogic=any&repos=301151089). All issues are also linked to and synced with [Github](https://github.com/Ocelot-Social-Community/Ocelot-Social/issues). Look for the `good first issue` label if you're not sure where to start!

We try to discuss all questions directly related to a feature or bug in the respective issue, in order to preserve it for the future and for other developers. We use discord for real-time communication.
@@ -82,6 +82,7 @@ Sprint planning

* we select and prioritise the issues we will work on in the following two weeks

Sprint retrospective

* bi-weekly on Monday 13:00
* via this [zoom link](https://zoom.us/j/7743582385)
* all contributors welcome (most interesting for those who participated in the sprint)
@@ -90,6 +91,7 @@ Sprint retrospective
## Philosophy

We practise [collective code ownership](http://www.extremeprogramming.org/rules/collective.html) rather than strong code ownership, which means that:

* developers can make contributions to other people's PRs (after checking in with them)
* we avoid blocking because someone else isn't working, so we sometimes take over PRs from other developers
* everyone should always push their code to branches so others can see it
@@ -104,6 +106,7 @@ As a volunteer you have no commitment except your own self development and your
## Open-Source Bounties

There are so many good reasons to contribute to Human Connection

* You learn state-of-the-art technologies
* You build your portfolio
* You contribute to a good cause
@@ -120,7 +123,7 @@ team, learning the workflow, and understanding this contribution guide. You can
filter issues by 'good first issue', to get an idea where to start. Please join
our [community chat](https://human-connection.org/discord), too.

You can filter Github issues with label [bounty](https://github.com/Human-Connection/Human-Connection/issues?q=is%3Aopen+is%3Aissue+label%3Abounty). These issues should have a second label `€<amount>`
You can filter Github issues with label [bounty](https://github.com/Ocelot-Social-Community/Ocelot-Social/issues?q=is%3Aopen+is%3Aissue+label%3Abounty). These issues should have a second label `€<amount>`
which indicates their respective financial compensation in Euros.

You can bill us after your pull request has been approved and merged into `master`.
@@ -22,10 +22,10 @@
* [Digital Ocean](deployment/digital-ocean/README.md)
* [Kubernetes Dashboard](deployment/digital-ocean/dashboard/README.md)
* [HTTPS](deployment/digital-ocean/https/README.md)
* [Human Connection](deployment/human-connection/README.md)
* [Error Reporting](deployment/human-connection/error-reporting/README.md)
* [Mailserver](deployment/human-connection/mailserver/README.md)
* [Maintenance](deployment/human-connection/maintenance/README.md)
* [ocelot.social](deployment/ocelot-social/README.md)
* [Error Reporting](deployment/ocelot-social/error-reporting/README.md)
* [Mailserver](deployment/ocelot-social/mailserver/README.md)
* [Maintenance](deployment/ocelot-social/maintenance/README.md)
* [Volumes](deployment/volumes/README.md)
* [Neo4J Offline-Backups](deployment/volumes/neo4j-offline-backup/README.md)
* [Neo4J Online-Backups](deployment/volumes/neo4j-online-backup/README.md)
@@ -17,8 +17,13 @@ Wait a little until your backend is up and running at [http://localhost:4000/](h

## Installation without Docker

For the local installation you need a recent version of [node](https://nodejs.org/en/)
(>= `v10.12.0`).
For the local installation you need a recent version of
[node](https://nodejs.org/en/) (>= `v10.12.0`). We are using
`12.19.0` and therefore recommend using the same version
(see [some known problems with more recent node versions](https://github.com/Ocelot-Social-Community/Ocelot-Social/issues/4082)).
You can use the [node version manager](https://github.com/nvm-sh/nvm)
to switch between different local node versions.

Install node dependencies with [yarn](https://yarnpkg.com/en/):

```bash
@@ -16,7 +16,7 @@ The following features will be implemented. This gets done in three steps:

### User Account

[Cucumber Features](https://github.com/Human-Connection/Human-Connection/tree/master/cypress/integration/user_account)
[Cucumber Features](https://github.com/Ocelot-Social-Community/Ocelot-Social/tree/master/cypress/integration/user_account)

* Sign-up
* Agree to Data Privacy Statement
@@ -34,7 +34,7 @@ The following features will be implemented. This gets done in three steps:

### User Profile

[Cucumber Features](https://github.com/Human-Connection/Human-Connection/tree/master/cypress/integration/user_profile)
[Cucumber Features](https://github.com/Ocelot-Social-Community/Ocelot-Social/tree/master/cypress/integration/user_profile)

* Upload and Change Avatar
* Upload and Change Profile Picture
@@ -59,7 +59,7 @@ The following features will be implemented. This gets done in three steps:

### Posts

[Cucumber Features](https://github.com/Human-Connection/Human-Connection/tree/master/cypress/integration/post)
[Cucumber Features](https://github.com/Ocelot-Social-Community/Ocelot-Social/tree/master/cypress/integration/post)

* Creating Posts
* Persistent Links
@@ -84,7 +84,7 @@ The following features will be implemented. This gets done in three steps:
* Upvote comments of others

### Notifications
[Cucumber features](https://github.com/Human-Connection/Human-Connection/tree/master/cypress/integration/notifications)
[Cucumber features](https://github.com/Ocelot-Social-Community/Ocelot-Social/tree/master/cypress/integration/notifications)

* User @-mentionings
* Notify authors for comments
@@ -116,7 +116,7 @@ The following features will be implemented. This gets done in three steps:

### Search

[Cucumber Features](https://github.com/Human-Connection/Human-Connection/tree/master/cypress/integration/search)
[Cucumber Features](https://github.com/Ocelot-Social-Community/Ocelot-Social/tree/master/cypress/integration/search)

* Search for Categories
* Search for Tags
@@ -237,7 +237,7 @@ Shows automatically related actions for existing post.

### Moderation

[Cucumber Features](https://github.com/Human-Connection/Human-Connection/tree/master/cypress/integration/moderation)
[Cucumber Features](https://github.com/Ocelot-Social-Community/Ocelot-Social/tree/master/cypress/integration/moderation)

* Report Button for users for doubtful Content
* Moderator Panel
@@ -249,7 +249,7 @@ Shows automatically related actions for existing post.

### Administration

[Cucumber Features](https://github.com/Human-Connection/Human-Connection/tree/master/cypress/integration/administration)
[Cucumber Features](https://github.com/Ocelot-Social-Community/Ocelot-Social/tree/master/cypress/integration/administration)

* Provide Admin-Interface to send Users Invite Code
* Static Pages for Data Privacy Statement ...
@@ -264,7 +264,7 @@ Shows automatically related actions for existing post.

### Internationalization

[Cucumber Features](https://github.com/Human-Connection/Human-Connection/tree/master/cypress/integration/internationalization)
[Cucumber Features](https://github.com/Ocelot-Social-Community/Ocelot-Social/tree/master/cypress/integration/internationalization)

* Frontend UI
* Backend Error Messages
@@ -1,10 +1,10 @@
# Human-Connection Nitro \| Deployment Configuration
# ocelot.social \| Deployment Configuration

There are a couple different ways we have tested to deploy an instance of Human Connection, with [kubernetes](https://kubernetes.io/) and via [Helm](https://helm.sh/docs/). In order to manage your own
network, you have to [install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/), [install Helm](https://helm.sh/docs/intro/install/) (optional, but the preferred way),
and set up a kubernetes cluster. Since there are many different options to host your cluster, we won't go into specifics here.
There are a couple of different ways we have tested to deploy an instance of ocelot.social, with [Kubernetes](https://kubernetes.io/) and via [Helm](https://helm.sh/docs/). In order to manage your own
network, you have to [install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/), [install Helm](https://helm.sh/docs/intro/install/) (optional, but the preferred way),
and set up a Kubernetes cluster. Since there are many different options to host your cluster, we won't go into specifics here.

We have tested two different kubernetes providers: [Minikube](./minikube/README.md)
We have tested two different Kubernetes providers: [Minikube](./minikube/README.md)
and [Digital Ocean](./digital-ocean/README.md).

Check out the specific documentation for your provider. After that, choose whether you want to go with the recommended deploy option [Helm](./helm/README.md), or use kubernetes to apply the configuration for [Human Connection](./human-connection/README.md).
Check out the specific documentation for your provider. After that, choose whether you want to go with the recommended deploy option [Helm](./helm/README.md), or use Kubernetes to apply the configuration for [ocelot.social](./ocelot-social/README.md).
@@ -1,6 +1,6 @@
# Digital Ocean

As a start, read the [introduction into kubernetes](https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes) by the folks at Digital Ocean. The following section should enable you to deploy Human Connection to your kubernetes cluster.
As a start, read the [introduction to Kubernetes](https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes) by the folks at Digital Ocean. The following section should enable you to deploy ocelot.social to your Kubernetes cluster.

## Connect to your local cluster
@@ -10,7 +10,8 @@ As a start, read the [introduction into kubernetes](https://www.digitalocean.com
4. Now check if you can connect to the cluster and if it's your newly created one by running: `kubectl get nodes`

The output should look something like this:
```
```sh
$ kubectl get nodes
NAME                  STATUS  ROLES   AGE  VERSION
nifty-driscoll-uu1w   Ready   <none>  69d  v1.13.2
@@ -20,8 +21,8 @@ nifty-driscoll-uusn   Ready   <none>  69d  v1.13.2

If you got the steps right above and see your nodes you can continue.

Digital Ocean kubernetes clusters don't have a graphical interface, so I suggest
to setup the [kubernetes dashboard](./dashboard/README.md) as a next step.
Digital Ocean Kubernetes clusters don't have a graphical interface, so I suggest
to set up the [Kubernetes dashboard](./dashboard/README.md) as a next step.
Configuring [HTTPS](./https/README.md) is a bit tricky and therefore I suggest to
do this as a last step.
@@ -29,10 +30,10 @@ do this as a last step.

We are storing our images in the s3-compatible [DigitalOcean Spaces](https://www.digitalocean.com/docs/spaces/).

We still want to take backups of our images in case something happens to the images in the cloud. See these [instructions](https://www.digitalocean.com/docs/spaces/resources/s3cmd-usage/) about getting set up with `s3cmd` to take a copy of all images in a `Spaces` namespace, ie `human-connection-uploads`.
We still want to take backups of our images in case something happens to the images in the cloud. See these [instructions](https://www.digitalocean.com/docs/spaces/resources/s3cmd-usage/) about getting set up with `s3cmd` to take a copy of all images in a `Spaces` namespace, i.e. `ocelot-social-uploads`.

After configuring `s3cmd` with your credentials, etc. you should be able to make a backup with this command.

```sh
s3cmd get --recursive s3://human-connection-uploads --skip-existing
```
s3cmd get --recursive --skip-existing s3://ocelot-social-uploads
```
@@ -1,6 +1,26 @@
# Setup Ingress and HTTPS

Follow [this quick start guide](https://docs.cert-manager.io/en/latest/tutorials/acme/quick-start/index.html) and install certmanager via helm and tiller:
{% tabs %}
{% tab title="Helm 3" %}

## Via Helm 3

Follow [this quick start guide](https://cert-manager.io/docs/) and install certmanager via Helm 3:

## Or Via Kubernetes Directly

```bash
$ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.1.0/cert-manager.yaml
```

{% endtab %}
{% tab title="Helm 2" %}

{% hint style="info" %}
CAUTION: Tiller from Helm 2 was [removed](https://helm.sh/docs/faq/#removal-of-tiller) in Helm 3 because of security issues, so we recommend Helm 3.
{% endhint %}

Follow [this quick start guide](https://docs.cert-manager.io/en/latest/tutorials/acme/quick-start/index.html) and install certmanager via Helm 2 and tiller:
[This resource was also helpful](https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html#installing-with-helm)

```bash
@@ -13,6 +33,9 @@ $ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/relea
$ helm install --name cert-manager --namespace cert-manager --version v0.11.0 jetstack/cert-manager
```

{% endtab %}
{% endtabs %}
## Create Letsencrypt Issuers and Ingress Services

Copy the configuration templates and change the file according to your needs.
@@ -33,15 +56,40 @@ Once you are done, apply the configuration:
$ kubectl apply -f .
```
By now, your cluster should have a load balancer assigned with an external IP
address. On Digital Ocean, this is how it should look like:
{% hint style="info" %}
CAUTION: It seems that the behaviour of Digital Ocean has changed and the load balancer is not created automatically anymore.
Creating a load balancer costs money. Please refine the following documentation if required.
{% endhint %}

{% tabs %}
{% tab title="Without Load Balancer" %}

A solution without a load balancer can be found [here](../no-loadbalancer/README.md).

{% endtab %}
{% tab title="With Digital Ocean Load Balancer" %}

{% hint style="info" %}
CAUTION: It seems that the behaviour of Digital Ocean has changed and the load balancer is not created automatically anymore.
Please refine the following documentation if required.
{% endhint %}

In earlier days, your cluster would by now have a load balancer assigned with an external IP
address. On Digital Ocean, this is how it should look:

![DO dashboard](../../../.gitbook/assets/do-loadbalancer.png)

If the load balancer isn't created automatically you have to create it yourself on Digital Ocean under Networks.
In case you don't need a Digital Ocean load balancer (which costs money, by the way) have a look at the tab `Without Load Balancer`.

{% endtab %}
{% endtabs %}

Check that the ingress server is working correctly:

```bash
$ curl -kivL -H 'Host: <DOMAIN_NAME>' 'https://<IP_ADDRESS>'
<page HTML>
```

If the response looks good, configure your domain registrar for the new IP address and the domain.
@@ -49,21 +97,68 @@ If the response looks good, configure your domain registrar for the new IP addre

Now let's get a valid HTTPS certificate. According to the tutorial above, check your tls certificate for staging:

```bash
$ kubectl describe --namespace=human-connection certificate tls
$ kubectl describe --namespace=human-connection secret tls
$ kubectl -n ocelot-social describe certificate tls
<
...
Spec:
  ...
  Issuer Ref:
    Group:  cert-manager.io
    Kind:   ClusterIssuer
    Name:   letsencrypt-staging
...
Events:
  <no errors>
>
$ kubectl -n ocelot-social describe secret tls
<
...
Annotations: ...
  cert-manager.io/issuer-kind: ClusterIssuer
  cert-manager.io/issuer-name: letsencrypt-staging
...
>
```

If everything looks good, update the issuer of your ingress. Change the annotation `certmanager.k8s.io/issuer` from `letsencrypt-staging` to `letsencrypt-prod` in your ingress configuration in `ingress.yaml`.
If everything looks good, update the cluster-issuer of your ingress. Change the annotation `cert-manager.io/cluster-issuer` from `letsencrypt-staging` (for testing with a dummy certificate, so Let's Encrypt will not block you for too many request cycles) to `letsencrypt-prod` (for production with a real certificate, where Let's Encrypt may block you for several days if you send too many requests) in your ingress configuration in `ingress.yaml`.
```bash
# in folder deployment/digital-ocean/https/
$ kubectl apply -f ingress.yaml
```

Delete the former secret to force a refresh:
Take a minute and check whether the certificate has now been generated by `letsencrypt-prod`, the cluster-issuer for production:

```text
$ kubectl --namespace=human-connection delete secret tls
```bash
$ kubectl -n ocelot-social describe certificate tls
<
...
Spec:
  ...
  Issuer Ref:
    Group:  cert-manager.io
    Kind:   ClusterIssuer
    Name:   letsencrypt-prod
...
Events:
  <no errors>
>
$ kubectl -n ocelot-social describe secret tls
<
...
Annotations: ...
  cert-manager.io/issuer-kind: ClusterIssuer
  cert-manager.io/issuer-name: letsencrypt-prod
...
>
```

Now, HTTPS should be configured on your domain. Congrats.
In case the certificate is not newly created, delete the former secret to force a refresh:

```bash
$ kubectl -n ocelot-social delete secret tls
```

Now, HTTPS should be configured on your domain. Congrats!

For troubleshooting, have a look at cert-manager's [Troubleshooting](https://cert-manager.io/docs/faq/troubleshooting/) or [Troubleshooting Issuing ACME Certificates](https://cert-manager.io/docs/faq/acme/).
@@ -1,6 +1,6 @@
kind: Namespace
apiVersion: v1
metadata:
  name: human-connection
  name: ocelot-social
  labels:
    name: human-connection
    name: ocelot-social
@@ -2,30 +2,31 @@ apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: human-connection
  namespace: ocelot-social
  annotations:
    kubernetes.io/ingress.class: "nginx"
    certmanager.k8s.io/issuer: "letsencrypt-staging"
    certmanager.k8s.io/acme-challenge-type: http01
    nginx.ingress.kubernetes.io/proxy-body-size: 6m
    # cert-manager.io/issuer: "letsencrypt-staging" # in case you are using issuers instead of cluster-issuers
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
spec:
  tls:
    - hosts:
        # - nitro-mailserver.human-connection.org
        - develop.human-connection.org
      secretName: tls
  rules:
    - host: develop.human-connection.org
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 3000
    - host: mailserver.human-connection.org
    - host: develop-k8s.ocelot.social
      http:
        paths:
          - path: /
            backend:
              serviceName: mailserver
              servicePort: 80
          - backend:
              serviceName: web
              servicePort: 3000
            path: /
  # uncomment if you have installed the mailservice
  # - host: mail.ocelot.social
  #   http:
  #     paths:
  #       - backend:
  #           serviceName: mailserver
  #           servicePort: 80
  #         path: /
  # uncomment to activate SSL via port 443 if you have installed the certificate, probably via the cert-manager
  # tls:
  #   - hosts:
  #       - develop-k8s.ocelot.social
  #     secretName: tls
@@ -1,34 +1,70 @@
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
  namespace: human-connection
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: user@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    http01: {}
# used during installation as a first setup for testing purposes, note 'server: https://acme-staging-v02…'
# !!! replace the e-mail for expiring certificates, see below !!!
# !!! create the used secret, see below !!!
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: ocelot-social
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: user@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging-issuer-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            class: nginx
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-prod
  namespace: human-connection
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: user@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    http01: {}
# used after installation for production, note 'server: https://acme-v02…'
# !!! replace the e-mail for expiring certificates, see below !!!
# !!! create the used secret, see below !!!
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: ocelot-social
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: user@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-prod-issuer-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            class: nginx
---
# fill in your letsencrypt-staging-issuer-account-key
# generate base 64: $ echo -n '<your data>' | base64
apiVersion: v1
data:
  tls.key: <your base 64 data>
kind: Secret
metadata:
  name: letsencrypt-staging-issuer-account-key
  namespace: ocelot-social
type: Opaque
---
# fill in your letsencrypt-prod-issuer-account-key
# generate base 64: $ echo -n '<your data>' | base64
apiVersion: v1
data:
  tls.key: <your base 64 data>
kind: Secret
metadata:
  name: letsencrypt-prod-issuer-account-key
  namespace: ocelot-social
type: Opaque
2
deployment/digital-ocean/no-loadbalancer/.gitignore
vendored
Normal file
@@ -0,0 +1,2 @@
mydns.values.yaml
myingress.values.yaml
9
deployment/digital-ocean/no-loadbalancer/README.md
Normal file
@@ -0,0 +1,9 @@
# Solution Without A Load Balancer

## Expose Port 80 On Digital Ocean's Managed Kubernetes Without A Load Balancer

Follow [this solution](https://stackoverflow.com/questions/54119399/expose-port-80-on-digital-oceans-managed-kubernetes-without-a-load-balancer/55968709) and install a second firewall, nginx, and use external DNS via Helm 3.

{% hint style="info" %}
CAUTION: Some of the Helm charts are already deprecated, so investigate the appropriate charts and fill in the correct commands here.
{% endhint %}
@@ -0,0 +1,11 @@
---
provider: digitalocean
digitalocean:
  # create the API token at https://cloud.digitalocean.com/account/api/tokens
  # needs read + write
  apiToken: "DIGITALOCEAN_API_TOKEN"
domainFilters:
  # domains you want external-dns to be able to edit
  - example.com
rbac:
  create: true
@@ -0,0 +1,11 @@
---
controller:
  kind: DaemonSet
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  daemonset:
    useHostPort: true
  service:
    type: ClusterIP
rbac:
  create: true
@@ -1,5 +1,5 @@
apiVersion: v1
appVersion: "0.3.1"
description: A Helm chart for Human Connection
name: human-connection
description: A Helm chart for ocelot.social
name: ocelot-social
version: 0.1.0
@@ -13,7 +13,7 @@ Probably you want to change this environment variable to your actual domain:

```bash
# in folder /deployment/helm
CLIENT_URI: "https://develop.human-connection.org"
CLIENT_URI: "https://develop-k8s.ocelot.social"
```

If you want to edit secrets, you have to `base64` encode them. See the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually). You can also use `helm-secrets`, but we have yet to test it.
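The `base64` encoding step mentioned above can be sketched as follows; the value `my-password` is only a placeholder for illustration:

```shell
# Encode a placeholder secret value for use in a kubernetes Secret manifest.
echo -n 'my-password' | base64
# → bXktcGFzc3dvcmQ=

# Decode it again to double-check what you pasted into the manifest.
echo -n 'bXktcGFzc3dvcmQ=' | base64 --decode
# → my-password
```

Note the `-n` flag: without it, `echo` appends a trailing newline that would become part of the encoded secret.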
@@ -5,7 +5,7 @@ metadata:
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: human-connection
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
@@ -5,7 +5,7 @@ metadata:
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/name: human-connection
    app.kubernetes.io/name: ocelot-social
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
@@ -5,7 +5,7 @@ metadata:
   labels:
     app.kubernetes.io/instance: {{ .Release.Name }}
     app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: human-connection
+    app.kubernetes.io/name: ocelot-social
     app.kubernetes.io/version: {{ .Chart.AppVersion }}
     helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
 spec:
@@ -18,15 +18,15 @@ spec:
       maxUnavailable: "100%"
   selector:
     matchLabels:
-      human-connection.org/selector: deployment-backend
+      ocelot.social/selector: deployment-backend
   template:
     metadata:
       name: deployment-backend
       annotations:
         backup.velero.io/backup-volumes: uploads
       labels:
-        human-connection.org/commit: {{ .Values.commit }}
-        human-connection.org/selector: deployment-backend
+        ocelot.social/commit: {{ .Values.commit }}
+        ocelot.social/selector: deployment-backend
     spec:
       containers:
         - name: backend

@@ -6,7 +6,7 @@ metadata:
   labels:
     app.kubernetes.io/instance: {{ .Release.Name }}
     app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: human-connection
+    app.kubernetes.io/name: ocelot-social
     app.kubernetes.io/version: {{ .Chart.AppVersion }}
     helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
 spec:
@@ -15,11 +15,11 @@ spec:
   progressDeadlineSeconds: 60
   selector:
     matchLabels:
-      human-connection.org/selector: deployment-mailserver
+      ocelot.social/selector: deployment-mailserver
   template:
     metadata:
       labels:
-        human-connection.org/selector: deployment-mailserver
+        ocelot.social/selector: deployment-mailserver
       name: mailserver
     spec:
       containers:

@@ -5,18 +5,18 @@ metadata:
   labels:
     app.kubernetes.io/instance: {{ .Release.Name }}
     app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: human-connection
+    app.kubernetes.io/name: ocelot-social
     app.kubernetes.io/version: {{ .Chart.AppVersion }}
     helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
 spec:
   selector:
     matchLabels:
-      human-connection.org/selector: deployment-maintenance
+      ocelot.social/selector: deployment-maintenance
   template:
     metadata:
       labels:
-        human-connection.org/commit: {{ .Values.commit }}
-        human-connection.org/selector: deployment-maintenance
+        ocelot.social/commit: {{ .Values.commit }}
+        ocelot.social/selector: deployment-maintenance
       name: maintenance
     spec:
       containers:

@@ -5,7 +5,7 @@ metadata:
   labels:
     app.kubernetes.io/instance: {{ .Release.Name }}
     app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: human-connection
+    app.kubernetes.io/name: ocelot-social
     app.kubernetes.io/version: {{ .Chart.AppVersion }}
     helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
 spec:
@@ -16,15 +16,15 @@ spec:
       maxUnavailable: "100%"
   selector:
     matchLabels:
-      human-connection.org/selector: deployment-neo4j
+      ocelot.social/selector: deployment-neo4j
   template:
     metadata:
       name: neo4j
       annotations:
         backup.velero.io/backup-volumes: neo4j-data
       labels:
-        human-connection.org/commit: {{ .Values.commit }}
-        human-connection.org/selector: deployment-neo4j
+        ocelot.social/commit: {{ .Values.commit }}
+        ocelot.social/selector: deployment-neo4j
     spec:
       containers:
         - name: neo4j

@@ -5,7 +5,7 @@ metadata:
   labels:
     app.kubernetes.io/instance: {{ .Release.Name }}
     app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: human-connection
+    app.kubernetes.io/name: ocelot-social
     app.kubernetes.io/version: {{ .Chart.AppVersion }}
     helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
 spec:
@@ -14,13 +14,13 @@ spec:
   progressDeadlineSeconds: 60
   selector:
     matchLabels:
-      human-connection.org/selector: deployment-webapp
+      ocelot.social/selector: deployment-webapp
   template:
     metadata:
       name: webapp
       labels:
-        human-connection.org/commit: {{ .Values.commit }}
-        human-connection.org/selector: deployment-webapp
+        ocelot.social/commit: {{ .Values.commit }}
+        ocelot.social/selector: deployment-webapp
     spec:
       containers:
        - name: webapp

@@ -5,7 +5,7 @@ metadata:
   labels:
     app.kubernetes.io/instance: {{ .Release.Name }}
     app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: human-connection
+    app.kubernetes.io/name: ocelot-social
     app.kubernetes.io/version: {{ .Chart.AppVersion }}
     helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
   annotations:

@@ -5,7 +5,7 @@ metadata:
   labels:
     app.kubernetes.io/instance: {{ .Release.Name }}
     app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: human-connection
+    app.kubernetes.io/name: ocelot-social
     app.kubernetes.io/version: {{ .Chart.AppVersion }}
     helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
   annotations:

@@ -5,7 +5,7 @@ metadata:
   labels:
     app.kubernetes.io/instance: {{ .Release.Name }}
     app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: human-connection
+    app.kubernetes.io/name: ocelot-social
     app.kubernetes.io/version: {{ .Chart.AppVersion }}
     helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
 spec:
@@ -14,4 +14,4 @@ spec:
       port: 4000
       targetPort: 4000
   selector:
-    human-connection.org/selector: deployment-backend
+    ocelot.social/selector: deployment-backend

@@ -6,7 +6,7 @@ metadata:
   labels:
     app.kubernetes.io/instance: {{ .Release.Name }}
     app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: human-connection
+    app.kubernetes.io/name: ocelot-social
     app.kubernetes.io/version: {{ .Chart.AppVersion }}
     helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
 spec:
@@ -18,5 +18,5 @@ spec:
       port: 25
       targetPort: 25
   selector:
-    human-connection.org/selector: deployment-mailserver
+    ocelot.social/selector: deployment-mailserver
 {{- end}}

@@ -5,7 +5,7 @@ metadata:
   labels:
     app.kubernetes.io/instance: {{ .Release.Name }}
     app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: human-connection
+    app.kubernetes.io/name: ocelot-social
     app.kubernetes.io/version: {{ .Chart.AppVersion }}
     helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
 spec:
@@ -14,4 +14,4 @@ spec:
       port: 80
       targetPort: 80
   selector:
-    human-connection.org/selector: deployment-maintenance
+    ocelot.social/selector: deployment-maintenance

@@ -5,7 +5,7 @@ metadata:
   labels:
     app.kubernetes.io/instance: {{ .Release.Name }}
     app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: human-connection
+    app.kubernetes.io/name: ocelot-social
     app.kubernetes.io/version: {{ .Chart.AppVersion }}
     helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
 spec:
@@ -17,4 +17,4 @@ spec:
       port: 7474
       targetPort: 7474
   selector:
-    human-connection.org/selector: deployment-neo4j
+    ocelot.social/selector: deployment-neo4j

@@ -5,7 +5,7 @@ metadata:
   labels:
     app.kubernetes.io/instance: {{ .Release.Name }}
     app.kubernetes.io/managed-by: {{ .Release.Service }}
-    app.kubernetes.io/name: human-connection
+    app.kubernetes.io/name: ocelot-social
     app.kubernetes.io/version: {{ .Chart.AppVersion }}
     helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
 spec:
@@ -15,4 +15,4 @@ spec:
       protocol: TCP
       targetPort: 3000
   selector:
-    human-connection.org/selector: deployment-webapp
+    ocelot.social/selector: deployment-webapp

@@ -1,14 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: maintenance
-  namespace: human-connection
-  labels:
-    human-connection.org/selector: deployment-human-connection-maintenance
-spec:
-  ports:
-    - name: web
-      port: 80
-      targetPort: 80
-  selector:
-    human-connection.org/selector: deployment-human-connection-maintenance
@@ -1,14 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: backend
-  namespace: human-connection
-  labels:
-    human-connection.org/selector: deployment-human-connection-backend
-spec:
-  ports:
-    - name: web
-      port: 4000
-      targetPort: 4000
-  selector:
-    human-connection.org/selector: deployment-human-connection-backend
@@ -1,14 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: web
-  namespace: human-connection
-  labels:
-    human-connection.org/selector: deployment-human-connection-web
-spec:
-  ports:
-    - name: web
-      port: 3000
-      targetPort: 3000
-  selector:
-    human-connection.org/selector: deployment-human-connection-web
@@ -1,21 +0,0 @@
----
-apiVersion: v1
-kind: ConfigMap
-data:
-  SMTP_HOST: "mailserver.human-connection"
-  SMTP_PORT: "25"
-  GRAPHQL_URI: "http://backend.human-connection:4000"
-  NEO4J_URI: "bolt://neo4j.human-connection:7687"
-  NEO4J_AUTH: "none"
-  CLIENT_URI: "https://staging.human-connection.org"
-  NEO4J_apoc_import_file_enabled: "true"
-  NEO4J_dbms_memory_pagecache_size: "490M"
-  NEO4J_dbms_memory_heap_max__size: "500M"
-  NEO4J_dbms_memory_heap_initial__size: "500M"
-  NEO4J_dbms_security_procedures_unrestricted: "algo.*,apoc.*"
-  SENTRY_DSN_WEBAPP: ""
-  SENTRY_DSN_BACKEND: ""
-  COMMIT: ""
-metadata:
-  name: configmap
-  namespace: human-connection
@@ -1,14 +0,0 @@
-apiVersion: v1
-kind: Secret
-data:
-  JWT_SECRET: "Yi8mJjdiNzhCRiZmdi9WZA=="
-  MONGODB_PASSWORD: "TU9OR09EQl9QQVNTV09SRA=="
-  PRIVATE_KEY_PASSPHRASE: "YTdkc2Y3OHNhZGc4N2FkODdzZmFnc2FkZzc4"
-  MAPBOX_TOKEN: "cGsuZXlKMUlqb2lhSFZ0WVc0dFkyOXVibVZqZEdsdmJpSXNJbUVpT2lKamFqbDBjbkJ1Ykdvd2VUVmxNM1Z3WjJsek5UTnVkM1p0SW4wLktaOEtLOWw3MG9talhiRWtrYkhHc1EK"
-  SMTP_USERNAME:
-  SMTP_PASSWORD:
-  NEO4J_USERNAME:
-  NEO4J_PASSWORD:
-metadata:
-  name: human-connection
-  namespace: human-connection
@@ -11,7 +11,7 @@ Create a configmap with the specific connection data of your legacy server:

 ```bash
 $ kubectl create configmap maintenance-worker \
-  --namespace=human-connection \
+  -n ocelot-social \
   --from-literal=SSH_USERNAME=someuser \
   --from-literal=SSH_HOST=yourhost \
   --from-literal=MONGODB_USERNAME=hc-api \
@@ -25,7 +25,7 @@ Create a secret with your public and private ssh keys. As the [kubernetes docume

 ```bash
 $ kubectl create secret generic ssh-keys \
-  --namespace=human-connection \
+  -n ocelot-social \
   --from-file=id_rsa=/path/to/.ssh/id_rsa \
   --from-file=id_rsa.pub=/path/to/.ssh/id_rsa.pub \
   --from-file=known_hosts=/path/to/.ssh/known_hosts
@@ -41,14 +41,14 @@ Bring the application into maintenance mode.

 Then temporarily delete backend and database deployments

 ```bash
-$ kubectl --namespace=human-connection get deployments
+$ kubectl -n ocelot-social get deployments
 NAME              READY   UP-TO-DATE   AVAILABLE   AGE
 develop-backend   1/1     1            1           3d11h
 develop-neo4j     1/1     1            1           3d11h
 develop-webapp    2/2     2            2           73d
-$ kubectl --namespace=human-connection delete deployment develop-neo4j
+$ kubectl -n ocelot-social delete deployment develop-neo4j
 deployment.extensions "develop-neo4j" deleted
-$ kubectl --namespace=human-connection delete deployment develop-backend
+$ kubectl -n ocelot-social delete deployment develop-backend
 deployment.extensions "develop-backend" deleted
 ```
@@ -63,7 +63,7 @@ pod/develop-maintenance-worker created
 Import legacy database and uploads:

 ```bash
-$ kubectl --namespace=human-connection exec -it develop-maintenance-worker bash
+$ kubectl -n ocelot-social exec -it develop-maintenance-worker bash
 $ import_legacy_db
 $ import_legacy_uploads
 $ exit
@@ -72,7 +72,7 @@ $ exit
 Delete the pod when you're done:

 ```bash
-$ kubectl --namespace=human-connection delete pod develop-maintenance-worker
+$ kubectl -n ocelot-social delete pod develop-maintenance-worker
 ```

 Oh, and of course you have to get those deleted deployments back. One way of

@@ -3,7 +3,7 @@
 apiVersion: v1
 metadata:
   name: develop-maintenance-worker
-  namespace: human-connection
+  namespace: ocelot-social
 spec:
   containers:
     - name: develop-maintenance-worker

@@ -11,7 +11,7 @@ $ minikube dashboard

 This will give you an overview. Some of the steps below need some timing to make resources available to other dependent deployments. Keeping an eye on the dashboard is a great way to check that.

-Follow the installation instruction for [Human Connection](../human-connection/README.md).
+Follow the installation instruction for [Human Connection](../ocelot-social/README.md).
 If all the pods and services have settled and everything looks green in your
 minikube dashboard, expose the services you want on your host system.

@@ -1,6 +1,6 @@
 kind: Namespace
 apiVersion: v1
 metadata:
-  name: human-connection
+  name: ocelot-social
   labels:
-    name: human-connection
+    name: ocelot-social

@@ -1,6 +1,6 @@
-# Kubernetes Configuration for Human Connection
+# Kubernetes Configuration For ocelot.social

-Deploying Human Connection with kubernetes is straight forward. All you have to
+Deploying *ocelot.social* with kubernetes is straight forward. All you have to
 do is to change certain parameters, like domain names and API keys, then you
 just apply our provided configuration files to your cluster.

@@ -9,22 +9,22 @@ just apply our provided configuration files to your cluster.

 Change into the `./deployment` directory and copy our provided templates:

 ```bash
-# in folder deployment/human-connection/
+# in folder deployment/ocelot-social/
 $ cp templates/secrets.template.yaml ./secrets.yaml
 $ cp templates/configmap.template.yaml ./configmap.yaml
 ```

-Change the `configmap.yaml` in the `./deployment/human-connection` directory as needed, all variables will be available as
-environment variables in your deployed kubernetes pods.
+Change the `configmap.yaml` in the `./deployment/ocelot-social` directory as needed, all variables will be available as
+environment variables in your deployed Kubernetes pods.

 Probably you want to change this environment variable to your actual domain:

-```
+```yaml
 # in configmap.yaml
-CLIENT_URI: "https://nitro-staging.human-connection.org"
+CLIENT_URI: "https://develop-k8s.ocelot.social"
 ```

-If you want to edit secrets, you have to `base64` encode them. See [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually).
+If you want to edit secrets, you have to `base64` encode them. See [Kubernetes Documentation](https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually).

 ```bash
 # example how to base64 a string:
@@ -33,35 +33,39 @@ YWRtaW4=
 ```

 Those secrets get `base64` decoded and are available as environment variables in
-your deployed kubernetes pods.
+your deployed Kubernetes pods.

-## Create a namespace
+## Create A Namespace

 ```bash
 # in folder deployment/
 $ kubectl apply -f namespace.yaml
 ```

-If you have a [kubernets dashboard](../digital-ocean/dashboard/README.md)
-deployed you should switch to namespace `human-connection` in order to
+If you have a [Kubernets Dashboard](../digital-ocean/dashboard/README.md)
+deployed you should switch to namespace `ocelot-social` in order to
 monitor the state of your deployments.

-## Create persistent volumes
+## Create Persistent Volumes

 While the deployments and services can easily be restored, simply by deleting
-and applying the kubernetes configurations again, certain data is not that
+and applying the Kubernetes configurations again, certain data is not that
 easily recovered. Therefore we separated persistent volumes from deployments
 and services. There is a [dedicated section](../volumes/README.md). Create those
 persistent volumes once before you apply the configuration.

-## Apply the configuration
+## Apply The Configuration

+Before you apply you should think about the size of the droplet(s) you need.
+For example, the requirements for Neo4j v3.5.14 are [here](https://neo4j.com/docs/operations-manual/3.5/installation/requirements/).
+Tips to configure the pod resources you find [here](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+
 ```bash
 # in folder deployment/
-$ kubectl apply -f human-connection/
+$ kubectl apply -f ocelot-social/
 ```

-This can take a while because kubernetes will download the docker images. Sit
+This can take a while, because Kubernetes will download the Docker images from Docker Hub. Sit
 back and relax and have a look into your kubernetes dashboard. Wait until all
 pods turn green and they don't show a warning `Waiting: ContainerCreating`
 anymore.
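The pod-resource tips referenced in this README come down to setting `requests` and `limits` per container; a hypothetical YAML fragment (the values are illustrative placeholders, not recommendations from this repository):

```yaml
# hypothetical fragment of a container spec inside a Deployment
resources:
  requests:
    memory: "1G"    # scheduler reserves at least this much
    cpu: "500m"
  limits:
    memory: "2G"    # container is killed if it exceeds this
    cpu: "1"
```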
@@ -3,10 +3,10 @@ kind: Deployment
 metadata:
   creationTimestamp: null
   labels:
-    human-connection.org/commit: COMMIT
-    human-connection.org/selector: deployment-human-connection-backend
+    ocelot.social/commit: COMMIT
+    ocelot.social/selector: deployment-ocelot-social-backend
   name: backend
-  namespace: human-connection
+  namespace: ocelot-social
 spec:
   minReadySeconds: 15
   progressDeadlineSeconds: 60
@@ -14,7 +14,7 @@ spec:
   revisionHistoryLimit: 2147483647
   selector:
     matchLabels:
-      human-connection.org/selector: deployment-human-connection-backend
+      ocelot.social/selector: deployment-ocelot-social-backend
   strategy:
     rollingUpdate:
       maxSurge: 0
@@ -26,8 +26,8 @@ spec:
         backup.velero.io/backup-volumes: uploads
       creationTimestamp: null
       labels:
-        human-connection.org/commit: COMMIT
-        human-connection.org/selector: deployment-human-connection-backend
+        ocelot.social/commit: COMMIT
+        ocelot.social/selector: deployment-ocelot-social-backend
       name: backend
     spec:
       containers:
@@ -35,9 +35,11 @@ spec:
         - configMapRef:
             name: configmap
         - secretRef:
-            name: human-connection
-        image: ocelotsocialnetwork/develop-backend:latest
-        imagePullPolicy: Always
+            name: ocelot-social
+        image: ocelotsocialnetwork/develop-backend:latest # for develop
+        # image: ocelotsocialnetwork/develop-backend:0.6.3 # for production or staging
+        imagePullPolicy: Always # for develop or staging
+        # imagePullPolicy: IfNotPresent # for production
         name: backend
         ports:
           - containerPort: 4000

@@ -3,16 +3,16 @@ kind: Deployment
 metadata:
   creationTimestamp: null
   labels:
-    human-connection.org/selector: deployment-human-connection-neo4j
+    ocelot.social/selector: deployment-ocelot-social-neo4j
   name: neo4j
-  namespace: human-connection
+  namespace: ocelot-social
 spec:
   progressDeadlineSeconds: 2147483647
   replicas: 1
   revisionHistoryLimit: 2147483647
   selector:
     matchLabels:
-      human-connection.org/selector: deployment-human-connection-neo4j
+      ocelot.social/selector: deployment-ocelot-social-neo4j
   strategy:
     rollingUpdate:
       maxSurge: 0
@@ -24,15 +24,17 @@ spec:
         backup.velero.io/backup-volumes: neo4j-data
       creationTimestamp: null
       labels:
-        human-connection.org/selector: deployment-human-connection-neo4j
+        ocelot.social/selector: deployment-ocelot-social-neo4j
       name: neo4j
     spec:
       containers:
        - envFrom:
           - configMapRef:
               name: configmap
-         image: ocelotsocialnetwork/develop-neo4j:latest
-         imagePullPolicy: Always
+         image: ocelotsocialnetwork/develop-neo4j:latest # for develop
+         # image: ocelotsocialnetwork/develop-neo4j:0.6.3 # for production or staging
+         imagePullPolicy: Always # for develop or staging
+         # imagePullPolicy: IfNotPresent # for production
         name: neo4j
         ports:
           - containerPort: 7687
@@ -40,10 +42,12 @@ spec:
           - containerPort: 7474
             protocol: TCP
         resources:
+          # see description and add cpu https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+          # see requirements for Neo4j v3.5.14 https://neo4j.com/docs/operations-manual/3.5/installation/requirements/
           limits:
             memory: 2G
           requests:
-            memory: 1G
+            memory: 2G
         terminationMessagePath: /dev/termination-log
         terminationMessagePolicy: File
         volumeMounts:

@@ -3,10 +3,10 @@ kind: Deployment
 metadata:
   creationTimestamp: null
   labels:
-    human-connection.org/commit: COMMIT
-    human-connection.org/selector: deployment-human-connection-web
+    ocelot.social/commit: COMMIT
+    ocelot.social/selector: deployment-ocelot-social-webapp
   name: web
-  namespace: human-connection
+  namespace: ocelot-social
 spec:
   minReadySeconds: 15
   progressDeadlineSeconds: 60
@@ -14,7 +14,7 @@ spec:
   revisionHistoryLimit: 2147483647
   selector:
     matchLabels:
-      human-connection.org/selector: deployment-human-connection-web
+      ocelot.social/selector: deployment-ocelot-social-webapp
   strategy:
     rollingUpdate:
       maxSurge: 1
@@ -24,8 +24,8 @@ spec:
     metadata:
       creationTimestamp: null
       labels:
-        human-connection.org/commit: COMMIT
-        human-connection.org/selector: deployment-human-connection-web
+        ocelot.social/commit: COMMIT
+        ocelot.social/selector: deployment-ocelot-social-webapp
       name: web
     spec:
       containers:
@@ -36,7 +36,7 @@ spec:
         - configMapRef:
             name: configmap
         - secretRef:
-            name: human-connection
+            name: ocelot-social
         image: ocelotsocialnetwork/develop-webapp:latest
         imagePullPolicy: Always
         name: web

@@ -1,7 +1,7 @@
 # Development Mail Server

 You can deploy a fake smtp server which captures all send mails and displays
-them in a web interface. The [sample configuration](../templates/configmap.template.yml)
+them in a web interface. The [sample configuration](../templates/configmap.template.yaml)
 is assuming such a dummy server in the `SMTP_HOST` configuration and points to
 a cluster-internal SMTP server.

@@ -10,8 +10,8 @@ To deploy the SMTP server just uncomment the relevant code in the
 run the following:

 ```bash
-# in folder deployment/human-connection
-kubectl apply -f mailserver/
+# in folder deployment/ocelot-social
+$ kubectl apply -f mailserver/
 ```

 You might need to refresh the TLS secret to enable HTTPS on the publicly

@@ -3,9 +3,9 @@ kind: Deployment
 metadata:
   creationTimestamp: null
   labels:
-    human-connection.org/selector: deployment-human-connection-mailserver
+    ocelot.social/selector: deployment-ocelot-social-mailserver
   name: mailserver
-  namespace: human-connection
+  namespace: ocelot-social
 spec:
   minReadySeconds: 15
   progressDeadlineSeconds: 60
@@ -13,7 +13,7 @@ spec:
   revisionHistoryLimit: 2147483647
   selector:
     matchLabels:
-      human-connection.org/selector: deployment-human-connection-mailserver
+      ocelot.social/selector: deployment-ocelot-social-mailserver
   strategy:
     rollingUpdate:
       maxSurge: 1
@@ -23,7 +23,7 @@ spec:
     metadata:
       creationTimestamp: null
       labels:
-        human-connection.org/selector: deployment-human-connection-mailserver
+        ocelot.social/selector: deployment-ocelot-social-mailserver
       name: mailserver
     spec:
       containers:
@@ -31,7 +31,7 @@ spec:
         - configMapRef:
             name: configmap
         - secretRef:
-            name: human-connection
+            name: ocelot-social
         image: djfarrelly/maildev
         imagePullPolicy: Always
         name: mailserver

@@ -2,9 +2,9 @@ apiVersion: v1
 kind: Service
 metadata:
   name: mailserver
-  namespace: human-connection
+  namespace: ocelot-social
   labels:
-    human-connection.org/selector: deployment-human-connection-mailserver
+    ocelot.social/selector: deployment-ocelot-social-mailserver
 spec:
   ports:
     - name: web
@@ -14,4 +14,4 @@ spec:
       port: 25
       targetPort: 25
   selector:
-    human-connection.org/selector: deployment-human-connection-mailserver
+    ocelot.social/selector: deployment-ocelot-social-mailserver

@@ -13,7 +13,7 @@ We prepared sample configuration, so you can simply run:

 ```sh
 # in folder deployment/
-$ kubectl apply -f ocelotsocialnetwork/develop-maintenance
+$ kubectl apply -f ./ocelot-social/maintenance/
 ```

 This will fire up a maintenance service.
@@ -23,18 +23,18 @@ This will fire up a maintenance service.
 Now if you want to have a controlled downtime and you want to bring your
 application into maintenance mode, you can edit your global ingress server.

-E.g. in file `deployment/digital-ocean/https/ingress.yaml` change the following:
+E.g. copy file [`deployment/digital-ocean/https/templates/ingress.template.yaml`](../../digital-ocean/https/templates/ingress.template.yaml) to new file `deployment/digital-ocean/https/ingress.yaml` and change the following:

 ```yaml
 ...

-  - host: nitro-staging.human-connection.org
+  - host: develop-k8s.ocelot.social
     http:
       paths:
         - path: /
           backend:
-            # serviceName: develop-webapp
-            serviceName: develop-maintenance
+            # serviceName: web
+            serviceName: maintenance
             # servicePort: 3000
             servicePort: 80
 ```
@@ -1,17 +1,17 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: maintenance
-  namespace: human-connection
+  namespace: ocelot-social
 spec:
   selector:
     matchLabels:
-      human-connection.org/selector: deployment-human-connection-maintenance
+      ocelot.social/selector: deployment-ocelot-social-maintenance
   template:
     metadata:
       labels:
-        human-connection.org/commit: COMMIT
-        human-connection.org/selector: deployment-human-connection-maintenance
+        ocelot.social/commit: COMMIT
+        ocelot.social/selector: deployment-ocelot-social-maintenance
       name: maintenance
     spec:
       containers:

@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: maintenance
+  namespace: ocelot-social
+  labels:
+    ocelot.social/selector: deployment-ocelot-social-maintenance
+spec:
+  ports:
+    - name: web
+      port: 80
+      targetPort: 80
+  selector:
+    ocelot.social/selector: deployment-ocelot-social-maintenance
deployment/ocelot-social/service-backend.yaml (Normal file)
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: backend
+  namespace: ocelot-social
+  labels:
+    ocelot.social/selector: deployment-ocelot-social-backend
+spec:
+  ports:
+    - name: web
+      port: 4000
+      targetPort: 4000
+  selector:
+    ocelot.social/selector: deployment-ocelot-social-backend
@@ -2,9 +2,9 @@ apiVersion: v1
 kind: Service
 metadata:
   name: neo4j
-  namespace: human-connection
+  namespace: ocelot-social
   labels:
-    human-connection.org/selector: deployment-human-connection-neo4j
+    ocelot.social/selector: deployment-ocelot-social-neo4j
 spec:
   ports:
     - name: bolt
@@ -14,4 +14,4 @@ spec:
       port: 7474
       targetPort: 7474
   selector:
-    human-connection.org/selector: deployment-human-connection-neo4j
+    ocelot.social/selector: deployment-ocelot-social-neo4j

deployment/ocelot-social/service-webapp.yaml (Normal file)
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: web
+  namespace: ocelot-social
+  labels:
+    ocelot.social/selector: deployment-ocelot-social-webapp
+spec:
+  ports:
+    - name: web
+      port: 3000
+      targetPort: 3000
+  selector:
+    ocelot.social/selector: deployment-ocelot-social-webapp
deployment/ocelot-social/templates/configmap.template.yaml (Normal file)
@@ -0,0 +1,39 @@
+apiVersion: v1
+kind: ConfigMap
+data:
+  # decomment following lines for S3 bucket to store our images
+  # AWS_ACCESS_KEY_ID: see secrets
+  # AWS_BUCKET: ocelot-social-uploads
+  # AWS_ENDPOINT: fra1.digitaloceanspaces.com
+  # AWS_REGION: fra1
+  # AWS_SECRET_ACCESS_KEY: see secrets
+  CLIENT_URI: "https://develop-k8s.ocelot.social" # change this to your domain
+  COMMIT: ""
+  EMAIL_DEFAULT_SENDER: devops@ocelot.social # change this to your e-mail
+  GRAPHQL_PORT: "4000"
+  GRAPHQL_URI: "http://backend.ocelot-social:4000" # leave this as ocelot-social
+  # decomment following line for Neo4j Enterprice version instead of Community version
+  # NEO4J_ACCEPT_LICENSE_AGREEMENT: "yes"
+  NEO4J_AUTH: "none"
+  # NEO4J_dbms_connector_bolt_thread__pool__max__size: "10000"
+  NEO4J_apoc_import_file_enabled: "true"
+  NEO4J_dbms_memory_heap_initial__size: "500M"
+  NEO4J_dbms_memory_heap_max__size: "500M"
+  NEO4J_dbms_memory_pagecache_size: "490M"
+  NEO4J_dbms_security_procedures_unrestricted: "algo.*,apoc.*"
+  NEO4J_URI: "bolt://neo4j.ocelot-social:7687" # leave this as ocelot-social
+  PUBLIC_REGISTRATION: "false"
+  REDIS_DOMAIN: ---toBeSet(IP)---
+  # REDIS_PASSWORD: see secrets
+  REDIS_PORT: "6379"
+  SENTRY_DSN_WEBAPP: "---toBeSet---"
+  SENTRY_DSN_BACKEND: "---toBeSet---"
+  SMTP_HOST: "mail.ocelot.social" # change this to your domain
+  # SMTP_PASSWORD: see secrets
+  SMTP_PORT: "25" # change this to your port
+  # SMTP_USERNAME: see secrets
+  SMTP_IGNORE_TLS: 'true' # change this to your setting
+  WEBSOCKETS_URI: wss://develop-k8s.ocelot.social/api/graphql # change this to your domain
+metadata:
+  name: configmap
+  namespace: ocelot-social
17 deployment/ocelot-social/templates/secrets.template.yaml Normal file
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: Secret
+data:
+  # uncomment the following lines to use an S3 bucket to store our images
+  # AWS_ACCESS_KEY_ID: ---toBeSet---
+  # AWS_SECRET_ACCESS_KEY: ---toBeSet---
+  JWT_SECRET: "Yi8mJjdiNzhCRiZmdi9WZA=="
+  PRIVATE_KEY_PASSPHRASE: "YTdkc2Y3OHNhZGc4N2FkODdzZmFnc2FkZzc4"
+  MAPBOX_TOKEN: "---toBeSet(IP)---"
+  NEO4J_USERNAME: ""
+  NEO4J_PASSWORD: ""
+  REDIS_PASSWORD: ---toBeSet---
+  SMTP_PASSWORD: "---toBeSet---"
+  SMTP_USERNAME: "---toBeSet---"
+metadata:
+  name: ocelot-social
+  namespace: ocelot-social
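The values under `data:` in a Kubernetes Secret like the template above are base64-encoded, not plain text. A minimal sketch of preparing such a value on the command line (the example string is a placeholder, not one of the real secrets):

```shell
# Kubernetes Secret `data:` fields hold base64-encoded values.
# Encode a plain-text value before pasting it into secrets.template.yaml:
echo -n 'my-plaintext-secret' | base64

# Decode an existing value to inspect it:
echo 'bXktcGxhaW50ZXh0LXNlY3JldA==' | base64 --decode
```

Note the `-n` on `echo`: without it a trailing newline would be encoded into the secret.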
@@ -3,14 +3,14 @@
 At the moment, the application needs two persistent volumes:
 
 * The `/data/` folder where `neo4j` stores its database and
-* the folder `/develop-backend/public/uploads` where the backend stores uploads.
+* the folder `/develop-backend/public/uploads` where the backend stores uploads, in case you don't use Digital Ocean Spaces (an AWS S3 bucket) for this purpose.
 
 As a matter of precaution, the persistent volume claims that set up these volumes
 live in a separate folder. You don't want to accidentally lose all your data in
 your database by running
 
 ```sh
-kubectl delete -f human-connection/
+kubectl delete -f ocelot-social/
 ```
 
 or do you?
@@ -18,6 +18,7 @@ or do you?
 ## Create Persistent Volume Claims
 
 Run the following:
 
 ```sh
+# in folder deployments/
 $ kubectl apply -f volumes
@@ -25,7 +26,7 @@ persistentvolumeclaim/neo4j-data-claim created
 persistentvolumeclaim/uploads-claim created
 ```
 
-## Backup and Restore
+## Backup And Restore
 
 We tested a couple of options for disaster recovery in Kubernetes. First,
 there is the [offline backup strategy](./neo4j-offline-backup/README.md) of the
@@ -3,10 +3,10 @@
 apiVersion: v1
 metadata:
   name: neo4j-data-claim
-  namespace: human-connection
+  namespace: ocelot-social
 spec:
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
-      storage: {{ .Values.neo4jStorage }}
+      storage: "10Gi" # see requirements for Neo4j v3.5.14 https://neo4j.com/docs/operations-manual/3.5/installation/requirements/
@@ -23,13 +23,13 @@ So, all we have to do is edit the kubernetes deployment of our Neo4J database
 and set a custom `command` every time we have to carry out tasks like backup,
 restore, seed etc.
 
-First bring the application into [maintenance mode](https://github.com/Human-Connection/Human-Connection/blob/master/deployment/human-connection/maintenance/README.md) to ensure there are no
+First bring the application into [maintenance mode](https://github.com/Ocelot-Social-Community/Ocelot-Social/blob/master/deployment/ocelot-social/maintenance/README.md) to ensure there are no
 database connections left and nobody can access the application.
 
 Run the following:
 
 ```sh
-$ kubectl --namespace=human-connection edit deployment develop-neo4j
+$ kubectl -n ocelot-social edit deployment develop-neo4j
 ```
 
 Add the following to `spec.template.spec.containers`:
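The exact snippet to add lives further down in the repo docs; as a sketch, a custom `command` of this kind typically keeps the container alive without booting the database, so `neo4j-admin` can operate on the stopped data (container name and image tag here are assumptions, not taken from these docs):

```yaml
# Hypothetical override inside spec.template.spec.containers of the
# develop-neo4j deployment: sleep instead of starting Neo4J, so that
# neo4j-admin dump/load can run against the stopped database files.
- name: neo4j
  image: neo4j:3.5.14
  command: ["tail", "-f", "/dev/null"]
```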
@@ -55,9 +55,9 @@ file and trigger an update of the deployment.
 First stop your Neo4J database, see above. Then:
 
 ```sh
-$ kubectl --namespace=human-connection get pods
+$ kubectl -n ocelot-social get pods
 # Copy the ID of the pod running Neo4J.
-$ kubectl --namespace=human-connection exec -it <POD-ID> bash
+$ kubectl -n ocelot-social exec -it <POD-ID> bash
 # Once you're in the pod, dump the db to a file e.g. `/root/neo4j-backup`.
 > neo4j-admin dump --to=/root/neo4j-backup
 > exit
@@ -72,12 +72,12 @@ Revert your changes to deployment `develop-neo4j` which will restart the databas
 First stop your Neo4J database. Then:
 
 ```sh
-$ kubectl --namespace=human-connection get pods
+$ kubectl -n ocelot-social get pods
 # Copy the ID of the pod running Neo4J.
 # Then upload your local backup to the pod. Note that once the pod gets deleted
 # e.g. if you change the deployment, the backup file is gone with it.
 $ kubectl cp ./neo4j-backup human-connection/<POD-ID>:/root/
-$ kubectl --namespace=human-connection exec -it <POD-ID> bash
+$ kubectl -n ocelot-social exec -it <POD-ID> bash
 # Once you're in the pod restore the backup and overwrite the default database
 # called `graph.db` with `--force`.
 # This will delete all existing data in database `graph.db`!
@@ -43,12 +43,12 @@ Restoration must be done while the database is not running, see [our docs](https
 After you have stopped the database and have the pod running, you can restore the database by running these commands:
 
 ```sh
-$ kubectl --namespace=human-connection get pods
+$ kubectl -n ocelot-social get pods
 # Copy the ID of the pod running Neo4J.
 # Then upload your local backup to the pod. Note that once the pod gets deleted
 # e.g. if you change the deployment, the backup file is gone with it.
 $ kubectl cp ./neo4j-backup/ human-connection/<POD-ID>:/root/
-$ kubectl --namespace=human-connection exec -it <POD-ID> bash
+$ kubectl -n ocelot-social exec -it <POD-ID> bash
 # Once you're in the pod restore the backup and overwrite the default database
 # called `graph.db` with `--force`.
 # This will delete all existing data in database `graph.db`!
@@ -8,14 +8,15 @@ you from losing data if you accidentally delete the namespace and the persistent
 volumes along with it.
 
 ```sh
-$ kubectl --namespace=human-connection get pv
+$ kubectl -n ocelot-social get pv
 
 NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS       REASON   AGE
-pvc-bd02a715-66d0-11e9-be52-ba9c337f4551   1Gi        RWO            Delete           Bound    human-connection/neo4j-data-claim   do-block-storage            4m24s
-pvc-bd208086-66d0-11e9-be52-ba9c337f4551   2Gi        RWO            Delete           Bound    human-connection/uploads-claim      do-block-storage            4m12s
+pvc-bd02a715-66d0-11e9-be52-ba9c337f4551   5Gi        RWO            Delete           Bound    ocelot-social/neo4j-data-claim      do-block-storage            4m24s
+pvc-bd208086-66d0-11e9-be52-ba9c337f4551   10Gi       RWO            Delete           Bound    ocelot-social/uploads-claim         do-block-storage            4m12s
 ```
 
 Get the volume id from above, then change `ReclaimPolicy` with:
 
 ```sh
 kubectl patch pv <VOLUME-ID> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
@@ -3,10 +3,10 @@
 apiVersion: v1
 metadata:
   name: uploads-claim
-  namespace: human-connection
+  namespace: ocelot-social
 spec:
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
-      storage: {{ .Values.uploadsStorage }}
+      storage: "10Gi"
@@ -3,9 +3,9 @@
 apiVersion: v1
 metadata:
   name: neo4j-data-claim
-  namespace: human-connection
+  namespace: ocelot-social
   labels:
-    app: human-connection
+    app: ocelot-social
 spec:
   dataSource:
     name: neo4j-data-snapshot
@@ -3,7 +3,7 @@
 kind: VolumeSnapshot
 metadata:
   name: uploads-snapshot
-  namespace: human-connection
+  namespace: ocelot-social
 spec:
   source:
     name: uploads-claim
@@ -2,7 +2,7 @@ version: "3.4"
 
 services:
   webapp:
-    image: schoolsinmotion/webapp:build-and-test
+    image: ocelotsocialnetwork/develop-webapp:build-and-test
     build:
       context: webapp
       target: build-and-test
@@ -15,7 +15,7 @@ services:
     volumes:
      - webapp_node_modules:/nitro-web/node_modules
   backend:
-    image: schoolsinmotion/backend:build-and-test
+    image: ocelotsocialnetwork/develop-backend:build-and-test
     build:
       context: backend
       target: build-and-test
@@ -1,6 +1,6 @@
 # Edit this Documentation
 
-Find the [**table of contents** for this documentation on GitHub](https://github.com/Human-Connection/Human-Connection/blob/master/SUMMARY.md) and navigate to the file you need to update.
+Find the [**table of contents** for this documentation on GitHub](https://github.com/Ocelot-Social-Community/Ocelot-Social/blob/master/SUMMARY.md) and navigate to the file you need to update.
 
 Click on the **edit pencil** on the right side directly above the text to edit this file on your fork of Human Connection \(HC\).
 
@@ -8,7 +8,7 @@ You can see a preview of your changes by clicking the **Preview changes** tab as
 
 If you are ready, fill in the **Propose file change** at the end of the webpage.
 
-After that you have to send your change to the HC team with a pull request. There, make a comment about which issue you have fixed. (If you are working on one of our [open issues](https://github.com/Human-Connection/Human-Connection/issues) please include the number.)
+After that you have to send your change to the HC team with a pull request. There, make a comment about which issue you have fixed. (If you are working on one of our [open issues](https://github.com/Ocelot-Social-Community/Ocelot-Social/issues) please include the number.)
 
 ## Markdown your documentation
@@ -20,7 +20,7 @@ To design your documentation see the syntax description at GitBook:
 
 #### Headlines
 
-```text
+```markdown
 # Main headline
 ## Smaller headlines
 ### Small headlines
@@ -28,7 +28,7 @@ To design your documentation see the syntax description at GitBook:
 
 #### Tabs
 
-```text
+```markdown
 {% tabs %}
 {% tab title="XXX" %}
 XXX
@@ -42,36 +42,36 @@ XXX
 
 #### Commands
 
-```text
-```LANGUAGE (for text highlighting)
+~~~markdown
+```<LANGUAGE> (for text highlighting)
 XXX
 ```
+~~~
 
-```text
 #### Links
 
-```text
-[https://XXX](XXX)
+```markdown
+[XXX](https://XXX)
 ```
 
 #### Screenshots or other Images
 
-```text
+```markdown
 
 ```
 
-#### Hints for ToDos
+#### Hints For ToDos
 
-```text
+```markdown
 {% hint style="info" %} TODO: XXX {% endhint %}
 ```
 
-## Host the screenshots
+## Host The Screenshots
 
-### Host on Human Connection
+### Host On Ocelot-Social \(GitHub\) repository
 
 {% hint style="info" %}
-TODO: How to host on Human Connection \(GitHub\) ...
+TODO: How to host on Ocelot-Social \(GitHub\) repository ...
 {% endhint %}
 
 ### Quick Solution
@@ -1,6 +1,6 @@
 # Installation
 
-The repository can be found on GitHub: [https://github.com/Human-Connection/Human-Connection](https://github.com/Human-Connection/Human-Connection)
+The repository can be found on GitHub: [https://github.com/Ocelot-Social-Community/Ocelot-Social](https://github.com/Ocelot-Social-Community/Ocelot-Social)
 
 We give write permissions to every developer who asks for it. Just text us on
 [Discord](https://discord.gg/6ub73U3).
@@ -13,7 +13,7 @@ Clone the repository; this will create a new folder called `Human-Connection`:
 {% tabs %}
 {% tab title="HTTPS" %}
 ```bash
-$ git clone https://github.com/Human-Connection/Human-Connection.git
+$ git clone https://github.com/Ocelot-Social-Community/Ocelot-Social.git
 ```
 {% endtab %}
@@ -6,7 +6,7 @@
   "license": "MIT",
   "repository": {
     "type": "git",
-    "url": "https://github.com/Human-Connection/Human-Connection.git"
+    "url": "https://github.com/Ocelot-Social-Community/Ocelot-Social.git"
   },
   "cypress-cucumber-preprocessor": {
     "nonGlobalStepDefinitions": true
@@ -1,6 +1,6 @@
 #!/usr/bin/env bash
 sed -i "s/<COMMIT>/${TRAVIS_COMMIT}/g" $TRAVIS_BUILD_DIR/scripts/patches/patch-deployment.yaml
 sed -i "s/<COMMIT>/${TRAVIS_COMMIT}/g" $TRAVIS_BUILD_DIR/scripts/patches/patch-configmap.yaml
-kubectl --namespace=human-connection patch configmap develop-configmap -p "$(cat $TRAVIS_BUILD_DIR/scripts/patches/patch-configmap.yaml)"
-kubectl --namespace=human-connection patch deployment develop-backend -p "$(cat $TRAVIS_BUILD_DIR/scripts/patches/patch-deployment.yaml)"
-kubectl --namespace=human-connection patch deployment develop-webapp -p "$(cat $TRAVIS_BUILD_DIR/scripts/patches/patch-deployment.yaml)"
+kubectl -n ocelot-social patch configmap develop-configmap -p "$(cat $TRAVIS_BUILD_DIR/scripts/patches/patch-configmap.yaml)"
+kubectl -n ocelot-social patch deployment develop-backend -p "$(cat $TRAVIS_BUILD_DIR/scripts/patches/patch-deployment.yaml)"
+kubectl -n ocelot-social patch deployment develop-webapp -p "$(cat $TRAVIS_BUILD_DIR/scripts/patches/patch-deployment.yaml)"
@@ -4,4 +4,4 @@ data:
   COMMIT: <COMMIT>
 metadata:
   name: configmap
-  namespace: human-connection
+  namespace: ocelot-social
@@ -2,4 +2,4 @@ spec:
   template:
     metadata:
       labels:
-        human-connection.org/commit: <COMMIT>
+        ocelot.social/commit: <COMMIT>
@@ -26,6 +26,7 @@ RUN NODE_ENV=production yarn run build
 FROM base as production
 RUN yarn install --production=true --frozen-lockfile --non-interactive --no-cache
 COPY --from=build-and-test ./develop-webapp/.nuxt ./.nuxt
+COPY --from=build-and-test ./develop-webapp/constants ./constants
 COPY --from=build-and-test ./develop-webapp/static ./static
 COPY nuxt.config.js .
 COPY locales locales
@@ -91,7 +91,11 @@ export default {
         this.$toast.success(this.$t('login.success'))
         this.$emit('success')
       } catch (err) {
-        this.$toast.error(this.$t('login.failure'))
+        if (err.message === 'Error: no-cookie') {
+          this.$toast.error(this.$t('login.no-cookie'))
+        } else {
+          this.$toast.error(this.$t('login.failure'))
+        }
       }
     },
     toggleShowPassword(event) {
97 webapp/components/SocialMedia/SocialMedia.spec.js Normal file
@@ -0,0 +1,97 @@
+import { config, mount } from '@vue/test-utils'
+import SocialMedia from './SocialMedia.vue'
+
+config.stubs['ds-space'] = '<span><slot /></span>'
+config.stubs['ds-text'] = '<span><slot /></span>'
+
+describe('SocialMedia.vue', () => {
+  let propsData
+  let mocks
+
+  beforeEach(() => {
+    propsData = {}
+
+    mocks = {
+      $t: jest.fn(),
+    }
+  })
+
+  describe('mount', () => {
+    const Wrapper = () => {
+      return mount(SocialMedia, { propsData, mocks })
+    }
+
+    describe('socialMedia card title', () => {
+      beforeEach(() => {
+        propsData.userName = 'Jenny Rostock'
+        propsData.user = {
+          socialMedia: [
+            {
+              id: 'ee1e8ed6-fbef-4bcf-b411-a12926f2ea1e',
+              url: 'https://www.instagram.com/nimitbhargava',
+              __typename: 'SocialMedia',
+            },
+          ],
+        }
+      })
+
+      it('renders socialMedia card title', () => {
+        Wrapper()
+        expect(mocks.$t).toHaveBeenCalledWith('profile.socialMedia')
+      })
+    })
+
+    describe('socialMedia links', () => {
+      let wrapper
+
+      beforeEach(() => {
+        propsData.userName = 'Jenny Rostock'
+        propsData.user = {
+          socialMedia: [
+            {
+              id: 'ee1e8ed6-fbef-4bcf-b411-a12926f2ea1e',
+              url: 'https://www.instagram.com/nimitbhargava',
+              __typename: 'SocialMedia',
+            },
+            {
+              id: 'dc91aecb-3289-47d0-8770-4b24eb24fd9c',
+              url: 'https://www.facebook.com/NimitBhargava',
+              __typename: 'SocialMedia',
+            },
+            {
+              id: 'db1dc400-9303-4b43-9451-87dcac13b913',
+              url: 'https://www.youtube.com/channel/UCu3GiKBFn5I07V9hBxF2CRA',
+              __typename: 'SocialMedia',
+            },
+          ],
+        }
+        // Now assign wrapper
+        wrapper = Wrapper()
+      })
+
+      it('shows 3 social media links', () => {
+        expect(wrapper.findAll('a')).toHaveLength(3)
+      })
+
+      it('renders a social media link', () => {
+        const link = wrapper.findAll('a').at(0)
+        expect(link.attributes('href')).toEqual('https://www.instagram.com/nimitbhargava')
+      })
+
+      it('shows the first favicon', () => {
+        const favicon = wrapper.findAll('a').at(0).find('img')
+        expect(favicon.attributes('src')).toEqual('https://www.instagram.com/favicon.ico')
+      })
+
+      it('shows the second favicon', () => {
+        const favicon = wrapper.findAll('a').at(1).find('img')
+        expect(favicon.attributes('src')).toEqual('https://www.facebook.com/favicon.ico')
+      })
+
+      it('shows the last favicon', () => {
+        const favicon = wrapper.findAll('a').at(-1).find('img')
+        expect(favicon.attributes('src')).toEqual('https://www.youtube.com/favicon.ico')
+      })
+    })
+  })
+})
49 webapp/components/SocialMedia/SocialMedia.vue Normal file
@@ -0,0 +1,49 @@
+<template>
+  <ds-space v-if="user.socialMedia && user.socialMedia.length" margin="large">
+    <base-card class="social-media-bc">
+      <ds-space margin="x-small">
+        <ds-text tag="h5" color="soft">
+          {{ $t('profile.socialMedia') }} {{ userName | truncate(15) }}?
+        </ds-text>
+        <template>
+          <ds-space v-for="link in socialMediaLinks()" :key="link.id" margin="x-small">
+            <a :href="link.url" target="_blank">
+              <img :src="link.favicon" alt="Link:" height="22" width="22" />
+              {{ link.username }}
+            </a>
+          </ds-space>
+        </template>
+      </ds-space>
+    </base-card>
+  </ds-space>
+</template>
+
+<script>
+export default {
+  name: 'social-media',
+  props: {
+    userName: {},
+    user: {},
+  },
+  methods: {
+    socialMediaLinks() {
+      const { socialMedia = [] } = this.user
+      return socialMedia.map((socialMedia) => {
+        const { url } = socialMedia
+        const matches = url.match(/^(?:https?:\/\/)?(?:[^@\n])?(?:www\.)?([^:/\n?]+)/g)
+        const [domain] = matches || []
+        const favicon = domain ? `${domain}/favicon.ico` : null
+        const username = url.split('/').pop()
+        return { url, username, favicon }
+      })
+    },
+  },
+}
+</script>
+
+<style scoped>
+.social-media-bc {
+  position: relative;
+  height: auto;
+}
+</style>
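The URL-to-favicon mapping used in `socialMediaLinks()` above can be exercised in isolation. A sketch with the same regex and mapping logic; `extractLink` is a hypothetical helper name, not part of the component:

```javascript
// Standalone version of the mapping inside socialMediaLinks():
// derive the site's favicon URL and a display name from a link.
function extractLink(url) {
  // Capture the scheme + host portion, e.g. "https://www.instagram.com"
  const matches = url.match(/^(?:https?:\/\/)?(?:[^@\n])?(?:www\.)?([^:/\n?]+)/g)
  const [domain] = matches || []
  const favicon = domain ? `${domain}/favicon.ico` : null
  // The last path segment serves as the displayed user name
  const username = url.split('/').pop()
  return { url, username, favicon }
}

console.log(extractLink('https://www.instagram.com/nimitbhargava').favicon)
// → https://www.instagram.com/favicon.ico
```

Note the regex matches the full scheme-plus-host prefix, which is why the spec above expects `https://www.instagram.com/favicon.ico` rather than a bare hostname.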
@@ -4,4 +4,5 @@ export default {
   APPLICATION_DESCRIPTION: 'ocelot.social Community Network',
   ORGANIZATION_NAME: 'ocelot.social Community',
   ORGANIZATION_JURISDICTION: 'City of Angels',
+  COOKIE_NAME: 'ocelot-social-token',
 }
@@ -315,6 +315,7 @@
   "moreInfo": "Was ist {APPLICATION_NAME}?",
   "moreInfoHint": "zur Präsentationsseite",
   "no-account": "Du hast noch kein Benutzerkonto?",
+  "no-cookie": "Es kann kein Cookie angelegt werden. Du musst Cookies akzeptieren.",
   "password": "Dein Passwort",
   "register": "Benutzerkonto erstellen",
   "success": "Du bist eingeloggt!"
@@ -315,6 +315,7 @@
   "moreInfo": "What is {APPLICATION_NAME}?",
   "moreInfoHint": "to the presentation page",
   "no-account": "Don't have an account?",
+  "no-cookie": "No cookie can be set. You must accept cookies.",
   "password": "Your Password",
   "register": "Sign up",
   "success": "You are logged in!"
@@ -1,6 +1,7 @@
 import path from 'path'
 import dotenv from 'dotenv'
 import manifest from './constants/manifest.js'
+import metadata from './constants/metadata.js'
 
 dotenv.config() // we want to synchronize @nuxt-dotenv and nuxt-env
 
@@ -214,7 +215,7 @@ export default {
 
   // Give apollo module options
   apollo: {
-    tokenName: 'ocelot-social-token', // optional, default: apollo-token
+    tokenName: metadata.COOKIE_NAME, // optional, default: apollo-token
     cookieAttributes: {
       expires: 1, // optional, default: 7 (days)
     },
@@ -80,6 +80,7 @@
     "nuxt": "~2.12.1",
     "nuxt-dropzone": "^1.0.4",
     "nuxt-env": "~0.1.0",
+    "sass": "^1.30.0",
     "stack-utils": "^2.0.1",
     "tippy.js": "^4.3.5",
     "tiptap": "~1.26.6",
@@ -133,7 +134,6 @@
     "identity-obj-proxy": "^3.0.0",
     "jest": "~26.6.3",
     "mutation-observer": "^1.0.3",
-    "node-sass": "~4.13.1",
     "prettier": "~2.2.1",
     "sass-loader": "~8.0.2",
     "storybook-design-token": "^0.8.1",
@@ -103,23 +103,7 @@
           type="following"
           @fetchAllConnections="fetchAllConnections"
         />
-        <ds-space v-if="user.socialMedia && user.socialMedia.length" margin="large">
-          <base-card style="position: relative; height: auto">
-            <ds-space margin="x-small">
-              <ds-text tag="h5" color="soft">
-                {{ $t('profile.socialMedia') }} {{ userName | truncate(15) }}?
-              </ds-text>
-              <template>
-                <ds-space v-for="link in socialMediaLinks" :key="link.username" margin="x-small">
-                  <a :href="link.url" target="_blank">
-                    <user-avatar :image="link.favicon" />
-                    {{ link.username }}
-                  </a>
-                </ds-space>
-              </template>
-            </ds-space>
-          </base-card>
-        </ds-space>
+        <social-media :user-name="userName" :user="user" />
       </ds-flex-item>
 
       <ds-flex-item :width="{ base: '100%', sm: 3, md: 5, lg: 3 }">
@@ -243,6 +227,7 @@ import { muteUser, unmuteUser } from '~/graphql/settings/MutedUsers'
 import { blockUser, unblockUser } from '~/graphql/settings/BlockedUsers'
 import PostMutations from '~/graphql/PostMutations'
 import UpdateQuery from '~/components/utils/UpdateQuery'
+import SocialMedia from '~/components/SocialMedia/SocialMedia'
 
 const tabToFilterMapping = ({ tab, id }) => {
   return {
@@ -254,6 +239,7 @@ const tabToFilterMapping = ({ tab, id }) => {
 
 export default {
   components: {
+    SocialMedia,
     PostTeaser,
     HcFollowButton,
     HcCountTo,
@@ -292,17 +278,6 @@ export default {
     user() {
       return this.User ? this.User[0] : {}
     },
-    socialMediaLinks() {
-      const { socialMedia = [] } = this.user
-      return socialMedia.map((socialMedia) => {
-        const { url } = socialMedia
-        const matches = url.match(/^(?:https?:\/\/)?(?:[^@\n])?(?:www\.)?([^:/\n?]+)/g)
-        const [domain] = matches || []
-        const favicon = domain ? `${domain}/favicon.ico` : null
-        const username = url.split('/').pop()
-        return { url, username, favicon }
-      })
-    },
     userName() {
       const { name } = this.user || {}
       return name || this.$t('profile.userAnonym')
@@ -1,5 +1,6 @@
 import { InMemoryCache, IntrospectionFragmentMatcher } from 'apollo-cache-inmemory'
 import introspectionQueryResultData from './apollo-config/fragmentTypes.json'
+import metadata from '~/constants/metadata'
 
 const fragmentMatcher = new IntrospectionFragmentMatcher({
   introspectionQueryResultData,
@@ -16,7 +17,7 @@ export default ({ req, nuxtState }) => {
       credentials: 'same-origin',
     },
     credentials: true,
-    tokenName: 'ocelot-social-token',
+    tokenName: metadata.COOKIE_NAME,
     persisting: false,
     websocketsOnly: false,
     cache: new InMemoryCache({ fragmentMatcher }),
@@ -1,6 +1,10 @@
 import gql from 'graphql-tag'
 import { VERSION } from '~/constants/terms-and-conditions-version.js'
 import { currentUserQuery } from '~/graphql/User'
+import Cookie from 'universal-cookie'
+import metadata from '~/constants/metadata'
+
+const cookies = new Cookie()
 
 export const state = () => {
   return {
@@ -99,6 +103,9 @@ export const actions = {
       await this.app.$apolloHelpers.onLogin(login)
       commit('SET_TOKEN', login)
      await dispatch('fetchCurrentUser')
+      if (cookies.get(metadata.COOKIE_NAME) === undefined) {
+        throw new Error('no-cookie')
+      }
     } catch (err) {
       throw new Error(err)
     } finally {
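The store's catch block above explains the odd-looking comparison `err.message === 'Error: no-cookie'` in the login component: `throw new Error(err)` stringifies the inner `Error`, prefixing its message with `Error: `. A minimal sketch of the same wrapping pattern:

```javascript
// Re-throwing an Error via `new Error(err)` converts the inner error
// to a string, so its message gains an 'Error: ' prefix. This mirrors
// the store's catch block and the component's string comparison.
try {
  try {
    throw new Error('no-cookie')
  } catch (err) {
    throw new Error(err) // same pattern as the store's catch block
  }
} catch (err) {
  console.log(err.message) // → Error: no-cookie
}
```

Rethrowing with `throw err` instead would preserve the original `'no-cookie'` message; the component's check is written for the wrapped form.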
@@ -5984,6 +5984,21 @@ cheerio@^1.0.0-rc.2:
     lodash "^4.15.0"
     parse5 "^3.0.1"
 
+"chokidar@>=2.0.0 <4.0.0":
+  version "3.4.3"
+  resolved "https://registry.yarnpkg.com/chokidar/-/chokidar-3.4.3.tgz#c1df38231448e45ca4ac588e6c79573ba6a57d5b"
+  integrity sha512-DtM3g7juCXQxFVSNPNByEC2+NImtBuxQQvWlHunpJIS5Ocr0lG306cC7FCi7cEA0fzmybPUIl4txBIobk1gGOQ==
+  dependencies:
+    anymatch "~3.1.1"
+    braces "~3.0.2"
+    glob-parent "~5.1.0"
+    is-binary-path "~2.1.0"
+    is-glob "~4.0.1"
+    normalize-path "~3.0.0"
+    readdirp "~3.5.0"
+  optionalDependencies:
+    fsevents "~2.1.2"
+
 chokidar@^2.0.2, chokidar@^2.0.4:
   version "2.1.6"
   resolved "https://registry.yarnpkg.com/chokidar/-/chokidar-2.1.6.tgz#b6cad653a929e244ce8a834244164d241fa954c5"
@@ -12523,7 +12538,7 @@ node-res@^5.0.1:
     on-finished "^2.3.0"
     vary "^1.1.2"
 
-node-sass@^4.12.0, node-sass@~4.13.1:
+node-sass@^4.12.0:
   version "4.13.1"
   resolved "https://registry.yarnpkg.com/node-sass/-/node-sass-4.13.1.tgz#9db5689696bb2eec2c32b98bfea4c7a2e992d0a3"
   integrity sha512-TTWFx+ZhyDx1Biiez2nB0L3YrCZ/8oHagaDalbuBSlqXgUPsdkUSzJsVxeDO9LtPB49+Fh3WQl3slABo6AotNw==
@@ -14913,6 +14928,13 @@ readdirp@~3.3.0:
   dependencies:
     picomatch "^2.0.7"
 
+readdirp@~3.5.0:
+  version "3.5.0"
+  resolved "https://registry.yarnpkg.com/readdirp/-/readdirp-3.5.0.tgz#9ba74c019b15d365278d2e91bb8c48d7b4d42c9e"
+  integrity sha512-cMhu7c/8rdhkHXWsY+osBhfSy0JikwpHK/5+imo+LpeasTF8ouErHrlYkwT0++njiyuDvc7OFY5T3ukvZ8qmFQ==
+  dependencies:
+    picomatch "^2.2.1"
+
 realpath-native@^2.0.0:
   version "2.0.0"
   resolved "https://registry.yarnpkg.com/realpath-native/-/realpath-native-2.0.0.tgz#7377ac429b6e1fd599dc38d08ed942d0d7beb866"
@@ -15469,6 +15491,13 @@ sass-resources-loader@^2.0.0:
     glob "^7.1.1"
     loader-utils "^1.0.4"
 
+sass@^1.30.0:
+  version "1.30.0"
+  resolved "https://registry.yarnpkg.com/sass/-/sass-1.30.0.tgz#60bbbbaf76ba10117e61c6c24f00161c3d60610e"
+  integrity sha512-26EUhOXRLaUY7+mWuRFqGeGGNmhB1vblpTENO1Z7mAzzIZeVxZr9EZoaY1kyGLFWdSOZxRMAufiN2mkbO6dAlw==
+  dependencies:
+    chokidar ">=2.0.0 <4.0.0"
+
 sax@^1.2.4, sax@~1.2.4:
   version "1.2.4"
   resolved "https://registry.yarnpkg.com/sax/-/sax-1.2.4.tgz#2816234e2378bddc4e5354fab5caa895df7100d9"