Ok, so here are multiple issues:

1. In Cypher, `NOT NULL` returns `NULL`, not `FALSE`. If we want `FALSE` to be set during the database import, we should use `COALESCE` to fall back to the first non-null value (see the sketch after this list). See: https://neo4j.com/docs/cypher-manual/current/syntax/working-with-null/ and https://markhneedham.com/blog/2017/02/22/neo4j-null-values-even-work/
2. I removed the `disabled` and `deleted` checks on the commented counter. With `neo4j-graphql-js` it is not possible to filter on the join models for disabled or deleted items (at least not without a lot of complexity). Let's live with the fact that the list of commented posts will include those posts where the user has deleted their comment or where the user's comment was disabled. Those posts are displayed as "not available", so I think this is OK for now.
3. Decouple the pagination counters from the "commented", "shouted", etc. counters. The list of posts may differ from user to user: if the user has blocked you, the "posts" list will be empty; the "shouted" or "commented" lists will not have the posts of that author; and if you are a moderator, the lists will include disabled posts. So the counters are not in sync with the actual list coming from the backend. Therefore I implemented "fetch and check if resultSet < pageSize" instead of a global counter.
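Regarding 1., here is a minimal Cypher sketch of the difference — the `User` label and `disabled` property are just placeholder names for illustration:

```cypher
// NOT on a missing property yields NULL, not FALSE
MATCH (u:User) RETURN NOT u.disabled;

// coalesce() returns its first non-null argument, so the import
// can write an explicit FALSE instead of leaving NULL behind
MATCH (u:User) SET u.disabled = coalesce(u.disabled, false);
```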
## Legacy data migration
This setup is completely optional and only required if you have data on a server running our legacy code and you want to import that data. It will import the uploads folder and migrate a dump of the legacy MongoDB database into our new Neo4j graph database.
### Configure Maintenance-Worker Pod
Create a configmap with the connection details of your legacy server:
```bash
$ kubectl create configmap maintenance-worker \
  --namespace=human-connection \
  --from-literal=SSH_USERNAME=someuser \
  --from-literal=SSH_HOST=yourhost \
  --from-literal=MONGODB_USERNAME=hc-api \
  --from-literal=MONGODB_PASSWORD=secretpassword \
  --from-literal=MONGODB_AUTH_DB=hc_api \
  --from-literal=MONGODB_DATABASE=hc_api \
  --from-literal=UPLOADS_DIRECTORY=/var/www/api/uploads
```
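To double-check that everything arrived as expected, you can inspect the configmap:

```bash
$ kubectl --namespace=human-connection get configmap maintenance-worker --output=yaml
```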
Create a secret with your public and private SSH keys. As the Kubernetes documentation points out, you should be careful with your SSH keys: anyone with access to your cluster will also have access to them. Better create a new key pair with `ssh-keygen` and copy the public key to your legacy server with `ssh-copy-id`, as sketched below.
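A minimal sketch, reusing the `someuser` and `yourhost` placeholders from the configmap above:

```bash
# generate a dedicated key pair just for the migration
$ ssh-keygen -f /path/to/.ssh/id_rsa
# authorize the new public key on the legacy server
$ ssh-copy-id -i /path/to/.ssh/id_rsa.pub someuser@yourhost
```

Then create the secret from those files: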
```bash
$ kubectl create secret generic ssh-keys \
  --namespace=human-connection \
  --from-file=id_rsa=/path/to/.ssh/id_rsa \
  --from-file=id_rsa.pub=/path/to/.ssh/id_rsa.pub \
  --from-file=known_hosts=/path/to/.ssh/known_hosts
```
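`kubectl describe` shows only the key names and byte counts, not the actual values, so it is a safe way to verify the secret:

```bash
$ kubectl --namespace=human-connection describe secret ssh-keys
```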
### Deploy a Temporary Maintenance-Worker Pod
Bring the application into maintenance mode.
{% hint style="info" %} TODO: implement maintenance mode {% endhint %}
Then temporarily delete the backend and database deployments:
```bash
$ kubectl --namespace=human-connection get deployments
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
nitro-backend   1/1     1            1           3d11h
nitro-neo4j     1/1     1            1           3d11h
nitro-web       2/2     2            2           73d
$ kubectl --namespace=human-connection delete deployment nitro-neo4j
deployment.extensions "nitro-neo4j" deleted
$ kubectl --namespace=human-connection delete deployment nitro-backend
deployment.extensions "nitro-backend" deleted
```
Deploy a one-time maintenance-worker pod:
```bash
# in deployment/legacy-migration/
$ kubectl apply -f maintenance-worker.yaml
pod/nitro-maintenance-worker created
```
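If you are curious what such a one-off pod roughly looks like, here is a minimal sketch — the actual manifest lives in `deployment/legacy-migration/maintenance-worker.yaml`, and the image name below is only a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nitro-maintenance-worker
  namespace: human-connection
spec:
  restartPolicy: Never # one-time pod, do not restart after exit
  containers:
    - name: maintenance-worker
      image: humanconnection/maintenance-worker # placeholder image name
      envFrom:
        - configMapRef:
            name: maintenance-worker # connection details from above
      volumeMounts:
        - name: ssh-keys
          mountPath: /root/.ssh # mount the ssh-keys secret read-only
          readOnly: true
  volumes:
    - name: ssh-keys
      secret:
        secretName: ssh-keys
        defaultMode: 0400 # ssh refuses world-readable private keys
```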
Import the legacy database and uploads:
```bash
$ kubectl --namespace=human-connection exec -it nitro-maintenance-worker bash
$ import_legacy_db
$ import_legacy_uploads
$ exit
```
Delete the pod when you're done:
```bash
$ kubectl --namespace=human-connection delete pod nitro-maintenance-worker
```
Oh, and of course you have to get those deleted deployments back. One way of doing it would be:
```bash
# in folder deployment/
$ kubectl apply -f human-connection/deployment-backend.yaml -f human-connection/deployment-neo4j.yaml
```
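Afterwards you can verify that both deployments are up again with the same command as before:

```bash
$ kubectl --namespace=human-connection get deployments
```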