DigitalOcean does not support shared directories, so we have to upload
the images to `/uploads` via `kubectl cp` or something similar.
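For instance (pod and namespace names are placeholders in this sketch):

```sh
# Copy the local uploads directory into the running backend pod.
# "backend-pod" and "default" are assumed names, not our actual deployment.
kubectl cp ./uploads default/backend-pod:/uploads
```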
Likewise, it is not possible to share the exported mongodb .json files
with the neo4j container. Therefore, let's install `cypher-shell`, which
is included in the `neo4j` package, to open a neo4j connection directly
and bulk-import the data.
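Something along these lines should work (assuming a Debian-based image with the Neo4j apt repository already configured; host and credentials are placeholders):

```sh
# cypher-shell ships with the neo4j package.
apt-get update && apt-get install -y neo4j
# Open a direct bolt connection to the neo4j container and run a smoke test.
cypher-shell -a bolt://neo4j:7687 -u neo4j -p secret "RETURN 1;"
```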
@appinteractive it's troublesome to add the SSH private key via an
environment variable. You have to convert newlines to spaces and then
convert them back, which I think is error-prone. I hope we can transfer
the private key file onto our deployed container later on.
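To illustrate the round-trip I mean (a sketch of the problem, not a recommendation):

```sh
# Cram the key into a single-line environment variable by turning newlines into spaces.
export SSH_PRIVATE_KEY="$(tr '\n' ' ' < ~/.ssh/id_rsa)"
# Inside the container, turn spaces back into newlines. This is the fragile part:
# the PEM header/footer lines contain real spaces, which get mangled as well.
echo "$SSH_PRIVATE_KEY" | tr ' ' '\n' > /root/.ssh/id_rsa
chmod 600 /root/.ssh/id_rsa
```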
The idea is to dump the remote database via SSH, restore it into the
local mongodb, export the collections as .json files to a shared volume,
and import them with `cypher-shell`.
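Roughly like this (host, database, collection, and credentials are all made-up names, and the Cypher depends on the actual data model):

```sh
# 1. Dump the remote database over SSH and restore it into the local mongodb.
ssh user@remote.example.com 'mongodump --db api --archive' > api.archive
mongorestore --archive=api.archive

# 2. Export a collection as JSON into the shared volume.
mongoexport --db api --collection users --jsonArray --out /shared/users.json

# 3. Import the JSON via cypher-shell; apoc.load.json requires the APOC plugin
#    and that the file is readable from neo4j's import directory.
cypher-shell -a bolt://neo4j:7687 -u neo4j -p secret \
  "CALL apoc.load.json('file:///users.json') YIELD value
   MERGE (u:User {id: value._id}) SET u.name = value.name;"
```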
I was wrong. It's not the container name; I simply forgot to give the
network the very same name as the one specified in the frontend's
docker-compose.yml.
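To double-check that both compose projects now share one network (`hc-network` is an assumed name standing in for whatever the frontend's docker-compose.yml declares):

```sh
# List networks to see what docker-compose actually created.
docker network ls
# Confirm that both the frontend and backend containers are attached.
docker network inspect hc-network --format '{{range .Containers}}{{.Name}} {{end}}'
```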
After we proxy API requests through the server-side-rendered frontend,
there is no need to use xip.io anymore. However, if you "join" services
to a named network at arbitrary times, docker-compose's DNS only works
if you assign a fixed name to the container. Thus I kept `backend` as
the container name.
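A quick way to verify the container-name DNS from inside the network (network name assumed as above):

```sh
# Resolve the backend by its fixed container name from a throwaway container.
docker run --rm --network hc-network alpine ping -c 1 backend
```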