Ocelot-Social/docker-compose.test.yml
Robert Schäfer d6a8de478b
feat(backend): migrate to s3 (#8545)
## 🍰 Pullrequest
This will migrate our assets to an object storage via S3.

Before this PR is rolled out, the S3 credentials need to be configured in the respective infrastructure repository. The migration is implemented as a backend migration, i.e. I expect the `initContainer` to take a little longer, but after that it should be fine. If any errors occur, the migration can be repeated, since the disk volume is still there.
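
The migration code itself is not part of this compose file; as a rough sketch only (the uploads directory, function name, and key layout are hypothetical, while the `AWS_*` variables are the ones wired up for the `backend` service below), a repeatable disk-to-S3 copy with the AWS SDK v3 could look like this:

```ts
// Hypothetical sketch, not the actual Ocelot-Social migration code.
// Uses the AWS_* variables the backend service receives in docker-compose.test.yml.
import { readdir, readFile } from 'node:fs/promises'
import { join } from 'node:path'
import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3'

const s3 = new S3Client({
  region: process.env.AWS_REGION, // e.g. 'local'
  endpoint: process.env.AWS_ENDPOINT, // e.g. 'http://minio:9000'
  forcePathStyle: true, // needed for MinIO-style endpoints
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID ?? '',
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY ?? '',
  },
})

// Upload every file from the (assumed flat) uploads volume into the bucket.
// Re-running overwrites the same keys, so a failed init container can simply retry.
export async function migrateUploads(uploadsDir = '/app/public/uploads') {
  for (const fileName of await readdir(uploadsDir)) {
    await s3.send(
      new PutObjectCommand({
        Bucket: process.env.AWS_BUCKET, // 'ocelot' in the test setup
        Key: `uploads/${fileName}`, // hypothetical key layout
        Body: await readFile(join(uploadsDir, fileName)),
      }),
    )
  }
}
```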

### Issues
The backend having direct access to the disk.

### Todo
- [ ] Configure backend environment variables in every infrastructure repo (see the sketch after this list)
- [ ] Remove kubernetes uploads volume in a future PR
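
For the first todo, the variable names to configure are the ones the `backend` service receives in the compose file below. A hedged sketch of a startup check the infrastructure repos would have to satisfy (hypothetical helper, not the actual backend code):

```ts
// Sketch of a startup check for the new configuration, assuming the backend
// reads these variables from process.env; names match docker-compose.test.yml.
const requiredS3Env = [
  'AWS_ACCESS_KEY_ID',
  'AWS_SECRET_ACCESS_KEY',
  'AWS_ENDPOINT',
  'AWS_REGION',
  'AWS_BUCKET',
]

export function assertS3Config(): void {
  const missing = requiredS3Env.filter((name) => !process.env[name])
  if (missing.length > 0) {
    throw new Error(`Missing S3 configuration: ${missing.join(', ')}`)
  }
}
```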

Commits:

* refactor: follow @ulfgebhardt
  Here: https://github.com/Ocelot-Social-Community/Ocelot-Social/pull/8545#pullrequestreview-2846163417
  I don't know why the PR didn't include these changes already; I believe I made a mistake during a rebase and lost the relevant commits.
* refactor: use typescript assertions
  I found this a better way to address this comment: https://github.com/Ocelot-Social-Community/Ocelot-Social/pull/8545/files#r2092766596
* add S3 credentials
* refactor: easier to remember credentials
  It's for local development only
* give init container necessary file access
* fix: wrong upload location on production
* refactor: follow @ulfgebhardt's review
  See: https://github.com/Ocelot-Social-Community/Ocelot-Social/pull/8545#pullrequestreview-2881626504
2025-06-01 09:53:31 +00:00


services:
  webapp:
    # name the image so that it cannot be found in a DockerHub repository, otherwise it will not be built locally from the 'dockerfile' but pulled from there
    image: ghcr.io/ocelot-social-community/ocelot-social/webapp:test
    build:
      target: test
    environment:
      - NODE_ENV="test"
    volumes:
      - ./coverage:/app/coverage
  backend:
    # name the image so that it cannot be found in a DockerHub repository, otherwise it will not be built locally from the 'dockerfile' but pulled from there
    image: ghcr.io/ocelot-social-community/ocelot-social/backend:test
    depends_on:
      - minio
      - minio-mc
      - mailserver
    build:
      target: test
    environment:
      - NODE_ENV="test"
      - AWS_ACCESS_KEY_ID=minio
      - AWS_SECRET_ACCESS_KEY=12341234
      - AWS_ENDPOINT=http://minio:9000
      - AWS_REGION=local
      - AWS_BUCKET=ocelot
    volumes:
      - ./coverage:/app/coverage
  maintenance:
    # name the image so that it cannot be found in a DockerHub repository, otherwise it will not be built locally from the 'dockerfile' but pulled from there
    image: ghcr.io/ocelot-social-community/ocelot-social/maintenance:test
  neo4j:
    # name the image so that it cannot be found in a DockerHub repository, otherwise it will not be built locally from the 'dockerfile' but pulled from there
    image: ghcr.io/ocelot-social-community/ocelot-social/neo4j-community:test
    #environment:
    #  - NEO4J_dbms_connector_bolt_enabled=true
    #  - NEO4J_dbms_connector_bolt_tls__level=OPTIONAL
    #  - NEO4J_dbms_connector_bolt_listen__address=0.0.0.0:7687
    #  - NEO4J_auth=none
    #  - NEO4J_dbms_connectors_default__listen__address=0.0.0.0
    #  - NEO4J_dbms_connector_http_listen__address=0.0.0.0:7474
    #  - NEO4J_dbms_connector_https_listen__address=0.0.0.0:7473
  mailserver:
    image: maildev/maildev
    ports:
      - 1080:1080
      - 1025:1025
  minio:
    image: quay.io/minio/minio
    ports:
      - 9000:9000
      - 9001:9001
    volumes:
      - minio_data:/data
    environment:
      - MINIO_ROOT_USER=minio
      - MINIO_ROOT_PASSWORD=12341234
    command: server /data --console-address ":9001"
  minio-mc:
    image: quay.io/minio/mc
    depends_on:
      - minio
    restart: on-failure
    volumes:
      - ./minio/readonly-policy.json:/tmp/readonly-policy.json
    # wait for minio, create the test bucket and allow anonymous read access to it
    entrypoint: >
      /bin/sh -c "
      sleep 5;
      /usr/bin/mc alias set dockerminio http://minio:9000 minio 12341234;
      /usr/bin/mc mb --ignore-existing dockerminio/ocelot;
      /usr/bin/mc anonymous set-json /tmp/readonly-policy.json dockerminio/ocelot;
      "
volumes:
  minio_data: