Ocelot-Social/docker-compose.override.yml
Commit d6a8de478b by Robert Schäfer: feat(backend): migrate to s3 (#8545)
## 🍰 Pullrequest
This PR migrates our assets to object storage via S3.

Before this PR is rolled out, the S3 credentials need to be configured in the respective infrastructure repository. The migration is implemented as a backend migration, i.e. I expect the `initContainer` to take a little longer on the first deployment, but after that it should be fine. If any errors occur, the migration can be repeated safely, since the disk volume is still there.
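For illustration only, here is a minimal sketch of what such a repeatable (idempotent) disk-to-S3 migration could look like. It assumes the AWS SDK v3 (`@aws-sdk/client-s3`) and Node's `fs` APIs; the uploads directory, key layout, and function name are hypothetical and not taken from the actual migration in this PR:

```typescript
// Hypothetical sketch, not the PR's actual migration code.
// Environment variable names match docker-compose.override.yml below.
import { createReadStream } from 'node:fs'
import { readdir } from 'node:fs/promises'
import { join, relative } from 'node:path'
import { S3Client, HeadObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3'

const s3 = new S3Client({
  region: process.env.AWS_REGION,
  endpoint: process.env.AWS_ENDPOINT,
  forcePathStyle: true, // path-style addressing, needed for MinIO
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
})
const Bucket = process.env.AWS_BUCKET! // "ocelot" in local development

// Recursively yield all file paths below `dir`.
async function* walk(dir: string): AsyncGenerator<string> {
  for (const entry of await readdir(dir, { withFileTypes: true })) {
    const full = join(dir, entry.name)
    if (entry.isDirectory()) yield* walk(full)
    else if (entry.isFile()) yield full
  }
}

// Upload every file on the uploads volume, skipping objects that already
// exist, so the migration can be re-run after a failure.
export async function migrateUploadsToS3(uploadsDir = '/app/public/uploads') {
  for await (const filePath of walk(uploadsDir)) {
    const Key = relative(uploadsDir, filePath)
    try {
      await s3.send(new HeadObjectCommand({ Bucket, Key })) // already migrated
    } catch {
      // HeadObject throws when the object is missing: upload it now.
      await s3.send(new PutObjectCommand({ Bucket, Key, Body: createReadStream(filePath) }))
    }
  }
}
```

Because existing objects are skipped, re-running the migration after a partial failure only uploads the remainder, which is what makes retrying the `initContainer` safe.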

### Issues
The backend currently has direct access to the disk for uploaded assets; this PR removes that dependency.

### Todo
- [ ] Configure backend environment variables in every infrastructure repo (see the variable sketch after this list)
- [ ] Remove kubernetes uploads volume in a future PR
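
As a sketch for the first todo item, these are the backend variables this PR introduces; only the variable names are taken from the compose override below. The placeholder values are hypothetical, each infrastructure repo must supply its own credentials, and the role of `S3_PUBLIC_GATEWAY` (the public base URL that serves the bucket anonymously) is inferred from the local MinIO setup:

```yaml
# Hypothetical production values; only the variable names come from this PR.
backend:
  environment:
    - AWS_ACCESS_KEY_ID=<access key from your S3 provider>
    - AWS_SECRET_ACCESS_KEY=<secret key from your S3 provider>
    - AWS_ENDPOINT=<S3 API endpoint URL>
    - AWS_REGION=<bucket region>
    - AWS_BUCKET=<bucket name>
    - S3_PUBLIC_GATEWAY=<public base URL serving the bucket read-only>
```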

Commits:

* refactor: follow @ulfgebhardt
  Here: https://github.com/Ocelot-Social-Community/Ocelot-Social/pull/8545#pullrequestreview-2846163417
  I don't know why the PR didn't include these changes already, I believe I made a mistake during rebase and lost the relevant commits.
* refactor: use typescript assertions
  I found this to be a better way to address this comment: https://github.com/Ocelot-Social-Community/Ocelot-Social/pull/8545/files#r2092766596
* add S3 credentials
* refactor: easier to remember credentials
  These credentials are for local development only.
* give init container necessary file access
* fix: wrong upload location on production
* refactor: follow @ulfgebhardt's review
  See: https://github.com/Ocelot-Social-Community/Ocelot-Social/pull/8545#pullrequestreview-2881626504
Committed: 2025-06-01 09:53:31 +00:00


```yaml
services:
  webapp:
    image: ghcr.io/ocelot-social-community/ocelot-social/webapp:local-development
    build:
      target: development
    environment:
      - NODE_ENV=development
      # - DEBUG=true
      - NUXT_BUILD=/tmp/nuxt # avoid file permission issues when running `rm -rf .nuxt/`
    volumes:
      - ./webapp:/app
  frontend:
    image: ghcr.io/ocelot-social-community/ocelot-social/frontend:local-development
    build:
      target: development
    environment:
      - NODE_ENV=development
    ports:
      # port required for npm run dev
      - 24678:24678
    volumes:
      - ./frontend:/app
  backend:
    image: ghcr.io/ocelot-social-community/ocelot-social/backend:local-development
    depends_on:
      - minio
      - minio-mc
    build:
      target: development
    environment:
      - NODE_ENV=development
      - DEBUG=true
      - SMTP_PORT=1025
      - SMTP_HOST=mailserver
      - AWS_ACCESS_KEY_ID=minio
      - AWS_SECRET_ACCESS_KEY=12341234
      - AWS_ENDPOINT=http://minio:9000
      - AWS_REGION=local
      - AWS_BUCKET=ocelot
      - S3_PUBLIC_GATEWAY=http://localhost:9000
    volumes:
      - ./backend:/app
  neo4j:
    ports:
      # Also expose the Neo4j query browser
      - 7474:7474
  mailserver:
    image: maildev/maildev
    container_name: mailserver
    ports:
      - 1080:1080
      - 1025:1025
  minio:
    image: quay.io/minio/minio
    ports:
      - 9000:9000
      - 9001:9001
    volumes:
      - minio_data:/data
    environment:
      - MINIO_ROOT_USER=minio
      - MINIO_ROOT_PASSWORD=12341234
    command: server /data --console-address ":9001"
  minio-mc:
    image: quay.io/minio/mc
    depends_on:
      - minio
    restart: on-failure
    volumes:
      - ./minio/readonly-policy.json:/tmp/readonly-policy.json
    entrypoint: >
      /bin/sh -c "
      sleep 5;
      /usr/bin/mc alias set dockerminio http://minio:9000 minio 12341234;
      /usr/bin/mc mb --ignore-existing dockerminio/ocelot;
      /usr/bin/mc anonymous set-json /tmp/readonly-policy.json dockerminio/ocelot;
      "
volumes:
  minio_data:
```
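
The `readonly-policy.json` file mounted into the `minio-mc` container is not shown in this diff. Based on the `mc anonymous set-json` call above, it is presumably a standard anonymous-download bucket policy along these lines (an assumption, not the repository's actual file):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": ["*"] },
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::ocelot/*"]
    }
  ]
}
```

With such a policy, anyone can read objects from the `ocelot` bucket through `S3_PUBLIC_GATEWAY` (`http://localhost:9000` locally) without credentials, while writes still require the MinIO access keys.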