36 Commits

Author SHA1 Message Date
Robert Schäfer
9a9118c721 Merge remote-tracking branch 'origin/master' into 2019/kw22/alpha_data_import_status_schema_split 2019-05-31 16:26:06 +02:00
Robert Schäfer
58add8fc5f Install envsubst in Dockerfile
@ulfgebhardt please set up docker on your machine or a remote machine.

Installing `envsubst` on alpine fails with circular dependencies
(awkward). This repo has a solution:
https://github.com/cirocosta/alpine-envsubst/blob/master/Dockerfile#L6
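The workaround from the linked Dockerfile can be sketched like this (a sketch, not verbatim from this repo; the Alpine tag is an assumption): keep `libintl` as the runtime dependency, install `gettext` only long enough to copy the `envsubst` binary out, then remove it again.

```dockerfile
# Sketch of the alpine-envsubst workaround: libintl stays as runtime dep,
# gettext is installed temporarily just to extract the envsubst binary.
FROM alpine:3.9
RUN apk add --no-cache libintl \
 && apk add --no-cache --virtual .gettext gettext \
 && cp /usr/bin/envsubst /usr/local/bin/envsubst \
 && apk del .gettext
```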
2019-05-31 16:26:00 +02:00
Robert Schäfer
10864e7d18 Remove command line arguments -u and -p
@ulfgebhardt: The docs at `man cypher-shell` say that you can pass
`NEO4J_USERNAME` and `NEO4J_PASSWORD` as environment variables, so the
command line arguments are obsolete here.
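For illustration, the flag-free invocation would look like this (a sketch; the password value and the `import.cql` filename are placeholders, not from this repo):

```shell
# cypher-shell reads credentials from these variables (see man cypher-shell),
# which makes -u/-p on the command line unnecessary.
export NEO4J_USERNAME=neo4j
export NEO4J_PASSWORD=secret   # placeholder value
# before: cat import.cql | cypher-shell -u "$NEO4J_USERNAME" -p "$NEO4J_PASSWORD"
# after:  cat import.cql | cypher-shell
echo "credentials set for $NEO4J_USERNAME"
```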
2019-05-31 16:26:00 +02:00
37be5481c0
remaining legacy table descriptions and dummy import scripts 2019-05-29 20:43:58 +02:00
355205028f
removed debug output 2019-05-29 20:04:03 +02:00
271af7dde2
keep an index file of already imported files and do not re-import them; add invites, notifications, emotions and organisations scripts and descriptions 2019-05-29 20:02:30 +02:00
968578f10a
delete cql files 2019-05-29 17:30:46 +02:00
816fbdf7cd
fixed `slug` being required - it was not required in Nitro 2019-05-29 16:41:27 +02:00
Robert Schäfer
58224381a8 Add missing environment var to maintenance-worker 2019-05-29 16:12:03 +02:00
Robert Schäfer
a83aad3f60 Fix wrong path in Dockerfile 2019-05-29 16:00:45 +02:00
67595f6400
missing change for windows comment 2019-05-29 15:45:32 +02:00
7e44667dd1
modified .env file for neo4j import since it had problems with file locks on Windows (cypher-shell) 2019-05-29 15:45:02 +02:00
6f0447515a
- fixed several errors handling import
- split graphql schema into parts
- added a list of which data is imported and which is not
2019-05-29 15:37:28 +02:00
Robert Schäfer
e1a113e7e4 Fix wrong mountpath
We're saving the files to /uploads. If the maintenance-worker does not
mount the uploads persistent volume there, we don't get persistent
files.
2019-05-29 15:16:08 +02:00
Robert Schäfer
2c8dcaa592 Yet another typo 2019-05-29 15:08:14 +02:00
Robert Schäfer
95fe115198 Fix typo 2019-05-29 15:01:56 +02:00
860a0d41d0
Merge pull request #697 from Human-Connection/2019/kw22/alpha_data_import
🍰 2019/kw22/alpha_data_import
2019-05-29 00:43:10 +02:00
4ccfe3822c
added existing data imports 2019-05-29 00:20:43 +02:00
Robert Schäfer
ab7a4ffc3e
Fix duplicate tags by using the name as the id
@ulfgebhardt: I wondered about the list of tags after importing the
legacy db. It seems each tag has at most one contribution. I guess it's
because we create a unique id for each tag, so two tags with the same
`name` (e.g. `#hashtag` and `#hashtag`) are not de-duplicated.

I'm currently sitting on the train and cannot run the data import myself - could
you double-check?
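The idea of this commit, sketched in Cypher (the `$tags` parameter and property names are assumptions, not the repo's actual import statement):

```cypher
// Using the tag's name as its id lets MERGE match an existing tag with the
// same name instead of creating a duplicate node under a fresh unique id.
UNWIND $tags AS tag
MERGE (t:Tag {id: tag.name})
SET t.name = tag.name
```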
2019-05-28 22:41:51 +02:00
014104e055
fixed data import from alpha data:
- include all collections (commented out)
- refactored neo4j import script
- use of .env file for (additional) configurations / configuration overrides
- lots of fiddling with neo4j cql files and cypher-shell
2019-05-28 18:53:05 +02:00
Robert Schäfer
f9ac22e560 Add executable UNIX permissions to export script
I encourage @ulfgebhardt to run the following command once:

```
SH_USERNAME=ulf SSH_HOST=***** MONGODB_USERNAME='hc-api' MONGODB_PASSWORD=***** MONGODB_DATABASE=hc_api MONGODB_AUTH_DB=admin UPLOADS_DIRECTORY=/data/api/uploads docker-compose -f docker-compose.maintenance.yml up --build
```

Once you're done with everything: you don't have to run docker for
development, but this procedure would ensure the docker environment works
as expected.
2019-05-28 18:14:21 +02:00
4b5138880d
missing default .env file 2019-05-28 17:07:54 +02:00
301e7fa60c
fixed data export for alpha:
- include all collections
- refactored mongodb export script
- renamed to export
- use of .env file for (additional) configurations / configuration overrides
2019-05-28 17:03:44 +02:00
Robert Schäfer
dfef37b3f6 Prevent argument list too long error 2019-05-08 00:46:07 +02:00
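For context: the usual way to avoid "argument list too long" is to feed file names through `xargs` in batches instead of expanding one huge glob into a single argument list (an illustration, not the repo's actual script):

```shell
# A glob like `some-command *.json` can exceed the kernel's ARG_MAX limit.
# xargs splits the input into batches that each stay under the limit:
printf '%s\n' one.json two.json three.json | xargs -n 2 echo batch:
# prints "batch: one.json two.json" then "batch: three.json"
```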
Robert Schäfer
bcc2c4dbbb Configure scripts and docker-compose.yml
After endless trial and error I found the way to share volumes between
multiple docker-compose.yml files: you have to place those files in the same
folder. The import scripts must also be adapted.
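Why the same folder works: docker-compose prefixes named volumes with the project name, which defaults to the directory name, so compose files in the same directory resolve a named volume to the same underlying volume. A sketch (service and volume names are assumptions):

```yaml
# docker-compose.maintenance.yml, placed next to docker-compose.yml so that
# both resolve the named volume below to <same-project-name>_uploads.
version: "3"
services:
  maintenance-worker:
    build: .
    volumes:
      - uploads:/uploads
volumes:
  uploads:
```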
2019-05-07 21:48:09 +02:00
Robert Schäfer
5771efc920 Add binary idle to keep container spinning 2019-05-07 19:17:42 +02:00
Robert Schäfer
7c139bed1a Remove obsolete binary to create RSA keys
We can supply files on kubernetes through secrets
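A minimal sketch of supplying such a key file via a Secret (all names here are assumptions):

```yaml
# Create the Secret from a local key file:
#   kubectl create secret generic rsa-keys --from-file=private.pem
# Then mount it read-only into the pod spec:
volumes:
  - name: rsa-keys
    secret:
      secretName: rsa-keys
containers:
  - name: backend
    volumeMounts:
      - name: rsa-keys
        mountPath: /run/secrets/rsa
        readOnly: true
```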
2019-05-07 19:17:21 +02:00
Robert Schäfer
b5d91cffef Implement @appinteractive's suggestions
This:
https://github.com/Human-Connection/Human-Connection/pull/529#discussion_r280065855
2019-05-07 19:15:47 +02:00
Robert Schäfer
497f77ae10 Breakthrough! Use split+indices for performance
@appinteractive thanks for pointing out `split`. You just saved me some
days of work to refactor the import statements to use CSV instead of
JSON files.
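`split` is the Cypher string function that turns one delimited field into a list, which is what makes batching the import possible without converting the JSON files to CSV (how exactly the import statements use it is not shown here; the values are example ids):

```cypher
// split() breaks a delimited string into a list of substrings:
RETURN split("5b1693da;5bbf49eb;5c0aa1f2", ";") AS ids
// ids = ["5b1693da", "5bbf49eb", "5c0aa1f2"]
```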

@Tirokk when I enter `:schema` in Neo4J web UI, I see the following:
```
:schema
Indexes
   ON :Badge(id) ONLINE
   ON :Category(id) ONLINE
   ON :Comment(id) ONLINE
   ON :Post(id) ONLINE
   ON :Tag(id) ONLINE
   ON :User(id) ONLINE

No constraints
```

So I temporarily removed the unique constraints on `slug` and added
plain indices on `id` for all relevant node types. Unfortunately we cannot
omit the `:Label` - neo4j does not allow that - so I had to add
indices for all known node labels instead.
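The schema change described above corresponds to statements like these (Neo4j 3.x syntax; a sketch reconstructed from the `:schema` output above):

```cypher
// drop the unique constraint that blocked the import...
DROP CONSTRAINT ON (u:User) ASSERT u.slug IS UNIQUE;
// ...and add plain (non-unique) indices on id for every known label:
CREATE INDEX ON :Badge(id);
CREATE INDEX ON :Category(id);
CREATE INDEX ON :Comment(id);
CREATE INDEX ON :Post(id);
CREATE INDEX ON :Tag(id);
CREATE INDEX ON :User(id);
```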

With indices the import finishes in:
```
Time elapsed: 351 seconds
```
🎉

@appinteractive when I keep the unique indices on slug, I get an error
during import that a node with label `:User` and slug `tobias` already
exists, i.e. we have unique constraint violations in our production data.

@mattwr18 @ulfgebhardt @ogerly I started the application on my machine
on the production data and it turns out that the index page
http://localhost:3000/ takes way too long. Visiting my profile page at
http://localhost:3000/profile/5b1693daf850c11207fa6109/robert-schafer
is fine, though. Even pagination works. When I visit a post page with
not too many comments, the application is fast enough, too:
http://localhost:3000/post/5bbf49ebc428ea001c7ca89c/neues-video-format-human-connection-tech-news
2019-05-01 12:25:28 +02:00
Robert Schäfer
43ac10f7d7 Nice catch @Tirokk 2019-04-25 11:43:43 +02:00
Robert Schäfer
c350fb37a9 Fine tuning documentation 2019-04-24 23:55:49 +02:00
Robert Schäfer
86c0307ddb Reclaim volume claims for maintenance worker 2019-04-24 23:46:06 +02:00
Robert Schäfer
31cff10206 Clean up kubernetes config for maintenance-worker
We're going in the direction of removing the backend and database
deployments, accessing `/uploads` and `/data` through the maintenance
worker pod and carrying out tasks from there.
2019-04-24 01:10:35 +02:00
Robert Schäfer
6ed5ad58d5 Merge maintenance with deployment 2019-04-24 00:27:58 +02:00
Robert Schäfer
77ac0ce7a6 Rename db-migration-worker to maintenance-worker 2019-04-23 23:12:00 +02:00
Robert Schäfer
4fe89e88ac Merging deployment to master 2019-03-20 21:07:57 +01:00