mirror of https://github.com/IT4Change/Ocelot-Social.git
synced 2025-12-13 07:45:56 +00:00
Merge remote-tracking branch 'origin/master' into dependabot/npm_and_yarn/backend/neo4j-graphql-js-2.6.0
This commit is contained in:
commit 8111bc2190
@@ -51,7 +51,19 @@ But what do we do when waiting for merge into master (wanting to keep PRs small
* solutions
  * 1) put the 2nd PR into the branch that the first PR is hitting - but this requires an update after merging
  * 2) prefer to leave the existing PR until it can be reviewed, and instead go and work on some other part of the codebase that is not impacted by the first PR

### Code Review

* GitHub setting is in place - at least one review is required to merge
  - in principle anyone (who is not the PR owner) can review
  - but often it will be the core developers (Robert, Ulf, Greg, Wolfgang?)
  - once there is a review, and presuming no requested changes, the PR opener can merge

* CI/tests
  - the CI needs to pass
  - linting <-- autofix?
  - tests (unit, feature) (backend, frontend)
  - code coverage

## Notes

question: when you want to pick a task (find out priority) - is it in Discord? is it in the AV Slack? --> Robert says you can always ask in Discord - group channels are the best
@@ -27,7 +27,10 @@
* [HTTPS](deployment/digital-ocean/https/README.md)
* [Human Connection](deployment/human-connection/README.md)
* [Volumes](deployment/volumes/README.md)
* [Neo4J DB Backup](deployment/backup.md)
* [Neo4J Offline-Backups](deployment/volumes/neo4j-offline-backup/README.md)
* [Volume Snapshots](deployment/volumes/volume-snapshots/README.md)
* [Reclaim Policy](deployment/volumes/reclaim-policy/README.md)
* [Velero](deployment/volumes/velero/README.md)
* [Legacy Migration](deployment/legacy-migration/README.md)
* [Feature Specification](cypress/features.md)
* [Code of conduct](CODE_OF_CONDUCT.md)
deployment/digital-ocean/https/.gitignore (vendored, new file, 2 lines)
@@ -0,0 +1,2 @@
ingress.yaml
issuer.yaml
@@ -12,20 +12,31 @@ $ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/relea
$ helm install --name cert-manager --namespace cert-manager stable/cert-manager
```
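
Before creating issuers, you may want to confirm that cert-manager actually came up. This is only a hedged check, assuming the `cert-manager` namespace used in the helm command above:

```sh
# the cert-manager pods should reach status Running
kubectl get pods --namespace cert-manager
```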

Create letsencrypt issuers. _Change the email address_ in these files before running this command.
## Create Letsencrypt Issuers and Ingress Services

Copy the configuration templates and change the files according to your needs.

```bash
# in folder deployment/digital-ocean/https/
$ kubectl apply -f issuer.yaml
cp templates/issuer.template.yaml ./issuer.yaml
cp templates/ingress.template.yaml ./ingress.yaml
```

Create an ingress service in namespace `human-connection`. _Change the domain name_ according to your needs:
At the very least, **change the email addresses** in `issuer.yaml`. You will most likely also want
to _change the domain name_ in `ingress.yaml`.
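
Optionally, you can validate the edited files without touching the cluster. This is only a sketch; kubectl releases older than 1.18 use the plain `--dry-run` flag instead of `--dry-run=client`:

```sh
# in folder deployment/digital-ocean/https/
# parse and validate the edited manifests client-side, creating nothing
kubectl apply --dry-run=client -f issuer.yaml -f ingress.yaml
```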

Once you are done, apply the configuration:

```bash
# in folder deployment/digital-ocean/https/
$ kubectl apply -f ingress.yaml
$ kubectl apply -f .
```

By now, your cluster should have an external IP address assigned. If you visit
your dashboard, this is what it should look like:



Check that the ingress server is working correctly:

```bash
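# a hedged check, assuming DNS for your domain already points at the external
# IP above; replace <your-domain> with the host configured in ingress.yaml
curl -I http://<your-domain>/
```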

deployment/digital-ocean/https/ip-address.png (new binary file, 226 KiB; not shown)
@@ -9,13 +9,21 @@ just apply our provided configuration files to your cluster.

Copy our provided templates:

```bash
$ cp secrets.template.yaml human-connection/secrets.yaml
$ cp configmap.template.yaml human-connection/configmap.yaml
# in folder deployment/human-connection/
$ cp templates/secrets.template.yaml ./secrets.yaml
$ cp templates/configmap.template.yaml ./configmap.yaml
```

Change the `configmap.yaml` as needed; all variables will be available as
environment variables in your deployed kubernetes pods.

You probably want to change this environment variable to your actual domain:

```
# in configmap.yaml
CLIENT_URI: "https://nitro-staging.human-connection.org"
```

If you want to edit secrets, you have to `base64` encode them. See the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually).

```bash
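# a minimal, hypothetical example; "my-database-password" is a placeholder value
echo -n "my-database-password" | base64
# prints bXktZGF0YWJhc2UtcGFzc3dvcmQ= which you then paste into secrets.yaml
```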

@@ -30,7 +38,8 @@ your deployed kubernetes pods.

## Create a namespace

```bash
$ kubectl apply -f namespace-human-connection.yaml
# in folder deployment/human-connection/
$ kubectl apply -f namespace.yaml
```
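
Without the dashboard, a quick hedged check from the command line:

```sh
# the namespace should be listed with status Active
kubectl get namespace human-connection
```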

If you have a [kubernetes dashboard](../digital-ocean/dashboard/README.md)

@@ -48,7 +57,7 @@ persistent volumes once before you apply the configuration.

## Apply the configuration

```bash
# in folder deployment/
$ kubectl apply -f human-connection/
```
@@ -17,6 +17,8 @@
      human-connection.org/selector: deployment-human-connection-backend
  template:
    metadata:
      annotations:
        backup.velero.io/backup-volumes: uploads
      labels:
        human-connection.org/commit: COMMIT
        human-connection.org/selector: deployment-human-connection-backend
@@ -15,6 +15,8 @@
      human-connection.org/selector: deployment-human-connection-neo4j
  template:
    metadata:
      annotations:
        backup.velero.io/backup-volumes: neo4j-data
      labels:
        human-connection.org/selector: deployment-human-connection-neo4j
      name: nitro-neo4j
deployment/human-connection/namespace.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
kind: Namespace
apiVersion: v1
metadata:
  name: human-connection
  labels:
    name: human-connection
@@ -7,7 +7,13 @@ At the moment, the application needs two persistent volumes:

As a matter of precaution, the persistent volume claims that set up these volumes
live in a separate folder. You don't want to accidentally lose all your data in
your database by running `kubectl delete -f human-connection/`, do you?
your database by running

```sh
kubectl delete -f human-connection/
```

or do you?

## Create Persistent Volume Claims

@@ -19,24 +25,12 @@ persistentvolumeclaim/neo4j-data-claim created
persistentvolumeclaim/uploads-claim created
```
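
A hedged way to double-check the claims from the command line:

```sh
# both claims should eventually reach status Bound
kubectl --namespace=human-connection get pvc
```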

## Change Reclaim Policy
## Backup and Restore

We recommend changing the `ReclaimPolicy`, so that if you delete the persistent
volume claims, the associated volumes will be released, not deleted:

```sh
$ kubectl --namespace=human-connection get pv

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS       REASON   AGE
pvc-bd02a715-66d0-11e9-be52-ba9c337f4551   1Gi        RWO            Delete           Bound    human-connection/neo4j-data-claim   do-block-storage            4m24s
pvc-bd208086-66d0-11e9-be52-ba9c337f4551   2Gi        RWO            Delete           Bound    human-connection/uploads-claim      do-block-storage            4m12s
```

Get the volume id from above, then change the `ReclaimPolicy` with:

```sh
kubectl patch pv <VOLUME-ID> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# in the above example
kubectl patch pv pvc-bd02a715-66d0-11e9-be52-ba9c337f4551 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl patch pv pvc-bd208086-66d0-11e9-be52-ba9c337f4551 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```

We tested a couple of options for doing disaster recovery in kubernetes. First,
there is the [offline backup strategy](./neo4j-offline-backup/README.md) of the
community edition of Neo4J, which you can also run on a local installation.
Kubernetes also offers so-called [volume snapshots](./volume-snapshots/README.md).
Changing the [reclaim policy](./reclaim-policy/README.md) of your persistent
volumes might be an additional safety measure. Finally, there is also a
kubernetes-specific disaster recovery tool called [Velero](./velero/README.md).
@@ -23,7 +23,10 @@ So, all we have to do is edit the kubernetes deployment of our Neo4J database
and set a custom `command` every time we have to carry out tasks like backup,
restore, seed etc.
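
As a hedged illustration of that step (the deployment name `nitro-neo4j` is taken from the manifests in this repository, and the idle command is just one possible choice):

```sh
# open the deployment in $EDITOR and temporarily override the container
# command, e.g. with ["tail", "-f", "/dev/null"], to keep the pod idle
kubectl --namespace=human-connection edit deployment nitro-neo4j
```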

{% hint style="info" %} TODO: implement maintenance mode {% endhint %}
{% hint style="info" %}
TODO: implement maintenance mode
{% endhint %}

First bring the application into maintenance mode to ensure there are no
database connections left and nobody can access the application.
deployment/volumes/reclaim-policy/README.md (new file, 30 lines)
@@ -0,0 +1,30 @@
# Change Reclaim Policy

We recommend changing the `ReclaimPolicy` so that, if you delete the persistent
volume claims, the associated volumes will be released, not deleted.

This procedure is optional and an additional security measure. It might prevent
you from losing data if you accidentally delete the namespace and the persistent
volumes along with it.

```sh
$ kubectl --namespace=human-connection get pv

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS       REASON   AGE
pvc-bd02a715-66d0-11e9-be52-ba9c337f4551   1Gi        RWO            Delete           Bound    human-connection/neo4j-data-claim   do-block-storage            4m24s
pvc-bd208086-66d0-11e9-be52-ba9c337f4551   2Gi        RWO            Delete           Bound    human-connection/uploads-claim      do-block-storage            4m12s
```

Get the volume id from above, then change the `ReclaimPolicy` with:

```sh
kubectl patch pv <VOLUME-ID> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# in the above example
kubectl patch pv pvc-bd02a715-66d0-11e9-be52-ba9c337f4551 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl patch pv pvc-bd208086-66d0-11e9-be52-ba9c337f4551 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```

Given that you changed the reclaim policy as described above, you should be able
to create a persistent volume claim based on a volume snapshot's content. See
the general kubernetes documentation [here](https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/)
and our specific documentation for snapshots [here](../snapshot/README.md).
deployment/volumes/velero/README.md (new file, 112 lines)
@@ -0,0 +1,112 @@
# Velero

{% hint style="danger" %}
I tried Velero and it did not work reliably all the time. Sometimes the
kubernetes cluster crashes during recovery or data is not fully recovered.

Feel free to test it out and update this documentation once you feel that it's
working reliably. It is very likely that Digital Ocean had some bugs when I
tried out the steps below.
{% endhint %}

We use [velero](https://github.com/heptio/velero) for on-premise backups. We
tested version `v0.11.0`; you can find the documentation [here](https://heptio.github.io/velero/v0.11.0/).

Our kubernetes configuration adds some annotations to pods. The annotations
define the important persistent volumes that need to be backed up. Velero will
pick them up and store the volumes in the same cluster but in another namespace,
`velero`.

## Prerequisites

You have to install the `velero` binary on your computer and get a tarball of
the latest release. We use `v0.11.0`, so visit the
[release](https://github.com/heptio/velero/releases/tag/v0.11.0) page, then
download and extract e.g. [velero-v0.11.0-linux-amd64.tar.gz](https://github.com/heptio/velero/releases/download/v0.11.0/velero-v0.11.0-linux-amd64.tar.gz).
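
A sketch of that download step on Linux, assuming the archive unpacks the `velero` binary and the `config/` folder into the current directory (adjust the paths if it extracts into a subdirectory):

```sh
# download and unpack the Velero v0.11.0 release tarball linked above
curl -LO https://github.com/heptio/velero/releases/download/v0.11.0/velero-v0.11.0-linux-amd64.tar.gz
tar -xzf velero-v0.11.0-linux-amd64.tar.gz
# put the binary somewhere on your PATH
sudo mv velero /usr/local/bin/
```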

## Setup Velero Namespace

Follow their [getting started](https://heptio.github.io/velero/v0.11.0/get-started)
instructions to set up the Velero namespace. We use
[Minio](https://docs.min.io/docs/deploy-minio-on-kubernetes) and
[restic](https://github.com/restic/restic), so check out Velero's instructions
on how to set up [restic](https://heptio.github.io/velero/v0.11.0/restic):

```sh
# run from the extracted folder of the tarball
$ kubectl apply -f config/common/00-prereqs.yaml
$ kubectl apply -f config/minio/
```

Once completed, you should see the namespace in your kubernetes dashboard.

## Manually Create an On-Premise Backup

When you create your deployments for Human Connection, the required annotations
should already be in place. So when you create a backup of the namespace
`human-connection`:

```sh
$ velero backup create hc-backup --include-namespaces=human-connection
```

That should back up your persistent volumes, too. When you enter:

```
$ velero backup describe hc-backup --details
```

You should see the persistent volumes at the end of the log:

```
....

Restic Backups:
  Completed:
    human-connection/nitro-backend-5b6dd96d6b-q77n6: uploads
    human-connection/nitro-neo4j-686d768598-z2vhh: neo4j-data
```

## Simulate a Disaster

Feel free to try out whether you lose any data when you simulate a disaster and
try to restore the namespace from the backup:

```sh
$ kubectl delete namespace human-connection
```

Wait until the wrongdoing has completed, then:

```sh
$ velero restore create --from-backup hc-backup
```
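
You can watch the restore with velero itself, for example:

```sh
# restores are named after the backup plus a timestamp; wait for status Completed
velero restore get
```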

Now, I keep my fingers crossed that everything comes back again. If not, I feel
very sorry for you.

## Schedule a Regular Backup

Check out the [docs](https://heptio.github.io/velero/v0.11.0/get-started). You
can create a regular schedule e.g. with:

```sh
$ velero schedule create hc-weekly-backup --schedule="@weekly" --include-namespaces=human-connection
```

Inspect the created backups:

```sh
$ velero schedule get
NAME               STATUS    CREATED                          SCHEDULE   BACKUP TTL   LAST BACKUP   SELECTOR
hc-weekly-backup   Enabled   2019-05-08 17:51:31 +0200 CEST   @weekly    720h0m0s     6s ago        <none>

$ velero backup get
NAME                              STATUS      CREATED                          EXPIRES   STORAGE LOCATION   SELECTOR
hc-weekly-backup-20190508155132   Completed   2019-05-08 17:51:32 +0200 CEST   29d       default            <none>

$ velero backup describe hc-weekly-backup-20190508155132 --details
# see if the persistent volumes are backed up
```
deployment/volumes/volume-snapshots/README.md (new file, 50 lines)
@@ -0,0 +1,50 @@
# Kubernetes Volume Snapshots

It is possible to back up persistent volumes through volume snapshots. This is
especially handy if you don't want to stop the database to create an [offline
backup](../neo4j-offline-backup/README.md) and thus incur downtime.

Kubernetes announced this feature in a [blog post](https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/). Please make yourself familiar with it before you continue.

## Create a Volume Snapshot

There is an example in this folder of how you can create a volume snapshot,
e.g. for the persistent volume claim `neo4j-data-claim`:

```sh
# in folder deployment/volumes/volume-snapshots/
kubectl apply -f snapshot.yaml
```
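
If the VolumeSnapshot CRDs are installed in your cluster, a hedged check from the command line:

```sh
# the snapshot name comes from snapshot.yaml in this folder
kubectl --namespace=human-connection get volumesnapshot neo4j-data-snapshot
```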

If you are on Digital Ocean, the volume snapshot should show up in the Web UI:



## Provision a Volume based on a Snapshot

Edit your persistent volume claim configuration and add a `dataSource` pointing
to your volume snapshot. [The blog post](https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/) has an example in the section "Provision a new volume from a snapshot with
Kubernetes".

There is also an example in this folder of how the configuration could look.
If you apply the configuration, a new persistent volume claim will be provisioned
with the data from the volume snapshot:

```
# in folder deployment/volumes/volume-snapshots/
kubectl apply -f neo4j-data.yaml
```

## Data Consistency Warning

Note that volume snapshots do not guarantee data consistency. Quote from the
[blog post](https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/):

> Please note that the alpha release of Kubernetes Snapshot does not provide
> any consistency guarantees. You have to prepare your application (pause
> application, freeze filesystem etc.) before taking the snapshot for data
> consistency.

In the case of Neo4J this probably means that the enterprise edition, which
supports [online backups](https://neo4j.com/docs/operations-manual/current/backup/), is required.
New binary image file (118 KiB; not shown)
deployment/volumes/volume-snapshots/neo4j-data.yaml (new file, 18 lines)
@@ -0,0 +1,18 @@
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: neo4j-data-claim
  namespace: human-connection
  labels:
    app: human-connection
spec:
  dataSource:
    name: neo4j-data-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
deployment/volumes/volume-snapshots/snapshot.yaml (new file, 10 lines)
@@ -0,0 +1,10 @@
---
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: neo4j-data-snapshot
  namespace: human-connection
spec:
  source:
    name: neo4j-data-claim
    kind: PersistentVolumeClaim
@@ -19,11 +19,11 @@
    "test:jest": "cd webapp && yarn test && cd ../backend && yarn test:jest && codecov"
  },
  "devDependencies": {
    "codecov": "^3.4.0",
    "codecov": "^3.5.0",
    "cross-env": "^5.2.0",
    "cypress": "^3.2.0",
    "cypress-cucumber-preprocessor": "^1.11.0",
    "cypress-plugin-retries": "^1.2.0",
    "cypress-plugin-retries": "^1.2.1",
    "dotenv": "^8.0.0",
    "faker": "^4.1.0",
    "graphql-request": "^1.8.2",

@@ -51,7 +51,7 @@
  },
  "dependencies": {
    "@human-connection/styleguide": "0.5.17",
    "@nuxtjs/apollo": "4.0.0-rc4",
    "@nuxtjs/apollo": "4.0.0-rc4.1",
    "@nuxtjs/axios": "~5.4.1",
    "@nuxtjs/dotenv": "~1.3.0",
    "@nuxtjs/style-resources": "~0.1.2",

@@ -65,12 +65,12 @@
    "graphql": "~14.3.0",
    "jsonwebtoken": "~8.5.1",
    "linkify-it": "~2.1.0",
    "nuxt": "~2.6.3",
    "nuxt": "~2.7.1",
    "nuxt-env": "~0.1.0",
    "stack-utils": "^1.0.2",
    "string-hash": "^1.1.3",
    "tiptap": "1.19.0",
    "tiptap-extensions": "1.19.1",
    "tiptap": "1.19.2",
    "tiptap-extensions": "1.19.2",
    "v-tooltip": "~2.0.2",
    "vue-count-to": "~1.0.13",
    "vue-izitoast": "1.1.2",

@@ -92,7 +92,7 @@
    "eslint": "~5.16.0",
    "eslint-config-prettier": "~4.2.0",
    "eslint-loader": "~2.1.2",
    "eslint-plugin-prettier": "~3.0.1",
    "eslint-plugin-prettier": "~3.1.0",
    "eslint-plugin-vue": "~5.2.2",
    "fuse.js": "^3.4.4",
    "jest": "~24.8.0",
webapp/yarn.lock (599 changed lines; file diff suppressed because it is too large)
yarn.lock (20 changed lines)
@@ -1503,14 +1503,14 @@ code-point-at@^1.0.0:
  resolved "https://registry.yarnpkg.com/code-point-at/-/code-point-at-1.1.0.tgz#0d070b4d043a5bea33a2f1a40e2edb3d9a4ccf77"
  integrity sha1-DQcLTQQ6W+ozovGkDi7bPZpMz3c=

codecov@^3.4.0:
  version "3.4.0"
  resolved "https://registry.yarnpkg.com/codecov/-/codecov-3.4.0.tgz#7d16d9d82b0ce20efe5dbf66245a9740779ff61b"
  integrity sha512-+vtyL1B11MWiRIBaPnsIALKKpLFck9m6QdyI20ZnG8WqLG2cxwCTW9x/LbG4Ht8b81equZWw5xLcr+0BIvmdJQ==
codecov@^3.5.0:
  version "3.5.0"
  resolved "https://registry.yarnpkg.com/codecov/-/codecov-3.5.0.tgz#3d0748932f9cb41e1ad7f21fa346ef1b2b1bed47"
  integrity sha512-/OsWOfIHaQIr7aeZ4pY0UC1PZT6kimoKFOFYFNb6wxo3iw12nRrh+mNGH72rnXxNsq6SGfesVPizm/6Q3XqcFQ==
  dependencies:
    argv "^0.0.2"
    ignore-walk "^3.0.1"
    js-yaml "^3.13.0"
    js-yaml "^3.13.1"
    teeny-request "^3.11.3"
    urlgrey "^0.4.4"

@@ -1810,10 +1810,10 @@ cypress-cucumber-preprocessor@^1.11.0:
    glob "^7.1.2"
    through "^2.3.8"

cypress-plugin-retries@^1.2.0:
  version "1.2.0"
  resolved "https://registry.yarnpkg.com/cypress-plugin-retries/-/cypress-plugin-retries-1.2.0.tgz#a4e120c1bc417d1be525632e7d38e52a87bc0578"
  integrity sha512-seQFI/0j5WCqX7IVN2k0tbd3FLdhbPuSCWdDtdzDmU9oJfUkRUlluV47TYD+qQ/l+fJYkQkpw8csLg8/LohfRg==
cypress-plugin-retries@^1.2.1:
  version "1.2.1"
  resolved "https://registry.yarnpkg.com/cypress-plugin-retries/-/cypress-plugin-retries-1.2.1.tgz#0ae296e41c00c1aa1c2da83750e84c8a684e1c6b"
  integrity sha512-iZ00NmeVfHleZ6fcDvzoUr2vPpCm+fzqzHpUF5ceY0PEJRs8BzdmIN/4AVUwLTDYdb3F1B8qK+s3GSoGe2WPQQ==

cypress@^3.2.0:
  version "3.2.0"

@@ -3055,7 +3055,7 @@ js-levenshtein@^1.1.3:
  resolved "https://registry.yarnpkg.com/js-tokens/-/js-tokens-4.0.0.tgz#19203fb59991df98e3a287050d4647cdeaf32499"
  integrity sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==

js-yaml@^3.13.0, js-yaml@^3.9.0:
js-yaml@^3.13.1, js-yaml@^3.9.0:
  version "3.13.1"
  resolved "https://registry.yarnpkg.com/js-yaml/-/js-yaml-3.13.1.tgz#aff151b30bfdfa8e49e05da22e7415e9dfa37847"
  integrity sha512-YfbcO7jXDdyj0DGxYVSlSeQNHbD7XPWvrVWeVUujrQEoZzWJIRrCPoyk6kL6IAjAG2IolMK4T0hNUe0HOUs5Jw==