---
slideOptions:
transition: slide
theme: serif
---
<style>
.reveal {
font-size: 20pt;
}
.reveal code {
font-size: 90%;
}
</style>
# REANA notes
---
## Using the client
---
To get your access token, you will need to access https://reana.cern.ch from inside the CERN network (e.g. by running Firefox on lxplus).
I created a file called `reana.sh` that looks like this:
```
source ~reana/public/reana/bin/activate
export REANA_SERVER_URL=https://reana.cern.ch
export REANA_ACCESS_TOKEN=xxxxxxxxxxxxxxxxxxx
```
---
Write a file called `reana.yaml` according to [the documentation](http://docs.reana.io/reference/reana-yaml/).
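For orientation, a minimal serial workflow might look like the following sketch (the file names and the `python:3.8` image are illustrative, not from this analysis):

```yaml
# Hypothetical minimal reana.yaml; see the linked documentation for the full schema.
version: 0.7.0
inputs:
  files:
    - code/helloworld.py
workflow:
  type: serial
  specification:
    steps:
      - environment: 'python:3.8'
        commands:
          - python code/helloworld.py
outputs:
  files:
    - results/greetings.txt
```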
Create the workflow, upload your analysis code, and start the execution:
```
$ reana-client create -w my-analysis
$ export REANA_WORKON=my-analysis
$ reana-client upload
$ reana-client start
```
---
Check its status:
```
$ reana-client status
$ reana-client logs
$ reana-client ls
```
Once it's finished, download the output:
```
$ reana-client download
```
---
### Testing with R(D*) hadronic Run 2
---
```yaml
version: 0.7.0
inputs:
directories:
- Bmassfit
files:
- env.sh
- config.default.yaml
- krb5.conf
- keytab
- .rootrc
parameters:
KRB_USER: lbanadat
.kinit: &kinit KRB5_CONFIG=./krb5.conf kinit -V ${KRB_USER}@CERN.CH -k -t keytab
.env: &env source env.sh
workflow:
type: serial
resources:
cvmfs:
- lhcb.cern.ch
- lhcbdev.cern.ch
specification:
steps:
- name: configure
environment: 'gitlab-registry.cern.ch/lhcb-docker/os-base/centos7-devel'
commands:
- cp config.default.yaml config.yaml
- name: build
environment: 'gitlab-registry.cern.ch/lhcb-docker/os-base/centos7-devel'
commands:
- *env
- cd Bmassfit
- cmake .
- make -j
- name: run
environment: 'gitlab-registry.cern.ch/lhcb-docker/os-base/centos7-devel'
commands:
- *env
- *kinit
- snakemake B0_Dstar3pi_fit_norm_run2.pdf Ds_3pi_fit_norm_run2.pdf Dstar_fit_norm_but_deltaM_run2.pdf Dzero_Kpi_fit_default_run2.pdf -j4
outputs:
files:
- Bmassfit/*.pdf
- Bmassfit/*.log
- Bmassfit/*_result.root
```
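The `.kinit` and `.env` entries above use YAML anchors: `&name` attaches a label to a value and `*name` pastes that value back in when the file is parsed, so the long `kinit` and `source` one-liners are written once and reused in several steps. A toy illustration (the keys here are made up):

```yaml
# &anchor labels a value; *alias repeats it verbatim when the YAML is parsed.
definitions:
  .setup: &setup source env.sh
steps:
  - commands:
      - *setup        # parsed as: source env.sh
      - make
```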
---
Difficulties:

Despite using the same Docker image as on the GitLab CI, the location of `$HOME`/`~` seems to be different (note `~` expanding to `/` below):
```
==> Command: echo "RooFit.Banner no" > ~/.rootrc
==> Status: failed
==> Logs:
job:
bash: //.rootrc: Permission denied
```
I can't source my Conda environment from `/cvmfs/lhcbdev.cern.ch`, because only a subset of CVMFS is mounted:
```
==> Command: ls /cvmfs/{lhcb,lhcbdev,lhcb-conddb,cernvm-prod,grid,sft}.cern.ch >> ls.txt
==> Status: running
==> Logs:
job:
ls: cannot access /cvmfs/lhcbdev.cern.ch: No such file or directory
ls: cannot access /cvmfs/lhcb-conddb.cern.ch: No such file or directory
ls: cannot access /cvmfs/cernvm-prod.cern.ch: No such file or directory
ls: cannot access /cvmfs/grid.cern.ch: No such file or directory
```
---
## Local deployment
---
### Stuff I had to install
DNF:
- `kubernetes-client`
- `VirtualBox`
- `virtualenv`

Downloaded binaries:
- `minikube` https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
- `helm` https://get.helm.sh/helm-v3.3.0-rc.1-linux-amd64.tar.gz
---
### Building, deploying and running REANA
```
mkdir -p reana
cd reana
git clone git@github.com:reanahub/reana.git
cd reana/
# Create and activate a virtualenv with the REANA developer tools
virtualenv ~/.virtualenvs/reana
source ~/.virtualenvs/reana/bin/activate
pip install . --upgrade
# Check that the reana-dev helper works
reana-dev
reana-dev git-clone --help
# Clone all REANA component repositories
reana-dev git-clone -c ALL
# Build the images, deploy to minikube, and run the helloworld demo
make setup prefetch
make build
make deploy
DEMO=reana-demo-helloworld make example
```
---
## OpenStack deployment
---
Change to the LHCb Analysis Preservation OpenStack project:
- Join the egroup `lhcb-analysis-preservation`
- Go to https://openstack.cern.ch/project/
- Switch projects to "LHCb Analysis preservation"
- From the "Tools" drop-down menu select "OpenStack RC File v3", which will give you a shell script to `source` on `lxplus-cloud.cern.ch`
- The script will ask for your CERN password
---
### Kubernetes cluster
---
If doing this for the first time, create a keypair:
```
openstack keypair create --public-key ~/.ssh/id_rsa.pub ${USER}-lxplus
```
---
List the cluster templates and pick the latest Kubernetes one:
```
$ openstack coe cluster template list
+--------------------------------------+---------------------------+
| uuid | name |
+--------------------------------------+---------------------------+
| 17760a5f-8957-4794-ab96-0d6bd8627282 | swarm-18.06-1 |
| ab08b219-3246-4995-bf76-a3123f69cb4f | swarm-1.13.1-2 |
| 6b4fc2c2-00b0-410d-a784-82b6ebdd85bc | kubernetes-1.13.10-1 |
| 8dffa2cc-8aa4-489b-a346-edc202db7673 | kubernetes-1.14.6-2 |
| f294e172-4688-48f2-8407-78874941af0a | kubernetes-1.15.3-3 |
| 680c95e1-d3ee-4ceb-94ae-05252e62b938 | kubernetes-1.17.5-1 |
| c96265f2-0ddb-420f-a674-b2252fde3230 | kubernetes-1.18.2-3 |
| 67036e75-c24a-4c58-a583-f469e332a89d | kubernetes-1.18.2-3-multi |
+--------------------------------------+---------------------------+
```
---
Then create a cluster:
```
$ openstack coe cluster create lbreana-dev --cluster-template kubernetes-1.18.2-3 --node-count 2 --keypair ${USER}-lxplus
Request to create cluster c266f0f3-2f84-48aa-b1ef-20caac61941d accepted
```
Check the status:
```
$ openstack coe cluster list
+--------------------------------------+-------------+-----------------+------------+--------------+--------------------+---------------+
| uuid | name | keypair | node_count | master_count | status | health_status |
+--------------------------------------+-------------+-----------------+------------+--------------+--------------------+---------------+
| c266f0f3-2f84-48aa-b1ef-20caac61941d | lbreana-dev | admorris-lxplus | 2 | 1 | CREATE_IN_PROGRESS | None |
+--------------------------------------+-------------+-----------------+------------+--------------+--------------------+---------------+
```
Grab a coffee and wait until the status is `CREATE_COMPLETE`.
Then configure `kubectl` (the `config` command prints an `export KUBECONFIG=...` line, which the `$(...)` wrapper evaluates) and wait for the nodes to become `Ready`:
```
$ $(openstack coe cluster config lbreana-dev)
$ kubectl get node
NAME STATUS ROLES AGE VERSION
lbreana-dev-ardzryixew2o-master-0 Ready master 4m32s v1.18.2
lbreana-dev-ardzryixew2o-node-0 Ready <none> 82s v1.18.2
lbreana-dev-ardzryixew2o-node-1 Ready <none> 112s v1.18.2
```
---
### Helm
---
Install `helm` 3.2.4 (assuming `~/.local/bin` is in your `$PATH`):
```
$ wget https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz
$ tar -xzf helm-v3.2.4-linux-amd64.tar.gz
$ install linux-amd64/helm ~/.local/bin/
```
---
Add the `stable` and `reanahub` repos:
```
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
$ helm repo add reanahub https://reanahub.github.io/reana
"reanahub" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "reanahub" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
```
---
### NFS share
---
Create a volume on OpenStack and make a note of the ID:
```
$ openstack volume create lbreana-dev-volume --size 100
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2020-08-05T12:34:45.000000 |
| description | None |
| encrypted | False |
| id | 021635d8-2a44-4219-8754-28e9225dd95e |
| multiattach | False |
| name | lbreana-dev-volume |
| properties | |
| replication_status | None |
| size | 100 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | standard |
| updated_at | None |
| user_id | admorris |
+---------------------+--------------------------------------+
```
---
Set up an [NFS share provisioner](https://github.com/helm/charts/blob/master/stable/nfs-server-provisioner/README.md).
Create a file called `nfs-provisioner-values.yaml`:
```yaml
storageClass:
defaultClass: true
name: lbreana-dev-shared-volume-storage-class
```
and another called `persistent-volume.yaml`:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: reana-dev-storage-persistent-volume-0
spec:
capacity:
storage: 100Gi
storageClassName: lbreana-dev-shared-volume-storage-class
accessModes:
- ReadWriteOnce
cinder:
fsType: "ext4"
volumeID: "021635d8-2a44-4219-8754-28e9225dd95e"
claimRef:
namespace: default
name: lbreana-dev-shared-persistent-volume
```
**NB:** Make sure to use the `volumeID` from earlier.
---
Create the persistent volume and install the NFS provisioner:
```
$ kubectl create -f persistent-volume.yaml
persistentvolume/reana-dev-storage-persistent-volume-0 created
$ helm install reana-dev-storage stable/nfs-server-provisioner -f nfs-provisioner-values.yaml
NAME: reana-dev-storage
LAST DEPLOYED: Wed Aug 5 14:36:50 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The NFS Provisioner service has now been installed.
A storage class named 'lbreana-dev-shared-volume-storage-class' has now been created
and is available to provision dynamic volumes.
You can use this storageclass by creating a `PersistentVolumeClaim` with the
correct storageClassName attribute. For example:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-dynamic-volume-claim
spec:
storageClassName: "lbreana-dev-shared-volume-storage-class"
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
```
---
### REANA
---
Install REANA:
```
$ helm install lbreana-dev reanahub/reana --devel --set shared_storage.backend=nfs --set shared_storage.volume_size=10
NAME: lbreana-dev
LAST DEPLOYED: Wed Aug 5 14:37:59 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The REANA system has been installed:
If you are installing REANA for the first time, there are a few steps left to finalise its configuration.
1. Get the REANA-Server pod name:
$ REANA_SERVER=$(kubectl get pod -l "app=lbreana-dev-server" -o name -o jsonpath='{.items[0].metadata.name}')
2. Initialise the database:
$ kubectl exec $REANA_SERVER -- ./scripts/setup
3. Create your administrator user and store the token:
$ kubectl exec $REANA_SERVER -- flask reana-admin create-admin-user user@my.org
<reana-admin-access-token-value>
$ read -s REANA_ADMIN_ACCESS_TOKEN # paste the secret here
$ kubectl create secret generic lbreana-dev-admin-access-token \
--from-literal=ADMIN_ACCESS_TOKEN="$REANA_ADMIN_ACCESS_TOKEN"
Thanks for flying REANA 🚀
```
---
My pods are crashing. I am stuck :frowning:
```
$ kubectl get sc,pv,pvc,pods
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/geneva-cephfs-testing manila-provisioner Retain Immediate false 17h
storageclass.storage.k8s.io/lbreana-dev-shared-volume-storage-class (default) cluster.local/reana-dev-storage-nfs-server-provisioner Delete Immediate true 13m
storageclass.storage.k8s.io/meyrin-cephfs manila-provisioner Retain Immediate false 17h
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/reana-dev-storage-persistent-volume-0 100Gi RWX Retain Bound default/lbreana-dev-shared-persistent-volume lbreana-dev-shared-volume-storage-class 13m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/lbreana-dev-shared-persistent-volume Bound reana-dev-storage-persistent-volume-0 100Gi RWX lbreana-dev-shared-volume-storage-class 6m56s
NAME READY STATUS RESTARTS AGE
pod/lbreana-dev-cache-56d65df57b-cmqp6 1/1 Running 0 6m56s
pod/lbreana-dev-db-67f6b7c5c5-wg69s 0/1 CrashLoopBackOff 6 6m56s
pod/lbreana-dev-message-broker-8485886fcb-8wzh4 1/1 Running 0 6m56s
pod/lbreana-dev-server-576ff44f86-2sr5w 0/2 ContainerCreating 0 6m56s
pod/lbreana-dev-traefik-69d5bbf878-89kjj 1/1 Running 0 6m56s
pod/lbreana-dev-workflow-controller-7b956985cf-lk7jw 0/2 ContainerCreating 0 6m56s
pod/reana-dev-storage-nfs-server-provisioner-0 1/1 Running 0 13m
```
---
Output of `kubectl describe` for all crashing/creating pods:
```
$ for pod in pod/lbreana-dev-db-67f6b7c5c5-wg69s pod/lbreana-dev-server-576ff44f86-2sr5w pod/lbreana-dev-workflow-controller-7b956985cf-lk7jw;do kubectl describe $pod;done
Name: lbreana-dev-db-67f6b7c5c5-wg69s
Namespace: default
Priority: 0
Node: lbreana-dev-ardzryixew2o-node-0/188.185.82.159
Start Time: Wed, 05 Aug 2020 15:24:38 +0200
Labels: app=lbreana-dev-db
pod-template-hash=67f6b7c5c5
Annotations: cni.projectcalico.org/podIP: 10.100.144.188/32
cni.projectcalico.org/podIPs: 10.100.144.188/32
Status: Running
IP: 10.100.144.188
IPs:
IP: 10.100.144.188
Controlled By: ReplicaSet/lbreana-dev-db-67f6b7c5c5
Containers:
db:
Container ID: docker://1eaabcf3e9c9de095a1773ec299b65b393651b54aa9e27a22e600849e2f91c80
Image: postgres:9.6.2
Image ID: docker-pullable://postgres@sha256:5284ba74a1065e34cf1bfccd64caf8c497c8dc623d6207b060b5ebd369427d34
Port: 5432/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 05 Aug 2020 15:30:20 +0200
Finished: Wed, 05 Aug 2020 15:30:20 +0200
Ready: False
Restart Count: 6
Environment:
TZ: Europe/Zurich
POSTGRES_DB: reana
POSTGRES_USER: reana
POSTGRES_PASSWORD: reana
Mounts:
/var/lib/postgresql/data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-6pwll (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: HostPath (bare host directory volume)
Path: /var/reana/db
HostPathType:
default-token-6pwll:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6pwll
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/lbreana-dev-db-67f6b7c5c5-wg69s to lbreana-dev-ardzryixew2o-node-0
Normal Pulled 6m37s (x5 over 8m2s) kubelet, lbreana-dev-ardzryixew2o-node-0 Container image "postgres:9.6.2" already present on machine
Normal Created 6m37s (x5 over 8m2s) kubelet, lbreana-dev-ardzryixew2o-node-0 Created container db
Normal Started 6m37s (x5 over 8m1s) kubelet, lbreana-dev-ardzryixew2o-node-0 Started container db
Warning BackOff 3m (x25 over 7m59s) kubelet, lbreana-dev-ardzryixew2o-node-0 Back-off restarting failed container
Name: lbreana-dev-server-576ff44f86-2sr5w
Namespace: default
Priority: 0
Node: lbreana-dev-ardzryixew2o-node-0/188.185.82.159
Start Time: Wed, 05 Aug 2020 15:24:38 +0200
Labels: app=lbreana-dev-server
pod-template-hash=576ff44f86
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/lbreana-dev-server-576ff44f86
Containers:
rest-api:
Container ID:
Image: reanahub/reana-server:0.6.0-58-g1ed94f1
Image ID:
Port: 5000/TCP
Host Port: 0/TCP
Command:
/bin/sh
-c
Args:
uwsgi --ini /var/reana/uwsgi/uwsgi.ini
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
REANA_COMPONENT_PREFIX: lbreana-dev
REANA_DB_NAME: reana
REANA_DB_PORT: 5432
REANA_MAX_CONCURRENT_BATCH_WORKFLOWS: 30
CERN_CONSUMER_KEY: <set to the key 'CERN_CONSUMER_KEY' in secret 'lbreana-dev-cern-sso-secrets'> Optional: false
CERN_CONSUMER_SECRET: <set to the key 'CERN_CONSUMER_SECRET' in secret 'lbreana-dev-cern-sso-secrets'> Optional: false
REANA_GITLAB_OAUTH_APP_ID: <set to the key 'REANA_GITLAB_OAUTH_APP_ID' in secret 'lbreana-dev-cern-gitlab-secrets'> Optional: false
REANA_GITLAB_OAUTH_APP_SECRET: <set to the key 'REANA_GITLAB_OAUTH_APP_SECRET' in secret 'lbreana-dev-cern-gitlab-secrets'> Optional: false
REANA_GITLAB_HOST: <set to the key 'REANA_GITLAB_HOST' in secret 'lbreana-dev-cern-gitlab-secrets'> Optional: false
REANA_SECRET_KEY: <set to the key 'REANA_SECRET_KEY' in secret 'lbreana-dev-secrets'> Optional: false
REANA_UI_ANNOUNCEMENT: <set to the key 'announcement' of config map 'announcement-config'> Optional: false
REANA_DB_USERNAME: <set to the key 'user' in secret 'lbreana-dev-db-secrets'> Optional: false
REANA_DB_PASSWORD: <set to the key 'password' in secret 'lbreana-dev-db-secrets'> Optional: false
Mounts:
/var/reana from reana-shared-volume (rw)
/var/reana/uwsgi from uwsgi-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from lbreana-dev-reana-token-wjj2p (ro)
scheduler:
Container ID:
Image: reanahub/reana-server:0.6.0-58-g1ed94f1
Image ID:
Port: <none>
Host Port: <none>
Command:
flask
start-scheduler
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
REANA_COMPONENT_PREFIX: lbreana-dev
REANA_DB_NAME: reana
REANA_DB_PORT: 5432
REANA_MAX_CONCURRENT_BATCH_WORKFLOWS: 30
REANA_DB_USERNAME: <set to the key 'user' in secret 'lbreana-dev-db-secrets'> Optional: false
REANA_DB_PASSWORD: <set to the key 'password' in secret 'lbreana-dev-db-secrets'> Optional: false
Mounts:
/var/reana from reana-shared-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from lbreana-dev-reana-token-wjj2p (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
reana-shared-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: lbreana-dev-shared-persistent-volume
ReadOnly: false
uwsgi-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: uwsgi-config
Optional: false
lbreana-dev-reana-token-wjj2p:
Type: Secret (a volume populated by a Secret)
SecretName: lbreana-dev-reana-token-wjj2p
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/lbreana-dev-server-576ff44f86-2sr5w to lbreana-dev-ardzryixew2o-node-0
Warning FailedMount 6m1s kubelet, lbreana-dev-ardzryixew2o-node-0 Unable to attach or mount volumes: unmounted volumes=[reana-shared-volume], unattached volumes=[reana-shared-volume uwsgi-config lbreana-dev-reana-token-wjj2p]: timed out waiting for the condition
Warning FailedMount 3m44s kubelet, lbreana-dev-ardzryixew2o-node-0 Unable to attach or mount volumes: unmounted volumes=[reana-shared-volume], unattached volumes=[lbreana-dev-reana-token-wjj2p reana-shared-volume uwsgi-config]: timed out waiting for the condition
Warning FailedMount 98s (x11 over 7m58s) kubelet, lbreana-dev-ardzryixew2o-node-0 MountVolume.WaitForAttach failed for volume "reana-dev-storage-persistent-volume-0" : WaitForAttach failed for Cinder disk "021635d8-2a44-4219-8754-28e9225dd95e": devicePath is empty
Normal SuccessfulAttachVolume 87s (x11 over 7m59s) attachdetach-controller AttachVolume.Attach succeeded for volume "reana-dev-storage-persistent-volume-0"
Warning FailedMount 86s kubelet, lbreana-dev-ardzryixew2o-node-0 Unable to attach or mount volumes: unmounted volumes=[reana-shared-volume], unattached volumes=[uwsgi-config lbreana-dev-reana-token-wjj2p reana-shared-volume]: timed out waiting for the condition
Name: lbreana-dev-workflow-controller-7b956985cf-lk7jw
Namespace: default
Priority: 0
Node: lbreana-dev-ardzryixew2o-node-1/137.138.31.117
Start Time: Wed, 05 Aug 2020 15:24:38 +0200
Labels: app=lbreana-dev-workflow-controller
pod-template-hash=7b956985cf
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/lbreana-dev-workflow-controller-7b956985cf
Containers:
rest-api:
Container ID:
Image: reanahub/reana-workflow-controller:0.6.0-36-gb702986
Image ID:
Port: 5000/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
REANA_COMPONENT_PREFIX: lbreana-dev
REANA_DB_NAME: reana
REANA_DB_PORT: 5432
SHARED_VOLUME_PATH: /var/reana
K8S_REANA_SERVICE_ACCOUNT_NAME: lbreana-dev-reana
REANA_JOB_CONTROLLER_IMAGE: reanahub/reana-job-controller:0.6.0-31-g35c4fc8
REANA_WORKFLOW_ENGINE_IMAGE_CWL: reanahub/reana-workflow-engine-cwl:0.6.0-9-gcda4d46
REANA_WORKFLOW_ENGINE_IMAGE_YADAGE: reanahub/reana-workflow-engine-yadage:0.6.0-14-g7f9773c
REANA_WORKFLOW_ENGINE_IMAGE_SERIAL: reanahub/reana-workflow-engine-serial:0.6.0-13-gef2eb4c
REANA_STORAGE_BACKEND: network
REANA_GITLAB_HOST: <set to the key 'REANA_GITLAB_HOST' in secret 'lbreana-dev-cern-gitlab-secrets'> Optional: false
REANA_SECRET_KEY: <set to the key 'REANA_SECRET_KEY' in secret 'lbreana-dev-secrets'> Optional: false
REANA_DB_USERNAME: <set to the key 'user' in secret 'lbreana-dev-db-secrets'> Optional: false
REANA_DB_PASSWORD: <set to the key 'password' in secret 'lbreana-dev-db-secrets'> Optional: false
Mounts:
/var/reana from reana-shared-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from lbreana-dev-reana-token-wjj2p (ro)
job-status-consumer:
Container ID:
Image: reanahub/reana-workflow-controller:0.6.0-36-gb702986
Image ID:
Port: <none>
Host Port: <none>
Command:
flask
consume-job-queue
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
REANA_COMPONENT_PREFIX: lbreana-dev
REANA_DB_NAME: reana
REANA_DB_PORT: 5432
SHARED_VOLUME_PATH: /var/reana
REANA_DB_USERNAME: <set to the key 'user' in secret 'lbreana-dev-db-secrets'> Optional: false
REANA_DB_PASSWORD: <set to the key 'password' in secret 'lbreana-dev-db-secrets'> Optional: false
REANA_GITLAB_HOST: <set to the key 'REANA_GITLAB_HOST' in secret 'lbreana-dev-cern-gitlab-secrets'> Optional: false
Mounts:
/var/reana from reana-shared-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from lbreana-dev-reana-token-wjj2p (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
reana-shared-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: lbreana-dev-shared-persistent-volume
ReadOnly: false
lbreana-dev-reana-token-wjj2p:
Type: Secret (a volume populated by a Secret)
SecretName: lbreana-dev-reana-token-wjj2p
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/lbreana-dev-workflow-controller-7b956985cf-lk7jw to lbreana-dev-ardzryixew2o-node-1
Warning FailedAttachVolume 8m2s attachdetach-controller AttachVolume.Attach failed for volume "reana-dev-storage-persistent-volume-0" : failed to attach 021635d8-2a44-4219-8754-28e9225dd95e volume to 75cb5e60-28ed-4ea2-b9af-234843fa0fff compute: Bad request with: [POST https://openstack.cern.ch:8774/v2.1/45003912-fa11-4d33-8eb6-a1fa1a636ab9/servers/75cb5e60-28ed-4ea2-b9af-234843fa0fff/os-volume_attachments], error message: {"badRequest": {"message": "Invalid input received: Invalid volume: Volume 021635d8-2a44-4219-8754-28e9225dd95e status must be available or downloading to reserve, but the current status is reserved. (HTTP 400) (Request-ID: req-9b8e1f17-d82b-4118-8655-56be0016a812)", "code": 400}}
Warning FailedMount 6m2s kubelet, lbreana-dev-ardzryixew2o-node-1 Unable to attach or mount volumes: unmounted volumes=[reana-shared-volume], unattached volumes=[lbreana-dev-reana-token-wjj2p reana-shared-volume]: timed out waiting for the condition
Warning FailedAttachVolume 89s (x10 over 8m) attachdetach-controller AttachVolume.Attach failed for volume "reana-dev-storage-persistent-volume-0" : disk 021635d8-2a44-4219-8754-28e9225dd95e path /dev/vdb is attached to a different instance (6c7ad217-79b9-4c3e-a4ef-dfd2036ed604)
Warning FailedMount 87s (x2 over 3m45s) kubelet, lbreana-dev-ardzryixew2o-node-1 Unable to attach or mount volumes: unmounted volumes=[reana-shared-volume], unattached volumes=[reana-shared-volume lbreana-dev-reana-token-wjj2p]: timed out waiting for the condition
```
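The `FailedAttachVolume` events hint at a root cause: the persistent volume above binds the Cinder block volume directly to REANA's shared claim, so pods on both nodes try to attach the same block device, which Cinder does not allow. A possible fix (untested; the claim name below is an assumption following the usual `<claim-template>-<pod>` StatefulSet naming of the nfs-server-provisioner chart, and it would require installing the provisioner with `--set persistence.enabled=true`) would be to hand the block volume to the NFS server alone and let every other pod reach it over NFS:

```yaml
# Sketch: bind the Cinder volume to the NFS provisioner's own claim,
# so only the NFS server pod attaches the block device.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: reana-dev-storage-persistent-volume-0
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  cinder:
    fsType: "ext4"
    volumeID: "021635d8-2a44-4219-8754-28e9225dd95e"
  claimRef:
    namespace: default
    # Assumed name; check with `kubectl get pvc` after enabling persistence.
    name: data-reana-dev-storage-nfs-server-provisioner-0
```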