Saving LAMMPS Results
This example LAMMPS CPU workflow uses
StageoutScript to save results to a persistent volume. If you wanted, for example, to modify the
workflow template to save your results to AWS S3 instead of the persistent volume, you could copy and
edit the preinstalled application as described in the workflow catalog
documentation. Then you could make a small change to the
workflow template like so:
version: v1
volumes:
  scratch:
    reference: {{ .ScratchVolume }}
ingress:
  - destination:
      uri: file://stagein.sh
    source:
      uri: {{ .ExperimentUrl }}/{{ .StageinScript }}
{{- if .StageoutScript }}
  - destination:
      uri: file://stageout.sh
    source:
      uri: {{ .ExperimentUrl }}/{{ .StageoutScript }}
egress:
  - source:
      uri: file://{{ .RunName }}.tar.gz
    destination:
      uri: s3://{{ .OutputPath }}/{{ .RunName }}.tar.gz
      secret: {{ .S3Secret }}
{{- end }}
...snip...
{{- if .StageoutScript }}
  stageout:
    command:
      - /bin/sh
      - stageout.sh
      - {{ .RunName }}
    cwd: /scratch
    image:
      uri: docker://alpine:latest
    mounts:
      scratch:
        location: /scratch
    name: stageout
    requires:
      - run-lammps
    resource:
      cpu:
        cores: 1
      memory:
        size: 512MiB
{{- end }}
Note that we added an egress section, which requires adding an S3Secret parameter, and we
removed the persistent data volume.
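The egress source URI and the stageout script must agree on the archive name; both derive it from the RunName template parameter. A quick sketch of that contract, using my-run as a made-up example value:

```shell
# "my-run" stands in for whatever value RunName takes at submission time.
run_name="my-run"
# The stageout step writes this file into /scratch...
archive="${run_name}.tar.gz"
# ...and the egress section picks it up via this source URI.
echo "file://${archive}"
```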
Then you would provide a different stageout script like so to create the tar archive used in egress:
#! /bin/sh
set -ex
run="${1}"
if [ -d "${run}" ] ; then
    tar -czf "${run}.tar.gz" "${run}"
else
    echo "did not save results to persistent store"
fi