By @dcherman >= 2.9.0
This task syncs (deploys) an Argo CD application and waits for it to be healthy. To do so, it requires the address of the Argo CD server and some form of authentication: either a username/password or an authentication token.
argocd
kubectl apply -f https://raw.githubusercontent.com/argoproj-labs/argo-workflows-catalog/master/templates/argocd-sync-and-wait/manifests.yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: argocd-sync-and-wait
annotations:
workflows.argoproj.io/description: >-
This task syncs (deploys) an Argo CD application and waits for it to be healthy.
To do so, it requires the address of the Argo CD server and some form of
authentication: either a username/password or an authentication token.
workflows.argoproj.io/maintainer: '@dcherman'
workflows.argoproj.io/tags: argocd
workflows.argoproj.io/version: '>= 2.9.0'
spec:
entrypoint: argocd-sync-and-wait
templates:
- name: argocd-sync-and-wait
inputs:
parameters:
- name: argocd-version
value: v1.6.0
- name: application-name
- name: revision
value: HEAD
- name: flags
value: --
- name: argocd-server-address
- name: argocd-credentials-secret
script:
image: argoproj/argocd:{{inputs.parameters.argocd-version}}
command: [bash]
env:
- name: ARGOCD_AUTH_TOKEN
valueFrom:
secretKeyRef:
name: "{{inputs.parameters.argocd-credentials-secret}}"
key: token
optional: true
- name: ARGOCD_USERNAME
valueFrom:
secretKeyRef:
name: "{{inputs.parameters.argocd-credentials-secret}}"
key: username
optional: true
- name: ARGOCD_PASSWORD
valueFrom:
secretKeyRef:
name: "{{inputs.parameters.argocd-credentials-secret}}"
key: password
optional: true
- name: ARGOCD_SERVER
value: "{{inputs.parameters.argocd-server-address}}"
source: |
#!/bin/bash
set -euo pipefail
if [[ -z "${ARGOCD_AUTH_TOKEN:-}" ]] && [[ -z "${ARGOCD_USERNAME:-}" || -z "${ARGOCD_PASSWORD:-}" ]]; then
echo "Either the ARGOCD_AUTH_TOKEN must be specified, or the ARGOCD_USERNAME/ARGOCD_PASSWORD must be specified."
exit 1
fi
if [ -z "${ARGOCD_AUTH_TOKEN:-}" ]; then
yes | argocd login "$ARGOCD_SERVER" --username="$ARGOCD_USERNAME" --password="$ARGOCD_PASSWORD" {{inputs.parameters.flags}}
fi
echo "Running as ArgoCD User:"
argocd account get-user-info {{inputs.parameters.flags}}
argocd app sync {{inputs.parameters.application-name}} --revision {{inputs.parameters.revision}} {{inputs.parameters.flags}}
argocd app wait {{inputs.parameters.application-name}} --health {{inputs.parameters.flags}}
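A minimal sketch of invoking this template from another workflow via `templateRef`. The application name `guestbook`, the server address, and the secret name `argocd-credentials` are placeholders, not part of the template; the secret must contain either a `token` key or `username`/`password` keys.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: argocd-sync-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: sync
            templateRef:
              name: argocd-sync-and-wait
              template: argocd-sync-and-wait
            arguments:
              parameters:
                - name: application-name
                  value: guestbook                                   # placeholder app
                - name: argocd-server-address
                  value: argocd-server.argocd.svc.cluster.local      # placeholder address
                - name: argocd-credentials-secret
                  value: argocd-credentials                          # placeholder secret name
```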
By @alexec >= 2.9.0
Build and push an image using Docker Buildkit. This does not need privileged access, unlike Docker in Docker (DIND).
buildkit
kubectl apply -f https://raw.githubusercontent.com/argoproj-labs/argo-workflows-catalog/master/templates/buildkit/manifests.yaml
# Build and push an image using Docker Buildkit. This does not need privileged access, unlike Docker in Docker (DIND).
#
# Publishing images requires an access token. For hub.docker.com you can create one at https://hub.docker.com/settings/security
# This needs to be mounted as `$DOCKER_CONFIG/config.json`. To do this, you'll need to create a secret as follows:
#
# export DOCKER_USERNAME=******
# export DOCKER_TOKEN=******
# kubectl create secret generic docker-config --from-literal="config.json={\"auths\": {\"https://index.docker.io/v1/\": {\"auth\": \"$(echo -n $DOCKER_USERNAME:$DOCKER_TOKEN|base64)\"}}}"
#
# Read more:
#
# * https://github.com/moby/buildkit#expose-buildkit-as-a-tcp-service
# * https://blog.alexellis.io/building-containers-without-docker/
# * https://hub.docker.com/r/moby/buildkit
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: buildkit
annotations:
workflows.argoproj.io/description: |
Build and push an image using Docker Buildkit. This does not need privileged access, unlike Docker in Docker (DIND).
workflows.argoproj.io/maintainer: '@alexec'
workflows.argoproj.io/tags: buildkit
workflows.argoproj.io/version: '>= 2.9.0'
spec:
templates:
- name: image
inputs:
parameters:
# Name of volume to mount
- name: work
# Path on the volume to the build context
- name: path
# Name of the image to push
- name: image
# Mount the configuration so we can push the image.
# This should create the /.docker/config.json file.
volumes:
- name: docker-config
secret:
secretName: docker-config
container:
image: moby/buildkit:v0.7.2-rootless
volumeMounts:
- name: "{{inputs.parameters.work}}"
mountPath: /work
- name: docker-config
mountPath: /.docker
workingDir: /work/{{inputs.parameters.path}}
env:
- name: BUILDKITD_FLAGS
value: --oci-worker-no-process-sandbox
- name: DOCKER_CONFIG
value: /.docker
command:
- buildctl-daemonless.sh
args:
- build
- --frontend
- dockerfile.v0
- --local
- context=.
- --local
- dockerfile=.
- --output
- type=image,name=docker.io/{{inputs.parameters.image}},push=true
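A hedged sketch of wiring this template into a workflow: a `volumeClaimTemplates` entry provides the shared `work` volume, and the image name is a placeholder. In practice an earlier step (e.g. a git checkout) would populate the volume with a Dockerfile and build context before this template runs.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: buildkit-
spec:
  entrypoint: main
  volumeClaimTemplates:
    - metadata:
        name: work          # volume name passed via the "work" parameter
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  templates:
    - name: main
      steps:
        - - name: build-and-push
            templateRef:
              name: buildkit
              template: image
            arguments:
              parameters:
                - name: work
                  value: work
                - name: path
                  value: "."
                - name: image
                  value: your-user/your-image:latest   # placeholder image name
```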
By @sumitnagal >= 2.9.0
This workflow template contains a template that runs a simple test container via Argo.
distro
kubectl apply -f https://raw.githubusercontent.com/argoproj-labs/argo-workflows-catalog/master/templates/distro/manifests.yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
annotations:
workflows.argoproj.io/description: >-
This workflow template contains a template that runs a simple test container via Argo.
workflows.argoproj.io/maintainer: '@sumitnagal'
workflows.argoproj.io/tags: distro
workflows.argoproj.io/version: '>= 2.9.0'
name: distro
generateName: perf-infra-
spec:
serviceAccountName: argowf-svcacc
entrypoint: perf-infra
podDisruptionBudget:
minAvailable: 100%
# must complete within 1 day (86400 seconds)
activeDeadlineSeconds: 86400
# keep workflows for 1 hour after completion
ttlStrategy:
secondsAfterCompletion: 3600
# delete all pods as soon as they complete
podGC:
strategy: OnPodCompletion
arguments:
parameters:
- name: limit
value: 1
- name: peakTPS
value: 1
- name: rampupTime
value: 1
- name: steadyStateTime
value: 1
- name: gitrepo
value: "NA"
- name: buildURL
value: "NA"
- name: envName
value: "prf"
- name: buildnum
value: 1
- name: baseurl
value: "https://XXX.amazonaws.com"
- name: uniqueName
value: "{{workflow.creationTimestamp}}"
- name: dataFlag
value: "false"
- name: appTestImg
value: "distroproj/gatling:latest"
- name: awscliGatmergeImg
value: "distroproj/gatling-merge:latest"
- name: pfiNamespace
value: "infra-ns"
- name: query
value: "/health/full"
- name: simulationClass
value: "Echo.EchoSimulation"
- name: s3BucketName
value: perf-results-XXXX
templates:
- name: perf-infra
steps:
- - name: pdbcreate
template: pdbcreate
- - name: run-test
template: run-test
withSequence:
count: "{{workflow.parameters.limit}}"
- - name: list-test
template: list
# Uncomment if you want to fetch reports via Jenkins or other CI systems
# - - name: s3-access
# template: s3access
- name: pdbcreate
container:
image: alpine:latest
command: [sh, -c]
args: [sleep 10]
- name: run-test
metadata:
# If you are using role-based access for your namespace, configure this
# annotations:
# iam.amazonaws.com/role: "k8s-{{workflow.parameters.pfiNamespace}}"
container:
image: "{{workflow.parameters.appTestImg}}"
imagePullPolicy: Always
command: ["sh","-c"]
args: [
"cd /gatling && mvn gatling:test -Dgatling.simulationClass={{workflow.parameters.simulationClass}} -Durl={{workflow.parameters.baseurl}} -Dquery={{workflow.parameters.query}} -DpeakTPS={{workflow.parameters.peakTPS}} -DrampupTime={{workflow.parameters.rampupTime}} -DsteadyStateTime={{workflow.parameters.steadyStateTime}} && cp -r /gatling/gatling_results /tmp && mkdir /tmp/simulations && cp $(find . -name 'simulation.log' | tail -1) /tmp/simulations/simulation_$(date +'%s')_$(cat /dev/urandom | env LC_CTYPE=C tr -cd 'a-f0-9' | head -c 6).log && ls /tmp"
]
resources: # limit the resources
requests:
memory: 1Gi
cpu: "1"
limits:
memory: 2Gi
cpu: "1"
# volumeMounts:
# - name: finalresults
# mountPath: /tmp
outputs:
artifacts:
- name: simulation
path: /tmp/simulations
# It is possible to disable tar.gz archiving by setting the archive strategy to 'none'
# Disabling archiving has the following limitations on S3: symbolic links will not be
# uploaded, as S3 does not support the concept/file mode of symlinks.
# archive:
# none: {}
s3:
# Use the corresponding endpoint depending on your S3 provider:
# AWS: s3.amazonaws.com
# GCS: storage.googleapis.com
# Minio: my-minio-endpoint.default:9000
endpoint: s3.amazonaws.com
bucket: "{{workflow.parameters.s3BucketName}}"
region: us-west-2
key: results/{{workflow.namespace}}/{{workflow.parameters.uniqueName}}/simulations/{{pod.name}}.tgz
accessKeySecret:
name: my-s3-credentials
key: accessKey
secretKeySecret:
name: my-s3-credentials
key: secretKey
- name: list
container:
image: "{{workflow.parameters.awscliGatmergeImg}}"
command: [sh, -c]
args: ["
cd /perf-in &&
ls -R &&
for a in `ls -1 *.tgz`; do
gzip -dc $a | tar xf -;
done &&
echo '=======' &&
ls -R /perf-in &&
cd /opt/gatling/bin &&
./gatling.sh -ro /perf-in/simulations &&
mkdir /perf-out &&
cp -rf /perf-in/simulations /perf-out/simulations &&
rm $(find /perf-out/ -name '*.log') &&
ls -R /perf-out
"]
resources: # limit the resources
requests:
memory: 1Gi
cpu: "1"
limits:
memory: 2Gi
cpu: "1"
metadata:
# annotations:
# iam.amazonaws.com/role: "k8s-{{workflow.parameters.pfiNamespace}}"
inputs:
artifacts:
- name: inputsimulations
path: /perf-in
s3:
# Use the corresponding endpoint depending on your S3 provider:
# AWS: s3.amazonaws.com
# GCS: storage.googleapis.com
# Minio: my-minio-endpoint.default:9000
endpoint: s3.amazonaws.com
bucket: "{{workflow.parameters.s3BucketName}}"
region: us-west-2
key: results/{{workflow.namespace}}/{{workflow.parameters.uniqueName}}/simulations/
accessKeySecret:
name: my-s3-credentials
key: accessKey
secretKeySecret:
name: my-s3-credentials
key: secretKey
outputs:
artifacts:
- name: finalResult
path: /perf-out
# It is possible to disable tar.gz archiving by setting the archive strategy to 'none'
# Disabling archiving has the following limitations on S3: symbolic links will not be
# uploaded, as S3 does not support the concept/file mode of symlinks.
# this will work on your local machine as well as on any CI system, as the result will be uploaded to S3
archive:
none: {}
s3:
# Use the corresponding endpoint depending on your S3 provider:
# AWS: s3.amazonaws.com
# GCS: storage.googleapis.com
# Minio: my-minio-endpoint.default:9000
endpoint: s3.amazonaws.com
bucket: "{{workflow.parameters.s3BucketName}}"
region: us-west-2
key: results/{{workflow.namespace}}/{{workflow.parameters.uniqueName}}/finalResult/
accessKeySecret:
name: my-s3-credentials
key: accessKey
secretKeySecret:
name: my-s3-credentials
key: secretKey
- name: s3access
metadata:
annotations:
iam.amazonaws.com/role: "k8s-{{workflow.parameters.pfiNamespace}}"
container:
image: "{{workflow.parameters.awscliGatmergeImg}}"
command: [sh, -c]
args: ["sleep 20 && aws s3 ls s3://{{workflow.parameters.s3BucketName}}/results/ && aws s3 cp --recursive --acl authenticated-read 's3://{{workflow.parameters.s3BucketName}}/results/{{workflow.namespace}}/{{workflow.parameters.uniqueName}}' 's3://{{workflow.parameters.s3BucketName}}/results/{{workflow.namespace}}/{{workflow.parameters.uniqueName}}'"]
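Since this template declares its own `spec` with an entrypoint, it can be submitted directly with parameter overrides via `workflowTemplateRef` (supported from Argo Workflows 2.9). The values below are placeholders for illustration only:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: perf-infra-
spec:
  workflowTemplateRef:
    name: distro
  arguments:
    parameters:
      - name: limit
        value: "3"                                  # run 3 parallel test pods
      - name: baseurl
        value: "https://my-service.example.com"     # placeholder target URL
      - name: peakTPS
        value: "10"
```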
By @alexec >= 2.9.0
Check out from Git, then build and test a Golang application.
golang, git
kubectl apply -f https://raw.githubusercontent.com/argoproj-labs/argo-workflows-catalog/master/templates/go-build/manifests.yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
annotations:
workflows.argoproj.io/description: |
Check out from Git, then build and test a Golang application.
workflows.argoproj.io/maintainer: '@alexec'
workflows.argoproj.io/tags: golang, git
workflows.argoproj.io/version: '>= 2.9.0'
name: go-build
spec:
entrypoint: main
arguments:
parameters:
- name: repo
value: https://github.com/argoproj-labs/argo-workflows-catalog.git
- name: branch
value: master
- name: output
value: argo-workflows-catalog
templates:
- name: main
steps:
- - name: checkout
template: checkout
- - name: build
template: build
- - name: test
template: test
- name: checkout
script:
image: golang:1.14
workingDir: /work
args:
- sh
# use --depth 1 and --single-branch for fastest possible checkout
source: git clone --depth 1 --single-branch --branch {{workflow.parameters.branch}} {{workflow.parameters.repo}} .
volumeMounts:
- mountPath: /work
name: work
- name: build
script:
image: golang:1.14
workingDir: /work
args:
- sh
source: go build -o {{workflow.parameters.output}} -v ./...
volumeMounts:
- mountPath: /work
name: work
- name: test
script:
image: golang:1.14
workingDir: /work
args:
- sh
source: go test -v ./...
volumeMounts:
- mountPath: /work
name: work
volumeClaimTemplates:
# A shared work volume.
- name: work
metadata:
name: work
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 64Mi
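Since this template defines its own entrypoint and default arguments, a submission only needs to override the parameters that differ. The repository URL below is a placeholder:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: go-build-
spec:
  workflowTemplateRef:
    name: go-build
  arguments:
    parameters:
      - name: repo
        value: https://github.com/your-org/your-app.git   # placeholder repository
      - name: branch
        value: main
      - name: output
        value: your-app                                   # placeholder binary name
```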
By @alexec >= 2.9.0
This is a basic catalog example that simply prints "hello argo!".
example
kubectl apply -f https://raw.githubusercontent.com/argoproj-labs/argo-workflows-catalog/master/templates/hello-world/manifests.yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
annotations:
workflows.argoproj.io/description: This is a basic catalog example that simply
prints "hello argo!".
workflows.argoproj.io/maintainer: '@alexec'
workflows.argoproj.io/tags: example
workflows.argoproj.io/version: '>= 2.9.0'
name: hello-world
spec:
entrypoint: argosay
templates:
- container:
args:
- echo
- hello argo!
command:
- /argosay
image: argoproj/argosay:v2
name: main
name: argosay
By @dcherman >= 2.9.0
This is a workflow template that allows you to run a Jenkinsfile pipeline in a container by using the `jenkins/jenkinsfile-runner` image.
jenkins, ci
kubectl apply -f https://raw.githubusercontent.com/argoproj-labs/argo-workflows-catalog/master/templates/jenkins-pipeline/manifests.yaml
# Jenkins icon https://jenkins.io/
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
annotations:
workflows.argoproj.io/description: |
This is a workflow template that allows you to run a Jenkinsfile pipeline in a
container by using the `jenkins/jenkinsfile-runner` image.
workflows.argoproj.io/maintainer: '@dcherman'
workflows.argoproj.io/tags: jenkins,ci
workflows.argoproj.io/version: '>= 2.9.0'
name: jenkins-pipeline
spec:
entrypoint: run-jenkins-pipeline
templates:
- name: run-jenkins-pipeline
inputs:
artifacts:
- name: jenkinsfile
path: /pipeline/jenkinsfile
- name: plugins
path: /pipeline/plugins.txt
optional: true
parameters:
- name: image
value: jenkins/jenkinsfile-runner
- name: build-number
value: 1
- name: cause
value: "argo-workflow"
- name: job-name
value: "job"
script:
image: "{{inputs.parameters.image}}"
command: [bash]
source: |
set -euox pipefail
ARGS=(--file="/pipeline/jenkinsfile")
ARGS+=(--build-number="{{inputs.parameters.build-number}}")
ARGS+=(--cause="{{inputs.parameters.cause}}")
ARGS+=(--job-name="{{inputs.parameters.job-name}}")
if [[ -f /pipeline/plugins.txt ]]; then
ARGS+=(--plugins="/pipeline/plugins.txt")
fi
/app/bin/jenkinsfile-runner-launcher "${ARGS[@]}"
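A sketch of invoking the template with an inline Jenkinsfile supplied as a raw artifact (the pipeline content is a trivial placeholder; a git or S3 artifact would work equally well):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: jenkins-pipeline-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: pipeline
            templateRef:
              name: jenkins-pipeline
              template: run-jenkins-pipeline
            arguments:
              artifacts:
                - name: jenkinsfile
                  raw:
                    data: |
                      pipeline {
                        agent any
                        stages {
                          stage('Hello') {
                            steps { echo 'Hello from Argo' }
                          }
                        }
                      }
```

The optional `plugins` artifact can be omitted; the script only passes `--plugins` when /pipeline/plugins.txt exists.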
By @descrepes >= 2.9.0
Dump/Restore a PostgreSQL database to/from S3.
postgresql
kubectl apply -f https://raw.githubusercontent.com/argoproj-labs/argo-workflows-catalog/master/templates/pgdump-s3/manifests.yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
annotations:
workflows.argoproj.io/description: Dump/Restore a PostgreSQL database to/from S3.
workflows.argoproj.io/maintainer: '@descrepes'
workflows.argoproj.io/tags: postgresql
workflows.argoproj.io/version: '>= 2.9.0'
name: pgdump-s3
spec:
arguments:
parameters:
- name: postgres_host
value: myhost
- name: postgres_database
value: mydb
- name: s3_endpoint
value: s3.eu-central-1.amazonaws.com
- name: s3_bucket
value: postgres
- name: s3_prefix
value: "YYYY/MM/DD"
templates:
- name: backup
inputs:
parameters:
- name: postgres_host
- name: postgres_database
- name: s3_endpoint
- name: s3_bucket
- name: s3_prefix
container:
image: schickling/postgres-backup-s3
env:
- name: S3_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: mysecret
key: s3_access_key
- name: S3_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: mysecret
key: s3_secret_access_key
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: POSTGRES_HOST
value: "{{inputs.parameters.postgres_host}}"
- name: POSTGRES_DATABASE
value: "{{inputs.parameters.postgres_database}}"
- name: S3_ENDPOINT
value: "{{inputs.parameters.s3_endpoint}}"
- name: S3_BUCKET
value: "{{inputs.parameters.s3_bucket}}"
- name: S3_PREFIX
value: "{{inputs.parameters.s3_prefix}}"
- name: restore
inputs:
parameters:
- name: postgres_host
- name: postgres_database
- name: s3_endpoint
- name: s3_bucket
- name: s3_prefix
container:
image: schickling/postgres-restore-s3
env:
- name: S3_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: mysecret
key: s3_access_key
- name: S3_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: mysecret
key: s3_secret_access_key
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: POSTGRES_HOST
value: "{{inputs.parameters.postgres_host}}"
- name: POSTGRES_DATABASE
value: "{{inputs.parameters.postgres_database}}"
- name: S3_ENDPOINT
value: "{{inputs.parameters.s3_endpoint}}"
- name: S3_BUCKET
value: "{{inputs.parameters.s3_bucket}}"
- name: S3_PREFIX
value: "{{inputs.parameters.s3_prefix}}"
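Both templates read their credentials from a secret named `mysecret` with the four keys shown below. A minimal sketch of that secret (all values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  s3_access_key: "REPLACE_ME"          # placeholder AWS access key ID
  s3_secret_access_key: "REPLACE_ME"   # placeholder AWS secret access key
  username: postgres                   # placeholder database user
  password: "REPLACE_ME"               # placeholder database password
```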
By @dcherman >= 2.9.0
This task sends a simple email to receivers via SMTP server.
sendmail, smtp, email
kubectl apply -f https://raw.githubusercontent.com/argoproj-labs/argo-workflows-catalog/master/templates/sendmail/manifests.yaml
# Credit for this implementation goes to @dlorenc based on the same task in the Tekton Catalog
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
annotations:
workflows.argoproj.io/description: This task sends a simple email to receivers via SMTP server.
workflows.argoproj.io/maintainer: '@dcherman'
workflows.argoproj.io/tags: sendmail,smtp,email
workflows.argoproj.io/version: '>= 2.9.0'
name: sendmail
spec:
entrypoint: sendmail
templates:
- name: sendmail
inputs:
parameters:
- name: sender
- name: body
- name: subject
- name: recipients
- name: smtp-secret
script:
image: python:3.8-alpine
command: [python]
env:
- name: USER
valueFrom:
secretKeyRef:
name: "{{inputs.parameters.smtp-secret}}"
key: user
- name: PASSWORD
valueFrom:
secretKeyRef:
name: "{{inputs.parameters.smtp-secret}}"
key: password
- name: TLS
valueFrom:
secretKeyRef:
name: "{{inputs.parameters.smtp-secret}}"
key: tls
- name: SERVER
valueFrom:
secretKeyRef:
name: "{{inputs.parameters.smtp-secret}}"
key: url
- name: PORT
valueFrom:
secretKeyRef:
name: "{{inputs.parameters.smtp-secret}}"
key: port
source: |
import smtplib, ssl, os
port = os.getenv('PORT')
smtp_server = os.getenv('SERVER')
sender_email = "{{inputs.parameters.sender}}"
receiver_emails = "{{inputs.parameters.recipients}}"
user = os.getenv('USER')
password = os.getenv('PASSWORD')
tls = os.getenv('TLS')
message = f"""\
Subject: {{inputs.parameters.subject}}
To: {receiver_emails}
From: {sender_email}
{{inputs.parameters.body}}"""
print(message)
if tls == 'True':
context = ssl.create_default_context()
server = smtplib.SMTP_SSL(smtp_server, port, context=context)
else:
server = smtplib.SMTP(smtp_server, port)
if password != '':
server.login(user, password)
for receiver in receiver_emails.split(' '):
server.sendmail(sender_email, receiver, message)
server.quit()
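The `smtp-secret` parameter names a secret that must carry the five keys the script reads (`user`, `password`, `tls`, `url`, `port`). A sketch with placeholder values; note that `tls: "True"` (exact string) selects the SMTP_SSL branch:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: smtp-secret
type: Opaque
stringData:
  user: mailer@example.com    # placeholder SMTP user
  password: "REPLACE_ME"      # placeholder; empty string skips login
  tls: "True"                 # "True" uses SMTP_SSL, anything else plain SMTP
  url: smtp.example.com       # placeholder SMTP server
  port: "465"
```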
By @descrepes >= 2.9.0
This workflow template contains a template for sending simple notifications and a template for uploading a file with a header and a footer.
slack
kubectl apply -f https://raw.githubusercontent.com/argoproj-labs/argo-workflows-catalog/master/templates/slack/manifests.yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
annotations:
workflows.argoproj.io/description: >-
This workflow template contains a template for sending simple notifications and a template for uploading a file with a header and a footer.
workflows.argoproj.io/maintainer: '@descrepes'
workflows.argoproj.io/tags: slack
workflows.argoproj.io/version: '>= 2.9.0'
name: slack
spec:
templates:
- name: notify
inputs:
parameters:
- name: channel
- name: message
- name: color
- name: slack-secret
script:
imagePullPolicy: "Always"
image: descrepes/slack-cli:latest
env:
- name: SLACK_CHANNEL
value: "{{inputs.parameters.channel}}"
- name: SLACK_MESSAGE
value: "{{inputs.parameters.message}}"
- name: SLACK_COLOR
value: "{{inputs.parameters.color}}"
- name: SLACK_WEBHOOK
valueFrom:
secretKeyRef:
name: "{{inputs.parameters.slack-secret}}"
key: webhook
command: ["sh"]
source: |
slack-notify
- name: file
inputs:
artifacts:
- name: file
path: /home/log
mode: 0755
parameters:
- name: header
default: ""
- name: footer
default: ""
- name: team
- name: channel
- name: color
default: "3399FF"
- name: slack-secret
script:
imagePullPolicy: "Always"
image: descrepes/slack-cli:latest
env:
- name: SLACK_CHANNEL
value: "{{inputs.parameters.channel}}"
- name: SLACK_COLOR
value: "{{inputs.parameters.color}}"
- name: SLACK_TOKEN
valueFrom:
secretKeyRef:
name: "{{inputs.parameters.slack-secret}}"
key: token
- name: SLACK_WEBHOOK
valueFrom:
secretKeyRef:
name: "{{inputs.parameters.slack-secret}}"
key: webhook
command: ["sh"]
source: |
if [ ! -z "{{inputs.parameters.header}}" ]; then SLACK_MESSAGE="{{inputs.parameters.header}}" slack-notify; fi
slack-cli -d {{inputs.parameters.channel}} -T {{inputs.parameters.team}} -f /home/log
if [ ! -z "{{inputs.parameters.footer}}" ]; then SLACK_MESSAGE="{{inputs.parameters.footer}}" slack-notify; fi
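The `slack-secret` parameter names a secret holding a `webhook` key (used by `notify`) and a `token` key (used by `file`). A placeholder sketch:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: slack-secret
type: Opaque
stringData:
  webhook: https://hooks.slack.com/services/T000/B000/XXXX   # placeholder webhook URL
  token: "REPLACE_ME"                                        # placeholder Slack API token
```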
By @courtneygreif >= 2.9.0
A template for sending slack notifications with workflow status details when the workflow begins and completes.
slack-workflow-notifications
kubectl apply -f https://raw.githubusercontent.com/argoproj-labs/argo-workflows-catalog/master/templates/slack-workflow-notifications/manifests.yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
annotations:
workflows.argoproj.io/description: >-
A template for sending slack notifications with workflow status details when the workflow begins and completes.
workflows.argoproj.io/maintainer: '@courtneygreif'
workflows.argoproj.io/tags: slack-workflow-notifications
workflows.argoproj.io/version: '>= 2.9.0'
name: send-slack
spec:
templates:
- name: send-completion
inputs:
parameters:
- name: WORKFLOW_NAME
- name: WORKFLOW_STATUS
- name: SLACK_CHANNEL
script:
imagePullPolicy: "Always"
image: curlimages/curl
env:
- name: WORKFLOW_ENV # Optional
value: "qa"
- name: WORKFLOW_CLOUD # Optional
value: "aws"
- name: CLOUD_REGION # Optional
value: "us-gov-west-1"
- name: ENV_DOMAIN # Optional
value: "com"
- name: SLACK_TOKEN
value: YOUR_SLACK_TOKEN
command: ["sh"]
# If any optional environment variables above were removed, remove them from the curl as well
source: |
if [ "{{inputs.parameters.WORKFLOW_STATUS}}" = "Succeeded" ]; then
EMOJI=":celebrate:"
COLOR="#00ff00"
else
EMOJI=":failed:"
COLOR="#ff0000"
fi
curl -vvv -X POST -H "Authorization: Bearer $SLACK_TOKEN" \
-H "Content-type: application/json" \
--data "{ 'token': '$SLACK_TOKEN', 'channel': '{{inputs.parameters.SLACK_CHANNEL}}', 'text' : 'Workflow {{inputs.parameters.WORKFLOW_STATUS}}', 'attachments': [{'color': '$COLOR','blocks': [ { 'type': 'section', 'fields': [{ 'type': 'mrkdwn', 'text': ''}] }, { 'type': 'section', 'fields': [ { 'type': 'mrkdwn', 'text': '*Workflow Name:*\n{{inputs.parameters.WORKFLOW_NAME}}' }, { 'type': 'mrkdwn', 'text': '*Workflow Outcome:*\n{{inputs.parameters.WORKFLOW_STATUS}} $EMOJI' } ] } ] } ] }" \
https://slack.com/api/chat.postMessage
- name: send-beginning
inputs:
parameters:
- name: WORKFLOW_NAME
- name: SLACK_CHANNEL
script:
imagePullPolicy: "Always"
image: curlimages/curl
env:
- name: WORKFLOW_ENV # Optional
value: "qa"
- name: WORKFLOW_CLOUD # Optional
value: "aws"
- name: CLOUD_REGION # Optional
value: "us-gov-west-1"
- name: ENV_DOMAIN # Optional
value: "com"
- name: SLACK_TOKEN
value: YOUR_SLACK_TOKEN
command: ["sh"]
# If any optional environment variables above were removed, remove them from the curl as well
source: |
curl -vvv -X POST -H "Authorization: Bearer $SLACK_TOKEN" \
-H "Content-type: application/json" \
--data "{ 'token': '$SLACK_TOKEN', 'channel': '{{inputs.parameters.SLACK_CHANNEL}}', 'text' : 'Workflow {{inputs.parameters.WORKFLOW_NAME}} beginning:star:', 'attachments': [{'color': '#ADD8E6','blocks': [ { 'type': 'section', 'fields': [{ 'type': 'mrkdwn', 'text': ''}] } ] }] }" \
https://slack.com/api/chat.postMessage
---
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
annotations:
workflows.argoproj.io/description: >-
An example of how to add the notifications to your workflow.
workflows.argoproj.io/maintainer: '@courtneygreif'
workflows.argoproj.io/tags: slack-workflow-notifications
workflows.argoproj.io/version: '>= 2.9.0'
name: workflow-template-submittable
spec:
onExit: exit-notification
entrypoint: whalesay-notifications
templates:
- name: whalesay-template
container:
image: hello-world
- name: exit-notification
steps:
- - name: exit-time
templateRef:
name: send-slack
template: send-completion
arguments:
parameters:
- name: WORKFLOW_STATUS
value: "{{workflow.status}}"
- name: WORKFLOW_NAME
value: "{{workflow.name}}"
- name: SLACK_CHANNEL
value: "YOUR_SLACK_CHANNEL_ID"
# template to send a notification, run the workflow, and then send a completion notification
- name: whalesay-notifications
steps:
- - name: current-workflow
template: whalesay-template
- name: workflow-beginning
templateRef:
name: send-slack
template: send-beginning
arguments:
parameters:
- name: WORKFLOW_NAME
value: "{{workflow.name}}"
- name: SLACK_CHANNEL
value: "YOUR_SLACK_CHANNEL_ID"
By @descrepes >= 2.9.0
This workflow runs a basic Terraform plan.
terraform
kubectl apply -f https://raw.githubusercontent.com/argoproj-labs/argo-workflows-catalog/master/templates/terraform-basic/manifests.yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
annotations:
workflows.argoproj.io/description: This workflow runs a basic Terraform plan.
workflows.argoproj.io/maintainer: '@descrepes'
workflows.argoproj.io/tags: terraform
workflows.argoproj.io/version: '>= 2.9.0'
name: terraform-basic
spec:
entrypoint: update
arguments:
parameters:
- name: terraform-docker-image
value: "descrepes/terraform:0.12.9-demo"
- name: terraform-script-repository
value: "https://github.com/descrepes/terraform-argo-vmpool-demo.git"
templates:
- name: plan
inputs:
artifacts:
- name: terraform
path: /home/terraform
git:
repo: "{{workflow.parameters.terraform-script-repository}}"
depth: 1
script:
imagePullPolicy: "Always"
image: "{{workflow.parameters.terraform-docker-image}}"
command: ["sh"]
source: |
cp /home/terraform/*.tf /home/argo/
/bin/terraform init -input=false /home/argo
/bin/terraform plan -parallelism=2 -input=false -no-color -out=/home/argo/tfclientsplan /home/argo >> /tmp/terraform-change.log
outputs:
artifacts:
- name: terraform-plan
path: /home/argo/
archive:
none: {}
- name: terraform-log
path: /tmp/terraform-change.log
archive:
none: {}
- name: apply
inputs:
artifacts:
- name: terraform-plan
path: /home/terraform
script:
imagePullPolicy: "Always"
image: "{{workflow.parameters.terraform-docker-image}}"
command: ["sh"]
source: |
/bin/terraform apply -input=false -parallelism=2 -no-color /home/terraform/tfclientsplan
- name: approve
suspend: {}
- name: update
dag:
tasks:
- name: plan
template: plan
- name: approve
dependencies: [plan]
template: approve
- name: apply
template: apply
dependencies: [plan, approve]
arguments:
artifacts:
- name: terraform-plan
from: "{{tasks.plan.outputs.artifacts.terraform-plan}}"
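Because the `approve` task is a suspend template, the DAG pauses after `plan` until the workflow is resumed (for example with `argo resume <workflow-name>`), and only then does `apply` consume the saved plan. A sketch of submitting it with the defaults:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: terraform-basic-
spec:
  workflowTemplateRef:
    name: terraform-basic
  # Override terraform-docker-image / terraform-script-repository
  # under arguments.parameters if the defaults do not fit.
```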