Commit 0109350c authored by Will JALLET

use the incubator/elastic-stack chart

Microsoft's chart was aimed at the Azure platform
apiVersion: v1
description: A Helm chart for ELK
home: https://www.elastic.co/products
icon: https://www.elastic.co/assets/bltb35193323e8f1770/logo-elastic-stack-lt.svg
name: elastic-stack
version: 0.9.0
appVersion: 6.0
maintainers:
- name: rendhalver
email: pete.brown@powerhrg.com
- name: jar361
email: jrodgers@powerhrg.com
- name: christian-roggia
email: christian.roggia@gmail.com
approvers:
- christian-roggia
- rendhalver
reviewers:
- christian-roggia
- rendhalver
# Elastic-stack Helm Chart
This chart installs an Elasticsearch cluster with Kibana and Logstash by default.
You can optionally disable Logstash and install Fluentd instead. It also optionally installs nginx-ldapauth-proxy and elasticsearch-curator.
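For example, swapping Logstash for Fluentd is a matter of toggling the subchart conditions at install time (a minimal sketch; the flags mirror the `condition` fields declared in `requirements.yaml`):
```bash
$ helm install --name my-release incubator/elastic-stack \
    --set logstash.enabled=false \
    --set fluentd.enabled=true
```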
## Prerequisites Details
* Kubernetes 1.8+
* PV dynamic provisioning support on the underlying infrastructure
## Chart Details
This chart will do the following:
* Implement a dynamically scalable Elasticsearch cluster using Kubernetes StatefulSets/Deployments
* Multi-role deployment: master, client (coordinating) and data nodes
* StatefulSets that support scaling down without degrading the cluster (see the example below)
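For example, the data nodes can be resized like any other StatefulSet (a sketch; the name `my-release-elasticsearch-data` is an assumption based on the usual `<release>-elasticsearch-data` naming):
```bash
$ kubectl scale statefulset my-release-elasticsearch-data --replicas=4
```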
## Installing the Chart
To install the chart with the release name `my-release`:
```bash
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
$ helm install --name my-release incubator/elastic-stack
```
## Deleting the Charts
Delete the Helm deployment as normal:
```bash
$ helm delete my-release
```
Deletion of the StatefulSet doesn't cascade to deleting associated PVCs. To delete them:
```bash
$ kubectl delete pvc -l release=my-release,component=data
```
## Configuration
Each requirement is configured with the options provided by that Chart.
Please consult the relevant charts for their configuration options.
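As a minimal sketch, options for a dependency are nested under that chart's name in the values. For instance, the Kibana `ELASTICSEARCH_URL` from this chart's default values can be overridden at install time (the `my-elasticsearch` host is illustrative):
```bash
$ helm install --name my-release incubator/elastic-stack \
    --set kibana.env.ELASTICSEARCH_URL=http://my-elasticsearch:9200
```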
dependencies:
- name: elasticsearch
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
version: 1.2.0
- name: kibana
repository: https://kubernetes-charts.storage.googleapis.com/
version: 0.6.0
- name: logstash
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
version: 0.6.3
- name: fluentd
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
version: 0.1.4
- name: fluent-bit
repository: https://kubernetes-charts.storage.googleapis.com/
version: 0.6.0
- name: nginx-ldapauth-proxy
repository: https://kubernetes-charts.storage.googleapis.com/
version: 0.1.2
- name: elasticsearch-curator
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
version: 0.2.4
digest: sha256:5a93301ce089e04837d2cd7ef0a547735388460186d9b0de2f62d982831df02b
generated: 2018-07-09T13:53:58.299283141-04:00
dependencies:
- name: elasticsearch
version: ^1.0.0
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
- name: kibana
version: ^0.6.0
repository: https://kubernetes-charts.storage.googleapis.com/
- name: logstash
version: ^0.6.0
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
condition: logstash.enabled
- name: fluentd
version: ^0.1.0
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
condition: fluentd.enabled
- name: fluent-bit
version: ^0.6.0
repository: https://kubernetes-charts.storage.googleapis.com/
condition: fluent-bit.enabled
- name: nginx-ldapauth-proxy
version: ^0.1.0
repository: https://kubernetes-charts.storage.googleapis.com/
condition: nginx-ldapauth-proxy.enabled
- name: elasticsearch-curator
version: ^0.2.0
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
condition: elasticsearch-curator.enabled
The Elasticsearch cluster and associated extras have been installed.
Kibana can be accessed:
* Within your cluster, at the following DNS name at port 5601:
{{ template "kibana.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local
* From outside the cluster, run these commands in the same shell:
{{- if contains "NodePort" .Values.kibana.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "kibana.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.kibana.service.type }}
WARNING: You have likely exposed your Elasticsearch cluster directly to the internet.
Elasticsearch does not implement any security for public-facing clusters by default.
As a minimum level of security, switch to ClusterIP/NodePort and place an Nginx gateway in front of the cluster in order to lock down access to dangerous HTTP endpoints and verbs.
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get svc -w {{ template "kibana.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "kibana.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:5601
{{- else if contains "ClusterIP" .Values.kibana.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "kibana.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:5601 to use Kibana"
kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME 5601:5601
{{- end }}
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "elastic-stack.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "elastic-stack.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
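{{/*
Example (illustrative): with chart name "elastic-stack", a release named "elk"
does not contain the chart name, so fullname renders as "elk-elastic-stack";
a release named "my-elastic-stack" already contains it and is used as-is.
*/}}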
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "elastic-stack.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
# Default values for elk.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
kibana:
env:
ELASTICSEARCH_URL: http://http.default.svc.cluster.local:9200
logstash:
enabled: true
fluentd:
enabled: false
fluent-bit:
enabled: false
nginx-ldapauth-proxy:
enabled: false
# Example config to get it working with ELK. Adjust as you need to.
# proxy:
# port: 5601
# # This is the internal hostname for the kibana service
# host: "elk-kibana.default.svc.cluster.local"
# authName: "ELK:Infrastructure:LDAP"
# ldapHost: "ldap.example.com"
# ldapDN: "dc=example,dc=com"
# ldapFilter: "objectClass=organizationalPerson"
# ldapBindDN: "cn=reader,dc=example,dc=com"
# requires:
# - name: "ELK-USER"
# filter: "cn=elkuser,ou=groups,dc=example,dc=com"
# ingress:
# enabled: true
# hosts:
# - "elk.example.com"
# annotations:
# kubernetes.io/ingress.class: nginx
# tls:
# - hosts:
# - elk.example.com
# secretName: example-elk-tls
# secrets:
# ldapBindPassword: PASSWORD
elasticsearch-curator:
enabled: false
apiVersion: v1
name: kibana-logstash
version: 1.0.0
description: Helm chart for Kibana and Logstash
icon: https://www.elastic.co/assets/blt282ae2420e32fc38/icon-kibana-bb.svg
maintainers:
- name: Microsoft Social Engagement
email: msefoundations@microsoft.com
## Introduction
This chart bootstraps [Kibana](https://www.elastic.co/guide/en/kibana/current/index.html) with [Logstash](https://www.elastic.co/guide/en/logstash/current/index.html) on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
## Prerequisites
- Kubernetes 1.8+ (e.g. deployed with [Azure Container Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes))
## Configuration
The following table lists some of the configurable parameters of the `kibana-logstash` chart and their default values:
| Parameter | Description | Default |
| ---------------------------------------------- | ------------------------------------------------------------------- | ---------------------------------------------------------------------- |
| `image.pullPolicy` | General image pull policy | `Always` |
| `image.pullSecrets` | General image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
| `kibana.image.repository` | Kibana image | `docker.elastic.co/kibana/kibana` |
| `kibana.image.tag` | Kibana image tag | `6.2.4` |
| `kibana.replicas` | Number of Kibana instances started | `3` |
| `kibana.ingress.host` | Kibana DNS domain | `nil` (must be provided during installation) |
| `kibana.ingress.public.cert` | Kibana public TLS certificate | `nil` (must be provided during installation) |
| `kibana.ingress.private.key` | Kibana private TLS key | `nil` (must be provided during installation) |
| `logstash.image.repository` | Logstash image | `mseoss/logstash` |
| `logstash.image.tag`                            | Logstash image tag                                                   | `6.2.4`                                                                  |
| `logstash.replicas`                             | Number of Logstash instances started                                 | `3`                                                                      |
| `logstash.queue.storageclass`                   | Storage class used for Logstash queue PV                             | `default`                                                                |
| `logstash.queue.disk_capacity`                  | Disk capacity of Logstash queue PV                                   | `50Gi`                                                                   |
| `stunnel.image.repository` | Stunnel image | `mseoss/stunnel` |
| `stunnel.image.tag` | Stunnel image tag | `5.44` |
| `stunnel.connections.dev.redis.host` | Address of Redis where the logs for `dev` environment are cached | `dev-logschache.redis.cache.windows.net` |
| `stunnel.connections.dev.redis.port` | Port of Redis where logs for `dev` environment are cached | `6380` |
| `stunnel.connections.dev.redis.key` | Key of Redis where logs for `dev` environment are cached | `nil` (must be provided during installation) |
| `stunnel.connections.dev.local.host`           | Local host where Redis connection for `dev` environment is tunneled | `127.0.0.1`                                                              |
| `stunnel.connections.dev.local.port`           | Local port where Redis connection for `dev` environment is tunneled | `6379`                                                                   |
| `oauth.image.repository` | oauth2_proxy image | `mseoss/oauth2_proxy` |
| `oauth.image.tag` | oauth2_proxy image tag | `v2.2` |
| `oauth.client.id` | Azure AD application ID | `nil` (must be provided during installation) |
| `oauth.client.secret` | Azure AD application secret | `nil` (must be provided during installation) |
| `oauth.cookie.secret`                           | Secret used to sign the Kibana SSO cookie                            | `nil` (must be provided during installation)                             |
| `oauth.cookie.expire` | Kibana SSO cookie expiration time | `168h0m` |
| `oauth.cookie.refresh` | Kibana SSO cookie refresh time | `60m` |
| `curator.image.repository` | Curator image | `docker.io/bobrik/curator` |
| `curator.image.tag` | Curator image tag | `latest` |
| `curator.install` | Indicates if curator cron job is created | `true` |
| `curator.image.index_prefix`                    | Prefix of the index over which curator runs                          | `dev` (should match the `stunnel.connections.[env]` prefix)              |
| `templates.image.repository` | Elastic template tool image | `mseoss/elastictemplate` |
| `templates.image.tag` | Elastic template image tag | `latest` |
| `templates.image.install` | Indicates if elastic template pre-install job is executed | `true` |
| `watcher.image.repository` | Elastic watcher tool image | `mseoss/elasticwatcher` |
| `watcher.image.tag` | Elastic watcher image tag | `latest` |
| `watcher.image.install` | Indicates if elastic watcher post-install job is executed | `true` |
| `watcher.webhooks.teams`                        | Microsoft Teams webhook (the watcher will post alerts here)          | `nil` (must be provided during installation)                             |
| `watcher.indices`                               | Index prefixes where watches will be executed                        | `"dev-logstash-*"` (env prefix should match `stunnel.connections.[env]`) |
> Note that you can define multiple Redis connections. The Helm chart will create a Logstash data pipeline for each connection.
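For example, a second cluster could be declared next to `dev` under `stunnel.connections`, each entry getting its own tunnel and Logstash pipeline. The snippet below only illustrates the shape of such a values override (the `prod` names and file path are hypothetical; the keys mirror the `stunnel.connections.dev.*` parameters above):
```console
cat > environments/acs/values.override.yaml <<'EOF'
stunnel:
  connections:
    prod:
      redis:
        host: prod-logscache.redis.cache.windows.net
        port: 6380
      local:
        host: 127.0.0.1
        port: 6378  # must not collide with the dev tunnel's local port
EOF
```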
## Installing the Chart
The chart can be installed with the `deploy.sh` script. There are a few arguments which should be provided as input:
- The environment which contains the helm values (default is `acs`)
- A namespace (default is `elk`)
- The public DNS domain used by Kibana
- The name of the Azure KeyVault where the secrets are stored
```console
./deploy.sh -e acs -n elk -d my.kibana.domain.com -v keyvault-name
```
## Uninstalling the Chart
The chart can be uninstalled/deleted as follows:
```console
helm delete --purge kibana-logstash
```
This command removes all the Kubernetes resources associated with the chart and deletes the helm release.
## Validate the Chart
### Lint
You can validate that the chart has no lint warnings during development.
```console
helm lint -f environments/acs/values.yaml
```
### Template rendering
You can validate that the chart renders properly using the `helm template` command. A dry-run mode is built into the deployment script; just execute it with the `-t` option:
```console
./deploy.sh -t -n elk
```
## Scale up the Logstash nodes
The Logstash nodes can be scaled up or down with the following command:
```console
kubectl scale --namespace elk statefulset/logstash --replicas 6
```
client:
hosts:
- elasticsearch
port: 9200
url_prefix:
use_ssl: False
certificate:
client_cert:
client_key:
aws_key:
aws_secret_key:
aws_region:
ssl_no_validate: False
http_auth:
timeout: 30
master_only: False
logging:
loglevel: INFO
logfile:
logformat: default
blacklist: ['elasticsearch', 'urllib3']
pipeline:
workers: 20
batch:
size: 125
delay: 5
path:
queue: "/data/queue"
path.config: "/usr/share/logstash/pipeline/logstash.conf"
xpack:
monitoring:
elasticsearch:
url:
- "http://elasticsearch:9200"
username: ""
password: ""
email@example.com
{
"templates": [
{
"name": "logstash_template",
"body": {
"order": 0,
"template": "*-logstash-*",
"settings": {
"index.mapping.ignore_malformed": "true",
"routing.allocation.total_shards_per_node": 2
},
"mappings": {
"_default_": {
"dynamic": true
},
"logstash-input": {
"properties": {
"@timestamp": {
"type": "date"
},
"@version": {
"type": "long"
},
"count": {
"type": "long"
},
"duration": {
"type": "long"
},
"id": {
"type": "long"
},
"logger_name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"message": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"method": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
}
}
}
]
}
#!/bin/bash
# Copyright (c) Microsoft and contributors. All rights reserved.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
# Include some common functions
current_dir="$(dirname "$0")"
source "$current_dir/../../scripts/util.sh"
source "$current_dir/../../scripts/keyvault.sh"
# Parse YAML file
#
# Based on https://gist.github.com/briantjacobs/7753bf
function parse_yaml() {
local prefix=$2
local s
local w
local fs
s='[[:space:]]*'
w='[a-zA-Z0-9_]*'
fs="$(echo @|tr @ '\034')"
sed -ne "s|^\($s\)\($w\)$s:$s\"\(.*\)\"$s\$|\1$fs\2$fs\3|p" \
-e "s|^\($s\)\($w\)$s[:-]$s\(.*\)$s\$|\1$fs\2$fs\3|p" "$1" |
awk -F"$fs" '{
indent = length($1)/2;
vname[indent] = $2;
for (i in vname) {if (i > indent) {delete vname[i]}}
if (length($3) > 0) {
vn=""; for (i=0; i<indent; i++) {vn=(vn)(vname[i])("_")}
printf("%s%s%s=(\"%s\")\n", "'"$prefix"'",vn, $2, $3);
}
}' | sed 's/_=/+=/g'
}
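# Example (illustrative): given YAML input such as
#   stunnel:
#     connections:
#       dev:
#         redis:
#           host: example.redis.cache.windows.net
# parse_yaml prints shell assignments like
#   stunnel_connections_dev_redis_host=("example.redis.cache.windows.net")
# i.e. nested keys flattened with underscores, which is what the Redis
# cluster-name extraction further down relies on.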
# Retrieves the Redis keys of all given clusters from Azure KeyVault and configures them as Helm variables
# Arguments:
# $1 - KeyVault name
# $2 - List with Redis cluster names. The access key of a cluster should be stored in KeyVault with the secret
# name 'logstash-${cluster}-redis-key'
function get_redis_keys() {
keyvault=$1
shift
redis_clusters=("$@")
params=""
for cluster in "${redis_clusters[@]}"; do
redis_key_secret="logstash-${cluster}-redis-key"
redis_key=$(get_secret ${keyvault} ${redis_key_secret})
check_rc "Failed to fetch from KeyVault the redis key '${redis_key_secret}'"
params+=" --set stunnel.connections.${cluster}.redis.key=${redis_key}"
done
echo $params
}
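# Example (illustrative):
#   get_redis_keys my-vault dev prod
# looks up the secrets 'logstash-dev-redis-key' and 'logstash-prod-redis-key'
# and echoes the corresponding --set stunnel.connections.<cluster>.redis.key=...
# flags for helm.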
function show_help() {
cat <<EOF
Usage: ${0##*/} [-h] [-t] [-e ENVIRONMENT] -d DOMAIN -v VAULT_NAME
Deploys the Kubernetes Helm chart in a given environment and namespace.
-h display this help and exit
-e ENVIRONMENT environment for which the deployment is performed (e.g. acs)
-d DOMAIN public domain name used by the $CHART_NAME
-n NAMESPACE namespace where the chart will be deployed
-v VAULT_NAME name of the Azure KeyVault where all the secrets and certificates are stored
-t validate only the templates without performing any deployment (dry run)
EOF
}
# Predefined constants
CHART_NAME='kibana-logstash'
ENVIRONMENT='acs'
DOMAIN=''
KEYVAULT_NAME=''
NAMESPACE='elk'
DRY_RUN=false
# Predefined KeyVault secrets names
KIBANA_CERTIFICATE_SECRET='kibana-certificate'
KIBANA_CERTIFICATE_KEY_PASSWORD_SECRET='kibana-certificate-key-password'
KIBANA_OAUTH_COOKIE_SECRET='kibana-oauth-cookie-secret'
KIBANA_OAUTH_CLIENT_ID='kibana-oauth-client-id'
KIBANA_OAUTH_CLIENT_SECRET='kibana-oauth-client-secret'
ELASTICSEARCH_WATCHER_WEBHOOK_TEAMS='elasticsearch-watcher-webhook-teams'
while getopts hd:e:tn:v: opt; do
case $opt in
h)
show_help
exit 0
;;
d)
DOMAIN=$OPTARG
;;
e)
ENVIRONMENT=$OPTARG
;;
t)
DRY_RUN=true
;;
n)
NAMESPACE=$OPTARG
;;
v)
KEYVAULT_NAME=$OPTARG
;;
*)
show_help >&2
exit 1
;;
esac
done
helm_values=" -f environments/${ENVIRONMENT}/values.yaml"
helm_params=""
# Check if the required commands are installed
echo "Checking helm command"
type helm > /dev/null 2>&1
check_rc "helm command not found in \$PATH. Please follow the documentation to install it: https://github.com/kubernetes/helm"
echo "Checking kubectl command"
type kubectl > /dev/null 2>&1
check_rc "kubectl command not found in \$PATH. Please follow the documentation to install it: https://kubernetes.io/docs/tasks/kubectl/install/"
if [[ "$DRY_RUN" = false ]]
then
echo "Checking az command"
type az > /dev/null 2>&1
check_rc "az command not found in \$PATH. Please follow the documentation to install it: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli"
echo "Checking openssl command"
type openssl > /dev/null 2>&1
check_rc "openssl command not found in \$PATH. Please install it and run again this script."
if [[ -z "$KEYVAULT_NAME" ]]
then
echo "Please provide the Azure KeyVault where the secrets and certificates are stored!"
show_help
exit 1
fi
# Set the domain name used by Kibana
if [[ -z "$DOMAIN" ]]
then
echo "Please provide the public domain used by $CHART_NAME!"
show_help
exit 1
fi
helm_params+=" --set kibana.ingress.host=${DOMAIN}"
echo "Retrieving secrets from Azure KeyVault:"
# Fetch from KeyVault the Kibana certificate and private key
echo " Fetching Kibana certificate and private key"
kibana_cert_key_password=$(get_secret $KEYVAULT_NAME $KIBANA_CERTIFICATE_KEY_PASSWORD_SECRET)
check_rc "Failed to fetch from KeyVault the password for Kibana certificate key"
get_cert_and_key $KEYVAULT_NAME $KIBANA_CERTIFICATE_SECRET $DOMAIN $kibana_cert_key_password public_cert private_key
helm_params+=" --set kibana.ingress.public.cert=${public_cert}"
helm_params+=" --set kibana.ingress.private.key=${private_key}"
# Fetch from KeyVault the OAuth2 proxy secrets
echo " Fetching OAuth2 proxy secrets"
kibana_oauth_client_id=$(get_secret $KEYVAULT_NAME $KIBANA_OAUTH_CLIENT_ID)
check_rc "Failed to fetch from KeyVault the Kibana OAuth2 Client ID"
helm_params+=" --set oauth.client.id=${kibana_oauth_client_id}"
kibana_oauth_client_secret=$(get_secret $KEYVAULT_NAME $KIBANA_OAUTH_CLIENT_SECRET)
check_rc "Failed to fetch from KeyVault the Kibana OAuth2 Client Secret"
helm_params+=" --set oauth.client.secret=${kibana_oauth_client_secret}"
kibana_oauth_cookie_secret=$(get_secret $KEYVAULT_NAME $KIBANA_OAUTH_COOKIE_SECRET)
check_rc "Failed to fetch from KeyVault the Kibana OAuth2 Cookie Secret"
helm_params+=" --set oauth.cookie.secret=${kibana_oauth_cookie_secret}"
# Fetch from KeyVault the Watcher secrets
echo " Fetching Elasticsearch Watcher secrets"
elasticsearch_watcher_webhook_teams=$(get_secret $KEYVAULT_NAME $ELASTICSEARCH_WATCHER_WEBHOOK_TEAMS)
check_rc "Failed to fetch from KeyVault the Elasticsearch Watcher webhook teams"
helm_params+=" --set watcher.webhooks.teams=${elasticsearch_watcher_webhook_teams}"
# Fetch from KeyVault the Redis keys
echo " Fetching Redis keys"
redis_connections=$(parse_yaml environments/${ENVIRONMENT}/values.yaml | grep stunnel_connections \
| awk -F'_' '{print $3}' | uniq | tr '\n' ' ')
redis_clusters=(${redis_connections})
helm_params+=" $(get_redis_keys $KEYVAULT_NAME ${redis_clusters[@]})"
fi
# Installing helm chart
echo "Installing $CHART_NAME helm chart"
error=$(mktemp)
output=$(mktemp)
(
if [[ "$DRY_RUN" = true ]]
then
helm template --namespace $NAMESPACE $helm_values $helm_params .
else
helm upgrade -i --timeout 1800 --namespace $NAMESPACE $helm_values $helm_params $CHART_NAME . --wait &> $output
fi
if [ $? -eq 0 ]
then
echo "OK" > $error
else
echo "FAIL" > $error
fi
) &
spinner
if [ "$(cat $error)" == "FAIL" ]