Commit 92907370 authored by Will JALLET

Update kubedb chart, add Azure's logstash-kibana chart

parent dd6f48a4
Pipeline #4343 passed in 13 seconds
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
apiVersion: v1
name: kibana-logstash
version: 1.0.0
description: Helm chart for Kibana and Logstash
icon: https://www.elastic.co/assets/blt282ae2420e32fc38/icon-kibana-bb.svg
maintainers:
- name: Microsoft Social Engagement
email: msefoundations@microsoft.com
## Introduction
This chart bootstraps [Kibana](https://www.elastic.co/guide/en/kibana/current/index.html) with [Logstash](https://www.elastic.co/guide/en/logstash/current/index.html) on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
## Prerequisites
- Kubernetes 1.8+ (e.g. deployed with [Azure Container Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes))
## Configuration
The following table lists some of the configurable parameters of the `kibana-logstash` chart and their default values:
| Parameter | Description | Default |
| ---------------------------------------------- | ------------------------------------------------------------------- | ---------------------------------------------------------------------- |
| `image.pullPolicy` | General image pull policy | `Always` |
| `image.pullSecrets` | General image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
| `kibana.image.repository` | Kibana image | `docker.elastic.co/kibana/kibana` |
| `kibana.image.tag` | Kibana image tag | `6.2.4` |
| `kibana.replicas` | Number of Kibana instances started | `3` |
| `kibana.ingress.host` | Kibana DNS domain | `nil` (must be provided during installation) |
| `kibana.ingress.public.cert` | Kibana public TLS certificate | `nil` (must be provided during installation) |
| `kibana.ingress.private.key` | Kibana private TLS key | `nil` (must be provided during installation) |
| `logstash.image.repository` | Logstash image | `mseoss/logstash` |
| `logstash.image.tag`                           | Logstash image tag                                                  | `6.2.4`                                                                 |
| `logstash.replicas`                            | Number of Logstash instances started                                | `3`                                                                     |
| `logstash.queue.storageclass`                  | Storage class used for the Logstash queue PV                        | `default`                                                               |
| `logstash.queue.disk_capacity`                 | Disk capacity of the Logstash queue PV                              | `50Gi`                                                                  |
| `stunnel.image.repository`                     | Stunnel image                                                       | `mseoss/stunnel`                                                        |
| `stunnel.image.tag`                            | Stunnel image tag                                                   | `5.44`                                                                  |
| `stunnel.connections.dev.redis.host`           | Address of the Redis where logs for the `dev` environment are cached | `dev-logscache.redis.cache.windows.net`                                |
| `stunnel.connections.dev.redis.port`           | Port of the Redis where logs for the `dev` environment are cached   | `6380`                                                                  |
| `stunnel.connections.dev.redis.key`            | Key of the Redis where logs for the `dev` environment are cached    | `nil` (must be provided during installation)                            |
| `stunnel.connections.dev.local.host`           | Local host to which the Redis connection for `dev` is tunneled      | `127.0.0.1`                                                             |
| `stunnel.connections.dev.local.port`           | Local port to which the Redis connection for `dev` is tunneled      | `6379`                                                                  |
| `oauth.image.repository` | oauth2_proxy image | `mseoss/oauth2_proxy` |
| `oauth.image.tag` | oauth2_proxy image tag | `v2.2` |
| `oauth.client.id` | Azure AD application ID | `nil` (must be provided during installation) |
| `oauth.client.secret` | Azure AD application secret | `nil` (must be provided during installation) |
| `oauth.cookie.secret`                          | Secret used to sign the Kibana SSO cookie                           | `nil` (must be provided during installation)                            |
| `oauth.cookie.expire` | Kibana SSO cookie expiration time | `168h0m` |
| `oauth.cookie.refresh` | Kibana SSO cookie refresh time | `60m` |
| `curator.image.repository` | Curator image | `docker.io/bobrik/curator` |
| `curator.image.tag` | Curator image tag | `latest` |
| `curator.install`                              | Indicates if the curator cron job is created                        | `true`                                                                  |
| `curator.index_prefix`                         | Prefix of the indices over which curator runs                       | `dev` (should match the `stunnel.connections.[env]` prefix)             |
| `templates.image.repository` | Elastic template tool image | `mseoss/elastictemplate` |
| `templates.image.tag` | Elastic template image tag | `latest` |
| `templates.install`                            | Indicates if the elastic template pre-install job is executed       | `true`                                                                  |
| `watcher.image.repository` | Elastic watcher tool image | `mseoss/elasticwatcher` |
| `watcher.image.tag` | Elastic watcher image tag | `latest` |
| `watcher.install`                              | Indicates if the elastic watcher post-install job is executed       | `true`                                                                  |
| `watcher.webhooks.teams`                       | Microsoft Teams webhook (the watcher posts alerts here)             | `nil` (must be provided during installation)                            |
| `watcher.indices`                              | Index prefixes over which watches are executed                      | `"dev-logstash-*"` (env prefix should match `stunnel.connections.[env]`) |
> Note that you can define multiple Redis connections. The Helm chart creates a Logstash data pipeline for each connection.
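For instance, to feed Logstash from two Redis caches, a second connection block can sit next to `dev` in the environment values file. The `prod` entry below is purely illustrative:

```yaml
stunnel:
  connections:
    dev:
      redis:
        host: dev-logscache.redis.cache.windows.net
        port: 6380
      local:
        host: "127.0.0.1"
        port: 6379
    prod:                 # hypothetical second environment
      redis:
        host: prod-logscache.redis.cache.windows.net
        port: 6380
      local:
        host: "127.0.0.1"
        port: 6378        # each tunnel needs its own local port
```

Each connection gets its own stunnel tunnel and Logstash pipeline, and the deploy script fetches its Redis key from KeyVault under the secret name `logstash-<connection>-redis-key`.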
## Installing the Chart
The chart can be installed with the `deploy.sh` script. There are a few arguments which should be provided as input:
- The environment which contains the helm values (default is `acs`)
- A namespace (default is `elk`)
- The public DNS domain used by Kibana
- The name of the Azure KeyVault where the secrets are stored
```console
./deploy.sh -e acs -n elk -d my.kibana.domain.com -v keyvault-name
```
## Uninstalling the Chart
The chart can be uninstalled/deleted as follows:
```console
helm delete --purge kibana-logstash
```
This command removes all the Kubernetes resources associated with the chart and deletes the helm release.
## Validate the Chart
### Lint
You can validate that the chart has no lint warnings during development.
```console
helm lint -f environments/acs/values.yaml
```
### Template rendering
You can check that the chart renders properly using the `helm template` command. A dry-run mode is built into the deployment script; just execute it with the `-t` option:
```console
./deploy.sh -t -n elk
```
## Scale up the Logstash nodes
The Logstash nodes can easily be scaled up or down with the following command:
```console
kubectl scale --namespace elk statefulset/logstash --replicas 6
```
client:
hosts:
- elasticsearch
port: 9200
url_prefix:
use_ssl: False
certificate:
client_cert:
client_key:
aws_key:
aws_secret_key:
aws_region:
ssl_no_validate: False
http_auth:
timeout: 30
master_only: False
logging:
loglevel: INFO
logfile:
logformat: default
blacklist: ['elasticsearch', 'urllib3']
pipeline:
workers: 20
batch:
size: 125
delay: 5
path:
queue: "/data/queue"
path.config: "/usr/share/logstash/pipeline/logstash.conf"
xpack:
monitoring:
elasticsearch:
url:
- "http://elasticsearch:9200"
username: ""
password: ""
email@example.com
{
"templates": [
{
"name": "logstash_template",
"body": {
"order": 0,
"template": "*-logstash-*",
"settings": {
"index.mapping.ignore_malformed": "true",
"routing.allocation.total_shards_per_node": 2
},
"mappings": {
"_default_": {
"dynamic": true
},
"logstash-input": {
"properties": {
"@timestamp": {
"type": "date"
},
"@version": {
"type": "long"
},
"count": {
"type": "long"
},
"duration": {
"type": "long"
},
"id": {
"type": "long"
},
"logger_name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"message": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"method": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
}
}
}
]
}
#!/bin/bash
# Copyright (c) Microsoft and contributors. All rights reserved.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
# Include some common functions
current_dir="$(dirname $0)"
source "$current_dir/../../scripts/util.sh"
source "$current_dir/../../scripts/keyvault.sh"
# Parse YAML file
#
# Based on https://gist.github.com/briantjacobs/7753bf
function parse_yaml() {
local prefix=$2
local s
local w
local fs
s='[[:space:]]*'
w='[a-zA-Z0-9_]*'
fs="$(echo @|tr @ '\034')"
sed -ne "s|^\($s\)\($w\)$s:$s\"\(.*\)\"$s\$|\1$fs\2$fs\3|p" \
-e "s|^\($s\)\($w\)$s[:-]$s\(.*\)$s\$|\1$fs\2$fs\3|p" "$1" |
awk -F"$fs" '{
indent = length($1)/2;
vname[indent] = $2;
for (i in vname) {if (i > indent) {delete vname[i]}}
if (length($3) > 0) {
vn=""; for (i=0; i<indent; i++) {vn=(vn)(vname[i])("_")}
printf("%s%s%s=(\"%s\")\n", "'"$prefix"'",vn, $2, $3);
}
}' | sed 's/_=/+=/g'
}
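To see what `parse_yaml` produces, here is a self-contained sketch of the same sed/awk pipeline run over a hypothetical values file (the file contents and the `prod` entry are invented for illustration; the optional prefix argument is dropped):

```shell
#!/bin/bash
# Hypothetical sample mirroring the stunnel section of an environment values file.
sample=$(mktemp)
cat > "$sample" <<'EOF'
stunnel:
  connections:
    dev:
      redis:
        host: dev-logscache.redis.cache.windows.net
    prod:
      redis:
        host: prod-logscache.redis.cache.windows.net
EOF

s='[[:space:]]*'
w='[a-zA-Z0-9_]*'
fs="$(echo @|tr @ '\034')"
# Same extraction as parse_yaml: every leaf value becomes a flattened
# shell assignment named after its path in the YAML tree.
parsed=$(sed -ne "s|^\($s\)\($w\)$s:$s\"\(.*\)\"$s\$|\1$fs\2$fs\3|p" \
    -e "s|^\($s\)\($w\)$s[:-]$s\(.*\)$s\$|\1$fs\2$fs\3|p" "$sample" |
  awk -F"$fs" '{
    indent = length($1)/2;
    vname[indent] = $2;
    for (i in vname) {if (i > indent) {delete vname[i]}}
    if (length($3) > 0) {
      vn=""; for (i=0; i<indent; i++) {vn=(vn)(vname[i])("_")}
      printf("%s%s=(\"%s\")\n", vn, $2, $3);
    }
  }')
echo "$parsed"
# stunnel_connections_dev_redis_host=("dev-logscache.redis.cache.windows.net")
# stunnel_connections_prod_redis_host=("prod-logscache.redis.cache.windows.net")

# The deploy script greps these names to recover the list of connections:
clusters=$(echo "$parsed" | grep stunnel_connections | awk -F'_' '{print $3}' | uniq | tr '\n' ' ')
echo "$clusters"
# dev prod
rm -f "$sample"
```

This is exactly how `deploy.sh` later discovers which Redis clusters need keys fetched from KeyVault.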
# Retrieves the Redis keys of all given clusters from Azure KeyVault and configures them as Helm variables
# Arguments:
# $1 - KeyVault name
# $2 - List with Redis cluster names. The access key of a cluster should be stored in KeyVault with the secret
# name 'logstash-${cluster}-redis-key'
function get_redis_keys() {
keyvault=$1
shift
redis_clusters=("$@")
params=""
for cluster in "${redis_clusters[@]}"; do
redis_key_secret="logstash-${cluster}-redis-key"
redis_key=$(get_secret ${keyvault} ${redis_key_secret})
check_rc "Failed to fetch from KeyVault the redis key '${redis_key_secret}'"
params+=" --set stunnel.connections.${cluster}.redis.key=${redis_key}"
done
echo $params
}
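A quick way to see the flags this function builds is to stub out the sourced helpers. The stubs and vault name below are stand-ins for the real `get_secret` and `check_rc` from `scripts/keyvault.sh` and `scripts/util.sh`:

```shell
#!/bin/bash
# Hypothetical stubs: the real get_secret queries Azure KeyVault.
get_secret() { echo "secret-for-$2"; }
check_rc() { :; }

function get_redis_keys() {
keyvault=$1
shift
redis_clusters=("$@")
params=""
for cluster in "${redis_clusters[@]}"; do
redis_key_secret="logstash-${cluster}-redis-key"
redis_key=$(get_secret ${keyvault} ${redis_key_secret})
check_rc "Failed to fetch from KeyVault the redis key '${redis_key_secret}'"
params+=" --set stunnel.connections.${cluster}.redis.key=${redis_key}"
done
echo $params
}

flags=$(get_redis_keys my-vault dev prod)
echo "$flags"
# --set stunnel.connections.dev.redis.key=secret-for-logstash-dev-redis-key --set stunnel.connections.prod.redis.key=secret-for-logstash-prod-redis-key
```

One `--set stunnel.connections.<name>.redis.key=...` flag is appended per cluster, which is how the per-connection Redis keys from the values file end up populated at install time.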
function show_help() {
cat <<EOF
Usage: ${0##*/} [-h] [-t] [-e ENVIRONMENT] -d DOMAIN -v VAULT_NAME
Deploys the Kubernetes Helm chart in a given environment and namespace.
-h display this help and exit
-e ENVIRONMENT environment for which the deployment is performed (e.g. acs)
-d DOMAIN public domain name used by the $CHART_NAME
-n NAMESPACE namespace where the chart will be deployed
-v VAULT_NAME name of the Azure KeyVault where all the secrets and certificates are stored
-t validate only the templates without performing any deployment (dry run)
EOF
}
# Predefined constants
CHART_NAME='kibana-logstash'
ENVIRONMENT='acs'
DOMAIN=''
KEYVAULT_NAME=''
NAMESPACE='elk'
DRY_RUN=false
# Predefined KeyVault secrets names
KIBANA_CERTIFICATE_SECRET='kibana-certificate'
KIBANA_CERTIFICATE_KEY_PASSWORD_SECRET='kibana-certificate-key-password'
KIBANA_OAUTH_COOKIE_SECRET='kibana-oauth-cookie-secret'
KIBANA_OAUTH_CLIENT_ID='kibana-oauth-client-id'
KIBANA_OAUTH_CLIENT_SECRET='kibana-oauth-client-secret'
ELASTICSEARCH_WATCHER_WEBHOOK_TEAMS='elasticsearch-watcher-webhook-teams'
while getopts hd:e:tn:v: opt; do
case $opt in
h)
show_help
exit 0
;;
d)
DOMAIN=$OPTARG
;;
e)
ENVIRONMENT=$OPTARG
;;
t)
DRY_RUN=true
;;
n)
NAMESPACE=$OPTARG
;;
v)
KEYVAULT_NAME=$OPTARG
;;
*)
show_help >&2
exit 1
;;
esac
done
helm_values=" -f environments/${ENVIRONMENT}/values.yaml"
helm_params=""
# Check if the required commands are installed
echo "Checking helm command"
type helm > /dev/null 2>&1
check_rc "helm command not found in \$PATH. Please follow the documentation to install it: https://github.com/kubernetes/helm"
echo "Checking kubectl command"
type kubectl > /dev/null 2>&1
check_rc "kubectl command not found in \$PATH. Please follow the documentation to install it: https://kubernetes.io/docs/tasks/kubectl/install/"
if [[ "$DRY_RUN" = false ]]
then
echo "Checking az command"
type az > /dev/null 2>&1
check_rc "az command not found in \$PATH. Please follow the documentation to install it: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli"
echo "Checking openssl command"
type openssl > /dev/null 2>&1
check_rc "openssl command not found in \$PATH. Please install it and run again this script."
if [[ -z "$KEYVAULT_NAME" ]]
then
echo "Please provide the Azure KeyVault where the secrets and certificates are stored!"
show_help
exit -1
fi
# Set the domain name used by Kibana
if [[ -z "$DOMAIN" ]]
then
echo "Please provide the public domain used by $CHART_NAME!"
show_help
exit -1
fi
helm_params+=" --set kibana.ingress.host=${DOMAIN}"
echo "Retrieving secrets from Azure KeyVault:"
# Fetch form KeyVault the Kibana certificate and private key
echo " Fetching Kibana certificate and private key"
kibana_cert_key_password=$(get_secret $KEYVAULT_NAME $KIBANA_CERTIFICATE_KEY_PASSWORD_SECRET)
check_rc "Failed to fetch from KeyVault the password for Kibana certificate key"
get_cert_and_key $KEYVAULT_NAME $KIBANA_CERTIFICATE_SECRET $DOMAIN $kibana_cert_key_password public_cert private_key
helm_params+=" --set kibana.ingress.public.cert=${public_cert}"
helm_params+=" --set kibana.ingress.private.key=${private_key}"
# Fetch from KeyVault the OAuth2 proxy secrets
echo " Fetching OAuth2 proxy secrets"
kibana_oauth_client_id=$(get_secret $KEYVAULT_NAME $KIBANA_OAUTH_CLIENT_ID)
check_rc "Failed to fetch from KeyVault the Kibana OAuth2 Client ID"
helm_params+=" --set oauth.client.id=${kibana_oauth_client_id}"
kibana_oauth_client_secret=$(get_secret $KEYVAULT_NAME $KIBANA_OAUTH_CLIENT_SECRET)
check_rc "Failed to fetch from KeyVault the Kibana OAuth2 Client Secret"
helm_params+=" --set oauth.client.secret=${kibana_oauth_client_secret}"
kibana_oauth_cookie_secret=$(get_secret $KEYVAULT_NAME $KIBANA_OAUTH_COOKIE_SECRET)
check_rc "Failed to fetch from KeyVault the Kibana OAuth2 Cookie Secret"
helm_params+=" --set oauth.cookie.secret=${kibana_oauth_cookie_secret}"
# Fetch from KeyVault the Watcher secrets
echo " Fetching Elasticsearch Watcher secrets"
elasticsearch_watcher_webhook_teams=$(get_secret $KEYVAULT_NAME $ELASTICSEARCH_WATCHER_WEBHOOK_TEAMS)
check_rc "Failed to fetch from KeyVault the Elasticsearch Watcher webhook teams"
helm_params+=" --set watcher.webhooks.teams=${elasticsearch_watcher_webhook_teams}"
# Fetch from KeyVault the Redis keys
echo " Fetching Redis keys"
redis_connections=$(parse_yaml environments/${ENVIRONMENT}/values.yaml | grep stunnel_connections \
| awk -F'_' '{print $3}' | uniq | tr '\n' ' ')
redis_clusters=(${redis_connections})
helm_params+=" $(get_redis_keys $KEYVAULT_NAME ${redis_clusters[@]})"
fi
# Installing helm chart
echo "Installing $CHART_NAME helm chart"
error=$(mktemp)
output=$(mktemp)
(
if [[ "$DRY_RUN" = true ]]
then
helm template --namespace $NAMESPACE $helm_values $helm_params .
else
helm upgrade -i --timeout 1800 --namespace $NAMESPACE $helm_values $helm_params $CHART_NAME . --wait &> $output
fi
if [ $? -eq 0 ]
then
echo "OK" > $error
else
echo "FAIL" > $error
fi
) &
spinner
if [[ "$(cat $error)" == "FAIL" ]]
then
echo "Fail"
cat $output
exit -1
fi
if [[ "$DRY_RUN" = true ]]
then
echo " Done"
cat $output
else
echo " Done"
fi
image:
pullPolicy: Always
kibana:
image:
repository: docker.elastic.co/kibana/kibana
tag : 6.2.4
replicas: 3
env_var:
ELASTICSEARCH_URL: "http://elasticsearch:9200"
KIBANA_ES_URL: "http://elasticsearch:9200"
XPACK_SECURITY_ENABLED: false
KUBERNETES_TRUST_CERT: false
container:
request:
cpu: "2000m"
mem: "1Gi"
limit:
cpu: "2000m"
mem: "2Gi"
ingress:
host:
public:
cert:
private:
key:
logstash:
image:
repository: mseoss/logstash
tag : 6.2.4
replicas: 3
queue:
storageclass: default
disk_capacity: "50Gi"
env_var:
ELASTICSEARCH_URL: "http://elasticsearch:9200"
KUBERNETES_TRUST_CERT: false
container:
request:
cpu: "1000m"
mem: "1Gi"
limit:
cpu: "1000m"
mem: "2Gi"
stunnel:
image:
repository: mseoss/stunnel
tag : 5.44
connections:
dev:
redis:
host: dev-logscache.redis.cache.windows.net
port: 6380
key:
local:
host: "127.0.0.1"
port: 6379
timeout: 30
container:
request:
cpu: "500m"
mem: "512Mi"
limit:
cpu: "500m"
mem: "2Gi"
oauth:
image:
repository: mseoss/oauth2_proxy
tag : v2.2
cookie:
expire : "168h0m"
refresh: "60m"
secret :
client:
id:
secret:
container:
request:
cpu: "500m"
mem: "512Mi"
limit:
cpu: "500m"
mem: "2Gi"
curator:
install: true
image:
repository: docker.io/bobrik/curator
tag : latest
container:
request:
cpu: "500m"
mem: "512Mi"
limit:
cpu: "500m"
mem: "2Gi"
index_prefix: dev
templates:
install: true
image:
repository: mseoss/elastictemplate
tag: latest
watcher:
install: true
image:
repository: mseoss/elasticwatcher
tag: latest
webhooks:
teams:
indices: "\"dev-logstash-*\""
actions:
1:
action: delete_indices
description: >-
Delete indices older than 30 days (based on index name), for logstash-
prefixed indices. Ignore the error if the filter does not result in an
actionable list of indices (ignore_empty_list) and exit cleanly.
options:
ignore_empty_list: True
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^{{ .Values.curator.index_prefix }}-logstash.*$'
exclude:
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: 30
exclude:
input {
{{- range $environment, $connection := .Values.stunnel.connections }}
redis {
id => {{ $environment | quote }}
host => {{ $connection.local.host | quote }}
port => {{ $connection.local.port | quote }}
batch_count => "5000"
data_type => "list"
key => "logstash"
password => {{ $connection.redis.key | quote }}
type => "logstash-input"
threads => "20"
tags => [{{ $environment | quote }}]
}
{{- end }}
}
filter {
# Example of filter
# if [type] == "logstash-input" {
# if [logger_name] == "org.logger" and [event-name] == "event-name" {
# drop {}
# }
# }
}
output {
{{- range $environment, $connection := .Values.stunnel.connections }}
if {{ $environment | quote }} in [tags] {
if [type] == "haproxy" {
elasticsearch {
hosts => ["elasticsearch:9200"]
index => "{{ $environment }}-haproxy-%{+YYYY.MM.dd}"
manage_template => false
document_type => "haproxy"
}
} else if [type] == "syslog" {
elasticsearch {
hosts => ["elasticsearch:9200"]
index => "{{ $environment }}-syslog-%{+YYYY.MM.dd}"
manage_template => false
document_type => "syslog"
}
} else {
elasticsearch {
hosts => ["elasticsearch:9200"]
index => "{{ $environment }}-logstash-%{+YYYY.MM.dd}"
manage_template => false
document_type => "logstash-input"
}
}
}
{{- end }}
}
## OAuth2 Proxy Config File
## <addr>:<port> to listen on for HTTP/HTTPS clients
http_address = ":4180"
## the OAuth Redirect URL.
redirect_url = "https://{{ .Values.kibana.ingress.host }}/oauth2/callback"
## the http url(s) of the upstream endpoint. If multiple, routing is based on path
upstreams = [
# Kibana service
"http://localhost:5601/"
]
## Log requests to stdout
request_logging = true
## Email Domains to allow authentication for (this authorizes any email on this domain)
authenticated_emails_file = "/authorization/oauth2_emails.cfg"
## The OAuth Client ID, Secret
client_id = "{{ .Values.oauth.client.id }}"
client_secret = "{{ .Values.oauth.client.secret }}"
## Do not pass the user's Basic Auth credentials to the upstream
pass_basic_auth = false
## Cookie Settings
## Name - the cookie name
## Secret - the seed string for secure cookies; should be 16, 24, or 32 bytes
## for use with an AES cipher when cookie_refresh or pass_access_token
## is set
## Domain - (optional) cookie domain to force cookies to (ie: .yourcompany.com)
## Expire - (duration) expire timeframe for cookie
## Refresh - (duration) refresh the cookie when duration has elapsed after cookie was initially set.
## Should be less than cookie_expire; set to 0 to disable.
## On refresh, OAuth token is re-validated.
## (ie: 1h means tokens are refreshed on request 1hr+ after it was set)
## Secure - secure cookies are only sent by the browser of a HTTPS connection (recommended)
## HttpOnly - httponly cookies are not readable by javascript (recommended)
cookie_name = "kibana_sso_cookie"
cookie_secret = "{{ .Values.oauth.cookie.secret }}"
cookie_domain = "{{ .Values.kibana.ingress.host }}"
cookie_expire = "{{ .Values.oauth.cookie.expire }}"
cookie_refresh = "{{ .Values.oauth.cookie.refresh }}"
cookie_secure = true