Monitoring and Logging installation
This document contains instructions for installing the Monitoring and Logging components in the Kubernetes cluster.
Add the Monitoring node to the Kubernetes cluster
To install the Monitoring and Logging components in the Kubernetes cluster, use a separate host, VM3 (Virtual Machine 3), equipped with a local disk to store both the monitoring and logging data.
Monitoring and Logging consist of the following key components:
Grafana
InfluxDB
Prometheus
Loki
Fluent Bit
To add the Monitoring node to the Kubernetes cluster, prepare the host and install Ubuntu 20.04 LTS.
Prepare the host
Prepare the VM3 (Virtual Machine 3) host so that it meets the following requirements (a quick verification sketch follows the list):
8 vCPU
32GB vRAM
vDisk1 100GB (sda), the system disk
vDisk2 500GB (sdb), the data disk for monitoring and logging
1 vNIC with an IP address
OS: Ubuntu 20.04 LTS
A hostname example: k8s-monitoring-xx (xx: 01, 02 … )
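Once the OS is installed, you can quickly check that the host matches these requirements. The commands below are a minimal verification sketch using standard Ubuntu tools; adjust the device names if your disks are enumerated differently:
nproc                      #expect 8
free -g                    #expect about 32 GB total
lsblk /dev/sda /dev/sdb    #expect a ~100GB system disk and a ~500GB data disk
hostnamectl --static       #expect k8s-monitoring-xx
ip -brief address          #confirm the vNIC has an IP address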
Install Ubuntu 20.04 LTS
To install Ubuntu 20.04 LTS, follow these steps:
Install the latest updates for Ubuntu 20.04 LTS:
sudo apt update && sudo apt upgrade -y
Enable SSH key-based access for the admin user by copying SSH keys into ~/.ssh/authorized_keys, for example with ssh-copy-id as shown below.
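A minimal sketch, assuming the admin user is admin_1 and the key pair already exists on the machine you connect from (replace the user name and the VM3 address with your own):
ssh-copy-id -i ~/.ssh/id_rsa.pub admin_1@<vm3-ip-address>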
Enable passwordless sudo access for the admin user. Replace
admin_1
with the actual user:
sudo su
cat << EOF >> /etc/sudoers
admin_1 ALL=(ALL) NOPASSWD:ALL
EOF
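To confirm that passwordless sudo is in place (an optional check, assuming the user name admin_1; run it while still in the root shell):
sudo -l -U admin_1 | grep NOPASSWD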
Reboot the server:
sudo reboot
Add the host to the Kubernetes cluster
All operations are executed on VM1 unless stated otherwise.
On VM3, clone the repository. Replace {TAG_NAME} with one of the available tags (to list all available tags, use git tag):
git clone --branch master https://x-token-auth:ATCTT3xFfGN0Wd33LqQDzvX5huh2U_PgFNJjrEUUl2NRrMROcHm_g1nC7qQBhlNRQTHd4BpnMVepZWai-36xBQsLIu9etx-prmLOolUbGEZNp2IdzfOz_oy-8Fet-zqCcgYpPtyqwWF85_r3fCrN8NR4X2gXrdmXtO2fP1Y9N8W6aE1BDLOze20=687B5226@bitbucket.org/naveksoft/aipix-deploy.git
cd ./aipix-deploy-release/kubernetes/k8s-onprem/
git checkout {TAG_NAME}
Install the Kubernetes base components:
./install_kube_base.sh
On VM1, get the registration token and URL:
kubeadm token create --print-join-command
###Example token:
kubeadm join 192.168.205.164:6443 --token 3yeqrm.abnp3yof8vivcbge --discovery-token-ca-cert-hash sha256:b58cfd679a3bb49f444dfe4869fad5e19f4fba87f1d6ae5f20da06c60f51684e
Copy the token and apply it with sudo on VM3:
###Example:
sudo kubeadm join 192.168.205.164:6443 --token 3yeqrm.abnp3yof8vivcbge --discovery-token-ca-cert-hash sha256:b58cfd679a3bb49f444dfe4869fad5e19f4fba87f1d6ae5f20da06c60f51684e
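If the join fails because the token has expired (kubeadm tokens are valid for 24 hours by default), list the existing tokens on VM1 and, if necessary, generate a new one with the command shown above:
kubeadm token list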
On VM1, check if the new node is added with the Ready status:
kubectl get nodes
###Example output:
NAME                STATUS   ROLES    AGE     VERSION
k8s-monitoring-01   Ready    <none>   3m28s   v1.28.2
On VM1, label and taint the new node so that it is used only by Monitoring (replace the k8s-monitoring-01 node name with your node name):
###Example:
kubectl taint nodes k8s-monitoring-01 monitoring:NoSchedule
kubectl label nodes k8s-monitoring-01 monitoring=true
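To verify that the taint and label were applied (an optional check, using the example node name):
kubectl describe node k8s-monitoring-01 | grep -iE 'taints|monitoring'
kubectl get nodes -l monitoring=true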
Prepare the local storage by running the following script on VM3:
cd ~/aipix-deploy-release/kubernetes/k8s-onprem
./prepare_local_storage.sh 1 10 sdb make_fs
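After the script finishes, you can inspect the result on VM3. This is an optional check and assumes the script partitions and formats /dev/sdb for local volumes:
lsblk -f /dev/sdb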
Install the Monitoring and Logging components
All operations are executed on VM1 unless stated otherwise.
To install the Monitoring and Logging components, follow these steps:
On VM1, go to the directory:
cd ~/aipix-deploy-release/kubernetes/k8s-onprem
Copy sources.sh.sample to sources.sh if it was not copied before, and check the configuration of the monitoring section.
For proper Analytics monitoring, the MONITORING variable must be set to yes during the Analytics installation. Update the Analytics installation if necessary.
vim ./sources.sh
#Monitoring parameters (required if monitoring is deployed)
export MONITORING=no #If monitoring is deployed ("yes" or "no")
export PROVISION_DASHBOARDS=yes #If Grafana dashboards are provisioned automatically ("yes" or "no")
export INFLUX_USR=admin #define influxdb admin user
export INFLUX_PSW=0hmSYYaRci6yJblARc6aHbHZ4YelTXTo #define influxdb admin user password (use only letters and numbers)
export INFLUX_TOKEN=2pORp9tDo40Lm32oGUKFLL8r1UuNbgUT #define influxdb API token (use only letters and numbers)
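The password and token values above are examples. You can generate your own alphanumeric values, for instance with openssl (a suggestion only; any generator producing letters and digits works):
openssl rand -hex 16   #generate a value for INFLUX_PSW
openssl rand -hex 16   #generate a separate value for INFLUX_TOKEN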
Prepare the configuration files for Monitoring:
./configure-monitoring.sh
Check the configuration files in the ../monitoring/ folder and make changes if required. To monitor an S3 MinIO deployment, adjust bearer_token: in the job_name: minio-job and job_name: minio-job-bucket jobs in the prometheus-config-map.yaml file.
To get the tokens, run the following commands (in this example, local is the MinIO S3 storage alias name):
mc admin prometheus generate local
mc admin prometheus generate local bucket
Adjust bearer_token: (or the entire MinIO configuration block) in the prometheus-config-map.yaml file (vim ../monitoring/prometheus-config-map.yaml):
- job_name: minio-job
  bearer_token: eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJwcm9tZXRoZXVzIiwic3ViIjoibmF2ZWtzb2Z0IiwiZXhwIjo0ODUwMzk2OTA3fQ.WWCLGWb-usnTqR5aWGDwRpSlgl0VfDvthSSWYxhf3X1UNXCRWoGIZz386Q3KIShzAHipamIoFyf5oR9YCebobg
  metrics_path: /minio/v2/metrics/cluster
  scheme: http
  static_configs:
  - targets: [minio.minio-single.svc:9000]
- job_name: minio-job-bucket
  bearer_token: eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJwcm9tZXRoZXVzIiwic3ViIjoibmF2ZWtzb2Z0IiwiZXhwIjo0ODUwNTIwMjkyfQ.acVDRmB-kLSFCkzIIZAinzTRguHYYKZeM0ebwHHKfSDhjq3xYb4YaNwfHrJKoI4E3z5fTGD7sal4QwF5nxdn-w
  metrics_path: /minio/v2/metrics/bucket
  scheme: http
  static_configs:
  - targets: [minio.minio-single.svc:9000]
Install Monitoring and Logging by running the following script:
./deploy-monitoring.sh
When the installation is complete, the script prints the URLs of the deployed components. You can then connect to Grafana and, if automatic dashboard provisioning is not enabled, import dashboards from the collection located in the ~/aipix-deploy-release/monitoring/grafana-dashboards/ folder of the repository.
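Optionally, verify that the Monitoring and Logging pods have started. This check assumes the components run in the monitoring namespace (the same namespace used for the Grafana deployment in the next section):
kubectl -n monitoring get pods -o wide
kubectl -n monitoring get svc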
Additional steps
These steps are necessary to enable client access to Grafana via HTTPS using an Nginx proxy. In some installations, an external proxy can be used instead.
Adjust the nginx.conf file of the VMS installation (uncomment the lines):
vim ../nginx/nginx.conf
location /monitoring {
    proxy_pass http://grafana.monitoring.svc:3000;
}
Run the script to update the VMS installation and apply changes:
./update-vms.sh
Adjust the Grafana deployment by adding the GF_SERVER_ROOT_URL and GF_SERVER_SERVE_FROM_SUB_PATH environment variables to the container specification:
kubectl -n monitoring edit deployments.apps grafana
...
containers:
- env:
  - name: GF_SERVER_ROOT_URL
    value: https://<domain-url>/monitoring
  - name: GF_SERVER_SERVE_FROM_SUB_PATH
    value: "true"
...
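If you prefer not to edit the deployment interactively, the same variables can be set with kubectl set env; this is an equivalent alternative, assuming the deployment is named grafana in the monitoring namespace (replace <domain-url> with your domain):
kubectl -n monitoring set env deployment/grafana GF_SERVER_ROOT_URL=https://<domain-url>/monitoring GF_SERVER_SERVE_FROM_SUB_PATH=true
kubectl -n monitoring rollout status deployment/grafana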