
Monitoring and Logging installation

This document contains instructions for installing the Monitoring and Logging components in the Kubernetes cluster.

Add the Monitoring node to the Kubernetes cluster

To install the Monitoring and Logging components in the Kubernetes cluster, use a separate host, VM3 (Virtual Machine 3), equipped with a local disk to store both the monitoring and logging data.

Monitoring and Logging consist of the following key components:

  • Grafana

  • InfluxDB

  • Prometheus

  • Loki

  • Fluent Bit

To add the Monitoring node to the Kubernetes cluster, prepare the host and install Ubuntu 20.04 LTS.

Prepare the host

Prepare the host VM3 (Virtual Machine 3) so that it matches the following requirements:

  • 8 vCPU

  • 32GB vRAM

  • vDisk1 100GB (sda), the system disk

  • vDisk2 500GB (sdb), used to store monitoring and logging data

  • 1 vNIC, IP address

  • OS: Ubuntu 20.04 LTS

  • A hostname example: k8s-monitoring-xx (xx: 01, 02 … )
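The requirements above can be sanity-checked on the host with a short sketch (the expected values in the comments mirror the sizing above):

```shell
# Pre-flight check on VM3; expected values follow the sizing above
echo "vCPUs:  $(nproc)"                               # expect 8
echo "RAM GB: $(free -g | awk '/^Mem:/{print $2}')"   # expect ~32
lsblk -d -o NAME,SIZE,TYPE                            # expect sda 100G and sdb 500G
hostname                                              # expect k8s-monitoring-xx
```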

Install Ubuntu 20.04 LTS

To install Ubuntu 20.04 LTS, follow these steps:

  1. Install the latest updates for Ubuntu 20.04 LTS:

CODE
sudo apt update && sudo apt upgrade -y 
  2. Enable SSH key-based access for the admin user by copying SSH keys into ~/.ssh/authorized_keys.

  3. Enable passwordless sudo for the admin user. Replace admin_1 with the actual username:

CODE
sudo su 
CODE
cat << EOF >> /etc/sudoers
admin_1 ALL=(ALL) NOPASSWD:ALL
EOF
  4. Reboot the server.

CODE
sudo reboot
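Before moving on, you can verify on VM3 that the key-based access and passwordless sudo configured above actually work (a quick check, run as the admin user):

```shell
# Verify SSH key-based access and passwordless sudo for the admin user
test -s ~/.ssh/authorized_keys && echo "authorized_keys present"
sudo -n true && echo "passwordless sudo OK"
```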

Add the host to the Kubernetes cluster

All operations are executed on VM1 unless another host is explicitly specified.

  1. On VM3, clone the repository. Replace {TAG_NAME} with one of the available tags (to list all available tags, run git tag):

CODE
git clone --branch master https://x-token-auth:ATCTT3xFfGN0Wd33LqQDzvX5huh2U_PgFNJjrEUUl2NRrMROcHm_g1nC7qQBhlNRQTHd4BpnMVepZWai-36xBQsLIu9etx-prmLOolUbGEZNp2IdzfOz_oy-8Fet-zqCcgYpPtyqwWF85_r3fCrN8NR4X2gXrdmXtO2fP1Y9N8W6aE1BDLOze20=687B5226@bitbucket.org/naveksoft/aipix-deploy.git
cd ./aipix-deploy-release/kubernetes/k8s-onprem/
git checkout {TAG_NAME}
  2. Install the Kubernetes base components:

CODE
./install_kube_base.sh
  3. On VM1, get the registration token and URL:

CODE
kubeadm token create --print-join-command
CODE
###Example token:
kubeadm join 192.168.205.164:6443 --token 3yeqrm.abnp3yof8vivcbge --discovery-token-ca-cert-hash sha256:b58cfd679a3bb49f444dfe4869fad5e19f4fba87f1d6ae5f20da06c60f51684e
  4. Copy the token and apply it with sudo on VM3:

CODE
###Example:
sudo kubeadm join 192.168.205.164:6443 --token 3yeqrm.abnp3yof8vivcbge --discovery-token-ca-cert-hash sha256:b58cfd679a3bb49f444dfe4869fad5e19f4fba87f1d6ae5f20da06c60f51684e
  5. On VM1, check that the new node has been added with the Ready status:

CODE
kubectl get nodes
CODE
###Example output:
NAME                   STATUS   ROLES           AGE     VERSION
k8s-monitoring-01      Ready    <none>          3m28s   v1.28.2
  6. On VM1, label and taint the new node so that it is used only by Monitoring (replace the k8s-monitoring-01 node name with your node name):

CODE
###Example:
kubectl taint nodes k8s-monitoring-01 monitoring:NoSchedule
kubectl label nodes k8s-monitoring-01 monitoring=true
  7. Prepare local storage by running the following script on VM3:

CODE
cd ~/aipix-deploy-release/kubernetes/k8s-onprem
./prepare_local_storage.sh 1 10 sdb make_fs
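After the steps above, the join, taint, label, and local storage can be checked from VM1 (a sketch; the node name k8s-monitoring-01 is an example):

```shell
# Confirm the node joined, carries the monitoring taint and label,
# and that prepare_local_storage.sh created local PersistentVolumes on sdb
kubectl get node k8s-monitoring-01 -o wide
kubectl describe node k8s-monitoring-01 | grep -E 'Taints|monitoring=true'
kubectl get pv
```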

Install the Monitoring and Logging components

All operations are executed on VM1 unless another host is explicitly specified.

To install the Monitoring and Logging components, follow these steps:

  1. On VM1, go to the directory:

CODE
cd ~/aipix-deploy-release/kubernetes/k8s-onprem
  2. Copy sources.sh.sample to sources.sh if it was not copied before, and check the configuration of the monitoring section.
    For proper Analytics monitoring, the MONITORING variable must be set to yes during the Analytics installation. Update the Analytics installation if necessary.

CODE
vim ./sources.sh
CODE
#Monitoring parameters (required if monitoring is deployed)
export MONITORING=no                                  #If monitoring is deployed ("yes" or "no")
export PROVISION_DASHBOARDS=yes                       #If Grafana dashboards are provisioned automatically ("yes" or "no")
export INFLUX_USR=admin                               #define influxdb admin user
export INFLUX_PSW=0hmSYYaRci6yJblARc6aHbHZ4YelTXTo    #define influxdb admin user password (use only letters and numbers)
export INFLUX_TOKEN=2pORp9tDo40Lm32oGUKFLL8r1UuNbgUT  #define influxdb API token (use only letters and numbers)
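INFLUX_PSW and INFLUX_TOKEN must contain only letters and numbers. One way to generate such values is sketched below (any alphanumeric generator works; the 32- and 36-character lengths are examples):

```shell
# Generate alphanumeric-only secrets for sources.sh
INFLUX_PSW=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)
INFLUX_TOKEN=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 36)
echo "$INFLUX_PSW"
echo "$INFLUX_TOKEN"
```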
  3. Prepare the configuration files for Monitoring:

CODE
./configure-monitoring.sh
  4. Check the configuration files in the ../monitoring/ folder and make changes if required. To monitor an S3 MinIO deployment, adjust bearer_token: in the minio-job and minio-job-bucket jobs in the prometheus-config-map.yaml file.
    To get the tokens, run the following commands (in this example, local is the MinIO S3 storage alias name):

CODE
mc admin prometheus generate local
CODE
mc admin prometheus generate local bucket

Adjust bearer_token: (or the entire block of MinIO configurations) in the prometheus-config-map.yaml file (vim ../monitoring/prometheus-config-map.yaml):

CODE
  - job_name: minio-job
    bearer_token: eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJwcm9tZXRoZXVzIiwic3ViIjoibmF2ZWtzb2Z0IiwiZXhwIjo0ODUwMzk2OTA3fQ.WWCLGWb-usnTqR5aWGDwRpSlgl0VfDvthSSWYxhf3X1UNXCRWoGIZz386Q3KIShzAHipamIoFyf5oR9YCebobg
    metrics_path: /minio/v2/metrics/cluster
    scheme: http
    static_configs:
      - targets: [minio.minio-single.svc:9000]
  - job_name: minio-job-bucket
    bearer_token: eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJwcm9tZXRoZXVzIiwic3ViIjoibmF2ZWtzb2Z0IiwiZXhwIjo0ODUwNTIwMjkyfQ.acVDRmB-kLSFCkzIIZAinzTRguHYYKZeM0ebwHHKfSDhjq3xYb4YaNwfHrJKoI4E3z5fTGD7sal4QwF5nxdn-w
    metrics_path: /minio/v2/metrics/bucket
    scheme: http
    static_configs:
      - targets: [minio.minio-single.svc:9000]
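To confirm a token works before deploying, you can extract it from the mc output and query the metrics endpoint directly (a sketch; the service address minio.minio-single.svc:9000 matches the targets above, and the awk field assumes the scrape-config format that mc prints):

```shell
# Pull the cluster bearer token out of the generated scrape config
# and check that the MinIO metrics endpoint accepts it
TOKEN=$(mc admin prometheus generate local | awk '/bearer_token/{print $2}')
curl -s -H "Authorization: Bearer $TOKEN" \
  http://minio.minio-single.svc:9000/minio/v2/metrics/cluster | head
```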
  5. Install Monitoring and Logging by running the following script:

CODE
./deploy-monitoring.sh
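Once the script finishes, you can confirm that all components started (a sketch, assuming the components are deployed into the monitoring namespace):

```shell
# Check that Grafana, InfluxDB, Prometheus, Loki, and Fluent Bit pods are up
kubectl -n monitoring get pods -o wide
kubectl -n monitoring get svc
```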

When the installation is complete, you can connect to Grafana and import dashboards from the collection in the ~/aipix-deploy-release/monitoring/grafana-dashboards/ folder of the repository, unless automatic dashboard provisioning is enabled.

At the end of the script output, you will get the URLs of the different components.

Additional steps

These steps are necessary to enable client access to Grafana via HTTPS using an Nginx proxy. In some installations, an external proxy can be used instead.

  1. Adjust the nginx.conf file of the VMS installation (uncomment the lines):

CODE
 vim ../nginx/nginx.conf
CODE

    location /monitoring {
        proxy_pass http://grafana.monitoring.svc:3000;
    }
  2. Run the script to update the VMS installation and apply the changes:

CODE
./update-vms.sh
  3. Adjust the Grafana deployment by adding the GF_SERVER_ROOT_URL and GF_SERVER_SERVE_FROM_SUB_PATH environment variables to the container specification:

CODE
kubectl -n monitoring edit deployments.apps grafana
CODE
...
containers:
- env:
  - name: GF_SERVER_ROOT_URL
    value: https://<domain-url>/monitoring
  - name: GF_SERVER_SERVE_FROM_SUB_PATH
    value: "true"
...
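After saving the edit, you can verify that Grafana rolled out with the new variables and answers under the sub-path (a sketch; replace <domain-url> with your VMS domain):

```shell
# Wait for the restarted Grafana pod, then probe the proxied login page
kubectl -n monitoring rollout status deployment/grafana
curl -sk -o /dev/null -w '%{http_code}\n' https://<domain-url>/monitoring/login
```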

 
