
Installation of the orchestrator and its components

This subsection provides all the information needed to install and configure the orchestrator and its components.

  • Server Orchestration

  • Redis for Server Orchestrator

  • Nginx for Server Orchestrator

  • MySQL for Server Orchestrator

  • Tarantool for Server Orchestrator (Matcher Service)

  • Push1st for Server Orchestrator

  • ClickHouse for Server Orchestrator

1. Installation of components and dependencies:

  • redis-server, the Redis in-memory data store used by the orchestrator

  • mysql-server mysql-client python3-dev default-libmysqlclient-dev build-essential, MySQL and its dependencies

  • nginx, HTTP server and reverse proxy server (if necessary)

  • python3-virtualenv, library for creating a virtual environment

CODE
#!/bin/bash 
sudo apt update && sudo apt install -y apt-transport-https \
                                       ca-certificates \
                                       dirmngr \
                                       redis-server \
                                       mysql-server \
                                       mysql-client \
                                       python3-dev \
                                       default-libmysqlclient-dev \
                                       build-essential \
                                       nginx \
                                       python3-virtualenv \
                                       python3-pip
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754
echo "deb https://packages.clickhouse.com/deb stable main" | sudo tee /etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
sudo apt-get install -y clickhouse-server clickhouse-client
sudo service clickhouse-server start

2. Clone the orchestrator project from the repository.

BASH
#!/bin/bash
# Clone via SSH (requires a configured Bitbucket SSH key):
git clone git@bitbucket.org:<company>/<xxxx>-analytics-orchestrator-server.git /opt/analytics-orchestrator-server

# Or clone a specific release over HTTPS:
git clone --branch 22.09.1.0 https://<company>_jenkins:d8HD3xMVeRffqUR2UNn4@bitbucket.org/<company>/<xxxx>-analytics-orchestrator-server.git /opt/analytics-orchestrator-server

3. Create a virtual environment venv in the project root and activate it.

BASH
#!/bin/bash
cd /opt/analytics-orchestrator-server
virtualenv venv
source venv/bin/activate

4. Install the dependent packages with pip install -r requirements.txt (run from the activated venv).

BASH
#!/bin/bash
cd /opt/analytics-orchestrator-server
pip install -r requirements.txt

5. Load the database schema; it is located at server/management/events_collector/db/events_extracted_xxxx.sql in the project.

BASH
#!/bin/bash
clickhouse-client -n < /opt/analytics-orchestrator-server/server/management/events_collector/db/events_extracted_xxxx.sql

6. Create a ClickHouse user (shown as xxxx below). To do this, edit the configuration file /etc/clickhouse-server/users.d/xxxx.xml.

xxxx.xml
XML
<clickhouse>
    <users>
      <xxxx>
          <profile>default</profile>
            <networks>
                  <ip>::/0</ip>
            </networks>
          <password>xxxx</password>
          <quota>default</quota>
      </xxxx>
    </users>
</clickhouse>

6.1. Change the listen_host parameter to 0.0.0.0 for clickhouse-server in the file /etc/clickhouse-server/config.xml.

XML
<!-- Listen specified address.
         Use :: (wildcard IPv6 address), if you want to accept connections both with IPv4 and IPv6 from everywhere.
         Notes:
         If you open connections from wildcard address, make sure that at least one of the following measures applied:
         - server is protected by firewall and not accessible from untrusted networks;
         - all users are restricted to subset of network addresses (see users.xml);
         - all users have strong passwords, only secure (TLS) interfaces are accessible, or connections are only made via TLS interfaces.
         - users without password have readonly access.
         See also: https://www.shodan.io/search?query=clickhouse
      -->
    <listen_host>0.0.0.0</listen_host>

7. Check the operation of the services.

BASH
systemctl restart clickhouse-server.service
systemctl status clickhouse-server.service
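
To additionally confirm that the user created in step 6 can authenticate, you can run a simple query with clickhouse-client (a quick sanity check; use the credentials from xxxx.xml above):

BASH
#!/bin/bash
clickhouse-client --user xxxx --password xxxx --query "SELECT 1"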

8. Install push1st and necessary components/dependencies.

BASH
#!/bin/bash
# Add repository key
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 9A4D3B9B041D12EF0D23694D8222A313EDE992FD
# Add repository to source list and adjust auth
echo "deb [arch=amd64] https://nexus.<company>.com/repository/ubuntu-universe/ universe main" | sudo tee /etc/apt/sources.list.d/push1st.list
echo "machine nexus.<company>.com/repository login public password public" | sudo tee /etc/apt/auth.conf.d/nexus.<company>.com.conf
sudo apt update && sudo apt install -y push1st
# there may be issues with the push1st version
# additional dependencies (If necessary, this is optional)
sudo luarocks install lua-cjson2

9. Configure the push1st service.

BASH
#!/bin/bash
nano /opt/<company>/push1st/server.yml 
server.yml
YML
# ssl forward declaration
ssl:
    - &default-ssl
      key: # path to SSL private key file
      cert: # path to SSL cert file

# channels configuration section
server:
    proto: [ pusher, websocket ]       # enabled proto
    threads: 5                  # number of worker threads
    max-request-payload: 65400  # global max request payload length less or equal 65400
    listen: tcp://*:6003   # false, tcp://<host>:<port>
    ssl: { enable: false, *default-ssl }
    app-path: app/  # <proto>/<app-path>/<app-key> url
    pusher:
        path: /pusher/
        activity-timeout: 40    # pusher activity timeout (pusher ping\pong) N seconds
        whitelist: []
    websocket:
        path: /ws/
        activity-timeout: 3600    # ws activity timeout (pusher ping\pong) N seconds
        push: { public, private, presence } # enable\disable push functionality on channels
        whitelist: []
    mqtt:
        path: /mqtt/
        activity-timeout: 3600    # ws activity timeout (pusher ping\pong) N seconds
        push: { public, private, presence } # enable\disable push functionality on channels
        whitelist: []

#cluster:
#    listen: disable # udp://<host>:<port>, multicast://<multicast-group-address>:<port>/<bind-iface-ip-address>
#    ping-interval: 30 # 0 - to disable ping
#    listen: udp://*:8001 # strongly recommended bind to internal IP or close port with iptables
#    family: [ node1.push1st.local, node2.push1st.local ]
#    sync: [ register, unregister, join, leave, push ]
#    module: lua://modules/cluster.lua # cluster module

api:
    keep-alive-timeout: 10          # http api keep-alive connection timeout
    interface: pusher
    ssl: { enable: false, *default-ssl }
    path: /apps/
    whitelist: []
    listen: [ tcp://*:6002/ ]

credentials:
    - apps/*.yml

10. Configure integrations with the video analytics orchestrator.

BASH
#!/bin/bash
nano /opt/<company>/push1st/apps/orchestrator.yml
orchestrator.yml
YML
orchestrator:
  enable: true
  name: "Orchestrator"
  key: "app-key"
  secret: "secret"
  options: { client-messages: true, statistic: false }
  channels: [ public, private, presence ]
  origins: [ ]
  hook:
    trigger:
        - register # hook on register channel
        - unregister # hook on unregister channel
        - join # hook on subscribe to channel
        - leave # hook on leave channel
#        - push # hook on subscriber push message to channel, may be increase message delivery latency
    http-pipelining: false
    endpoint:
        - http://<IP-orchestrator-server>:8000/api/events/
#        - lua://modules/hook.lua        

11. Launch the service and check its operation.

BASH
#!/bin/bash
systemctl restart push1st.service 
systemctl status push1st.service
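
You can also confirm that push1st is listening on the ports configured in server.yml (6003 for the websocket/pusher listener, 6002 for the HTTP API):

BASH
#!/bin/bash
ss -tlnp | grep -E ':(6002|6003)'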

12. Install Tarantool version 2.8.

BASH
#!/bin/bash
curl -L https://tarantool.io/XWleucj/release/2.8/installer.sh | bash
sudo apt update && sudo apt install -y tarantool

13. Check the operation of the services.

BASH
#!/bin/bash
systemctl status tarantool

In the tarantool_app directory there is a file: tarantool_service.lua. Copy it to the directory /etc/tarantool/instances.enabled:

CODE
sudo cp tarantool_app/tarantool_service.lua /etc/tarantool/instances.enabled/tarantool_service.lua

In the copied file, specify the path to the working directory with lua scripts (parameter work_dir):

BASH
nano /etc/tarantool/instances.enabled/tarantool_service.lua

tarantool_service.lua
LUA
box.cfg {
   listen = 3301,
   background = true,
   log = '/var/log/tarantool/vectors.log',
   work_dir = '/opt/analytics-orchestrator-server/tarantool_app'
}

local app = require('app').run()

Next, start the service:

CODE
sudo tarantoolctl start tarantool_service.lua

Check the operation of the service:

CODE
sudo tarantoolctl status tarantool_service

The /var/log/tarantool directory contains the log file (vectors.log).
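
As an additional check, you can open a console to the running instance on the port from box.cfg and inspect the log tail:

CODE
# interactive console on the configured port (exit with Ctrl+D)
sudo tarantoolctl connect 3301
# recent log entries
tail -n 20 /var/log/tarantool/vectors.log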

14. In MySQL, create a database: CREATE DATABASE orchestrator_db CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;.

BASH
#!/bin/bash
mysql -u root -e "CREATE DATABASE orchestrator_db CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;"

# If it is a container installation and the database for the CC and analytics is the same,
# it can be created from the backend container with the following command:
docker exec <backend-container-name-or-id> mysql --protocol=TCP -u root -pmysql -h mysql-server --execute="CREATE DATABASE orchestrator_db CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;"

15. Make a copy of the example.env file and name it .env. Edit the settings for MySQL, Redis, and Tarantool in this file.

BASH
#!/bin/bash
cd /opt/analytics-orchestrator-server
cp example.env .env
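
The actual variable names are defined in example.env; the snippet below is only a sketch of the kind of values that typically need adjusting, and all keys shown here are hypothetical placeholders:

CODE
# hypothetical keys -- check example.env for the real names
MYSQL_HOST=127.0.0.1
MYSQL_PORT=3306
MYSQL_DATABASE=orchestrator_db
MYSQL_USER=root
MYSQL_PASSWORD=<password>
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
TARANTOOL_HOST=127.0.0.1
TARANTOOL_PORT=3301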

16. Create a logs folder for the project at /var/log/django.

BASH
#!/bin/bash
mkdir -p /var/log/django

17. To run migrations, create an account for accessing the orchestrator's administration interface, and load fixtures, run:

CODE
#!/bin/bash
cd /opt/analytics-orchestrator-server
source venv/bin/activate
python manage.py seed

18. Configure the orchestrator's administration interface

CODE
#!/bin/bash
cd /opt/analytics-orchestrator-server
#Generate the frontend for the orchestrator's administration interface
python manage.py collectstatic

The last command prints the path to the folder containing the orchestrator's frontend. Specify this path in the nginx settings (django-backend-nginx); see step 21.

Access to the orchestrator's administration interface: http://<IP-orchestrator-server>/admin/

19. In the project's root directory, there is a deploy folder containing 5 services. The services need to be edited to specify the correct paths to the project and environment.

celery.service
CODE
[Unit]
Description=Celery Simple Dev Service
After=network.target

[Service]
#Type=forking
User=root
Group=root
Environment=CELERY_BIN=/opt/analytics-orchestrator-server/venv/bin/celery
WorkingDirectory=/opt/analytics-orchestrator-server/
ExecStart=/bin/sh -c '${CELERY_BIN} -A settings worker -B'
Restart=always

[Install]
WantedBy=multi-user.target
django.service
CODE
[Unit]
Description=Django REST API application
After=network.target

[Service]
RestartSec=5
WorkingDirectory=/opt/analytics-orchestrator-server/
ExecStart=/opt/analytics-orchestrator-server/venv/bin/gunicorn -b 0.0.0.0:8000 -w 4 --access-logfile /var/log/django/dj_access.log --error-logfile /var/log/django/dj_error.log settings.wsgi:application
Restart=always
User=root
Group=root

[Install]
WantedBy=multi-user.target
stats_processor.service
CODE
[Unit]
Description=Process statistics from Server and Binaries
After=network.target

[Service]
#Type=forking
User=root
Group=root
WorkingDirectory=/opt/analytics-orchestrator-server/
ExecStart=/opt/analytics-orchestrator-server/venv/bin/python manage.py stats_processor
Restart=always

[Install]
WantedBy=multi-user.target
matcher.service
CODE
[Unit]
Description=Matcher python application
After=network.target

[Service]
RestartSec=5
WorkingDirectory=/opt/analytics-orchestrator-server/
ExecStart=/opt/analytics-orchestrator-server/venv/bin/python manage.py matcher
Restart=always
User=root
Group=root

[Install]
WantedBy=multi-user.target
events_collector.service
CODE
[Unit]
Description=Events collector python application
After=network.target

[Service]
RestartSec=5
WorkingDirectory=/opt/analytics-orchestrator-server/
ExecStart=/opt/analytics-orchestrator-server/venv/bin/python manage.py events_collector
Restart=always
User=root
Group=root

[Install]
WantedBy=multi-user.target

20. After that, link the service files into /etc/systemd/system/, run systemctl daemon-reload, and then enable and start the services.

BASH
#!/bin/bash
ln -s /opt/analytics-orchestrator-server/deploy/celery.service  /etc/systemd/system/celery.service
ln -s /opt/analytics-orchestrator-server/deploy/django.service  /etc/systemd/system/django.service
ln -s /opt/analytics-orchestrator-server/deploy/stats_processor.service  /etc/systemd/system/stats_processor.service
ln -s /opt/analytics-orchestrator-server/deploy/matcher.service  /etc/systemd/system/matcher.service
ln -s /opt/analytics-orchestrator-server/deploy/events_collector.service  /etc/systemd/system/events_collector.service
systemctl daemon-reload
systemctl enable stats_processor.service
systemctl start stats_processor.service
systemctl enable celery.service
systemctl start celery.service 
systemctl enable django.service
systemctl start django.service
systemctl enable matcher.service
systemctl start matcher.service
systemctl enable events_collector.service
systemctl start events_collector.service

21. The same deploy folder also contains an nginx configuration file. Copy it to /etc/nginx/sites-available/, create a link to it in /etc/nginx/sites-enabled/, and then restart the nginx service.

django-backend-nginx
CODE
upstream django {
    server 127.0.0.1:8000 fail_timeout=0;
}

server {
    charset utf-8;
    client_max_body_size 128M;

    listen 80; ## listen for ipv4

    server_name _;

    location ^~ /static/ {
     alias /opt/analytics-orchestrator-server/static/;
    }

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header Host $host:80;
        add_header 'Access-Control-Expose-Headers' 'X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset' always;
        add_header 'Access-Control-Allow-Origin' '*' always;
        add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,hl,X-Client' always;
        add_header 'Access-Control-Allow-Methods' 'PUT, PATCH, GET, POST, DELETE, OPTIONS' always;
        # include /etc/nginx/uwsgi_params;
        # uwsgi_pass django;
        if ($request_method = OPTIONS ) {
          add_header Content-Length 0;
          add_header 'Access-Control-Allow-Origin' '*' always;
          add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,hl,X-Client' always;
          add_header 'Access-Control-Expose-Headers' 'X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset' always;
          add_header Content-Type text/plain;
          add_header 'Access-Control-Allow-Methods' 'PUT, GET, PATCH, POST, DELETE, OPTIONS' always;
          return 200;
        }

        proxy_pass http://django;
    }
}
BASH
#!/bin/bash
cp /opt/analytics-orchestrator-server/deploy/django-backend-nginx /etc/nginx/sites-available/
ln -s /etc/nginx/sites-available/django-backend-nginx /etc/nginx/sites-enabled/django-backend-nginx
systemctl restart nginx.service
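
If nginx fails to restart, the configuration can be validated with:

BASH
#!/bin/bash
sudo nginx -t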

22. Check the operation of the services.

BASH
systemctl status mysql
systemctl status redis-server 
systemctl status stats_processor.service
systemctl status events_collector.service 
systemctl status django.service
systemctl status celery.service
systemctl status matcher.service
systemctl status nginx
tarantoolctl status tarantool_service
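
As a final smoke test, you can also request the administration interface from step 18 (replace the placeholder with the orchestrator server address):

BASH
curl -I http://<IP-orchestrator-server>/admin/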

Orchestrator Client

  • Orchestrator Client

  • Supervisor

The orchestrator client must be installed on the host where the analytics executable modules are located, because the executables are launched through supervisor (this is also what currently prevents running the analytics client in a container).

1. Install components and dependencies: supervisor (v. 4.2.2) and python3-virtualenv, a library for creating a virtual environment.

BASH
#!/bin/bash
sudo apt update && sudo apt install -y apt-transport-https ca-certificates dirmngr python3-pip python3-virtualenv supervisor

2. Clone the client from the repository.

BASH
#!/bin/bash
git clone --branch release/22.12.0.0 git@bitbucket.org:<company>/<xxxx>-analytics-orchestrator-client.git /opt/analytics-orchestrator-client

git clone --branch release/22.12.0.0 https://<company>_jenkins:d8HD3xMVeRffqUR2UNn4@bitbucket.org/<company>/<xxxx>-analytics-orchestrator-client.git /opt/analytics-orchestrator-client

3. Create a virtual environment venv at the root of the project and activate it.

BASH
#!/bin/bash
cd /opt/analytics-orchestrator-client
virtualenv venv
source venv/bin/activate

4. From the active venv environment, install dependent packages with pip install -r requirements.txt

BASH
#!/bin/bash
cd /opt/analytics-orchestrator-client
pip install -r requirements.txt

5. In the settings.py file, specify the correct URL for connecting to push1st via WebSocket to send statistics.

/opt/<vms>-analytics-orchestrator-client/settings.py
PY
LISTENER_WEBSOCKET_URL = "ws://<IP-address-push1st>:6003/ws/app/app-key/"
LISTENER_WEBSOCKET_CHANNEL = "stats"

HOST = "0.0.0.0"
PORT = 8800

STATS_PUSH_PERIOD = 2

6. The or-client.service file is located in the deploy folder. Edit it to specify the correct paths to the project and the environment (adjust other settings as needed).

BASH
#!/bin/bash
nano /opt/analytics-orchestrator-client/deploy/or-client.service
or-client.service
BASH
[Unit]
Description=Orchestrator client application
After=network.target

[Service]
RestartSec=5
WorkingDirectory=/opt/analytics-orchestrator-client/
ExecStart=/opt/analytics-orchestrator-client/venv/bin/python main.py
Restart=always
User=root
Group=root

[Install]
WantedBy=multi-user.target

7. After editing the service file, create a link to it in /etc/systemd/system/, run systemctl daemon-reload, and then enable and start the service.

BASH
#!/bin/bash
ln -s /opt/analytics-orchestrator-client/deploy/or-client.service /etc/systemd/system/or-client.service
systemctl daemon-reload
systemctl enable or-client.service
systemctl start or-client.service

8. Check the operation of the services.

BASH
#!/bin/bash
systemctl status or-client.service
systemctl status supervisor.service

Executable module for analytics

  • VSaaS video analytics bin

  • VSaaS video analytics models

The orchestrator client should be installed where the analytics executable module is installed.

IMPORTANT!!! For the executable module and its configuration files to work correctly, they must be located in a specific place, namely:

  • /opt/video-analytics/<xxxx>-video-analytics-cpu.out, the executable module itself

  • /opt/video-analytics/config, for configuration files

1. Clone the analytics executable module from the repository to the /opt/ directory.

BASH
#!/bin/bash
git clone git@bitbucket.org:<company>/xxxx-video-analytics-bin.git /opt/xxxx-video-analytics-bin

git clone https://<company>_jenkins:d8HD3xMVeRffqUR2UNn4@bitbucket.org/<company>/xxxx-video-analytics-bin.git /opt/xxxx-video-analytics-bin

2. Clone the analytics model from the repository to the /opt/ directory.

BASH
#!/bin/bash
git clone git@bitbucket.org:<company>/xxxx-video-analytics-models.git /opt/xxxx-video-analytics-models

git clone https://<company>_jenkins:d8HD3xMVeRffqUR2UNn4@bitbucket.org/<company>/xxxx-video-analytics-models.git /opt/xxxx-video-analytics-models

3. From the root of the analytics executable module project, run the install.sh script.

BASH
#!/bin/bash
cd /opt/xxxx-video-analytics-bin
bash install.sh

The result of executing the script will be the /opt/video-analytics/ directory. It will contain the executable module itself, models, configurations, and necessary dependencies.
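
A quick way to confirm that the layout matches the paths listed above:

BASH
#!/bin/bash
ls -l /opt/video-analytics/
ls /opt/video-analytics/config/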

4. Edit the configuration settings to run the analytics executable module. The configurations are located in the /opt/video-analytics/config directory; change the events_endpoint parameter to the push1st WebSocket URL in all .ini files.

CODE
sed -i -e "s/10.1.16.238:6001/<push1st_ip_address>:6003/" /opt/video-analytics/config/*

If necessary, edit the range of cores on which binaries will be run:

CODE
sed -i -e "s/cores.*/cores = <cores_range>/" /opt/video-analytics/config/*
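
To verify the substitutions, check the affected parameters in the configuration files:

CODE
grep -H "events_endpoint" /opt/video-analytics/config/*
grep -H "cores" /opt/video-analytics/config/*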

Frontend

Before installation, make sure that you have Node.js installed.

Install Node.js

To do this, use the following commands:

  • sudo apt update

  • sudo apt install nodejs

  • sudo apt install npm
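
You can verify the installation by checking the versions (the exact versions will vary):

CODE
node -v
npm -v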

  1. Clone the remote repository to your computer (access to the repository is provided upon user request)

  2. Switch to the Master branch by running the command git checkout master

  3. Pull the latest changes in this branch by running the command git pull

  4. In the project directory, run the command npm i

  5. In the project root, open the .env file and set the necessary addresses (a hedged example is given after this list)

  6. Then run the command npx react-scripts build

  7. Connect to the server via SSH

  8. Make sure there are no files in the /var/www/html/ directory. If there are files, run the command sudo rm -rf /var/www/html/*

  9. Copy the files from the build folder in the project directory to the /var/www/html/ directory on the server

  10. In the nginx configuration file, add to the end of the http block:

    CODE
    server {
      listen 8080;
      server_name localhost;
      root /var/www/html/;
      location / {
        try_files $uri $uri/ /index.html;
      }
    }

  11. Restart nginx by running the command sudo systemctl restart nginx.service

  12. Check the nginx status by running the command sudo systemctl status nginx.service
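
For step 5, the exact variable names depend on the frontend project; since the build uses react-scripts (Create React App), the variables normally carry the REACT_APP_ prefix. The keys below are hypothetical placeholders, not the project's actual names:

CODE
# hypothetical keys -- check the project's .env for the real names
REACT_APP_API_URL=http://<IP-orchestrator-server>/api/
REACT_APP_WS_URL=ws://<IP-push1st-server>:6003/ws/app/app-key/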
