Installation of the orchestrator and its components
The subsection provides all the necessary information for installing and configuring the orchestrator and its components.
Server Orchestration
Redis for Server Orchestrator
Nginx for Server Orchestrator
MySQL for Server Orchestrator
Tarantool for Server Orchestrator (Matcher Service)
Push1st for Server Orchestrator
ClickHouse for Server Orchestrator
1. Installation of components and dependencies:
redis-server
mysql-server mysql-client python3-dev default-libmysqlclient-dev build-essential, MySQL and its dependencies
nginx, HTTP server and reverse proxy server (if necessary)
python3-virtualenv, a utility for creating a virtual environment
#!/bin/bash
sudo apt update && sudo apt install -y apt-transport-https \
ca-certificates \
dirmngr \
redis-server \
mysql-server \
mysql-client \
python3-dev \
default-libmysqlclient-dev \
build-essential \
nginx \
python3-virtualenv \
python3-pip
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754
echo "deb https://packages.clickhouse.com/deb stable main" | sudo tee /etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
sudo apt-get install -y clickhouse-server clickhouse-client
sudo service clickhouse-server start
2. Clone the orchestrator project from the repository.
#!/bin/bash
# Clone via SSH:
git clone git@bitbucket.org:<company>/<xxxx>-analytics-orchestrator-server.git /opt/analytics-orchestrator-server
# Or clone a specific release via HTTPS:
git clone --branch 22.09.1.0 https://<company>_jenkins:d8HD3xMVeRffqUR2UNn4@bitbucket.org/<company>/<xxxx>-analytics-orchestrator-server.git /opt/analytics-orchestrator-server
3. Create a virtual environment venv in the project root and activate it.
#!/bin/bash
cd /opt/analytics-orchestrator-server
virtualenv venv
source venv/bin/activate
4. Install dependent packages with pip install -r requirements.txt.
#!/bin/bash
cd /opt/analytics-orchestrator-server
pip install -r requirements.txt
5. Load the database schema; it is located at server/management/events_collector/db/events_extracted_xxxx.sql in the project.
#!/bin/bash
clickhouse-client -n < /opt/analytics-orchestrator-server/server/management/events_collector/db/events_extracted_xxxx.sql
6. Create a vms user in ClickHouse. To do this, edit the configuration file /etc/clickhouse-server/users.d/xxxx.xml.
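A minimal sketch of such a user definition, assuming ClickHouse's standard users.d override format; the password and access settings below are placeholders for illustration, not values from the source:

```xml
<clickhouse>
    <users>
        <vms>
            <!-- placeholder password: replace with a real one -->
            <password>vms_password</password>
            <networks>
                <ip>::/0</ip>
            </networks>
            <profile>default</profile>
            <quota>default</quota>
        </vms>
    </users>
</clickhouse>
```

Older ClickHouse releases use <yandex> as the root tag instead of <clickhouse>.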
6.1. Change the listen_host parameter to 0.0.0.0 for clickhouse-server in the file /etc/clickhouse-server/config.xml.
<!-- Listen specified address.
Use :: (wildcard IPv6 address), if you want to accept connections both with IPv4 and IPv6 from everywhere.
Notes:
If you open connections from wildcard address, make sure that at least one of the following measures applied:
- server is protected by firewall and not accessible from untrusted networks;
- all users are restricted to subset of network addresses (see users.xml);
- all users have strong passwords, only secure (TLS) interfaces are accessible, or connections are only made via TLS interfaces.
- users without password have readonly access.
See also: https://www.shodan.io/search?query=clickhouse
-->
<listen_host>0.0.0.0</listen_host>
7. Check the operation of the services.
systemctl restart clickhouse-server.service
systemctl status clickhouse-server.service
8. Install push1st
and necessary components/dependencies.
#!/bin/bash
# Add repository key
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 9A4D3B9B041D12EF0D23694D8222A313EDE992FD
# Add repository to source list and adjust auth
echo "deb [arch=amd64] https://nexus.<company>.com/repository/ubuntu-universe/ universe main" | sudo tee /etc/apt/sources.list.d/push1st.list
echo "machine nexus.<company>.com/repository login public password public" | sudo tee /etc/apt/auth.conf.d/nexus.<company>.com.conf
sudo apt update && sudo apt install -y push1st
# possible issues with the push1st version
# additional dependencies (If necessary, this is optional)
sudo luarocks install lua-cjson2
9. Configure the push1st service.
#!/bin/bash
nano /opt/<company>/push1st/server.yml
10. Configure integrations with the video analytics orchestrator.
#!/bin/bash
nano /opt/<company>/push1st/apps/orchestrator.yml
11. Launch the service and check its operation.
#!/bin/bash
systemctl restart push1st.service
systemctl status push1st.service
12. Install Tarantool version 2.8.
#!/bin/bash
curl -L https://tarantool.io/XWleucj/release/2.8/installer.sh | bash
sudo apt update && sudo apt install -y tarantool
13. Check the operation of the services.
#!/bin/bash
systemctl status tarantool
In the tarantool_app directory there is a file tarantool_service.lua. Copy it to the /etc/tarantool/instances.enabled directory:
sudo cp tarantool_app/tarantool_service.lua /etc/tarantool/instances.enabled/tarantool_service.lua
In the copied file, specify the path to the working directory with Lua scripts (the work_dir parameter):
nano /etc/tarantool/instances.enabled/tarantool_service.lua
box.cfg {
listen = 3301,
background = true,
log = '/var/log/tarantool/vectors.log',
work_dir = '/opt/analytics-orchestrator-server/tarantool_app'
}
local app = require('app').run()
Next, start the service:
sudo tarantoolctl start tarantool_service
Check the operation of the service:
sudo tarantoolctl status tarantool_service
The /var/log/tarantool directory contains the log file (vectors.log).
14. In MySQL, create a database: CREATE DATABASE orchestrator_db CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
#!/bin/bash
mysql -u root -e "CREATE DATABASE orchestrator_db CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;"
# If it is a container installation and the database for the CC and analytics
# is the same, it can be created from the backend container with:
docker exec <backend-container-name-or-id> mysql --protocol=TCP -u root -pmysql -h mysql-server --execute="CREATE DATABASE orchestrator_db CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;"
15. Make a copy of the example.env file and name it .env. Edit the settings for MySQL, Redis, and Tarantool in this file.
#!/bin/bash
cd /opt/analytics-orchestrator-server
cp example.env .env
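As a hedged illustration of the kind of values these settings take; the variable names below are assumptions for this sketch, so use the keys actually present in example.env:

```ini
MYSQL_HOST=127.0.0.1
MYSQL_PORT=3306
MYSQL_DATABASE=orchestrator_db
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
TARANTOOL_HOST=127.0.0.1
TARANTOOL_PORT=3301
```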
16. Create a logs folder for the project at /var/log/django.
#!/bin/bash
mkdir -p /var/log/django
17. To run migrations, create an account for accessing the orchestrator's administration interface, and load fixtures, run:
#!/bin/bash
cd /opt/analytics-orchestrator-server
source venv/bin/activate
python manage.py seed
18. Configure the orchestrator's administration interface.
#!/bin/bash
cd /opt/analytics-orchestrator-server
#Generate the frontend for the orchestrator's administration interface
python manage.py collectstatic
The last command prints the path to the folder containing the orchestrator's frontend. Specify this path in the nginx settings (django-backend-nginx); see step 21.
Access the orchestrator's administration interface at http://<IP-orchestrator-server>/admin/
19. In the project's root directory, there is a deploy folder containing 5 services. The services need to be edited to specify the correct paths to the project and environment.
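The path editing can be scripted; the sketch below assumes the unit files contain a "/path/to/project" placeholder, which is an assumption, so check the real files and substitute whatever placeholder they actually use:

```shell
#!/bin/bash
# Rewrite an assumed "/path/to/project" placeholder in every unit file
# so WorkingDirectory, ExecStart, and the venv path all point at the
# actual install location.
fix_unit_paths() {
    local dir="$1"
    sed -i "s|/path/to/project|/opt/analytics-orchestrator-server|g" "$dir"/*.service
}

if [ -d /opt/analytics-orchestrator-server/deploy ]; then
    fix_unit_paths /opt/analytics-orchestrator-server/deploy
fi
```

Review each file afterwards: the five services may also need user, environment-file, or logging settings adjusted by hand.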
20. After that, the services should be placed in /etc/systemd/system/ and the command systemctl daemon-reload should be executed.
#!/bin/bash
ln -s /opt/analytics-orchestrator-server/deploy/celery.service /etc/systemd/system/celery.service
ln -s /opt/analytics-orchestrator-server/deploy/django.service /etc/systemd/system/django.service
ln -s /opt/analytics-orchestrator-server/deploy/stats_processor.service /etc/systemd/system/stats_processor.service
ln -s /opt/analytics-orchestrator-server/deploy/matcher.service /etc/systemd/system/matcher.service
ln -s /opt/analytics-orchestrator-server/deploy/events_collector.service /etc/systemd/system/events_collector.service
systemctl daemon-reload
systemctl enable stats_processor.service
systemctl start stats_processor.service
systemctl enable celery.service
systemctl start celery.service
systemctl enable django.service
systemctl start django.service
systemctl enable matcher.service
systemctl start matcher.service
systemctl enable events_collector.service
systemctl start events_collector.service
21. Next, in the same deploy folder, there is a configuration file for nginx, which needs to be moved to /etc/nginx/sites-available/, with a link to it created in /etc/nginx/sites-enabled/. After that, restart the nginx service.
#!/bin/bash
cp /opt/analytics-orchestrator-server/deploy/django-backend-nginx /etc/nginx/sites-available/
ln -s /etc/nginx/sites-available/django-backend-nginx /etc/nginx/sites-enabled/django-backend-nginx
systemctl restart nginx.service
22. Check the operation of the services.
systemctl status mysql
systemctl status redis-server
systemctl status stats_processor.service
systemctl status events_collector.service
systemctl status django.service
systemctl status celery.service
systemctl status matcher.service
systemctl status nginx
tarantoolctl status tarantool_service
Orchestrator Client
Supervisor
The orchestrator client should be installed on the host where the analytics executable modules are located, since executable files are launched through the supervisor (this also currently blocks running the analytics client in a container).
1. Installing components and dependencies. Install supervisor (v. 4.2.2), choosing a suitable installation option for your system. python3-virtualenv is a utility for creating a virtual environment.
#!/bin/bash
sudo apt update && sudo apt install -y apt-transport-https ca-certificates dirmngr python3-pip python3-virtualenv supervisor
2. Clone the client from the repository.
#!/bin/bash
git clone --branch release/22.12.0.0 git@bitbucket.org:<company>/<xxxx>-analytics-orchestrator-client.git /opt/analytics-orchestrator-client
git clone --branch release/22.12.0.0 https://<company>_jenkins:d8HD3xMVeRffqUR2UNn4@bitbucket.org/<company>/<xxxx>-analytics-orchestrator-client.git /opt/analytics-orchestrator-client
3. Create a virtual environment venv at the root of the project and activate it.
#!/bin/bash
cd /opt/analytics-orchestrator-client
virtualenv venv
source venv/bin/activate
4. From the active venv environment, install dependent packages with pip install -r requirements.txt.
#!/bin/bash
cd /opt/analytics-orchestrator-client
pip install -r requirements.txt
5. In the settings.py file, specify the correct url for connecting to push1st via websocket to send statistics.
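As a hedged illustration only: the variable name, app path, and port below are assumptions, not taken from the project. Use the actual setting name found in settings.py, and match the host and port to your push1st server.yml and apps/orchestrator.yml:

```python
# Hypothetical setting name; use the one actually defined in settings.py.
PUSH1ST_WS_URL = "ws://<push1st-server-ip>:6003/app/orchestrator"
```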
6. The or-client.service file is located in the deploy folder. Edit it, specifying the correct paths to the project and environment (adjust other settings as needed).
#!/bin/bash
nano /opt/analytics-orchestrator-client/deploy/or-client.service
7. After changing the service file, create a link to it in /etc/systemd/system/, then execute systemctl daemon-reload and start the service.
#!/bin/bash
ln -s /opt/analytics-orchestrator-client/deploy/or-client.service /etc/systemd/system/or-client.service
systemctl daemon-reload
systemctl enable or-client.service
systemctl start or-client.service
8. Check the operation of the services.
#!/bin/bash
systemctl status or-client.service
systemctl status supervisor.service
Executable module for analytics
VSaaS video analytics bin
VSaaS video analytics models
The orchestrator client should be installed where the analytics executable module is installed.
IMPORTANT!!! For the executable module and its configuration files to work correctly, they must be located in a specific place, namely:
/opt/video-analytics/<xxxx>-video-analytics-cpu.out, the executable module itself
/opt/video-analytics/config, for the configuration files
1. Clone the analytics executable module from the repository into the /opt/ directory.
#!/bin/bash
git clone git@bitbucket.org:<company>/xxxx-video-analytics-bin.git /opt/xxxx-video-analytics-bin
git clone https://<company>_jenkins:d8HD3xMVeRffqUR2UNn4@bitbucket.org/<company>/xxxx-video-analytics-bin.git /opt/xxxx-video-analytics-bin
2. Clone the analytics model from the repository into the /opt/ directory.
#!/bin/bash
git clone git@bitbucket.org:<company>/xxxx-video-analytics-models.git /opt/xxxx-video-analytics-models
git clone https://<company>_jenkins:d8HD3xMVeRffqUR2UNn4@bitbucket.org/<company>/xxxx-video-analytics-models.git /opt/xxxx-video-analytics-models
3. From the root of the analytics executable module project, run the install.sh script.
#!/bin/bash
cd /opt/xxxx-video-analytics-bin
bash install.sh
The result of executing the script will be the /opt/video-analytics/ directory, containing the executable module itself, models, configurations, and necessary dependencies.
4. Edit the configuration settings to run the analytics executable module. The configurations are located in the /opt/video-analytics/config directory; change the events_endpoint parameter to the url of the push1st websocket in all .ini files.
sed -i -e "s/10.1.16.238:6001/<push1st_ip_address>:6003/" /opt/video-analytics/config/*
If necessary, edit the range of cores on which binaries will be run:
sed -i -e "s/cores.*/cores = <cores_range>/" /opt/video-analytics/config/*
Frontend
Before installation, make sure that you have Node.js installed.
To install Node.js, use the following commands:
sudo apt update
sudo apt install nodejs
sudo apt install npm
Clone the remote repository to your computer (access to the repository is provided upon user request).
Switch to the master branch by running the command
git checkout master
Pull the latest changes in this branch by running the command
git pull
In the project directory, run the command
npm i
In the project root, open the .env file and set the necessary addresses.
Then run the command
npx react-scripts build
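A hedged sketch of what such an .env entry might look like; the variable name is an assumption, so use the keys already present in the project's .env. Note that react-scripts only exposes variables prefixed with REACT_APP_ to the build:

```ini
# Hypothetical variable name; point it at the orchestrator server.
REACT_APP_API_URL=http://<IP-orchestrator-server>/
```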
Connect to the server via SSH
Make sure there are no files in the /var/www/html/ directory. If there are files, run the command
sudo rm -rf /var/www/html/*
Copy the files from the build folder in the project directory to the /var/www/html/ directory on the server
In the nginx configuration file, add to the end of the http block:
server {
    listen 8080;
    server_name localhost;
    root /var/www/html/;
    location / {
        try_files $uri $uri/ /index.html;
    }
}
Restart nginx by running the command
sudo systemctl restart nginx.service
Check the nginx status by running the command
sudo systemctl status nginx.service