v2.4.0
This is the API integration document for the VCAcore Video Analytics system.
The REST API of VCAcore is used by Core’s web UI to configure the web backend.
The VCAcore REST API allows the user to change its configuration tree, which is where the entire VCAcore configuration is stored. The configuration tree can be accessed using the URL below:
http://SERVER_IP:PORT/api.json
The VCAedge configuration tree is also available via REST API. The configuration tree can be accessed using the URL below:
http://CAMERA_IP/cgi-bin/admin/vca-api/api.json
The REST API of VCAcore uses Digest Authentication.
Include an Authorisation header in HTTP requests, specifying the Digest scheme and parameters.
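As a sketch, a Digest-authenticated session can be set up with Python's standard library alone. The server address and credentials below are placeholders for your own deployment:

```python
import urllib.request

SERVER = "http://192.168.0.10:8080"   # hypothetical VCAcore address
USER, PASSWORD = "admin", "password"  # hypothetical credentials

# HTTPDigestAuthHandler answers the server's Digest challenge automatically.
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, SERVER, USER, PASSWORD)
opener = urllib.request.build_opener(
    urllib.request.HTTPDigestAuthHandler(password_mgr)
)

# config = json.load(opener.open(SERVER + "/api.json"))  # requires a live server
```

Requests opened through this opener will carry the Authorization header with the Digest scheme and parameters.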
To add an object to the configuration, send a POST request to the appropriate endpoint for the type of object you are adding. For example, to add an element, send a POST request to /api/elements with the following payload:
{
"typename": "rtsp",
"location": "rtsp://192.168.1.1/stream",
"user_id": "admin",
"user_pw": "password",
"do_rtsp_keep_alive": "FALSE",
"protocols": "7",
"name": "MY_RTSP_STREAM_1"
}
Response:
{
"index": 2
}
The response is the index of the newly created object in the configuration tree, in this case the RTSP element at /api/elements/2. This index can be used as a reference to update the object.
It is not necessary to include all the properties of an object in the JSON payload of the POST request. Where a property is not specified, a default value will be used. For objects which have a typename property, the typename is required.
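The element-creation request above can be composed with the standard library as in the sketch below. The server address is a placeholder, and only typename is mandatory; omitted properties fall back to their defaults:

```python
import json
import urllib.request

payload = {
    "typename": "rtsp",   # required for objects with a typename property
    "location": "rtsp://192.168.1.1/stream",
    "name": "MY_RTSP_STREAM_1",
}

req = urllib.request.Request(
    "http://192.168.0.10:8080/api/elements",  # hypothetical server address
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    method="POST",
)

# index = json.load(opener.open(req))["index"]  # with a Digest-authenticated opener
```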
To delete an object from the configuration, send a DELETE request to the appropriate endpoint for the object you are deleting. When deleting, the index of the object must be used in the URL. For example, to delete element 0, send a DELETE request to /api/elements/0.
The response will contain the object that is being deleted:
{
"typename": "file",
"name": "",
"location": "vegas.mp4"
}
Take care when deleting objects from the configuration, as the REST API does not check for dependencies. For example, removing an element object used by a channel object is not prohibited. As such, the channel object will remain in the configuration until manually removed.
Any property can be modified by sending the appropriate PUT request to the full path of the property, with the desired value in the payload. For example, to change the name property on the element that was just created, send a PUT request to /api/elements/2/name with the following payload:
"New Name"
It is also possible to update multiple properties in one request by sending a PUT request to the parent object. In this example, you could send a PUT request to /api/elements/2 with the following payload:
{
"user_id": "root",
"user_pw": "pass",
"name": "New Name"
}
These methods can be applied to any object in api.json.
Any or all of the configuration tree can be retrieved from the REST API by sending a GET request to the desired endpoint. For example, sending a GET request to /api/elements/2/ would return:
{
"typename": "rtsp",
"location": "rtsp://192.168.1.1/stream",
"user_id": "root",
"user_pw": "pass",
"do_rtsp_keep_alive": "FALSE",
"protocols": "7",
"name": "New Name"
}
Additionally, the entire configuration can be exported by sending a GET request to /api.
The entire configuration tree can be imported using the REST API by sending a POST request to /api/import. The payload should be a JSON object containing the entire configuration tree, as retrieved by sending a GET request to /api. This JSON object may be edited before upload.
Note that performing an import restarts the whole application, causing a break in analytics processing.
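The export, edit, and import cycle can be sketched as below. The prepare_import helper and the assumed position of channels within the tree are illustrative only; the commented lines show where the authenticated GET and POST calls would go:

```python
import json

def prepare_import(exported_json: str, channel_index: str, new_name: str) -> str:
    """Edit an exported configuration tree before re-importing it.

    The "channels" key and its index layout are assumptions about the
    exported tree, made for illustration only.
    """
    tree = json.loads(exported_json)
    tree["channels"][channel_index]["name"] = new_name
    return json.dumps(tree)

# exported = opener.open(SERVER + "/api").read()            # GET /api
# body = prepare_import(exported, "0", "Lobby")
# opener.open(SERVER + "/api/import", data=body.encode())   # POST /api/import
```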
When a request is successful, the VCAcore web server responds with a __2xx__ code. Requests that add an object using POST also return the index of the newly created object, as shown below:
{
"index": 4
}
When a request fails, the web server returns a __4xx__ or __5xx__ error code, along with a descriptive error string that can be used to diagnose the problem. The error is returned as an object with an error property, with the error string as its value:
{
"error": "An error has occurred."
}
The Content-Type: application/json header must be added to all REST API requests that contain JSON data, i.e. PUT and POST. Requests that return data (GET, POST, and optionally DELETE) should set the Accept header to application/json or */*. Additionally, a GET request can instead set the Accept header to application/json+schema, in which case the schema of the configuration object is returned rather than the object itself.
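These header rules can be wrapped in a small helper. The sketch below only builds the urllib.request.Request; sending it requires a Digest-authenticated opener as described earlier, and the server address is a placeholder:

```python
import json
import urllib.request

def build_request(url: str, method: str = "GET", payload=None) -> urllib.request.Request:
    """Build a Request carrying the headers the VCAcore API expects."""
    headers = {"Accept": "application/json"}   # for GET/POST/DELETE responses
    data = None
    if payload is not None:                    # PUT and POST carry JSON bodies
        data = json.dumps(payload).encode("utf-8")
        headers["Content-Type"] = "application/json"
    return urllib.request.Request(url, data=data, headers=headers, method=method)

# Example: rename element 2 (send with a Digest-authenticated opener).
rename = build_request("http://192.168.0.10:8080/api/elements/2/name",
                       "PUT", "New Name")
```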
A point object represents a point in 2D space. The coordinate space for a 2D point is oriented with "x": 0 at the left of the camera field of view and "y": 0 at the bottom of the camera field of view. An example point object is given below:
{
"x": 200,
"y": 400
}
The properties of a point object are as follows:
Property | Type | Description | Possible values |
---|---|---|---|
x |
Unsigned Integer | The x coordinate | Any unsigned integer between 0 and 65535 (inclusive) |
y |
Unsigned Integer | The y coordinate | Any unsigned integer between 0 and 65535 (inclusive) |
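Since the API expresses points in a normalised 0-65535 space with the origin at the bottom-left, pixel coordinates (conventionally origin top-left) need scaling and a y-axis flip. A minimal sketch, assuming the usual top-left pixel convention:

```python
def to_api_point(x_px: float, y_px: float, width: int, height: int) -> dict:
    """Convert top-left-origin pixel coordinates to an API point object."""
    return {
        "x": round(x_px / width * 65535),
        "y": round((1 - y_px / height) * 65535),  # flip: pixel bottom -> API 0
    }
```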
A colour object represents an RGBA colour value. An example colour object is given below:
{
"r": 100,
"g": 200,
"b": 140,
"a": 90
}
The properties of a colour object are as follows:
Property | Type | Description | Possible values |
---|---|---|---|
r |
Unsigned Integer | The red component of the colour | Any unsigned integer between 0 and 255 (inclusive) |
g |
Unsigned Integer | The green component of the colour | Any unsigned integer between 0 and 255 (inclusive) |
b |
Unsigned Integer | The blue component of the colour | Any unsigned integer between 0 and 255 (inclusive) |
a |
Float | The alpha component of the colour | Any float between 0 and 1 (inclusive) |
To specify which licensing method this instance of VCAcore uses, send a PUT request to /api/licenses with a payload containing the method type. A sample payload is given below:
{
"server": "cloud"
}
Property | Type | Description | Possible values |
---|---|---|---|
server |
String | Specifies the licensing method to use | Either "local" or "cloud" |
VCAcore will attempt to connect using the new license method when the required fields are populated.
To specify a Cloud Licensing API key for this instance of VCAcore, send a PUT request to /api/licenses/cloud. This is only used when the licensing server is "cloud". A sample payload is given below:
{
"api_key": "4a884a88-4a88-4a88-4a88-4a884a884a88"
}
Property | Type | Description | Possible values |
---|---|---|---|
api_key |
String | Cloud Licensing API key provided by web portal | Any valid Cloud Licensing API key |
VCAcore will attempt to connect using the new API key automatically.
To specify which License Server this instance of VCAcore connects to, send a PUT request to /api/licenses/daemon. This only applies when the licensing server is "local". A sample payload is given below:
{
"address": "192.168.0.27",
"port": 15769
}
Property | Type | Description | Possible values |
---|---|---|---|
address |
String | IP address of system running License Server | Any valid IP address as a string |
port |
Unsigned Integer | The port on which the License Server is listening at address |
Any valid port number; the default is 15769 |
VCAcore will attempt to connect using the new connection settings automatically.
To add a license pack to VCAcore, send a PUT request to /api/licenses/activate. This only applies when the licensing server is "local". A sample payload is given below:
Note: The license string provided below is invalid. You will need to purchase a license for your hardware GUID.
"43214321EDFAEFDEAFDEAFDEAFDEADFADAAEEADAFDEAFEDA"
Available license packs can be retrieved by sending a GET request to /api/licenses/vca.json. This only applies when the licensing server is "local". A sample response is shown below:
"1": {
"license": "198704FFFFFFFFAC....",
"token": "",
"name": "ProAI 60ch Enterprise",
"code": 6535,
"channels": 60,
"used_channels": 2,
"zones": 255,
"rules": 255,
"counters": 255,
"evaluation": false,
"expired": false,
"days_remaining": 0,
"suspended": false,
"features": [
"presence",
"enter",
"exit",
"appear",
"disappear",
"stopped",
...
]
}
To retrieve the hardware code for the License Server, send a GET request to /api/hardware/guid.json. This only applies when the licensing server is "local". The GUID is returned as a string, as shown below:
"F60492388FF8030C561B8C1505567B6C5687FAAA254D15FFB1E930D0D4905EA1"
There are two types of elements supported by VCAcore - file and RTSP elements. Elements are inputs to channels, and must be added before a channel is created.
An RTSP element may be created by sending a POST request to the /api/elements endpoint. A sample RTSP element is shown below:
{
"typename": "rtsp",
"location": "",
"user_id": "",
"user_pw": "",
"do_rtsp_keep_alive": "FALSE",
"protocols": "7",
"sync_to_rtcp_sr_time": "FALSE",
"tls_validation_flags": "127",
"name": ""
}
Property | Type | Description | Possible values |
---|---|---|---|
location |
String | The URI of this RTSP stream | Any string, can be empty |
user_id |
String | The username to be used to authenticate with the RTSP server | Any string, can be empty |
user_pw |
String | The password to be used to authenticate with the RTSP server | Any string, can be empty |
do_rtsp_keep_alive |
String | A boolean (represented as a string), specifying whether keep-alive should be enabled in this RTSP stream | "TRUE" or "FALSE" |
protocols |
String | The protocol to use for this RTSP stream | Set to "4" for RTSP over TCP, otherwise set to "7" |
sync_to_rtcp_sr_time |
String | Enables synchronising the channel’s metadata timestamps using the RTCP sender reports from the RTSP server | "TRUE" or "FALSE" |
tls_validation_flags |
String | Defines why a particular TLS certificate is to be rejected | Any 8-bit Integer as a String e.g. "127" (default) |
name |
String | A user-specified name for this element | Any string, can be empty |
The tls_validation_flags property only applies to RTSP sources with a location starting with rtsps://. The value is made up of seven flags represented as an 8-bit integer. The last bit is not used and should always be 0. Enabled flags are set to 1, requiring the TLS certificate of that RTSPS source to have no errors relating to that check. Disabled flags are set to 0, allowing errors relating to that check to be ignored. The following mapping outlines each checked flag and its corresponding bit position:
Flag | Bit |
---|---|
ensure_trusted_authority |
0 |
ensure_valid_identity |
1 |
ensure_certificate_activated |
2 |
ensure_certificate_not_revoked |
3 |
ensure_certificate_not_expired |
4 |
ensure_no_generic_error |
6 |
Unused |
7 |
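A sketch of composing a tls_validation_flags value from the bit table above. Note that the table documents bits 0-4 and 6 only, whereas the default value "127" also has bit 5 set; the mapping below uses just the documented bits:

```python
# Bit positions as documented in the flag table above.
TLS_FLAG_BITS = {
    "ensure_trusted_authority": 0,
    "ensure_valid_identity": 1,
    "ensure_certificate_activated": 2,
    "ensure_certificate_not_revoked": 3,
    "ensure_certificate_not_expired": 4,
    "ensure_no_generic_error": 6,
}

def tls_flags(*enabled: str) -> str:
    """Return the flag value as the string the API expects."""
    value = 0
    for name in enabled:
        value |= 1 << TLS_FLAG_BITS[name]  # set each enabled check's bit
    return str(value)
```

For example, enabling all six documented checks yields "95"; the stricter shipped default "127" additionally sets the undocumented bit 5.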
A file element may be created by sending a POST request to the /api/elements endpoint. A sample file element is shown below:
{
"typename": "file",
"name": "",
"location": "las-vegas.mp4"
}
Property | Type | Description | Possible values |
---|---|---|---|
location |
String | The filename of the video (video files must be located in the share/test-clips subfolder of the install folder) |
Any string, can be empty |
name |
String | A user-specified name for this element | Any string, can be empty |
Once an element has been added, it can be linked to a channel. The channel object specifies channel properties, including which tracker and algorithms are to be run. The parameters for the algorithms and trackers are also contained in this object and should be defined for each channel. For a channel to begin processing, a valid license code must be assigned to it.
Once a file or RTSP element has been added and its index is known, the index can be used to link it to a channel. A channel may be added by sending a POST request to /api/channels. A sample channel object is shown below:
{
"name": "Main Park",
"description": "CH08334",
"enabled": true,
"input": 5,
"output": null,
"licenses": [
6535,
8210
],
"event_retrigger_time": 5,
"crop": {
"top_left": {
"x": 0,
"y": 0
},
"bottom_right": {
"x": 65535,
"y": 65535
}
},
"tracking_engine": "object_tracker",
"tracker": {
"stationary_time": 5000,
"stationary_hold_on_time": 60000,
"minimum_object_size": 10,
"detection_point": 0,
"sensitivity_threshold": 4,
"require_initial_movement": true
},
"calibration": {
"enabled": false,
"height": 4,
"tilt": 50,
"fov": 40,
"roll": 0,
"pan": 0,
"location": {
"latitude": 51.00973709106445,
"longitude": -0.0025509810447693,
"elevation": 10
},
"orientation": 30
"horizon": false,
"grid": {
"enabled": true,
"stroke": {
"r": 115,
"g": 210,
"b": 22,
"a": 1
},
"fill": {
"r": 136,
"g": 138,
"b": 133,
"a": 0
},
"spacing": 2
}
},
"calibration_filter": {
"enabled": false
},
"tamper": {
"enabled": false,
"alarm_timeout": 20000,
"area_threshold": 40,
"low_light": false
},
"scene_change": {
"mode": 1,
"alarm_timeout": 3000,
"area_threshold": 40
},
"annotation": {
"zones": false,
"objects": true,
"class": false,
"height": false,
"speed": false,
"area": false,
"ticker": true,
"dl_class": true,
"system_message": true,
"line_counters": true,
"counters": true,
"colour_signature": true,
"tracker_internal_state": false,
"faces": false,
"alarmed_only": false,
"position": true,
"interactions": true,
"tamper": true
},
"classification": [
{
"name": "Person",
"area": {
"min": 5,
"max": 20
},
"speed": {
"min": 0,
"max": 20
}
},
{
"name": "Vehicle",
"area": {
"min": 40,
"max": 1000
},
"speed": {
"min": 0,
"max": 200
}
},
{
"name": "Clutter",
"area": {
"min": 0,
"max": 4
},
"speed": {
"min": 0,
"max": 50
}
},
{
"name": "Group Of People",
"area": {
"min": 21,
"max": 39
},
"speed": {
"min": 0,
"max": 20
}
}
],
"dl_classifier": {
"enabled": true,
},
"dl_accessory_detector": {
"enabled": false
},
"fall": {
"enabled": false
},
"aggressive_behaviour": {
"enabled": false
},
"pose": {
"enabled": false
},
"optical_flow": {
"enabled": true
},
"colour_signature": {
"enabled": false,
"max_colours": 4
},
"generate_features": {
"enabled": true,
"interval": 1000
},
"stabilisation": {
"enabled": false
},
"user_data": [],
}
Below is a list of the properties of a channel object:
Property | Type | Description | Possible values |
---|---|---|---|
name |
String | A user-specified name for this channel | Any string, can be empty |
description |
String | A user-specified description for this channel | Any string, can be empty |
enabled |
Boolean | A boolean value specifying whether this channel is enabled | true or false |
input |
Unsigned integer | The index of a file or RTSP element to use as the input for this channel | Any unsigned integer |
output |
Unsigned integer | This property is deprecated, and must always be set to null |
Any unsigned integer, or null |
licenses |
List | List of license codes to assign to this channel | List of any unsigned integer |
event_retrigger_time |
Unsigned integer | The time (in milliseconds) that must elapse before an event is re-triggered | Any unsigned integer |
crop |
Object | The object specifying the crop parameters of this channel |
Any valid crop object |
tracking_engine |
String | Defines the tracking engine that will be run on the channel | Any valid tracker engine string. |
tracker |
Object | Tracking engine specific settings | Any valid tracker object |
calibration |
Object | The calibration object specifying the calibration parameters of this channel | Any valid calibration object |
calibration_filter |
Object | The calibration filter object | Any valid calibration filter object |
tamper |
Object | The tamper object specifying the tamper parameters of this channel | Any valid tamper object |
scene_change |
Object | The scene_change object specifying the scene-change parameters of this channel |
Any valid scene-change object |
annotation |
Object | The annotation object specifying which metadata annotations are rendered on the channel |
Any valid annotation object |
classification |
Array | The array of classification objects to use for this channel | Any valid array of classification objects |
dl_classifier |
Object | The object specifying the dl_classifier parameters of this channel |
Any valid dl_classifier object |
dl_accessory_detector |
Object | The object specifying the dl_accessory_detector parameters of this channel |
Any valid dl_accessory_detector object |
fall |
Object | The object specifying the fall parameters of this channel |
Any valid fall object |
aggressive_behaviour |
Object | The object specifying the aggressive_behaviour parameters of this channel |
Any valid aggressive_behaviour object |
pose |
Object | The object specifying the pose parameters of this channel |
Any valid pose object |
optical_flow |
Object | The object specifying the optical_flow parameters of this channel |
Any valid optical_flow object |
colour_signature |
Object | The object specifying the colour_signature parameters of this channel |
Any valid colour_signature object |
generate_features |
Object | The object specifying the generate_features parameters of this channel |
Any valid generate_features object |
stabilisation |
Object | The object specifying the stabilisation parameters of this channel |
Any valid stabilisation object |
Certain channel properties and objects only apply when specific trackers are set in tracking_engine. In such cases, the applicable tracker(s) are noted below the object.
tracking_engine string |
Description |
---|---|
object_tracker |
Motion object tracker |
dl_object_tracker |
People and vehicle tracker for RGB camera views |
dl_people_tracker |
People tracker for RGB camera views |
dl_thermal_tracker |
People and vehicle tracker for thermal camera views |
dl_skeleton_tracker |
People tracker for RGB camera views |
dl_fisheye_tracker |
People tracker for Fisheye camera views |
hand_object |
People, Hand and Object tracker for RGB camera views |
qr_code_tracker |
QR code tracker for RGB camera views |
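Combining the pieces above, a minimal channel-creation payload for the motion object tracker might look like the sketch below. The element index and license code are the examples used earlier in this document, not required values; omitted properties take their defaults:

```python
import json

channel = {
    "name": "Main Park",
    "enabled": True,
    "input": 2,                            # index returned when the element was added
    "output": None,                        # deprecated, must always be null
    "licenses": [6535],                    # example license code from above
    "tracking_engine": "object_tracker",   # motion object tracker
}
body = json.dumps(channel)                 # POST this to /api/channels
```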
The following are the properties of a crop
object:
Property | Type | Description | Possible values |
---|---|---|---|
top_left |
Point Object | The top left 2D point of the crop region of interest | A point object |
bottom_right |
Point Object | The bottom right 2D point of the crop region of interest | A point object |
The following are the properties of a tracker object; each property applies to specific tracking_engine values:
Property | Type | Description | Possible values |
---|---|---|---|
stationary_time |
Unsigned integer | object_tracker only: the time (in ms) detected motion must remain static before it is classed as abandoned/removed |
Any unsigned integer |
stationary_hold_on_time |
Unsigned integer | dl_object_tracker and object_tracker: the time (in ms) a stationary object continues to be tracked for |
Any unsigned integer |
minimum_object_size |
Unsigned integer | object_tracker only: the minimum number of pixels detected motion must contain before it is classed as a tracked object |
Any unsigned integer |
maximum_object_size |
Unsigned integer | object_tracker only: the maximum number of pixels detected motion may contain while still being classed as a tracked object |
Any unsigned integer |
detection_point |
Unsigned integer | dl_object_tracker and object_tracker: the location of the ground point of a bounding box. 0: automatic, 1: centre of bounding box, 2: bottom middle of bounding box |
0, 1, or 2 |
sensitivity_threshold |
Float | object_tracker only: how sensitive the object tracker is to movement |
2.0 - 8.0 |
require_initial_movement |
Boolean | dl_object_tracker only: allows trackers which utilise it to ignore objects which have not yet moved |
true or false |
Values in this object apply to select trackers which have been highlighted per property.
The following are the properties of a calibration
object:
Property | Type | Description | Possible values |
---|---|---|---|
enabled |
Boolean | A boolean value specifying whether calibration is enabled for this channel | true or false |
height |
Float | The height of the camera in meters | 0 - 100 |
tilt |
Float | The ‘tilt’ of the camera in degrees | -20 - 90 |
fov |
Float | The ‘field of view’ parameter of the camera in degrees | 5 - 150 |
roll |
Float | The ‘roll’ value of the camera in degrees | -45 - 45 |
pan |
Float | The ‘pan’ value of the camera in degrees | -90 - 90 |
location |
Object | An object specifying the camera location parameters | Any valid location object |
orientation |
Unsigned integer | Orientation of the camera relative to north (clockwise) in degrees | 0 - 359 |
horizon |
Boolean | A boolean value specifying whether the horizon is displayed in the calibration grid | true or false |
grid |
Object | An object specifying the calibration grid parameters | Any valid grid object |
Below is a list of the properties of the location
object:
Property | Type | Description | Possible values |
---|---|---|---|
latitude |
Float | A latitude expressed in degrees | -90 - 90 |
longitude |
Float | A longitude expressed in degrees | -180 - 180 |
elevation |
Float | The elevation of the camera in meters above sea level | Any unsigned integer |
Below is a list of the properties of the grid
object:
Property | Type | Description | Possible values |
---|---|---|---|
enabled |
Boolean | A boolean value specifying whether the grid is enabled for this channel | true or false |
stroke |
Object | A colour object specifying the stroke colour of the calibration grid | Any valid colour object |
fill |
Object | A colour object specifying the fill colour of the calibration grid | Any valid colour object |
spacing |
Unsigned integer | The spacing between lines in the calibration grid | Any unsigned integer |
The following are the properties of a calibration_filter
object:
Property | Type | Description | Possible values |
---|---|---|---|
enabled |
Boolean | A boolean value specifying whether calibration filter is enabled for this channel | true or false |
Below is a list of the properties of the tamper
object:
Property | Type | Description | Possible values |
---|---|---|---|
enabled |
Boolean | A boolean value specifying whether tamper detection is enabled for this channel | true or false |
alarm_timeout |
Unsigned integer | The period (in milliseconds) that the image must be detected as changed for a tamper event to trigger | 1000 - 60000 |
area_threshold |
Unsigned integer | The area (as a percentage) of the image that must be detected as changed for a tamper event to trigger | Any unsigned integer |
low_light |
Boolean | A boolean value specifying whether low light tamper detection should be enabled | true or false |
Values in this object apply to all trackers.
Below is a list of the properties of the scene_change
object:
Property | Type | Description | Possible values |
---|---|---|---|
mode |
Unsigned integer | An integer specifying the scene-change detection mode | 0 (disabled), 1 (automatic), 2 (manual), 3 (adaptive) |
alarm_timeout |
Unsigned integer | The period (in milliseconds) that the image must be detected as changed for a scene-change to trigger | Any unsigned integer |
area_threshold |
Unsigned integer | The area (as a percentage) of the image that must be detected as changed for a scene-change event to trigger | Any unsigned integer |
Values in this object only apply to the object_tracker
.
Below is a list of the properties of the annotation
object:
Property | Type | Description | Possible values |
---|---|---|---|
zones |
Boolean | A boolean value specifying whether a channel’s zones are drawn for this channel | true or false |
objects |
Boolean | A boolean value specifying whether an object’s metadata (e.g. bounding box and train) is displayed for this channel | true or false |
class |
Boolean | A boolean value specifying whether an object’s class defined by calibrated data is displayed for this channel, requires objects to be true |
true or false |
height |
Boolean | A boolean value specifying whether an object’s height estimated by calibrated data is displayed for this channel, requires objects to be true |
true or false |
speed |
Boolean | A boolean value specifying whether an object’s speed estimated by calibrated data is displayed for this channel, requires objects to be true |
true or false |
area |
Boolean | A boolean value specifying whether an object’s area estimated by calibrated data is displayed for this channel, requires objects to be true |
true or false |
ticker |
Boolean | A boolean value specifying whether a channel’s event log is displayed for this channel | true or false |
dl_class |
Boolean | A boolean value specifying whether an object’s deep learning classification is displayed for this channel, requires objects to be true |
true or false |
system_message |
Boolean | A boolean value specifying whether system messages for a channel (e.g. Scene Learning) are displayed for this channel | true or false |
line_counters |
Boolean | A boolean value specifying whether a channel’s line counters are drawn for this channel | true or false |
counters |
Boolean | A boolean value specifying whether a channel’s counters are drawn for this channel | true or false |
colour_signature |
Boolean | A boolean value specifying whether an object’s colour signature is displayed for this channel, requires objects to be true |
true or false |
tracker_internal_state |
Boolean | A boolean value specifying whether object detections for deep learning trackers are displayed for this channel | true or false |
faces |
Boolean | A boolean value specifying whether an object’s face bounding box is displayed for this channel | true or false |
alarmed_only |
Boolean | A boolean value specifying whether annotations are displayed only for objects in an alarmed state | true or false |
position |
Boolean | A boolean value specifying whether an object’s calibrated position and geographic position data is displayed for this channel | true or false |
interactions |
Boolean | A boolean value specifying whether an object’s zone/rule interaction information is displayed for this channel | true or false |
tamper |
Boolean | A boolean value specifying whether the tamper map is displayed for this channel | true or false |
Values in this object apply to all trackers, unless specifically highlighted.
Below is a list of the properties of the classification
object:
Property | Type | Description | Possible values |
---|---|---|---|
name |
String | A user-specified name for this element | Any string, can be empty |
area |
Object | A threshold object to specify the minimum and maximum area cut-off for this class. The area unit is square meter | Any valid threshold object |
speed |
Object | A threshold object to specify the minimum and maximum speed cut-off for this class. The speed unit is kilometres per hour | Any valid threshold object |
Values in this object only apply to the object_tracker.
Below is a list of the properties of the threshold
object:
Property | Type | Description | Possible values |
---|---|---|---|
min |
Unsigned integer | The minimum value | Any unsigned integer |
max |
Unsigned integer | The maximum value | Any unsigned integer |
Channel algorithm objects allow the use of non-tracker algorithms without configuring the corresponding observable. Adding and removing observables will continue to enable and disable non-tracker algorithms. The algorithm state will be reflected in the channel algorithm objects as this happens.
When events are required, it is recommended to rely on the observables to manage the algorithms. If only algorithm metadata is required and there is no use for the observable events, the algorithms are best managed via the channel object.
As this provides two methods of enabling and disabling an algorithm, users should be aware of the relationship between the algorithm switches and observable behaviour. There are a few scenarios to be aware of:
The user enables and disables an algorithm in the channel object directly. As there are no observables there will be no events, but the algorithm's metadata will be inserted into the metadata streams. E.g. fall: Person objects will have "vca.meta.data.object.Fall" metadata when detected.
The user adds and removes observables linked to the channel. Both metadata and events will be present in the metadata streams. When the first observable of a given type is added, the algorithm is enabled in the channel object; when the last is removed, the algorithm is disabled. E.g. fall: Person objects will have "vca.meta.data.object.Fall" metadata when detected, and the fall observable will generate event metadata. Actions associated with that fall observable will be triggered.
The user adds a corresponding observable and then disables the algorithm in the channel object. This effectively disables both the metadata and the events, leaving a zombie observable that serves no purpose. E.g. fall: Person objects will not have "vca.meta.data.object.Fall" metadata, and the fall observable will never generate event metadata. Actions associated with that fall observable can never be triggered.
Below is a list of the properties of the dl_classifier
object:
Property | Type | Description | Possible values |
---|---|---|---|
enabled |
Boolean | A boolean value specifying whether the Deep Learning Classifier is enabled on this channel | true or false |
Values in this object only apply to the object_tracker
.
Below is a list of the properties of the dl_accessory_detector
object:
Property | Type | Description | Possible values |
---|---|---|---|
enabled |
Boolean | A boolean value specifying whether the accessory detection algorithm is enabled on this channel | true or false |
Values in this object only apply to the dl_skeleton_tracker
.
Below is a list of the properties of the fall
object:
Property | Type | Description | Possible values |
---|---|---|---|
enabled |
Boolean | A boolean value specifying whether the fall algorithm is enabled on this channel | true or false |
Values in this object only apply to the dl_skeleton_tracker
.
Below is a list of the properties of the aggressive_behaviour
object:
Property | Type | Description | Possible values |
---|---|---|---|
enabled |
Boolean | A boolean value specifying whether the aggressive detection algorithm is enabled on this channel | true or false |
Values in this object apply to all trackers.
Below is a list of the properties of the pose
object:
Property | Type | Description | Possible values |
---|---|---|---|
enabled |
Boolean | A boolean value specifying whether the pose algorithm is enabled on this channel | true or false |
Values in this object only apply to the dl_skeleton_tracker
.
Below is a list of the properties of the optical_flow
object:
Property | Type | Description | Possible values |
---|---|---|---|
enabled |
Boolean | A boolean value specifying whether the optical flow algorithm is enabled on this channel | true or false |
Values in this object only apply to the dl_object_tracker
.
Below is a list of the properties of the colour_signature
object:
Property | Type | Description | Possible values |
---|---|---|---|
enabled |
Boolean | A boolean value specifying whether the colour signature is enabled for this channel | true or false |
max_colours |
Unsigned integer | Defines the maximum number of colours the colour signature algorithm includes in the metadata | 1 - 10 |
Values in this object apply to all trackers.
Below is a list of the properties of the generate_features
object:
Property | Type | Description | Possible values |
---|---|---|---|
enabled |
Boolean | A boolean value specifying whether feature generation is enabled for this channel | true or false |
interval |
Unsigned integer | Defines the interval (in ms) between each feature generation for a given object | Any unsigned integer |
Values in this object only apply to the dl_object_tracker
, dl_people_tracker
and dl_skeleton_tracker
.
Below is a list of the properties of the stabilisation object:
Property | Type | Description | Possible values |
---|---|---|---|
enabled | Boolean | A boolean value specifying whether stabilisation is enabled for this channel | true or false |
Please note: Stabilisation is currently not available in the VCAcore backend. API requests to this parameter will continue to be valid and saved in the VCAcore configuration, but the feature itself will not work.
A license code is set on a channel by sending a PUT request to the /api/channels/0/licenses endpoint. A sample request is shown below:
[
6535,
8210
]
A license is assigned to a channel based on its code. The code signifies to VCAcore which license type to use with a channel and therefore which features to enable. For example, 6535 is a ProAI license type and unlocks all the features associated with the ProAI license. More than one license code can be specified, to allow licenses with different features to be assigned to a single channel. VCAcore will attempt to assign the requested licenses to the channel if they are available. If any of the requested licenses are not available, i.e. have not been added to VCAcore or have had all available channels used, then no licenses will be assigned. See Retrieving a License for how to find which license codes are available.
A snapshot API is provided which returns a JPEG-encoded image from a defined channel. The resolution of the JPEG is defined by the input resolution of the stream. A snapshot can be retrieved by sending a GET request to the snapshot endpoint:
http://SERVER_IP:PORT/snapshot/CHANNEL_ID/latest?original=0
It is a requirement to add the Content-Type: image/jpeg header to all requests to the snapshot API. The query parameter original defines whether the returned image includes the currently enabled annotations for that channel. If the parameter is not specified, original defaults to 0, and enabled annotations for that channel will be present in the image.
The snapshot service also supports scaling snapshots down from the original size; scaling an image up is not possible with this service. The max_width and max_height parameters each specify a maximum dimension for the scaled image. Because scaled images always maintain the original aspect ratio, only one of the two parameters is needed.
For example, if the input image is 720×480, a snapshot request with the following URL would return an image half the input size, with no annotations for that channel present:
http://SERVER_IP:PORT/snapshot/CHANNEL_ID/latest?original=1&max_height=240
If both parameters are given, the scale service will honour the lower scaled dimension. For example, if the input image is 720×480, a snapshot request with the following URL would return an image half the input size, with currently enabled annotations for that channel present:
http://SERVER_IP:PORT/snapshot/CHANNEL_ID/latest?max_width=600&max_height=240
In this case the max_width parameter is ignored, as the max_height parameter results in a smaller scaled image.
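The scaling rules above can be sketched as a small helper; this illustrates the documented behaviour, not the service's actual implementation:

```python
def scaled_size(width, height, max_width=None, max_height=None):
    """Compute the snapshot dimensions the scale service would return.

    Aspect ratio is preserved, upscaling is not possible (factor capped
    at 1.0), and when both limits are given the smaller resulting image
    wins.
    """
    factors = [1.0]
    if max_width is not None:
        factors.append(max_width / width)
    if max_height is not None:
        factors.append(max_height / height)
    factor = min(factors)
    return round(width * factor), round(height * factor)

# max_height=240 halves a 720x480 input; max_width=600 would only
# shrink it to ~0.83x, so the max_height limit wins when both are given.
```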
When a snapshot API request is sent to VCAcore, the JPEG image is generated on demand. JPEG encoding is a resource-intensive process, so this API is not designed for streaming at high frame rates. It is therefore advised that calls to this API are made sparingly.
An observable is an object which monitors the metadata and generates an event when specific triggers are met. Commonly, observables represent rules associated with a channel that fire an event when a specific condition is met by objects tracked in that channel (e.g. Presence or Dwell).
An observable can also represent one of VCAcore’s other sources; these work at a global, rather than channel, level, triggering events when a specific global trigger is detected (e.g. VCAcore being Armed or a scheduled time passing).
Observables can be linked to actions, so that when an event is generated by an observable, an action is performed.
To add an observable, send a POST request to the /api/observables endpoint. Please use the method mentioned earlier to add this element. A sample Presence observable is shown below:
{
"typename": "vca.observable.Presence",
"channel": 0,
"zone": 4294967295,
"name": "Presence 5",
"triggers_action": true
}
Note: The value 4294967295 is a sentinel value for null. When present, it indicates that the input or zone is invalid.
Below is a list of properties that are common to all observables:
Property | Type | Description | Possible values |
---|---|---|---|
name | String | A user-specified name for this element | Any string, can be empty |
triggers_action | Boolean | A boolean specifying whether this observable can trigger actions | true or false |
For a complete overview of observable types (Basic Rules, Filters, Conditional Rules and Other Sources), including their function and use cases, please see the full VCAcore Manual.
Many observables have an input, a reference to another observable which serves as a trigger. In all cases, an observable’s input cannot be itself, or a cyclical dependency will be created which will cause abnormal behaviour.
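A minimal check for such cycles, assuming a mapping from each observable's index to its input index (None where it has no input); the function is illustrative, not part of the API:

```python
def has_input_cycle(inputs, start):
    """Follow the chain of input references from `start` and report
    whether it ever revisits an observable."""
    seen = set()
    node = start
    while node is not None:
        if node in seen:
            return True
        seen.add(node)
        node = inputs.get(node)
    return False
```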
What follows is a list of all the supported types of observables in VCAcore:
The Abandoned observable is a basic observable which generates an event when an object has either been left within a defined zone or been removed from a defined zone. Please use the method mentioned earlier to add this element. A sample Abandoned observable is shown below:
{
"typename": "vca.observable.Abandoned",
"channel": 0,
"zone": 3,
"name": "Abandoned 5",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
zone | Unsigned Integer | The index of a zone to associate with this observable | Any unsigned integer |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Absence observable is a basic observable which generates an event when a zone does not have an object, of the specified class, present for the defined duration. Please use the method mentioned earlier to add this element. A sample Absence observable is shown below:
{
"typename": "vca.observable.Absence",
"channel": 0,
"zone": 0,
"duration": 1000,
"filters": [],
"confidence_threshold": 0.699999988079071,
"name": "Absence 2",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
zone | Unsigned Integer | The index of a zone to associate with this observable | Any unsigned integer |
duration | Unsigned Integer | The interval value of this observable, in milliseconds | 1000 - 86400000 |
filters | Array | Array of strings defining object classes defined by the object tracker or found under channel classification | Any array of strings, can be an empty array |
confidence_threshold | Float | Threshold to specify minimum confidence score required for the object to be classed as the object in filters | Float between 0 - 0.99 inclusive |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Aggressive Behaviour observable is a basic observable, which generates an event when a fight is detected in the field of view for longer than the specified duration.
Please use the method mentioned earlier to add this element. A sample Aggressive Behaviour observable is shown below:
{
"typename": "vca.observable.AggressiveBehaviour",
"channel": 0,
"duration": 3000,
"threshold": 0.95,
"continuous_threshold": 0.5,
"name": "Fight",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
duration | Unsigned Integer | The interval value of this observable, in milliseconds | 1000 - 86400000 |
threshold | Float | Confidence threshold before a fight is detected | 0.2 - 1 |
continuous_threshold | Float | Minimum persistent confidence threshold required for duration | 0.1 - 1 |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Appear observable is a basic observable which generates an event when an object starts being tracked within a zone, e.g. a person who appears in the scene from a doorway. Please use the method mentioned earlier to add this element. A sample Appear observable is shown below:
{
"typename": "vca.observable.Appear",
"channel": 0,
"zone": 3,
"name": "Appear 5",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
zone | Unsigned Integer | The index of a zone to associate with this observable | Any unsigned integer |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Direction observable is a basic observable which generates an event when an object is moving in a specific direction. Please use the method mentioned earlier to add this element. A sample Direction observable is shown below:
{
"typename": "vca.observable.Direction",
"channel": 0,
"angle": 77,
"anglethreshold": 51,
"zone": 0,
"name": "Direction 3",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
angle | Unsigned Integer | The direction an object should be travelling to trigger an event | Unsigned integer 0 - 360 |
anglethreshold | Unsigned Integer | The threshold of angles accepted by the rule | Unsigned integer 0 - 90 |
zone | Unsigned Integer | The index of a zone to associate with this observable | Any unsigned integer |
For a list of properties common to all observables, please see the General Concepts section on Observables.
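One plausible reading of angle and anglethreshold is a bearing match with wrap-around at 360 degrees; this sketch is an illustration of that assumption, not VCAcore's actual implementation:

```python
def direction_matches(object_angle, angle, anglethreshold):
    """Return True when the object's direction of travel lies within
    anglethreshold degrees of the configured angle."""
    diff = abs(object_angle - angle) % 360
    diff = min(diff, 360 - diff)  # shortest way round the circle
    return diff <= anglethreshold
```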
The Directional Crossing observable is a basic observable which generates an event when an object enters and exits a zone in a specific direction. Please use the method mentioned earlier to add this element. A sample Directional Crossing observable is shown below:
{
"typename": "vca.observable.DirectionalCrossing",
"channel": 0,
"angle": 77,
"anglethreshold": 51,
"zone": 0,
"filters": [
"Vehicle"
],
"confidence_threshold": 0.70,
"name": "Directional Crossing 3",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
angle | Unsigned Integer | The direction an object should be travelling to trigger an event | Unsigned integer 0 - 360 |
anglethreshold | Unsigned Integer | The threshold of angles accepted by the rule | Unsigned integer 0 - 90 |
zone | Unsigned Integer | The index of a zone to associate with this observable | Any unsigned integer |
filters | Array | Array of strings defining object classes defined by the object tracker or found under channel classification | Any array of strings, can be an empty array |
confidence_threshold | Float | Threshold to specify minimum confidence score required for the object to be classed as the object in filters | Float between 0 - 0.99 inclusive |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Disappear observable is a basic observable which generates an event when an object stops being tracked within a zone, e.g. a person who exits the scene through a doorway. Please use the method mentioned earlier to add this element. A sample Disappear observable is shown below:
{
"typename": "vca.observable.Disappear",
"channel": 0,
"zone": 3,
"name": "Disappear 5",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
zone | Unsigned Integer | The index of a zone to associate with this observable | Any unsigned integer |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Dwell observable is a basic observable which generates an event when an object remains in a zone for a set interval. Please use the method mentioned earlier to add this element. A sample Dwell observable is shown below:
{
"typename": "vca.observable.Dwell",
"channel": 0,
"zone": 0,
"interval": 10000,
"name": "Dwell 3",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
interval | Unsigned Integer | The interval value of this observable, in milliseconds | 1000 - 86400000 |
zone | Unsigned Integer | The index of a zone to associate with this observable | Any unsigned integer |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Enter observable is a basic observable which generates an event when an object enters a zone, e.g. when an object crosses from the outside of a zone to the inside of a zone. Please use the method mentioned earlier to add this element. A sample Enter observable is shown below:
{
"typename": "vca.observable.Enter",
"channel": 0,
"zone": 3,
"name": "Enter 5",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
zone | Unsigned Integer | The index of a zone to associate with this observable | Any unsigned integer |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Exit observable is a basic observable which generates an event when an object leaves a zone, e.g. when an object crosses from the inside of a zone to the outside of a zone. Please use the method mentioned earlier to add this element. A sample Exit observable is shown below:
{
"typename": "vca.observable.Exit",
"channel": 0,
"zone": 3,
"name": "Exit 5",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
zone | Unsigned Integer | The index of a zone to associate with this observable | Any unsigned integer |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Fall observable is a basic observable which generates an event when an object, detected as Person by the DL People Tracker, Deep Learning Skeleton Tracker or Deep Learning Object Tracker, is detected as fallen. Please use the method mentioned earlier to add this element. A sample Fall observable is shown below:
{
"typename": "vca.observable.Fall",
"channel": 0,
"zone": 3,
"duration": 1000,
"confidence_threshold": 0,
"name": "Fall 5",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
zone | Unsigned Integer | The index of a zone to associate with this observable | Any unsigned integer |
duration | Unsigned Integer | The interval value of this observable, in milliseconds | 1000 - 60000 |
confidence_threshold | Float | Threshold to specify minimum confidence score required for the object to be classed as fallen | Float between 0 - 0.99 inclusive |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Hands Up observable is a basic observable which generates an event when an object, detected as Person by the DL Skeleton Tracker, is detected as having their hands up. Please use the method mentioned earlier to add this element. A sample Hands Up observable is shown below:
{
"typename": "vca.observable.HandsUp",
"channel": 0,
"zone": 3,
"duration": 1000,
"confidence_threshold": 0.4,
"name": "Hands Up 5",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
zone | Unsigned Integer | The index of a zone to associate with this observable | Any unsigned integer |
duration | Unsigned Integer | The interval value of this observable, in milliseconds | 1000 - 60000 |
confidence_threshold | Float | Threshold to specify minimum confidence score required for the person to be classed as hands up | Float between 0 - 0.99 inclusive |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Line Counter observable is a basic observable which generates an event when an object is detected crossing a line. The referenced zone must be configured as a line, not a polygon.
Please use the method mentioned earlier to add this element. A sample Line Counter observable is shown below:
{
"typename": "vca.observable.Line_Counter",
"channel": 0,
"direction": "both",
"zone": 5,
"calibration_width": 0,
"filter_shadows": false,
"name": "Line Counter 10",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
direction | String | Defines the direction the line counter detects movement | String - “both” or “a” or “b” |
zone | Unsigned Integer | The index of a zone to associate with this observable | Any unsigned integer |
calibration_width | Float | The expected width of an object crossing the line, allowing counts to go up by more than 1 | Any unsigned float - 0 to turn the calibration off |
filter_shadows | Boolean | A boolean value specifying whether to filter shadows | true or false |
For a list of properties common to all observables, please see the General Concepts section on Observables.
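The calibration_width behaviour can be illustrated as follows; the proportional-increment rule here is an assumption about how a wide object raises the count by more than 1, not a confirmed implementation detail:

```python
def line_counter_increment(object_width, calibration_width):
    """Count increment for one crossing. A calibration_width of 0
    turns calibration off, so every crossing counts once."""
    if calibration_width == 0:
        return 1
    # Assumed: an object spanning several expected widths (e.g. a row
    # of people crossing together) increments the count proportionally,
    # never by less than 1.
    return max(1, round(object_width / calibration_width))
```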
This observable generates events when a stream is interrupted. Please use the method mentioned earlier to add this element. A sample Loss-of-signal observable is shown below:
{
"typename": "vca.observable.LossOfSignal",
"heartbeat_frequency": 1000,
"channel": 0,
"name": " - Loss of Signal",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
heartbeat_frequency | Unsigned Integer | The interval value of this observable, in milliseconds | Any unsigned integer |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Occupancy observable generates an event when the number of objects matching the defined filters changes in the specified zone. The threshold_operator, combined with threshold_value, can be used to control when the occupancy observable generates events in relation to the number of objects in the zone. Please use the method mentioned earlier to add this element.
The current count of this observable is available using the counter token or as Counter Value metadata, available in the metadata streams and as part of the event.
A sample Occupancy observable is shown below:
{
"typename": "vca.observable.Occupancy",
"channel": 0,
"zone": 0,
"x": 30708,
"y": 3802,
"filters": [
"car",
"truck",
"van",
"motorcycle",
"bus"
],
"confidence_threshold": 0.699999988079071,
"threshold_value": 5,
"threshold_operator": ">",
"name": "Crossing Occupancy",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
zone | Unsigned Integer | The index of a zone to associate with this observable | Any unsigned integer |
x | Unsigned Integer | The x coordinate of the counter | Any unsigned integer between 0 and 65535 (inclusive) |
y | Unsigned Integer | The y coordinate of the counter | Any unsigned integer between 0 and 65535 (inclusive) |
filters | Array | Array of strings defining object classes defined by the object tracker or found under channel classification | Any array of strings, can be an empty array |
confidence_threshold | Float | Threshold to specify minimum confidence score required for the object to be classed as the object in filters | Float between 0 - 0.99 inclusive |
threshold_value | Integer | The value utilised by the threshold_operator to define when events are generated by the counter observable | Any integer between -10000 and 10000 |
threshold_operator | String | The operator which describes the counter's event generation behaviour, in relation to the threshold_value | Any of > , < , >= , <= , == , != or none |
For a list of properties common to all observables, please see the General Concepts section on Observables.
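The threshold_operator semantics can be sketched as a predicate; treating "none" as no threshold gating (fire on every change in count) is an assumption:

```python
import operator

_OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge,
        "<=": operator.le, "==": operator.eq, "!=": operator.ne}

def occupancy_triggers(count, threshold_operator, threshold_value):
    """Decide whether an occupancy change at `count` objects should
    generate an event under the configured threshold."""
    if threshold_operator == "none":
        return True  # assumed: no gating, fire on every change
    return _OPS[threshold_operator](count, threshold_value)
```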
The Presence observable is a basic observable which generates an event when an object is present inside a zone. Please use the method mentioned earlier to add this element. A sample Presence observable is shown below:
{
"typename": "vca.observable.Presence",
"channel": 0,
"zone": 4294967295,
"name": "Presence 5",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
zone | Unsigned Integer | The index of a zone to associate with this observable | Any unsigned integer |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Stopped observable is a basic observable which generates an event when an object is stationary inside a zone, for longer than the specified amount of time. Please use the method mentioned earlier to add this element. A sample Stopped observable is shown below:
{
"typename": "vca.observable.Stopped",
"zone": 4,
"duration": 10000,
"channel": 0,
"name": "Stopped 3",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
zone | Unsigned Integer | The index of a zone to associate with this observable | Any unsigned integer |
duration | Unsigned Integer | The interval value of this observable, in milliseconds | 1000 - 60000 |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Tailgating observable is a basic observable which generates an event when an object crosses through a zone, or over a line, within a set duration of one another. Please use the method mentioned earlier to add this element. A sample Tailgating observable is shown below:
{
"typename": "vca.observable.Tailgating",
"channel": 0,
"zone": 0,
"duration": 2000,
"name": "Tailgating 3",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
duration | Unsigned Integer | The duration value of this observable, in milliseconds | 1000 - 60000 |
zone | Unsigned Integer | The index of a zone to associate with this observable | Any unsigned integer |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Unattended observable is a basic observable which generates an event when an object, of a specified class, is present in the attendee_zone but an object, of the specified class, is not present in the attendant_zone. Please note that if filters is an empty array, the rule will never trigger.
Please use the method mentioned earlier to add this element. A sample Unattended observable is shown below:
{
"typename": "vca.observable.Unattended",
"channel": 0,
"attendee_zone": 0,
"attendant_zone": 1,
"duration": 5000,
"filters": [
"person"
],
"confidence_threshold": 0.699999988079071,
"name": "Unattended 5",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
attendee_zone | Unsigned Integer | The index of the attendee zone to associate with this observable | Any unsigned integer |
attendant_zone | Unsigned Integer | The index of the attendant zone to associate with this observable | Any unsigned integer |
duration | Unsigned Integer | The interval value of this observable, in milliseconds | 1000 - 60000 |
filters | Array | Array of strings defining object classes defined by the object tracker or found under channel classification | Any array of strings, can be an empty array |
confidence_threshold | Float | Threshold to specify minimum confidence score required for the object to be classed as the object in filters | Float between 0 - 0.99 inclusive |
For a list of properties common to all observables, please see the General Concepts section on Observables.
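An illustrative sketch of the attendee/attendant condition (ignoring duration); the zone-contents shape, given as (class, confidence) pairs, and the function names are hypothetical:

```python
def zone_has_match(objects, filters, confidence_threshold):
    """True when any object in the zone carries a filtered class at or
    above the confidence threshold."""
    return any(cls in filters and conf >= confidence_threshold
               for cls, conf in objects)

def unattended_triggers(attendee_objects, attendant_objects,
                        filters, confidence_threshold):
    # An empty filters array means the rule never triggers.
    if not filters:
        return False
    return (zone_has_match(attendee_objects, filters, confidence_threshold)
            and not zone_has_match(attendant_objects, filters,
                                   confidence_threshold))
```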
The Accessory Filter observable is a filter which generates an event when the metadata of the object which triggered the input observable contains accessory data matching the observable’s filter.
Due to the use cases associated with accessory detection, a differentiation can be made between a person with the detected accessory, a person classified as not wearing the accessory, and a person who has not yet been evaluated.
The Accessory Filter will only generate an event when all of the following requirements are met:
- A filter entry matches an evaluated accessory on the object.
- The confidence of the detected accessory is equal to or greater than confidence_threshold.
- state matches the detection state of the accessory.
The algorithms that produce Accessory metadata require the Deep Learning Skeleton Tracker; if this tracker is not configured on the channel, accessory metadata will never be generated.
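The three requirements can be sketched as a predicate; the accessory-record shape used here is a hypothetical illustration:

```python
def accessory_filter_triggers(accessories, filter_name,
                              confidence_threshold, state):
    """accessories: evaluated accessory records on the triggering
    object, e.g. {"name": "hard_hat", "confidence": 0.8,
    "state": "present"}. All three conditions must hold for one
    record for the filter to generate an event."""
    return any(acc["name"] == filter_name
               and acc["confidence"] >= confidence_threshold
               and acc["state"] == state
               for acc in accessories)
```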
Please use the method mentioned earlier to add this element. A sample Accessory Filter observable is shown below:
{
"typename": "vca.observable.AccessoryFilter",
"channel": 0,
"input": 1,
"filter": "high_vis_vest",
"confidence_threshold": 0.6000000238418579,
"state": "present",
"name": "Accessory Filter 3",
"triggers_action": true
}
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
input | Unsigned Integer | The index of another observable, which becomes the input | Any unsigned integer |
filter | String | String defining the accessory class that is being evaluated | Either high_vis_vest or hard_hat |
confidence_threshold | Float | Threshold to specify minimum confidence score required for the object to be classed as the object in filter | Float between 0 - 0.99 inclusive |
state | String | String defining the accessory state | Either present or absent |
The Colour Filter observable is a filter which generates an event when the object which triggered the input observable has 5% or more of any colour defined under filters. For this observable to generate an event, the channel must have the colour signature algorithm enabled. Please use the method mentioned earlier to add this element. A sample Colour Filter observable is shown below with all ten possible colours defined under filters:
{
"typename": "vca.observable.ColourFilter",
"channel": 0,
"input": 6,
"filters": [
"Black",
"Grey",
"Blue",
"Brown",
"Cyan",
"Green",
"Red",
"Magenta",
"White",
"Yellow"
],
"name": "Colour Filter 13",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
input | Unsigned Integer | The index of another observable, which becomes the input | Any unsigned integer |
filters | Array | Array of strings defining colours | Any array of strings, can be an empty array |
For a list of properties common to all observables, please see the General Concepts section on Observables.
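The 5% rule can be illustrated with a small predicate; the signature shape (colour name mapped to the fraction of the object it covers) is an assumption for illustration:

```python
def colour_filter_matches(signature, filters, min_fraction=0.05):
    """signature: mapping of colour name -> fraction of the object's
    colour signature (0.0 - 1.0). Matches when any filtered colour
    accounts for at least 5% of the object."""
    return any(signature.get(colour, 0.0) >= min_fraction
               for colour in filters)
```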
The Object Filter observable is a filter which generates an event when the metadata of the object which triggered the input observable contains one of the classes in the filters array. A class must match either:
- The channel classification entries, for calibration-based classification.
- The confidence classification classes for the configured tracker.
An object’s metadata will only contain this data if the selected tracker is able, or configured, to add this metadata to the object. For example, if the filter car is added, this is possible in two ways:
- Through calibration-based classification, where the channel has been calibrated.
- Through the confidence classification classes of the configured tracker.
Please use the method mentioned earlier to add this element. A sample Object Filter observable is shown below:
{
"typename": "vca.observable.ObjectFilter",
"channel": 0,
"input": 8,
"filters": [
"person",
"vehicle"
],
"confidence_threshold": 0.699999988079071,
"name": "Object Filter 12",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
input | Unsigned Integer | The index of another observable, which becomes the input | Any unsigned integer |
filters | Array | Array of strings defining object classes defined by the object tracker or found under channel classification | Any array of strings, can be an empty array |
confidence_threshold | Float | Threshold to specify minimum confidence score required for the object to be classed as the object in filters | Float between 0 - 0.99 inclusive |
For a list of properties common to all observables, please see the General Concepts section on Observables.
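The filter decision can be modelled with a short sketch; passes_object_filter is a hypothetical helper, not part of the VCAcore API, and simply mirrors the behaviour described above:

```python
def passes_object_filter(object_class, confidence, filters, confidence_threshold):
    """Model of the Object Filter decision: the triggering object's class must
    appear in the filters array and its classification confidence must meet
    the configured confidence_threshold."""
    return object_class in filters and confidence >= confidence_threshold

# Using the sample configuration above (filters ["person", "vehicle"],
# confidence_threshold ~0.7):
passes_object_filter("person", 0.85, ["person", "vehicle"], 0.7)  # True
passes_object_filter("car", 0.95, ["person", "vehicle"], 0.7)     # False
```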
The Retrigger observable is a filter observable which generates an event when the input observable fires, as long as the input observable has not fired within the previous interval period. Please use the method mentioned earlier to add this element. A sample Retrigger Filter observable is shown below:
{
"typename": "vca.observable.Retrigger",
"channel": 0,
"interval": 3000,
"input": 1,
"name": "Retrigger 1",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
interval | Unsigned Integer | The interval value of this observable, in milliseconds | 1 - 86400000 |
input | Unsigned Integer | The index of another observable, which becomes the input | Any unsigned integer |
For a list of properties common to all observables, please see the General Concepts section on Observables.
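The retrigger behaviour can be sketched as follows. This is an illustrative model only (not part of the API); timestamps are supplied in milliseconds, and the assumption that every input event restarts the interval follows from the wording above:

```python
class Retrigger:
    """Sketch of the Retrigger behaviour: an input event is passed through
    only if no input event occurred within the previous `interval` ms."""
    def __init__(self, interval_ms):
        self.interval = interval_ms
        self.last_input = None  # timestamp of the last input event (ms)

    def on_input_event(self, now_ms):
        # Suppress if the input fired within the previous interval period.
        suppressed = (self.last_input is not None
                      and now_ms - self.last_input < self.interval)
        self.last_input = now_ms
        return not suppressed

r = Retrigger(3000)
r.on_input_event(0)     # True  (first event always passes)
r.on_input_event(1000)  # False (within 3000 ms of the previous input)
r.on_input_event(5000)  # True  (4000 ms since the previous input)
```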
The Speed Filter observable is a filter which generates an event when the object, which has triggered the input
observable, is travelling between a min and max speed. For this observable to generate an event, the channel
must have been calibrated. Please use the method mentioned earlier to add this element. A sample Speed Filter observable is shown below:
{
"typename": "vca.observable.Speed",
"channel": 0,
"input": 8,
"minspeed": 3,
"maxspeed": 10,
"name": "Speed Filter 11",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
input | Unsigned Integer | The index of another observable, which becomes the input | Any unsigned integer |
minspeed | Unsigned Integer | The minimum speed an object must be travelling to be accepted by the rule | 1 - 65535 |
maxspeed | Unsigned Integer | The maximum speed an object must be travelling to be accepted by the rule | 1 - 65535 |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Source Filter observable is a filter which generates an event when the input
observable triggers an event, and the source
observable is in an on
state. Valid inputs for use as a source
are either the Schedule or HTTP other source observables. Please use the method mentioned earlier to add this element. A sample Source Filter observable is shown below:
{
"typename": "vca.observable.SourceFilter",
"channel": 0,
"input": 5,
"source": 7,
"name": "Source Filter",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
input | Unsigned Integer | The index of another observable, which becomes the input | Any unsigned integer |
source | Unsigned Integer | The index of a HTTP or Schedule other source observable | Any unsigned integer |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The And observable is a representation of the logical AND operation. Please use the method mentioned earlier to add this element. A sample And observable is shown below:
{
"typename": "vca.observable.And",
"channel": 0,
"inputa": 4,
"inputb": 5,
"constrain_target": true,
"name": "And 2",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
inputa | Unsigned Integer | The index of another observable, which becomes the first input | Any unsigned integer |
inputb | Unsigned Integer | The index of another observable, which becomes the second input | Any unsigned integer |
constrain_target | Boolean | A boolean specifying whether this observable generates events per-target | true or false |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Continuously observable generates an event when another event has been occurring continuously for a certain amount of time. The time parameter is user-specified. Please use the method mentioned earlier to add this element. A sample Continuously observable is shown below:
{
"typename": "vca.observable.Continuously",
"interval": 1000,
"channel": 0,
"input": 6,
"constrain_target": true,
"name": "Continuously 3",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
input | Unsigned Integer | The index of another observable, which becomes the input to this one | Any unsigned integer |
interval | Unsigned Integer | The interval value of this observable, in milliseconds | 1 - 86400000 |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Counter observable generates an event when the value of count changes. The count value is driven by the input observables, which either increment it, decrement it, or define its occupancy. The threshold_operator, combined with threshold_value, can be used to control when the Counter observable generates events in relation to the current count. The reset_inputs specify observable(s) which, when triggered, will reset the count value to 0. Please use the method mentioned earlier to add this element. A sample Counter observable is shown below:
{
"typename": "vca.observable.Counter",
"channel": 0,
"count": -20,
"x": 32767,
"y": 32767,
"increment_inputs": [
2,
6
],
"decrement_inputs": [
7
],
"occupancy_inputs": [
8
],
"reset_inputs": [ ],
"threshold_value": 20,
"threshold_operator": ">=",
"name": "Counter 17",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
count | Integer | The current value of the counter | Any integer |
x | Unsigned Integer | The x coordinate of the counter | Any unsigned integer between 0 and 65535 (inclusive) |
y | Unsigned Integer | The y coordinate of the counter | Any unsigned integer between 0 and 65535 (inclusive) |
increment_inputs | Array | The array of observables which will increment count when they trigger an event | Any array of unsigned integers, can be an empty array |
decrement_inputs | Array | The array of observables which will decrement count when they trigger an event | Any array of unsigned integers, can be an empty array |
occupancy_inputs | Array | The array of observables which will add to count the number of objects which are triggering the observable | Any array of unsigned integers, can be an empty array |
reset_inputs | Array | The array of observables which will reset count to 0 | Any array of unsigned integers, can be an empty array |
threshold_value | Integer | The value used by the threshold_operator to define when events are generated by the counter observable | Any integer between -10000 and 10000 |
threshold_operator | String | The operator which describes the counter's event generation behaviour, in relation to the threshold_value | Any of > , < , >= , <= , == , != or none |
For a list of properties common to all observables, please see the General Concepts section on Observables.
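The counter's event gating can be sketched as below. This is an illustrative model, not part of the API; in particular, the reading that the operator "none" causes an event on every change of count is an assumption:

```python
import operator

# Map of the supported threshold_operator strings to comparison functions.
OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge,
       "<=": operator.le, "==": operator.eq, "!=": operator.ne}

class Counter:
    """Sketch of the Counter behaviour: each change of count generates an
    event if count compares true against threshold_value."""
    def __init__(self, threshold_value, threshold_operator):
        self.count = 0
        self.threshold_value = threshold_value
        self.threshold_operator = threshold_operator

    def _fires(self):
        op = OPS.get(self.threshold_operator)
        # "none": fire on every change of count (assumed reading).
        return True if op is None else op(self.count, self.threshold_value)

    def increment(self):
        self.count += 1
        return self._fires()

    def decrement(self):
        self.count -= 1
        return self._fires()

    def reset(self):
        self.count = 0
```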
The Not observable is a representation of the logical NOT operation. Please use the method mentioned earlier to add this element. A sample Not observable is shown below:
{
"typename": "vca.observable.Not",
"channel": 0,
"input": 7,
"name": "Not",
"constrain_target": false,
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
input | Unsigned Integer | The index of another observable, which becomes the input to this one | Any unsigned integer |
constrain_target | Boolean | A boolean specifying whether this observable generates events per-target | true or false |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Or observable is a representation of the logical OR operation. Please use the method mentioned earlier to add this element. A sample Or observable is shown below:
{
"typename": "vca.observable.Or",
"channel": 0,
"inputa": 2,
"inputb": 7,
"constrain_target": true,
"name": "Or 4",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
inputa | Unsigned Integer | The index of another observable, which becomes the first input | Any unsigned integer |
inputb | Unsigned Integer | The index of another observable, which becomes the second input | Any unsigned integer |
constrain_target | Boolean | A boolean specifying whether this observable generates events per-target | true or false |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Previous observable generates an event when another event has occurred previously, within a certain amount of time. The time parameter is user-specified. Please use the method mentioned earlier to add this element. A sample Previous observable is shown below:
{
"typename": "vca.observable.Previous",
"interval": 1000,
"channel": 0,
"input": 5,
"constrain_target": true,
"name": "Previous 6",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
input | Unsigned Integer | The index of another observable, which becomes the input | Any unsigned integer |
interval | Unsigned Integer | The interval value of this observable, in milliseconds | 1 - 86400000 |
constrain_target | Boolean | A boolean specifying whether this observable generates events per-target | true or false |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Repeatedly observable generates an event when the input rule is triggered a set number of times within a defined period. The duration
period is a window of time computed from every input event. Please use the method mentioned earlier to add this element. A sample Repeatedly observable is shown below:
{
"typename": "vca.observable.Repeatedly",
"input": 5,
"duration": 1000,
"occurrences": 3,
"channel": 0,
"constrain_target": true,
"name": "Repeatedly 6",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
channel | Unsigned Integer | The index of the channel this observable is associated with | Any unsigned integer |
input | Unsigned Integer | The index of another observable, which becomes the input | Any unsigned integer |
duration | Unsigned Integer | The duration value of this observable, in milliseconds | 1 - 86400000 |
occurrences | Unsigned Integer | The required number of occurrences of the input | 1 - 86400000 |
constrain_target | Boolean | A boolean specifying whether this observable generates events per-target | true or false |
For a list of properties common to all observables, please see the General Concepts section on Observables.
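The rolling-window behaviour can be sketched as follows; this is an illustrative model (not part of the API), with timestamps in milliseconds:

```python
from collections import deque

class Repeatedly:
    """Sketch of the Repeatedly behaviour: fire once the input has triggered
    `occurrences` times within a rolling `duration` ms window."""
    def __init__(self, duration_ms, occurrences):
        self.duration = duration_ms
        self.occurrences = occurrences
        self.events = deque()  # timestamps of recent input events (ms)

    def on_input_event(self, now_ms):
        self.events.append(now_ms)
        # Discard input events that have aged out of the window.
        while now_ms - self.events[0] > self.duration:
            self.events.popleft()
        return len(self.events) >= self.occurrences

rep = Repeatedly(1000, 3)
rep.on_input_event(0)     # False (1 event in window)
rep.on_input_event(400)   # False (2 events in window)
rep.on_input_event(800)   # True  (3 events within 1000 ms)
```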
The Armed observable is a channel-independent observable which generates an event when the VCAcore system is Armed. Please use the method mentioned earlier to add this element. A sample Armed observable is shown below:
{
"typename": "vca.observable.Armed",
"name": "Armed Source",
"triggers_action": true
}
The Armed observable does not have any specific properties. For a list of properties common to all observables, please see the General Concepts section on Observables.
The Disarmed observable is a channel-independent observable which generates an event when the VCAcore system is Disarmed. Please use the method mentioned earlier to add this element. A sample Disarmed observable is shown below:
{
"typename": "vca.observable.Disarmed",
"name": "Disarmed Source",
"triggers_action": true
}
The Disarmed observable does not have any specific properties. For a list of properties common to all observables, please see the General Concepts section on Observables.
The HTTP observable is a channel-independent observable which generates an event each time the state
is changed to true
. Please use the method mentioned earlier to add this element. A sample HTTP observable is shown below:
{
"typename": "vca.observable.Http",
"state": true,
"name": "Http Source",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
state | Boolean | A boolean specifying whether this observable is on or off | true or false |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Interval observable is a channel-independent observable which generates an event each time the interval
period passes. Please use the method mentioned earlier to add this element. A sample Interval observable is shown below:
{
"typename": "vca.observable.Interval",
"interval": 1000,
"name": "Interval Source",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
interval | Unsigned Integer | The interval value of this observable, in milliseconds | Any unsigned integer |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The Schedule observable is a channel-independent observable, which generates an event when the system clock coincides with a scheduled on
period. Events are generated once per on
period (if VCAcore is started during an on
period a single event is fired). When set_arm_disarm
is true
, VCAcore will be armed and disarmed according to the defined periods of on
/ off
. Please use the method mentioned earlier to add this element. A sample Schedule observable is shown below:
{
"typename": "vca.observable.Schedule",
"schedule": [
"000000000000000000000000000000000000000000000000",
"000000000000000000000000000000000000000000000000",
"000000000000000111111111110000000000000000000000",
"000000000000000000001111111111111000000000000000",
"000011111111111111111100000000000000000000000000",
"000000000000000000000000000000000000000000000000",
"000000000000000000000000000000000000000000000000"
],
"set_arm_disarm": false,
"name": "Schedule Source",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
schedule | String Array | Array of seven strings, each forty-eight characters long. Each character is a binary digit | Array of 7 strings of 48 binary characters |
set_arm_disarm | Boolean | A boolean specifying whether this observable also sets the armed state of VCAcore | true or false |
For a list of properties common to all observables, please see the General Concepts section on Observables.
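Each of the seven strings appears to cover one day at 30-minute resolution (48 half-hour slots per day); on that assumption, the schedule array can be generated programmatically. build_schedule is a hypothetical helper, and the mapping of array rows to particular weekdays is not specified here:

```python
def build_schedule(on_periods):
    """Build the 7x48 schedule array: one string per day, one character per
    30-minute slot ('1' = on, '0' = off). on_periods is a list of
    (day_index, start_hour, end_hour) tuples."""
    days = [["0"] * 48 for _ in range(7)]
    for day, start_hour, end_hour in on_periods:
        for slot in range(int(start_hour * 2), int(end_hour * 2)):
            days[day][slot] = "1"
    return ["".join(row) for row in days]

# On from 07:30 to 13:00 in the third row of the array, which matches the
# third string of the sample schedule above.
schedule = build_schedule([(2, 7.5, 13)])
```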
The System observable is a channel-independent observable which generates an event when the specified resource_type passes a set threshold. If repeat_events is true, events will continue to be sent each time the min_interval duration has passed, for as long as the threshold is met. Please use the method mentioned earlier to add this element. A sample System observable is shown below:
{
"typename": "vca.observable.System",
"resource_type": "Gpu Utilisation",
"threshold": 0,
"min_interval": 60000,
"repeat_events": true,
"name": "System Alarm Source",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
resource_type | String | String defining the system resource to monitor | Only Gpu Utilisation |
threshold | Unsigned Integer | Percentage threshold that must be reached to trigger an event | Unsigned integer between 0 and 1 |
min_interval | Unsigned Integer | The interval value between each triggered event, in milliseconds | Any unsigned integer |
repeat_events | Boolean | A boolean specifying whether this observable can repeatedly trigger events | true or false |
For a list of properties common to all observables, please see the General Concepts section on Observables.
The License observable is a channel-independent observable which generates events in the following cases:
If repeat_events
is false
, a single event will be sent for each license event. (e.g. the license server is detected as disconnected when previously it was connected). If repeat_events
is true
, events will continue to be sent each time the min_interval
duration has passed. Please use the method mentioned earlier to add this element. A sample License observable is shown below:
{
"typename": "vca.observable.License",
"min_interval": 60000,
"repeat_events": true,
"name": "License Server Changed Source",
"triggers_action": true
}
In addition to the common properties described above, below is a list of properties specific to this observable:
Property | Type | Description | Possible values |
---|---|---|---|
min_interval | Unsigned Integer | The interval value between each triggered event, in milliseconds | Any unsigned integer |
repeat_events | Boolean | A boolean specifying whether this observable can repeatedly trigger events | true or false |
For a list of properties common to all observables, please see the General Concepts section on Observables.
To add a zone, send a POST
request to the /api/zones
endpoint. Unlike other elements, a zone does not have a typename
property, so a zone may be added by sending a single POST
request to the endpoint mentioned above with the correct payload. A sample zone is given below:
{
"name": "Zone 0",
"channel": 4,
"points": [
{
"x": 20585,
"y": 17374
},
{
"x": 23368,
"y": 51893
}
],
"colour": {
"r": 252,
"g": 175,
"b": 62,
"a": 1
},
"polygon": false,
"detection": true
}
The properties of a zone object are given below:
Property | Type | Description | Possible values |
---|---|---|---|
name | String | A user-defined name for this zone | Any string, can be empty |
channel | Unsigned integer | The identifier of the channel this zone is associated with | Any unsigned integer |
points | Array of objects | An array of point objects | A point object array with a minimum of two points |
colour | Object | A colour object specifying the colour of the zone, without the alpha ("a" ) property | A colour object with no alpha property |
polygon | Boolean | A boolean specifying whether this zone should be treated as a polygon (true ) or a line (false ) | true or false |
detection | Boolean | A boolean specifying whether detection is enabled on this zone | true or false |
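A zone payload can be assembled and sanity-checked before being sent. make_zone is a hypothetical helper, not part of the API, and the 0-65535 coordinate range is an assumption based on the Counter observable's x/y range, since the point objects in the sample use values in that range:

```python
def make_zone(name, channel, points, colour, polygon=False, detection=True):
    """Assemble a zone payload for POST /api/zones, enforcing the constraints
    listed in the table above. The 0-65535 coordinate range is assumed."""
    if len(points) < 2:
        raise ValueError("a zone requires at least two points")
    for p in points:
        if not (0 <= p["x"] <= 65535 and 0 <= p["y"] <= 65535):
            raise ValueError("point coordinates assumed to be 0-65535")
    return {"name": name, "channel": channel, "points": points,
            "colour": colour, "polygon": polygon, "detection": detection}

# Mirrors the sample zone above (colour given without the alpha property).
zone = make_zone("Zone 0", 4,
                 [{"x": 20585, "y": 17374}, {"x": 23368, "y": 51893}],
                 {"r": 252, "g": 175, "b": 62})
```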
Actions are objects that represent an operation that can be performed by the application. Actions can be linked to observables so that when an observable fires an event, the event causes the action to be triggered.
Actions can be added by sending a POST
request to the /api/actions
endpoint. Please use the method mentioned earlier to add this element. A sample TCP action is shown below:
{
"typename": "vca.action.Tcp",
"uri": "192.168.5.3",
"port": 0,
"body": "{ \"event_name\": \"{{name}}\", \"event_id\": {{id}} } ",
"name": "TcpActionAddTest",
"observables": [
5,
4
],
"always_trigger": false
}
The following is a list of the properties common to all actions:
Property | Type | Description | Possible values |
---|---|---|---|
name | String | A user-defined name for this action | Any string, can be empty |
observables | Array of indices | An array of indices of observables to associate with this action | Any array of unsigned integers (can be empty) |
always_trigger | Boolean | A boolean specifying whether this action triggers irrespective of the armed state of the device | true or false |
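As a sketch of adding an action, the POST request can be assembled with Python's standard library. The server address and port below are placeholders; actually sending the request additionally requires Digest authentication (e.g. via urllib.request.HTTPDigestAuthHandler), as noted at the start of this document:

```python
import json
import urllib.request

def build_add_request(base_url, endpoint, payload):
    """Build the POST request used to add a configuration object such as an
    action. Sending it (not shown) requires Digest authentication."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(base_url + endpoint, data=body,
                                  headers={"Content-Type": "application/json"},
                                  method="POST")

# Placeholder address; the payload mirrors the sample TCP action above.
req = build_add_request("http://192.168.5.3:8080", "/api/actions",
                        {"typename": "vca.action.Tcp",
                         "name": "TcpActionAddTest"})
```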
What follows is a list of all actions supported in VCAcore.
A TCP action sends data to a user-specified endpoint. Please use the method mentioned earlier to add this element. A sample TCP action is shown below:
{
"typename": "vca.action.Tcp",
"uri": "192.168.5.3",
"port": 0,
"body": "{ \"event_name\": \"{{name}}\", \"event_id\": {{id}} } ",
"name": "TcpActionAddTest",
"observables": [],
"always_trigger": false
}
The following is a list of properties specific to the TCP action:
Property | Type | Description | Possible values |
---|---|---|---|
uri | String | The URI of the TCP server to send the data to | Any string, can be empty |
port | Unsigned integer | The port to connect to on the TCP server | Any unsigned integer between 0 and 65535 (inclusive) |
body | String | Content of the action body, e.g. VCAcore metadata tokens, XML, JSON etc. as required | Any string, can be empty |
The TCP body supports escape characters, allowing for inputs such as NULL-terminated strings (\x00). In turn, these escape characters can themselves be escaped through the use of an additional \, e.g. \\x00.
For a list of properties common to all actions, please refer to the General Concepts section on actions.
An HTTP action sends an HTTP(s) request to a user-specified endpoint according to the HTTP/1.1 standard. Please use the method mentioned earlier to add this element. A sample HTTP action is shown below:
{
"typename": "vca.action.Http",
"method": "GET",
"uri": "http://192.168.1.60",
"port": 0,
"headers": "Content-Type: application/json",
"body": "{ \"event_name\": \"{{name}}\", \"event_id\": {{id}} } ",
"authentication": false,
"username": "",
"password": "",
"send_snapshot": false,
"pre_snapshots": 1,
"post_snapshots": 1,
"jpeg_quality": "average",
"interval": 10,
"multipart_name": "vca",
"multipart_image_name": "vca",
"verify_host_certificate": true,
"ssl_method": "sslv3",
"name": "",
"observables": [],
"always_trigger": false
}
The following is a list of properties specific to the HTTP action:
Property | Type | Description | Possible values |
---|---|---|---|
method | String | The HTTP verb to use when sending the request | One of the following: "GET" , "POST" , "PUT" , "DELETE" , "HEAD" |
uri | String | The URI of the HTTP request | Any string, can be empty |
port | Unsigned integer | The port to connect to on the HTTP server | Any unsigned integer between 0 and 65535 (inclusive) |
headers | String | Content of the HTTP action header | Any string, can be empty |
body | String | Content of the action body, e.g. VCAcore metadata tokens, XML, JSON etc. as required | Any string, can be empty |
authentication | Boolean | A boolean specifying whether to enable authentication | true or false |
username | String | The username to use when authentication is enabled | Any string, can be empty |
password | String | The password to use when authentication is enabled | Any string, can be empty |
send_snapshot | Boolean | A boolean value specifying whether to send snapshots with the request | true or false |
pre_snapshots | Unsigned integer | The number of pre-event snapshots to send with the request | Any unsigned integer between 0 and 10 (inclusive) |
post_snapshots | Unsigned integer | The number of post-event snapshots to send with the request | Any unsigned integer between 0 and 5 (inclusive) |
jpeg_quality | String | The quality of the JPEG snapshots that are sent with the request | One of the following: "worst" , "low" , "average" , "good" , "best" |
interval | Unsigned Integer | The time interval between snapshots (in milliseconds) | Any unsigned integer between 0 and 1000 (inclusive) |
multipart_name | String | The text assigned as the multipart name in the request | Letters, numbers, dashes, underscores and square brackets only; cannot be empty |
multipart_image_name | String | The text assigned as the multipart image name in the request | Letters, numbers, dashes, underscores and square brackets only; cannot be empty |
verify_host_certificate | Boolean | A boolean value specifying whether to verify the certificate of the HTTPS endpoint | true or false |
ssl_method | String | The encryption method to use when sending the HTTP action | One of the following: "sslv3" , "sslv2_3" , "tlsv11" , "tlsv12" , "tlsv13" , "tls" |
Please note that multipart_name will need to be reflected in any scripts that handle this request; for example, in PHP this would be done using $_FILES['vca'], where vca is the string set in the multipart_name field.
For a list of properties common to all actions, please refer to the General Concepts section on actions.
An Email action sends an email in a user-specified format. Please use the method mentioned earlier to add this element. A sample Email action is shown below:
{
"typename": "vca.action.Email",
"server": "",
"port": 0,
"encryption": "none",
"username": "",
"password": "",
"enable_authentication": false,
"verify_host_certificate": false,
"to": "",
"cc": "",
"bcc": "",
"from": "",
"subject": "{{type.string}} Cam ID {{#Channel}}{{id}}{{/Channel}}",
"format": "custom",
"body": "{{name}} triggered with at {{start.iso8601}}",
"send_snapshot": false,
"pre_snapshots": 0,
"post_snapshots": 0,
"jpeg_quality": "average",
"interval": 0,
"name": "",
"observables": [],
"always_trigger": false
}
Property | Type | Description | Possible values |
---|---|---|---|
server | String | The URI of the SMTP server to use for sending the email | Any string, can be empty |
port | Unsigned integer | The port to connect to on the SMTP server | Any unsigned integer between 0 and 65535 (inclusive) |
encryption | String | The encryption method to use when connecting to the server. Both unencrypted and TLS methods are supported | Either "none" , "tls" , "tlsv10" , "tlsv11" , "tlsv12" , "tlsv13" |
username | String | The username to use when authentication is enabled | Any string, can be empty |
password | String | The password to use when authentication is enabled | Any string, can be empty |
enable_authentication | Boolean | A boolean value specifying whether authentication should be enabled | true or false |
verify_host_certificate | Boolean | A boolean value specifying whether the server’s SSL certificate should be verified | true or false |
to | String | The ‘To’ field of the email | Any string, can be empty |
cc | String | The ‘CC’ field of the email | Any string, can be empty |
bcc | String | The ‘BCC’ field of the email | Any string, can be empty |
from | String | The ‘From’ field of the email | Any string, can be empty |
subject | String | Content of the email subject. Can use VCAcore metadata tokens | Any string, can be empty |
format | String | The string specifying the format of the email body | Must always be set to custom |
body | String | Content of the action body, e.g. VCAcore metadata tokens, XML, JSON etc. as required | Any string, can be empty |
send_snapshot | Boolean | A boolean value specifying whether to send snapshots with the request | true or false |
pre_snapshots | Unsigned integer | The number of pre-event snapshots to send with the request | Any unsigned integer between 0 and 10 (inclusive) |
post_snapshots | Unsigned integer | The number of post-event snapshots to send with the request | Any unsigned integer between 0 and 5 (inclusive) |
jpeg_quality | String | The quality of the JPEG snapshots that are sent with the request | One of the following: "worst" , "low" , "average" , "good" , "best" |
interval | Unsigned Integer | The time interval between snapshots (in milliseconds) | Any unsigned integer between 0 and 1000 (inclusive) |
For a list of properties common to all actions, please refer to the General Concepts section on actions.
An ‘Arm’ action sets the state of the application to ‘armed’. When the application is armed, all actions fire normally. Please use the method mentioned earlier to add this element. A sample ‘Arm’ action is shown below:
{
"typename": "vca.action.Arm",
"name": "",
"observables": [],
"always_trigger": false
}
The ‘Arm’ action does not have any specific properties. For a list of properties common to all actions, please refer to the General Concepts section on actions.
A ‘Disarm’ action sets the state of the application to ‘disarmed’. When the application is disarmed, only actions with always_trigger set to true will fire. Other actions will be prevented from firing. Please use the method mentioned earlier to add this element. A sample ‘Disarm’ action is shown below:
{
"typename": "vca.action.Disarm",
"name": "",
"observables": [],
"always_trigger": false
}
The ‘Disarm’ action does not have any specific properties. For a list of properties common to all actions, please refer to the General Concepts section on actions.
Algorithm information is designed as a reference endpoint for properties of the algorithms which may change. Each algorithm is categorised, with a set of properties for each algorithm type. For example, the list of classes a tracker can detect may evolve over time; the classes property will return the list of classes that tracker currently detects. To retrieve the data, simply send a GET request to /api/algorithms/info. A sample response is given below:
{
"dl_fisheye_tracker": {
"typename": "vca.algorithm.type.Tracker",
"version": "1.1.3",
"model_status": "Not Loaded",
"classes": [
"person"
],
"input_size": {
"width": 672,
"height": 672
}
},
"dl_classifier": {
"typename": "vca.algorithm.type.ObjectClassifier",
"version": "2.18.p",
"model_status": "Not Loaded",
"classes": [
"background",
"person",
"vehicle"
]
},
"dl_accessory_detector": {
"typename": "vca.algorithm.type.AttributeClassifier",
"version": "1.0.49-rev1",
"model_status": "Not Loaded",
"classes": [
"hard_hat",
"high_vis_vest"
]
},
"fall": {
"typename": "vca.algorithm.type.StateClassifier",
"version": "1.16",
"model_status": "Not Loaded"
},
"colour_signature": {
"typename": "vca.algorithm.type.ColourClassifier",
"version": "0.0.0",
"model_status": "Ready",
"colours": {
"Black": {
"r": 0,
"g": 0,
"b": 0
},
"Blue": {
"r": 0,
"g": 0,
"b": 200
},
"Brown": {
"r": 150,
"g": 75,
"b": 0
},
"Cyan": {
"r": 0,
"g": 255,
"b": 255
},
"Green": {
"r": 0,
"g": 150,
"b": 0
},
"Grey": {
"r": 100,
"g": 100,
"b": 100
},
"Magenta": {
"r": 200,
"g": 0,
"b": 200
},
"Red": {
"r": 255,
"g": 0,
"b": 0
},
"White": {
"r": 255,
"g": 255,
"b": 255
},
"Yellow": {
"r": 255,
"g": 255,
"b": 0
}
}
},
"dl_people_tracker": {
"typename": "vca.algorithm.type.Tracker",
"version": "2.1.1",
"model_status": "Not Loaded",
"classes": [
"person"
],
"input_size": {
"width": 672,
"height": 672
}
},
"dl_skeleton_tracker": {
"typename": "vca.algorithm.type.Tracker",
"version": "2.2.1",
"model_status": "Not Loaded",
"classes": [
"person"
],
"input_size": {
"width": 640,
"height": 480
}
},
"aggressive_behaviour": {
"typename": "vca.algorithm.type.FrameClassifier",
"version": "1.1.11-rev3",
"model_status": "Not Loaded"
},
"person_re_id": {
"typename": "vca.algorithm.type.FeatureGenerator",
"version": "4.0.2",
"model_status": "Not Loaded"
},
"hand_object": {
"typename": "vca.algorithm.type.Tracker",
"version": "2.1.1.p",
"model_status": "Not Loaded",
"classes": [
"hand",
"object",
"person"
],
"input_size": {
"width": 672,
"height": 672
}
},
"pose": {
"typename": "vca.algorithm.type.StateClassifier",
"version": "0.0.0",
"model_status": "Ready"
},
"qr_code_tracker": {
"typename": "vca.algorithm.type.Tracker",
"version": "4.8.0",
"model_status": "Ready",
"classes": [
"qr_code"
],
"input_size": {
"width": 4294967295,
"height": 4294967295
}
},
"dl_object_tracker": {
"typename": "vca.algorithm.type.Tracker",
"version": "4.16.3.p",
"model_status": "Not Loaded",
"classes": [
"bag",
"bicycle",
"boat",
"bus",
"car",
"forklift",
"motorcycle",
"person",
"truck",
"van"
],
"input_size": {
"width": 672,
"height": 672
}
},
"shelf_clearing": {
"typename": "vca.algorithm.type.Tracker",
"version": "2.0.8",
"model_status": "Not Loaded",
"classes": [
],
"input_size": {
"width": 672,
"height": 672
}
}
}
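As an illustration, the endpoint above can be queried and filtered in a few lines. This is a minimal sketch: the server address, credentials and the helper names (`get_algorithm_info`, `detectable_classes`) are assumptions for the example, and `requests` is the same third-party package used by the SSE example later in this document.

```python
import requests

def get_algorithm_info(server_ip, port, user, password):
    """Fetch the algorithm information tree from /api/algorithms/info."""
    url = f"http://{server_ip}:{port}/api/algorithms/info"
    response = requests.get(url, auth=requests.auth.HTTPDigestAuth(user, password))
    response.raise_for_status()
    return response.json()

def detectable_classes(info):
    """Map each algorithm name to its list of detectable classes, where one is reported."""
    return {name: algo["classes"] for name, algo in info.items() if "classes" in algo}

# Example usage (address and credentials are placeholders):
# info = get_algorithm_info("192.168.1.99", 80, "admin", "admin")
# print(detectable_classes(info))
```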
The following is a list of the properties common to all algorithm types:
Property | Type | Description | Possible values |
---|---|---|---|
version | String | The version number of the deep learning model or algorithm backend used by the algorithm | Any string |
model_status | String | The current status of the algorithm | See below |
The model_status
is used to indicate the current state of the algorithm and its readiness to start processing video. This takes into account the building or loading of the models into the hardware accelerators (which can be a lengthy process). This variable also drives the Burnt-in Annotation used to inform the user of the state of the selected algorithm.
model_status | Description | BIA message |
---|---|---|
Queued | Engine Builder is waiting because another model is being built | Waiting to Build [MODEL] |
Building | Model is currently being optimised for the backend and saved to disk | Building [MODEL] |
NotLoaded | Engine builder has built and written the model to file, but it is not currently selected | N/A, algorithm not requested |
Loading | Backend is loading the already built model because the configuration requires it | Initialising [MODEL] |
Ready | Backend has loaded the optimised model and it is ready to infer | N/A, algorithm should be running |
Failed | Something went wrong initialising the model | Model Failed to Build, check GPU backend |
Below is a list of the properties of the vca.algorithm.type.Tracker
object:
Property | Type | Description | Possible values |
---|---|---|---|
classes | Array of Strings | An array of each object class the algorithm will classify | Any String |
input_size | Object | An input_size object defining width and height | Any valid input_size object |
Below is a list of the properties of the input_size
object:
Property | Type | Description | Possible values |
---|---|---|---|
width | Unsigned integer | The width of the input image | Any unsigned integer |
height | Unsigned integer | The height of the input image | Any unsigned integer |
Note: The value 4294967295
is a sentinel value for null. In this case it represents that the width and height values are governed by the video source, and processing will be done at full frame.
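A consumer of this endpoint may want to normalise the sentinel before using the values. A minimal sketch (the helper name `decode_input_size` is an assumption for illustration):

```python
# VCAcore uses the maximum unsigned 32-bit value as a "null" sentinel in input_size.
NULL_SENTINEL = 4294967295  # 2**32 - 1

def decode_input_size(input_size):
    """Replace sentinel width/height values with None, meaning 'governed by the video source'."""
    decode = lambda v: None if v == NULL_SENTINEL else v
    return {"width": decode(input_size["width"]), "height": decode(input_size["height"])}
```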
Below is a list of the properties of the vca.algorithm.type.ObjectClassifier
and vca.algorithm.type.AttributeClassifier
objects:
Property | Type | Description | Possible values |
---|---|---|---|
classes | Array of Strings | An array of each object class the algorithm will classify | Any String |
These classes
are used in the filters
property of the Accessory, Directional Crossing and Object Filter observables.
Below is a list of the properties of the vca.algorithm.type.ColourClassifier
:
Property | Type | Description | Possible values |
---|---|---|---|
colours | Array of Objects | An array of colour objects describing the colour name and mean RGB values | Colour objects |
These colours
are used in the filters
property of the Colour Filter observable.
The following are the properties of the settings
object:
Property | Type | Description | Possible values |
---|---|---|---|
web_port | Unsigned Integer | The port the user interface and SSE metadata streams are hosted on | 0 - 65535 |
rtsp_port | Unsigned Integer | The port the RTSP streams are available on | 0 - 65535 |
network | Object | A network object specifying the network settings | Any valid network object |
coordinate_range_max | Unsigned Integer | Max value an object’s position metadata can have in both the x and y axes | 1 - 65535 |
coordinates_flip_y_axis | Boolean | A boolean value specifying the orientation of the y coordinate range | true or false |
coordinates_round_to_int | Boolean | A boolean value specifying if coordinates should be rounded to integers | true or false |
display_units | String | The units to be used by the burnt-in annotation | "imperial" or "metric" |
armed | Boolean | A boolean value specifying if VCAcore is armed and generating events | true or false |
object_image | Object | An object image object specifying object image settings | Any valid object image object |
onvif | Object | An ONVIF object specifying the ONVIF settings | Any valid ONVIF object |
logging | Object | A logging object specifying the VCAcore logging level | Any valid logging object |
To change the current port used to access the VCAcore UI and SSE metadata streams, a PUT
request must be sent to /api/settings/web_port
with the desired port number:
8080
To change the current port used to access the VCAcore RTSP video and RTSP metadata streams, a PUT
request must be sent to /api/settings/rtsp_port
with the desired port number:
8554
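Both port changes follow the same shape, so a small helper can cover either endpoint. A sketch assuming the `requests` package and placeholder server details (`settings_url` and `set_setting` are illustrative names, not part of the API):

```python
import requests

def settings_url(server_ip, port, setting):
    """Build the URL for a single settings endpoint, e.g. web_port or rtsp_port."""
    return f"http://{server_ip}:{port}/api/settings/{setting}"

def set_setting(server_ip, port, setting, value, auth):
    """PUT a new value to a settings endpoint; the payload is the bare JSON value."""
    response = requests.put(settings_url(server_ip, port, setting), json=value, auth=auth)
    response.raise_for_status()
    return response

# Example usage (address and credentials are placeholders):
# auth = requests.auth.HTTPDigestAuth("admin", "admin")
# set_setting("192.168.1.99", 80, "web_port", 8080, auth)
# set_setting("192.168.1.99", 80, "rtsp_port", 8554, auth)
```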
The VCAserver web server by default runs unencrypted over HTTP. To enable SSL and switch the REST API, Web UI and SSE metadata streams to encrypted HTTPS, upload certificate and key files via an HTTP multipart POST
request to the /api/ssl/certificates
endpoint.
The multipart request must comprise two file inputs, with the correct multipart name for each file:
- name="cert" with a certificate file, e.g. cert.pem
- name="key" with a key file, e.g. key.key
The name identifiers are used by VCAserver to handle each file correctly.
If the request succeeds, the change is applied immediately and you can start using HTTPS to communicate with the web server.
Removal of the SSL certificate is performed by making an HTTP DELETE
request to the endpoint: /api/ssl/certificate
. This will delete both the .pem
and .key
files previously uploaded, after which the server will return to using the HTTP protocol.
The network setting interface can be used to return the current network device configuration. For example, sending a GET
request to /api/settings/network/devices/0/
would return:
"name": "eth0",
"ipv4": {
"address": "192.168.0.23",
"subnet": "255.255.255.0",
"gateway": "192.168.0.1",
"method": "auto",
"dns_servers": [
"194.168.4.100",
"194.168.8.100",
"192.168.0.1"
]
}
Due to the nature of the interface, some endpoints are static, as they are defined by the host system. PUT
and POST
requests to these endpoints will fail with 400 Bad Request
.
The following are the properties of a network/devices
object:
Property | Type | Description | Possible values |
---|---|---|---|
name | String | The network device name | Static |
ipv4 | Object | ipv4 object | Any valid ipv4 object |
On specific platforms, a network device configuration can be updated by sending a PUT
request to /api/settings/network/devices/0/
, with a valid full or partial ipv4
object.
The following are the properties of an ipv4
object:
Property | Type | Description | Possible values |
---|---|---|---|
address | String | IP address | Any valid IP address as a string XXX.XXX.XXX.XXX |
subnet | String | Subnet mask | Any valid subnet mask as a string 255.255.255.0 |
gateway | String | IP address of the default gateway | Any valid IP address as a string XXX.XXX.XXX.XXX |
method | String | The network IP settings method | auto or manual |
dns_servers | List | List of IP addresses for DNS servers | Any list of IP addresses |
The max coordinate range is used to define the maximum value an object’s position metadata can have in both the x and y axes. For example, setting the value to 100
means that all object coordinate data will be given in the range 0
- 100
.
To change the current max coordinate range, a PUT
request must be sent to /api/settings/coordinate_range_max
with the desired value as an integer:
100
The flip y axis setting is used to define the orientation of the y axis.
To change the orientation of the y axis, a PUT
request must be sent to /api/settings/coordinates_flip_y_axis
with a payload containing either:
true
to set 0
as the bottom of the camera field of view or:
false
to set 0
as the top of the camera field of view.
The round to integer setting ensures all returned positional metadata is given in integer form. Caution should be taken if the max coordinate range is set to a small value, as rounding will reduce positional precision.
To change the round to integer setting, a PUT
request must be sent to /api/settings/coordinates_round_to_int
with a payload containing either:
true
to round positional data to the nearest integer or:
false
to return the data as is.
Defines the units used in the burnt-in annotation of speed, height and area.
To change the display units setting, a PUT
request must be sent to /api/settings/display_units
with a payload containing either:
"imperial"
to set the burnt-in annotation values for speed, height and area to imperial units, or:
"metric"
to set them to metric units.
These settings define if, and how often, Base64-encoded JPEG snapshots of tracked objects are generated. These snapshots are made available as part of the SSE objects
metadata stream. When enabled, all objects detected across all channels will have JPEG-encoded snapshots generated, whether the SSE metadata is being consumed or not. In scenes with many objects or deployments with many channels, this setting will have an impact on CPU and memory usage.
To change the object image settings, a PUT
request must be sent to /api/settings/object_image
with a payload containing:
{
"enabled": false,
"interval": 1000
}
The following are the properties of an object_image
object:
Property | Type | Description | Possible values |
---|---|---|---|
enabled | Boolean | A boolean value specifying if object snapshots should be generated | true or false |
interval | Unsigned Integer | The interval at which snapshots should be regenerated for an object | 0 - 5000 |
The ONVIF setting enables or disables the internal ONVIF service, allowing RTSP and event data to be consumed by ONVIF-compliant services.
To enable the ONVIF service, a PUT
request must be sent to /api/settings/onvif/enabled
with a payload containing either:
true
to enable the ONVIF service or:
false
to disable the ONVIF service.
To change the logging level in VCAcore, a PUT
request must be sent to /api/settings/logging
with a payload containing:
{
"level": "error"
}
Property | Type | Description | Possible values |
---|---|---|---|
level | String | Specifies the logging level used | "fatal", "error", "warning", "info", "debug", "trace" |
Logging levels above "error"
will incur a resource usage cost and should not be left enabled unless absolutely required.
The current armed state of VCAcore can be retrieved by sending a GET
request to /api/settings/armed
To change the armed state of VCAcore, a PUT
request must be sent to /api/settings/armed
with a payload containing either:
true
to arm VCAcore or:
false
to disarm VCAcore.
To change the current password, a POST
request must be sent to /api/auth/user/admin
with the following data:
{
"current": "CURRENT_PASSWORD_MD5_HASH",
"password": "NEW_PASSWORD_MD5_HASH"
}
The password hashes are computed as follows:
MD5("admin:vcatechnology.com:" + password)
Note that all subsequent HTTP requests will need to be made with the updated password.
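The hash computation above can be reproduced with the standard library. A sketch (the helper names are illustrative; the `admin:vcatechnology.com:` prefix is taken verbatim from the formula above):

```python
import hashlib

HASH_PREFIX = "admin:vcatechnology.com:"  # prefix from the documented MD5 formula

def vca_password_hash(password):
    """Compute MD5("admin:vcatechnology.com:" + password), hex-encoded."""
    return hashlib.md5((HASH_PREFIX + password).encode("utf-8")).hexdigest()

def change_password_payload(current_password, new_password):
    """Build the JSON payload for POST /api/auth/user/admin."""
    return {
        "current": vca_password_hash(current_password),
        "password": vca_password_hash(new_password),
    }
```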
VCAcore supports two methods to access metadata produced by the various algorithms running on a channel.
Both methods expose VCAcore’s metadata in JSON format; a detailed description of the data format is outlined below.
The SSE metadata API endpoint for a channel is split into a number of categories, objects
, events
, keep-alive
, history
, count
and status
, each generating a separate message. Message types will only be generated if there is data to send. For example, if a scene is static, only keep-alive
messages would be generated for that channel.
Each events
, objects
, history
, count
, and status
JSON object has the ISO8601 timestamp of that particular frame as its only property. The value associated with that property is an array of objects; for a comprehensive breakdown of the returned data formats, please see Metadata Format.
It is possible to filter which category of message is sent by adding query parameters to the SSE endpoint. If no parameter is specified, objects
and events
messages will be generated and sent, with a keep-alive message sent after 1000ms
.
For events
messages only:
http://SERVER_IP:PORT/metadata/CHANNEL_ID?events=1
An additional events.unique
parameter is also available, limiting events
metadata to just the first and final event objects. Metadata objects under Event Metadata, Scene Learning and Tamper will all be included within this filter.
http://SERVER_IP:PORT/metadata/CHANNEL_ID?events=1&events.unique=1
An example response below:
{
"2020-10-30T12:43:42.035830016Z": [
{
"typename": "vca.meta.data.Event",
"id": 12841,
"name": "Deep Learning Presence 24",
"type": "Presence",
"category": "analytics",
"start": "2020-10-30T12:43:42.035830016Z",
"end": "2020-10-30T12:43:42.035830016Z",
"duplicate": true,
"final": true,
"objects": [
{
"typename": "vca.meta.data.Channel",
"id": 2
},
{
"typename": "vca.meta.data.Zone",
"id": 4,
"name": "Car Park",
"channel": 2,
"colour": {
"r": 114,
"g": 159,
"b": 207,
"a": 255
},
"detection": "on",
"type": "polygon",
"outline": [
{
"x": 0,
"y": 26634
},
{
"x": 34476,
"y": 26634
},
{
"x": 34476,
"y": 23035
},
{
"x": 0,
"y": 23035
}
]
}
]
},
{
"typename": "vca.meta.data.Event",
"category": "analytics",
"type": "count",
"duplicate": false,
"final": false,
"start": "2020-10-30T12:42:59.835830016Z",
"end": "2020-10-30T12:43:42.035830016Z",
"id": 1247782,
"name": "South to North",
"objects": [
{
"id": 79,
"name": "South to North",
"position": {
"x": 54195,
"y": 9934
},
"typename": "vca.meta.data.count.Value",
"value": 105
},
{
"id": 79,
"typename": "vca.meta.data.Observable"
},
{
"id": 0,
"typename": "vca.meta.data.Channel"
}
]
}
]
}
For objects
messages only:
http://SERVER_IP:PORT/metadata/CHANNEL_ID?objects=1
An example response below:
{
"2018-10-02T16:51:55.782845060+01:00": [
{
"typename": "vca.meta.data.Object",
"id": 2128,
"outline": [
{
"x": 12910,
"y": 33733
},
{
"x": 27169,
"y": 33733
},
{
"x": 12910,
"y": 65535
},
{
"x": 27169,
"y": 65535
}
],
"width": 14259,
"height": 31802,
"meta": [
{
"typename": "vca.meta.data.object.GroundPoint",
"value": {
"x": 20040,
"y": 65535
}
}
]
}
]
}
For history
messages only:
http://SERVER_IP:PORT/metadata/CHANNEL_ID?history=1
An example response below:
{
"2018-10-02T16:51:55.782845060+01:00": [
{
"typename": "vca.meta.data.object.History",
"map": [
{
"key": "2018-10-02T16:51:55.374636300+01:00",
"value": {
"typename": "vca.meta.data.object.GroundPoint",
"value": {
"x": 19220,
"y": 3304
}
}
},
{
"key": "2018-10-02T16:51:55.574636300+01:00",
"value": {
"typename": "vca.meta.data.object.GroundPoint",
"value": {
"x": 18581,
"y": 3028
}
}
},
{
"key": "2018-10-02T16:51:55.782845060+01:00",
"value": {
"typename": "vca.meta.data.object.GroundPoint",
"value": {
"x": 18192,
"y": 3028
}
}
}
],
"object_id": 2128
}
]
}
For count
messages only:
http://SERVER_IP:PORT/metadata/CHANNEL_ID?count=1
An example response below:
{
"2022-03-22T16: 40: 48.106083755Z": [
{
"typename": "vca.meta.data.count.Value",
"id": 3,
"name": "my first counter",
"value": 2,
"difference": -1,
"position": {
"x": 45989,
"y": 15082
}
},
{
"typename": "vca.meta.data.count.Value",
"id": 5,
"name": "my second counter",
"value": 254,
"difference": 1,
"position": {
"x": 489,
"y": 13082
}
}
]
}
For status
messages only:
http://SERVER_IP:PORT/metadata/CHANNEL_ID?status=1
An example response below:
{
"2022-03-22T16: 40: 48.106083755Z": [
{
"typename": "vca.meta.data.ProcessingTime",
"time": 20,
"identifier": "vca.algorithm.DLObjectTracker"
},
{
"typename": "vca.meta.data.ProcessingTime",
"time": 8,
"identifier": "Analytics"
},
{
"typename": "vca.meta.data.VideoInfo",
"resolution": {
"width": 720,
"height": 480
},
"frame_rate_n": 15,
"frame_rate_d": 1,
"decoder": "avdec_mpeg4-10"
}
]
}
The SSE metadata API supports a keep-alive
message. An example keep-alive
response is below:
{
}
If no metadata messages are sent from a channel for a set interval
, a keep-alive
message will be sent. The same empty JSON object will continue to be sent after every interval
period. The interval
can be specified using the following parameter:
http://SERVER_IP:PORT/metadata/CHANNEL_ID?keep-alive=2000
If no parameter is specified, the keep alive message will be sent after 1000ms
.
The SSE metadata API endpoint to retrieve system statistics is:
http://SERVER_IP:PORT/api/system-stats
System information covering system uptime, processor load, graphics card information and load, and memory usage can be retrieved. An example response is given below:
{
"time": {
"now": "2023-11-20T14:13:34.224181896Z"
},
"cpu": {
"devices": [
{
"vendor": "Arm Limited",
"product": "ARMv8 Processor rev 1 (v8l)"
}
],
"process": 0.0506757,
"processes": [
0.0506757
],
"temperature": 0,
"temperatures": [],
"total": 0.0473899,
"totals": [
0.0675676,
0.0945946,
0,
0.0273973
]
},
"gpus": [
{
"device": {
"address_id": 0,
"bus_id": 1,
"product": "GA106M [GeForce RTX 3060 Mobile / Max-Q]",
"vendor": "NVIDIA Corporation"
},
"memory": {
"available": 8149336064,
"total": 8366784512,
"used": 217448448
},
"name": "NVIDIA GeForce RTX 3060 Laptop GPU",
"temperature": 49,
"utilisation": 0
}
],
"memory": {
"physical": {
"in_use": 4253986816,
"process": 216981504,
"total": 16673214464
},
"virtual": {
"in_use": 4253986816,
"process": 1548251136,
"total": 18820694016
}
},
"uptime": {
"process": 2741172,
"system": 277573990
}
}
Please note that on Windows systems the process and processes values belonging to cpu will be set to 0.
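A consumer of this endpoint might derive simple health figures from the response. A minimal sketch using the sample values above (the helper names are assumptions for illustration):

```python
def memory_usage_percent(stats):
    """Percentage of physical memory in use, from an /api/system-stats response."""
    physical = stats["memory"]["physical"]
    return 100.0 * physical["in_use"] / physical["total"]

def gpu_memory_used(stats):
    """Bytes of GPU memory used, keyed by the reported GPU name."""
    return {gpu["name"]: gpu["memory"]["used"] for gpu in stats.get("gpus", [])}
```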
Below is example Python code demonstrating how the channel SSE metadata stream can be consumed:
#!/usr/bin/python
# The user must install the sseclient and requests packages using pip
from sseclient import SSEClient
import json
import requests

def do_something_useful(message):
    metadata = json.loads(message.data)
    print('Received metadata event')
    print(json.dumps(metadata, indent=4, sort_keys=True))

if __name__ == '__main__':
    SERVER_IP = '192.168.1.99'
    PORT = '80'
    CHANNEL_ID = 0
    messages = SSEClient('http://' + SERVER_IP + ':' + PORT + '/metadata/' + str(CHANNEL_ID) + '?events=1&events.unique=1',
                         headers={
                             "Accept": "text/event-stream",
                             "Accept-Encoding": "identity"
                         },
                         auth=requests.auth.HTTPDigestAuth('admin', 'admin'),
                         chunk_size=65536)
    for msg in messages:
        do_something_useful(msg)
In addition to a channel’s RTSP video stream, the metadata for that channel is also encoded into an RTSP metadata stream.
The RTSP metadata endpoint for a channel is the same as the RTSP URL:
rtsp://SERVER_IP:RTSP_PORT/channels/CHANNEL_ID
The RTSP metadata stream has both ONVIF (XML) and JSON streams defined in the RTSP DESCRIBE message. Each RTSP JSON metadata stream message contains a JSON object for a particular frame for a given channel. ONVIF Metadata is outside the scope of this document.
Each JSON object contains a timestamp and objects property. The timestamp property has an ISO8601 timestamp value for the given frame. The objects property contains an array of objects, details of which are outlined in Metadata Format.
{
"timestamp": "2020-09-08T16:52:34.011858944+01:00",
"objects": [
{
"typename": "vca.meta.data.Object",
"id": 1380,
"outline": [
{
"x": 10646,
"y": 23724
},
{
"x": 19021,
"y": 23724
},
{
"x": 10646,
"y": 29627
},
{
"x": 19021,
"y": 29627
}
],
"width": 8375,
"height": 5903,
"meta": [
{
"typename": "vca.meta.data.object.GroundPoint",
"value": {
"x": 14833,
"y": 29627
}
},
{
"typename": "vca.meta.data.classification.Confidence",
"class": "vehicle",
"confidence": 0.8999999761581421,
"object_id": 1380
}
]
},
{
"typename": "vca.meta.data.Event",
"id": 12841,
"name": "Deep Learning Presence 24",
"type": "Presence",
"category": "analytics",
"start": "2020-09-08T16:52:31.677858944+01:00",
"end": "2020-09-08T16:52:34.011858944+01:00",
"duplicate": true,
"final": true,
"objects": [
{
"typename": "vca.meta.data.Channel",
"id": 2
},
{
"typename": "vca.meta.data.Zone",
"id": 4,
"name": "Car Park",
"channel": 2,
"colour": {
"r": 114,
"g": 159,
"b": 207,
"a": 255
},
"detection": "on",
"type": "polygon",
"outline": [
{
"x": 0,
"y": 26634
},
{
"x": 34476,
"y": 26634
},
{
"x": 34476,
"y": 23035
},
{
"x": 0,
"y": 23035
}
]
}
]
}
]
}
Example code in Python demonstrating how the RTSP metadata stream can be consumed is available for download here:
To access the metadata stream produced by the VCAedge plug-in on either the IPM or IPAI cameras, a Common Gateway Interface (CGI) is provided. Please note VCAcore does not support the CGI.
Example code in Python demonstrating how the CGI metadata stream can be consumed is available for download here:
Each metadata object has a typename
string property enabling its identification. Below is a list of object types that may be found in the metadata API.
Note: VCAcore’s default coordinate scheme is a 16-bit integer, with 0-65535 representing the range from 0-1 in the frame. However, this upper limit is customisable.
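Converting the default scheme back to fractions or pixels is a one-line calculation. A sketch, assuming the default 65535 upper limit unless a custom coordinate_range_max is supplied (the helper names are illustrative):

```python
DEFAULT_RANGE_MAX = 65535  # VCAcore's default 16-bit coordinate range

def to_fraction(value, coordinate_range_max=DEFAULT_RANGE_MAX):
    """Convert a metadata coordinate to a 0-1 fraction of the frame."""
    return value / coordinate_range_max

def to_pixels(value, frame_dimension, coordinate_range_max=DEFAULT_RANGE_MAX):
    """Map a metadata coordinate onto a real frame dimension (width or height in pixels)."""
    return round(to_fraction(value, coordinate_range_max) * frame_dimension)
```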
An object representing a metadata event.
Event metadata will continue to be generated whilst an event continues to be true. Within the event metadata object, a number of properties are defined to help determine when an event starts and finishes.
How these properties are updated in the event life cycle is indicated below:
Event starts:
- start and end times are set to the current frame’s timestamp.
- duplicate and final are both false, as this is the first frame in which the event exists.
Event continues to be true (in some cases this step will not apply, as an event is only ever true for a single frame):
- start time remains static.
- end time is updated to the current frame’s timestamp.
- duplicate becomes true, as this event’s metadata is now more than one frame old.
- final remains false.
Event is no longer true and therefore known to be finished:
- start time remains static.
- end time remains set to the last time the event was true.
- duplicate is either set to, or remains, true.
- final becomes true, as the event will not be present in the next frame’s metadata.
Note: When a channel with a file source is configured and the video loops, a single "category": "loss-of-signal" event message will be created with duplicate and final both set to true.
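The duplicate/final life cycle above can be collapsed into a small classifier when consuming the metadata stream. A sketch (the function name and phase labels are assumptions for illustration):

```python
def event_phase(event):
    """Classify an event metadata object by its duplicate/final flags."""
    if event["final"]:
        return "finished"  # event will not appear in the next frame's metadata
    if event["duplicate"]:
        return "ongoing"   # metadata is more than one frame old
    return "started"       # first frame in which the event exists
```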
Example:
{
"typename": "vca.meta.data.Event",
"id": 12841,
"name": "Deep Learning Presence 24",
"type": "Presence",
"category": "analytics",
"start": "2020-09-08T16:52:31.677858944+01:00",
"end": "2020-09-08T16:52:34.011858944+01:00",
"duration": 2440,
"duplicate": true,
"final": true,
"objects": [
{
"typename": "vca.meta.data.Channel",
"id": 2
},
{
"typename": "vca.meta.data.Zone",
"id": 4,
"name": "Car Park",
"channel": 2,
"colour": {
"r": 114,
"g": 159,
"b": 207,
"a": 255
},
"detection": "on",
"type": "polygon",
"outline": [
{
"x": 0,
"y": 26634
},
{
"x": 34476,
"y": 26634
},
{
"x": 34476,
"y": 23035
},
{
"x": 0,
"y": 23035
}
]
},
{
"id": 105,
"width": 728,
"height": 3268,
"meta": [
{
"typename": "vca.meta.data.object.GroundPoint",
"value": {
"x": 44229,
"y": 24298
}
}
],
"outline": [
{
"x": 43865,
"y": 21030
},
{
"x": 44593,
"y": 21030
},
{
"x": 43865,
"y": 24298
},
{
"x": 44593,
"y": 24298
}
],
"typename": "vca.meta.data.Object"
}
]
}
Property | Type | Description | Possible values |
---|---|---|---|
id | Number | The id of this event | An unsigned integer |
name | String | The name of the rule that triggered this event | Any string (can be empty) |
type | String | The type of this event | A valid event type. See below for a list of types |
category | String | The category of this event | A valid event category. See below for a list of categories |
start | String | The start timestamp of this event | A valid ISO8601 timestamp |
end | String | The end timestamp of this event | A valid ISO8601 timestamp |
duration | Number | The duration in milliseconds of this event | An unsigned integer |
duplicate | Boolean | Indicates a persistent event has fired before | true/false |
final | Boolean | Indicates this event has finished | true/false |
objects | Array | Metadata objects associated with this event | A valid array of tracked objects |
Below is a list of possible event categories:
Event category string | Description |
---|---|
analytics | An event generated by the VCA Analytics engine |
loss-of-signal | An event indicating the loss of video signal from a camera |
license | An event generated by the state of licensing |
Below is a list of possible analytics
event types:
Event type string | Description |
---|---|
absence | An absence event |
presence | A presence event |
enter | An enter event |
exit | An exit event |
appear | An appear event |
abandoned | An abandoned event |
aggressive-behaviour | An aggressive behaviour event |
disappear | A disappear event |
stopped | A stopped event |
dwell | A dwell event |
direction | A direction event |
directional-crossing | A directional crossing event |
fall | A fall event |
hands-up | A hands up event |
occupancy | An occupancy event |
tailgating | A tailgating event |
repeatedly | A repeatedly event |
linecountera | A line counter crossing event (left direction) |
linecounterb | A line counter crossing event (right direction) |
and | An and event |
or | An or event |
previous | A previous event |
not | A not event |
continuously | A continuously event |
unattended | An unattended event |
Below is a list of possible license
event types:
Event type string | Description |
---|---|
expiry | An evaluation expiry event |
connected | A license server connected event |
disconnected | A license server disconnected event |
The zone data object.
Example:
{
"typename": "vca.meta.data.Zone",
"id": 4,
"name": "Car Park",
"channel": 2,
"colour": {
"r": 114,
"g": 159,
"b": 207,
"a": 255
},
"detection": "on",
"type": "polygon",
"outline": [
{
"x": 0,
"y": 26634
},
{
"x": 34476,
"y": 26634
},
{
"x": 34476,
"y": 23035
},
{
"x": 0,
"y": 23035
}
]
}
Property | Type | Description | Possible values |
---|---|---|---|
id | Number | The id of this zone in the configuration | An unsigned integer |
name | String | The name of the zone | Any string (can be empty) |
channel | Number | The id of the channel the zone is configured on | An unsigned integer |
colour | Object | A colour object specifying the colour of the zone, without the alpha ("a") property | Any valid colour object, with no alpha property |
detection | Boolean | A boolean specifying whether detection is enabled on this zone | true or false |
type | String | A string specifying whether this zone is a polygon or a line | polygon or line |
outline | Array of objects | An array of point objects | A point object array that has a minimum of two points |
An object indicating the current level of aggression.
Example:
{
"typename": "vca.meta.data.Aggression",
"confidence": 0.2347955339821
}
Property | Type | Description | Possible values |
---|---|---|---|
confidence | Float | The aggression model’s confidence | 0 - 1 |
The source channel of an object.
Example:
{
"typename": "vca.meta.data.Channel",
"id": 1
}
Property | Type | Description | Possible values |
---|---|---|---|
id | Number | The id of this channel | An unsigned integer |
The object representation of a counter or occupancy observable. This object represents the state of the observable at any given frame.
Example:
{
"typename": "vca.meta.data.count.Value",
"id": 3,
"name": "my first counter",
"value": 2,
"difference": -1,
"position": {
"x": 45989,
"y": 15082
}
},
Property | Type | Description | Possible values |
---|---|---|---|
id | Number | The id of the counter observable this count value is associated with | An unsigned integer |
name | String | The name of the counter this count value is associated with | Any string (can be empty) |
value | Number | The value of the counter | A signed integer |
difference | Number | The difference between the last count event and the current event | A signed integer |
position | Object | A point object | A point object |
The object representation of an evaluation license’s remaining time.
Example:
{
"typename": "vca.meta.data.license.RemainingTime",
"time": 0,
"license_id": 1
}
Property | Type | Description | Possible values |
---|---|---|---|
time | Number | The amount of time remaining on the evaluation license in seconds | An unsigned integer |
license_id | Number | The id of the license this time value is associated with | An unsigned integer |
An object indicating scene learning is in progress.
Example:
{
"typename": "vca.meta.data.Learning"
}
An object indicating tamper is in progress.
Example:
{
"typename": "vca.meta.data.Tampered"
}
The object representation of a line counter event. This object contains data pertaining to the object which has crossed the line.
Example:
{
"typename": "vca.meta.data.count.Line",
"rule_id": 4,
"width": 20,
"position": 3,
"count": 2,
"direction": false
}
Property | Type | Description | Possible values |
---|---|---|---|
rule_id | Number | The rule id of the counting line associated with this event | An unsigned integer |
width | Number | The width of the object which crossed the line | An unsigned 16-bit integer |
position | Number | The position of the object on the line | An unsigned 16-bit integer |
count | Number | The number of objects crossing the line in this event | An unsigned integer |
direction | Boolean | The crossing direction, with Left = false and Right = true | true/false |
An object representing a tracked object.
{
"id": 56,
"width": 4726,
"height": 4914,
"meta": [
{
"typename": "vca.meta.data.object.GroundPoint",
"value": {
"x": 45686,
"y": 25667
}
},
{
"class": "person",
"confidence": 0.875215470790863,
"object_id": 56,
"typename": "vca.meta.data.classification.Confidence"
},
{
"typename": "vca.meta.data.ColourSignature",
"colours": [
{
"colour_name": "Black",
"colour_value": {
"r": 0,
"g": 0,
"b": 0
},
"proportion": 0.95555555820465088
},
{
"colour_name": "..."
}
]
}
],
"outline": [
{
"x": 43323,
"y": 20753
},
{
"x": 48049,
"y": 20753
},
{
"x": 43323,
"y": 25667
},
{
"x": 48049,
"y": 25667
}
],
"typename": "vca.meta.data.Object"
}
Property | Type | Description | Possible values |
---|---|---|---|
id | Number | The id of this tracked object | An unsigned integer |
height | Number | The height of the object’s bounding box outline, relative to coordinate_range_max | An unsigned integer |
width | Number | The width of the object’s bounding box outline, relative to coordinate_range_max | An unsigned integer |
meta | Array | An array of additional metadata objects | An array of valid metadata objects |
outline | Array of objects | An array of point objects | A point object array that has a minimum of two points |
Metadata objects that could appear in the meta array are defined below:
The ground point of a tracked object in the image space. The x and y values will be any unsigned 16-bit integer, unless a max coordinate range has been set.
Example:
{
"typename": "vca.meta.data.object.GroundPoint",
"value": {
"x": 45686,
"y": 25667
}
}
Property | Type | Description | Possible values |
---|---|---|---|
value | Object | An object with x/y coordinates of the ground point | For both x and y, valid values are any unsigned 16-bit integer |
Metadata objects that can be attributed to a given tracked object based on the trackers, rules and channel configuration.
Indicates whether a tracked person is wearing an accessory. Accessory metadata is only available when the Deep Learning Skeleton Tracker and an Accessory Filter observable are used on the channel.
The state of an accessory has two possible values:
- present: the person has been evaluated and the accessory is detected.
- absent: the person has been evaluated and the accessory is not detected.
Example:
{
"typename": "vca.meta.data.object.Accessory",
"confidence": 0.6,
"class": "high_vis_vest",
"state": "absent"
}
Property | Type | Description | Possible values |
---|---|---|---|
class | String | The class of this accessory | A valid accessory class. See below for a list of classes |
confidence | Float | The model's confidence that the object class is correct | 0 - 1 |
state | String | The state of the accessory | Either present or absent |
Below is a list of possible accessory classes:
Class string | Description |
---|---|
high_vis_vest | Brightly coloured safety jacket, with reflective tape |
hard_hat | A single coloured safety helmet; does not include motorbike or bike helmets |
The estimated area of the tracked object. Area metadata is only available when the channel has been calibrated.
Example:
{
"typename": "vca.meta.data.object.Area",
"value": 3.289283514022827
}
Property | Type | Description | Possible values |
---|---|---|---|
value | Number | The estimated area of the object in square meters for the current frame | Float greater than 0 |
Describes a body part (skeletal joint) position and the detection confidence of the algorithm. Body parts with a confidence of 0 are considered to have an estimated position. Body Part metadata is only available when the Deep Learning Skeleton Tracker is used on the channel.
Example:
{
"position": {
"x": 43386,
"y": 3561
},
"confidence": 0.8080910444259644,
"typename": "vca.meta.data.pose.BodyPart"
}
Property | Type | Description | Possible values |
---|---|---|---|
position | Object | Position in x and y of the detected body part | Single point in terms of x and y |
confidence | Number | Detection and classification confidence of the detection algorithm | 0 - 1 |
The estimated position of the object on the calibration grid. x is the estimated distance (+/-) from the centre of the calibration grid in meters, where 0 is the centre of the grid. y is the estimated distance from the camera in meters, where 0 is the camera position. Calibrated position metadata is only available when the channel has been calibrated.
Example:
{
"value": {
"x": 0.7089886665344238,
"y": 5.038111686706543
},
"typename": "vca.meta.data.object.CalibratedPosition"
}
Property | Type | Description | Possible values |
---|---|---|---|
value | Object | An object with x/y coordinates of the calibrated position | For both x and y, Float |
The class name (e.g. person) provided by the calibration based classification algorithm used with the Object Tracker. The value is defined by the classification object for the Channel the object is detected on. The value will be computed for the object every frame and could change as the object properties change over time. Calibration classification metadata is only available when the channel has been calibrated.
Example:
{
"value": "Vehicle",
"typename": "vca.meta.data.classification.Name"
}
Property | Type | Description | Possible values |
---|---|---|---|
value | String | Classification name | A name as defined by the classification object of the channel |
The breakdown of pixel colours found in a given tracked object's bounding box. The number of colours a pixel can be grouped into is fixed; however, the number of colours returned in the colour signature metadata object may change. Colour Signature metadata is only available when colour_signature is enabled on the channel.
Example:
{
"typename": "vca.meta.data.ColourSignature",
"colours": [
{
"colour_name": "Black",
"colour_value": {
"r": 0,
"g": 0,
"b": 0
},
"proportion": 0.95555555820465088
},
{
"colour_name": "Brown",
"colour_value": {
"r": 150,
"g": 75,
"b": 0
},
"proportion": 0
},
{
"colour_name": "Grey",
"colour_value": {
"r": 100,
"g": 100,
"b": 100
},
"proportion": 0.029468599706888199
},
{
"colour_name": "Blue",
"colour_value": {
"r": 0,
"g": 0,
"b": 200
},
"proportion": 0
},
{
"colour_name": "Green",
"colour_value": {
"r": 0,
"g": 150,
"b": 0
},
"proportion": 0
},
{
"colour_name": "Cyan",
"colour_value": {
"r": 0,
"g": 255,
"b": 255
},
"proportion": 0
},
{
"colour_name": "Red",
"colour_value": {
"r": 255,
"g": 0,
"b": 0
},
"proportion": 0
},
{
"colour_name": "Magenta",
"colour_value": {
"r": 200,
"g": 0,
"b": 200
},
"proportion": 0
},
{
"colour_name": "Yellow",
"colour_value": {
"r": 255,
"g": 255,
"b": 0
},
"proportion": 0
},
{
"colour_name": "White",
"colour_value": {
"r": 255,
"g": 255,
"b": 255
},
"proportion": 0.014975845813751221
}
]
}
Property | Type | Description | Possible values |
---|---|---|---|
colours | Array | An array of colour names, RGB values and the proportion of pixels grouped into each colour category, for this object | A fixed array of colour metadata objects |
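As an illustration of consuming this metadata, the sketch below picks the dominant colour from a ColourSignature object. The object shape follows the example above; the helper name is hypothetical.

```python
# Minimal sketch: pick the dominant colour from a
# vca.meta.data.ColourSignature metadata object (shape as in the example).
signature = {
    "typename": "vca.meta.data.ColourSignature",
    "colours": [
        {"colour_name": "Black", "colour_value": {"r": 0, "g": 0, "b": 0}, "proportion": 0.9555},
        {"colour_name": "Grey",  "colour_value": {"r": 100, "g": 100, "b": 100}, "proportion": 0.0295},
        {"colour_name": "White", "colour_value": {"r": 255, "g": 255, "b": 255}, "proportion": 0.0150},
    ],
}

def dominant_colour(sig):
    """Return the colour name with the largest pixel proportion."""
    best = max(sig["colours"], key=lambda c: c["proportion"])
    return best["colour_name"]

print(dominant_colour(signature))  # Black
```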
The class name (e.g. person) and confidence provided by a classification algorithm (i.e. the Deep Learning Classifier). The available class names are listed below and are defined by the classification algorithm. The confidence value indicates how likely that classification is to be correct.
Class string | Description | Source |
---|---|---|
person | A person, or tracked object with a person present (e.g. bicycle) | DLF/DLOT/DLPT |
vehicle | A car, van, bus or truck | DLF |
background | Any object which is not included in the previous classes | DLF |
motorcycle | A motorcycle | DLOT |
bicycle | A bicycle | DLOT |
bus | A bus | DLOT |
car | A car | DLOT |
van | A van, including mini-vans, mini-buses and buses | DLOT |
truck | A truck, including lorries and commercial work vehicles | DLOT |
forklift | A forklift truck | DLOT |
bag | A backpack or holdall | DLOT |
hand | A hand | HOI |
object | An object held by a hand | HOI |
qr_code | A detected and decoded QR Code | QR Tracker |
As with any image based classification algorithm, there may be some overlap between objects that look similar. For example, a small commercial van may classify as a car in certain orientations.
Example:
{
"class": "person",
"confidence": 0.875215470790863,
"object_id": 22,
"typename": "vca.meta.data.classification.Confidence"
}
Property | Type | Description | Possible values |
---|---|---|---|
class | String | Classification name | One of a set list of names as defined by the classification algorithm |
confidence | Float | The model's confidence that the object class is correct | 0 - 1 |
object_id | Number | The id of this tracked object | An unsigned integer |
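Since there may be overlap between similar-looking classes, a client will typically apply a confidence threshold. A minimal sketch, with a hypothetical helper name and an assumed threshold of 0.5:

```python
# Hedged sketch: keep only classification metadata whose confidence clears a
# threshold; metadata shapes follow the Confidence example above.
meta = [
    {"typename": "vca.meta.data.classification.Confidence",
     "class": "person", "confidence": 0.875, "object_id": 22},
    {"typename": "vca.meta.data.classification.Confidence",
     "class": "bag", "confidence": 0.31, "object_id": 23},
]

def confident_classes(meta, threshold=0.5):
    """Return class names whose confidence meets the threshold."""
    return [m["class"] for m in meta
            if m["typename"] == "vca.meta.data.classification.Confidence"
            and m["confidence"] >= threshold]

print(confident_classes(meta))  # ['person']
```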
The amount of time the object has been in the zone matching the zone_id. If the object is in more than one zone simultaneously, multiple instances of this metadata will be present, one per zone.
{
"typename": "vca.meta.data.object.DwellTime",
"duration": 0,
"zone_id": 0
}
Property | Type | Description | Possible values |
---|---|---|---|
duration | Number | The amount of time (ms) that an object has been in the zone | An unsigned integer |
zone_id | Number | The ID of the corresponding zone the object is in | A valid zone ID |
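Since duration is reported in milliseconds, a common client-side use is a dwell threshold check. A minimal sketch, with a hypothetical helper and an assumed 10-second threshold:

```python
# Illustrative sketch: flag an object as loitering when its DwellTime
# duration (milliseconds) in a zone exceeds a threshold given in seconds.
dwell = {"typename": "vca.meta.data.object.DwellTime", "duration": 12500, "zone_id": 0}

def is_loitering(dwell_meta, threshold_s=10.0):
    """Convert the ms duration to seconds and compare against the threshold."""
    return dwell_meta["duration"] / 1000.0 > threshold_s

print(is_loitering(dwell))  # True
```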
An object containing the bounding box and Body Part information for a detected face. A face should only be present when the person is looking towards the camera.
The map property will return all body parts detected for the face object. The current implementation provides 5 possible body part types; any given map may contain fewer than 5. For a face to be added to the metadata, the left_eye, right_eye and nose must all be present with a confidence > 0. The list of detectable body parts is subject to change. Face metadata is only available when the Deep Learning Skeleton Tracker is used on the channel.
Below is a list of possible body parts:
Body Part key string |
---|
nose |
left_eye |
right_eye |
left_ear |
right_ear |
Example:
{
"outline": [
{
"x": 0,
"y": 26634
},
{
"x": 34476,
"y": 26634
},
{
"x": 34476,
"y": 23035
},
{
"x": 0,
"y": 23035
}
],
"parts": [
{
"key": "right_eye",
"value": {
"position": {
"x": 29733,
"y": 15315
},
"confidence": 0.459747314453125,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "left_eye",
"value": {
"position": {
"x": 30340,
"y": 15315
},
"confidence": 0.57958984375,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "left_ear",
"value": {
"position": {
"x": 31553,
"y": 15671
},
"confidence": 0.83154296875,
"typename": "vca.meta.data.pose.BodyPart"
}
}
],
"confidence": 0.6226463317871094,
"typename": "vca.meta.data.Face"
}
Property | Type | Description | Possible values |
---|---|---|---|
outline | Array of objects | An array of point objects | A point object array that has a minimum of two points |
map | Array | An array of body parts keyed against a fixed list of body part types | An array of body part metadata objects |
confidence | Number | Mean confidence value from all body part objects in map | 0 - 1 |
Please note the object_id will match the id value of the host "vca.meta.data.Object".
Defines if a tracked object is in a fallen state. Fall metadata is only available when the Deep Learning Skeleton Tracker and a Fall observable are used on the channel.
Example:
{
"typename": "vca.meta.data.object.Fall",
"confidence": 0.9571578502655029
}
Property | Type | Description | Possible values |
---|---|---|---|
confidence | Number | Classification confidence of the detection algorithm | 0 - 1 |
The estimated position of the object relative to the camera's geographic position and orientation. Geographic position metadata is only available when the channel has been calibrated and its latitude, longitude, elevation and orientation have been defined.
Example:
{
"value": {
"latitude": 56.42979431152344,
"longitude": -1.5424676537513733,
"elevation": 2
},
"typename": "vca.meta.data.object.GeoLocation"
}
Property | Type | Description | Possible values |
---|---|---|---|
value | Object | An object with latitude/longitude and elevation | For all, Float |
Defines if a tracked person has been identified as having their Hands Up. Hands Up metadata is only available when the Deep Learning Skeleton Tracker and a Hands Up observable are used on the channel.
Example:
{
"typename": "vca.meta.data.object.HandsUp",
"confidence": 0.9571578502655029
}
Property | Type | Description | Possible values |
---|---|---|---|
confidence | Number | Classification confidence of the detection algorithm | 0 - 1 |
An object containing the bounding box information for a detected head. Head metadata is only available when the Deep Learning People Tracker or Deep Learning Skeleton Tracker is used on the channel.
Example:
{
"outline": [
{
"x": 0,
"y": 26634
},
{
"x": 34476,
"y": 26634
},
{
"x": 34476,
"y": 23035
},
{
"x": 0,
"y": 23035
}
],
"typename": "vca.meta.data.Head"
}
Property | Type | Description | Possible values |
---|---|---|---|
outline | Array of objects | An array of point objects | A point object array that has a minimum of four points |
The estimated height of the tracked object. Height metadata is only available when the channel has been calibrated.
Example:
{
"typename": "vca.meta.data.object.Height",
"value": 3.743859052658081
}
Property | Type | Description | Possible values |
---|---|---|---|
value | Number | The estimated height of the object in meters for the current frame | Float greater than 0 |
Encoded image data from a variety of sources. Image data of type object is only available when the object_image enabled setting is true.
{
"typename": "vca.meta.data.Image",
"data": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAoHBwgHBgoICAgLCgoLDhgQDg0NDh0VFhEYIx8lJCIfIiEmKzcvJik0KSEiMEExNDk7Pj4+JS5ESUM8SDc9Pjv/2
wBDAQoLCw4NDhwQEBw7KCIoOzs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozs7Ozv/wAARCABNAFYDASIAAhEBAxEB/8QAHwAAAQUBAQE
BAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3O
Dk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6er
x8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVY
nLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcb
HyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwCVvBNjg+TdXER7EEH+lRjwnqUX+p8QTYHQOWP6ZrqMUAetCkxnGT+HNcPJm067x2mt1JP4k
Z/WsPVZF0OYQ6jo9issgyot5njYj1wGx+ldd4y1u50K1sJLQRh7iYqxkXIwK811dNQ1rWze3yxq05HMYwuB7VV7j1NKDxHpjShDb38APGUuFcZ+jL/Wr7XFoGwdZvLdiOB
PacY+qt/SrnhTwT4cv7GWXVL+fzlJAQOEH8ua62wjt5dOiWMxzxICikgNwOKkdji4GuNpFnr+nyn+7JM0RP8A32MfrUkcGv4Mkdql5z83kTJLj/vk11k2g6ZOxMthbnPXE
YH8qoy+DtHbJjgaJiMZjcjH50Wi9yb2ORuoJopPMnsbiBmGCZIGAB+tVRLEBt3gnsTXaReFZ7fmy1q7gYdMtkfpio5tH8Qf9Be2ux3W5tw2fxINHLEOY5RITnfuc5H96it
46JqkROdC0ufnho2Kf1FFHKgud3ilVCxwvBoHJ4FcN8RPE81mV0OwfZK4D3EyH5lU/wAFJFRV2cz8QNUutQ8TXFq8u63s22QqhyBx1yO9ZWnXE817BAzszEhY/Y56UiSII
gu3PfNQyxCQHHB7Yq3odKp6Hokev31nCE1HS7ZY0PlIxi25OO5HU1t+F7fydH3ZXEkrMFB+77V5jaxzQ2m15ZHjJyUL5XPriux+Hk7STanDvOFVHCk55PeoKnStG7O04pO
KWl20rnCRnntTDGD1qbbRigCHyx6UVNiii4EjyrbQyXDruSFC7D1AGa8Hvb19V1K51Bxg3EpfGeg7CvXvGl4+n+D9QkTBaZRCMnGA3BI968hsLC5v7kWlnEXkKE4A6AdzV
RNqa6lbzCHqzFKoXJ61QJKuQ3UEipFYkdadrnRGdjTN0GiCDtXT/D6fb4jkXdgTWxUjPUg5rjIRubGa6nwXNa2niG3kuZhEvzLuPQ5HelY0lUc42PT+wpRmnFcgEdD0PrS
YqbHnvRjeaKdikxzQIBRS0UgOQ+Jlww8MW8XGJbocjvgVlfC2MnWNQuc4EcAUceprN8U+JLvV9KjtJ1iwsofKjBpPAesxaNe3kkwVhLEqhWk255rWxpF6GX41sYNO8W39t
bRlId++MegNYqNXa+PGt9ZEOp2sIWSNdku2VX3D14riRwaELmLEcrJ0rWt541jEhHI9KxkRmGR0qzaybZdrDK1SQRqNbHdWXxNhsbKO2k01p3jXHmGXG704xVpPijbvhm0
KTHfZcf4ivOL63aPMyg+UT1PapLOw1C9XdZ2F1OiLl3jiJCj1J9KbSM3Jtnr2j+LdP1qbyFiktJz92OVgd30Irb214lpmpeTPHLGcyRtkZHIIr13Rdag1uyE8YCSKMSIeq
ms5RC5fxRS5FFRYZ45d6JfiTbKqoBwN6lf51DBotzuO14G7ffAxXpcr6hboT9uWQKOA8IP9axptStp2xd6TaTEHkqpQn8q1uI5RfC+r3B2wQROT/dnX/GmP4D8RjLDT1Ye
gnTP8666HStNv1LwQTWjAgfLNuH5EVBeTX+gTMIrzzlVc7ZIx/jQI4ybRNYsiUl0yeIDqxGR+fSoo7CUW0tzJLFH5ecozYJ+lejaZ4mkuh+9tVDDnKPgH8CDWpJp+matEV
u9PicPxyOR+VO4Hjsd9NgKpGOwIzXWWHjjxFbW1tZQzQfI4G1IyGkHTaexH6+9T+NvCWmaJbWc+nI0XnSbGUncPrWRZ3KaTdreC2juHQkKJRwOMdqTYF3xp4XNlKut6fF5
cFwA08C9IWPp7ZrL0HXbrRLiS6gfaAMMz8qfYjvXomi6yuu2jrcWihJBtdA+VYHt0rlL2W10LWJtOtNLsZGB3LPcReYVz6KTt/SqWoD7H4n6i24zaXBdN6glQPwFFXLHw7
/awa4nvWUtztSMAD6DoKKlpDP/Z",
"captured_timestamp": "2022-11-23T06:49:46.561735201Z",
"format": "data:image/jpeg;base64",
"image_type": "object"}
Property | Type | Description | Possible values |
---|---|---|---|
data | String | Encoded image data | Any String |
captured_timestamp | String | The timestamp when the image data was encoded | A valid ISO8601 timestamp |
format | String | The MIME type of the generated data | Any valid MIME type string (see table below) |
image_type | String | The VCA source of the image data | object |
format string | Description |
---|---|
data:image/jpeg;base64 | Base64 encoded JPEG |
data:image/png;base64 | Base64 encoded PNG |
data:image/svg+xml;base64 | Base64 encoded SVG |
data:image/gif;base64 | Base64 encoded GIF |
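A minimal sketch of turning the format/data pair back into raw image bytes. The tiny stand-in payload and the helper name are illustrative; a real message carries a full encoded image.

```python
import base64

# Hedged sketch: split an Image metadata "format"/"data" pair back into raw
# bytes. "format" is a data-URI-style prefix, e.g. "data:image/jpeg;base64".
def decode_image(image_meta):
    fmt = image_meta["format"]                    # e.g. "data:image/jpeg;base64"
    mime = fmt.split(":", 1)[1].split(";", 1)[0]  # -> "image/jpeg"
    raw = base64.b64decode(image_meta["data"])
    return mime, raw

# Tiny stand-in payload (a real message carries a full JPEG):
meta = {"format": "data:image/jpeg;base64",
        "data": base64.b64encode(b"\xff\xd8\xff").decode("ascii"),
        "image_type": "object"}
mime, raw = decode_image(meta)
print(mime, raw[:2])  # image/jpeg b'\xff\xd8'
```

The raw bytes can then be written straight to a file with the extension implied by the MIME type.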
The number of pixels in the tracked object's bounding box. Pixel count is computed as bounding box height x bounding box width. This value is not relative to coordinate_range_max; instead it is computed against the input channel's resolution.
Example:
{
"typename": "vca.meta.data.object.PixelCount",
"value": 36000
}
Property | Type | Description | Possible values |
---|---|---|---|
value | Number | Number of pixels in the bounding box | An unsigned integer |
A map of detected body parts (skeleton joints). The map property will return all body parts detected for the attached object. The current implementation provides 17 possible body part types; any given map may contain fewer than 17. The list of detectable body parts is subject to change. Pose metadata is only available when the Deep Learning Skeleton Tracker is used on the channel.
Below is a list of possible body parts:
Body Part key string |
---|
nose |
left_eye |
right_eye |
left_ear |
right_ear |
left_shoulder |
right_shoulder |
left_elbow |
right_elbow |
left_wrist |
right_wrist |
left_hip |
right_hip |
left_knee |
right_knee |
left_ankle |
right_ankle |
Example:
{
"map": [
{
"key": "nose",
"value": {
"position": {
"x": 43386,
"y": 3561
},
"confidence": 0.8080910444259644,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "right_shoulder",
"value": {
"position": {
"x": 41262,
"y": 4986
},
"confidence": 0.7354756593704224,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "right_elbow",
"value": {
"position": {
"x": 39745,
"y": 12109
},
"confidence": 0.7663553953170776,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "right_wrist",
"value": {
"position": {
"x": 38228,
"y": 18520
},
"confidence": 0.8059698939323425,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "left_shoulder",
"value": {
"position": {
"x": 46724,
"y": 6054
},
"confidence": 0.7997663617134094,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "left_elbow",
"value": {
"position": {
"x": 48241,
"y": 12822
},
"confidence": 0.7496799826622009,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "left_wrist",
"value": {
"position": {
"x": 48241,
"y": 18520
},
"confidence": 0.7150058746337891,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "right_hip",
"value": {
"position": {
"x": 41262,
"y": 18164
},
"confidence": 0.6697762608528137,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "right_knee",
"value": {
"position": {
"x": 41262,
"y": 26000
},
"confidence": 0.7108161449432373,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "right_ankle",
"value": {
"position": {
"x": 40049,
"y": 32767
},
"confidence": 0.5868609547615051,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "left_hip",
"value": {
"position": {
"x": 45207,
"y": 18520
},
"confidence": 0.689977765083313,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "left_knee",
"value": {
"position": {
"x": 44600,
"y": 25644
},
"confidence": 0.7125816345214844,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "left_ankle",
"value": {
"position": {
"x": 42476,
"y": 31698
},
"confidence": 0.6361902952194214,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "right_eye",
"value": {
"position": {
"x": 42779,
"y": 2849
},
"confidence": 0.7985154986381531,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "left_eye",
"value": {
"position": {
"x": 43993,
"y": 2849
},
"confidence": 0.8068280220031738,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "right_ear",
"value": {
"position": {
"x": 42172,
"y": 2493
},
"confidence": 0.33054694533348083,
"typename": "vca.meta.data.pose.BodyPart"
}
},
{
"key": "left_ear",
"value": {
"position": {
"x": 44903,
"y": 2849
},
"confidence": 0.6982870697975159,
"typename": "vca.meta.data.pose.BodyPart"
}
}
],
"confidence": 0.5487611293792725,
"typename": "vca.meta.data.Pose"
}
Property | Type | Description | Possible values |
---|---|---|---|
map | Array | An array of body parts keyed against a fixed list of body part types | An array of body part metadata objects |
confidence | Number | Mean confidence value from all body part objects in map | 0 - 1 |
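The relationship in the table above can be sketched as code: the Pose confidence is the mean of the per-body-part confidences in map. The confidence values below are illustrative.

```python
# Sketch of the relationship described above: the Pose "confidence" is the
# mean of the body-part confidences held in "map".
pose = {
    "typename": "vca.meta.data.Pose",
    "map": [
        {"key": "nose",      "value": {"confidence": 0.75, "typename": "vca.meta.data.pose.BodyPart"}},
        {"key": "left_eye",  "value": {"confidence": 0.50, "typename": "vca.meta.data.pose.BodyPart"}},
        {"key": "right_eye", "value": {"confidence": 0.25, "typename": "vca.meta.data.pose.BodyPart"}},
    ],
}

def mean_confidence(pose_meta):
    """Average the confidence of every body part in the map."""
    parts = [p["value"]["confidence"] for p in pose_meta["map"]]
    return sum(parts) / len(parts) if parts else 0.0

print(mean_confidence(pose))  # 0.5
```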
The generated feature vector of a tracked person object. The feature vector is generated at a defined interval. The vector is Base64 encoded and its decoded length is subject to change. The encoded vector is persisted in the object's metadata until a new feature vector is generated. REID features metadata is only available when generate_features is enabled on the channel.
Example:
{
"typename": "vca.meta.data.object.Features",
"features": "ACC9vABAbr0AAMK9AMAqPQAAWb0AIMO7AMD+PAAAtz0AYE09AGAmPQBgpbwAgMu8AADIvADgoTwA4Cs8AICnPQBATD0AIDw9AOA1vQCA0jwAIEy9AAAjP
QBACTwAgM+9AEA5vQCAnj0AwFI9AKBsPQCART0A4CU+ACAxPACAvjwAAKy9AKDUPQDADz0A4CQ9AOBGPQCAPT0AYG09AABkvQAgsT0AAIK9AEBvvQBADr4AAIQ9AKCgPQB
ARD0AwI29AACEPQBAeT0AoLg9AED9PADAIr0AADE+AIDTvQBAQbwAgPU8ACBqvABA6D0AQKK9AMDJPAAAwTwAwBU8AAA/vACAz70AgNO7AKCTPQBgAL4AwIk9AKCTPQBAp
D0AIKU9AACAvQDAoj0AQDy9ACCdPQBgDL0AoKK6AIBDPQBAkjsAoBY9AAAivQDgYT0A4Is7AMDMPQBA4T0AgIU9AOC0OwDgOT0AACm9AMAUvQBATLwAwPy9AGCgPABgrz
A4IG7AMAFvgCAsrwAwIY9AECNvQAgob0A4Jw9AKD/PQBAHL0AANo8ACBLPQAANL0AwH29AKCgPAAgBT0A4J08AABaPQCg/rsAIK+8AICIPQDgOz0AYNo9AMDbOwCAm7wAA
I67AOAUvQCAKz0AIA28AEAJvgDgpz0AIC69AKChPQDAo7oAYAK8AIC0PQAgD70AQOG9AOBfuwCgsjwAIIO9ACDEPACgxzoAoLg9ACCQPQDg1D0AQM49AOArvQAgWz0AgNW
8AOAyvQBgFr0AQG89AOCKPQAgtjwAwJm8AMCIvACApT0AAD89AEAUvQCg+DgAgJg7AKBKPQCgbbwAgJO9AEDPvQDARbwAgJ47AOAjPgDAA70AYKS8ACA5vQDgrD0AgEg9A
ADCPACAmD0AwK68AAACPgCgdrwAAHU8AMA+vQBggL0AwJe9AEB/PABgET4AQFe9AIClvQAA3b0AYF09AIALvQAAYT0AYF08AIDEvQBgAL0AwCg9AKCWvQDgzT0AgHE4ACA
5PQAgwTwAAGW9AIAbPQCAWD0AoEq9AEC8PACgpLwAQN08ACCvvAAgqDwAQA26ACAyOgCgLj0A4Es9AOC9ugAA4bwAAPU8AIBIvQDgfr0AwBS9AABJvQDg47wAgIO9AKCHP
QBgaT0AgBM+AOB/OwCAsz0AIOO9AGBtPQAgML0AwLo7AAA0vQBgfj0AgFs9AADvvQAAlTsAgIU8ACCkvQBg0boAwIq9AEAcvgCgCj0AQKk9AADFPQDgC70AgFS9ACDLuQA
gnbwA4Ba9AMDTvAAATz0AwJ49AGAAPQDgsj0AYH49AGAJPgDgKD0AwFw9AKCwPACg/TsAABG9AOCxvA=="
}
Property | Type | Description | Possible values |
---|---|---|---|
features | String | Base64 encoded REID feature vector | Base64 string |
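REID vectors are typically compared with cosine similarity to re-identify a person across cameras. The sketch below is illustrative only: it ASSUMES the decoded Base64 payload is a packed array of little-endian 32-bit floats, which this document does not specify (the decoded length is explicitly subject to change).

```python
import base64
import math
import struct

# ASSUMPTION (not stated in this document): the decoded Base64 payload is a
# packed array of little-endian 32-bit floats. The real binary layout and
# decoded length are subject to change.
def decode_features(b64):
    raw = base64.b64decode(b64)
    return struct.unpack("<%df" % (len(raw) // 4), raw)

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Round-trip a known vector through the assumed encoding:
b64 = base64.b64encode(struct.pack("<4f", 1.0, 0.0, 0.0, 0.0)).decode("ascii")
a = decode_features(b64)
print(cosine(a, (1.0, 0.0, 0.0, 0.0)))  # 1.0
```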
The breakdown of pixel colours found in defined areas of specific object types. The metadata for a segment of an object is linked to its classification (provided by vca.meta.data.classification.Confidence). For example, the legs and torso segments are only computed for objects classified as person. The current list of available segments and their associated class string is provided below. Segmented Colour Signature metadata is only available when colour_signature is enabled on the channel.
Segment | Description | Class string |
---|---|---|
legs | The lower half of a person | person |
torso | The upper half of a person, including arms | person |
Example:
{
"typename": "vca.meta.data.SegmentedColourSignature",
"segments": {
"legs": {
"typename": "vca.meta.data.ColourSignature",
"colours": [
{
"colour_name": "Black",
"colour_value": {
"b": 0,
"g": 0,
"r": 0
},
"proportion": 0.1927710771560669
},
{
"colour_name": "Brown",
"colour_value": {
"b": 0,
"g": 75,
"r": 150
},
"proportion": 0.28915661573410034
},
{
"colour_name": "Grey",
"colour_value": {
"b": 100,
"g": 100,
"r": 100
},
"proportion": 0.46987950801849365
}
]
},
"torso": {
"typename": "vca.meta.data.ColourSignature"
"colours": [
{
"colour_name": "Brown",
"colour_value": {
"b": 0,
"g": 75,
"r": 150
},
"proportion": 0.5585585832595825
},
{
"colour_name": "Grey",
"colour_value": {
"b": 100,
"g": 100,
"r": 100
},
"proportion": 0.30630630254745483
},
{
"colour_name": "Green",
"colour_value": {
"b": 0,
"g": 150,
"r": 0
},
"proportion": 0.06306306272745132
},
{
"colour_name": "Yellow",
"colour_value": {
"b": 0,
"g": 255,
"r": 255
},
"proportion": 0.07207207381725311
}
]
}
}
}
Property | Type | Description | Possible values |
---|---|---|---|
segments | Object | ColourSignature objects keyed by segment name | Any segments relevant to the object classification |
The estimated speed of the tracked object. Speed metadata is only available when the channel has been calibrated.
Example:
{
"typename": "vca.meta.data.object.Speed",
"value": 3.6601788997650146
}
Property | Type | Description | Possible values |
---|---|---|---|
value | Number | The estimated speed of the object in km/h for the current frame | Float greater than 0 |
Present when a tracked object has some detected/associated text. Text metadata is only available when the QR Code Tracker is used on the channel.
{
"typename": "vca.meta.data.Text",
"value": "https://vcatechnology.com/",
"category": "qr_code"
}
Property | Type | Description | Possible values |
---|---|---|---|
value | String | Text | Any string (can be empty) |
category | String | Source of the text object | qr_code |
An object containing the tracking history of a tracked object in the form of the ground points keyed by timestamp.
Example:
{
"map": [
{
"key": "1970-01-01T01:00:00.000500000+01:00",
"value": {
"typename": "vca.meta.data.object.GroundPoint",
"value": {
"x": 17900,
"y": 33646
}
}
},
{
"key": "1970-01-01T01:00:01.000000000+01:00",
"value": {
"typename": "vca.meta.data.object.GroundPoint",
"value": {
"x": 17506,
"y": 33658
}
}
}
],
"object_id": 147,
"typename": "vca.meta.data.object.History"
}
Property | Type | Description | Possible values |
---|---|---|---|
map | Array | An array of ground point objects keyed by timestamp | An array of valid ground point objects |
object_id | Number | The id of this tracked object | An unsigned integer |
Please note the object_id will match the id value of the host "vca.meta.data.Object".
A channel status object is a list of objects that represent how fast the given channel is processing frames and running algorithms.
Profiling metadata containing the processing time in ms for a given part of the VCAcore pipeline, defined by its identifier.
Example:
{
"typename": "vca.meta.data.ProcessingTime",
"time": 0,
"identifier": "Analytics"
}
Property | Type | Description | Possible values |
---|---|---|---|
time | Number | Amount of time in milliseconds | An unsigned integer |
identifier | String | Identifies the aspect of the VCAcore pipeline the time applies to | See table below |
identifier string | Description | Notes |
---|---|---|
vca.algorithm.DLPeopleTracker | Inference speed for a single frame of DLPT | |
vca.algorithm.DLSkeletonTracker | Inference speed for a single frame of DLST | |
vca.algorithm.DLClassifier | Inference speed for a batch of DLF samples | |
vca.algorithm.DLObjectTracker | Inference speed for a single frame of DLOT | |
vca.algorithm.DLThermalTracker | Inference speed for a single frame of DLTT | |
vca.algorithm.DLFisheyeTracker | Inference speed for a single frame of DLFT | |
vca.algorithm.HandObject | Inference speed for a single frame of the HOI Tracker | |
vca.algorithm.QRCodeTracker | Inference speed for a single frame of the QR Code Tracker | |
vca.algorithm.AggressiveBehaviour | Inference speed for a single frame of the Aggressive Behaviour algorithm | Runs in addition to a tracker |
vca.algorithm.FallDetector | Inference speed for a single frame of the Fall detection algorithm | Runs in addition to a tracker |
vca.algorithm.PersonReId | Inference speed for a single frame of the REID detection algorithm | Runs in addition to a tracker |
vca.algorithm.dl_accessory_detector.Torso | Inference speed for a single frame of the Torso Accessory detection algorithm | Runs in addition to a tracker |
vca.algorithm.dl_accessory_detector.Head | Inference speed for a single frame of the Head Accessory detection algorithm | Runs in addition to a tracker |
vca.algorithm.FaceDetect | Inference speed for a single frame of the Face detection algorithm | Runs in addition to a tracker |
Analytics | Time to run the analytics pipeline including DL inference, rules etc. | |
To profile the performance of the VCAcore pipeline, the ProcessingTime data types are provided. There is some overlap between what the various identifiers measure, and care must be taken when interpreting the data.
A channel of VCAcore can broadly be split into two parts: decoding the input, and processing the analytics and rules on a decoded frame (reported with Analytics).
Analytics relates to the time taken from a frame entering the analytics pipeline to when all algorithms and rules have been processed. If a DL algorithm is run, then the vca.algorithm.* value makes up part of the full Analytics time.
The Analytics value can be considered the pipeline processing time, assuming that the decode speed remains faster than the input frame rate. VCAcore caps the analytics pipeline at 15fps. As such, an optimal value for Analytics is 66ms or lower. If greater than 66ms, the output frame rate of an analytics channel will drop. If lower than 66ms, and both decoding and the selected vca.algorithm.* are also lower, there is additional capacity on the host machine to run more channels.
Monitoring a given vca.algorithm.* is useful to ensure models and/or GPU performance are not a bottleneck.
Input video information.
Example:
{
"typename": "vca.meta.data.VideoInfo",
"resolution": {
"width": 720,
"height": 480
},
"frame_rate_n": 15,
"frame_rate_d": 1,
"decoder": "avdec_mpeg4-10"
}
Property | Type | Description | Possible values |
---|---|---|---|
resolution | Object | Object containing a video width and height | Object with a width and height containing an unsigned integer |
frame_rate_n | Number | Frame rate numerator | An unsigned integer |
frame_rate_d | Number | Frame rate denominator | An unsigned integer |
decoder | String | The GStreamer decoder used to decode the frame | Limited values defined by GStreamer |
note: To calculate channel frame rate, both numerator and denominator must be used: frame_rate_n / frame_rate_d = channel_fps
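The note above as code, using the VideoInfo fields from the example (the helper name is illustrative):

```python
# The channel frame rate is the frame_rate_n / frame_rate_d pair, as per the
# note above.
info = {"typename": "vca.meta.data.VideoInfo",
        "frame_rate_n": 15, "frame_rate_d": 1}

def channel_fps(video_info):
    return video_info["frame_rate_n"] / video_info["frame_rate_d"]

print(channel_fps(info))  # 15.0
```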