from datumaro.components.dataset import Dataset
dataset = Dataset.import_from('directory/path/', 'image_dir')
This will search for images in the directory recursively and add
them as dataset entries with names like <subdir1>/<subsubdir1>/<image_name1>.
The list of formats matches the list of supported image formats in OpenCV.
After being added to a project, images can be split into subsets, renamed
with transformations, filtered, joined with existing annotations, etc.
To use a video as an input, one should either create an Extractor plugin,
which splits a video into frames, or split the video manually and import images.
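For the manual route, a minimal sketch could look like the following (the file names, paths, and frame naming are illustrative; OpenCV is used here only because the supported image formats match it):
# Illustrative only: dump video frames with OpenCV, then import the
# resulting directory as an 'image_dir' source.
import os
import cv2
from datumaro.components.dataset import Dataset

def extract_frames(video_path, output_dir):
    os.makedirs(output_dir, exist_ok=True)
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        success, frame = capture.read()
        if not success:
            break
        cv2.imwrite(os.path.join(output_dir, 'frame_%06d.jpg' % index), frame)
        index += 1
    capture.release()

extract_frames('video.mp4', 'frames/')
dataset = Dataset.import_from('frames/', 'image_dir')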
5 - Command line workflow
The key object is a project, so most CLI commands operate on projects.
However, there are a few commands that operate on datasets directly.
A project is a combination of a project’s own dataset, a number of
external data sources and an environment.
An empty Project can be created with the project create command;
an existing dataset can be imported with the project import command.
A typical way to obtain projects is to export tasks in CVAT UI.
If you want to interact with models, you need to add them to the project first.
Note: the command invocation syntax is subject to change;
always refer to the command's --help output.
Available CLI commands:
6.1 - Convert datasets
This command converts a dataset from one format into another.
In fact, it is a combination of project import and project export
and just provides a simpler way to obtain the same result when no extra options
are needed. A list of supported formats can be found in the --help output of
this command.
Usage:
datum convert --help
datum convert \
-i <input path>\
-if <input format>\
-o <output path>\
-f <output format>\
-- [extra parameters for output format]
Example: convert a VOC-like dataset to a COCO-like one:
datum convert --input-format voc --input-path <path/to/voc/>\
--output-format coco
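The same conversion can also be done from Python; below is a minimal sketch using the Dataset class shown earlier (the paths are illustrative, and it is assumed that Dataset.export accepts a target directory and a format name):
from datumaro.components.dataset import Dataset

# load a VOC-like dataset and write it out in COCO format
dataset = Dataset.import_from('path/to/voc/', 'voc')
dataset.export('path/to/output/', 'coco')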
6.2 - Create project
The command creates an empty project. Once a Project is created, there are
a few options to interact with it.
Usage:
datum create --help
datum create \
-o <project_dir>
Example: create an empty project my_dataset
datum create -o my_dataset/
6.3 - Add and remove data
A Project can contain a number of external Data Sources. Each Data Source
describes a way to produce dataset items. A Project combines dataset items from
all the sources and its own dataset into one composite dataset. You can manage
project sources by commands in the source command line context.
Datasets come in a wide variety of formats. Each dataset
format defines its own data structure and rules on how to
interpret the data. For example, the following data structure
is used in COCO format:
/dataset/
- /images/<id>.jpg
- /annotations/
Supported formats are listed in the command help. Check extending tips
for information on extra format support.
Usage:
datum add --help
datum remove --help
datum add \
path <path>\
-p <project dir>\
-f <format>\
-n <name>
datum remove \
-p <project dir>\
-n <name>
Example: create a project from a bunch of different annotations and images,
and generate a TFRecord dataset for the TF Detection API for model training
datum create
# 'default' is the name of the subset below
datum add path <path/to/coco/instances_default.json> -f coco_instances
datum add path <path/to/cvat/default.xml> -f cvat
datum add path <path/to/voc> -f voc_detection
datum add path <path/to/datumaro/default.json> -f datumaro
datum add path <path/to/images/dir> -f image_dir
datum export -f tf_detection_api
6.4 - Filter project
This command allows creating a sub-Project from a Project. The new project
includes only items satisfying some condition. XPath
is used as a query format.
There are several filtering modes available (-m/--mode parameter).
Supported modes:
i, items
a, annotations
i+a, a+i, items+annotations, annotations+items
When filtering annotations, use the items+annotations
mode to specify that dataset items without annotations should be
removed. To select an annotation, write an XPath that
returns annotation elements (see examples).
Usage:
datum filter --help
datum filter \
-p <project dir>\
-e '<xpath filter expression>'
Example: extract a dataset with only the images whose width < height
datum filter \
-p test_project \
-e '/item[image/width < image/height]'
Example: extract a dataset with only the images of the train subset.
datum project filter \
-p test_project \
-e '/item[subset="train"]'
Example: extract a dataset with only large annotations of class cat and any
non-persons
datum filter \
-p test_project \
--mode annotations -e '/item/annotation[(label="cat" and area > 99.5) or label!="person"]'
Example: extract a dataset with only occluded annotations, remove empty images
datum filter \
-p test_project \
-m i+a -e '/item/annotation[occluded="True"]'
Item representations can be printed with the --dry-run parameter.
The merge command merges items from two or more projects and checks annotations for
errors.
Spatial annotations are compared by distance and intersected, labels and
attributes are selected by voting.
Merge conflicts, missing items and annotations, and other errors are saved into a .json file.
Models are registered in a project with the model add command. A model consists of
a graph description and weights. There is also a script used to convert model
outputs to internal data structures.
datum create
datum model add \
-n <model_name> -l open_vino -- \
-d <path_to_xml> -w <path_to_bin> -i <path_to_interpretation_script>
Interpretation script for an OpenVINO detection model (convert.py):
You can find OpenVINO model interpreter samples in
datumaro/plugins/openvino/samples (instruction).
from datumaro.components.extractor import *

max_det = 10
conf_thresh = 0.1

def process_outputs(inputs, outputs):
    # inputs = model input, array of images, shape = (N, C, H, W)
    # outputs = model output, shape = (N, 1, K, 7)
    # results = conversion result, [ [ Annotation, ... ], ... ]
    results = []
    for input, output in zip(inputs, outputs):
        input_height, input_width = input.shape[:2]
        detections = output[0]
        image_results = []
        for i, det in enumerate(detections):
            label = int(det[1])
            conf = float(det[2])
            if conf <= conf_thresh:
                continue

            x = max(int(det[3] * input_width), 0)
            y = max(int(det[4] * input_height), 0)
            w = min(int(det[5] * input_width - x), input_width)
            h = min(int(det[6] * input_height - y), input_height)
            image_results.append(Bbox(x, y, w, h,
                label=label, attributes={'score': conf}))

        results.append(image_results[:max_det])

    return results

def get_categories():
    # Optionally, provide output categories - label map etc.
    # Example:
    label_categories = LabelCategories()
    label_categories.add('person')
    label_categories.add('car')
    return { AnnotationType.label: label_categories }
6.14 - Run inference
This command applies a model to dataset images and produces a new project.
Usage:
datum model run --help
datum model run \
-p <project dir>\
-m <model_name>\
-o <save_dir>
Example: launch inference on a dataset
datum import <...>
datum model add mymodel <...>
datum model run -m mymodel -o inference
6.15 - Run inference explanation
Runs an explainable AI algorithm for a model.
This tool is supposed to help an AI developer to debug a model and a dataset.
Basically, it executes inference and tries to find problems in the trained
model - determine decision boundaries and belief intervals for the classifier.
Currently, the only available algorithm is RISE (article),
which runs inference and then re-runs a model multiple times on each
image to produce a heatmap of activations for each output of the
first inference. As a result, we obtain a few heatmaps, which
show how image pixels affected the inference result. This algorithm doesn’t
require any special information about the model, but it requires the model to
return all the outputs and confidences. The algorithm only supports
classification and detection models.
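To make the idea more concrete, here is a rough sketch of the RISE principle (this is not Datumaro's implementation; the function name and score_fn, a callable returning the model's confidence for the class of interest, are illustrative):
import numpy as np

def rise_saliency(image, score_fn, n_samples=1000, cell_size=16, p_keep=0.5):
    # image: (H, W, C) array; score_fn: confidence of the target class
    h, w = image.shape[:2]
    saliency = np.zeros((h, w), dtype=np.float32)
    for _ in range(n_samples):
        # coarse random occlusion pattern, upsampled to the image size
        grid = np.random.rand(h // cell_size + 1, w // cell_size + 1) < p_keep
        mask = np.kron(grid, np.ones((cell_size, cell_size)))[:h, :w]
        # pixels kept by masks that preserve a high score are important
        saliency += score_fn(image * mask[..., np.newaxis]) * mask
    return saliency / n_samples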
The following use cases are available:
RISE for classification
RISE for object detection
Usage:
datum explain --help
datum explain \
-m <model_name>\
-o <save_dir>\
-t <target> \
<method> \
<method_params>
Example: run inference explanation on a single image with visualization
datum create <...>
datum model add mymodel <...>
datum explain -t image.png -m mymodel \
rise --max-samples 1000 --progressive
Note: this algorithm requires the model to return
all (or a reasonable amount of) the outputs and confidences unfiltered,
i.e. all the Label annotations for classification models and
all the Bboxes for detection models.
You can find examples of the expected model outputs in tests/test_RISE.py
For OpenVINO models the output processing script would look like this:
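A minimal sketch of such a script, assuming the model returns one vector of per-class confidences per image (compare with the detection example above; the exact output shape depends on the model):
from datumaro.components.extractor import *

def process_outputs(inputs, outputs):
    # inputs = model input, array of images
    # outputs = model output, per-class confidences, shape = (N, n_classes)
    # results = conversion result, [ [ Annotation, ... ], ... ]
    results = []
    for input, output in zip(inputs, outputs):
        confs = output.reshape(-1)
        image_results = [Label(label=int(label), attributes={'score': float(conf)})
            for label, conf in enumerate(confs)]
        results.append(image_results)
    return results

# get_categories() can be provided the same way as in the detection example above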
Dataset items can be modified with the transform command.
Example: rename dataset items by a regular expression.
Replace pattern with replacement:
datum transform -t rename -- -e '|pattern|replacement|'
Remove frame_ from item ids:
datum transform -t rename -- -e '|frame_(\d+)|\\1|'
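As a plain-Python illustration of what the second expression does (this uses only the standard re module, not a Datumaro API):
import re

# 'frame_(\d+)' matches ids like 'frame_000123'; '\1' keeps only the digits
print(re.sub(r'frame_(\d+)', r'\1', 'frame_000123'))  # prints: 000123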
Example: sample a given number of dataset items with a sampling method chosen
by the user, and divide them into sampled and unsampled subsets.
There are five sampling methods for the -m option:
topk: return the k items with the highest uncertainty
lowk: return the k items with the lowest uncertainty
randk: return k random items
mixk: return half of the items with the topk method and the rest with the lowk method
randtopk: first select 3 times k items randomly, then return the topk
among them (see the sketch below).
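A rough sketch of the randtopk logic (illustrative only; here scores stands for the per-item uncertainty values produced by the model):
import random

def randtopk(scores, k):
    # randomly pre-select 3*k candidate indices, then keep the k most uncertain
    candidates = random.sample(range(len(scores)), min(3 * k, len(scores)))
    return sorted(candidates, key=lambda i: scores[i], reverse=True)[:k]

print(randtopk([0.1, 0.9, 0.4, 0.8, 0.2, 0.7], k=2))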
Example: limit the number of outputs to 100 after NDR (near-duplicated image removal).
There are two methods for the NDR -e option:
random: sample from the removed data randomly
similarity: sample from the removed data in ascending order of similarity
There are two methods for the NDR -u option:
uniform: sample data with a uniform distribution
inverse: sample data with the reciprocal of the number
datum transform -t ndr -- \
-w train \
-a gradient \
-k 100 \
-e random \
-u uniform
7 - Extending
There are a few ways to extend and customize Datumaro's behavior, all of them
based on plugins. Check our contribution guide for
details on plugin implementation. In general, a plugin is a Python code file.
It must be put into a plugin directory:
<project_dir>/.datumaro/plugins for project-specific plugins
<datumaro_dir>/plugins for global plugins
Built-in plugins
Datumaro provides several builtin plugins. Plugins can have dependencies,
which need to be installed separately.
TensorFlow
The plugin provides support of TensorFlow Detection API format, which includes
boxes and masks. It depends on TensorFlow, which can be installed with pip:
pip install tensorflow
# or
pip install tensorflow-gpu
# or
pip install datumaro[tf]
# or
pip install datumaro[tf-gpu]
Accuracy Checker
This plugin allows using Accuracy Checker
to launch deep learning models from various frameworks
(Caffe, MxNet, PyTorch, OpenVINO, …) through Accuracy Checker’s API.
The plugin depends on Accuracy Checker, which can be installed with pip.
OpenVINO™
This plugin provides support for model inference with OpenVINO™.
The plugin depends on the OpenVINO™ Toolkit, which can be installed by
following these instructions
Dataset Formats
Dataset reading is supported by Extractors and Importers.
An Extractor produces a list of dataset items corresponding
to the dataset. An Importer creates a project from the data source location.
It is possible to add custom Extractors and Importers. To do this, you need
to put Extractor and Importer implementation scripts into a plugin directory.
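A minimal Extractor sketch (the format name, file layout, and labels here are made up; a real plugin would build items from the files at the source path and usually also provide an Importer that detects the format):
from datumaro.components.extractor import (AnnotationType, DatasetItem,
    Extractor, Label, LabelCategories)

class MyFormatExtractor(Extractor):
    def __init__(self, path):
        super().__init__()
        self._path = path
        self._label_categories = LabelCategories()
        self._label_categories.add('cat')
        self._label_categories.add('dog')

    def __iter__(self):
        # a real extractor would build items from the files under self._path
        yield DatasetItem(id='sample_1', annotations=[Label(0)])
        yield DatasetItem(id='sample_2', annotations=[Label(1)])

    def categories(self):
        return { AnnotationType.label: self._label_categories }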
Dataset writing is supported by Converters.
A Converter produces a dataset of a specific format from dataset items.
It is possible to add custom Converters. To do this, you need to put a Converter
implementation script into a plugin directory.
Dataset Conversions (“Transforms”)
A Transform is a function for altering a dataset and producing a new one.
It can update dataset items, annotations, classes, and other properties.
A list of available transforms for dataset conversions can be extended by
adding a Transform implementation script into a plugin directory.
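A minimal Transform sketch (assuming the Transform base class calls transform_item for each dataset item, as the built-in transforms in datumaro/plugins/transforms.py do; the id prefix is just an example, and CLI argument handling is omitted):
from datumaro.components.extractor import DatasetItem, Transform

class AddIdPrefix(Transform):
    # prepends 'my_' to every item id, keeping everything else intact
    def transform_item(self, item):
        return DatasetItem(id='my_' + item.id, subset=item.subset,
            image=item.image, annotations=item.annotations)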
Model launchers
A list of available launchers for model execution can be extended by adding
a Launcher implementation script into a plugin directory.