Model Configuration
This guide explains the Models Configuration dialog in detail — how to define indexes (models), choose their purpose, configure categories and attributes, set up key points, and control the full training pipeline.
Models are persisted as “indexes” in the project’s database. There can be multiple indexes, and one of them can be designated as the primary entry point for analysis.

Overview
- Open the Models Configuration dialog to manage all models for the current project.
- The UI uses a two‑pane layout:
  - Left: the list of models (indexes), with an option to add/remove.
  - Right: tabs to configure the selected model.
- Tabs for a model:
  - Configuration (always)
  - Inference (always)
  - Categories (Classify purpose only)
  - Key points (when enabled)
  - Attributes (always)
  - Training (always)
Saving validates the full set of models, then updates indexes in the project. Canceling with unsaved changes prompts for confirmation.
Model List
- Add new: creates a model with sensible defaults (Classify purpose, multiple categories, pre‑trained on COCO).
- Delete: removes the selected model (after confirmation).
- Primary: the primary model is the entry point for analysis and stores bundles. Only one can be primary at a time.
- Validation indicators: items in the list are highlighted if they have validation errors.
Configuration Tab
The Configuration tab contains two sections: General and Base.
General
- Name: required and must be unique.
- Primary: marks this model as the analysis entry point.
- Input width / height: required. Defaults derive from the selected base model and update when base model changes.
- Description: used by the LLM to decide when to use this model during search/analysis.
Base
- Purpose: what the model does.
  - Locate: detect object locations (bounding boxes and/or key points).
  - Classify: classify images (optionally also locate boxes and/or key points).
  - Identify: identify previously located objects (no boxes/key points on its own).
- Accuracy: a coarse speed/accuracy trade‑off slider. Higher accuracy generally favors larger, slower backbones.
- Bounding boxes: whether the model outputs bounding boxes (disabled for Identify).
- Key points: whether the model outputs key points (disabled for Identify).
- Multiple categories: Classify only; allows an image to match more than one category (multi‑label classification).
- Based on: pre‑training source.
  - None: train from scratch.
  - COCO 2017: use COCO as a base. When selected, categories and, if applicable, key point labels are auto‑seeded with COCO defaults; if you had custom lists, you’ll be prompted to replace them.
- Input sizes auto-sync with the selected base model while you’re using defaults.
- The platform selects an underlying backbone based on purpose, boxes/key points, and accuracy.
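As an illustration of how that selection might work, here is a minimal sketch. The backbone names and the accuracy scale are hypothetical, not the platform's actual internal table:

```python
# Illustrative sketch only: the platform's real backbone table is internal.
# Purpose, output heads, and the accuracy slider narrow the choice; higher
# accuracy favors a larger/slower backbone.

def pick_backbone(purpose: str, boxes: bool, key_points: bool, accuracy: int) -> str:
    """Return a hypothetical backbone name for the given settings.

    accuracy: coarse slider value, 0 (fastest) .. 2 (most accurate).
    """
    if purpose == "identify":
        # Identify models embed crops; no box/key point heads.
        return ["resnet18", "resnet34", "resnet50"][accuracy]
    if boxes or key_points:
        # Detection-style heads (CenterNet-like) sit on a ResNet backbone.
        return ["resnet18_fpn", "resnet34_fpn", "resnet50_fpn"][accuracy]
    # Plain classification.
    return ["resnet18", "resnet34", "resnet50"][accuracy]

print(pick_backbone("classify", boxes=True, key_points=False, accuracy=2))
```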
Categories Tab (Classify)
Define the list of categories (classes) the model can detect.
- Each category has a numeric value and a display name.
- Add/remove categories inline.
- If “Based on = COCO 2017,” the default COCO category list is preloaded.
- Category attributes (see Attributes tab) can be declared per category.
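For illustration, a category list like the one this tab edits could be represented as value/name pairs; the three values below happen to match the first COCO 2017 category IDs:

```python
# Illustrative only: categories as numeric value / display name pairs.
categories = [
    {"value": 1, "name": "person"},
    {"value": 2, "name": "bicycle"},
    {"value": 3, "name": "car"},
]
```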
Key Points Tab
Define key point labels and optionally which categories produce key points.
- Key point labels: ordered list of labels; the length defines how many key points the model predicts.
- Categories that detect key points (Classify): pick which categories output key points.
- Switching the base model to COCO 2017 offers a default COCO‑style key point label list.
Attributes Tab
Attributes are extra values computed for each detected object or, for Identify models, for the identified instance. Two groups:
- Shared: attributes applied across the whole model.
- Per‑category: attributes scoped to a specific category (only for Classify models).
Supported attribute types
- Color: requires “Number of colors” and “Max colors.”
- Another model: delegates computation to another model in the project. Includes clip/align options (see below).
- Identification: like “Another model,” but constrained to models with purpose Identify.
- Angle: compute pitch/yaw/roll from 4 key points (top, bottom, left, right). Requires a comma‑separated list of 4 indices.
- Bearing: compute direction of gaze from 5 key points (center, left, right, bottom, top). Requires 5 indices.
- Adjacent angle: compute angle between 2 lines (X1,Y1 and X2,Y2) selected from key points (4 indices via a helper selector).
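To make the geometry concrete, here is a sketch of how an Adjacent angle attribute might be computed from 4 key point indices. The function name and index layout are assumptions, not the product's implementation:

```python
import math

def adjacent_angle(key_points, idx):
    """Angle in degrees between line P[idx[0]]->P[idx[1]] and line
    P[idx[2]]->P[idx[3]] (the X1, Y1, X2, Y2 slots in the helper selector).

    key_points: list of (x, y) tuples. Illustrative only.
    """
    (x0, y0), (x1, y1) = key_points[idx[0]], key_points[idx[1]]
    (x2, y2), (x3, y3) = key_points[idx[2]], key_points[idx[3]]
    a1 = math.atan2(y1 - y0, x1 - x0)
    a2 = math.atan2(y3 - y2, x3 - x2)
    deg = math.degrees(a2 - a1) % 360.0
    return min(deg, 360.0 - deg)  # fold into [0, 180]

pts = [(0, 0), (1, 0), (0, 0), (0, 1)]
print(adjacent_angle(pts, [0, 1, 2, 3]))  # → 90.0
```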
Clipping and key-point alignment
These options apply to attributes that run another model.
- Clip: crop to the bounding box, optionally with a margin.
- Align with key points: rotate/align using 4 selected key points.
- Constraints enforced by validation:
  - You can’t clip when the source model doesn’t output boxes.
  - You can’t align when the source model doesn’t output key points.
  - All 4 align points must be selected when aligning.
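A minimal sketch of these constraints as validation code (function and message names are illustrative):

```python
def validate_clip_align(source_has_boxes, source_has_key_points,
                        clip, align, align_points):
    """Return a list of validation errors for an attribute's clip/align
    settings. Mirrors the constraints above; names are illustrative."""
    errors = []
    if clip and not source_has_boxes:
        errors.append("Cannot clip: source model does not output boxes.")
    if align and not source_has_key_points:
        errors.append("Cannot align: source model does not output key points.")
    if align and len([p for p in align_points if p is not None]) != 4:
        errors.append("All 4 align points must be selected.")
    return errors

print(validate_clip_align(False, True, clip=True, align=False, align_points=[]))
```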
Training Tab
Training covers dataset preparation, augmentations, core optimizer settings, and advanced CenterNet/ResNet parameters. Some parameters are only relevant when boxes or key points are enabled.
Data preparation and splits
- Prepare data: when checked, prior train/validation splits are discarded and a new split is made. If validations don’t exist yet, this is automatically enabled.
- Percentage training data: fraction for training; the remainder is used for validation.
- Level categories: balance categories by synthesizing additional samples using augmentation.
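A sketch of how the split plus category leveling could work, assuming leveling synthesizes augmented samples until every category matches the largest one in the training split (names and formula are illustrative):

```python
import random
from collections import Counter

def split_and_level(samples, pct_training=0.8, level=True, seed=0):
    """Illustrative split + leveling, not the product's code.

    samples: list of (sample_id, category) pairs.
    Returns (train, val, extra_needed), where extra_needed maps each
    under-represented category to how many augmented copies would be
    synthesized to match the largest category.
    """
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * pct_training)
    train, val = shuffled[:cut], shuffled[cut:]
    extra = {}
    if level and train:
        counts = Counter(cat for _, cat in train)
        target = max(counts.values())
        extra = {cat: target - n for cat, n in counts.items() if n < target}
    return train, val, extra
```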
Clipping and alignment for training
- Same clip/align options as in attributes (margin, align points). Subject to the same constraints (needs boxes/key points).
Data augmentation
Toggle and parameterize any combination of:
- Rotate (max degrees)
- Scale (max %)
- Flip (horizontal/vertical)
- Translate X/Y (max %)
- Brightness/Contrast/Saturation/Hue (max %)
- Nr of combinations: how many augmentations to combine at once per image (computed upper bound depends on how many are active).
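One plausible reading of the computed upper bound is the number of ways to pick that many of the active augmentations; this sketch assumes exactly that and may differ from the dialog's actual formula:

```python
import math

def max_combinations(active_augmentations: int, combine_at_once: int) -> int:
    """Upper bound on distinct augmentation combinations when applying
    `combine_at_once` of the active augmentations per image.
    An assumption about the dialog's bound, for illustration only."""
    k = min(combine_at_once, active_augmentations)
    return math.comb(active_augmentations, k)

print(max_combinations(5, 2))  # → 10
```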
Optimizer and schedule
- Fine tune: train all layers (vs. only the final layer). Available for Identify and Classify.
- Batch size: number of samples per step.
- Epochs: minimum training epochs; the system may determine an optimal stopping point via validation.
- Optimizer: Adam or SGD.
- Momentum: used with SGD.
- Learning rate decay: Cosine or Exponential.
- Initial learning rate: for cosine schedule.
- Decay steps: number of steps to decay over. Auto‑computed based on dataset size, batch size, and epochs, and updated when those change.
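A sketch of the auto-computation and the cosine schedule, under the assumption that decay steps equals steps-per-epoch times epochs (the product's exact formula may differ):

```python
import math

def auto_decay_steps(dataset_size, pct_training, batch_size, epochs):
    """Plausible derivation of decay steps: steps per epoch x epochs."""
    train_samples = int(dataset_size * pct_training)
    steps_per_epoch = math.ceil(train_samples / batch_size)
    return steps_per_epoch * epochs

def cosine_lr(step, initial_lr, decay_steps):
    """Standard cosine decay from initial_lr toward 0."""
    t = min(step, decay_steps) / decay_steps
    return initial_lr * 0.5 * (1.0 + math.cos(math.pi * t))

steps = auto_decay_steps(dataset_size=1000, pct_training=0.8, batch_size=32, epochs=10)
print(steps)                       # → 250
print(cosine_lr(0, 0.001, steps))  # → 0.001
```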
CenterNet/ResNet training parameters
General
- Freeze backbone epochs: epochs to keep the backbone frozen before unfreezing.
- Unfreeze layers per stage and unfreeze interval: progressive unfreezing schedule.
Detection and key point specifics
Shown when boxes or key points are enabled.
- Loss weights: object center, size, offset, box, classification, keypoint, heatmap (per task component).
- Heatmap bias and Gaussian radius.
- CenterNet debug images toggle.
- Size‑based Gaussians toggle.
- Box upsample filters.
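For context, CenterNet-style training renders each target as a Gaussian peak on a heatmap; the Gaussian radius controls the spread, and size‑based Gaussians would derive the radius from the box dimensions. A minimal illustrative sketch:

```python
import math

def draw_gaussian(heatmap, cx, cy, radius):
    """Splat a 2D Gaussian peak onto a heatmap at (cx, cy), in the style of
    CenterNet training targets. Illustrative only."""
    sigma = radius / 3.0
    h, w = len(heatmap), len(heatmap[0])
    for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
            g = math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma * sigma))
            heatmap[y][x] = max(heatmap[y][x], g)  # keep the strongest peak
    return heatmap

hm = [[0.0] * 9 for _ in range(9)]
draw_gaussian(hm, 4, 4, 3)
print(hm[4][4])  # → 1.0
```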
Regularization
- Dropout: 0..1.
Genetic optimization (optional)
- Enable: use a genetic search to optimize training parameters.
- Max search time (s) and consecutive worse limit.
- Fields: select numeric fields to search (batch size, epochs, learning rate, momentum, decay steps, loss weights, etc.), with min/max/step per field.
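A toy sketch of such a search, assuming one mutated field per round within its min/max/step and a stop after the consecutive-worse limit (the real optimizer also enforces the time budget and is likely population-based):

```python
import random

def genetic_search(fields, score_fn, max_rounds=50, worse_limit=5, seed=0):
    """Toy parameter search, for illustration only.

    fields: {name: (min, max, step)}; score_fn: higher is better.
    Mutates one field per round, keeps improvements, and stops after
    `worse_limit` consecutive non-improving rounds.
    """
    rng = random.Random(seed)
    params = {name: lo for name, (lo, hi, step) in fields.items()}
    best, worse = score_fn(params), 0
    for _ in range(max_rounds):
        name = rng.choice(list(fields))
        lo, hi, step = fields[name]
        candidate = dict(params)
        n_steps = int((hi - lo) / step)
        candidate[name] = lo + step * rng.randint(0, n_steps)
        s = score_fn(candidate)
        if s > best:
            params, best, worse = candidate, s, 0
        else:
            worse += 1
            if worse >= worse_limit:
                break
    return params, best
```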
Validation Rules
Before saving, all models are validated. Highlights:
Model
- Name and description are required; names must be unique across models.
- Based on (pre‑training source) is required.
- Accuracy (speed/accuracy slider) is required.
Attributes
- Every attribute must have a type.
- Color: nrColors and maxColors required.
- Another model / Identification: must reference an existing model; cyclic references are disallowed.
- Angle/Bearing/Adjacent angle: require the correct count of key point indices (4 for angle, 5 for bearing, 4 for adjacent angle).
- Clip/Align constraints: see “Clipping and alignment.”
Per‑category attributes
- Category selection required per group.
Training
- Numeric fields must be valid positive numbers (or within specified ranges); dropout must be between 0 and 1.
- Boolean fields must be boolean.
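These checks might look roughly like the following; the field names are illustrative, not the actual schema:

```python
def validate_training(params):
    """Sketch of the numeric/boolean training checks described above."""
    errors = {}
    for name in ("batch_size", "epochs", "learning_rate", "decay_steps"):
        v = params.get(name)
        # bool is a subclass of int in Python, so exclude it explicitly.
        if not isinstance(v, (int, float)) or isinstance(v, bool) or v <= 0:
            errors[name] = "must be a positive number"
    d = params.get("dropout")
    if not isinstance(d, (int, float)) or isinstance(d, bool) or not 0 <= d <= 1:
        errors["dropout"] = "must be between 0 and 1"
    if not isinstance(params.get("fine_tune"), bool):
        errors["fine_tune"] = "must be boolean"
    return errors

ok = {"batch_size": 32, "epochs": 10, "learning_rate": 1e-3,
      "decay_steps": 250, "dropout": 0.2, "fine_tune": True}
print(validate_training(ok))  # → {}
```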
Errors are shown per model and per field; saving is blocked until issues are resolved.
Saving and Persistence
- Save updates the project’s indexes. Under the hood:
  - New models create new indexes (folders or DB entries, depending on storage).
  - Removed models are closed and deleted from the project configuration (files are also removed for SQLite storage).
  - Updated models are applied in place.
- If no model is primary, the first model is made primary.
- Cancel discards changes; if modified, you’ll be asked to confirm.
Tips and Best Practices
- Use a single Primary model as the pipeline entry; downstream attributes can delegate to other models.
- Prefer COCO 2017 as a starting point when your task aligns with generic objects; you can prune/rename categories later.
- Keep your category list focused and consistent; use Level categories to balance datasets.
- Use attribute “Another model” to build modular pipelines (e.g., locate → identify).
- Only enable key points when needed; they add complexity, but unlock powerful alignment and angle/bearing attributes.
Troubleshooting
- Can’t enable Clip? Ensure the source model outputs boxes.
- Can’t enable Align? Ensure the source model outputs key points, and select all 4 align points.
- Decay steps looks odd? It’s calculated from dataset size, batch size, and epochs; it will update when those change.
- Save blocked? Check the highlighted fields in the list and in the active tab; correct all validation errors.
Related
- Getting started: how to create a project and run analysis.
- Training your own models: end‑to‑end training workflow and charts.