Label Format

The labels are released in Scalabel format and are compatible with labels generated by Scalabel. A label JSON file is a list of frame objects with the fields below. Please note that the format described here is a superset of the fields that actually appear in any given file. For example, box3d may be absent if the label is a 2D bounding box, and intrinsics may not appear if the exact camera calibration is unknown.
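
A minimal frame object might look like the sketch below, expressed as a Python dict for illustration. The field names follow the Scalabel format; the values are made up, and optional fields such as box3d and poly2d are omitted.

frame = {
    "name": "example-image.jpg",
    "attributes": {"weather": "clear", "scene": "city street", "timeofday": "daytime"},
    "timestamp": 10000,
    "labels": [
        {
            "id": "0",
            "category": "car",
            "attributes": {"occluded": False, "truncated": False, "trafficLightColor": "none"},
            "box2d": {"x1": 125.0, "y1": 211.0, "x2": 341.0, "y2": 283.0},
        },
    ],
}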

Categories

Object Detection

For object detection, 10 classes are evaluated:

1: pedestrian
2: rider
3: car
4: truck
5: bus
6: train
7: motorcycle
8: bicycle
9: traffic light
10: traffic sign

Note that the category_id field starts from 1 instead of 0.

Instance Segmentation, Box Tracking, Segmentation Tracking

For instance segmentation, multi object tracking (box tracking) and multi object tracking and segmentation (segmentation tracking), only the first 8 classes are used and evaluated.

Semantic Segmentation

For the semantic segmentation task, 19 classes are evaluated:

0: road
1: sidewalk
2: building
3: wall
4: fence
5: pole
6: traffic light
7: traffic sign
8: vegetation
9: terrain
10: sky
11: person
12: rider
13: car
14: truck
15: bus
16: train
17: motorcycle
18: bicycle

For the semantic segmentation task, category_id starts from 0. The value 255 is used for the “unknown” category and is not evaluated.

Drivable Area

For the drivable area task, 3 classes are evaluated:

0: direct
1: alternative
2: background

For the drivable area task, category_id starts from 0.

Lane Marking

For the lane marking task, there are 3 sub-tasks: lane categories, lane directions, and lane styles, with 9, 3, and 3 classes respectively.

Lane Categories

0: crosswalk
1: double other
2: double white
3: double yellow
4: road curb
5: single other
6: single white
7: single yellow
8: background

Lane Directions

0: parallel
1: vertical
2: background

Lane Styles

0: solid
1: dashed
2: background

Attributes

The BDD100K dataset provides the following frame-level and label-level attributes.

Frame attributes

- weather: "rainy|snowy|clear|overcast|undefined|partly cloudy|foggy"
- scene: "tunnel|residential|parking lot|undefined|city street|gas stations|highway"
- timeofday: "daytime|night|dawn/dusk|undefined"

Label attributes

- occluded: boolean
- truncated: boolean
- trafficLightColor: "red|green|yellow|none"
- areaType: "direct|alternative" (for drivable area)
- laneDirection: "parallel|vertical" (for lanes)
- laneStyle: "solid|dashed" (for lanes)
- laneTypes: (for lanes)

Semantic Segmentation Format

We provide labels for semantic segmentation and drivable area in both JSON and mask formats. The mask format saves the ground truth of each image as a one-channel PNG (8 bits per pixel). The value of each pixel represents its category. 255 means “ignore” and is not evaluated.
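
As a quick sanity check, a mask can be read directly with common Python imaging libraries. Below is a minimal sketch assuming Pillow and NumPy are available; the file name is illustrative.

import numpy as np
from PIL import Image

# Load a one-channel semantic segmentation mask (shape H x W, dtype uint8).
mask = np.asarray(Image.open("example_sem_seg_mask.png"))

valid = mask != 255    # 255 marks "ignore" pixels
road = mask == 0       # category 0 is "road" for semantic segmentation
print(valid.sum(), road.sum())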

Lane Marking Format

For lane marking, there are three sub-tasks: lane categories, lane directions, and lane styles. A one-channel PNG file is used for each image to store the class information of all three sub-tasks in each 8-bit pixel value. The 3rd and 4th bits are for direction and style, and the last 3 bits are for the category. Most importantly, the 5th bit indicates whether the pixel belongs to the background (0: lane, 1: background).
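
The layout can be decoded with simple bit operations. The sketch below assumes the bit positions above are counted from the most significant bit of the 8-bit value, i.e. direction in bit 5, style in bit 4, the background flag in bit 3, and the category in the lowest 3 bits; the file name is illustrative.

import numpy as np
from PIL import Image

lane = np.asarray(Image.open("example_lane_mask.png"))

background = (lane >> 3) & 1    # 0: lane, 1: background
direction = (lane >> 5) & 1     # 0: parallel, 1: vertical (valid where background == 0)
style = (lane >> 4) & 1         # 0: solid, 1: dashed (valid where background == 0)
category = lane & 0b111         # lane category id (valid where background == 0)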

Instance Segmentation Format

We provide labels for instance segmentation and segmentation tracking in both JSON and bitmask formats. Note that the poly2d field used in the JSONs is not in the same format as COCO polygons. Instead, poly2d stores a Bezier curve with vertices and control points. In the bitmask format, the labels for each image are stored in an RGBA PNG file.
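
A poly2d entry might look like the sketch below. The "types" and "closed" fields follow the Scalabel convention, in which "L" marks a vertex on the curve and "C" marks a Bezier control point; the values are made up.

label = {
    "id": "1",
    "category": "car",
    "poly2d": [
        {
            "vertices": [[250.0, 300.0], [280.0, 290.0], [310.0, 305.0], [320.0, 340.0]],
            "types": "LCCL",
            "closed": True,
        },
    ],
}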

The evaluation scripts use bitmasks as ground truth, so we suggest using bitmasks throughout. Each pixel should correspond to only one predicted class; poly2d cannot guarantee this, while bitmasks can.

In the RGBA image, the first byte, R, stores the category id, which starts from 1 (0 is used for the background). G stores the instance attributes. Currently, four attributes are used: “truncated”, “occluded”, “crowd”, and “ignore”. Note that boxes with “crowd” or “ignore” labels are not considered during evaluation. These four attributes are stored in the least significant bits of G, so G = (truncated << 3) + (occluded << 2) + (crowd << 1) + ignore. Finally, the B and A channels together store the annotation id (ann_id for instance segmentation, instance id for segmentation tracking), which can be computed as (B << 8) + A.
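
The channel layout above can be decoded as in the following sketch, assuming Pillow and NumPy; the file name is illustrative.

import numpy as np
from PIL import Image

# Load the RGBA bitmask; use uint16 so that (B << 8) does not overflow.
bitmask = np.asarray(Image.open("example_bitmask.png"), dtype=np.uint16)
r, g, b, a = (bitmask[..., i] for i in range(4))

category_id = r                 # 0 is background, categories start from 1
truncated = (g >> 3) & 1
occluded = (g >> 2) & 1
crowd = (g >> 1) & 1
ignore = g & 1
ann_id = (b << 8) + a           # per-instance annotation id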

Format Conversion

Coordinate System

During labeling, we regard the top-left corner of the top-left pixel as (0, 0), so in our conversion scripts the width is computed as x2 - x1 + 1 and the height as y2 - y1 + 1. This convention also influences the mIoU calculation. It is consistent with pycocotools, MMDetection 1.x, and maskrcnn-benchmark; note that MMDetection 2.x and Detectron2 adopt a different convention, so take care when using them.
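
In code, this convention corresponds to the following sketch (box2d fields as in the label format above; the helper name is illustrative).

def box2d_to_xywh(box2d):
    """Convert a box2d dict to COCO-style [x, y, width, height] using the +1 convention."""
    x1, y1, x2, y2 = box2d["x1"], box2d["y1"], box2d["x2"], box2d["y2"]
    return [x1, y1, x2 - x1 + 1, y2 - y1 + 1]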

from_coco

from_coco converts COCO-format JSON files into BDD100K format. Currently, for segmentation conversion, only the polygon format is supported.

Available arguments:

python3 -m bdd100k.label.from_coco -l ${input_file} -o ${out_path}

to_mask

You can run the conversion from poly2d to masks/bitmasks by this command:

python3 -m bdd100k.label.to_mask -m sem_seg|ins_seg|seg_track -l ${in_path} -o ${out_path} [--nproc ${process_num}]
  • process_num: the number of processes used for the conversion. Defaults to 4.

However, as the conversion process is not deterministic, we do not recommend running the conversion yourself.

to_color

You can run the conversion from masks/bitmasks to colormaps by this command:

python3 -m bdd100k.label.to_color -m sem_seg|ins_seg|seg_track -l ${in_path} -o ${out_path} [--nproc ${process_num}]
  • process_num: the number of processes used for the conversion. Defaults to 4.

to_coco

to_coco converts BDD100K JSON files into COCO format.

Available arguments:

python3 -m bdd100k.label.to_coco -m det|box_track -l ${in_path} -o ${out_path}

For instance segmentation and segmentation tracking, converting from “JSON + bitmasks” and from bitmasks only are both supported. For the first option, use this command:

python3 -m bdd100k.label.to_coco -m ins_seg|seg_track -l ${in_path} -o ${out_path} -mb ${mask_base}
  • mask_base: the path to the bitmasks

If you only have bitmasks in hand and do not use the scalabel_id field, you can use this command:

python3 -m bdd100k.label.to_coco -m ins_seg|seg_track -l ${mask_base} -o ${out_path}
  • mask_base: the path to the bitmasks