ultrayolo.datasets package¶
Submodules¶
ultrayolo.datasets.common module¶
- ultrayolo.datasets.common.anchors_to_string(anchors)[source]¶ transform the anchors into a string
- Arguments:
anchors {np.ndarray} – the anchors
- Returns:
str – the anchors in yolo format
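A minimal usage sketch, assuming the anchors are passed as an (N, 2) array of (W, H) pairs; the anchor values and the printed format are illustrative.

```python
import numpy as np
from ultrayolo.datasets.common import anchors_to_string

# three illustrative (W, H) anchor pairs
anchors = np.array([[10, 13], [16, 30], [33, 23]])
print(anchors_to_string(anchors))  # e.g. "10,13, 16,30, 33,23"
```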
- ultrayolo.datasets.common.best_anchors_iou(boxes, anchors)[source]¶ compute the anchor with the best IoU for each of the given boxes
- Arguments:
boxes {np.ndarray} – a numpy array of shape (num_examples, num_bboxes) of type (x_min, y_min, x_max, y_max)
anchors {np.ndarray} – a numpy array with the anchors to be used for the object detection (num_anchors, (W, H))
- Returns:
np.ndarray – a numpy array with the best anchor for each given object
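A hedged sketch of matching boxes to anchors; the box and anchor values are made up, and the exact shape of the returned array is not specified here.

```python
import numpy as np
from ultrayolo.datasets.common import best_anchors_iou

# two illustrative boxes in (x_min, y_min, x_max, y_max) format
boxes = np.array([[10, 20, 60, 80], [100, 100, 300, 250]], dtype=np.float32)
# three illustrative anchors as (W, H) pairs
anchors = np.array([[30, 60], [60, 45], [120, 90]], dtype=np.float32)

best = best_anchors_iou(boxes, anchors)
print(best)  # which anchor best overlaps each box
```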
- ultrayolo.datasets.common.get_grid_sizes(image_shape, base_grid_size)[source]¶ utility function that computes the sizes of the grid system
- Arguments:
image_shape {tuple} – the shape of the image
base_grid_size {int} – the size of a cell in the base grid
- Returns:
list – a list of the grid sizes
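A quick sketch assuming a 608x608x3 input and a base grid cell of 32 pixels; the number and order of the returned sizes depend on the implementation and are not asserted here.

```python
from ultrayolo.datasets.common import get_grid_sizes

grid_sizes = get_grid_sizes((608, 608, 3), 32)
print(grid_sizes)  # the grid sizes for the detection scales, derived from 608 / 32
```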
- ultrayolo.datasets.common.load_anchors(path)[source]¶ read the anchors from a file saved in the format x1,y1, x2,y2, …, x9, y9
- Arguments:
path {str} – the path of the file to read
- Returns:
numpy.ndarray – an array of tuples [(x1,y1), (x2,y2), …, (x9, y9)]
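A usage sketch; 'yolo_anchors.txt' is a hypothetical file whose contents follow the format described above.

```python
from ultrayolo.datasets.common import load_anchors

# hypothetical file containing a single line such as:
# 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
anchors = load_anchors('yolo_anchors.txt')
print(anchors.shape)  # one (x, y) pair per anchor, e.g. (9, 2)
```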
- ultrayolo.datasets.common.load_classes(path, as_dict=False)[source]¶ expects to read a file with one class name per line; the classes are codified in the order they appear, e.g. a file with the lines dog, cat is codified as dog -> 0, cat -> 1, and so on. The index 0 is used to represent no class
- Arguments:
path {str} – the path where the file is saved
- Keyword Arguments:
as_dict {bool} – load the classes as dictionary (idx, class) (default: {False})
- Returns:
list|dict – the classes as a list, or as a dict if as_dict is True
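A sketch assuming a hypothetical 'classes.txt' with one class name per line; the exact layout of the dictionary form follows the docstring and is not verified here.

```python
from ultrayolo.datasets.common import load_classes

# 'classes.txt' is a hypothetical file containing:
#   dog
#   cat
names = load_classes('classes.txt')                        # e.g. ['dog', 'cat']
names_by_idx = load_classes('classes.txt', as_dict=True)   # e.g. {0: 'dog', 1: 'cat'}
```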
- ultrayolo.datasets.common.make_masks(nelems)[source]¶ generate the default masks for the model
- Arguments:
nelems {int} – the number of elements for the mask
- Returns:
np.ndarray – a numpy array with the masks
- ultrayolo.datasets.common.open_boxes_batch(paths)[source]¶ parses bounding boxes and classes from a list of paths
- Arguments:
paths {[list]} – a list of paths
- Returns:
[tuple] – a tuple (boxes, classes) of shapes (M,4) and (M,1)
- ultrayolo.datasets.common.open_image(path)[source]¶ Open an image using imageio
- Arguments:
path {str} – the path of the image
- Returns:
numpy.ndarray – format (H,W,C)
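A minimal sketch; 'dog.jpg' is a hypothetical path.

```python
from ultrayolo.datasets.common import open_image

img = open_image('dog.jpg')   # hypothetical image path
print(img.shape)              # (H, W, C)
```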
- ultrayolo.datasets.common.pad_batch_to_fixed_size(batch_images, target_shape, batch_boxes=None)[source]¶ Resize and pad a batch of images and boxes to the target shape
- Arguments:
batch_images: an array of images with shape (H,W,C)
target_shape: a shape of type (H,W,C)
batch_boxes: an array of arrays with format (xmin, ymin, xmax, ymax)
- Returns:
images_aug: the list of resized and padded images
boxes_aug: the list of resized and padded boxes (optional: if batch_boxes is not None)
- ultrayolo.datasets.common.pad_boxes(boxes, max_objects)[source]¶ Pad boxes to the desired size
- Arguments:
boxes: an array of boxes with shape (N, X1, Y1, X2, Y2)
max_objects: the maximum number of boxes
- Returns:
boxes: with shape (max_objects, X1, Y1, X2, Y2)
- ultrayolo.datasets.common.pad_classes(classes, max_objects)[source]¶ pad the classes array to the desired number of objects
- Arguments:
classes {np.ndarray} – the array of class ids
max_objects {int} – the maximum number of objects
- Returns:
np.ndarray – the classes padded to max_objects entries
- ultrayolo.datasets.common.pad_to_fixed_size(image, target_shape, boxes=None)[source]¶ Resize and pad an image and its boxes to the target shape
- Arguments:
image: an image with shape (H,W,C)
target_shape: a shape of type (H,W,C)
boxes: an array of format (xmin, ymin, xmax, ymax)
- Returns:
image_pad: the padded image
boxes_pad: the padded boxes (optional: if boxes is not None)
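A hedged sketch assuming the function returns the (image, boxes) pair when boxes is given, as the docstring suggests; the path and box values are illustrative.

```python
import numpy as np
from ultrayolo.datasets.common import open_image, pad_to_fixed_size

img = open_image('dog.jpg')                                # hypothetical path
boxes = np.array([[50, 60, 200, 220]], dtype=np.float32)   # illustrative box

img_pad, boxes_pad = pad_to_fixed_size(img, (608, 608, 3), boxes)
print(img_pad.shape)   # (608, 608, 3)
```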
- ultrayolo.datasets.common.parse_boxes(str_boxes)[source]¶ Parse annotations in the form x_min,y_min,x_max,y_max x_min,y_min,x_max,y_max …
- Arguments:
str_boxes {str} – annotations in the form x_min,y_min,x_max,y_max x_min,y_min,x_max,y_max …
- Returns:
numpy.ndarray – a numpy array with the boxes extracted from the input
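A short sketch; the annotation string is illustrative and assumes boxes are separated by whitespace as in the docstring.

```python
from ultrayolo.datasets.common import parse_boxes

boxes = parse_boxes('10,20,60,80 100,100,300,250')
print(boxes)   # presumably a (2, 4) array with the two boxes
```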
- ultrayolo.datasets.common.parse_boxes_batch(list_str_boxes)[source]¶ parse a list of annotations
- Arguments:
list_str_boxes {list} – the list of the str annotations
- ultrayolo.datasets.common.prepare_batch(batch_images, batch_boxes, batch_classes, target_shape, max_objects, augmenters=None, pad=True)[source]¶ prepare a batch of images and boxes:
- resize all the images to the same size
- update the size of the boxes based on the new image size
- Arguments:
batch_images {numpy.ndarray} – an array of images with shape (H,W,C)
batch_boxes {np.ndarray} – an array of arrays with format (xmin, ymin, xmax, ymax)
target_shape {tuple} – a shape of type (H,W,C)
max_objects {int} – the maximum number of boxes to track
- Keyword Arguments:
augmenters {imgaug.augmenters} – ImgAug augmenters (default: {None})
pad {bool} – if the images should be padded (default: {True})
- Returns:
Tuple – a Tuple with batch_images, batch_boxes and batch_classes
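A hedged end-to-end sketch of preparing a small batch; the image paths, annotation strings, and the layout of the class arrays are illustrative assumptions.

```python
import numpy as np
from ultrayolo.datasets.common import open_image, parse_boxes, prepare_batch

# hypothetical paths and annotations for a batch of two images
batch_images = [open_image('dog.jpg'), open_image('cat.jpg')]
batch_boxes = [parse_boxes('10,20,60,80'), parse_boxes('30,40,120,200')]
batch_classes = [np.array([[0]]), np.array([[1]])]

images, boxes, classes = prepare_batch(
    batch_images, batch_boxes, batch_classes,
    target_shape=(608, 608, 3), max_objects=10)
```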
- ultrayolo.datasets.common.resize(image, target_shape, boxes=None, keep_aspect_ratio=True)[source]¶ Resize an image and its boxes to the target shape
- Arguments:
image: an image with shape (H,W,C)
target_shape: a shape of type (H,W,C)
boxes: an array of format (xmin, ymin, xmax, ymax)
keep_aspect_ratio: whether to keep the aspect ratio (default: True)
- Returns:
image_resized: the resized image
boxes_resized: the resized boxes (optional: if boxes is not None)
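A minimal sketch mirroring pad_to_fixed_size above; when boxes is omitted, presumably only the resized image is returned.

```python
import numpy as np
from ultrayolo.datasets.common import open_image, resize

img = open_image('dog.jpg')                                # hypothetical path
boxes = np.array([[50, 60, 200, 220]], dtype=np.float32)   # illustrative box

img_resized, boxes_resized = resize(img, (416, 416, 3), boxes, keep_aspect_ratio=True)
```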
- ultrayolo.datasets.common.resize_batch(batch_images, target_shape, batch_boxes=None)[source]¶ Resize a batch of images and boxes to the target shape
- Arguments:
batch_images: an array of images with shape (H,W,C)
target_shape: a shape of type (H,W,C)
batch_boxes: an array of arrays with format (xmin, ymin, xmax, ymax, class_name)
- Returns:
images_aug: the list of resized images
boxes_aug: the list of resized boxes (optional: if batch_boxes is not None)
- ultrayolo.datasets.common.save_image(img, path)[source]¶ save an image
- Arguments:
img {numpy.ndarray} – an image as numpy array
path {str} – the path
- ultrayolo.datasets.common.to_center_width_height(boxes)[source]¶ transform a numpy array of boxes from [x_min, y_min, x_max, y_max] to [x_center, y_center, width, height]
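A small worked example; the exact output layout is assumed from the description above.

```python
import numpy as np
from ultrayolo.datasets.common import to_center_width_height

corner_boxes = np.array([[10, 20, 60, 80]], dtype=np.float32)  # x_min, y_min, x_max, y_max
cwh_boxes = to_center_width_height(corner_boxes)
# a 50x60 box centred at (35, 50), so roughly [[35, 50, 50, 60]]
```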
- ultrayolo.datasets.common.transform_target(boxes_data, classes_data, anchors, anchor_masks, grid_sizes, num_classes, target_shape, classes=None)[source]¶ transform the target data into yolo format
- Arguments:
boxes_data {np.ndarray} – an array of shape (NBATCH, x_min, y_min, x_max, y_max)
classes_data {np.ndarray} – an array of shape (NBATCH, 1)
anchors {np.ndarray} – an array of shape (6 or 9, 2)
anchor_masks {np.ndarray} – an array of masks to select the anchors
grid_sizes {list} – the list of the grid sizes
num_classes {int} – the number of classes
target_shape {tuple} – the target shape of the images
- Keyword Arguments:
classes {list} – a positional list that associates id_num to pos (default: {None}). This is used when a dataset is created by filtering some classes of another dataset and the classes are not 0-indexed
- Returns:
[tuple] – a tuple with the dataset transformed for coco training
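A hedged sketch that ties the helpers above together to build a YOLO target; the anchors file, shapes, and class count are illustrative, and the input shapes follow the docstring literally.

```python
import numpy as np
from ultrayolo.datasets.common import (load_anchors, make_masks,
                                       get_grid_sizes, transform_target)

target_shape = (608, 608, 3)
anchors = load_anchors('yolo_anchors.txt')   # hypothetical anchors file
masks = make_masks(len(anchors))
grid_sizes = get_grid_sizes(target_shape, 32)

# one example with a single box of class 0, shaped as in the Arguments above
boxes_data = np.array([[50, 60, 200, 220]], dtype=np.float32)
classes_data = np.array([[0]], dtype=np.float32)

y_true = transform_target(boxes_data, classes_data, anchors, masks,
                          grid_sizes, num_classes=2, target_shape=target_shape)
```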
ultrayolo.datasets.datasets module¶
- class ultrayolo.datasets.datasets.BaseDataset(annotations_path: str, img_shape: Tuple[int, int, int], max_objects: int, batch_size: int, anchors: numpy.ndarray, anchor_masks: numpy.ndarray, base_grid_size: int = 32, is_training: bool = True, augmenters: imgaug.augmenters.meta.Sequential = None, pad_to_fixed_size: bool = True, images_folder='images')[source]¶
Bases: tensorflow.python.keras.utils.data_utils.Sequence
- class ultrayolo.datasets.datasets.CocoFormatDataset(annotations_path, img_shape, max_objects, batch_size, anchors, anchor_masks, base_grid_size: int = 32, is_training=True, augmenters=None, pad_to_fixed_size=True, images_folder='images')[source]¶
Bases: ultrayolo.datasets.datasets.BaseDataset
this class handles datasets in the COCO format.
- annotation {
"id": int, "image_id": int, "category_id": int, "segmentation": RLE or [polygon], "area": float, "bbox": [x,y,width,height], "iscrowd": 0 or 1,
}
- categories [{
"id": int, "name": str, "supercategory": str,
}]
- Arguments:
Sequence {tf.keras.utils.Sequence} – [description]
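A hedged construction sketch; the annotation file, images folder, and batch parameters are illustrative, and indexing the dataset as a Keras Sequence to get an (x, y) batch is an assumption.

```python
from ultrayolo.datasets.common import load_anchors, make_masks
from ultrayolo.datasets.datasets import CocoFormatDataset

anchors = load_anchors('yolo_anchors.txt')    # hypothetical anchors file
masks = make_masks(len(anchors))

dataset = CocoFormatDataset(
    annotations_path='annotations.json',      # hypothetical COCO annotation file
    img_shape=(608, 608, 3),
    max_objects=100,
    batch_size=8,
    anchors=anchors,
    anchor_masks=masks,
    base_grid_size=32,
    is_training=True,
    images_folder='images')

x_batch, y_batch = dataset[0]   # one batch via the Sequence interface (assumed)
```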
- class ultrayolo.datasets.datasets.YoloDatasetMultiFile(annotations_path: str, img_shape: Tuple[int, int, int], max_objects: int, batch_size: int, anchors: numpy.ndarray, anchor_masks: numpy.ndarray, base_grid_size: int = 32, is_training: bool = True, augmenters: imgaug.augmenters.meta.Sequential = None, pad_to_fixed_size: bool = True, images_folder='images')[source]¶
- class ultrayolo.datasets.datasets.YoloDatasetSingleFile(annotations_path: str, img_shape: Tuple[int, int, int], max_objects: int, batch_size: int, anchors: numpy.ndarray, anchor_masks: numpy.ndarray, base_grid_size: int = 32, is_training: bool = True, augmenters: imgaug.augmenters.meta.Sequential = None, pad_to_fixed_size: bool = True, images_folder='images')[source]¶
ultrayolo.datasets.genanchors module¶
- class ultrayolo.datasets.genanchors.AnchorsGenerator(num_clusters, scaling_factor, dist_fn=<function median>)[source]¶
Bases: object
- ultrayolo.datasets.genanchors.gen_anchors(boxes_xywh, num_clusters, scaling_factor=1.1)[source]¶ generate anchors
- Arguments:
boxes_xywh {np.ndarray} – the boxes used to create the anchors
num_clusters {int} – the number of clusters to generate
- Keyword Arguments:
scaling_factor {float} – a multiplicative factor to increase the box size (default: {1.1})
- Returns:
np.ndarray – the generated anchors
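A minimal sketch; the boxes are illustrative and the output is assumed to be one (W, H) pair per cluster.

```python
import numpy as np
from ultrayolo.datasets.genanchors import gen_anchors

# illustrative boxes in (x, y, width, height) format
boxes_xywh = np.array([
    [0, 0, 30, 60],
    [0, 0, 62, 45],
    [0, 0, 120, 90],
    [0, 0, 150, 200],
], dtype=np.float32)

anchors = gen_anchors(boxes_xywh, num_clusters=2, scaling_factor=1.1)
print(anchors)   # assumed: one (W, H) pair per cluster, scaled by scaling_factor
```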