Gene_Pool

src.Gene_Pool.conv_block(x, filters=16, kernel_size=3, strides=2, normalization='BatchNormalization', activation='silu6')

Builds a convolutional block on the given input tensor x.

Parameters

x : tf.Tensor

Input tensor for convolutional block.

filters : int, optional

The dimensionality of the output space for the Conv2D layer.

kernel_size : int, optional

The height and width of the 2D convolution window.

strides : int, optional

The strides of the convolution along the height and width.

normalization : str, optional

The type of normalization layer. Supports ‘BatchNormalization’ and ‘LayerNormalization’.

activation : str, optional

The activation function to use. Supports ‘relu’, ‘relu6’, ‘silu’, and ‘silu6’.

Returns

x : tf.Tensor

Output tensor after applying convolution, normalization, and activation.
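
A minimal Keras sketch of what a block with this signature could look like: the layer ordering (Conv2D → normalization → activation) follows the description above, while the padding choice and the ‘silu6’ definition (assumed here to be SiLU clipped at 6) are assumptions rather than the library's verbatim implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block_sketch(x, filters=16, kernel_size=3, strides=2,
                      normalization='BatchNormalization', activation='silu6'):
    # Conv2D -> normalization -> activation, as described above.
    x = layers.Conv2D(filters, kernel_size, strides=strides, padding='same')(x)
    if normalization == 'BatchNormalization':
        x = layers.BatchNormalization()(x)
    elif normalization == 'LayerNormalization':
        x = layers.LayerNormalization()(x)
    if activation == 'relu':
        x = layers.ReLU()(x)
    elif activation == 'relu6':
        x = layers.ReLU(max_value=6.0)(x)
    elif activation == 'silu':
        x = layers.Activation(tf.nn.silu)(x)
    elif activation == 'silu6':
        # Assumed definition: SiLU output clipped at 6.
        x = layers.Lambda(lambda t: tf.minimum(tf.nn.silu(t), 6.0))(x)
    return x
```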

src.Gene_Pool.ffn(x, hidden_units, dropout_rate, use_bias=False)

Implements a Feed-Forward Network (FFN), which is an essential component of various deep learning architectures.

Parameters

x : tensor

The input tensor to the FFN.

hidden_units : list

A list containing the number of hidden units for each dense layer in the FFN.

dropout_rate : float

The dropout rate used by the dropout layers in the FFN.

use_bias : bool, default=False

If True, the layers in the FFN will use bias vectors.

Returns

x : tensor

The output tensor from the FFN.
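
Given the parameters above, the FFN can be sketched as alternating Dense and Dropout layers, one pair per entry in hidden_units; the GELU activation used here is an assumption, not a confirmed detail of the library.

```python
import tensorflow as tf
from tensorflow.keras import layers

def ffn_sketch(x, hidden_units, dropout_rate, use_bias=False):
    # One Dense + Dropout pair per entry in hidden_units.
    for units in hidden_units:
        x = layers.Dense(units, activation=tf.nn.gelu, use_bias=use_bias)(x)
        x = layers.Dropout(dropout_rate)(x)
    return x
```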

src.Gene_Pool.inverted_residual_block(x, expansion_factor, output_channels, strides=1, kernel_size=3, normalization='BatchNormalization', activation='silu6', residual='Concatenate')

Builds an inverted residual block on the given input tensor x.

Parameters

x : tf.Tensor

Input tensor for the inverted residual block.

expansion_factor : int

Determines the number of output channels for the first Conv2D layer in the block.

output_channels : int

The number of output channels for the last Conv2D layer in the block.

strides : int, optional

The strides of the convolution along the height and width.

kernel_size : int, optional

The height and width of the 2D convolution window.

normalization : str, optional

The type of normalization layer. Supports ‘BatchNormalization’ and ‘LayerNormalization’.

activation : str, optional

The activation function to use. Supports ‘relu’, ‘relu6’, ‘silu’, and ‘silu6’.

residual : str, optional

The type of residual connection to use. Supports ‘Concatenate’, ‘StochasticDepth’, and ‘Add’.

Returns

m : tf.Tensor

Output tensor after applying the inverted residual block operations.
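
The parameters suggest the MobileNetV2-style expand → depthwise → project pattern; the sketch below shows one plausible wiring of the ‘Add’ and ‘Concatenate’ residual options. The expansion rule (expansion_factor × input channels), the use of relu6 in place of the documented normalization/activation switches, and the omission of the ‘StochasticDepth’ path are all simplifying assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inverted_residual_block_sketch(x, expansion_factor, output_channels,
                                   strides=1, kernel_size=3, residual='Concatenate'):
    inputs = x
    in_channels = x.shape[-1]
    # Expand: 1x1 convolution widening the channel dimension (assumed rule).
    m = layers.Conv2D(expansion_factor * in_channels, 1, padding='same')(x)
    m = layers.BatchNormalization()(m)
    m = layers.ReLU(max_value=6.0)(m)
    # Depthwise: spatial filtering at the requested stride.
    m = layers.DepthwiseConv2D(kernel_size, strides=strides, padding='same')(m)
    m = layers.BatchNormalization()(m)
    m = layers.ReLU(max_value=6.0)(m)
    # Project: 1x1 convolution down to output_channels, no activation.
    m = layers.Conv2D(output_channels, 1, padding='same')(m)
    m = layers.BatchNormalization()(m)
    # Residual connection, applied only when spatial shapes still match.
    if strides == 1 and residual == 'Add' and in_channels == output_channels:
        m = layers.Add()([inputs, m])
    elif strides == 1 and residual == 'Concatenate':
        m = layers.Concatenate()([inputs, m])
    return m
```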

src.Gene_Pool.mobilevit_block(x, num_blocks, projection_dim, strides=1, kernel_size=3, num_heads=2, residual='Concatenate', activation='silu6', normalization='BatchNormalization')

Constructs a MobileViT block, which consists of local feature extraction, global feature extraction (via a transformer block), and merging of the local and global features.

Parameters

x : tensor

Input tensor.

num_blocks : int

Number of transformer layers to use in the transformer block.

projection_dim : int

Output dimensions for the Convolution and Transformer blocks.

strides : int, default=1

Stride length for the Convolution blocks.

kernel_size : int, default=3

Kernel size for the Convolution blocks.

num_heads : int, default=2

Number of attention heads for the MultiHeadAttention layer in the Transformer block.

residual : str, default=’Concatenate’

Type of residual connection. Options are ‘Concatenate’, ‘StochasticDepth’, and ‘Add’.

activation : str, default=’silu6’

Activation function to use in the Convolution blocks.

normalization : str, default=’BatchNormalization’

Normalization layer to use in the Convolution blocks.

Returns

local_global_features : tensor

Output tensor after the MobileViT block.
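
A usage sketch that wires the documented signature into a small classification model; the input shape, classifier head, and hyperparameter values are illustrative only.

```python
import tensorflow as tf
from tensorflow.keras import layers
from src.Gene_Pool import conv_block, mobilevit_block

inputs = tf.keras.Input(shape=(256, 256, 3))
x = conv_block(inputs, filters=16, strides=2)                # local downsampling
x = mobilevit_block(x, num_blocks=2, projection_dim=64,      # local + global features
                    num_heads=2, residual='Concatenate')
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)
model.summary()
```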

src.Gene_Pool.transformer_block(encoded_patches, transformer_layers, projection_dim, num_heads=2)

Creates a Transformer block, which contains multiple layers of multi-head self-attention followed by a feed-forward network (FFN). Each of these operations is followed by a stochastic depth skip connection and layer normalization.

Parameters

encoded_patches : tensor

The input tensor to the Transformer block.

transformer_layers : int

The number of layers in the Transformer block.

projection_dim : int

The number of output dimensions for the Transformer block.

num_heads : int, default=2

The number of attention heads for each self-attention layer in the Transformer block.

Returns

encoded_patches : tensor

The output tensor from the Transformer block.
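
A sketch of the layer pattern described above, reusing the ffn helper documented earlier; the plain residual additions stand in for the stochastic depth connections, and the layer-norm placement and FFN widths are assumptions.

```python
from tensorflow.keras import layers
from src.Gene_Pool import ffn

def transformer_block_sketch(encoded_patches, transformer_layers,
                             projection_dim, num_heads=2):
    for _ in range(transformer_layers):
        # Multi-head self-attention with a residual connection and layer norm.
        x1 = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
        attention = layers.MultiHeadAttention(num_heads=num_heads,
                                              key_dim=projection_dim)(x1, x1)
        x2 = layers.Add()([attention, encoded_patches])
        # Feed-forward network with a second residual connection and layer norm.
        x3 = layers.LayerNormalization(epsilon=1e-6)(x2)
        x3 = ffn(x3, hidden_units=[projection_dim * 2, projection_dim],  # widths assumed
                 dropout_rate=0.1)
        encoded_patches = layers.Add()([x3, x2])
    return encoded_patches
```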