models

models.cloudgan

CloudGAN Objects

class CloudGAN(pl.LightningModule)

__init__

def __init__(forecast_steps: int = 48, input_channels: int = 12, lr: float = 0.0002, beta1: float = 0.5, beta2: float = 0.999, num_filters: int = 64, generator_model: str = "runet", norm: str = "batch", use_dropout: bool = False, discriminator_model: str = "enhanced", discriminator_layers: int = 0, loss: str = "vanilla", scheduler: str = "plateau", lr_epochs: int = 10, lambda_l1: float = 100.0, l1_loss: str = "l1", channels_per_timestep: int = 12, condition_time: bool = False, pretrained: bool = False)

Creates CloudGAN, based on https://www.climatechange.ai/papers/icml2021/54. Changes include allowing outputs for all timesteps and optionally conditioning on time for single-timestep output.

Arguments:

  • forecast_steps - Number of timesteps to forecast
  • input_channels - Number of input channels
  • lr - Learning Rate
  • beta1 - optimizer beta1
  • beta2 - optimizer beta2 value
  • num_filters - Number of filters in generator
  • generator_model - Generator name
  • norm - Norm type
  • use_dropout - Whether to use dropout
  • discriminator_model - model for discriminator, one of options in define_discriminator
  • discriminator_layers - Number of layers in discriminator, only for NLayerDiscriminator
  • loss - Loss function, described in GANLoss
  • scheduler - LR scheduler name
  • lr_epochs - Epochs for LR scheduler
  • lambda_l1 - Lambda for the L1 loss; the slides recommend a value between 5 and 200
  • l1_loss - Loss to use for the L1 term from the slides; defaults to L1, with SSIM also available
  • channels_per_timestep - Channels per input timestep
  • condition_time - Whether to condition on a future timestep, similar to MetNet
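
As a quick illustration, a minimal construction sketch using only the documented parameters; the import path models.cloudgan is inferred from the module layout above and may need adjusting.

```python
# Minimal usage sketch; the import path is an assumption based on the
# module layout documented here (models.cloudgan).
from models.cloudgan import CloudGAN

model = CloudGAN(
    forecast_steps=24,               # predict 24 future timesteps
    input_channels=12,
    generator_model="runet",         # generator architecture name
    discriminator_model="enhanced",  # one of the define_discriminator options
    loss="vanilla",                  # GAN objective, see GANLoss
    lambda_l1=100.0,                 # weight on the L1 reconstruction term
    condition_time=False,            # True for single-timestep, MetNet-style output
)
```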

train_per_timestep

def train_per_timestep(images: torch.Tensor, future_images: torch.Tensor, optimizer_idx: int, batch_idx: int)

For training with conditioning on time, i.e. when the model produces a single output per step.

This iterates through every timestep in forecast_steps and runs the training step for each.

Arguments:

  • images - (Batch, Timestep, Channels, Width, Height)
  • future_images - (Batch, Timestep, Channels, Width, Height)
  • optimizer_idx - int, the optimizer to use
  • batch_idx - Batch index

train_all_timestep

def train_all_timestep(images: torch.Tensor, future_images: torch.Tensor, optimizer_idx: int, batch_idx: int)

Train on all timesteps at once, instead of a single timestep at a time, with no conditioning on the future timestep.

Arguments:

  • images - (Batch, Timestep, Channels, Width, Height)
  • future_images - (Batch, Timestep, Channels, Width, Height)
  • optimizer_idx - int, the optimizer to use
  • batch_idx - Batch index

models.fcn

models.runet

models.utils

reverse_space_to_depth

def reverse_space_to_depth(frames: np.ndarray, temporal_block_size: int = 1, spatial_block_size: int = 1) -> np.ndarray

Reverse space to depth transform.

space_to_depth

def space_to_depth(frames: np.ndarray, temporal_block_size: int = 1, spatial_block_size: int = 1) -> np.ndarray

Space to depth transform.
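
Both transforms are standard block-reshaping operations. The sketch below shows the idea with einops, assuming a (time, height, width, channels) layout; the actual function may expect a different layout or an extra batch dimension.

```python
# Illustrative space-to-depth sketch; the (t, h, w, c) layout is an
# assumption. Requires the einops package.
import numpy as np
import einops

def space_to_depth_sketch(frames: np.ndarray,
                          temporal_block_size: int = 1,
                          spatial_block_size: int = 1) -> np.ndarray:
    # Fold dt x dh x dw blocks into the channel dimension.
    return einops.rearrange(
        frames,
        "(t dt) (h dh) (w dw) c -> t h w (dt dh dw c)",
        dt=temporal_block_size,
        dh=spatial_block_size,
        dw=spatial_block_size,
    )

frames = np.zeros((8, 32, 32, 12))
out = space_to_depth_sketch(frames, temporal_block_size=2, spatial_block_size=4)
print(out.shape)  # (4, 8, 8, 384)
```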

models.deeplabv3

models.layers

models.layers.TimeDistributed

TimeDistributed Objects

class TimeDistributed(nn.Module)

Applies a module over the time dimension (tdim) identically for each step; use low_mem to compute one step at a time.

forward

def forward(*tensors, **kwargs)

Input x with shape (bs, seq_len, channels, width, height).

low_mem_forward

def low_mem_forward(*tensors, **kwargs)

Input x with shape (bs, seq_len, channels, width, height).

format_output

def format_output(out, bs, seq_len)

Unstack outputs from the flattened (bs * seq_len) batch dimension back to (bs, seq_len, ...).
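
Taken together, the pattern is: merge the batch and sequence dimensions, apply the wrapped module once, then unstack. A minimal single-tensor sketch (the real module also accepts multiple tensors and keyword arguments):

```python
# Minimal sketch of the TimeDistributed pattern.
import torch
import torch.nn as nn

class TimeDistributedSketch(nn.Module):
    def __init__(self, module: nn.Module):
        super().__init__()
        self.module = module

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        bs, seq_len = x.shape[:2]
        # (bs, seq_len, c, h, w) -> (bs * seq_len, c, h, w)
        out = self.module(x.reshape(bs * seq_len, *x.shape[2:]))
        # Unstack: (bs * seq_len, ...) -> (bs, seq_len, ...)
        return out.reshape(bs, seq_len, *out.shape[1:])

layer = TimeDistributedSketch(nn.Conv2d(12, 64, kernel_size=3, padding=1))
y = layer(torch.randn(2, 5, 12, 16, 16))
print(y.shape)  # torch.Size([2, 5, 64, 16, 16])
```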

models.layers.ConvLSTM

ConvLSTMCell Objects

class ConvLSTMCell(nn.Module)

__init__

def __init__(input_dim, hidden_dim, kernel_size, bias, conv_type: str = "standard")

Initialize ConvLSTM cell.

Parameters

  • input_dim - int, number of channels of the input tensor
  • hidden_dim - int, number of channels of the hidden state
  • kernel_size - (int, int), size of the convolutional kernel
  • bias - bool, whether or not to add the bias
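
For reference, a sketch of the standard ConvLSTM cell update (Shi et al., 2015) that a cell like this computes; the conv_type option and other implementation details here are not shown.

```python
# Sketch of the standard ConvLSTM cell: one convolution over the
# concatenated input and hidden state produces all four gates.
import torch
import torch.nn as nn

class ConvLSTMCellSketch(nn.Module):
    def __init__(self, input_dim, hidden_dim, kernel_size, bias=True):
        super().__init__()
        self.hidden_dim = hidden_dim
        padding = kernel_size[0] // 2, kernel_size[1] // 2
        self.conv = nn.Conv2d(input_dim + hidden_dim, 4 * hidden_dim,
                              kernel_size, padding=padding, bias=bias)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.split(gates, self.hidden_dim, dim=1)
        # Input, forget, and output gates modulate the cell state.
        c_next = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h_next = torch.sigmoid(o) * torch.tanh(c_next)
        return h_next, c_next
```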

models.layers.Normalization

models.layers.Discriminator

SelfAttention Objects

class SelfAttention(nn.Module)

Self-attention layer

forward

def forward(x)

Arguments:

  • x - input feature maps (B × C × W × H)

Returns:

  • out - self-attention value + input feature
  • attention - B × N × N (N is Width × Height)
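
A sketch of the SAGAN-style self-attention these shapes correspond to; the query/key channel reduction factor of 8 is an assumption.

```python
# SAGAN-style self-attention sketch over feature maps.
import torch
import torch.nn as nn

class SelfAttentionSketch(nn.Module):
    def __init__(self, in_dim: int):
        super().__init__()
        self.query = nn.Conv2d(in_dim, in_dim // 8, 1)
        self.key = nn.Conv2d(in_dim, in_dim // 8, 1)
        self.value = nn.Conv2d(in_dim, in_dim, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, w, h = x.shape
        n = w * h
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)   # B x N x C'
        k = self.key(x).view(b, -1, n)                      # B x C' x N
        attention = torch.softmax(torch.bmm(q, k), dim=-1)  # B x N x N
        v = self.value(x).view(b, -1, n)                    # B x C x N
        out = torch.bmm(v, attention.permute(0, 2, 1)).view(b, c, w, h)
        return self.gamma * out + x, attention
```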

models.layers.Generator

models.layers.CoordConv

AddCoords Objects

class AddCoords(nn.Module)

forward

def forward(input_tensor)

Arguments:

  • input_tensor - shape (batch, channel, x_dim, y_dim)

models.layers.ConditionTime

condition_time

def condition_time(x, i=0, size=(12, 16), seq_len=15)

Create one-hot encoded time image layers, with i in [1, seq_len].
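
A sketch of the idea: build a one-hot vector for step i and broadcast it into image-shaped layers that can be concatenated onto the input channels. The exact return shape of the real function is an assumption.

```python
# One-hot time-conditioning sketch: channel i is all ones, the rest zeros.
import torch

def condition_time_sketch(x, i=0, size=(12, 16), seq_len=15):
    # One-hot vector for step i, broadcast to image-shaped layers.
    times = torch.eye(seq_len, dtype=x.dtype, device=x.device)[i]
    ct = times.reshape(seq_len, 1, 1).expand(seq_len, *size)
    return ct  # (seq_len, H, W), ready to concatenate onto the input channels

x = torch.randn(1, 12, 12, 16)
layers = condition_time_sketch(x, i=3, size=(12, 16), seq_len=15)
print(layers.shape)  # torch.Size([15, 12, 16])
```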

ConditionTime Objects

class ConditionTime(nn.Module)

Conditions time on a stack of images by adding horizon channels to the image.

forward

def forward(x, fstep=0)

Arguments:

  • x - stack of images
  • fstep - forecast step

models.layers.GResBlock

models.layers.SpatioTemporalLSTMCell_memory_decoupling

__author__

PredRNN v2, adapted from https://github.com/thuml/predrnn-pytorch

models.layers.RUnetLayers

models.layers.Attention

models.attention_unet

models.conv_lstm

ConvLSTM Objects

class ConvLSTM(torch.nn.Module)

forward

def forward(x, forecast_steps=0, hidden_state=None)

Parameters

  • input_tensor - 5-D tensor of shape (b, t, c, h, w): batch, time, channel, height, width

models.gan

models.gan.common

get_norm_layer

def get_norm_layer(norm_type="instance")

Return a normalization layer

Arguments:

  • norm_type (str) - the name of the normalization layer: batch | instance | none

For BatchNorm, we use learnable affine parameters and track running statistics (mean/stddev). For InstanceNorm, we do not use learnable affine parameters. We do not track running statistics.
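
A sketch matching that description, using functools.partial so the returned callable can be used like a layer class:

```python
# Norm-layer factory sketch: affine BatchNorm with running stats,
# non-affine InstanceNorm without running stats, as described above.
import functools
import torch.nn as nn

def get_norm_layer_sketch(norm_type: str = "instance"):
    if norm_type == "batch":
        return functools.partial(nn.BatchNorm2d, affine=True,
                                 track_running_stats=True)
    if norm_type == "instance":
        return functools.partial(nn.InstanceNorm2d, affine=False,
                                 track_running_stats=False)
    if norm_type == "none":
        return nn.Identity  # nn.Identity ignores constructor arguments
    raise NotImplementedError(f"normalization layer [{norm_type}] not found")

norm_layer = get_norm_layer_sketch("batch")
bn = norm_layer(64)  # behaves like nn.BatchNorm2d(64)
```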

init_weights

def init_weights(net, init_type="normal", init_gain=0.02)

Initialize network weights.

Arguments:

  • net (network) - network to be initialized
  • init_type (str) - the name of an initialization method: normal | xavier | kaiming | orthogonal
  • init_gain (float) - scaling factor for normal, xavier and orthogonal

'normal' was used in the original pix2pix and CycleGAN papers, but xavier and kaiming might work better for some applications. Feel free to experiment.
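
A sketch of the initializer pattern described above, applied recursively with net.apply; the per-layer rules are assumptions modeled on the pix2pix implementation.

```python
# Weight-initialization sketch dispatching on init_type.
import torch.nn as nn
from torch.nn import init

def init_weights_sketch(net, init_type="normal", init_gain=0.02):
    def init_func(m):
        classname = m.__class__.__name__
        if hasattr(m, "weight") and ("Conv" in classname or "Linear" in classname):
            if init_type == "normal":
                init.normal_(m.weight.data, 0.0, init_gain)
            elif init_type == "xavier":
                init.xavier_normal_(m.weight.data, gain=init_gain)
            elif init_type == "kaiming":
                init.kaiming_normal_(m.weight.data, a=0, mode="fan_in")
            elif init_type == "orthogonal":
                init.orthogonal_(m.weight.data, gain=init_gain)
            if m.bias is not None:
                init.constant_(m.bias.data, 0.0)
        elif "BatchNorm2d" in classname:
            # BatchNorm's weight is a scale, not a kernel, so it is drawn
            # from N(1, init_gain) regardless of init_type.
            init.normal_(m.weight.data, 1.0, init_gain)
            init.constant_(m.bias.data, 0.0)

    net.apply(init_func)  # apply recursively to every submodule
```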

init_net

def init_net(net, init_type="normal", init_gain=0.02)

Initialize a network: 1. register CPU/GPU device (with multi-GPU support); 2. initialize the network weights

Arguments:

  • net (network) - the network to be initialized
  • init_type (str) - the name of an initialization method: normal | xavier | kaiming | orthogonal
  • init_gain (float) - scaling factor for normal, xavier and orthogonal
  • gpu_ids (int list) - which GPUs the network runs on, e.g. 0,1,2

Return an initialized network.

cal_gradient_penalty

def cal_gradient_penalty(netD, real_data, fake_data, device, type="mixed", constant=1.0, lambda_gp=10.0)

Calculate the gradient penalty loss, used in the WGAN-GP paper https://arxiv.org/abs/1704.00028

Arguments:

  • netD (network) - discriminator network
  • real_data (tensor array) - real images
  • fake_data (tensor array) - generated images from the generator
  • device (str) - GPU / CPU, from torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu')
  • type (str) - whether we mix real and fake data or not: real | fake | mixed
  • constant (float) - the constant used in the formula (||gradient||_2 - constant)^2
  • lambda_gp (float) - weight for this loss

Returns the gradient penalty loss
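
A sketch of the penalty for type="mixed", following the formula above; the 4-D alpha shape assumes image inputs.

```python
# WGAN-GP penalty sketch: interpolate between real and fake samples,
# then penalize gradients whose L2 norm deviates from `constant`.
import torch

def gradient_penalty_sketch(netD, real_data, fake_data, device,
                            constant=1.0, lambda_gp=10.0):
    alpha = torch.rand(real_data.size(0), 1, 1, 1, device=device)
    interp = (alpha * real_data + (1 - alpha) * fake_data).requires_grad_(True)
    d_out = netD(interp)
    grads = torch.autograd.grad(
        outputs=d_out, inputs=interp,
        grad_outputs=torch.ones_like(d_out),
        create_graph=True, retain_graph=True, only_inputs=True,
    )[0]
    grads = grads.view(real_data.size(0), -1)
    # (||gradient||_2 - constant)^2, averaged over the batch and weighted
    return ((grads.norm(2, dim=1) - constant) ** 2).mean() * lambda_gp
```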

models.gan.discriminators

define_discriminator

def define_discriminator(input_nc, ndf, netD, n_layers_D=3, norm="batch", init_type="normal", init_gain=0.02, conv_type: str = "standard")

Create a discriminator

Arguments:

  • input_nc (int) - the number of channels in input images
  • ndf (int) - the number of filters in the first conv layer
  • netD (str) - the architecture's name: basic | n_layers | pixel
  • n_layers_D (int) - the number of conv layers in the discriminator; effective when netD == 'n_layers'
  • norm (str) - the type of normalization layers used in the network
  • init_type (str) - the name of the initialization method
  • init_gain (float) - scaling factor for normal, xavier and orthogonal

Returns a discriminator

Our current implementation provides three types of discriminators:

  • [basic] - 'PatchGAN' classifier described in the original pix2pix paper. It can classify whether 70×70 overlapping patches are real or fake. Such a patch-level discriminator architecture has fewer parameters than a full-image discriminator and can work on arbitrarily-sized images in a fully convolutional fashion.

  • [n_layers] - With this mode, you can specify the number of conv layers in the discriminator with the parameter n_layers_D (default=3, as used in [basic] (PatchGAN)).

  • [pixel] - 1x1 PixelGAN discriminator that can classify whether a pixel is real or not. It encourages greater color diversity but has no effect on spatial statistics.

The discriminator is initialized by init_net. It uses Leaky ReLU for non-linearity.

GANLoss Objects

class GANLoss(nn.Module)

Define different GAN objectives.

The GANLoss class abstracts away the need to create the target label tensor that has the same size as the input.

__init__

def __init__(gan_mode, target_real_label=1.0, target_fake_label=0.0)

Initialize the GANLoss class.

Arguments:

  • gan_mode (str) - the type of GAN objective. It currently supports vanilla, lsgan, and wgangp.
  • target_real_label (float) - label for a real image
  • target_fake_label (float) - label for a fake image

  • Note - Do not use sigmoid as the last layer of the discriminator. LSGAN needs no sigmoid; vanilla GANs handle it with BCEWithLogitsLoss.
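
A sketch of how such a class can implement this, expanding a scalar label to the prediction's size and dispatching on the objective:

```python
# GANLoss sketch: the target tensor is built to match the prediction,
# so callers never construct labels themselves.
import torch
import torch.nn as nn

class GANLossSketch(nn.Module):
    def __init__(self, gan_mode, target_real_label=1.0, target_fake_label=0.0):
        super().__init__()
        self.register_buffer("real_label", torch.tensor(target_real_label))
        self.register_buffer("fake_label", torch.tensor(target_fake_label))
        self.gan_mode = gan_mode
        if gan_mode == "vanilla":
            self.loss = nn.BCEWithLogitsLoss()  # supplies the missing sigmoid
        elif gan_mode == "lsgan":
            self.loss = nn.MSELoss()
        elif gan_mode == "wgangp":
            self.loss = None  # wgangp uses the raw critic output directly

    def get_target_tensor(self, prediction, target_is_real):
        target = self.real_label if target_is_real else self.fake_label
        return target.expand_as(prediction)

    def __call__(self, prediction, target_is_real):
        if self.gan_mode == "wgangp":
            return -prediction.mean() if target_is_real else prediction.mean()
        return self.loss(prediction,
                         self.get_target_tensor(prediction, target_is_real))
```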

get_target_tensor

def get_target_tensor(prediction, target_is_real)

Create label tensors with the same size as the input.

Arguments:

  • prediction (tensor) - typically the prediction from a discriminator
  • target_is_real (bool) - whether the ground truth label is for real images or fake images

Returns:

A label tensor filled with the ground truth label, with the same size as the input.

__call__

def __call__(prediction, target_is_real)

Calculate loss given the discriminator's output and ground truth labels.

Arguments:

  • prediction (tensor) - typically the prediction output from a discriminator
  • target_is_real (bool) - whether the ground truth label is for real images or fake images

Returns:

The calculated loss.

NLayerDiscriminator Objects

class NLayerDiscriminator(nn.Module)

Defines a PatchGAN discriminator

__init__

def __init__(input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, conv_type: str = "standard")

Construct a PatchGAN discriminator

Arguments:

  • input_nc (int) - the number of channels in input images
  • ndf (int) - the number of filters in the last conv layer
  • n_layers (int) - the number of conv layers in the discriminator
  • norm_layer - normalization layer

forward

def forward(input)

Standard forward.

PixelDiscriminator Objects

class PixelDiscriminator(nn.Module)

Defines a 1x1 PatchGAN discriminator (pixelGAN)

__init__

def __init__(input_nc, ndf=64, norm_layer=nn.BatchNorm2d, conv_type: str = "standard")

Construct a 1x1 PatchGAN discriminator

Arguments:

  • input_nc (int) - the number of channels in input images
  • ndf (int) - the number of filters in the last conv layer
  • norm_layer - normalization layer

forward

def forward(input)

Standard forward.

CloudGANDiscriminator Objects

class CloudGANDiscriminator(nn.Module)

Defines a discriminator based on https://www.climatechange.ai/papers/icml2021/54/slides.pdf

models.gan.generators

define_generator

def define_generator(input_nc, output_nc, ngf, netG: Union[str, torch.nn.Module], norm="batch", use_dropout=False, init_type="normal", init_gain=0.02)

Create a generator

Arguments:

  • input_nc (int) - the number of channels in input images
  • output_nc (int) - the number of channels in output images
  • ngf (int) - the number of filters in the last conv layer
  • netG (str) - the architecture's name: resnet_9blocks | resnet_6blocks | unet_256 | unet_128
  • norm (str) - the name of normalization layers used in the network: batch | instance | none
  • use_dropout (bool) - whether to use dropout layers
  • init_type (str) - the name of our initialization method
  • init_gain (float) - scaling factor for normal, xavier and orthogonal

Returns a generator

Our current implementation provides two types of generators:

  • U-Net - [unet_128] (for 128x128 input images) and [unet_256] (for 256x256 input images). The original U-Net paper: https://arxiv.org/abs/1505.04597

  • ResNet-based - [resnet_6blocks] (with 6 ResNet blocks) and [resnet_9blocks] (with 9 ResNet blocks). A ResNet-based generator consists of several ResNet blocks between a few downsampling/upsampling operations. We adapt Torch code from Justin Johnson's neural style transfer project (https://github.com/jcjohnson/fast-neural-style).

The generator is initialized by init_net. It uses ReLU for non-linearity.

ResnetGenerator Objects

class ResnetGenerator(nn.Module)

Resnet-based generator that consists of Resnet blocks between a few downsampling/upsampling operations.

We adapt Torch code and ideas from Justin Johnson's neural style transfer project (https://github.com/jcjohnson/fast-neural-style).

__init__

def __init__(input_nc, output_nc, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, n_blocks=6, padding_type="reflect", conv_type: str = "standard")

Construct a Resnet-based generator

Arguments:

  • input_nc (int) - the number of channels in input images
  • output_nc (int) - the number of channels in output images
  • ngf (int) - the number of filters in the last conv layer
  • norm_layer - normalization layer
  • use_dropout (bool) - whether to use dropout layers
  • n_blocks (int) - the number of ResNet blocks
  • padding_type (str) - the name of padding layer in conv layers: reflect | replicate | zero

forward

def forward(input)

Standard forward

ResnetBlock Objects

class ResnetBlock(nn.Module)

Define a Resnet block

__init__

def __init__(dim, padding_type, norm_layer, use_dropout, use_bias, conv_type: str = "standard")

Initialize the Resnet block

A ResNet block is a conv block with skip connections. We construct the conv block with the build_conv_block function and implement the skip connections in the forward function. Original ResNet paper: https://arxiv.org/pdf/1512.03385.pdf
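
The skip connection itself is a one-liner, as in the sketch below; the conv block contents follow the build_conv_block description (conv, normalization, ReLU), while the specific padding and norm choices are assumptions.

```python
# ResNet block sketch: output is the input plus the conv block's output.
import torch.nn as nn

class ResnetBlockSketch(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.conv_block = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(dim, dim, kernel_size=3),
            nn.BatchNorm2d(dim),
            nn.ReLU(True),
            nn.ReflectionPad2d(1),
            nn.Conv2d(dim, dim, kernel_size=3),
            nn.BatchNorm2d(dim),
        )

    def forward(self, x):
        return x + self.conv_block(x)  # skip connection
```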

build_conv_block

def build_conv_block(dim, padding_type, norm_layer, use_dropout, use_bias, conv2d: torch.nn.Module)

Construct a convolutional block.

Arguments:

  • dim (int) - the number of channels in the conv layer
  • padding_type (str) - the name of padding layer: reflect | replicate | zero
  • norm_layer - normalization layer
  • use_dropout (bool) - whether to use dropout layers
  • use_bias (bool) - whether the conv layer uses bias

Returns a conv block (with a conv layer, a normalization layer, and a non-linearity layer (ReLU))

forward

def forward(x)

Forward function (with skip connections)

UnetGenerator Objects

class UnetGenerator(nn.Module)

Create a Unet-based generator

__init__

def __init__(input_nc, output_nc, num_downs, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False, conv_type: str = "standard")

Construct a Unet generator

Arguments:

  • input_nc (int) - the number of channels in input images
  • output_nc (int) - the number of channels in output images
  • num_downs (int) - the number of downsamplings in the U-Net. For example, if num_downs == 7, an image of size 128x128 will become of size 1x1 at the bottleneck
  • ngf (int) - the number of filters in the last conv layer
  • norm_layer - normalization layer

We construct the U-Net from the innermost layer to the outermost layer. It is a recursive process.
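
A sketch of that recursion using the UnetSkipConnectionBlock documented below, mirroring the pix2pix UnetGenerator; the exact filter multipliers are assumptions based on that implementation.

```python
# Recursive U-Net construction sketch: start at the innermost block,
# then repeatedly wrap it until the outermost block is reached.
# Assumes num_downs >= 5 and the UnetSkipConnectionBlock defined below.
def build_unet_sketch(input_nc, output_nc, num_downs, ngf=64):
    # Innermost block (the bottleneck).
    block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, innermost=True)
    # Intermediate blocks keep the filter count constant.
    for _ in range(num_downs - 5):
        block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, submodule=block)
    # Gradually reduce the number of filters towards the outside.
    block = UnetSkipConnectionBlock(ngf * 4, ngf * 8, submodule=block)
    block = UnetSkipConnectionBlock(ngf * 2, ngf * 4, submodule=block)
    block = UnetSkipConnectionBlock(ngf, ngf * 2, submodule=block)
    # Outermost block maps to the requested number of output channels.
    return UnetSkipConnectionBlock(output_nc, ngf, input_nc=input_nc,
                                   submodule=block, outermost=True)
```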

forward

def forward(input)

Standard forward

UnetSkipConnectionBlock Objects

class UnetSkipConnectionBlock(nn.Module)

Defines the Unet submodule with skip connection:

X -------------------identity----------------------
|-- downsampling -- |submodule| -- upsampling --|

__init__

def __init__(outer_nc, inner_nc, input_nc=None, submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False, conv_type: str = "standard")

Construct a Unet submodule with skip connections.

Arguments:

  • outer_nc (int) - the number of filters in the outer conv layer
  • inner_nc (int) - the number of filters in the inner conv layer
  • input_nc (int) - the number of channels in input images/features
  • submodule (UnetSkipConnectionBlock) - previously defined submodules
  • outermost (bool) - if this module is the outermost module
  • innermost (bool) - if this module is the innermost module
  • norm_layer - normalization layer
  • use_dropout (bool) - whether to use dropout layers

models.unet

models.perceiver

models.pixel_cnn

models.pl_metnet

models.pix2pix