flame.pytorch.helpers.optimizer#

Module Contents#

Functions#

scale_lr_linearly(base_lr, batch_size[, world_size, ...])

Attributes#

_logger

flame.pytorch.helpers.optimizer._logger#
flame.pytorch.helpers.optimizer.scale_lr_linearly(base_lr, batch_size, world_size=None, base_batch_size=256)#
Scale base_lr linearly with the batch size (and world_size, when given), relative to base_batch_size.

Parameters:
  • base_lr (float) – learning rate tuned for the reference batch size.

  • batch_size (int) – training batch size.

  • world_size (Optional[int]) – number of distributed processes, if training is distributed.

  • base_batch_size (int) – reference batch size that base_lr corresponds to; defaults to 256.

Return type:

float
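The linear scaling rule multiplies the base learning rate by the ratio of the effective (global) batch size to the reference batch size. A minimal sketch of what this helper presumably computes is shown below; treating `world_size=None` as a single process is an assumption, and the real implementation may instead query the distributed runtime (e.g. `torch.distributed.get_world_size()`):

```python
from typing import Optional


def scale_lr_linearly(
    base_lr: float,
    batch_size: int,
    world_size: Optional[int] = None,
    base_batch_size: int = 256,
) -> float:
    """Scale base_lr by the ratio of the global batch size to base_batch_size."""
    # Assumption: a missing world_size means single-process training.
    if world_size is None:
        world_size = 1
    # Global batch = per-process batch_size * number of processes.
    return base_lr * batch_size * world_size / base_batch_size


# Example: lr tuned as 0.1 at batch size 256; with 4 workers at 128 each,
# the global batch is 512, so the scaled lr doubles.
print(scale_lr_linearly(0.1, 128, world_size=4))
```

With the defaults (`world_size=None`, `base_batch_size=256`), passing `batch_size=256` returns `base_lr` unchanged, which is consistent with 256 being the reference point in the signature.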