Scheduled Optim Finetuning
ScheduledOptimFinetuning
Bases: Optimizer
DEPRECATED: moved to AcousticModule.
A custom optimizer that uses AdamW for optimization and an ExponentialLR schedule for learning rate decay.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `train_config` | `AcousticTrainingConfig` | Training configuration with optimizer and scheduler parameters. | *required* |
| `parameters` | `Iterable` | Iterable of parameters to optimize. | *required* |
| `defaults` | `Dict[str, Any]` | Default optimization options. Defaults to an empty dictionary. | `{}` |
| `step` | `Optional[int]` | The current training step. Defaults to `None`. | `None` |
Source code in notebooks/experiments/optimizer/scheduled_optim_finetuning.py
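To make the shape of this class easier to follow, here is a minimal, hypothetical sketch of an optimizer of this kind: AdamW performs the parameter updates and ExponentialLR decays the learning rate. The class name, hyperparameter values, and the `step` fast-forward loop are assumptions for illustration only; the actual values come from `train_config` in the source file above.

```python
from typing import Any, Dict, Iterable, Optional

import torch
from torch.optim import Optimizer


class ScheduledOptimSketch(Optimizer):
    """Illustrative only: AdamW does the updates, ExponentialLR decays the LR."""

    def __init__(
        self,
        parameters: Iterable,
        lr: float = 2e-4,              # hypothetical defaults; the real values
        betas: tuple = (0.9, 0.98),    # are read from train_config
        gamma: float = 0.999,
        defaults: Optional[Dict[str, Any]] = None,
        step: Optional[int] = None,
    ) -> None:
        super().__init__(parameters, defaults or {})
        self._optimizer = torch.optim.AdamW(self.param_groups, lr=lr, betas=betas)
        self._scheduler = torch.optim.lr_scheduler.ExponentialLR(
            self._optimizer, gamma=gamma
        )
        if step is not None:
            # Fast-forward the decay schedule when resuming mid-training.
            for _ in range(step):
                self._scheduler.step()

    def get_lr(self) -> float:
        # Learning rate of the first parameter group.
        return self._optimizer.param_groups[0]["lr"]

    def step(self, closure=None):
        loss = self._optimizer.step(closure)
        self._scheduler.step()  # exponential decay after every update
        return loss

    def zero_grad(self) -> None:
        self._optimizer.zero_grad()

    def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
        self._optimizer.load_state_dict(state_dict)
```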
get_lr()
Returns the current learning rate.
load_state_dict(state_dict)
Loads the optimizer state.
Args: state_dict (Dict[str, Any]): A dictionary containing the whole state of the optimizer.
Source code in notebooks/experiments/optimizer/scheduled_optim_finetuning.py
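As a usage note, restoring the optimizer state follows the usual PyTorch checkpoint pattern; the checkpoint path and key name below are hypothetical, and the assumption is that the saved state was produced by this optimizer earlier in training.

```python
import torch

# Restore a previously saved optimizer state before resuming finetuning.
# "checkpoint.pt" and the "optimizer" key are placeholder names.
checkpoint = torch.load("checkpoint.pt", map_location="cpu")
optimizer.load_state_dict(checkpoint["optimizer"])
```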
step(closure)
Performs a single optimization step with the wrapped AdamW optimizer and advances the ExponentialLR schedule.
zero_grad()
Clears the gradients of all optimized parameters. As with any PyTorch optimizer, this should be called before the backward pass of each training step.
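To tie the methods together, here is a hedged sketch of a single training step; `model`, `criterion`, and `batch` are placeholder names, and passing a closure to `step()` follows the standard PyTorch optimizer convention rather than anything specific to this source file.

```python
def closure():
    # Standard PyTorch closure: clear gradients, recompute the loss, backprop.
    optimizer.zero_grad()
    loss = criterion(model(batch["inputs"]), batch["targets"])
    loss.backward()
    return loss

loss = optimizer.step(closure)   # AdamW update followed by LR decay
current_lr = optimizer.get_lr()  # e.g. for logging the decayed learning rate
```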