src.auto_trainer

Entry point for running AutoMAX hyperparameter search with a standard LibAUC Trainer (image classification and similar tasks).

CLI usage

python -m src.auto_trainer --config_file config.yaml

Functions

main()

Parse CLI arguments, load the megaconf, build an autopartial trainer, and run optimize().

Steps performed:

  1. Load YAML config and apply CLI overrides via apply_cli_overrides().

  2. Set global random seeds via set_seed().

  3. Load train / eval datasets with libauc.trainer.load_dataset.

  4. Resolve default optimizer and loss configs from parse_defaultconfig.

  5. Build an autopartial TrainingArguments that carries the search distributions.

  6. Construct an AutoMAXConfiguration from the automax: config section.

  7. Build an autopartial Trainer and run optimize().
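The CLI parsing in step 1 can be sketched as follows. The flag set mirrors the table documented under apply_cli_overrides, but the parser construction itself is a hypothetical reconstruction, not the module's actual code:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Hypothetical reconstruction of the auto_trainer CLI parser."""
    parser = argparse.ArgumentParser(prog="python -m src.auto_trainer")
    parser.add_argument("--config_file", required=True, help="Path to the YAML config")
    parser.add_argument("--epochs", type=int)
    parser.add_argument("--batch_size", type=int)
    parser.add_argument("--eval_batch_size", type=int)
    parser.add_argument("--sampling_rate", type=float)
    parser.add_argument("--num_workers", type=int)
    parser.add_argument("--output_path")
    parser.add_argument("--seed", type=int)
    # BooleanOptionalAction (Python 3.9+) generates the paired
    # --resume_from_checkpoint / --no-resume_from_checkpoint flags.
    parser.add_argument("--resume_from_checkpoint", action=argparse.BooleanOptionalAction)
    parser.add_argument("--save_checkpoint_every", type=int)
    return parser

args = build_parser().parse_args(
    ["--config_file", "config.yaml", "--epochs", "10", "--no-resume_from_checkpoint"]
)
```

Flags left unset parse to None, which lets apply_cli_overrides distinguish "not given" from an explicit value.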

apply_cli_overrides(cfg, args)

Merge CLI-supplied values into the megaconf in place.

Parameters:
  • cfg (OmegaConf DictConfig) – Merged megaconf produced by _build_megaconf.

  • args (argparse.Namespace) – Parsed CLI arguments from argparse.

Returns:

The mutated config (same object).

Return type:

OmegaConf DictConfig

The following CLI flags are supported:

  CLI flag                                                  Config key overridden
  --------                                                  ---------------------
  --epochs                                                  training.epochs
  --batch_size                                              training.batch_size
  --eval_batch_size                                         training.eval_batch_size
  --sampling_rate                                           training.sampling_rate
  --num_workers                                             training.num_workers
  --output_path                                             training.output_path
  --seed                                                    training.SEED
  --resume_from_checkpoint / --no-resume_from_checkpoint    training.resume_from_checkpoint
  --seed                                                    training.SEED
  --save_checkpoint_every                                   training.save_checkpoint_every
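The flag-to-key mapping above can be sketched with plain nested dicts standing in for an OmegaConf DictConfig. The FLAG_TO_KEY table and helper below are illustrative assumptions, not the module's actual implementation:

```python
import argparse

# Hypothetical mapping from argparse dest names to dotted config keys,
# mirroring the flag table above.
FLAG_TO_KEY = {
    "epochs": "training.epochs",
    "batch_size": "training.batch_size",
    "eval_batch_size": "training.eval_batch_size",
    "sampling_rate": "training.sampling_rate",
    "num_workers": "training.num_workers",
    "output_path": "training.output_path",
    "seed": "training.SEED",
    "resume_from_checkpoint": "training.resume_from_checkpoint",
    "save_checkpoint_every": "training.save_checkpoint_every",
}

def apply_cli_overrides(cfg: dict, args: argparse.Namespace) -> dict:
    """Merge non-None CLI values into the nested config, in place."""
    for dest, dotted in FLAG_TO_KEY.items():
        value = getattr(args, dest, None)
        if value is None:
            continue  # flag not given on the CLI; keep the config value
        node = cfg
        *parents, leaf = dotted.split(".")
        for key in parents:
            node = node.setdefault(key, {})
        node[leaf] = value
    return cfg

cfg = {"training": {"epochs": 50, "SEED": 42}}
apply_cli_overrides(cfg, argparse.Namespace(epochs=10, seed=7))
# cfg["training"]["epochs"] is now 10, cfg["training"]["SEED"] is 7
```

Returning the same object (rather than a copy) matches the documented in-place semantics.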

set_seed(seed)

Set all relevant random seeds for reproducibility.

Parameters:

seed (int) – Seed value to apply.

Sets the numpy, torch CPU, and torch CUDA seeds, and enables torch.backends.cudnn.deterministic.
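A sketch of what set_seed() does. The torch import is guarded so the snippet runs without PyTorch installed, and the Python-level random.seed call is an addition beyond the behaviour documented above:

```python
import random

try:
    import numpy as np
except ImportError:  # numpy assumed present in practice
    np = None

try:
    import torch
except ImportError:  # guard so the sketch runs without PyTorch
    torch = None

def set_seed(seed: int) -> None:
    """Seed every RNG the training loop may touch."""
    random.seed(seed)                        # Python stdlib RNG (extra, see above)
    if np is not None:
        np.random.seed(seed)                 # numpy global RNG
    if torch is not None:
        torch.manual_seed(seed)              # torch CPU seed
        torch.cuda.manual_seed_all(seed)     # torch CUDA seeds (all devices)
        torch.backends.cudnn.deterministic = True

set_seed(42)
first = random.random()
set_seed(42)
assert random.random() == first  # same seed, same stream
```

Note that cudnn.deterministic trades speed for reproducibility; some cuDNN kernels fall back to slower deterministic variants.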

Megaconf defaults

auto_trainer ships with the following built-in defaults, which are merged under any user-supplied YAML:

training:
  optimizer: PESG
  optimizer_kwargs: {}
  loss: AUCMLoss
  loss_kwargs: {}
  SEED: 42
  batch_size: 128
  eval_batch_size: 128
  sampling_rate: 0.5
  epochs: 50
  decay_epochs: []
  num_workers: 2
  output_path: ./output
  resume_from_checkpoint: true
  save_checkpoint_every: 5
  project_name: libauc
  experiment_name: run_auto
  verbose: 1

automax:
  deterministic: true
  n_trials: 5
  n_configs: 1
  SEED: 42
  name: automax_search
  output_directory: ./automax_output
  overwrite: true

dataset:
  name: ""
  kwargs: {}
  eval_splits: [val]

model:
  name: resnet18
  pretrained: false
  num_classes: 1
  in_channels: 3

metrics: [AUROC]
metric_kwargs: []
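How a user-supplied YAML overrides these defaults can be illustrated with a recursive dict merge. This is a sketch of OmegaConf-style merge semantics using plain dicts, not the module's actual merge code, and the user fragment below is made up:

```python
def deep_merge(defaults: dict, user: dict) -> dict:
    """Return defaults recursively overridden by user values."""
    merged = dict(defaults)
    for key, value in user.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into sections
        else:
            merged[key] = value  # user value wins, including whole lists
    return merged

defaults = {
    "training": {"optimizer": "PESG", "epochs": 50, "batch_size": 128},
    "metrics": ["AUROC"],
}
user_yaml = {"training": {"epochs": 10}, "metrics": ["AUROC", "AP"]}
cfg = deep_merge(defaults, user_yaml)
# user values win; untouched defaults (optimizer, batch_size) survive
```

Lists are replaced wholesale rather than merged element-wise, which is the usual convention for config merging.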