Releases · tanyuqian/redco
Redco release v0.4.23
- Updated an argument in `Trainer.fit()`: `save_every_ckpt` -> `save_ckpt_every_k_epochs`
- Added `params_sharded` and `opt_state_sharded` in `Trainer.__init__()`, for memory saving.
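As a hedged sketch of where these names plug in -- only the argument names come from the release notes; the values and everything elided are assumptions:

```python
# Keyword arguments as they would be passed (values here are illustrative only).
init_kwargs = dict(params_sharded=True, opt_state_sharded=True)  # Trainer.__init__()
fit_kwargs = dict(save_ckpt_every_k_epochs=1)  # Trainer.fit(); was: save_every_ckpt
print(init_kwargs, fit_kwargs)
```

A call would then look like `Trainer(..., **init_kwargs)` followed by `trainer.fit(..., **fit_kwargs)`.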
Redco release v0.4.22
- Simplified argument names for the random key in `loss_fn()` and `pred_fn()`: `train_rng`/`pred_rng` -> `rng`
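A minimal sketch of the unified `rng` argument in a user-defined loss function. Only the `rng` name comes from the notes above; the params/batch layout is assumed for illustration and is not redco's documented interface:

```python
import jax
import jax.numpy as jnp

def loss_fn(rng, params, batch):
    # rng would drive stochastic ops such as dropout; here it only makes noise
    # that is zeroed out, so the loss stays deterministic for the example.
    noise = 0.0 * jax.random.normal(rng, batch["inputs"].shape)
    preds = (batch["inputs"] + noise) @ params["w"]
    return jnp.mean((preds - batch["labels"]) ** 2)

rng = jax.random.PRNGKey(0)
params = {"w": jnp.ones((3, 1))}
batch = {"inputs": jnp.ones((4, 3)), "labels": jnp.zeros((4, 1))}
print(float(loss_fn(rng, params, batch)))  # mean squared error of constant preds
```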
redco-v0.4.21
- Accelerated inference for the multi-host, purely data-parallel case
- Added optional argument `train_step_fn` in `Trainer` for fully customizing every training step, e.g., per-sample gradient noising for data-private training.
- Slight argument name change in `Deployer.get_lr_schedule_fn()`: `warmup_rate` -> `warmup_ratio`
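To make the per-sample gradient noising use case concrete, here is a hedged sketch of the kind of computation a custom `train_step_fn` might contain. This is not redco's actual `train_step_fn` interface (its signature is not documented in these notes); it only shows per-example gradients plus noise in plain JAX:

```python
import jax
import jax.numpy as jnp

def per_example_loss(params, x, y):
    # squared error for a single example
    return (x @ params - y) ** 2

def noisy_grads(params, xs, ys, rng, sigma=0.1):
    # per-example gradients via vmap over grad, then Gaussian noise before averaging
    grads = jax.vmap(jax.grad(per_example_loss), in_axes=(None, 0, 0))(params, xs, ys)
    noise = sigma * jax.random.normal(rng, grads.shape)
    return jnp.mean(grads + noise, axis=0)

params = jnp.zeros(3)
xs = jnp.ones((4, 3))
ys = jnp.ones(4)
g = noisy_grads(params, xs, ys, jax.random.PRNGKey(0), sigma=0.0)
print(g)
```

A real data-private step would also clip each per-example gradient before noising; that is omitted here for brevity.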
redco-v0.4.20
- Updated data example type support -- can be a `list` of whatever types now, e.g., `examples=[str, str, str, ...]` or `examples=[dict, dict, dict, ...]`
- Updated mixed-precision training -- by setting `compute_dtype`, e.g., `Trainer(compute_dtype=jnp.bfloat16)`.
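Illustrating both updates with assumed values (the example contents and shapes below are made up for illustration):

```python
import jax.numpy as jnp

# 1) `examples` may now be a plain list of any per-example type:
examples_str = ["a sentence", "another sentence", "a third sentence"]
examples_dict = [{"src": "hello", "tgt": "bonjour"},
                 {"src": "world", "tgt": "monde"}]

# 2) The dtype passed as Trainer(compute_dtype=...) is an ordinary JAX dtype;
# bfloat16 stores 2 bytes per element vs. 4 for float32.
x = jnp.ones((2, 3), dtype=jnp.bfloat16)
print(x.dtype)  # bfloat16
```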
Redco release v0.4.19
- Accelerated multi-host running
- Updated WandB login, e.g., `redco.Deployer(wandb_init_kwargs={'project': '...', 'name': '...'})`
- Updated customization of `params_sharding_rules`
Redco release v0.4.18
Simplified checkpoint loading.
Redco release v0.4.17
- Faster ckpt saving and loading
- Text-to-image example updated with model parallelism for StableDiffusion
Redco release v0.4.16
Redco version 0.4.16.
Redco release v0.4.15
Redco version 0.4.15. Updated ckpt saving & loading.
Redco release v0.4.14
Redco version 0.4.14. Updated SLURM settings. See language_modeling or text_to_text for detailed use cases.