Releases: tanyuqian/redco

Redco release v0.4.23

20 Oct 21:06
  • Renamed an argument in Trainer.fit(): save_every_ckpt -> save_ckpt_every_k_epochs
  • Added params_sharded and opt_state_sharded to Trainer.__init__() to save memory (see the sketch below)
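A minimal sketch of how the two changes are used together, assuming the new sharding arguments are boolean flags; the elided Trainer arguments and the fit() arguments other than save_ckpt_every_k_epochs follow the repository's usual examples and are assumptions here, not part of this note:

```python
from redco import Deployer, Trainer

deployer = Deployer(jax_seed=0)  # assumed setup

trainer = Trainer(
    deployer=deployer,
    # ... collate_fn / apply_fn / loss_fn / params / optimizer as before ...
    params_sharded=True,       # new: keep params sharded across devices to save memory
    opt_state_sharded=True)    # new: keep the optimizer state sharded as well

trainer.fit(
    train_examples=train_examples,   # assumed: your dataset
    per_device_batch_size=8,         # assumed argument
    n_epochs=10,                     # assumed argument
    save_ckpt_every_k_epochs=2)      # renamed from save_every_ckpt
```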

Redco release v0.4.22

21 Aug 17:28
  • Simplified argument names for the random key in loss_fn() and pred_fn():
    • train_rng/pred_rng -> rng
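For illustration, a sketch of the new signature; the remaining parameters (state, params, batch, is_training) and the toy regression body follow the pattern of the repository's examples and are assumptions, not part of this note:

```python
import jax.numpy as jnp

# Previously the key argument was named train_rng (in loss_fn) / pred_rng (in pred_fn);
# it is now simply rng in both.
def loss_fn(rng, state, params, batch, is_training):
    rngs = {'dropout': rng} if is_training else None
    preds = state.apply_fn({'params': params}, batch['inputs'], rngs=rngs)
    return jnp.mean((preds - batch['labels']) ** 2)
```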

redco-v0.4.21

19 Aug 16:37
  • Accelerated inference in the multi-host, purely data-parallel case
  • Added an optional train_step_fn argument to Trainer for fully customizing each training step, e.g., per-sample gradient noising for differentially private training (see the sketch below)
  • Renamed an argument in Deployer.get_lr_schedule_fn(): warmup_rate -> warmup_ratio
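A sketch of where the two options plug in; apart from warmup_ratio and train_step_fn themselves, the argument names and the custom step function are assumptions used only for illustration:

```python
lr_schedule_fn = deployer.get_lr_schedule_fn(
    learning_rate=2e-5,        # assumed argument
    total_train_steps=10_000,  # assumed argument
    warmup_ratio=0.1)          # renamed from warmup_rate

trainer = Trainer(
    deployer=deployer,
    # ... usual Trainer arguments ...
    lr_schedule_fn=lr_schedule_fn,       # assumed wiring
    train_step_fn=noised_train_step_fn)  # hypothetical custom step, e.g., per-sample gradient noising
```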

redco-v0.4.20

22 Jul 01:42
  • Updated data example type support -- examples can now be a list of any per-example type, e.g., examples=[str, str, str, ...] or examples=[dict, dict, dict, ...]
  • Mixed-precision training can now be enabled by setting compute_dtype, e.g., Trainer(compute_dtype=jnp.bfloat16) (see the example below)
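For example (the string/dict examples and the collate_fn responsibility are assumptions consistent with the note; the other Trainer arguments are elided):

```python
import jax.numpy as jnp
from redco import Trainer

# Examples can now be a plain Python list of any per-example type,
# e.g., raw strings (your collate_fn turns them into batches) ...
train_examples = ['first sentence', 'second sentence', 'third sentence']
# ... or dicts:
# train_examples = [{'src': 'hello', 'tgt': 'bonjour'}, ...]

trainer = Trainer(
    # ... other Trainer arguments ...
    compute_dtype=jnp.bfloat16)  # enables mixed-precision training
```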

Redco release v0.4.19

30 Jun 07:17
  • Accelerated multi-host execution
  • Updated WandB setup, e.g., redco.Deployer(wandb_init_kwargs={'project': '...', 'name': '...'})
  • Updated customization of params_sharding_rules
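For the WandB change, a sketch (jax_seed and the placeholder project/run names are assumptions, and wandb_init_kwargs is presumably forwarded to wandb.init(); the params_sharding_rules customization is not sketched because the note does not detail its format):

```python
import redco

deployer = redco.Deployer(
    jax_seed=0,  # assumed argument
    wandb_init_kwargs={'project': 'my-project', 'name': 'my-run'})  # placeholder project/run names
```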

Redco release v0.4.18

24 Jun 04:41

Simplified checkpoint loading.

Redco release v0.4.17

20 Jun 04:32
  • Faster checkpoint saving and loading
  • Text-to-image example updated with model parallelism for StableDiffusion

Redco release v0.4.16

20 Apr 04:30

Redco version 0.4.16.

Redco release v0.4.15

05 Jan 06:06

Redco version 0.4.15. Updated checkpoint saving and loading.

Redco release v0.4.14

07 Dec 06:16

Redco version 0.4.14. Updated SLURM settings. See the language_modeling or text_to_text examples for detailed use cases.