Issue #3 · Closed · Created Mar 13, 2022 by Administrator (@root, Owner)

Dynamic loss scaler does not fully checkpoint state, causing path dependency with respect to restarts

Created by: suchenzang

The dynamic loss scaler has `_iter` and `_last_overflow_iter` attributes which are not checkpointed: https://github.com/fairinternal/fairseq-py/blob/gshard_combine_megatron_fsdp/fairseq/optim/dynamic_loss_scaler.py#L32

As a result, the loss scale trajectory changes as a function of when we checkpoint and when we resume.
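
A minimal sketch of why this matters, assuming the usual dynamic-loss-scaler update rule (the counter names mirror the attributes in the linked file; the class itself is simplified for illustration and is not the actual fairseq implementation):

```python
class DynamicLossScaler:
    """Simplified dynamic loss scaler; only the parts relevant to this issue."""

    def __init__(self, init_scale=2.0 ** 15, scale_factor=2.0, scale_window=2000):
        self.loss_scale = init_scale       # restored from checkpoints today
        self.scale_factor = scale_factor
        self.scale_window = scale_window
        self._iter = 0                     # NOT restored: resets to 0 on resume
        self._last_overflow_iter = -1      # NOT restored: resets to -1 on resume

    def update(self, overflow: bool) -> None:
        if overflow:
            # Back off and remember when we last overflowed.
            self.loss_scale /= self.scale_factor
            self._last_overflow_iter = self._iter
        elif (self._iter - self._last_overflow_iter) % self.scale_window == 0:
            # Grow the scale after scale_window consecutive clean steps.
            self.loss_scale *= self.scale_factor
        self._iter += 1
```

Because `_iter` and `_last_overflow_iter` restart from their defaults on every resume, the `scale_window` countdown restarts too, so the step at which the scale next grows (and hence the whole scale trajectory) depends on where the restarts happened.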

We should add a flag that enables checkpointing this state for reproducibility, while still allowing the state to be forgotten (reset) if need be.
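
A minimal sketch of what the fix could look like, extending the simplified class above and assuming the scaler's `state_dict()`/`load_state_dict()` get folded into the training checkpoint; the `checkpoint_scaler_state` flag name is hypothetical:

```python
class CheckpointableDynamicLossScaler(DynamicLossScaler):
    """Extends the sketch above with optional counter checkpointing."""

    def __init__(self, *args, checkpoint_scaler_state=False, **kwargs):
        super().__init__(*args, **kwargs)
        self.checkpoint_scaler_state = checkpoint_scaler_state  # hypothetical flag

    def state_dict(self):
        sd = {"loss_scale": self.loss_scale}
        if self.checkpoint_scaler_state:
            sd["_iter"] = self._iter
            sd["_last_overflow_iter"] = self._last_overflow_iter
        return sd

    def load_state_dict(self, sd):
        self.loss_scale = sd["loss_scale"]
        if self.checkpoint_scaler_state and "_iter" in sd:
            # Reproducible restarts: resume the overflow counters exactly.
            self._iter = sd["_iter"]
            self._last_overflow_iter = sd["_last_overflow_iter"]
        # With the flag off (or an older checkpoint), the counters keep
        # their reset defaults, preserving the current "forget" behavior.
```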
