metaseq · Issue #233
Issue created Jul 19, 2022

Convert to singleton script fails for 1.3B checkpoint

Created by: punitkoura

🐛 Bug

The convert_to_singleton.py script fails with a mixed-dtype AssertionError when run on the sharded 1.3B checkpoint.

To Reproduce

ls 1.3b/
dict.txt  gpt2-merges.txt  gpt2-vocab.json  reshard-model_part-0.pt  reshard-model_part-1.pt
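
As a quick check on the inputs, the shards can be inspected directly to see which dtypes they store. This is a hypothetical diagnostic, not part of the script, and the top-level "model" key is an assumption about how the reshard files are laid out:

import torch
from collections import Counter

# Hypothetical diagnostic: count parameter dtypes in one shard.
# The "model" key is an assumption; fall back to the raw dict if it is absent.
shard = torch.load("1.3b/reshard-model_part-0.pt", map_location="cpu")
state = shard.get("model", shard)
print(Counter(t.dtype for t in state.values() if torch.is_tensor(t)))

Running the conversion script against this directory then fails while building the model: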
Loading extension module fused_mix_prec_layer_norm_cuda...
2022-07-19 03:15:10 | INFO | metaseq.modules.fused_bias_gelu | Done with compiling and loading fused kernels.
Traceback (most recent call last):
  File "/shared/home/punitkoura/miniconda3/envs/fairseq-20220503/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/shared/home/punitkoura/miniconda3/envs/fairseq-20220503/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/shared/home/punitkoura/src/metaseq/metaseq/scripts/convert_to_singleton.py", line 168, in <module>
    main()
  File "/shared/home/punitkoura/src/metaseq/metaseq/scripts/convert_to_singleton.py", line 164, in main
    dist_utils.call_main(cfg, worker_main)
  File "/shared/home/punitkoura/src/metaseq/metaseq/distributed/utils.py", line 263, in call_main
    return _spawn_helper(main, cfg, kwargs)
  File "/shared/home/punitkoura/src/metaseq/metaseq/distributed/utils.py", line 241, in _spawn_helper
    retval = distributed_main(-1, main, cfg, kwargs)
  File "/shared/home/punitkoura/src/metaseq/metaseq/distributed/utils.py", line 203, in distributed_main
    main(cfg, **kwargs)
  File "/shared/home/punitkoura/src/metaseq/metaseq/scripts/convert_to_singleton.py", line 113, in worker_main
    models, _model_args, _task = checkpoint_utils.load_model_ensemble_and_task(
  File "/shared/home/punitkoura/src/metaseq/metaseq/checkpoint_utils.py", line 582, in load_model_ensemble_and_task
    model = build_model_hook(cfg, task)
  File "/shared/home/punitkoura/src/metaseq/metaseq/scripts/convert_to_singleton.py", line 106, in _build_model
    model = task.build_model(cfg.model).half().cuda()
  File "/shared/home/punitkoura/src/metaseq/metaseq/tasks/base_task.py", line 551, in build_model
    model = models.build_model(args, self)
  File "/shared/home/punitkoura/src/metaseq/metaseq/models/__init__.py", line 89, in build_model
    return model.build_model(cfg, task)
  File "/shared/home/punitkoura/src/metaseq/metaseq/model_parallel/models/transformer_lm.py", line 55, in build_model
    decoder = ModelParallelTransformerDecoder(
  File "/shared/home/punitkoura/src/metaseq/metaseq/models/transformer.py", line 395, in __init__
    self.build_decoder_layer(
  File "/shared/home/punitkoura/src/metaseq/metaseq/models/transformer.py", line 542, in build_decoder_layer
    layer = fsdp_wrap(
  File "/shared/home/punitkoura/src/metaseq/metaseq/distributed/fully_sharded_data_parallel.py", line 145, in fsdp_wrap
    return wrap(module, **kwargs)
  File "/shared/home/punitkoura/src/fairscale/fairscale/nn/wrap/auto_wrap.py", line 187, in wrap
    return ConfigAutoWrap.wrapper_cls(module, **wrap_overrides)
  File "/shared/home/punitkoura/src/metaseq/metaseq/distributed/fully_sharded_data_parallel.py", line 48, in __init__
    super().__init__(*args, **kwargs)
  File "/shared/home/punitkoura/src/fairscale/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 459, in __init__
    self._fsdp_wrapped_module: nn.Module = FlattenParamsWrapper(
  File "/shared/home/punitkoura/src/fairscale/fairscale/nn/misc/flatten_params_wrapper.py", line 222, in __init__
    params, param_infos, shared_param_infos = self._init_flatten_params(new_p_set)
  File "/shared/home/punitkoura/src/fairscale/fairscale/nn/misc/flatten_params_wrapper.py", line 301, in _init_flatten_params
    assert (
AssertionError: expects all parameters to have same dtype: fp32: _fpw_module.self_attn_layer_norm,_fpw_module.self_attn_layer_norm,_fpw_module.final_layer_norm,_fpw_module.final_layer_norm 
 fp16: _fpw_module.self_attn.qkv_proj,_fpw_module.self_attn.qkv_proj,_fpw_module.self_attn.out_proj,_fpw_module.self_attn.out_proj,_fpw_module.fc1,_fpw_module.fc1,_fpw_module.fc2,_fpw_module.fc2 
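
For context on the final assertion: fairscale's FlattenParamsWrapper flattens every parameter of the wrapped decoder layer into one contiguous buffer, so it requires them all to share a single dtype. Here the layer is built with its attention and feed-forward projections already in fp16 while the two layer norms are still fp32, and fsdp_wrap refuses to flatten the mix. The toy module below is plain PyTorch, not metaseq code; the attribute names merely mirror the error message, and it reproduces the same condition and the equivalent check:

import torch.nn as nn

# Toy stand-in for a decoder layer; the names mirror the error message above.
class ToyDecoderLayer(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.self_attn_layer_norm = nn.LayerNorm(dim)  # stays in fp32
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)

layer = ToyDecoderLayer()
layer.fc1.half()  # only the linear projections are converted to fp16
layer.fc2.half()

print({name: p.dtype for name, p in layer.named_parameters()})

# FlattenParamsWrapper performs an equivalent check, so this assert fails,
# mirroring the AssertionError in the traceback above.
assert len({p.dtype for p in layer.parameters()}) == 1, \
    "expects all parameters to have same dtype"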

Expected behavior

The script should produce a single consolidated (unsharded) 1.3B checkpoint.
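
For reference, once the script succeeds the consolidated checkpoint should load on a single device and contain the full ~1.3B parameters. The sketch below assumes the output file is named restored.pt and that the state dict sits under a "model" key; both are assumptions about the script's output layout, not confirmed here.

import torch

# Hypothetical sanity check for the consolidated checkpoint.
# "restored.pt" and the "model" key are assumptions, not confirmed output names.
ckpt = torch.load("1.3b/restored.pt", map_location="cpu")
state = ckpt.get("model", ckpt)
n_params = sum(t.numel() for t in state.values() if torch.is_tensor(t))
print(f"{len(state)} tensors, {n_params / 1e9:.2f}B parameters")  # expect ~1.3B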

Environment

  • metaseq Version : main
  • PyTorch Version (e.g., 1.0) : 10
  • OS (e.g., Linux, Windows, MacOS):
  • How you installed metaseq (pip, source):
  • Build command you used (if compiling from source):
  • Python version:
  • CUDA/cuDNN version:
  • GPU models and configuration:
  • Any other relevant information:

Additional context
