metaseq · Issue #552 (Closed) · created Dec 16, 2022

`hub_utils.py` assumes a different sharding convention

Created by: EIFY

🐛 Bug

`hub_utils.py` at baa4e6d8 (current main) assumes a different sharding convention from the one used by the sharded OPT checkpoints on disk.

To Reproduce

  1. Set up everything as before:
$ ls /home/jason_chou/redspot_home/66b/
dict.txt         gpt2-vocab.json          reshard-model_part-1.pt  reshard-model_part-3.pt  reshard-model_part-5.pt  reshard-model_part-7.pt
gpt2-merges.txt  reshard-model_part-0.pt  reshard-model_part-2.pt  reshard-model_part-4.pt  reshard-model_part-6.pt  restored.pt
$
$ cat metaseq/service/constants.py
# Copyright (c) Meta Platforms, Inc. and affiliates. All Rights Reserved.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.

import os

MAX_SEQ_LEN = 2048
BATCH_SIZE = 2048  # silly high bc we dynamically batch by MAX_BATCH_TOKENS
MAX_BATCH_TOKENS = 3072
DEFAULT_PORT = 6010
MODEL_PARALLEL = 8
TOTAL_WORLD_SIZE = 8
MAX_BEAM = 16

try:
    # internal logic denoting where checkpoints are in meta infrastructure
    from metaseq_internal.constants import CHECKPOINT_FOLDER
except ImportError:
    # CHECKPOINT_FOLDER should point to a shared drive (e.g. NFS) where the
    # checkpoints from S3 are stored. As an example:
    # CHECKPOINT_FOLDER = "/example/175B/reshard_no_os"
    # $ ls /example/175B/reshard_no_os
    # reshard-model_part-0.pt
    # reshard-model_part-1.pt
    # reshard-model_part-2.pt
    # reshard-model_part-3.pt
    # reshard-model_part-4.pt
    # reshard-model_part-5.pt
    # reshard-model_part-6.pt
    # reshard-model_part-7.pt
    CHECKPOINT_FOLDER = "/home/jason_chou/redspot_home/66b/"

# tokenizer files
BPE_MERGES = os.path.join(CHECKPOINT_FOLDER, "gpt2-merges.txt")
BPE_VOCAB = os.path.join(CHECKPOINT_FOLDER, "gpt2-vocab.json")
MODEL_FILE = os.path.join(CHECKPOINT_FOLDER, "reshard.pt")


LAUNCH_ARGS = [
    f"--model-parallel-size {MODEL_PARALLEL}",
    f"--distributed-world-size {TOTAL_WORLD_SIZE}",
    # If using FSDP shards, replace ddp-backend and add use-sharded-state
    "--ddp-backend fully_sharded",
    "--use-sharded-state",
    "--task language_modeling",
    f"--bpe-merges {BPE_MERGES}",
    f"--bpe-vocab {BPE_VOCAB}",
    "--bpe hf_byte_bpe",
    f"--merges-filename {BPE_MERGES}",  # TODO(susanz): hack for getting interactive_hosted working on public repo
    f"--vocab-filename {BPE_VOCAB}",  # TODO(susanz): hack for getting interactive_hosted working on public repo
    f"--path {MODEL_FILE}",
    "--beam 1",
    "--checkpoint-shard-count 1",
    f"--batch-size {BATCH_SIZE}",
    f"--buffer-size {BATCH_SIZE * MAX_SEQ_LEN}",
    f"--max-tokens {BATCH_SIZE * MAX_SEQ_LEN}",
    "/tmp",  # required "data" argument.
]

# Optional arg overrides which influence model loading during inference
INFERENCE_ARG_OVERRIDES = {}
  2. Run metaseq-api-local
  3. See error
(...)
FileNotFoundError: [Errno 2] No such file or directory: '/home/jason_chou/redspot_home/66b/reshard.pt'
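
For context, a quick way to confirm which convention the checkpoint folder actually follows (throwaway sketch; the folder path is the one from the listing above):

from glob import glob

folder = "/home/jason_chou/redspot_home/66b/"
# One file per model-parallel rank, as in the OPT download convention:
print(glob(folder + "reshard-model_part-*.pt"))  # 8 files, parts 0-7
# The consolidated file hub_utils.py falls back to opening:
print(glob(folder + "reshard.pt"))               # [] -> FileNotFoundError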

Taking a closer look at the current hub_utils.py (https://github.com/facebookresearch/metaseq/blob/baa4e6d840042404e51a60efbe9d65ad62c80fca/metaseq/hub_utils.py#L121-L125), it seems to assume a different sharding convention from the one described in the constants.py comment and produced by the OPT download path. Since the check never matches the reshard-model_part-{r}.pt naming, the per-rank suffix stays empty and loading falls back to the bare reshard.pt, hence the FileNotFoundError above. Indeed, changing the above to the following gets around it:

        if len(sharded_files) > 0 and "reshard" in sharded_files[0]:
            # We are loading a sharded checkpoint
            suffix = f"-model_part-{r}"
        else:
            suffix += ""

With that, metaseq-api-local works.
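
For reference, this is why the fix resolves to files that exist: the per-rank suffix is spliced in front of the .pt extension, so MODEL_FILE maps onto the shard files on disk (illustrative sketch assuming the fairseq-style suffix splice; not metaseq code):

import os

CHECKPOINT_FOLDER = "/home/jason_chou/redspot_home/66b/"
MODEL_FILE = os.path.join(CHECKPOINT_FOLDER, "reshard.pt")

for r in range(8):  # MODEL_PARALLEL = 8 ranks
    suffix = f"-model_part-{r}"
    # reshard.pt -> reshard-model_part-0.pt, ..., reshard-model_part-7.pt
    rank_path = MODEL_FILE.replace(".pt", suffix + ".pt")
    assert os.path.exists(rank_path), rank_path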

Expected behavior

metaseq-api-local just works...?
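
One way to get there regardless of which convention a checkpoint folder uses (hypothetical sketch, not a tested patch) would be to pick the suffix by inspecting the files next to MODEL_FILE instead of hard-coding one naming scheme:

import os
from glob import glob

def rank_suffix(model_file: str, r: int) -> str:
    # Hypothetical helper: choose the per-rank suffix based on which
    # shard files actually exist alongside the checkpoint path.
    base = model_file[: -len(".pt")]
    if glob(f"{base}-model_part-*.pt"):  # convention used by the OPT downloads
        return f"-model_part-{r}"
    return ""  # consolidated single-file checkpoint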

Environment

  • metaseq Version: baa4e6d8 (current main)
  • PyTorch Version: 1.12.1+cu113
  • OS: Ubuntu 18.04.6 LTS
  • How you installed metaseq: pip
  • Build command you used (if compiling from source): N.A.
  • Python version: 3.10
  • CUDA/cuDNN version: CUDA 12.0
  • GPU models and configuration: 8 x V100 SXM2 32 GB