MindFormers single-machine multi-card DeepSeek inference
The worker processes launched by the command all run on the first card instead of being distributed across the four cards. Is this because the multi-card setup spans different machines? Asking for help. Environment: ModelArts compute cluster. The sh launcher and the inference script are included below.
Problem description
Command executed:
bash /home/ma-user/work/mindarmour/examples/model_protection/deepseekv3/infer/msrun_launcher.sh "/home/ma-user/work/mindarmour/examples/model_protection/deepseekv3/infer/run_deepseekv3_predict.py" 4
All launched worker processes run on the first card only and are not distributed across the four cards. Is this because the multi-card setup spans different machines? Asking for help.
Environment: ModelArts compute cluster.
Contents of the sh script:
WORKER_NUM=8
LOCAL_WORKER=8
MASTER_ADDR="127.0.0.1"
MASTER_PORT=8118
NODE_RANK=0
LOG_DIR="output/msrun_log"
JOIN="False"
CLUSTER_TIME_OUT=7200
# Set PYTHONPATH
MF_SCRIPTS_ROOT=$(realpath "$(dirname "$0")")
export PYTHONPATH=$MF_SCRIPTS_ROOT/../:$PYTHONPATH
# Set the log suffix
if [ -z "${MF_LOG_SUFFIX+x}" ] || [ "$MF_LOG_SUFFIX" == "" ]
then
MF_LOG_SUFFIX=""
else
MF_LOG_SUFFIX=_$MF_LOG_SUFFIX
fi
# Get the workspace path
WORKSPACE_PATH=$(pwd)
# Add the suffix to the MF_LOG
if [ -z "${LOG_MF_PATH+x}" ] || [ "$LOG_MF_PATH" == "" ]
then
export LOG_MF_PATH=$WORKSPACE_PATH/output/log$MF_LOG_SUFFIX
fi
# Set the PLOG path
if [ -z "${PLOG_REDIRECT_TO_OUTPUT+x}" ] || [ "$PLOG_REDIRECT_TO_OUTPUT" == False ]
then
echo "Plog path unchanged; plog stays under /root/ascend"
else
export ASCEND_PROCESS_LOG_PATH=$WORKSPACE_PATH/output/plog$MF_LOG_SUFFIX
echo "PLOG_REDIRECT_TO_OUTPUT=$PLOG_REDIRECT_TO_OUTPUT, set the path of plog to $ASCEND_PROCESS_LOG_PATH"
fi
if [ $# != 1 ] && [ $# != 2 ] && [ $# != 6 ] && [ $# != 9 ]
then
echo "Usage Help: bash msrun_launcher.sh [EXECUTE_ORDER] For Default 8 Devices In Single Machine"
echo "Usage Help: bash msrun_launcher.sh [EXECUTE_ORDER] [WORKER_NUM] For Quick Start On Multiple Devices In Single Machine"
echo "Usage Help: bash msrun_launcher.sh [EXECUTE_ORDER] [WORKER_NUM] [MASTER_PORT] [LOG_DIR] [JOIN] [CLUSTER_TIME_OUT] For Multiple Devices In Single Machine"
echo "Usage Help: bash msrun_launcher.sh [EXECUTE_ORDER] [WORKER_NUM] [LOCAL_WORKER] [MASTER_ADDR] [MASTER_PORT] [NODE_RANK] [LOG_DIR] [JOIN] [CLUSTER_TIME_OUT] For Multiple Devices In Multiple Machines"
exit 1
fi
# Start Without Parameters For 8 Devices On Single Machine
if [ $# == 1 ]
then
echo "No extra parameter was entered. Note that the program will run on the default 8 cards."
SINGLE_NODE=true
else
WORKER_NUM=$2
fi
# Check WORKER_NUM
if [[ ! $WORKER_NUM =~ ^[0-9]+$ ]]; then
echo "error: worker_num=$WORKER_NUM is not a number"
exit 1
fi
# Quick Start For Multiple Devices On Single Machine
if [ $# == 2 ]
then
LOCAL_WORKER=$WORKER_NUM
SINGLE_NODE=true
fi
# Multiple Devices On Single Machine
if [ $# == 6 ]
then
LOCAL_WORKER=$WORKER_NUM
MASTER_PORT=$3
LOG_DIR=$4
JOIN=$5
CLUSTER_TIME_OUT=$6
SINGLE_NODE=true
fi
# Multiple Devices On Multiple Machines
if [ $# == 9 ]
then
LOCAL_WORKER=$3
MASTER_ADDR=$4
MASTER_PORT=$5
NODE_RANK=$6
LOG_DIR=$7
JOIN=$8
CLUSTER_TIME_OUT=$9
if [ $WORKER_NUM == $LOCAL_WORKER ]
then
echo "worker_num equals local_worker; note that the task will run on a single node."
SINGLE_NODE=true
else
echo "worker_num=$WORKER_NUM, local_worker=$LOCAL_WORKER, \
Please run this script on other nodes with different node_rank."
SINGLE_NODE=false
fi
fi
# Add the suffix to the msrun_log
LOG_DIR=${LOG_DIR}${MF_LOG_SUFFIX}
# Init msrun Command
if [ $SINGLE_NODE == true ]
then
MSRUN_CMD="msrun --bind_core=True \
--worker_num=$WORKER_NUM \
--local_worker_num=$LOCAL_WORKER \
--master_port=$MASTER_PORT \
--log_dir=$LOG_DIR \
--join=$JOIN \
--cluster_time_out=$CLUSTER_TIME_OUT"
else
MSRUN_CMD="msrun --bind_core=True \
--worker_num=$WORKER_NUM \
--local_worker_num=$LOCAL_WORKER \
--master_addr=$MASTER_ADDR \
--master_port=$MASTER_PORT \
--node_rank=$NODE_RANK \
--log_dir=$LOG_DIR \
--join=$JOIN \
--cluster_time_out=$CLUSTER_TIME_OUT"
fi
if [ $WORKER_NUM == 1 ]
then
echo "Use python directly instead of msrun when running a single rank"
exit 0
fi
EXECUTE_ORDER="$MSRUN_CMD $1"
ulimit -u unlimited
echo "Running Command: $EXECUTE_ORDER"
echo "Please check log files in ${WORKSPACE_PATH}/${LOG_DIR}"
eval $EXECUTE_ORDER
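The command in question passes two arguments (the Python script path and 4), so the launcher takes the 2-argument single-node quick-start branch. A minimal sketch of the launcher's argument-count dispatch (this mirrors the case analysis in the script above; it is an illustration, not the script itself):

```shell
#!/bin/sh
# Sketch of msrun_launcher.sh's dispatch on argument count ($#).
branch_for_argc() {
  case "$1" in
    1) echo "single node, default 8 devices" ;;
    2) echo "single node quick start" ;;
    6) echo "single node, full options" ;;
    9) echo "multi node" ;;
    *) echo "usage error" ;;
  esac
}
# The questioner's call supplies the script plus "4": two arguments,
# so the launcher stays on the single-node quick-start branch.
branch_for_argc 2
```

Only the 9-argument form reaches the multi-node branch; every shorter form sets SINGLE_NODE=true.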
Inference script:
# Imports as used by the MindFormers deepseek3 example; exact module
# paths may vary between MindFormers versions.
import argparse

import mindspore as ms
from mindspore import Model, Tensor
from mindspore.common import initializer
from mindformers import MindFormerConfig, logger
from mindformers.core.context import build_context
from mindformers.core.parallel_config import build_parallel_config
from mindformers.models.llama import LlamaTokenizerFast
from mindformers.trainer.utils import transform_and_load_checkpoint
from research.deepseek3.deepseek3_config import DeepseekV3Config
from research.deepseek3.deepseek3_model_infer import InferenceDeepseekV3ForCausalLM


def run_predict(args):
    """Deepseek-V3/R1 predict"""
    # inputs
    input_questions = [args.input]

    # set model config
    yaml_file = args.config
    config = MindFormerConfig(yaml_file)
    build_context(config)
    build_parallel_config(config)
    model_config = config.model.model_config
    model_config.parallel_config = config.parallel_config
    model_config.moe_config = config.moe_config
    model_config = DeepseekV3Config(**model_config)

    # build tokenizer
    tokenizer = LlamaTokenizerFast(config.processor.tokenizer.vocab_file,
                                   config.processor.tokenizer.tokenizer_file,
                                   unk_token=config.processor.tokenizer.unk_token,
                                   bos_token=config.processor.tokenizer.bos_token,
                                   eos_token=config.processor.tokenizer.eos_token,
                                   fast_tokenizer=True)
    tokenizer.pad_token = tokenizer.eos_token

    # build model from config
    network = InferenceDeepseekV3ForCausalLM(model_config)
    ms_model = Model(network)
    if config.load_checkpoint:
        logger.info("----------------Transform and load checkpoint----------------")
        seq_length = model_config.seq_length
        input_ids = Tensor(shape=(model_config.batch_size, seq_length), dtype=ms.int32, init=initializer.One())
        infer_data = network.prepare_inputs_for_predict_layout(input_ids)
        transform_and_load_checkpoint(config, ms_model, network, infer_data, do_predict=True)

    inputs = tokenizer(input_questions, max_length=64, padding="max_length")["input_ids"]
    outputs = network.generate(inputs,
                               max_length=1024,
                               do_sample=False,
                               top_k=5,
                               top_p=1,
                               max_new_tokens=128)
    answer = tokenizer.decode(outputs)
    print("answer: ", answer)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--config',
        type=str,
        required=False,
        help='YAML config file, e.g. '
             '/home/ma-user/work/vllm-mindspore/install_depend_pkgs/mindformers-br_infer_boom/research/deepseek3/deepseek3_671b/predict_deepseek3_671b.yaml',
        default="/home/ma-user/work/vllm-mindspore/install_depend_pkgs/mindformers-br_infer_boom/research/deepseek3/deepseek3_671b/predict_deepseek3_671b.yaml")
    parser.add_argument(
        '--input',
        type=str,
        default="What is the difference between light soy sauce and dark soy sauce?")
    args_ = parser.parse_args()
    run_predict(args_)
Answer
Multi-card runs across different machines require additional parameters. The command above passes only two arguments (the script and 4), so the launcher takes the single-node quick-start branch; the multi-machine branch is only entered with the full 9-argument form, which supplies:
# Multiple Devices On Multiple Machine
LOCAL_WORKER=$3
MASTER_ADDR=$4
MASTER_PORT=$5
NODE_RANK=$6
LOG_DIR=$7
JOIN=$8
CLUSTER_TIME_OUT=$9
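For illustration, here is a sketch of how those nine positional arguments feed the multi-node msrun command. The IP address, port, and card counts below are hypothetical placeholders; adapt them to your cluster:

```shell
#!/bin/sh
# Assemble the multi-node msrun command from the 9-argument form
# (sketch only; values are placeholders, not a real cluster config).
build_msrun_cmd() {
  # $1=WORKER_NUM $2=LOCAL_WORKER $3=MASTER_ADDR $4=MASTER_PORT $5=NODE_RANK
  echo "msrun --bind_core=True --worker_num=$1 --local_worker_num=$2 --master_addr=$3 --master_port=$4 --node_rank=$5"
}
# Two nodes x 4 cards each: total worker_num=8, local_worker_num=4,
# same master address/port on both nodes, different NODE_RANK per node.
build_msrun_cmd 8 4 192.168.1.10 8118 0   # run on node 0 (the master)
build_msrun_cmd 8 4 192.168.1.10 8118 1   # run on node 1
```

Each machine runs the same launcher with the same WORKER_NUM and MASTER_ADDR but its own NODE_RANK; the workers rendezvous at the master address to form the cluster.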