diff --git a/README.md b/README.md
index 1db9099387b267161ecfe88a8bb482471463c535..ae2f3dbf64b187932c626fb007fd8c82e3231367 100644
--- a/README.md
+++ b/README.md
@@ -23,3 +23,6 @@ Place the model files in `asr/1/whisper_models`
 * Do the same for `https://github.com/flashlight/sequence`
 * And do `pip install git+https://github.com/kpu/kenlm.git fast_pytorch_kmeans tensorboardX flashlight-text soundfile torchaudio data2vec-aqc/dist/fairseq-0.12.2-cp310-cp310-linux_x86_64.whl sequence/dist/flashlight_sequence-0.0.0+91e2b0f.d20240210-cp310-cp310-linux_x86_64.whl torchaudio-augmentations/dist/torchaudio_augmentations-0.2.4-py3-none-any.whl faster-whisper`
 * Finally use conda pack to save the env.
+
+## Running the Triton server
+`nvidia-docker run --gpus=all --shm-size 5g --network=host --rm -it --name triton-server -e LANG="C.UTF-8" -e LC_ALL="C.UTF-8" -v ./triton_models_repo:/triton_repo nvcr.io/nvidia/tritonserver:24.01-py3 tritonserver --model-repository=/triton_repo --http-port=8011 --grpc-port=8012 --metrics-port=8013 --cache-config=local,size=1048576`
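
After applying this diff, one way to confirm the container actually came up is Triton's KServe readiness endpoint, which is served on the HTTP port chosen above (8011 here) and returns HTTP 200 once all models have loaded. A minimal sketch — the `triton_ready_url` helper name is hypothetical, not part of the repo:

```shell
# Hypothetical helper: build the readiness URL for the HTTP port
# passed to tritonserver via --http-port (8011 in the diff above).
triton_ready_url() {
  local host="${1:-localhost}" port="${2:-8011}"
  echo "http://${host}:${port}/v2/health/ready"
}

# Once the container is running, poll readiness with curl;
# -f makes curl exit non-zero until the server reports ready:
#   until curl -sf "$(triton_ready_url)"; do sleep 1; done
```

The polling line is left commented out since it only succeeds with the server running; `/v2/health/ready` itself is the standard Triton health route.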