<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.

# mt-model-deploy-dhruva

## TL;DR

This repo contains code for Python-backend, CTranslate2-based Triton models for the SSMT project.
Prerequisites: `python3.xx-venv`, `nvidia-docker`, `bash`

Quantization is disabled until qualitative testing is performed.
```bash
git clone https://ssmt.iiit.ac.in/meitygit/ssmt/mt-model-deploy-dhruva.git
cd mt-model-deploy-dhruva
bash make_triton_model_repo.sh
docker build -t dhruva/himangy-model-server:1 .
nvidia-docker run --gpus=all --rm --shm-size 5g --network=host --name dhruva-himangy-triton-server -v ./himangy_triton_repo:/models dhruva/himangy-model-server:1
```

## What this repo does

* This repo contains the templates and component Triton models for the SSMT project.
* It also contains a Dockerfile to build the Triton server image.
* Dynamic batching and response caching are supported and enabled by default.
* The model repository folder can be mounted into the Dhruva HimangY Triton server at `/models` and queried via a client.
* Sample client code is also provided as an IPython notebook.

## Architecture of the pipeline

The pipeline consists of four components, executed in order:
* Tokenizer - This CPU-only model tokenizes the input string and applies BPE to it.
* Model Demuxer - This CPU-only model, depending on the language pair requested, queues up an `InferenceRequest` for the appropriate translation model and returns the result as the response (a minimal sketch of this pattern is shown after this list).
* Model - This GPU-based model processes the tokenized text and returns the final form of the translated text to the caller.
* Pipeline - This is an ensemble model that wraps the above three components together and is the one meant to be exposed to the client.
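
For illustration, below is a minimal sketch of what a Python-backend component following the Model Demuxer pattern could look like, using Triton's business logic scripting (BLS) API. All tensor and model names here (`INPUT_TEXT_TOKENIZED`, `INPUT_LANGUAGE_ID`, `OUTPUT_TEXT`, the `himangy-<pair>` naming scheme) are placeholders assumed for this sketch; the actual interfaces are defined in the repo's `model.py` and `config.pbtxt` files.
```python
# Hypothetical BLS-style demuxer sketch for the Triton Python backend.
# Tensor names, model names, and shapes are placeholders; consult the
# repository's config.pbtxt and model.py files for the real interface.
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # Read the tokenized text and the requested language pair.
            tokenized = pb_utils.get_input_tensor_by_name(request, "INPUT_TEXT_TOKENIZED")
            lang_pair = pb_utils.get_input_tensor_by_name(request, "INPUT_LANGUAGE_ID")
            lang = lang_pair.as_numpy().flatten()[0].decode("utf-8")  # e.g. "en-hi"

            # Route to the per-language-pair translation model
            # (assumes the downstream model shares the same input tensor name).
            infer_request = pb_utils.InferenceRequest(
                model_name=f"himangy-{lang}",
                requested_output_names=["OUTPUT_TEXT"],
                inputs=[tokenized],
            )
            infer_response = infer_request.exec()
            if infer_response.has_error():
                raise pb_utils.TritonModelException(infer_response.error().message())

            # Forward the translation model's output as this model's response.
            output = pb_utils.get_output_tensor_by_name(infer_response, "OUTPUT_TEXT")
            responses.append(pb_utils.InferenceResponse(output_tensors=[output]))
        return responses
```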

The exact specifications of the model inputs and outputs can be found in the corresponding `config.pbtxt` files.
The Triton model repository can be constructed like so:
```bash
git clone https://ssmt.iiit.ac.in/meitygit/ssmt/mt-model-deploy-dhruva.git
cd mt-model-deploy-dhruva
bash make_triton_model_repo.sh
```

## Starting the triton server

We customize the tritonserver image with the required Python packages in a venv and enable the response cache in the startup command. After the model repo has been built, one can build and run the server like so:
```bash
docker build -t dhruva/himangy-model-server:1 .
nvidia-docker run --gpus=all --rm --shm-size 5g --network=host --name dhruva-himangy-triton-server -v ./himangy_triton_repo:/models dhruva/himangy-model-server:1
```
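
Once the container is up, a quick sanity check is to poll the readiness endpoints through the Triton client. The model name `nmt` below is a placeholder for the ensemble pipeline model; substitute the actual name from the model repository.
```python
# Readiness check sketch (assumes the HTTP endpoint on localhost:8000 and a
# placeholder ensemble model name "nmt").
import tritonclient.http as http_client

client = http_client.InferenceServerClient(url="localhost:8000")
print("Server ready:  ", client.is_server_ready())
print("Pipeline ready:", client.is_model_ready("nmt"))
```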

## Querying the triton server

We provide a sample IPython notebook that shows how to send concurrent translation requests to the server using the Triton client.
Prerequisites: `pip install "tritonclient[all]" tqdm numpy wonderwords`
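
As a reference, a single (non-concurrent) translation request might look like the sketch below; the notebook demonstrates the concurrent variant. The model name (`nmt`) and tensor names (`INPUT_TEXT`, `INPUT_LANGUAGE_ID`, `OUTPUT_TEXT`) are placeholders assumed for this example; the real names are given in the pipeline's `config.pbtxt`.
```python
# Hypothetical single-request client sketch; tensor and model names are
# placeholders -- check the pipeline's config.pbtxt for the real ones.
import numpy as np
import tritonclient.http as http_client

client = http_client.InferenceServerClient(url="localhost:8000")

# String inputs are sent as BYTES tensors, here with shape [batch, 1].
text = np.array([["hello world"]], dtype=np.object_)
lang = np.array([["en-hi"]], dtype=np.object_)

inputs = [
    http_client.InferInput("INPUT_TEXT", list(text.shape), "BYTES"),
    http_client.InferInput("INPUT_LANGUAGE_ID", list(lang.shape), "BYTES"),
]
inputs[0].set_data_from_numpy(text)
inputs[1].set_data_from_numpy(lang)

outputs = [http_client.InferRequestedOutput("OUTPUT_TEXT")]
result = client.infer(model_name="nmt", inputs=inputs, outputs=outputs)
print(result.as_numpy("OUTPUT_TEXT")[0, 0].decode("utf-8"))
```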

## Data and Model Training
Please refer to the training-code folder and the [README](https://ssmt.iiit.ac.in/meitygit/ssmt/mt-model-deploy-dhruva/-/blob/master/training-code/README.md) within for links to the data, and check out the IPython notebook for an annotated version of the training script.

## Pre-trained Models

[download](https://ssmt.iiit.ac.in/uploads/data_mining/HimangY-oneMT-Models-V1-2.zip)

### Supported language pairs

| Source  | Targets (bidirectional)               |
|---------|---------------------------------------|
| Hindi   | Telugu, Urdu, Gujarati, Odia, Punjabi |
| English | Hindi, Telugu, Gujarati               |


## Citation
```bibtex
@inproceedings{mujadia-sharma-2022-ltrc,
    title = "The {LTRC} {H}indi-{T}elugu Parallel Corpus",
    author = "Mujadia, Vandan  and Sharma, Dipti",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.365",
    pages = "3417--3424",
}

@inproceedings{mujadia-sharma-2021-english,
    title = "{E}nglish-{M}arathi Neural Machine Translation for {L}o{R}es{MT} 2021",
    author = "Mujadia, Vandan  and Sharma, Dipti Misra",
    booktitle = "Proceedings of the 4th Workshop on Technologies for MT of Low Resource Languages (LoResMT2021)",
    month = aug,
    year = "2021",
    address = "Virtual",
    publisher = "Association for Machine Translation in the Americas",
    url = "https://aclanthology.org/2021.mtsummit-loresmt.16",
    pages = "151--157",
}
```