how to use GPU in Windows without Docker
Hi,
If you want to use LibreTranslate locally for your own purposes, just follow the README and point your web browser at the local address (127.0.0.1:5000). I recommend using a Miniconda or Anaconda environment to isolate it from other Python applications you may want to use.
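For reference, the local setup could look roughly like this (a sketch, assuming a fresh Miniconda prompt on Windows and the package name from PyPI; adjust the Python version to whatever the README currently recommends):

```shell
# create and activate an isolated environment for LibreTranslate
conda create -n libretranslate python=3.10
conda activate libretranslate

# install from PyPI, then start the server on the local address
pip install libretranslate
libretranslate --host 127.0.0.1 --port 5000
# then open http://127.0.0.1:5000 in your browser
```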
If you want to run a full-fledged website, first install it locally, then either set up the included wsgi.py wrapper behind your proxy, or write a small batch script that launches (and relaunches on failure) LibreTranslate with your web service's options (port, …).
Please note that
- some applications in the community (the phone-based one in particular) require direct access to the API, so you cannot put the service behind a proxy if you plan to use them,
- Windows is not my first recommendation for web services: if you have more than a few dozen regular users, or plan to translate a web service with regular traffic on the fly, consider using a GPU or switching to Linux (either will do up to a few hundred users, beyond that you will need both); otherwise you may choke the system quite frequently.
I did not really understand your issue, because LibreTranslate will run on the GPU natively on Windows… as long as you have installed the NVIDIA CUDA drivers first.
You also need a C++ compiler (the MS Visual C++ toolkit, which comes with Visual Studio) installed to run CTranslate2, but that is required for CPU usage as well.
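One quick way to check whether the CUDA setup is visible is to ask CTranslate2 (which LibreTranslate uses under the hood) how many CUDA devices it can see; a count of 0 usually means the drivers are missing or not detected. The helper name here is mine, but `ctranslate2.get_cuda_device_count()` is the library's own function:

```python
def cuda_device_count():
    """Return the number of CUDA devices CTranslate2 can see,
    or None if ctranslate2 is not installed in this environment."""
    try:
        import ctranslate2
    except ImportError:
        return None
    return ctranslate2.get_cuda_device_count()

print(cuda_device_count())
```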
It might help if you describe your configuration precisely (GPU model, Windows version, etc.); I'll tell you how to install it step by step.