## LiteLLM proxy server
The MTLLM `Model` can also connect to a LiteLLM proxy server. This lets you route MTLLM requests through LiteLLM and use any model the proxy exposes, while keeping the same MTLLM interface in your code.
To set up and deploy the LiteLLM proxy server, follow the instructions in the LiteLLM documentation:

Reference: <https://docs.litellm.ai/docs/proxy/deploy>
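For a quick local test, a minimal proxy setup might look like the following. This is a sketch based on LiteLLM's standard config format; the file name `config.yaml`, the `gpt-4o` model entry, and the `OPENAI_API_KEY` environment variable are illustrative assumptions to adapt to your deployment.

```yaml
# config.yaml -- minimal LiteLLM proxy configuration (illustrative)
model_list:
  - model_name: gpt-4o                    # alias that clients will request
    litellm_params:
      model: openai/gpt-4o                # upstream provider/model
      api_key: os.environ/OPENAI_API_KEY  # read the key from the environment
```

With the config in place, the proxy can be started with `litellm --config config.yaml --port 8000` (after installing the proxy extras, e.g. `pip install 'litellm[proxy]'`); see the deployment guide above for production options.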
Once the proxy server is set up and running, you can connect to it by passing the proxy server's URL to the MTLLM `Model` through the `proxy_url` parameter:
```python
from mtllm import Model

llm = Model(
    model_name="gpt-4o",                # the model name to be used
    api_key="your_litellm_api_key",     # LiteLLM proxy server key
    proxy_url="http://localhost:8000",  # URL of the LiteLLM proxy server
)
```
Note that the `api_key` parameter is required to authenticate the connection. It is not an OpenAI API key, but the virtual key (or master key) generated by the LiteLLM proxy server. You can find more information about how to obtain this key in the LiteLLM documentation.
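If you run the proxy with a master key, you can mint a virtual key for MTLLM through the proxy's `/key/generate` endpoint. The sketch below assumes the proxy is running on `http://localhost:8000` with a master key of `sk-1234`; both are placeholders to replace with your own values.

```python
import requests

PROXY_URL = "http://localhost:8000"  # assumed proxy address
MASTER_KEY = "sk-1234"               # assumed master key set when starting the proxy

# Ask the proxy for a virtual key scoped to the gpt-4o alias.
resp = requests.post(
    f"{PROXY_URL}/key/generate",
    headers={"Authorization": f"Bearer {MASTER_KEY}"},
    json={"models": ["gpt-4o"]},
)
resp.raise_for_status()

virtual_key = resp.json()["key"]  # pass this as api_key when constructing Model(...)
print(virtual_key)
```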