Version: 0.12.x

LangChain

The Takeoff API also integrates with LangChain, allowing you to run inference on various LLMs through the LangChain interface. To install LangChain, run the following command:

pip install langchain

Inferencing your LLM through LangChain

Before making calls to your LLM, make sure the Takeoff Server is up and running. To access your LLM running on the Takeoff Server, import the TitanTakeoffPro LLM wrapper:

from langchain.llms import TitanTakeoffPro

llm = TitanTakeoffPro(
    base_url="http://localhost:3000",  # address of the running Takeoff Server
    max_new_tokens=128,                # maximum number of tokens to generate
    sampling_topk=1,                   # top-k sampling
    sampling_topp=1.0,                 # nucleus (top-p) sampling
    sampling_temperature=1.0,          # sampling temperature
    repetition_penalty=1,              # penalty applied to repeated tokens
    no_repeat_ngram_size=0,            # block repeated n-grams of this size (0 = off)
)

No arguments are needed to initialise the llm object; if none are given, the wrapper falls back to its default base_url. A base_url pointing to your Takeoff Server can be specified, and generation parameters can be supplied to override the default values.
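For example, a minimal sketch that relies entirely on the defaults, assuming the Takeoff Server is reachable at the wrapper's default base_url:

llm = TitanTakeoffPro()  # no arguments: default base_url and generation parameters

# Generate a completion for a single prompt
output = llm("What is the capital of France?")
print(output)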

Caution: IMPORTANT BREAKING CHANGE

Previously, users specified a port on localhost for the Takeoff server to run on. This configuration is being phased out. Starting with the latest version of LangChain, you'll need to provide a base_url instead. This change offers you greater flexibility, especially if you wish to run your server in environments other than localhost.

# New method of specifying a URL (USE THIS)
llm = TitanTakeoffPro(base_url="http://localhost:5000")

# Old method specifying a localhost port (DO NOT USE)
llm = TitanTakeoffPro(port=5000)

Immediate Action Required: If you’re currently specifying the port that the Takeoff Server is running on, you must transition to the new base_url setting. The older method of specifying a port is no longer supported. Note that this might not throw an error, but specifying the port will have no effect and the llm object will point to the default base_url (http://localhost:8000).
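Because base_url accepts any reachable address, you can also point the wrapper at a Takeoff Server running outside localhost. The hostname below is purely illustrative:

llm = TitanTakeoffPro(base_url="http://takeoff.example.com:3000")  # hypothetical remote host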

Streaming

Streaming is also supported via the streaming flag:

from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.callbacks.manager import CallbackManager

llm = TitanTakeoffPro(
    base_url="http://localhost:3000",
    streaming=True,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)

prompt = "What is the capital of France?"

llm(prompt)
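With the StreamingStdOutCallbackHandler attached, tokens are written to stdout as they are generated, rather than only being returned once the full completion has finished.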

Chains

Chains can also be used with the TitanTakeoff integration:

from langchain import PromptTemplate, LLMChain

llm = TitanTakeoffPro()

template = "What is the capital of {country}"
prompt = PromptTemplate(template=template, input_variables=["country"])

llm_chain = LLMChain(llm=llm, prompt=prompt)
generated = llm_chain.run(country="Belgium")
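LLMChain.run returns the generated text as a plain string, so you can print it or pass it to a downstream step:

print(generated)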