takeoff_client
This module contains the TakeoffClient class, which is used to interact with the Takeoff server.
TakeoffClient Objects
```python
class TakeoffClient()
```
__init__
```python
def __init__(base_url: str = "http://localhost",
             port: int = 3000,
             mgmt_port: int = None)
```
TakeoffClient is used to interact with the Takeoff server.
Arguments:
- `base_url` (str, optional): Base URL that the Takeoff server runs on. Defaults to "http://localhost".
- `port` (int, optional): Port that the main server runs on. Defaults to 3000.
- `mgmt_port` (int, optional): Port that the management API runs on. Usually `port + 1`. Defaults to None.
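For example, a minimal setup, assuming a Takeoff server is already running locally on the default ports:
```python
from takeoff_client import TakeoffClient

# Connect to a locally running Takeoff server. With mgmt_port left as None,
# the management API is assumed to be reachable on port + 1.
client = TakeoffClient(base_url="http://localhost", port=3000)
```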
get_readers
```python
def get_readers() -> dict
```
Get information about all readers.
Returns:
- `dict`: Information about all readers.
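For example (the structure of the returned dict depends on the readers configured on your server):
```python
from takeoff_client import TakeoffClient

client = TakeoffClient()

# Fetch and print reader information; the exact structure of the
# returned dict depends on the readers running on the server.
readers = client.get_readers()
for key, info in readers.items():
    print(key, info)
```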
embed
```python
def embed(text: str | List[str], consumer_group: str = "embed") -> dict
```
Embed a batch of text.
Arguments:
- `text` (str | List[str]): Text to embed.
- `consumer_group` (str, optional): Consumer group to use. Defaults to "embed".
Returns:
- `dict`: Embedding response.
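For example, embedding a small batch, assuming an embedding model is running under the default "embed" consumer group:
```python
from takeoff_client import TakeoffClient

client = TakeoffClient()

# Embed a batch of documents in one call.
response = client.embed(
    ["Takeoff serves language models.", "Embeddings map text to vectors."],
    consumer_group="embed",
)
print(response)  # inspect the response dict for the embedding vectors
```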
generate
```python
def generate(text: str | List[str],
             sampling_temperature: float = None,
             sampling_topp: float = None,
             sampling_topk: int = None,
             repetition_penalty: float = None,
             no_repeat_ngram_size: int = None,
             max_new_tokens: int = None,
             min_new_tokens: int = None,
             regex_string: str = None,
             json_schema: Any = None,
             prompt_max_tokens: int = None,
             consumer_group: str = "primary") -> dict
```
Generates text, seeking a completion for the input prompt. Output is buffered and returned all at once.
Arguments:
- `text` (str | List[str]): Input prompt from which to generate.
- `sampling_temperature` (float, optional): Sample with randomness. Bigger temperatures are associated with more randomness.
- `sampling_topp` (float, optional): Sample from the set of tokens whose cumulative probability exceeds this value.
- `sampling_topk` (int, optional): Sample predictions from the top K most probable candidates.
- `repetition_penalty` (float, optional): Penalize the generation of tokens that have been generated before. Set to > 1 to penalize.
- `no_repeat_ngram_size` (int, optional): Prevent repetitions of ngrams of this size.
- `max_new_tokens` (int, optional): The maximum number of (new) tokens that the model will generate.
- `min_new_tokens` (int, optional): The minimum number of (new) tokens that the model will generate.
- `regex_string` (str, optional): The regex string which generations will adhere to as they decode.
- `json_schema` (dict, optional): The JSON Schema which generations will adhere to as they decode. Ignored if `regex_string` is set.
- `prompt_max_tokens` (int, optional): The maximum length (in tokens) for this prompt. Prompts longer than this value will be truncated.
- `consumer_group` (str, optional): The consumer group to which to send the request. Defaults to "primary".
Returns:
- `dict`: The response from Takeoff containing the generated text as a whole.
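A sketch of a typical call; note that the key under which the completion is returned ("text" below) is an assumption and may differ across server versions:
```python
from takeoff_client import TakeoffClient

client = TakeoffClient()

response = client.generate(
    "Write a haiku about inference servers.",
    sampling_temperature=0.7,
    max_new_tokens=64,
)
# "text" is assumed to be the response key holding the completion;
# print the whole dict if your server returns a different shape.
print(response["text"])
```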
generate_stream
```python
def generate_stream(text: str | List[str],
                    sampling_temperature: float = None,
                    sampling_topp: float = None,
                    sampling_topk: int = None,
                    repetition_penalty: float = None,
                    no_repeat_ngram_size: int = None,
                    max_new_tokens: int = None,
                    min_new_tokens: int = None,
                    regex_string: str = None,
                    json_schema: dict = None,
                    prompt_max_tokens: int = None,
                    consumer_group: str = "primary") -> Iterator[Event]
```
Generates text, seeking a completion for the input prompt. Output is streamed back as server-sent events.
Arguments:
- `text` (str | List[str]): Input prompt from which to generate.
- `sampling_temperature` (float, optional): Sample with randomness. Bigger temperatures are associated with more randomness.
- `sampling_topp` (float, optional): Sample from the set of tokens whose cumulative probability exceeds this value.
- `sampling_topk` (int, optional): Sample predictions from the top K most probable candidates.
- `repetition_penalty` (float, optional): Penalize the generation of tokens that have been generated before. Set to > 1 to penalize.
- `no_repeat_ngram_size` (int, optional): Prevent repetitions of ngrams of this size.
- `max_new_tokens` (int, optional): The maximum number of (new) tokens that the model will generate.
- `min_new_tokens` (int, optional): The minimum number of (new) tokens that the model will generate.
- `regex_string` (str, optional): The regex string which generations will adhere to as they decode.
- `json_schema` (dict, optional): The JSON Schema which generations will adhere to as they decode. Ignored if `regex_string` is set.
- `prompt_max_tokens` (int, optional): The maximum length (in tokens) for this prompt. Prompts longer than this value will be truncated.
- `consumer_group` (str, optional): The consumer group to which to send the request. Defaults to "primary".
Returns:
- `Iterator[sseclient.SSEClient.Event]`: An iterator of server-sent events.
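For example, printing the completion as it streams (each sseclient event exposes its payload on the `data` attribute):
```python
from takeoff_client import TakeoffClient

client = TakeoffClient()

# Consume the server-sent event stream, printing each chunk as it arrives.
for event in client.generate_stream(
    "Tell me about the history of flight.",
    max_new_tokens=256,
):
    print(event.data, end="", flush=True)
```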