

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.


  • Snowflake Integration with Takeoff! See our docs for more information.
  • New AWQ kernels with improved performance.
  • Internal throughput optimisations.


  • Internal bugfixes and optimisations: Docker permissions when volume-mounting the model cache, better Python GIL management, and token caching.



  • Support for Llama 3


  • Fully enabled SSD for static models
  • Tokenization endpoint to get tokenized text for any live reader
  • Support for LLaVA 1.6 models
  • Introduced a new AWQ kernel with significantly lower memory overhead.
  • Updated the LangChain integration: unified TitanTakeoff and TitanTakeoffPro, switched the integrations to the management API for spinning up models, and added text-embedding support via TitanTakeoffEmbed.
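The new tokenization endpoint might be called along these lines. This is a sketch only: the port, the endpoint path, and the payload shape are assumptions, so check the Takeoff docs for the real interface.

```python
import json
from urllib.request import Request, urlopen

TAKEOFF_URL = "http://localhost:3000"  # assumed inference-API port

def tokenize_request(text: str, reader_id: str) -> Request:
    """Build a POST request for the (assumed) /tokenize/:reader_id endpoint."""
    body = json.dumps({"text": text}).encode()
    return Request(f"{TAKEOFF_URL}/tokenize/{reader_id}", data=body,
                   headers={"Content-Type": "application/json"})

if __name__ == "__main__":
    # Requires a running Takeoff server with a live reader called "reader-0"
    with urlopen(tokenize_request("Hello, world!", "reader-0")) as resp:
        print(json.load(resp))
```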


  • Fixed issue with multi-gpu inference with models that have a bias in their attention linear layers.


  • Fixed a configuration issue with the entrypoint for Mistral embedding models.
  • Fixed an issue with continuous batching that was causing performance degradation.
  • Added a tokenization endpoint to Takeoff.


  • Support for inline images in image-to-text models. You can now supply an image to the image_generate (and image_generate_stream) endpoints in the form: <image:>.
  • Debug script for diagnosing issues with takeoff deployments.
  • Support for Jina's long context embedding models.
  • Support for Mistral based embedding models
  • Support for API based (openAI) model calls from takeoff.
  • Changes to default memory usage parameters to reduce the likelihood of OOM errors.
  • Fixed a bug where model downloading was not properly atomic. A failed model download will no longer cause issues for subsequent launches.
  • Fixed a bug where the CPU container was larger than it should have been.
  • Assorted performance improvements and bugfixes.
  • Removed the ability to manually specify the backend used by Takeoff.


  • Added OpenAI compatible interface layer
  • Spacelike Speculative Decoding enabled for non-static models. Uses an in-memory cache for higher generation performance.
  • Support for LLaVA image-to-text models.
  • Support for Google's Gemma model series.
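With the OpenAI-compatible interface layer, existing OpenAI-style clients can be pointed at Takeoff. A minimal sketch using only the standard library; the base URL, port, and model name here are assumptions, not documented values.

```python
import json
from urllib.request import Request, urlopen

def chat_request(base_url: str, messages: list, model: str = "takeoff") -> Request:
    """Build an OpenAI-style chat completions request."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return Request(f"{base_url}/v1/chat/completions", data=body,
                   headers={"Content-Type": "application/json"})

if __name__ == "__main__":
    req = chat_request("http://localhost:3000",
                       [{"role": "user", "content": "Hello!"}])
    with urlopen(req) as resp:  # requires a running Takeoff server
        print(json.load(resp)["choices"][0]["message"]["content"])
```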


  • Fixed a synchronization bug that could cause a timeout when leaving the server inactive for long periods of time.


  • Added support for reranking & classification models.
  • Added CUDA graph LRU caching to cap memory overheads when using CUDA graphs.
  • Reduced the size of the GPU image by over half.
  • Fixed a bug where the Vertex integration couldn't find the CUDA driver.
  • Fixed a bug where synchronization issues could arise when using multiple GPUs.


  • Introduced a new custom takeoff inference engine, which standardizes backend processes and offers an enhanced interface for generation models.
  • In light of the unified backend, continuous batching now works for all generation models.
  • Implemented GPU/CPU utilization tracking metrics.
  • Released takeoff_client, a Python client package on PyPI for server interaction.
  • Removed the option to select backends from the management frontend.
  • Overhauled all documentation and added an API References section.
  • Added support for Mixtral.


  • Bugfix to ensure that GPU VRAM is always cleaned up after a model is dynamically deleted.


  • Added a /config/:reader_id endpoint to the Takeoff Management API to get the config.json file of the model that the reader is currently running.
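A sketch of calling this endpoint; the management-API port used here is an assumption.

```python
import json
from urllib.request import urlopen

def config_url(base: str, reader_id: str) -> str:
    """URL for the GET /config/:reader_id management endpoint."""
    return f"{base}/config/{reader_id}"

if __name__ == "__main__":
    # Assumes the management API listens on port 3001 and "reader-0" is live
    with urlopen(config_url("http://localhost:3001", "reader-0")) as resp:
        print(json.load(resp))  # the config.json of the reader's model
```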


  • Ability to configure Takeoff with a config.yaml file, which should be mounted at code/config.yaml inside the Takeoff container. This lets you specify multiple readers and the server config declaratively when starting the container. You can still use environment variables to override individual settings; more details here.
  • compress-fast backend now supports splitting across multiple GPUs.
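A sketch of what such a config.yaml might look like. The key names and layout below are assumptions for illustration; consult the Takeoff documentation for the authoritative schema.

```yaml
# Mounted at code/config.yaml inside the Takeoff container.
# Key and value names are illustrative, not authoritative.
takeoff:
  server_config:
    port: 3000
  readers_config:
    reader1:
      model_name: meta-llama/Llama-2-7b-chat-hf
      device: cuda
    reader2:
      model_name: BAAI/bge-small-en
      device: cpu
```

Environment variables can still override individual settings from this file at container start.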


  • Added continuous batching for the baseline, fast, and compress-fast backends
  • Added licence validation for Takeoff
  • Added loading readers to the management frontend
  • Added the ability to cancel requests
  • Minor bug fix to speculative decoding
  • Minor bug fix to the multi-GPU backend


  • Added a ready flag to the management API's GET /reader_groups endpoint to indicate whether a model has finished loading.
  • Redis max memory and the Takeoff single-prompt size limit are now configurable via the environment variables TAKEOFF_REDIS_MAX_MEMORY and TAKEOFF_MAX_PROMPT_STRING_BYTES; their defaults are 1GB and 30KB respectively.
  • Removed the ability to send generation requests to embedding models through the frontend UIs.


  • Added a model memory calculator to the inference frontend! You can check whether your models will fit on your hardware at the desired sequence length and batch size.
  • Switched the inference and management apps to hash history, fixing the 404 when refreshing a sub-page of either app.


  • Inference and Management frontend applications can now be served under custom paths. This is useful for serving the frontends when deploying on Kubernetes and using an ingress to route traffic to your Takeoff pod.
  • Sagemaker and Vertex AI compatible inference APIs are served on ports 8080 and 3002 respectively, and now have API documentation under /docs.
  • Minor bug fix to the Playground UI where no output was displayed.
  • Minor bug fixes to the Takeoff loading process to communicate more verbosely with the API frontend. This makes /healthz more robust and lets the API report readers that are still loading.


  • Small adjustment to turn down default log verbosity for Takeoff users.


This release adds support for speculative decoding. A small draft model can now be used to decrease latency by drafting a response that the large model then verifies. This can give a 2x speedup without affecting model outputs. It is applied by default whenever a valid student model is available, or can be controlled with the TAKEOFF_ASSISTANT_NAME environment variable.

The front end has two new features:

  1. A metric page which shows the statistics of the responses of each model
  2. JSON Schema support to use the controlled generation techniques introduced in 0.5.0


  • Add speculative decoding
  • Add metrics dashboard
  • Expand JSON schema support to the front-end




This release was focused on tools for integrating RAG functionalities within Takeoff. We add support for embedding models with the BERT architecture, giving an easy way to embed thousands of documents quickly. A single GPU can host a BERT model alongside one or more generative models, meaning multiple applications can be powered by a single GPU.

We also introduce controlled generation to the API. You can specify a regex string or a JSON schema in the API, which guarantees that the output will match the schema/regex.

  • Add structured generation: JSON + regex outputs
  • Support multiple readers dynamically
  • Add "prompt_max_tokens" generation parameter across backends, for truncating prompts to max number of tokens
  • Frontend for model management, model selection for chat and playground UI
  • Embedding (Bert) model support
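The structured generation described above might be exercised like this. A sketch only: the endpoint path and the parameter name carrying the schema are assumptions, so check the API reference for the real names.

```python
import json
from urllib.request import Request, urlopen

def generate_request(base_url: str, text: str, json_schema: dict) -> Request:
    """Build a generation request whose output is constrained to a JSON schema."""
    body = json.dumps({"text": text, "json_schema": json_schema}).encode()
    return Request(f"{base_url}/generate", data=body,
                   headers={"Content-Type": "application/json"})

if __name__ == "__main__":
    schema = {"type": "object",
              "properties": {"city": {"type": "string"},
                             "population": {"type": "integer"}}}
    req = generate_request("http://localhost:3000",
                           "Give me facts about Paris as JSON.", schema)
    with urlopen(req) as resp:  # requires a running Takeoff server
        print(json.load(resp))  # the generated text matches the schema
```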




  • AWQ backend accepts safetensors as the model format in repo




  • OOM fixed for other backends




  • OOM fixed for HF and BNB backend



  • Bits and bytes HF 4 bit backend
  • Takeoff PRO added to Iris
  • Multi GPU support
  • Mistral support
  • API docs for takeoff
  • Redis and Python reader are spun up from rust gateway
  • Rust server
  • Rust server serves static files
  • AWQ Backend
  • Batched streaming for AWQ, python reader integrates with Rust gateway
  • Integration and benchmark tests for takeoff
  • Regex guided generation
  • Unify logging formats between rust & python, rationalise log levels
  • Change batching behaviour to fix throughput issues
  • Manager for redis connections in the rust server
  • Conversion entrypoint for AWQ, CT2.
  • Model management API PUT /models to spawn new reader with new config
  • Added bitsandbytes 4bit backend
  • React + Typescript Frontend