Version: 0.21.x

Takeoff Stack Documentation


Overview

The Takeoff Stack lets you build, deploy, and scale private, secure LLM-powered applications with minimal effort.

We provide a set of APIs that integrate seamlessly with your existing infrastructure, allowing you to build and deploy applications with ease. Each API is owned by you and can be customised to your needs.

Inference Engine

A standalone deployment solution for LLMs, allowing you to deploy and run any open-source or custom model.

🤗 Support for any open-source or private model.

🔩 Proprietary inference engine backend for best-in-class speed and throughput.

📦 Packaged in a single, easily deployed container for self-hosted and offline machines.
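Once the container is running, applications typically talk to it over HTTP. The sketch below is illustrative only: the endpoint path (`/generate`), port, and parameter names (`text`, `max_new_tokens`) are assumptions for this example, not the documented Takeoff API.

```python
# Hypothetical client sketch for a self-hosted LLM inference container.
# Endpoint path, port, and payload fields are assumptions, not the
# documented Takeoff API.
import json
from urllib import request


def build_generation_request(
    base_url: str, prompt: str, max_new_tokens: int = 128
) -> request.Request:
    """Build a POST request for a hosted model's generation endpoint."""
    payload = json.dumps(
        {"text": prompt, "max_new_tokens": max_new_tokens}
    ).encode()
    return request.Request(
        url=f"{base_url}/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_generation_request("http://localhost:3000", "Summarise this contract:")
# Send with urllib.request.urlopen(req) once the container is up.
```

Because the server is self-hosted, prompts and completions never leave your infrastructure.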

Document Processing Pipeline

Ingests unstructured data, extracts structured information from it, and stores the results in best-in-class databases for querying.

📚 Document processing that extracts your documents/data into structured chunks, which can be queried effectively or stored for use in downstream applications.

👓 Highlighted extracts of document components and smart table extraction into HTML, even from scanned PDFs.

🔒 Attribute- and Role-Based Access Control (ABAC/RBAC), allowing secure access to documents/data for multiple clients.
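To make the idea of "structured chunks" concrete, here is a minimal fixed-size chunker. It is illustrative only: the real pipeline's chunking strategy and output schema are not described on this page, and the function below is an assumption for this example.

```python
# Illustrative only: a minimal fixed-size chunker showing the kind of
# structured output a document-processing pipeline might produce. The
# actual pipeline's strategy and schema are not specified here.
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[dict]:
    """Split text into overlapping chunks, each tagged with its source offset."""
    chunks = []
    step = size - overlap  # advance less than `size` so chunks overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append({"start": start, "text": text[start : start + size]})
    return chunks


chunks = chunk_text("A" * 500, size=200, overlap=50)
# Each chunk carries its offset, so highlighted extracts can be traced
# back to their position in the original document.
```

Overlapping chunks help downstream queries by ensuring that information falling on a chunk boundary still appears whole in at least one chunk.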

Feature Rich Deployments

Deployment tooling underpins all of our APIs, letting you deploy and manage resources effortlessly.

🕸 Connect all of these APIs together and deploy them in a single command.

🚀 Deploy on any cloud provider or on-premise. See our guides for deploying on EKS, GKE, AKS, Dataiku, Haystack, Langchain, Snowflake, and many more!

📈 Monitor usage and performance with a built-in metrics dashboard.

🏗 Scale your deployments with ease: scale to zero and automatically provision compute to suit your needs.


Support


For support from the Titan Takeoff team and the community, contact hello@titanml.co.