Features

Kubernetes

Private by default

Your AI backend is deployed as a private Kubernetes cluster in your own cloud project or account. It can easily be embedded in an existing network topology or created in a new, isolated VPC.

Terraform
OpenTofu

Simplified set-up

We strive to simplify the cloud, and the same applies to setting up your own AI backend. We offer quickstart templates based on Terraform / OpenTofu, which can be used for manual deployments or integrated into your CI/CD pipelines.
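As a minimal sketch of the manual path, applying such a quickstart template could look like the following. The repository URL, variable names and region are placeholders, not the actual template:

```shell
# Fetch the quickstart template (hypothetical repository name)
git clone https://github.com/example/ai-backend-quickstart.git
cd ai-backend-quickstart

# Initialise providers and apply with OpenTofu (terraform works the same way)
tofu init
tofu apply -var="project_id=my-cloud-project" -var="region=europe-west4"
```

The same `init`/`apply` steps can be wired into a CI/CD pipeline instead of being run by hand.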

vLLM

Performant runtime engine

To get the most for your money, we only use high-performance AI runtimes such as vLLM. It offers one of the best performance-to-value ratios and helps maximise efficient resource usage in the cloud.

Privacy, Governance and Compliance

Active Resource Management

To cut cloud costs by 50-70%, we distribute the AI backend across always-on instances and preemptible spot instances. Active resource management automatically scales the instances based on traffic, budget preferences and specified schedules.
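To illustrate the idea (this is a hypothetical sketch, not our actual implementation), a scaling decision driven by traffic, budget and schedule could look roughly like this; all names and thresholds are made up for the example:

```python
from dataclasses import dataclass

@dataclass
class ScalingInput:
    requests_per_minute: int   # current traffic
    hourly_budget_eur: float   # budget preference
    within_schedule: bool      # e.g. business hours only

def desired_replicas(state: ScalingInput,
                     rpm_per_replica: int = 600,
                     cost_per_replica_eur: float = 2.0) -> int:
    """Pick a replica count from traffic, capped by budget and schedule.

    Hypothetical sketch: a real resource manager would also consider
    spot availability, instance warm-up time and queue depth.
    """
    if not state.within_schedule:
        return 0  # scale to zero outside the configured schedule
    # Enough replicas to serve current traffic (ceiling division)...
    wanted = max(1, -(-state.requests_per_minute // rpm_per_replica))
    # ...but never more than the hourly budget allows.
    affordable = int(state.hourly_budget_eur // cost_per_replica_eur)
    return min(wanted, max(1, affordable))
```

For example, 1500 requests/minute with a 10 EUR hourly budget yields 3 replicas, while the same traffic under a 4 EUR budget is capped at 2.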

Dashboard, insights and reporting

Built-in reporting

To keep an eye on the costs of your AI backend, our product comes with built-in reports to support better business decisions. These insights show you the utilisation of your AI backend and its scaling demands, and point to further cost optimisations.

OpenAI compatible server

OpenAI Compatible & Integrations

Would you like to add another component to your AI backend, such as Llama Guard or OpenWebUI? Because we rely on standards such as the OpenAI API, various integrations can be added to the cluster during set-up using Helm charts or Kubernetes manifests.
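As a sketch of what OpenAI compatibility buys you: any client that can speak the OpenAI chat-completions format can talk to the cluster. The base URL and model name below are placeholders for whatever your deployment exposes:

```python
import json
import urllib.request

def chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /v1/chat/completions request.

    base_url and model are deployment-specific placeholders.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# The same code works against vLLM or any other OpenAI-compatible server;
# only the endpoint changes (hypothetical internal hostname shown here).
req = chat_request("http://ai-backend.internal:8000", "llama-3.1-8b", "Hello!")
```

Components such as OpenWebUI plug in the same way: they are pointed at the cluster's OpenAI-compatible endpoint instead of api.openai.com.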