Run any AI model as an API within seconds
Low-latency serverless API to run and deploy ML models
Our product is crafted through millions of ML runs
5,000+
Developers using our API
9,000+
AI models deployed
The easiest way to get an API endpoint from any ML model
All the infrastructure required to run AI models with a simple API call
curl -X POST 'https://www.mystic.ai/v3/runs' \
  -H 'Authorization: Bearer YOUR_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{"pipeline_id_or_pointer": "meta/llama2-70B-chat:latest", "input_data": [{"type": "string", "value": "Hello World!"}]}'
Only pay for inference time
Serverless, per-second pricing on our shared cluster: you pay only for the inference you use.
Inference within 0.035s
Within a few milliseconds, our scheduler decides the optimal strategy for queuing, routing and scaling.
API-first, built for Python lovers
A RESTful API to call your model from anywhere, and a Python SDK to upload your own models.
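As a sketch of the REST side, the run shown in the curl example above can be sent from Python with any HTTP client; here is a minimal version using the requests library (YOUR_TOKEN is a placeholder, and the response shape depends on the pipeline's outputs):

import requests

# Mirrors the curl example above: start a run against a pipeline pointer.
response = requests.post(
    "https://www.mystic.ai/v3/runs",
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    json={
        "pipeline_id_or_pointer": "meta/llama2-70B-chat:latest",
        "input_data": [{"type": "string", "value": "Hello World!"}],
    },
)
response.raise_for_status()
print(response.json())  # run result; fields depend on the pipeline's declared outputs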
How to get started
Run any model built by the community, dive into one of our
tutorials, or start uploading your own models.
Explore AI models built by the community
Our community uploads AI models and makes them available to everyone, ready to try and call as an API.

stabilityai/stable-diffusion-xl-refiner-1.0
SD-XL 1.0-refiner
Updated 3 months ago
309.57K Runs

paulh/open-journey-xl
OpenJourney XL. A finetuned SDXL on the Midjourney v5 dataset
Updated 3 months ago
65.79K Runs

meta/llama2-7B-chat
Llama 2 7B for chat applications with vLLM
Updated 3 months ago
25.31K Runs
Upload your own AI pipeline
A pipeline contains all the code required to run your AI model as an endpoint: the inputs it accepts, any pre-processing code, the inference pass, any post-processing code, and the outputs returned from the endpoint.
Learn how to leverage our cold-start optimizations, create custom environments, enable debugging mode & logging, load models from file, and much more.
# Imports assume the pipeline-ai SDK's v2-style import path.
from pipeline import Pipeline, Variable, pipe

@pipe
def foo(bar: str) -> str:
    return f"Input string: {bar}"

with Pipeline() as builder:
    bar = Variable(str)
    output_1 = foo(bar)
    builder.output(output_1)

my_pipeline = builder.get_pipeline()
my_pipeline.run("test")
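Building on the snippet above, the same builder pattern can chain several @pipe steps, which is how the pre-processing, inference and post-processing stages described earlier fit into one pipeline. The sketch below is hypothetical: the step names and bodies are illustrative stand-ins, not a real model.

from pipeline import Pipeline, Variable, pipe

@pipe
def pre_process(prompt: str) -> str:
    # Illustrative pre-processing: normalise whitespace before inference.
    return prompt.strip()

@pipe
def predict(prompt: str) -> str:
    # Stand-in for a real model's inference pass.
    return f"completion for: {prompt}"

@pipe
def post_process(raw: str) -> str:
    # Illustrative post-processing: shape the raw output for the response.
    return raw.capitalize()

with Pipeline() as builder:
    prompt = Variable(str)
    cleaned = pre_process(prompt)
    raw = predict(cleaned)
    result = post_process(raw)
    builder.output(result)

chat_pipeline = builder.get_pipeline()
print(chat_pipeline.run("  hello world  "))

The steps run in order on each API call, so pre- and post-processing code is deployed alongside the inference pass as a single endpoint.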