Getting Started
Learn how to install and set up Inference Gateway.
Installation
Using Docker
Terminal
docker pull ghcr.io/inference-gateway/inference-gateway:latest
docker run --rm -it -p 8080:8080 -e OPENAI_API_KEY=your_key_here ghcr.io/inference-gateway/inference-gateway:latest
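Once the container is running, you can confirm the gateway is reachable. This check assumes the gateway exposes the standard OpenAI-compatible /v1/models endpoint for listing available models:
Terminal
curl http://localhost:8080/v1/models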
Using Docker Compose
Check out the Docker Compose examples.
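If you prefer Docker Compose, the service definition simply mirrors the docker run command above. The sketch below is a minimal example, saved for instance as docker-compose.yaml, and assumes OPENAI_API_KEY is set in your shell environment; see the linked examples for complete configurations.
docker-compose.yaml
services:
  inference-gateway:
    image: ghcr.io/inference-gateway/inference-gateway:latest
    ports:
      - "8080:8080"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
Start it with docker compose up -d.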
Using Kubernetes
Check out the Kubernetes examples.
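For Kubernetes, a bare-bones Deployment and Service sketch is shown below. It only mirrors the Docker settings above (image, port 8080, OPENAI_API_KEY); the Secret name inference-gateway-secrets is a placeholder, and the linked examples should be treated as the authoritative manifests.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inference-gateway
  template:
    metadata:
      labels:
        app: inference-gateway
    spec:
      containers:
        - name: inference-gateway
          image: ghcr.io/inference-gateway/inference-gateway:latest
          ports:
            - containerPort: 8080
          env:
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: inference-gateway-secrets # placeholder Secret holding your API key
                  key: OPENAI_API_KEY
---
apiVersion: v1
kind: Service
metadata:
  name: inference-gateway
spec:
  selector:
    app: inference-gateway
  ports:
    - port: 8080
      targetPort: 8080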
Basic Usage
Send a request to the Inference Gateway:
Terminal
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello, world!"
      }
    ]
  }'
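The gateway replies with an OpenAI-compatible chat completion. The response below is illustrative; exact field values will differ:
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "model": "gpt-3.5-turbo",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ]
}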