
OpenAI Compat API

The OpenAI Compat API is a drop-in replacement for OpenAI's official Images API. It is fully compatible with the official openai SDK (Python and Node.js) and with the raw OpenAI REST interface; just change the base_url.
No code changes are needed: if you already call OpenAI's images.generate / images.edit, point base_url at https://api.mountsea.ai/openai/v1 and authenticate with your Mountsea API key.

Why Use Compat API?

Official SDK Support

Works with OpenAI’s official openai Python and Node.js SDKs

Same API Shape

Same request/response format as https://api.openai.com/v1/images/*

Synchronous

Returns the generated image directly — no polling needed

Unified Billing

Single API key, unified usage tracking through Mountsea

Configuration

Base URL

https://api.mountsea.ai/openai/v1

Authentication

Use your Mountsea API key:
  • HTTP header: Authorization: Bearer your-api-key
  • Or set via the SDK’s api_key / apiKey parameter
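For raw HTTP calls, the bearer header can be assembled like this (a minimal sketch; the `MOUNTSEA_API_KEY` environment variable and the helper name are illustrative, not mandated by Mountsea):

```python
import os

def auth_headers(api_key=None):
    # Fall back to an environment variable so the key never lands in source code
    key = api_key or os.environ.get("MOUNTSEA_API_KEY", "your-api-key")
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }

headers = auth_headers("your-mountsea-api-key")
```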

Supported Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| /openai/v1/images/generations | POST | Text-to-image (JSON), identical to OpenAI |
| /openai/v1/images/edits | POST | Image editing (multipart/form-data), identical to OpenAI |

Supported Model

| Model | Description |
| --- | --- |
| gpt-image-2 | Latest OpenAI image model (default) |

Using the Official openai SDK

Installation

pip install openai

Python

from openai import OpenAI
import base64

client = OpenAI(
    api_key="your-mountsea-api-key",
    base_url="https://api.mountsea.ai/openai/v1",
)

response = client.images.generate(
    model="gpt-image-2",
    prompt="A photorealistic cat wearing a space helmet, floating in orbit",
    size="1024x1024",
    quality="high",
    n=1,
)

# gpt-image-2 always returns b64_json
image_b64 = response.data[0].b64_json
with open("output.png", "wb") as f:
    f.write(base64.b64decode(image_b64))

Node.js / TypeScript

import OpenAI from 'openai';
import fs from 'fs';

const client = new OpenAI({
  apiKey: 'your-mountsea-api-key',
  baseURL: 'https://api.mountsea.ai/openai/v1',
});

const response = await client.images.generate({
  model: 'gpt-image-2',
  prompt: 'A photorealistic cat wearing a space helmet, floating in orbit',
  size: '1024x1024',
  quality: 'high',
  n: 1,
});

const imageB64 = response.data[0].b64_json!;
fs.writeFileSync('output.png', Buffer.from(imageB64, 'base64'));

Using the REST API Directly

If you don’t want to use the SDK, you can call the endpoints directly.

Text-to-Image (JSON)

curl -X POST "https://api.mountsea.ai/openai/v1/images/generations" \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-image-2",
    "prompt": "A photorealistic cat wearing a space helmet, floating in orbit",
    "size": "1024x1024",
    "quality": "high",
    "n": 1
  }'
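The same request can be issued from Python without the SDK. A sketch using only the standard library (the payload mirrors the curl example above; the actual send is left commented out so nothing goes over the network until you are ready):

```python
import json
import urllib.request

def build_generation_request(prompt, api_key):
    # Mirrors the curl example: JSON body POSTed to /images/generations
    payload = {
        "model": "gpt-image-2",
        "prompt": prompt,
        "size": "1024x1024",
        "quality": "high",
        "n": 1,
    }
    return urllib.request.Request(
        "https://api.mountsea.ai/openai/v1/images/generations",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_generation_request("A photorealistic cat in orbit", "your-api-key")
# response = urllib.request.urlopen(req)  # uncomment to actually send
```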

Image Edit (multipart/form-data)

curl -X POST "https://api.mountsea.ai/openai/v1/images/edits" \
  -H "Authorization: Bearer your-api-key" \
  -F "model=gpt-image-2" \
  -F "image=@source.png" \
  -F "prompt=Add a dramatic sunset sky" \
  -F "size=1024x1024" \
  -F "input_fidelity=high"

Inpainting (multipart/form-data)

curl -X POST "https://api.mountsea.ai/openai/v1/images/edits" \
  -H "Authorization: Bearer your-api-key" \
  -F "model=gpt-image-2" \
  -F "image=@source.png" \
  -F "mask=@mask.png" \
  -F "prompt=Replace the masked area with a sunset beach" \
  -F "size=1024x1024"
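Since transparent areas of the mask mark the region to repaint, the mask PNG must actually carry an alpha channel. A quick local pre-check (a sketch, not part of the API; it only inspects the PNG IHDR header, where color type 4 is gray+alpha and 6 is RGBA):

```python
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_has_alpha(data: bytes) -> bool:
    # The IHDR chunk directly follows the 8-byte PNG signature:
    # 4-byte length, b"IHDR", width (4), height (4), bit depth (1), color type (1)
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    color_type = data[25]
    return color_type in (4, 6)
```

Run it on the mask before uploading, e.g. `png_has_alpha(open("mask.png", "rb").read())`.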

Request Parameters

images/generations (JSON)

| Field | Type | Description |
| --- | --- | --- |
| prompt | string (required) | Prompt for image generation |
| model | string | Defaults to gpt-image-2 |
| size | enum | auto, 1024x1024, 1024x1536, 1536x1024 |
| n | number | Currently only 1 |
| quality | enum | auto, low, medium, high, standard |
| background | enum | transparent, opaque, auto |
| output_format | enum | png, jpeg, webp |
| output_compression | number | 0-100, for JPEG/WebP |
| moderation | enum | auto, low |
| response_format | enum | url or b64_json (gpt-image-2 always returns b64_json) |
| user | string | End-user identifier for abuse monitoring |
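Client-side validation against the table above can catch a bad request before it is sent. A minimal sketch (the allowed values come straight from the table; the function name is illustrative):

```python
ALLOWED = {
    "size": {"auto", "1024x1024", "1024x1536", "1536x1024"},
    "quality": {"auto", "low", "medium", "high", "standard"},
    "background": {"transparent", "opaque", "auto"},
    "output_format": {"png", "jpeg", "webp"},
    "moderation": {"auto", "low"},
}

def validate_generation_params(params: dict) -> list:
    """Return a list of human-readable problems; an empty list means the params look valid."""
    errors = []
    if not params.get("prompt"):
        errors.append("prompt is required")
    for field, allowed in ALLOWED.items():
        value = params.get(field)
        if value is not None and value not in allowed:
            errors.append(f"{field}: {value!r} not in {sorted(allowed)}")
    if params.get("n") not in (None, 1):
        errors.append("n: only 1 is currently supported")
    return errors
```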

images/edits (multipart/form-data)

All fields above plus:
| Field | Type | Description |
| --- | --- | --- |
| image | file (required) | Source image |
| mask | file | Optional mask; transparent areas will be repainted |
| input_fidelity | enum | high or low; how closely the output follows the input image |

Response Format

Identical to OpenAI’s official response:
{
  "created": 1712345678,
  "data": [
    {
      "b64_json": "<base64 encoded image>",
      "revised_prompt": "<model's improved prompt>"
    }
  ],
  "background": "opaque",
  "output_format": "png",
  "quality": "high",
  "size": "1024x1024",
  "usage": {
    "input_tokens": 45,
    "input_tokens_details": { "image_tokens": 0, "text_tokens": 45 },
    "output_tokens": 3200,
    "total_tokens": 3245
  }
}
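Because gpt-image-2 always returns b64_json, handling the response reduces to decoding the first data entry. A sketch (the helper names are illustrative; the token arithmetic matches the usage block above, where 45 + 3200 = 3245):

```python
import base64

def decode_first_image(response: dict) -> bytes:
    # gpt-image-2 always populates b64_json, never url
    return base64.b64decode(response["data"][0]["b64_json"])

def total_tokens(usage: dict) -> int:
    # total_tokens is the sum of input and output tokens
    return usage["input_tokens"] + usage["output_tokens"]
```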
gpt-image-2 always returns the image as b64_json. The url response format is only kept for DALL·E compatibility and will be ignored for gpt-image models.

Notes & Limitations

  • n currently only supports 1; multi-image generation per request is not yet supported.
  • All standard OpenAI parameters are supported; unknown parameters are ignored gracefully.
  • response_format=url is accepted, but gpt-image-2 still returns b64_json.
  • Both the openai SDK and raw REST calls work identically.
