
Gemini Native API

Our Chat service also supports the native Google Gemini API format, so you can use the official Google GenAI libraries directly.
This API is designed for developers who prefer Google's native SDKs over the OpenAI-compatible format.

🌟 Key Features

  • ✅ Use the official Google GenAI SDK directly
  • ✅ Fully compatible with the Gemini API format
  • ✅ Supports both streaming and non-streaming responses
  • ✅ Access Gemini models through our service

📋 Available Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| `/chat/gemini/{apiVersion}/models/{model}:generateContent` | POST | Generate content (non-streaming) |
| `/chat/gemini/{apiVersion}/models/{model}:streamGenerateContent` | POST | Generate content (streaming, SSE) |
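As a sketch, the path parameters in the templates above can be filled in like this (the base URL and model name are the illustrative values used in the examples below, not fixed by the API):

```javascript
// Build a full request URL from the endpoint template.
// BASE_URL and the model name below are illustrative values.
const BASE_URL = 'https://api.mountsea.ai/chat/gemini';

function endpointUrl(apiVersion, model, method) {
  // method is 'generateContent' or 'streamGenerateContent'
  return `${BASE_URL}/${apiVersion}/models/${model}:${method}`;
}

console.log(endpointUrl('v1beta', 'gemini-3-flash', 'generateContent'));
// https://api.mountsea.ai/chat/gemini/v1beta/models/gemini-3-flash:generateContent
```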

💡 Quick Example

import { GoogleGenAI } from '@google/genai';

const API_KEY = 'your-api-key';
const API_VERSION = 'v1beta';
const BASE_URL = 'https://api.mountsea.ai/chat/gemini';

const client = new GoogleGenAI({
  apiKey: API_KEY,
  apiVersion: API_VERSION,
  httpOptions: {
    baseUrl: BASE_URL,
    headers: {
      "Authorization": `Bearer ${API_KEY}`,
    },
  },
});

// Generate content (non-streaming)
const response = await client.models.generateContent({
  model: 'gemini-3-flash',
  contents: [
    {
      role: 'user',
      parts: [{ text: 'Hello! Tell me a joke.' }]
    }
  ],
  config: {
    temperature: 1,
    maxOutputTokens: 1024,
  }
});

console.log(response.text);
Streaming response:
const stream = await client.models.generateContentStream({
  model: 'gemini-3-flash',
  contents: [
    {
      role: 'user',
      parts: [{ text: 'Write a short story about a robot.' }]
    }
  ],
  config: {
    temperature: 0.8,
    maxOutputTokens: 2048,
  }
});

for await (const chunk of stream) {
  process.stdout.write(chunk.text || '');
}
Installation:
npm install @google/genai

📤 Response Format

Non-streaming Response

{
  "candidates": [
    {
      "content": {
        "parts": [
          {
            "text": "Why don't scientists trust atoms? Because they make up everything!"
          }
        ],
        "role": "model"
      },
      "finishReason": "STOP",
      "safetyRatings": [...]
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 10,
    "candidatesTokenCount": 15,
    "totalTokenCount": 25
  }
}
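A minimal helper for pulling the reply text out of the structure above (a sketch based on the field names shown in the sample response; the SDK exposes this via `response.text`):

```javascript
// Concatenate the text parts of the first candidate.
// Mirrors the response shape shown above; returns '' if nothing matches.
function extractText(response) {
  const parts = response?.candidates?.[0]?.content?.parts ?? [];
  return parts.map((p) => p.text ?? '').join('');
}

const sample = {
  candidates: [
    { content: { parts: [{ text: 'Hello, ' }, { text: 'world!' }], role: 'model' } },
  ],
};
console.log(extractText(sample)); // Hello, world!
```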

Streaming Response (SSE)

data: {"candidates":[{"content":{"parts":[{"text":"Why"}],"role":"model"}}]}

data: {"candidates":[{"content":{"parts":[{"text":" don't"}],"role":"model"}}]}

data: {"candidates":[{"content":{"parts":[{"text":" scientists"}],"role":"model"}}]}

...
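If you consume the stream without the SDK, each SSE event is a `data:` line carrying one JSON chunk. A minimal parser, assuming the chunk shape shown above, could look like:

```javascript
// Accumulate message text from raw SSE output.
// Assumes each event line is `data: {...}` with the chunk shape shown above.
function textFromSse(raw) {
  let out = '';
  for (const line of raw.split('\n')) {
    if (!line.startsWith('data: ')) continue;
    const chunk = JSON.parse(line.slice('data: '.length));
    const parts = chunk.candidates?.[0]?.content?.parts ?? [];
    for (const p of parts) out += p.text ?? '';
  }
  return out;
}

const raw = [
  'data: {"candidates":[{"content":{"parts":[{"text":"Why"}],"role":"model"}}]}',
  '',
  'data: {"candidates":[{"content":{"parts":[{"text":" don\'t"}],"role":"model"}}]}',
].join('\n');
console.log(textFromSse(raw)); // Why don't
```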

🔧 Configuration Options

The config / generationConfig object supports the following parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| temperature | number | Controls randomness (0–2) |
| maxOutputTokens | number | Maximum number of tokens in the response |
| topP | number | Nucleus sampling parameter |
| topK | number | Top-K sampling parameter |
| systemInstruction | object | System prompt configuration |
| tools | array | Tools for function calling |

Using System Instructions

const response = await client.models.generateContent({
  model: 'gemini-3-flash',
  contents: [
    { role: 'user', parts: [{ text: 'What is the capital of France?' }] }
  ],
  config: {
    temperature: 0.7,
    maxOutputTokens: 1024,
    systemInstruction: {
      role: 'user',
      parts: [{ text: 'You are a helpful geography teacher. Answer concisely.' }]
    }
  }
});

📝 Multi-turn Conversations

{
  "contents": [
    {
      "role": "user",
      "parts": [{ "text": "Hi, my name is Alice." }]
    },
    {
      "role": "model",
      "parts": [{ "text": "Hello Alice! Nice to meet you." }]
    },
    {
      "role": "user",
      "parts": [{ "text": "What's my name?" }]
    }
  ]
}
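The API itself is stateless, so the client keeps the history: append each model reply to `contents` before sending the next user turn. A sketch of that bookkeeping (the actual reply would come from `generateContent`):

```javascript
// Maintain a multi-turn `contents` array by appending alternating turns.
// Pure helper; `role` is 'user' or 'model' as in the request body above.
function addTurn(contents, role, text) {
  return [...contents, { role, parts: [{ text }] }];
}

let contents = [];
contents = addTurn(contents, 'user', 'Hi, my name is Alice.');
contents = addTurn(contents, 'model', 'Hello Alice! Nice to meet you.');
contents = addTurn(contents, 'user', "What's my name?");
// `contents` now matches the request body shown above and can be passed
// to client.models.generateContent({ model, contents, config }).
console.log(contents.length); // 3
```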

To use the OpenAI-compatible API instead, see Chat Completions.