Streaming

The OpenAI API provides the ability to stream partial results back to the client for certain requests. We follow the Server-sent events standard. Our official Node and Python libraries include helpers that make parsing these events easier.

Streaming is supported for both the Chat Completions API and the Assistants API's Runs/CreateRun endpoint. This section focuses on streaming with Chat Completions. To learn more about streaming in the Assistants API, see here.

In Python, a streaming request looks like this:

from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
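The loop above prints each delta as it arrives; a common follow-up need is to reassemble the deltas into the full completion text. The sketch below shows that accumulation pattern using simulated chunk objects (the dataclasses here are illustrative stand-ins for the SDK's chunk models, so the example runs without an API call):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative stand-ins for the chunk objects the SDK yields;
# the real objects are models defined by the openai package.
@dataclass
class Delta:
    content: Optional[str]

@dataclass
class Choice:
    delta: Delta

@dataclass
class Chunk:
    choices: list

def collect_stream(stream) -> str:
    """Accumulate delta fragments into the full completion text."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta is not None:  # the final chunk carries no content
            parts.append(delta)
    return "".join(parts)

# Simulated stream: three content chunks, then a terminal empty delta.
fake_stream = [
    Chunk([Choice(Delta("This "))]),
    Chunk([Choice(Delta("is a "))]),
    Chunk([Choice(Delta("test"))]),
    Chunk([Choice(Delta(None))]),
]
print(collect_stream(fake_stream))  # This is a test
```

The same pattern works with a real `stream` from `client.chat.completions.create(..., stream=True)`: iterate, skip `None` deltas, and join the fragments.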

In Node/TypeScript, a streaming request looks like this:

import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  const stream = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Say this is a test" }],
    stream: true,
  });

  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || "");
  }
}

main();

Parsing Server-sent events

Parsing Server-sent events is non-trivial and should be done with care. Simple strategies, such as splitting on newlines, can lead to parsing errors. We recommend using existing client libraries whenever possible.
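To illustrate why naive newline splitting fails: network chunks can split an SSE event anywhere, including mid-line, so a parser must buffer until it sees the blank line that terminates an event. The sketch below is a deliberately minimal parser (it handles only `data:` lines and ignores the `event:`, `id:`, `retry:`, and comment fields the full SSE spec defines), intended to show the buffering idea rather than replace a client library:

```python
def iter_sse_data(byte_chunks):
    """Yield the data payload of each complete SSE event.

    Minimal sketch: buffers incoming bytes until a blank line
    (the event terminator) is seen, then extracts `data:` lines.
    Real SSE also defines `event:`, `id:`, `retry:` fields and
    ':' comment lines, which this sketch ignores.
    """
    buffer = b""
    for chunk in byte_chunks:
        buffer += chunk
        # Events are separated by a blank line ("\n\n").
        while b"\n\n" in buffer:
            event, buffer = buffer.split(b"\n\n", 1)
            data_lines = [
                line[len(b"data:"):].strip()
                for line in event.split(b"\n")
                if line.startswith(b"data:")
            ]
            if data_lines:
                yield b"\n".join(data_lines).decode()

# Simulated network chunks that split one event mid-line; splitting
# each chunk on newlines in isolation would misparse this.
chunks = [b'data: {"text": "he', b'llo"}\n\ndata: [DONE]\n\n']
print(list(iter_sse_data(chunks)))  # ['{"text": "hello"}', '[DONE]']
```

Even this small sketch omits many edge cases (`\r\n` line endings, multi-line `data:` fields across chunk boundaries, reconnection handling), which is exactly why the official libraries are the recommended path.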
