ChatGPT Proxy Access

Preface

https://github.com/psvmc/ZChatGPTAPIProxy

Setting Up the Proxy

  1. Click the Use this template button to create a new repository from it.
  2. Log in to the Cloudflare dashboard.
  3. On the account home page, select Pages > Create a project > Connect to Git.
  4. Select the repository you forked. In the Set up builds and deployments section, choose Next.js as your framework preset. That selection fills in the settings below.

The defaults are generally fine:

Configuration option  Value
Production branch     main
Framework preset      Next.js
Build command         npx @cloudflare/next-on-pages --experimental-minify
Build directory       .vercel/output/static

Under Environment variables (advanced), add one variable:

Variable name Value
NODE_VERSION 16

Click Save and Deploy to deploy, then click Continue to project to see the domain your proxy is served from.

Replace the official endpoint https://api.openai.com with https://xxx.pages.dev/api (where xxx.pages.dev is your own domain).

Note that the path gains an extra api segment.

(Screenshot illustrating the replaced endpoint omitted.)
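
For example, a minimal sketch of the mapping (xxx.pages.dev is a placeholder for your own Pages domain):

const OFFICIAL_BASE = "https://api.openai.com";
const PROXY_BASE = "https://xxx.pages.dev/api"; // your Pages domain plus the extra "api" segment

// Any official URL maps 1:1 onto the proxy:
const officialUrl = `${OFFICIAL_BASE}/v1/chat/completions`;
const proxiedUrl = officialUrl.replace(OFFICIAL_BASE, PROXY_BASE);
console.log(proxiedUrl); // https://xxx.pages.dev/api/v1/chat/completions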

Deploying on Your Own Server

Install Git

yum install -y git

Download the Project

cd /data
git clone https://github.com/psvmc/ZChatGPTAPIProxy.git

cd ZChatGPTAPIProxy

Install a Node Version Manager

China Mirror

Install

yum install -y curl

bash -c "$(curl -fsSL https://gitee.com/RubyMetric/nvm-cn/raw/main/install.sh)"

source ~/.bashrc

Uninstall

bash -c "$(curl -fsSL https://gitee.com/RubyMetric/nvm-cn/raw/main/uninstall.sh)"

Add the Node Download Mirror

export NVM_NODEJS_ORG_MIRROR=https://npm.taobao.org/mirrors/node

After adding the line above, run the following command in the terminal so it takes effect; you can then continue running the nvm commands.

source ~/.bashrc

Now run the following to list all available versions:

nvm ls-remote

Install Node 16

nvm install v16.20.2

Switch the npm Registry

# view the current configuration
npm config ls

npm config set registry https://registry.npm.taobao.org

Run

npm install
npm run build
npm run start
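
Once the server is up, a quick smoke test can be run from a browser console or Node 18+; this is a sketch, and the port (3000, the Next.js default) and the placeholder key are assumptions, so adjust them to your deployment:

// Hypothetical smoke test against a self-hosted instance
async function smokeTest() {
  const res = await fetch("http://127.0.0.1:3000/api/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": "Bearer sk-xxxxxxxxxxxx", // placeholder key
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: "ping" }]
    })
  });
  console.log(await res.json());
}

smokeTest();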

Calling the API

Find the Service URL

https://dash.cloudflare.com/

Calling Directly from JavaScript

Single (Non-Streaming) Call

const requestOptions = {
  method: 'POST',
  headers: {
    "Authorization": "Bearer sk-xxxxxxxxxxxx",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "你好"
      }
    ]
  })
};

fetch("https://openai.1rmb.tk/v1/chat/completions", requestOptions)
  .then(response => response.text())
  .then(result => console.log(result))
  .catch(error => console.log('error', error));

Response

{
  "id": "chatcmpl-76DKosLtzsQfBtq9YTxEkUlCuOnZt",
  "object": "chat.completion",
  "created": 1681715582,
  "model": "gpt-3.5-turbo-0301",
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 19,
    "total_tokens": 29
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "你好呀!有什么可以帮助你的吗?"
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}
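
To work with the fields rather than the raw text, parse the body (for instance, use response.json() instead of response.text() in the example above) and read choices[0].message.content:

// result is the parsed JSON object shown above
const reply = result.choices[0].message.content; // "你好呀!有什么可以帮助你的吗?"
const totalTokens = result.usage.total_tokens;   // 29
console.log(reply, totalTokens);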

Streaming Call

let url = "https://zchatgptapiproxy.pages.dev/api/v1/chat/completions";
let bodyStr = JSON.stringify({
  "model": "gpt-3.5-turbo",
  "stream": true,
  "messages": [
    {
      "role": "user",
      "content": "JS求和的方法"
    }
  ]
});

let header = {
  "Authorization": "Bearer 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",
  "Content-Type": "application/json"
};

async function chatMessage() {
  const response = await fetch(url, {
    method: 'POST',
    headers: header,
    body: bodyStr
  })
  const reader = response.body.getReader()
  const decoder = new TextDecoder()
  let packed
  while (!(packed = await reader.read()).done) {
    let result = decoder.decode(packed.value) // decode the Uint8Array chunk into a string
    const lines = result.trim().split("\n\n") // split into individual SSE events
    for (let i = 0; i < lines.length; i++) {
      let lineItem = lines[i];
      if (lineItem.indexOf("data:") !== -1) {
        lineItem = lineItem.substring(6) // strip the leading "data: "
        if (lineItem === '[DONE]') { // stream finished
          return
        }
        let data = JSON.parse(lineItem)
        if (!data['choices'][0]["finish_reason"]) {
          let content = data['choices'][0]["delta"]["content"];
          if (content) {
            document.write(content.replace(/\n/g, "<br>"))
          }
        }
      } else {
        let data = JSON.parse(lineItem)
        console.info(data['error']["message"]);
      }
    }
  }
}

chatMessage();

Returned Data

When a content chunk arrives

{
  "id": "chatcmpl-76EPZVkuBAJ3z25zxdUGz6zL9onFd",
  "object": "chat.completion.chunk",
  "created": 1681719721,
  "model": "gpt-3.5-turbo-0301",
  "choices": [
    {
      "delta": {
        "content": "用"
      },
      "index": 0,
      "finish_reason": null
    }
  ]
}

When the stream ends

{
  "id": "chatcmpl-76EPZVkuBAJ3z25zxdUGz6zL9onFd",
  "object": "chat.completion.chunk",
  "created": 1681719721,
  "model": "gpt-3.5-turbo-0301",
  "choices": [
    {
      "delta": {},
      "index": 0,
      "finish_reason": "stop"
    }
  ]
}
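
A small sketch of how these two chunk shapes are typically consumed: append each delta.content fragment until finish_reason becomes "stop" (the chatMessage() example above does the same thing inline):

// chunk is one parsed "data:" payload from the stream
let answer = "";
function onChunk(chunk) {
  const choice = chunk.choices[0];
  if (choice.finish_reason === "stop") {
    console.log("full answer:", answer); // the stream is complete
  } else if (choice.delta.content) {
    answer += choice.delta.content;      // accumulate the fragments
  }
}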

Calling from Node.js

Add the Dependency

npm install chatgpt

Call

import { ChatGPTAPI } from 'chatgpt'

async function example() {
  const api = new ChatGPTAPI({
    apiKey: "sk-xxxxxxxxxxxxxx",
    apiBaseUrl: "https://openai.1rmb.tk/v1"
  })

  const res = await api.sendMessage('Hello World!')
  console.log(res.text)
}

example()
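
To keep a multi-turn conversation going, the chatgpt package lets you pass the previous reply's id as parentMessageId; a sketch under that assumption (adjust if your package version differs):

import { ChatGPTAPI } from 'chatgpt'

async function conversation() {
  const api = new ChatGPTAPI({
    apiKey: "sk-xxxxxxxxxxxxxx",            // placeholder key
    apiBaseUrl: "https://openai.1rmb.tk/v1" // same proxy base as above
  })

  const first = await api.sendMessage('Hello World!')
  console.log(first.text)

  // Chain the follow-up to the first reply so the model keeps the context
  const second = await api.sendMessage('Tell me more.', {
    parentMessageId: first.id
  })
  console.log(second.text)
}

conversation()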

Calling from Go

https://github.com/sashabaranov/go-openai

go get github.com/sashabaranov/go-openai

Example

package main

import (
	"context"
	"errors"
	"fmt"
	"io"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	c := openai.NewClient("your token")
	ctx := context.Background()

	req := openai.ChatCompletionRequest{
		Model:     openai.GPT3Dot5Turbo,
		MaxTokens: 2000,
		Messages: []openai.ChatCompletionMessage{
			{
				Role:    openai.ChatMessageRoleUser,
				Content: "Lorem ipsum",
			},
		},
		Stream: true,
	}
	stream, err := c.CreateChatCompletionStream(ctx, req)
	if err != nil {
		fmt.Printf("ChatCompletionStream error: %v\n", err)
		return
	}
	defer stream.Close()

	fmt.Printf("Stream response: ")
	for {
		response, err := stream.Recv()
		if errors.Is(err, io.EOF) {
			fmt.Println("\nStream finished")
			return
		}

		if err != nil {
			fmt.Printf("\nStream error: %v\n", err)
			return
		}

		fmt.Print(response.Choices[0].Delta.Content)
	}
}

Custom Base URL

config := openai.DefaultConfig("sk-o2JMCcPYmFJapl48iWyHT3BlbkFJYhP8WjNGom5uFc10100")
config.BaseURL = "https://zchatgptapiproxy.pages.dev/api/v1"
c := openai.NewClientWithConfig(config)

Testing the Official API

export OPENAI_API_KEY="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"

List the currently supported models

curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"

Chat

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello!"}]
  }'