Deploy GKE Inference Gateway


This page describes how to deploy GKE Inference Gateway.

This page is intended for networking specialists responsible for managing GKE infrastructure and for platform administrators who manage AI workloads.

Before reading this page, make sure that you're familiar with the following:

GKE Inference Gateway is an extension of the Google Kubernetes Engine (GKE) Gateway that optimizes the serving of generative AI applications. It efficiently manages and scales AI workloads on GKE, enables workload-specific performance objectives such as latency, and improves resource utilization, observability, and AI safety.

Before you begin

Before you start, make sure that you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
  • If needed, enable the Compute Engine API, the Network Services API, and the Model Armor API.

    Go to Enable access to APIs and follow the instructions. Alternatively, you can enable these APIs from the command line, as shown in the example after this list.
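
The following gcloud command enables the APIs used in this guide; the service names follow the standard PRODUCT.googleapis.com pattern, and you can omit any API that you don't need:

gcloud services enable \
    container.googleapis.com \
    compute.googleapis.com \
    networkservices.googleapis.com \
    modelarmor.googleapis.com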

GKE Gateway Controller requirements

  • GKE version 1.32.3.
  • Google Cloud CLI version 407.0.0 or later.
  • The Gateway API is supported on VPC-native clusters only.
  • You must enable a proxy-only subnet.
  • Your cluster must have the HttpLoadBalancing add-on enabled.
  • If you are using Istio, you must upgrade Istio to one of the following versions:
    • 1.15.2 or later
    • 1.14.5 or later
    • 1.13.9 or later
  • If you are using a Shared VPC, then in the host project you must grant the Compute Network User role to the GKE service account of the service project (see the example command after this list).
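
Assuming the standard GKE service agent naming, the role can be granted on the host project like this; HOST_PROJECT_ID and SERVICE_PROJECT_NUMBER are placeholders:

gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
    --member="serviceAccount:service-SERVICE_PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com" \
    --role="roles/compute.networkUser"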

Restrictions and limitations

The following restrictions apply:

  • Multi-cluster Gateways are not supported.
  • GKE Inference Gateway is supported only on the gke-l7-regional-external-managed and gke-l7-rilb GatewayClass resources.
  • Cross-region internal Application Load Balancers are not supported.

Configure GKE Inference Gateway

To configure GKE Inference Gateway, consider the following example: a team runs vLLM with the Llama3 model and is actively experimenting with two different LoRA fine-tuned adapters, "food-review" and "cad-fabricator".

The high-level workflow for configuring GKE Inference Gateway is as follows:

  1. Prepare your environment: set up the required infrastructure and components.
  2. Create an inference pool: define a pool of model servers by using the InferencePool custom resource.
  3. Specify model serving objectives: specify model objectives by using the InferenceModel custom resource.
  4. Create the Gateway: expose the inference service by using the Gateway API.
  5. Create the HTTPRoute: define how HTTP traffic is routed to the inference service.
  6. Send inference requests: make requests to the deployed model.

Prepare your environment

  1. Install Helm.
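
    One option is the official installer script (a sketch; see the Helm documentation for other installation methods):

    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
    chmod 700 get_helm.sh
    ./get_helm.sh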

  2. Create a GKE cluster:
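
    The following command is a minimal sketch that creates a VPC-native cluster with the Gateway API enabled; CLUSTER_NAME and LOCATION are placeholders, and the version and machine type are assumptions to adapt to your environment. You also need a node pool that provides the accelerators your model server requests (nvidia-h100-80gb in this example):

    gcloud container clusters create CLUSTER_NAME \
        --location=LOCATION \
        --cluster-version=1.32.3 \
        --gateway-api=standard \
        --enable-ip-alias \
        --machine-type=e2-standard-4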

  3. To install the InferencePool and InferenceModel Custom Resource Definitions (CRDs) in your GKE cluster, run the following command:

    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/releases/download/VERSION/manifests.yaml
    

    Replace VERSION with the version of the CRDs that you want to install (for example, v0.3.0).

  4. If you are using a GKE version earlier than v1.32.2-gke.1182001 and you want to use Model Armor with GKE Inference Gateway, you must install the traffic and routing extension CRDs:

    kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-gateway-api/refs/heads/main/config/crd/networking.gke.io_gcptrafficextensions.yaml
    kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/gke-gateway-api/refs/heads/main/config/crd/networking.gke.io_gcproutingextensions.yaml
    
  5. To set up authorization to scrape metrics, create the inference-gateway-sa-metrics-reader-secret Secret:

    kubectl apply -f - <<EOF
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: inference-gateway-metrics-reader
    rules:
    - nonResourceURLs:
      - /metrics
      verbs:
      - get
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: inference-gateway-sa-metrics-reader
      namespace: default
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: inference-gateway-sa-metrics-reader-role-binding
      namespace: default
    subjects:
    - kind: ServiceAccount
      name: inference-gateway-sa-metrics-reader
      namespace: default
    roleRef:
      kind: ClusterRole
      name: inference-gateway-metrics-reader
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: inference-gateway-sa-metrics-reader-secret
      namespace: default
      annotations:
        kubernetes.io/service-account.name: inference-gateway-sa-metrics-reader
    type: kubernetes.io/service-account-token
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: inference-gateway-sa-metrics-reader-secret-read
    rules:
    - resources:
      - secrets
      apiGroups: [""]
      verbs: ["get", "list", "watch"]
      resourceNames: ["inference-gateway-sa-metrics-reader-secret"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: gmp-system:collector:inference-gateway-sa-metrics-reader-secret-read
      namespace: default
    roleRef:
      name: inference-gateway-sa-metrics-reader-secret-read
      kind: ClusterRole
      apiGroup: rbac.authorization.k8s.io
    subjects:
    - name: collector
      namespace: gmp-system
      kind: ServiceAccount
    EOF
    

Create the model server and model deployment

This section shows how to deploy a model server and a model. The example uses a vLLM model server with the Llama3 model. The Deployment is labeled app:vllm-llama3-8b-instruct. The Deployment also uses two LoRA adapters from Hugging Face named food-review and cad-fabricator.

You can adapt this example to use your own model server container and model, serving port, and deployment name. You can also configure LoRA adapters in the Deployment, or serve the base model. The following steps describe how to create the required Kubernetes resources.

  1. Create a Kubernetes Secret to store your Hugging Face token. This token is used to access the LoRA adapters:

    kubectl create secret generic hf-token --from-literal=token=HF_TOKEN
    

    Replace HF_TOKEN with your Hugging Face token.

  2. To deploy on the nvidia-h100-80gb accelerator type, save the following manifest as vllm-llama3-8b-instruct.yaml. This manifest defines a Kubernetes Deployment with your model and model server:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: vllm-llama3-8b-instruct
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: vllm-llama3-8b-instruct
      template:
        metadata:
          labels:
            app: vllm-llama3-8b-instruct
        spec:
          containers:
            - name: vllm
              image: "vllm/vllm-openai:latest"
              imagePullPolicy: Always
              command: ["python3", "-m", "vllm.entrypoints.openai.api_server"]
              args:
              - "--model"
              - "meta-llama/Llama-3.1-8B-Instruct"
              - "--tensor-parallel-size"
              - "1"
              - "--port"
              - "8000"
              - "--enable-lora"
              - "--max-loras"
              - "2"
              - "--max-cpu-loras"
              - "12"
              env:
                # Enabling LoRA support temporarily disables automatic v1, we want to force it on
                # until 0.8.3 vLLM is released.
                - name: PORT
                  value: "8000"
                - name: HUGGING_FACE_HUB_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: hf-token
                      key: token
                - name: VLLM_ALLOW_RUNTIME_LORA_UPDATING
                  value: "true"
              ports:
                - containerPort: 8000
                  name: http
                  protocol: TCP
              lifecycle:
                preStop:
                  # vLLM stops accepting connections when it receives SIGTERM, so we need to sleep
                  # to give upstream gateways a chance to take us out of rotation. The time we wait
                  # is dependent on the time it takes for all upstreams to completely remove us from
                  # rotation. Older or simpler load balancers might take upwards of 30s, but we expect
                  # our deployment to run behind a modern gateway like Envoy which is designed to
                  # probe for readiness aggressively.
                  sleep:
                    # Upstream gateway probers for health should be set on a low period, such as 5s,
                    # and the shorter we can tighten that bound the faster that we release
                    # accelerators during controlled shutdowns. However, we should expect variance,
                    # as load balancers may have internal delays, and we don't want to drop requests
                    # normally, so we're often aiming to set this value to a p99 propagation latency
                    # of readiness -> load balancer taking backend out of rotation, not the average.
                    #
                    # This value is generally stable and must often be experimentally determined on
                    # for a given load balancer and health check period. We set the value here to
                    # the highest value we observe on a supported load balancer, and we recommend
                    # tuning this value down and verifying no requests are dropped.
                    #
                    # If this value is updated, be sure to update terminationGracePeriodSeconds.
                    #
                    seconds: 30
                  #
                  # IMPORTANT: preStop.sleep is beta as of Kubernetes 1.30 - for older versions
                  # replace with this exec action.
                  #exec:
                  #  command:
                  #  - /usr/bin/sleep
                  #  - 30
              livenessProbe:
                httpGet:
                  path: /health
                  port: http
                  scheme: HTTP
                # vLLM's health check is simple, so we can more aggressively probe it.  Liveness
                # check endpoints should always be suitable for aggressive probing.
                periodSeconds: 1
                successThreshold: 1
                # vLLM has a very simple health implementation, which means that any failure is
                # likely significant. However, any liveness triggered restart requires the very
                # large core model to be reloaded, and so we should bias towards ensuring the
                # server is definitely unhealthy vs immediately restarting. Use 5 attempts as
                # evidence of a serious problem.
                failureThreshold: 5
                timeoutSeconds: 1
              readinessProbe:
                httpGet:
                  path: /health
                  port: http
                  scheme: HTTP
                # vLLM's health check is simple, so we can more aggressively probe it.  Readiness
                # check endpoints should always be suitable for aggressive probing, but may be
                # slightly more expensive than liveness probes.
                periodSeconds: 1
                successThreshold: 1
                # vLLM has a very simple health implementation, which means that any failure is
                # likely significant,
                failureThreshold: 1
                timeoutSeconds: 1
              # We set a startup probe so that we don't begin directing traffic or checking
              # liveness to this instance until the model is loaded.
              startupProbe:
                # Failure threshold is when we believe startup will not happen at all, and is set
                # to the maximum possible time we believe loading a model will take. In our
                # default configuration we are downloading a model from HuggingFace, which may
                # take a long time, then the model must load into the accelerator. We choose
                # 10 minutes as a reasonable maximum startup time before giving up and attempting
                # to restart the pod.
                #
                # IMPORTANT: If the core model takes more than 10 minutes to load, pods will crash
                # loop forever. Be sure to set this appropriately.
                failureThreshold: 3600
                # Set delay to start low so that if the base model changes to something smaller
                # or an optimization is deployed, we don't wait unnecessarily.
                initialDelaySeconds: 2
                # As a startup probe, this stops running and so we can more aggressively probe
                # even a moderately complex startup - this is a very important workload.
                periodSeconds: 1
                httpGet:
                  # vLLM does not start the OpenAI server (and hence make /health available)
                  # until models are loaded. This may not be true for all model servers.
                  path: /health
                  port: http
                  scheme: HTTP
    
              resources:
                limits:
                  nvidia.com/gpu: 1
                requests:
                  nvidia.com/gpu: 1
              volumeMounts:
                - mountPath: /data
                  name: data
                - mountPath: /dev/shm
                  name: shm
                - name: adapters
                  mountPath: "/adapters"
          initContainers:
            - name: lora-adapter-syncer
              tty: true
              stdin: true
              image: us-central1-docker.pkg.dev/k8s-staging-images/gateway-api-inference-extension/lora-syncer:main
              restartPolicy: Always
              imagePullPolicy: Always
              env:
                - name: DYNAMIC_LORA_ROLLOUT_CONFIG
                  value: "/config/configmap.yaml"
              volumeMounts: # DO NOT USE subPath, dynamic configmap updates don't work on subPaths
              - name: config-volume
                mountPath:  /config
          restartPolicy: Always
    
          # vLLM allows VLLM_PORT to be specified as an environment variable, but a user might
          # create a 'vllm' service in their namespace. That auto-injects VLLM_PORT in docker
          # compatible form as `tcp://<IP>:<PORT>` instead of the numeric value vLLM accepts
          # causing CrashLoopBackoff. Set service environment injection off by default.
          enableServiceLinks: false
    
          # Generally, the termination grace period needs to last longer than the slowest request
          # we expect to serve plus any extra time spent waiting for load balancers to take the
          # model server out of rotation.
          #
          # An easy starting point is the p99 or max request latency measured for your workload,
          # although LLM request latencies vary significantly if clients send longer inputs or
          # trigger longer outputs. Since steady state p99 will be higher than the latency
          # to drain a server, you may wish to slightly lower this value either experimentally or
          # via the calculation below.
          #
          # For most models you can derive an upper bound for the maximum drain latency as
          # follows:
          #
          #   1. Identify the maximum context length the model was trained on, or the maximum
          #      allowed length of output tokens configured on vLLM (llama2-7b was trained to
          #      4k context length, while llama3-8b was trained to 128k).
          #   2. Output tokens are the more compute intensive to calculate and the accelerator
          #      will have a maximum concurrency (batch size) - the time per output token at
          #      maximum batch with no prompt tokens being processed is the slowest an output
          #      token can be generated (for this model it would be about 100ms TPOT at a max
          #      batch size around 50)
          #   3. Calculate the worst case request duration if a request starts immediately
          #      before the server stops accepting new connections - generally when it receives
          #      SIGTERM (for this model that is about 4096 / 10 ~ 40s)
          #   4. If there are any requests generating prompt tokens that will delay when those
          #      output tokens start, and prompt token generation is roughly 6x faster than
          #      compute-bound output token generation, so add 20% to the time from above (40s +
          #      16s ~ 55s)
          #
          # Thus we think it will take us at worst about 55s to complete the longest possible
          # request the model is likely to receive at maximum concurrency (highest latency)
          # once requests stop being sent.
          #
          # NOTE: This number will be lower than steady state p99 latency since we stop receiving
          #       new requests which require continuous prompt token computation.
          # NOTE: The max timeout for backend connections from gateway to model servers should
          #       be configured based on steady state p99 latency, not drain p99 latency
          #
          #   5. Add the time the pod takes in its preStop hook to allow the load balancers have
          #      stopped sending us new requests (55s + 30s ~ 85s)
          #
          # Because termination grace period controls when the Kubelet forcibly terminates a
          # stuck or hung process (a possibility due to a GPU crash), there is operational safety
          # in keeping the value roughly proportional to the time to finish serving. There is also
          # value in adding a bit of extra time to deal with unexpectedly long workloads.
          #
          #   6. Add a 50% safety buffer to this time since the operational impact should be low
          #      (85s * 1.5 ~ 130s)
          #
          # One additional source of drain latency is that some workloads may run close to
          # saturation and have queued requests on each server. Since traffic in excess of the
          # max sustainable QPS will result in timeouts as the queues grow, we assume that failure
          # to drain in time due to excess queues at the time of shutdown is an expected failure
          # mode of server overload. If your workload occasionally experiences high queue depths
          # due to periodic traffic, consider increasing the safety margin above to account for
          # time to drain queued requests.
          terminationGracePeriodSeconds: 130
          nodeSelector:
            cloud.google.com/gke-accelerator: "nvidia-h100-80gb"
          volumes:
            - name: data
              emptyDir: {}
            - name: shm
              emptyDir:
                medium: Memory
            - name: adapters
              emptyDir: {}
            - name: config-volume
              configMap:
                name: vllm-llama3-8b-adapters
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: vllm-llama3-8b-adapters
    data:
      configmap.yaml: |
          vLLMLoRAConfig:
            name: vllm-llama3.1-8b-instruct
            port: 8000
            defaultBaseModel: meta-llama/Llama-3.1-8B-Instruct
            ensureExist:
              models:
              - id: food-review
                source: Kawon/llama3.1-food-finetune_v14_r8
              - id: cad-fabricator
                source: redcathode/fabricator
    ---
    kind: HealthCheckPolicy
    apiVersion: networking.gke.io/v1
    metadata:
      name: health-check-policy
      namespace: default
    spec:
      targetRef:
        group: "inference.networking.x-k8s.io"
        kind: InferencePool
        name: vllm-llama3-8b-instruct
      default:
        config:
          type: HTTP
          httpHealthCheck:
              requestPath: /health
              port: 8000
    
  3. Apply the sample manifest to your cluster:

    kubectl apply -f vllm-llama3-8b-instruct.yaml
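
    Model download and startup can take several minutes. Before you continue, you can wait for the Pods to become ready; the label below matches the Deployment in this example:

    kubectl wait --for=condition=Ready pod -l app=vllm-llama3-8b-instruct --timeout=15m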
    

After you apply the manifest, consider the following key fields and parameters:

  • replicas: specifies the number of Pods for the Deployment.
  • image: specifies the Docker image of the model server.
  • command: specifies the command to run when the container starts.
  • args: specifies the arguments to pass to the command.
  • env: specifies environment variables for the container.
  • ports: specifies the ports exposed by the container.
  • resources: specifies the resource requests and limits for the container, such as GPUs.
  • volumeMounts: specifies how volumes are mounted into the container.
  • initContainers: specifies containers that run before the application container.
  • restartPolicy: specifies the restart policy for the Pods.
  • terminationGracePeriodSeconds: specifies the grace period for Pod termination.
  • volumes: specifies the volumes used by the Pods.

You can modify these fields to match your specific requirements.

Create an inference pool

The InferencePool Kubernetes custom resource defines a group of Pods that share a common base large language model (LLM) and compute configuration. The selector field specifies which Pods belong to the pool; the labels in this selector must exactly match the labels applied to your model server Pods. The targetPort field defines the port that the model server uses inside the Pods. The extensionRef field references an extension service that provides additional capabilities for the inference pool. The InferencePool enables GKE Inference Gateway to route traffic to your model server Pods.

Before you create the InferencePool, make sure that the Pods that the InferencePool selects are already running.

To create an InferencePool by using Helm, perform the following steps:

helm install vllm-llama3-8b-instruct \
  --set inferencePool.modelServers.matchLabels.app=vllm-llama3-8b-instruct \
  --set provider.name=gke \
  --version v0.3.0 \
  oci://registry.k8s.io/gateway-api-inference-extension/charts/inferencepool

Change the following field to match your Deployment:

  • inferencePool.modelServers.matchLabels.app: the key of the label used to select your model server Pods.

The Helm installation automatically installs the required timeout policy, the endpoint picker, and the Pods needed for observability.

This creates an InferencePool object named vllm-llama3-8b-instruct that references the model endpoint services in your Pods. It also creates an endpoint picker Deployment named app:vllm-llama3-8b-instruct-epp for the created InferencePool.
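
To verify the result, you can run kubectl get inferencepool vllm-llama3-8b-instruct -o yaml. The created object is roughly equivalent to the following sketch; the field names follow the v1alpha2 InferencePool CRD, and the exact values that the chart renders, including the endpoint picker name, may differ:

apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: vllm-llama3-8b-instruct
spec:
  # Labels that must match the model server Pods created by the Deployment.
  selector:
    app: vllm-llama3-8b-instruct
  # Port that the model server listens on inside the Pods.
  targetPortNumber: 8000
  # Endpoint picker extension service installed by the Helm chart (name assumed).
  extensionRef:
    name: vllm-llama3-8b-instruct-epp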

Specify model serving objectives

The InferenceModel custom resource defines a specific model to serve, including support for LoRA-tuned models, along with its serving criticality. You must specify which models are served on an InferencePool by creating InferenceModel resources. These InferenceModel resources can reference the base model or the LoRA adapters that the model servers in the InferencePool support.

The modelName field specifies the name of the base model or LoRA adapter. The criticality field specifies the serving criticality of the model. The poolRef field specifies the InferencePool that this model is served on.

To create an InferenceModel, perform the following steps:

  1. Save the following sample manifest as inferencemodel.yaml:

    apiVersion: inference.networking.x-k8s.io/v1alpha2
    kind: InferenceModel
    metadata:
      name: inferencemodel-sample
    spec:
      modelName: MODEL_NAME
      criticality: VALUE
      poolRef:
        name: INFERENCE_POOL_NAME
    

    Replace the following:

    • MODEL_NAME: the name of the base model or LoRA adapter. For example, food-review.
    • VALUE: the chosen serving criticality. Choose from Critical, Standard, or Sheddable. For example, Standard.
    • INFERENCE_POOL_NAME: the name of the InferencePool that you created in the previous step. For example, vllm-llama3-8b-instruct.
  2. Apply the sample manifest to your cluster:

    kubectl apply -f inferencemodel.yaml
    

The following example creates an InferenceModel object that configures the food-review LoRA model on the vllm-llama3-8b-instruct InferencePool with a Standard serving criticality. The InferenceModel object also configures the base model to be served with a Critical priority level.

apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: food-review
spec:
  modelName: food-review
  criticality: Standard
  poolRef:
    name: vllm-llama3-8b-instruct
  targetModels:
  - name: food-review
    weight: 100

---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: llama3-base-model
spec:
  modelName: meta-llama/Llama-3.1-8B-Instruct
  criticality: Critical
  poolRef:
    name: vllm-llama3-8b-instruct
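
As with the earlier manifests, you can save this example to a file and apply it with kubectl apply -f. To confirm which models are configured, list the InferenceModel resources:

kubectl get inferencemodels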

Create the Gateway

The Gateway resource is the entry point for external traffic into your Kubernetes cluster. It defines the listeners that accept incoming connections.

GKE Inference Gateway works with the following Gateway classes:

  • gke-l7-rilb: for regional internal Application Load Balancers.
  • gke-l7-regional-external-managed: for regional external Application Load Balancers.

For more information, see the Gateway classes documentation.

To create a Gateway, perform the following steps:

  1. Save the following sample manifest as gateway.yaml:

    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: GATEWAY_NAME
    spec:
      gatewayClassName: GATEWAY_CLASS
      listeners:
        - protocol: HTTP
          port: 80
          name: http
    

    Replace GATEWAY_NAME with a unique name for your Gateway resource (for example, inference-gateway), and replace GATEWAY_CLASS with the Gateway class that you want to use (for example, gke-l7-regional-external-managed).

  2. Apply the manifest to your cluster:

    kubectl apply -f gateway.yaml
    

Note: For more information about configuring TLS to secure your Gateway with HTTPS, see the TLS configuration section of the GKE documentation.
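
Provisioning the load balancer can take several minutes. One way to confirm that the Gateway is ready and has been assigned an address (the Programmed condition is defined by the Gateway API) is:

kubectl wait gateway/GATEWAY_NAME --for=condition=Programmed --timeout=15m
kubectl get gateway/GATEWAY_NAME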

Create the HTTPRoute

The HTTPRoute resource defines how the GKE Gateway routes incoming HTTP requests to backend services, which in this context is your InferencePool. The HTTPRoute resource specifies matching rules (for example, headers or paths) and the backend that traffic should be forwarded to.

  1. To create an HTTPRoute, save the following sample manifest as httproute.yaml:

    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: HTTPROUTE_NAME
    spec:
      parentRefs:
      - name: GATEWAY_NAME
      rules:
      - matches:
        - path:
            type: PathPrefix
            value: PATH_PREFIX
        backendRefs:
        - name: INFERENCE_POOL_NAME
          group: inference.networking.x-k8s.io
          kind: InferencePool
    

    Replace the following:

    • HTTPROUTE_NAME: a unique name for your HTTPRoute resource. For example, my-route.
    • GATEWAY_NAME: the name of the Gateway resource that you created. For example, inference-gateway.
    • PATH_PREFIX: the path prefix used to match incoming requests. For example, / to match everything.
    • INFERENCE_POOL_NAME: the name of the InferencePool resource that you want to route traffic to. For example, vllm-llama3-8b-instruct.
  2. Apply the manifest to your cluster:

    kubectl apply -f httproute.yaml
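
    To verify that the Gateway has accepted the route and resolved the InferencePool backend, you can inspect the route status; the Accepted and ResolvedRefs conditions are defined by the Gateway API:

    kubectl get httproute HTTPROUTE_NAME -o yaml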
    

Send an inference request

After you configure GKE Inference Gateway, you can send inference requests to your deployed model. This lets you generate text based on your input prompt and the parameters that you specify.

To send inference requests, perform the following steps:

  1. To get the Gateway endpoint, run the following command:

    IP=$(kubectl get gateway/GATEWAY_NAME -o jsonpath='{.status.addresses[0].value}')
    PORT=PORT_NUMBER # Use 80 for HTTP
    

    Replace the following:

    • GATEWAY_NAME: the name of the Gateway resource.
    • PORT_NUMBER: the port number that you configured in the Gateway.
  2. To send a request to the /v1/completions endpoint by using curl, run the following command:

    curl -i -X POST ${IP}:${PORT}/v1/completions \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -d '{
        "model": "MODEL_NAME",
        "prompt": "PROMPT_TEXT",
        "max_tokens": MAX_TOKENS,
        "temperature": "TEMPERATURE"
    }'
    

    Replace the following:

    • MODEL_NAME: the name of the model or LoRA adapter to use.
    • PROMPT_TEXT: the input prompt for the model.
    • MAX_TOKENS: the maximum number of tokens to generate in the response.
    • TEMPERATURE: controls the randomness of the output. Use 0 for deterministic output, or a higher value for more creative output.

The following example shows how to send a sample request to GKE Inference Gateway:

curl -i -X POST ${IP}:${PORT}/v1/completions -H 'Content-Type: application/json' -H "Authorization: Bearer $(gcloud auth print-access-token)" -d '{
    "model": "food-review",
    "prompt": "What is the best pizza in the world?",
    "max_tokens": 2048,
    "temperature": "0"
}'
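
Because the base model is also configured through an InferenceModel, you can target it in the same way by setting model to its modelName. For example (the prompt is illustrative and the output will vary):

curl -i -X POST ${IP}:${PORT}/v1/completions -H 'Content-Type: application/json' -H "Authorization: Bearer $(gcloud auth print-access-token)" -d '{
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "prompt": "Summarize the benefits of container orchestration in two sentences.",
    "max_tokens": 256,
    "temperature": 0
}'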

Note the following behaviors:

  • Request body: the request body can include additional parameters such as stop and top_p. For a complete list of options, see the OpenAI API specification.
  • Error handling: implement proper error handling in your client code to handle potential errors in the response. For example, check the HTTP status code in the curl response; a non-200 status code generally indicates an error.
  • Authentication and authorization: for production deployments, secure your API endpoint with authentication and authorization mechanisms. Include the appropriate headers (for example, Authorization) in your requests.

What's next