This page describes how to configure and use the Apigee Model Armor policies to
protect your AI applications. These policies
sanitize the user prompts sent to and responses received from large language models (LLMs).
Using these policies in your Apigee API proxies can mitigate the risks associated with LLM
usage by leveraging Model Armor to detect prompt injection, prevent jailbreak attacks,
apply responsible AI filters, filter malicious URLs, and protect sensitive data.
To learn more about the benefits of integrating with Model Armor, see
Model Armor overview.
Before you begin
Before you begin, make sure to complete the following tasks:
Sign in to your Google Cloud account. If you're new to
Google Cloud,
create an account to evaluate how our products perform in
real-world scenarios. New customers also get $300 in free credits to
run, test, and deploy workloads.
In the Google Cloud console, on the project selector page,
select or create a Google Cloud project.
Confirm that you have a Comprehensive environment available in your Apigee instance.
Model Armor policies can only be deployed in Comprehensive environments.
Required roles
To get the permissions that
you need to create and use the Apigee Model Armor policies,
ask your administrator to grant you the
following IAM roles on the service account you use to deploy Apigee proxies:
Set the Model Armor regional endpoint
To use Model Armor with Apigee, you must set the Model Armor regional endpoint.
The regional endpoint is used by the Model Armor policies to send requests to the Model Armor service.
Set the regional endpoint:
gcloud config set api_endpoint_overrides/modelarmor "https://modelarmor.$LOCATION.rep.googleapis.com/"
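Create a Model Armor template
Create a Model Armor template for the filters you want to apply. The following command is a sketch only: it assumes the gcloud model-armor templates create command group and illustrative filter flags, so confirm the exact flag names and values in the Model Armor CLI reference before running it:
gcloud model-armor templates create TEMPLATE_NAME \
  --location=$LOCATION \
  --rai-settings-filters='[{"filterType": "HATE_SPEECH", "confidenceLevel": "MEDIUM_AND_ABOVE"}, {"filterType": "HARASSMENT", "confidenceLevel": "MEDIUM_AND_ABOVE"}, {"filterType": "SEXUALLY_EXPLICIT", "confidenceLevel": "MEDIUM_AND_ABOVE"}, {"filterType": "DANGEROUS", "confidenceLevel": "MEDIUM_AND_ABOVE"}]' \
  --pi-and-jailbreak-filter-settings-enforcement=enabled \
  --malicious-uri-filter-settings-enforcement=enabled \
  --basic-config-filter-enforcement=enabled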
Substitute TEMPLATE_NAME with the name of the template you want to create. The template name
can have letters, digits, or hyphens. It must not exceed 63 characters and cannot have spaces or start with a hyphen.
This command creates a Model Armor template that uses all the available Model Armor
filters and settings. To learn more about the variety of filters available, see
Model Armor filters.
In the Model Armor policies section, enable the checkboxes for Sanitize User Prompt
and Sanitize Model Response.
Click Next.
Click Create.
The proxy details and XML configuration can be viewed in the Develop tab. To view the policy attachments in the
API proxy processing flows:
Click default under the Proxy endpoints folder.
The proxy editor displays a flow diagram showing the policy attachments and the corresponding XML configuration.
The SanitizeUserPrompt policy is attached to the Request PreFlow of the default proxy endpoint.
Click default under the Target endpoints folder.
The proxy editor displays a flow diagram showing the policy attachments and the corresponding XML configuration.
The SanitizeModelResponse policy is attached to the Response PreFlow of the default target endpoint.
Edit the SanitizeUserPrompt and SanitizeModelResponse XML
Before you can deploy the API proxy, you must edit the XML of the SanitizeUserPrompt and SanitizeModelResponse policies.
You can view the XML configuration of each policy by clicking on the policy name in the Detail view of the
API proxy's Develop tab. Edits to the policy XML can be made directly in the Code view of
the Develop tab.
Edit the policies as follows. A sketch of the edited SanitizeModelResponse policy appears after this list:
SanitizeUserPrompt:
Change the value of the <UserPromptSource> element to {jsonPath('$.contents[-1].parts[-1].text',request.content,true)}
Change the value of the <TemplateName> element to reflect your Google Cloud project ID and the name and location of your template.
For example: projects/my-project/locations/us-central1/templates/my-ma-template
SanitizeModelResponse:
Change the value of the <UserPromptSource> element to {jsonPath('$.contents[-1].parts[-1].text',request.content,true)}
Change the value of the <LLMResponseSource> element to {jsonPath('$.candidates[-1].content.parts[-1].text',response.content,true)}
Change the value of the <TemplateName> element to reflect your Google Cloud project ID and the name and location of your template.
For example: projects/my-project/locations/us-central1/templates/my-ma-template
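The following sketch shows the SanitizeModelResponse policy after these edits; the edited SanitizeUserPrompt policy looks the same except that it has no <LLMResponseSource> element. The policy name is a placeholder, and the nesting of <TemplateName> under <ModelArmor> is assumed from the generated policy:
<SanitizeModelResponse async="false" continueOnError="false" enabled="true" name="SanitizeModelResponse">
  <DisplayName>Sanitize Model Response</DisplayName>
  <ModelArmor>
    <TemplateName>projects/my-project/locations/us-central1/templates/my-ma-template</TemplateName>
  </ModelArmor>
  <UserPromptSource>{jsonPath('$.contents[-1].parts[-1].text',request.content,true)}</UserPromptSource>
  <LLMResponseSource>{jsonPath('$.candidates[-1].content.parts[-1].text',response.content,true)}</LLMResponseSource>
</SanitizeModelResponse>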
Click Save.
Add Google authentication to the API proxy
You must also add Google authentication to the API proxy's target endpoint so that the proxy can call the LLM endpoint.
To add the Google access token:
In the Develop tab, click default under the Target endpoints folder. The
Code view displays the XML configuration of the <TargetEndpoint> element.
Edit the XML to add an Authentication configuration for the https://www.googleapis.com/auth/cloud-platform scope under the <HTTPTargetConnection> element, as shown in the sketch after these steps.
Click Save.
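The following sketch shows the Authentication block added to the target endpoint's <HTTPTargetConnection>, using Apigee's GoogleAccessToken target authentication. The <URL> value is a placeholder for the LLM endpoint URL already configured in your proxy and should be left as it is:
<HTTPTargetConnection>
  <Authentication>
    <GoogleAccessToken>
      <Scopes>
        <Scope>https://www.googleapis.com/auth/cloud-platform</Scope>
      </Scopes>
    </GoogleAccessToken>
  </Authentication>
  <URL>LLM_ENDPOINT_URL</URL>
</HTTPTargetConnection>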
Deploy the API proxy
To deploy the API proxy:
Click Deploy to open the Deploy API proxy pane.
The Revision field should be set to 1. If not, click 1 to select it.
In the Environment list, select the environment where you want to deploy the proxy. The environment
must be a Comprehensive environment.
Enter the Service account you created in an earlier step.
Click Deploy.
Test the Model Armor policies
To test the Model Armor policies, you must send a request to the API proxy. The request must contain a user prompt.
The following sections provide suggested user prompts to include in the API requests to test for the
following conditions included in your Model Armor template:
Responsible AI (RAI) match
Malicious URL detection
Prompt injection detection
Each example includes the expected response if the Model Armor policies are working as intended.
RAI match example
To test for an RAI match, send the following request to the API proxy you created in the previous step:
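The following curl command is a sketch of such a request. APIGEE_HOSTNAME and PROXY_BASE_PATH are placeholders for your environment group hostname and the proxy's base path, and the body follows the contents/parts structure that the default prompt-extraction JSONPath expects; replace the placeholder text with a prompt that violates one of your template's responsible AI filters:
curl -X POST "https://APIGEE_HOSTNAME/PROXY_BASE_PATH" \
  -H "Content-Type: application/json" \
  -d '{"contents": [{"role": "user", "parts": [{"text": "PROMPT_TEXT_THAT_VIOLATES_AN_RAI_FILTER"}]}]}'
If the policies are working as intended, the SanitizeUserPrompt policy detects the responsible AI match and the prompt is not passed to the LLM unchanged; the exact response you receive depends on your template settings and policy configuration.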
Model Armor policy configuration examples
The following sections provide examples of common configurations for Model Armor policies. This section is not
exhaustive, but it does provide a few examples of how the Model Armor policies can be customized for your needs.
Default model detection and prompt extraction
This example shows how the Model Armor policies work to extract and evaluate user prompts according to the
parameters of your Model Armor template. To implement this example,
add the SanitizeUserPrompt policy to your API proxy request flow. The sample policy shown below uses all default parameters:
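The following is a sketch of such a policy; the policy name and template path are placeholders, and the <UserPromptSource> value shown is the default prompt-extraction expression:
<SanitizeUserPrompt async="false" continueOnError="false" enabled="true" name="SanitizeUserPrompt">
  <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
  <DisplayName>Sanitize User Prompt</DisplayName>
  <ModelArmor>
    <TemplateName>projects/my-project/locations/us-central1/templates/my-ma-template</TemplateName>
  </ModelArmor>
  <UserPromptSource>{jsonPath('$.contents[-1].parts[-1].text',request.content,true)}</UserPromptSource>
</SanitizeUserPrompt>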
When you call your API proxy, the input from the prompt is automatically extracted and passed on to Model Armor and processed
according to the parameters of your Model Armor template.
Disable a Model Armor policy
To disable the Model Armor policy, set the enabled attribute to false, as shown in the
following example:
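For example, the following sketch shows a disabled SanitizeUserPrompt policy; only the enabled attribute changes, and the rest of the configuration stays as it was:
<SanitizeUserPrompt async="false" continueOnError="false" enabled="false" name="SanitizeUserPrompt">
  <!-- existing policy configuration unchanged -->
</SanitizeUserPrompt>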
You can edit policy content in the Google Cloud console. After selecting the API proxy that contains your policies on the
API proxies page of the UI, select the Develop tab. You can then select the policy you want to
edit from the API proxy's Detail view. The XML of the policy is displayed in the Code view,
where you can edit the policy directly.
Once editing is complete, click Save to save your changes to a new revision of the proxy. You
can then deploy this new revision to disable the policy.
Use regional templates across multiple Apigee instances
You can customize the Model Armor template to use regional templates across multiple Apigee instances.
The following example shows how to use the {system.region.name} variable in the <TemplateName>
element of the SanitizeModelResponse policy. This variable automatically resolves to the region of the deployed instance,
which is then used to identify the correct Model Armor template for that instance.
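The following sketch shows a SanitizeModelResponse policy whose template path uses the variable; my-project and my-ma-template are placeholders, and the approach assumes a template with the same name exists in each region where the proxy is deployed:
<SanitizeModelResponse async="false" continueOnError="false" enabled="true" name="SanitizeModelResponse">
  <DisplayName>Sanitize Model Response</DisplayName>
  <ModelArmor>
    <TemplateName>projects/my-project/locations/{system.region.name}/templates/my-ma-template</TemplateName>
  </ModelArmor>
  <UserPromptSource>{jsonPath('$.contents[-1].parts[-1].text',request.content,true)}</UserPromptSource>
  <LLMResponseSource>{jsonPath('$.candidates[-1].content.parts[-1].text',response.content,true)}</LLMResponseSource>
</SanitizeModelResponse>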
Extract variables from the Model Armor response with the ExtractVariables policy
You can add additional processing logic after the Model Armor policy processes the LLM response.
To extract a variable from the Model Armor response, you can add the ExtractVariables policy to the
API proxy response flow.
To implement this example, add the ExtractVariables policy to your API proxy response PostFlow.
The following example shows the configuration for the ExtractVariables policy:
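The following is a sketch of such a policy. The variable prefix, variable name, and JSONPath are illustrative only; adjust them to the field you want to extract from the sanitized LLM response payload:
<ExtractVariables async="false" continueOnError="false" enabled="true" name="ExtractModelResponseText">
  <DisplayName>Extract Model Response Text</DisplayName>
  <Source>response</Source>
  <VariablePrefix>llmresponse</VariablePrefix>
  <JSONPayload>
    <Variable name="modelText">
      <JSONPath>$.candidates[0].content.parts[0].text</JSONPath>
    </Variable>
  </JSONPayload>
  <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
</ExtractVariables>
The extracted value is then available to later policies in the llmresponse.modelText flow variable.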
Add a Model Armor response error code and error message with the RaiseFault policy
You can add Model Armor template metadata to customize the error code and error message that is raised by the
Model Armor policy. To implement this example:
Add template metadata to your Model Armor template, as shown in the following example:
"templateMetadata":{{"customPromptSafetyErrorCode":1099,"customPromptSafetyErrorMessage":"Prompt not allowed",}}
Add the RaiseFault policy to the API proxy response PostFlow.
The following example shows the configuration for the RaiseFault policy:
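The following is a sketch of such a policy. The status code, reason phrase, and payload are illustrative, and the errorCode and errorMessage variables referenced in the payload are hypothetical names that you would populate yourself (for example, with an ExtractVariables policy) from the response data that carries your template metadata:
<RaiseFault async="false" continueOnError="false" enabled="true" name="ModelArmorTemplateErrorHandling">
  <DisplayName>Model Armor Template Error Handling</DisplayName>
  <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
  <FaultResponse>
    <Set>
      <Payload contentType="application/json">{"errorCode": "{errorCode}", "errorMessage": "{errorMessage}"}</Payload>
      <StatusCode>403</StatusCode>
      <ReasonPhrase>Blocked by Model Armor</ReasonPhrase>
    </Set>
  </FaultResponse>
</RaiseFault>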
Once the new policy is added and the API proxy is deployed, requests to the proxy that trigger the error specified
in the Model Armor template metadata will raise a fault with the error code and error message defined in the RaiseFault policy.
The message will contain the template-specific error code and error message.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-06-11 UTC."],[],[]]