azopenai

package module
v0.8.0
Published: Jun 3, 2025 License: MIT Imports: 11 Imported by: 36

README

Azure OpenAI extensions module for Go

This module provides models and convenience functions to make it simpler to use Azure OpenAI features, such as Azure OpenAI On Your Data, with the OpenAI Go client (https://pkg.go.dev/github.com/openai/openai-go).

Source code | Package (pkg.go.dev) | REST API documentation | Product documentation

Getting started

Prerequisites
Install the packages

Install the azopenai and azidentity modules with go get:

go get github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai

# optional
go get github.com/Azure/azure-sdk-for-go/sdk/azidentity

The azidentity module is used for Azure Active Directory authentication with Azure OpenAI.
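The examples in this document all construct their client through a helper named CreateOpenAIClientWithToken, which is not shown on this page. A minimal sketch of such a helper is below; it assumes the `azure` options package from openai-go v1 (where `NewClient` returns a value, not a pointer) and `azidentity`. The helper name, signature, and empty-api-version handling are illustrative, not part of this module:

```go
package main

import (
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/openai/openai-go"
	"github.com/openai/openai-go/azure"
	"github.com/openai/openai-go/option"
)

// CreateOpenAIClientWithToken builds an OpenAI client that authenticates to an
// Azure OpenAI endpoint using a token credential instead of an API key.
// Pass a concrete service API version (e.g. "2024-12-01-preview") when the
// example requires one; an empty string here stands in for "use your default".
func CreateOpenAIClientWithToken(endpoint, apiVersion string) (*openai.Client, error) {
	// DefaultAzureCredential tries environment variables, managed identity,
	// Azure CLI login, etc., in order.
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		return nil, err
	}

	opts := []option.RequestOption{
		azure.WithEndpoint(endpoint, apiVersion),
		azure.WithTokenCredential(cred),
	}

	client := openai.NewClient(opts...)
	return &client, nil
}
```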

Key concepts

See Key concepts in the product documentation for more details about general concepts.

Examples

Examples for scenarios specific to Azure can be found on pkg.go.dev or in the example*_test.go files in our GitHub repo for azopenai.

For examples of using the openai-go client directly, see the examples in the openai-go repository.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Documentation

Overview

Example (AudioTranscription)

Example_audioTranscription demonstrates how to transcribe speech to text using Azure OpenAI's Whisper model. This example shows how to:

- Create an Azure OpenAI client with token credentials
- Read an audio file and send it to the API
- Convert spoken language to written text using the Whisper model
- Process the transcription response

The example uses environment variables for configuration:

- AOAI_WHISPER_ENDPOINT: Your Azure OpenAI endpoint URL
- AOAI_WHISPER_MODEL: The deployment name of your Whisper model

Audio transcription is useful for accessibility features, creating searchable archives of audio content, generating captions or subtitles, and enabling voice commands in applications.

if !CheckRequiredEnvVars("AOAI_WHISPER_ENDPOINT", "AOAI_WHISPER_MODEL") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

endpoint := os.Getenv("AOAI_WHISPER_ENDPOINT")
model := os.Getenv("AOAI_WHISPER_MODEL")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

audioFile, err := os.Open("testdata/sampledata_audiofiles_myVoiceIsMyPassportVerifyMe01.mp3")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}
defer audioFile.Close()

resp, err := client.Audio.Transcriptions.New(context.TODO(), openai.AudioTranscriptionNewParams{
	Model:          openai.AudioModel(model),
	File:           audioFile,
	ResponseFormat: openai.AudioResponseFormatJSON,
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

fmt.Fprintf(os.Stderr, "Transcribed text: %s\n", resp.Text)
Example (AudioTranslation)

Example_audioTranslation demonstrates how to translate speech from one language to English text. This example shows how to:

- Create an Azure OpenAI client with token credentials
- Read a non-English audio file
- Translate the spoken content to English text
- Process the translation response

The example uses environment variables for configuration:

- AOAI_WHISPER_ENDPOINT: Your Azure OpenAI endpoint URL
- AOAI_WHISPER_MODEL: The deployment name of your Whisper model

Speech translation is essential for cross-language communication, creating multilingual content, and building applications that break down language barriers.

if !CheckRequiredEnvVars("AOAI_WHISPER_ENDPOINT", "AOAI_WHISPER_MODEL") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

endpoint := os.Getenv("AOAI_WHISPER_ENDPOINT")
model := os.Getenv("AOAI_WHISPER_MODEL")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

audioFile, err := os.Open("testdata/sampleaudio_hindi_myVoiceIsMyPassportVerifyMe.mp3")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}
defer audioFile.Close()

resp, err := client.Audio.Translations.New(context.TODO(), openai.AudioTranslationNewParams{
	Model:  openai.AudioModel(model),
	File:   audioFile,
	Prompt: openai.String("Translate the following Hindi audio to English"),
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

fmt.Fprintf(os.Stderr, "Translated text: %s\n", resp.Text)
Example (ChatCompletionStream)

Example_chatCompletionStream demonstrates streaming responses from the Chat Completions API. This example shows how to:

- Create an Azure OpenAI client with token credentials
- Set up a streaming chat completion request
- Process incremental response chunks
- Handle streaming errors and completion

The example uses environment variables for configuration:

- AOAI_CHAT_COMPLETIONS_MODEL: The deployment name of your chat model
- AOAI_CHAT_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL

Streaming is useful for:

- Real-time response display
- Improved perceived latency
- Interactive chat interfaces
- Long-form content generation

if !CheckRequiredEnvVars("AOAI_CHAT_COMPLETIONS_MODEL", "AOAI_CHAT_COMPLETIONS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_CHAT_COMPLETIONS_MODEL")
endpoint := os.Getenv("AOAI_CHAT_COMPLETIONS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// This is a conversation in progress
stream := client.Chat.Completions.NewStreaming(context.TODO(), openai.ChatCompletionNewParams{
	Model: openai.ChatModel(model),
	Messages: []openai.ChatCompletionMessageParamUnion{
		// System message sets the tone
		{
			OfSystem: &openai.ChatCompletionSystemMessageParam{
				Content: openai.ChatCompletionSystemMessageParamContentUnion{
					OfString: openai.String("You are a helpful assistant. You will talk like a pirate and limit your responses to 20 words or less."),
				},
			},
		},
		// User question
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("Can you help me?"),
				},
			},
		},
		// Assistant reply
		{
			OfAssistant: &openai.ChatCompletionAssistantMessageParam{
				Content: openai.ChatCompletionAssistantMessageParamContentUnion{
					OfString: openai.String("Arrrr! Of course, me hearty! What can I do for ye?"),
				},
			},
		},
		// User follow-up
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("What's the best way to train a parrot?"),
				},
			},
		},
	},
})

gotReply := false

for stream.Next() {
	gotReply = true
	evt := stream.Current()
	if len(evt.Choices) > 0 {
		fmt.Print(evt.Choices[0].Delta.Content)
	}
}

if stream.Err() != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", stream.Err())
}

if gotReply {
	fmt.Fprintf(os.Stderr, "\nGot chat completions streaming reply\n")
}
Example (ChatCompletionsFunctions)

Example_chatCompletionsFunctions demonstrates how to use Azure OpenAI's function calling feature. This example shows how to:

- Create an Azure OpenAI client with token credentials
- Define a function schema for weather information
- Request function execution through the chat API
- Parse and handle function call responses

The example uses environment variables for configuration:

- AOAI_CHAT_COMPLETIONS_MODEL: The deployment name of your chat model
- AOAI_CHAT_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL

Function calling is useful for:

- Integrating external APIs and services
- Structured data extraction from natural language
- Task automation and workflow integration
- Building context-aware applications

if !CheckRequiredEnvVars("AOAI_CHAT_COMPLETIONS_MODEL", "AOAI_CHAT_COMPLETIONS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_CHAT_COMPLETIONS_MODEL")
endpoint := os.Getenv("AOAI_CHAT_COMPLETIONS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Define the function schema
functionSchema := map[string]interface{}{
	"required": []string{"location"},
	"type":     "object",
	"properties": map[string]interface{}{
		"location": map[string]interface{}{
			"type":        "string",
			"description": "The city and state, e.g. San Francisco, CA",
		},
		"unit": map[string]interface{}{
			"type": "string",
			"enum": []string{"celsius", "fahrenheit"},
		},
	},
}

resp, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
	Model: openai.ChatModel(model),
	Messages: []openai.ChatCompletionMessageParamUnion{
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("What's the weather like in Boston, MA, in celsius?"),
				},
			},
		},
	},
	Tools: []openai.ChatCompletionToolParam{
		{
			Function: openai.FunctionDefinitionParam{
				Name:        "get_current_weather",
				Description: openai.String("Get the current weather in a given location"),
				Parameters:  functionSchema,
			},
			Type: "function",
		},
	},
	Temperature: openai.Float(0.0),
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

if len(resp.Choices) > 0 && len(resp.Choices[0].Message.ToolCalls) > 0 {
	toolCall := resp.Choices[0].Message.ToolCalls[0]

	// This is the function name we gave in the call
	fmt.Fprintf(os.Stderr, "Function name: %q\n", toolCall.Function.Name)

	// The arguments for your function come back as a JSON string
	var funcParams struct {
		Location string `json:"location"`
		Unit     string `json:"unit"`
	}

	err = json.Unmarshal([]byte(toolCall.Function.Arguments), &funcParams)
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	fmt.Fprintf(os.Stderr, "Parameters: %#v\n", funcParams)
}
Example (ChatCompletionsLegacyFunctions)

Example_chatCompletionsLegacyFunctions demonstrates using the legacy function calling format. This example shows how to:

- Create an Azure OpenAI client with token credentials
- Define a function schema using the legacy format
- Use the tools API for backward compatibility
- Handle function calling responses

The example uses environment variables for configuration:

- AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_MODEL: The deployment name of your chat model
- AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_ENDPOINT: Your Azure OpenAI endpoint URL

Legacy function support ensures:

- Compatibility with older implementations
- Smooth transition to the new tools API
- Support for existing function-based workflows

if !CheckRequiredEnvVars("AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_MODEL", "AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_MODEL")
endpoint := os.Getenv("AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Define the function schema
parametersJSON := map[string]interface{}{
	"required": []string{"location"},
	"type":     "object",
	"properties": map[string]interface{}{
		"location": map[string]interface{}{
			"type":        "string",
			"description": "The city and state, e.g. San Francisco, CA",
		},
		"unit": map[string]interface{}{
			"type": "string",
			"enum": []string{"celsius", "fahrenheit"},
		},
	},
}

resp, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
	Model: openai.ChatModel(model),
	Messages: []openai.ChatCompletionMessageParamUnion{
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("What's the weather like in Boston, MA, in celsius?"),
				},
			},
		},
	},
	// Note: Legacy functions are supported through the Tools API in the OpenAI Go SDK
	Tools: []openai.ChatCompletionToolParam{
		{
			Type: "function",
			Function: openai.FunctionDefinitionParam{
				Name:        "get_current_weather",
				Description: openai.String("Get the current weather in a given location"),
				Parameters:  parametersJSON,
			},
		},
	},
	ToolChoice: openai.ChatCompletionToolChoiceOptionUnionParam{
		OfAuto: openai.String("auto"),
	},
	Temperature: openai.Float(0.0),
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

if len(resp.Choices) > 0 && len(resp.Choices[0].Message.ToolCalls) > 0 {
	toolCall := resp.Choices[0].Message.ToolCalls[0]

	// This is the function name we gave in the call
	fmt.Fprintf(os.Stderr, "Function name: %q\n", toolCall.Function.Name)

	// The arguments for your function come back as a JSON string
	var funcParams struct {
		Location string `json:"location"`
		Unit     string `json:"unit"`
	}

	err = json.Unmarshal([]byte(toolCall.Function.Arguments), &funcParams)
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	fmt.Fprintf(os.Stderr, "Parameters: %#v\n", funcParams)
}
Example (ChatCompletionsStructuredOutputs)

Example_chatCompletionsStructuredOutputs demonstrates using structured outputs with function calling. This example shows how to:

- Create an Azure OpenAI client with token credentials
- Define complex JSON schemas for structured output
- Request specific data structures through function calls
- Parse and validate structured responses

The example uses environment variables for configuration:

- AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL: The deployment name of your chat model
- AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT: Your Azure OpenAI endpoint URL

Structured outputs are useful for:

- Database query generation
- Data extraction and transformation
- API request formatting
- Consistent response formatting

if !CheckRequiredEnvVars("AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL", "AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL")
endpoint := os.Getenv("AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Define the structured output schema
structuredJSONSchema := map[string]interface{}{
	"type": "object",
	"properties": map[string]interface{}{
		"table_name": map[string]interface{}{
			"type": "string",
			"enum": []string{"orders"},
		},
		"columns": map[string]interface{}{
			"type": "array",
			"items": map[string]interface{}{
				"type": "string",
				"enum": []string{
					"id", "status", "expected_delivery_date", "delivered_at",
					"shipped_at", "ordered_at", "canceled_at",
				},
			},
		},
		"conditions": map[string]interface{}{
			"type": "array",
			"items": map[string]interface{}{
				"type": "object",
				"properties": map[string]interface{}{
					"column": map[string]interface{}{
						"type": "string",
					},
					"operator": map[string]interface{}{
						"type": "string",
						"enum": []string{"=", ">", "<", ">=", "<=", "!="},
					},
					"value": map[string]interface{}{
						"anyOf": []map[string]interface{}{
							{"type": "string"},
							{"type": "number"},
							{
								"type": "object",
								"properties": map[string]interface{}{
									"column_name": map[string]interface{}{"type": "string"},
								},
								"required":             []string{"column_name"},
								"additionalProperties": false,
							},
						},
					},
				},
				"required":             []string{"column", "operator", "value"},
				"additionalProperties": false,
			},
		},
		"order_by": map[string]interface{}{
			"type": "string",
			"enum": []string{"asc", "desc"},
		},
	},
	"required":             []string{"table_name", "columns", "conditions", "order_by"},
	"additionalProperties": false,
}

resp, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
	Model: openai.ChatModel(model),
	Messages: []openai.ChatCompletionMessageParamUnion{
		// These are system instructions, so they use the system role.
		{
			OfSystem: &openai.ChatCompletionSystemMessageParam{
				Content: openai.ChatCompletionSystemMessageParamContentUnion{
					OfString: openai.String("You are a helpful assistant. The current date is August 6, 2024. You help users query for the data they are looking for by calling the query function."),
				},
			},
		},
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("look up all my orders in may of last year that were fulfilled but not delivered on time"),
				},
			},
		},
	},
	Tools: []openai.ChatCompletionToolParam{
		{
			Type: "function",
			Function: openai.FunctionDefinitionParam{
				Name:       "query",
				Parameters: structuredJSONSchema,
			},
		},
	},
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

if len(resp.Choices) > 0 && len(resp.Choices[0].Message.ToolCalls) > 0 {
	fn := resp.Choices[0].Message.ToolCalls[0].Function

	argumentsObj := map[string]interface{}{}
	err = json.Unmarshal([]byte(fn.Arguments), &argumentsObj)

	if err != nil {
		//  TODO: Update the following line with your application specific error handling logic
		log.Printf("ERROR: %s", err)
		return
	}

	fmt.Fprintf(os.Stderr, "%#v\n", argumentsObj)
}
Example (Completions)

Example_completions demonstrates how to use Azure OpenAI's legacy Completions API. This example shows how to:

- Create an Azure OpenAI client with token credentials
- Send a simple text completion request
- Handle the completion response
- Process the generated text output

The example uses environment variables for configuration:

- AOAI_COMPLETIONS_MODEL: The deployment name of your completions model
- AOAI_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL

Legacy completions are useful for:

- Simple text generation tasks
- Completing partial text
- Single-turn interactions
- Basic language generation scenarios

if !CheckRequiredEnvVars("AOAI_COMPLETIONS_MODEL", "AOAI_COMPLETIONS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_COMPLETIONS_MODEL")
endpoint := os.Getenv("AOAI_COMPLETIONS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

resp, err := client.Completions.New(context.TODO(), openai.CompletionNewParams{
	Model: openai.CompletionNewParamsModel(model),
	Prompt: openai.CompletionNewParamsPromptUnion{
		OfString: openai.String("What is Azure OpenAI, in 20 words or less"),
	},
	Temperature: openai.Float(0.0),
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

if len(resp.Choices) > 0 {
	fmt.Fprintf(os.Stderr, "Result: %s\n", resp.Choices[0].Text)
}
Example (CreateImage)

Example_createImage demonstrates how to generate images using Azure OpenAI's DALL-E model. This example shows how to:

- Create an Azure OpenAI client with token credentials
- Configure image generation parameters including size and format
- Generate an image from a text prompt
- Verify the generated image URL is accessible

The example uses environment variables for configuration:

- AOAI_DALLE_ENDPOINT: Your Azure OpenAI endpoint URL
- AOAI_DALLE_MODEL: The deployment name of your DALL-E model

Image generation is useful for:

- Creating custom illustrations and artwork
- Generating visual content for applications
- Prototyping design concepts
- Producing visual aids for documentation

if !CheckRequiredEnvVars("AOAI_DALLE_ENDPOINT", "AOAI_DALLE_MODEL") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

endpoint := os.Getenv("AOAI_DALLE_ENDPOINT")
model := os.Getenv("AOAI_DALLE_MODEL")

// Initialize OpenAI client with Azure configurations using token credential
client, err := CreateOpenAIClientWithToken(endpoint, "2024-12-01-preview")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

resp, err := client.Images.Generate(context.TODO(), openai.ImageGenerateParams{
	Prompt:         "a cat",
	Model:          openai.ImageModel(model),
	ResponseFormat: openai.ImageGenerateParamsResponseFormatURL,
	Size:           openai.ImageGenerateParamsSize1024x1024,
})

if err != nil {
	// TODO: Update the following line with your application specific error handling logic
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

for _, generatedImage := range resp.Data {
	resp, err := http.Get(generatedImage.URL)
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		// Handle non-200 status code
		fmt.Fprintf(os.Stderr, "ERROR: image download failed with status %s\n", resp.Status)
		return
	}

	imageData, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	// Save the generated image to a file
	err = os.WriteFile("generated_image.png", imageData, 0644)
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}
}
Example (Embeddings)

Example_embeddings demonstrates how to generate text embeddings using Azure OpenAI's embedding models. This example shows how to:

- Create an Azure OpenAI client with token credentials
- Convert text input into numerical vector representations
- Process the embedding vectors from the response
- Handle embedding results for semantic analysis

The example uses environment variables for configuration:

- AOAI_EMBEDDINGS_MODEL: The deployment name of your embedding model (e.g., text-embedding-ada-002)
- AOAI_EMBEDDINGS_ENDPOINT: Your Azure OpenAI endpoint URL

Text embeddings are useful for:

- Semantic search and information retrieval
- Text classification and clustering
- Content recommendation systems
- Document similarity analysis
- Natural language understanding tasks

if !CheckRequiredEnvVars("AOAI_EMBEDDINGS_MODEL", "AOAI_EMBEDDINGS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_EMBEDDINGS_MODEL") // e.g. "text-embedding-ada-002"
endpoint := os.Getenv("AOAI_EMBEDDINGS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Call the embeddings API
resp, err := client.Embeddings.New(context.TODO(), openai.EmbeddingNewParams{
	Model: openai.EmbeddingModel(model),
	Input: openai.EmbeddingNewParamsInputUnion{
		OfString: openai.String("The food was delicious and the waiter..."),
	},
})

if err != nil {
	// TODO: Update the following line with your application specific error handling logic
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

for i, embed := range resp.Data {
	// embed.Embedding contains the embeddings for this input index
	fmt.Fprintf(os.Stderr, "Got embeddings for input %d with embedding length: %d\n", i, len(embed.Embedding))
}
Example (GenerateSpeechFromText)

Example_generateSpeechFromText demonstrates how to convert text to speech using Azure OpenAI's text-to-speech service. This example shows how to:

- Create an Azure OpenAI client with token credentials
- Send text to be converted to speech
- Specify voice and audio format parameters
- Handle the audio response stream

The example uses environment variables for configuration:

- AOAI_TTS_ENDPOINT: Your Azure OpenAI endpoint URL
- AOAI_TTS_MODEL: The deployment name of your text-to-speech model

Text-to-speech conversion is valuable for creating audiobooks, virtual assistants, accessibility tools, and adding voice interfaces to applications.

if !CheckRequiredEnvVars("AOAI_TTS_ENDPOINT", "AOAI_TTS_MODEL") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

endpoint := os.Getenv("AOAI_TTS_ENDPOINT")
model := os.Getenv("AOAI_TTS_MODEL")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

audioResp, err := client.Audio.Speech.New(context.Background(), openai.AudioSpeechNewParams{
	Model:          openai.SpeechModel(model),
	Input:          "i am a computer",
	Voice:          openai.AudioSpeechNewParamsVoiceAlloy,
	ResponseFormat: openai.AudioSpeechNewParamsResponseFormatFLAC,
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

defer audioResp.Body.Close()

audioBytes, err := io.ReadAll(audioResp.Body)

if err != nil {
	// TODO: Update the following line with your application specific error handling logic
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

fmt.Fprintf(os.Stderr, "Got %d bytes of FLAC audio\n", len(audioBytes))
Example (GetChatCompletions)

Example_getChatCompletions demonstrates how to use Azure OpenAI's Chat Completions API. This example shows how to:

- Create an Azure OpenAI client with token credentials
- Structure a multi-turn conversation with different message roles
- Send a chat completion request and handle the response
- Process multiple response choices and finish reasons

The example uses environment variables for configuration:

- AOAI_CHAT_COMPLETIONS_MODEL: The deployment name of your chat model
- AOAI_CHAT_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL

Chat completions are useful for:

- Building conversational AI interfaces
- Creating chatbots with personality
- Maintaining context across multiple interactions
- Generating human-like text responses

if !CheckRequiredEnvVars("AOAI_CHAT_COMPLETIONS_MODEL", "AOAI_CHAT_COMPLETIONS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_CHAT_COMPLETIONS_MODEL")
endpoint := os.Getenv("AOAI_CHAT_COMPLETIONS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// This is a conversation in progress.
// NOTE: all messages, regardless of role, count against token usage for this API.
resp, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
	Model: openai.ChatModel(model),
	Messages: []openai.ChatCompletionMessageParamUnion{
		// You set the tone and rules of the conversation with a prompt as the system role.
		{
			OfSystem: &openai.ChatCompletionSystemMessageParam{
				Content: openai.ChatCompletionSystemMessageParamContentUnion{
					OfString: openai.String("You are a helpful assistant. You will talk like a pirate."),
				},
			},
		},
		// The user asks a question
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("Can you help me?"),
				},
			},
		},
		// The reply comes back from the model. You'd add it to the conversation to maintain context.
		{
			OfAssistant: &openai.ChatCompletionAssistantMessageParam{
				Content: openai.ChatCompletionAssistantMessageParamContentUnion{
					OfString: openai.String("Arrrr! Of course, me hearty! What can I do for ye?"),
				},
			},
		},
		// The user answers the question based on the latest reply.
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("What's the best way to train a parrot?"),
				},
			},
		},
	},
})

if err != nil {
	log.Printf("ERROR: %s", err)
	return
}

gotReply := false

for _, choice := range resp.Choices {
	gotReply = true

	if choice.Message.Content != "" {
		fmt.Fprintf(os.Stderr, "Content[%d]: %s\n", choice.Index, choice.Message.Content)
	}

	if choice.FinishReason != "" {
		fmt.Fprintf(os.Stderr, "Finish reason[%d]: %s\n", choice.Index, choice.FinishReason)
	}
}

if gotReply {
	fmt.Fprintf(os.Stderr, "Got chat completions reply\n")
}
Example (ResponsesApiChaining)

Example_responsesApiChaining demonstrates how to chain multiple responses together in a conversation flow using the Azure OpenAI Responses API. This example shows how to:

- Create an initial response
- Chain a follow-up response using the previous response ID
- Process both responses
- Delete both responses to clean up

The example uses environment variables for configuration:

- AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL
- AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

if !CheckRequiredEnvVars("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
	return
}

endpoint := os.Getenv("AZURE_OPENAI_ENDPOINT")
model := os.Getenv("AZURE_OPENAI_MODEL")

// Create a client with token credentials
client, err := CreateOpenAIClientWithToken(endpoint, "2025-03-01-preview")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Create the first response
firstResponse, err := client.Responses.New(
	context.TODO(),
	responses.ResponseNewParams{
		Model: model,
		Input: responses.ResponseNewParamsInputUnion{
			OfString: openai.String("Define and explain the concept of catastrophic forgetting?"),
		},
	},
)

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

fmt.Fprintf(os.Stderr, "First response ID: %s\n", firstResponse.ID)

// Chain a second response using the previous response ID
secondResponse, err := client.Responses.New(
	context.TODO(),
	responses.ResponseNewParams{
		Model: model,
		Input: responses.ResponseNewParamsInputUnion{
			OfString: openai.String("Explain this at a level that could be understood by a college freshman"),
		},
		PreviousResponseID: openai.String(firstResponse.ID),
	},
)

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

fmt.Fprintf(os.Stderr, "Second response ID: %s\n", secondResponse.ID)

// Print the text content from the second response
for _, output := range secondResponse.Output {
	if output.Type == "message" {
		for _, content := range output.Content {
			if content.Type == "output_text" {
				fmt.Fprintf(os.Stderr, "Second response content: %s\n", content.Text)
			}
		}
	}
}

fmt.Fprintf(os.Stderr, "Example complete\n")
Example (ResponsesApiFunctionCalling)

Example_responsesApiFunctionCalling demonstrates how to use the Azure OpenAI Responses API with function calling. This example shows how to:

- Create an Azure OpenAI client with token credentials
- Define tools (functions) that the model can call
- Process the response containing function calls
- Provide function outputs back to the model
- Delete the responses to clean up

The example uses environment variables for configuration:

- AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL
- AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

if !CheckRequiredEnvVars("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
	return
}

endpoint := os.Getenv("AZURE_OPENAI_ENDPOINT")
model := os.Getenv("AZURE_OPENAI_MODEL")

// Create a client with token credentials
client, err := CreateOpenAIClientWithToken(endpoint, "2025-03-01-preview")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Define the get_weather function parameters as a JSON schema
paramSchema := map[string]interface{}{
	"type": "object",
	"properties": map[string]interface{}{
		"location": map[string]interface{}{
			"type": "string",
		},
	},
	"required": []string{"location"},
}

// Create a response with tools (functions)
resp, err := client.Responses.New(
	context.TODO(),
	responses.ResponseNewParams{
		Model: model,
		Input: responses.ResponseNewParamsInputUnion{
			OfString: openai.String("What's the weather in San Francisco?"),
		},
		Tools: []responses.ToolUnionParam{
			{
				OfFunction: &responses.FunctionToolParam{
					Name:        "get_weather",
					Description: openai.String("Get the weather for a location"),
					Parameters:  paramSchema,
				},
			},
		},
	},
)

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Process the response to find function calls
var functionCallID string
var functionName string

for _, output := range resp.Output {
	if output.Type == "function_call" {
		functionCallID = output.CallID
		functionName = output.Name
		fmt.Fprintf(os.Stderr, "Function call detected: %s\n", functionName)
		fmt.Fprintf(os.Stderr, "Function arguments: %s\n", output.Arguments)
	}
}

// If a function call was found, provide the function output back to the model
if functionCallID != "" {
	// In a real application, you would actually call the function
	// Here we're just simulating a response
	var functionOutput string
	if functionName == "get_weather" {
		functionOutput = `{"temperature": "72 degrees", "condition": "sunny"}`
	}

	// Create a second response, providing the function output
	secondResp, err := client.Responses.New(
		context.TODO(),
		responses.ResponseNewParams{
			Model:              model,
			PreviousResponseID: openai.String(resp.ID),
			Input: responses.ResponseNewParamsInputUnion{
				OfInputItemList: []responses.ResponseInputItemUnionParam{
					{
						OfFunctionCallOutput: &responses.ResponseInputItemFunctionCallOutputParam{
							CallID: functionCallID,
							Output: functionOutput,
						},
					},
				},
			},
		},
	)

	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR with second response: %s\n", err)
		return
	}

	// Process the final model response after receiving function output
	for _, output := range secondResp.Output {
		if output.Type == "message" {
			for _, content := range output.Content {
				if content.Type == "output_text" {
					fmt.Fprintf(os.Stderr, "Final response: %s\n", content.Text)
				}
			}
		}
	}
}

fmt.Fprintf(os.Stderr, "Example complete\n")
Example (ResponsesApiImageInput)

Example_responsesApiImageInput demonstrates how to use the Azure OpenAI Responses API with image input. This example shows how to: - Create an Azure OpenAI client with token credentials - Fetch an image from a URL and encode it to Base64 - Send a query with both text and a Base64-encoded image - Process the response

The example uses environment variables for configuration: - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

Note: This example fetches and encodes an image from a URL because there is a known issue with image-URL-based image input; currently only Base64-encoded images are supported.

if !CheckRequiredEnvVars("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
	return
}

endpoint := os.Getenv("AZURE_OPENAI_ENDPOINT")
model := os.Getenv("AZURE_OPENAI_MODEL")

// Create a client with token credentials
client, err := CreateOpenAIClientWithToken(endpoint, "2025-03-01-preview")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Image URL to fetch and encode, you can also use a local file path
imageURL := "https://www.bing.com/th?id=OHR.BradgateFallow_EN-US3932725763_1920x1080.jpg"

// Fetch the image from the URL and encode it to Base64
httpClient := &http.Client{Timeout: 30 * time.Second}
httpResp, err := httpClient.Get(imageURL)
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR fetching image: %s\n", err)
	return
}
defer httpResp.Body.Close()

imgBytes, err := io.ReadAll(httpResp.Body)
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR reading image: %s\n", err)
	return
}

// Encode the image to Base64
base64Image := base64.StdEncoding.EncodeToString(imgBytes)
fmt.Fprintf(os.Stderr, "Successfully encoded image from URL\n")

// Determine content type based on image data or response headers
// Determine the content type from the response headers
contentType := httpResp.Header.Get("Content-Type")
if contentType == "" {
	// Default to jpeg if we can't determine
	contentType = "image/jpeg"
}

// Create the data URL for the image
dataURL := fmt.Sprintf("data:%s;base64,%s", contentType, base64Image)

// Create a response with the image input
resp, err := client.Responses.New(
	context.TODO(),
	responses.ResponseNewParams{
		Model: model,
		Input: responses.ResponseNewParamsInputUnion{
			OfInputItemList: []responses.ResponseInputItemUnionParam{
				{
					OfInputMessage: &responses.ResponseInputItemMessageParam{
						Role: "user",
						Content: []responses.ResponseInputContentUnionParam{
							{
								OfInputText: &responses.ResponseInputTextParam{
									Text: "What can you see in this image?",
								},
							},
							{
								OfInputImage: &responses.ResponseInputImageParam{
									ImageURL: openai.String(dataURL),
								},
							},
						},
					},
				},
			},
		},
	},
)

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Print the text content from the output
for _, output := range resp.Output {
	if output.Type == "message" {
		for _, content := range output.Content {
			if content.Type == "output_text" {
				fmt.Fprintf(os.Stderr, "Model's description of the image: %s\n", content.Text)
			}
		}
	}
}

fmt.Fprintf(os.Stderr, "Example complete\n")
Example (ResponsesApiReasoning)

Example_responsesApiReasoning demonstrates how to use the Azure OpenAI Responses API with reasoning. This example shows how to: - Create an Azure OpenAI client with token credentials - Send a complex problem-solving request that requires reasoning - Enable the reasoning parameter to get step-by-step thought process - Process the response

The example uses environment variables for configuration: - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL - AZURE_OPENAI_MODEL: The deployment name of your model (a reasoning-capable deployment, e.g., "o3-mini")

if !CheckRequiredEnvVars("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
	return
}

endpoint := os.Getenv("AZURE_OPENAI_ENDPOINT")
model := os.Getenv("AZURE_OPENAI_MODEL")

// Create a client with token credentials
client, err := CreateOpenAIClientWithToken(endpoint, "2025-03-01-preview")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Create a response with reasoning enabled
// This will make the model show its step-by-step reasoning
resp, err := client.Responses.New(
	context.TODO(),
	responses.ResponseNewParams{
		Model: model,
		Input: responses.ResponseNewParamsInputUnion{
			OfString: openai.String("Solve the following problem step by step: If a train travels at 120 km/h and needs to cover a distance of 450 km, how long will the journey take?"),
		},
		Reasoning: openai.ReasoningParam{
			Effort: openai.ReasoningEffortMedium,
		},
	},
)

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Print the text content from the output
for _, output := range resp.Output {
	if output.Type == "message" {
		for _, content := range output.Content {
			if content.Type == "output_text" {
				fmt.Fprintf(os.Stderr, "\nOutput: %s\n", content.Text)
			}
		}
	}
}

fmt.Fprintf(os.Stderr, "Example complete\n")
Example (ResponsesApiStreaming)

Example_responsesApiStreaming demonstrates how to use streaming with the Azure OpenAI Responses API. This example shows how to: - Create a streaming response - Process the stream events as they arrive - Handle stream errors

The example uses environment variables for configuration: - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

if !CheckRequiredEnvVars("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
	return
}

endpoint := os.Getenv("AZURE_OPENAI_ENDPOINT")
model := os.Getenv("AZURE_OPENAI_MODEL")

// Create a client with token credentials
client, err := CreateOpenAIClientWithToken(endpoint, "2025-03-01-preview")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Create a streaming response
stream := client.Responses.NewStreaming(
	context.TODO(),
	responses.ResponseNewParams{
		Model: model,
		Input: responses.ResponseNewParamsInputUnion{
			OfString: openai.String("This is a test"),
		},
	},
)

// Process the stream
fmt.Fprintf(os.Stderr, "Streaming response: ")

for stream.Next() {
	event := stream.Current()
	if event.Type == "response.output_text.delta" {
		fmt.Fprintf(os.Stderr, "%s", event.Delta.OfString)
	}
}

if stream.Err() != nil {
	fmt.Fprintf(os.Stderr, "\nERROR: %s\n", stream.Err())
	return
}

fmt.Fprintf(os.Stderr, "\nExample complete\n")
Example (ResponsesApiTextGeneration)

Example_responsesApiTextGeneration demonstrates how to use the Azure OpenAI Responses API for text generation. This example shows how to: - Create an Azure OpenAI client with token credentials - Send a simple text prompt - Process the response - Delete the response to clean up

The example uses environment variables for configuration: - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

The Responses API is a new stateful API from Azure OpenAI that brings together capabilities from chat completions and assistants APIs in a unified experience.

if !CheckRequiredEnvVars("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
	return
}

endpoint := os.Getenv("AZURE_OPENAI_ENDPOINT")
model := os.Getenv("AZURE_OPENAI_MODEL")

// Create a client with token credentials
client, err := CreateOpenAIClientWithToken(endpoint, "2025-03-01-preview")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Create a simple text input
resp, err := client.Responses.New(
	context.TODO(),
	responses.ResponseNewParams{
		Model: model,
		Input: responses.ResponseNewParamsInputUnion{
			OfString: openai.String("Define and explain the concept of catastrophic forgetting?"),
		},
	},
)

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Process the response
fmt.Fprintf(os.Stderr, "Response ID: %s\n", resp.ID)
fmt.Fprintf(os.Stderr, "Model: %s\n", resp.Model)

// Print the text content from the output
for _, output := range resp.Output {
	if output.Type == "message" {
		for _, content := range output.Content {
			if content.Type == "output_text" {
				fmt.Fprintf(os.Stderr, "Content: %s\n", content.Text)
			}
		}
	}
}

// Delete the response to clean up
err = client.Responses.Delete(
	context.TODO(),
	resp.ID,
)

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR deleting response: %s\n", err)
} else {
	fmt.Fprintf(os.Stderr, "Response deleted successfully\n")
}

fmt.Fprintf(os.Stderr, "Example complete\n")
Example (StreamCompletions)

Example_streamCompletions demonstrates streaming responses from the legacy Completions API. This example shows how to: - Create an Azure OpenAI client with token credentials - Set up a streaming completion request - Process incremental text chunks - Handle streaming errors and completion

The example uses environment variables for configuration: - AOAI_COMPLETIONS_MODEL: The deployment name of your completions model - AOAI_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL

Streaming completions are useful for: - Real-time text generation display - Reduced latency in responses - Interactive text generation - Long-form content creation

if !CheckRequiredEnvVars("AOAI_COMPLETIONS_MODEL", "AOAI_COMPLETIONS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_COMPLETIONS_MODEL")
endpoint := os.Getenv("AOAI_COMPLETIONS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

stream := client.Completions.NewStreaming(context.TODO(), openai.CompletionNewParams{
	Model: openai.CompletionNewParamsModel(model),
	Prompt: openai.CompletionNewParamsPromptUnion{
		OfString: openai.String("What is Azure OpenAI, in 20 words or less"),
	},
	MaxTokens:   openai.Int(2048),
	Temperature: openai.Float(0.0),
})

for stream.Next() {
	evt := stream.Current()
	if len(evt.Choices) > 0 {
		fmt.Fprintf(os.Stderr, "%s", evt.Choices[0].Text)
	}
}

if stream.Err() != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", stream.Err())
}
Example (StructuredOutputsResponseFormat)

Example_structuredOutputsResponseFormat demonstrates using JSON response formatting. This example shows how to: - Create an Azure OpenAI client with token credentials - Define JSON schema for response formatting - Request structured mathematical solutions - Parse and process formatted JSON responses

The example uses environment variables for configuration: - AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL: The deployment name of your chat model - AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT: Your Azure OpenAI endpoint URL

Response formatting is useful for: - Mathematical problem solving - Step-by-step explanations - Structured data generation - Consistent output formatting

if !CheckRequiredEnvVars("AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL", "AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL")
endpoint := os.Getenv("AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Define the structured output schema
mathResponseSchema := map[string]interface{}{
	"type": "object",
	"properties": map[string]interface{}{
		"steps": map[string]interface{}{
			"type": "array",
			"items": map[string]interface{}{
				"type": "object",
				"properties": map[string]interface{}{
					"explanation": map[string]interface{}{"type": "string"},
					"output":      map[string]interface{}{"type": "string"},
				},
				"required":             []string{"explanation", "output"},
				"additionalProperties": false,
			},
		},
		"final_answer": map[string]interface{}{"type": "string"},
	},
	"required":             []string{"steps", "final_answer"},
	"additionalProperties": false,
}

resp, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
	Model: openai.ChatModel(model),
	Messages: []openai.ChatCompletionMessageParamUnion{
		{
			OfSystem: &openai.ChatCompletionSystemMessageParam{
				Content: openai.ChatCompletionSystemMessageParamContentUnion{
					OfString: openai.String("You are a helpful math tutor."),
				},
			},
		},
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("solve 8x + 31 = 2"),
				},
			},
		},
	},
	ResponseFormat: openai.ChatCompletionNewParamsResponseFormatUnion{
		OfJSONSchema: &openai.ResponseFormatJSONSchemaParam{
			JSONSchema: openai.ResponseFormatJSONSchemaJSONSchemaParam{
				Name:   "math_response",
				Schema: mathResponseSchema,
			},
		},
	},
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

if len(resp.Choices) > 0 && resp.Choices[0].Message.Content != "" {
	responseObj := map[string]interface{}{}
	err = json.Unmarshal([]byte(resp.Choices[0].Message.Content), &responseObj)

	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	fmt.Fprintf(os.Stderr, "%#v", responseObj)
}
Example (UsingAzureContentFiltering)

Example_usingAzureContentFiltering demonstrates how to use Azure OpenAI's content filtering capabilities. This example shows how to: - Create an Azure OpenAI client with token credentials - Make a chat completion request - Extract and handle content filter results - Process content filter errors - Access Azure-specific content filter information from responses

The example uses environment variables for configuration: - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_MODEL: The deployment name of your model

Content filtering is essential for: - Maintaining content safety and compliance - Monitoring content severity levels - Implementing content moderation policies - Handling filtered content gracefully

if !CheckRequiredEnvVars("AOAI_ENDPOINT", "AOAI_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
	return
}

endpoint := os.Getenv("AOAI_ENDPOINT")
model := os.Getenv("AOAI_MODEL")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Standard OpenAI chat completion request
chatParams := openai.ChatCompletionNewParams{
	Model:     openai.ChatModel(model),
	MaxTokens: openai.Int(256),
	Messages: []openai.ChatCompletionMessageParamUnion{{
		OfUser: &openai.ChatCompletionUserMessageParam{
			Content: openai.ChatCompletionUserMessageParamContentUnion{
				OfString: openai.String("Explain briefly how solar panels work"),
			},
		},
	}},
}

resp, err := client.Chat.Completions.New(
	context.TODO(),
	chatParams,
)

// Check if there's a content filter error
var contentErr *azopenai.ContentFilterError
if azopenai.ExtractContentFilterError(err, &contentErr) {
	fmt.Fprintf(os.Stderr, "Content was filtered by Azure OpenAI:\n")

	if contentErr.Hate != nil && contentErr.Hate.Filtered != nil && *contentErr.Hate.Filtered {
		fmt.Fprintf(os.Stderr, "- Hate content was filtered\n")
	}

	if contentErr.Violence != nil && contentErr.Violence.Filtered != nil && *contentErr.Violence.Filtered {
		fmt.Fprintf(os.Stderr, "- Violent content was filtered\n")
	}

	if contentErr.Sexual != nil && contentErr.Sexual.Filtered != nil && *contentErr.Sexual.Filtered {
		fmt.Fprintf(os.Stderr, "- Sexual content was filtered\n")
	}

	if contentErr.SelfHarm != nil && contentErr.SelfHarm.Filtered != nil && *contentErr.SelfHarm.Filtered {
		fmt.Fprintf(os.Stderr, "- Self-harm content was filtered\n")
	}

	return
} else if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

if len(resp.Choices) == 0 {
	fmt.Fprintf(os.Stderr, "No choices returned in the response, the model may have failed to generate content\n")
	return
}

// Access the Azure-specific content filter results from the response
azureChatChoice := azopenai.ChatCompletionChoice(resp.Choices[0])
contentFilterResults, err := azureChatChoice.ContentFilterResults()

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
} else if contentFilterResults != nil {
	fmt.Fprintf(os.Stderr, "Content Filter Results:\n")

	if contentFilterResults.Hate != nil && contentFilterResults.Hate.Severity != nil {
		fmt.Fprintf(os.Stderr, "- Hate severity: %s\n", *contentFilterResults.Hate.Severity)
	}

	if contentFilterResults.Violence != nil && contentFilterResults.Violence.Severity != nil {
		fmt.Fprintf(os.Stderr, "- Violence severity: %s\n", *contentFilterResults.Violence.Severity)
	}

	if contentFilterResults.Sexual != nil && contentFilterResults.Sexual.Severity != nil {
		fmt.Fprintf(os.Stderr, "- Sexual severity: %s\n", *contentFilterResults.Sexual.Severity)
	}

	if contentFilterResults.SelfHarm != nil && contentFilterResults.SelfHarm.Severity != nil {
		fmt.Fprintf(os.Stderr, "- Self-harm severity: %s\n", *contentFilterResults.SelfHarm.Severity)
	}
}

// Access the response content
fmt.Fprintf(os.Stderr, "\nResponse: %s\n", resp.Choices[0].Message.Content)
Example (UsingAzureOnYourData)

Example_usingAzureOnYourData demonstrates how to use Azure OpenAI's Azure-On-Your-Data feature. This example shows how to: - Create an Azure OpenAI client with token credentials - Configure an Azure Cognitive Search data source - Send a chat completion request with data source integration - Process Azure-specific response data including citations and content filtering results

The example uses environment variables for configuration: - AOAI_OYD_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_OYD_MODEL: The deployment name of your model - COGNITIVE_SEARCH_API_ENDPOINT: Your Azure Cognitive Search endpoint - COGNITIVE_SEARCH_API_INDEX: The name of your search index

Azure-On-Your-Data enables you to enhance chat completions with information from your own data sources, allowing for more contextual and accurate responses based on your content.

if !CheckRequiredEnvVars("AOAI_OYD_ENDPOINT", "AOAI_OYD_MODEL",
	"COGNITIVE_SEARCH_API_ENDPOINT", "COGNITIVE_SEARCH_API_INDEX") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
	return
}

endpoint := os.Getenv("AOAI_OYD_ENDPOINT")
model := os.Getenv("AOAI_OYD_MODEL")
cognitiveSearchEndpoint := os.Getenv("COGNITIVE_SEARCH_API_ENDPOINT")
cognitiveSearchIndexName := os.Getenv("COGNITIVE_SEARCH_API_INDEX")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

chatParams := openai.ChatCompletionNewParams{
	Model:     openai.ChatModel(model),
	MaxTokens: openai.Int(512),
	Messages: []openai.ChatCompletionMessageParamUnion{{
		OfUser: &openai.ChatCompletionUserMessageParam{
			Content: openai.ChatCompletionUserMessageParamContentUnion{
				OfString: openai.String("What does the OpenAI package do?"),
			},
		},
	}},
}

// There are other types of data sources available. Examples:
//
// - AzureCosmosDBChatExtensionConfiguration
// - AzureMachineLearningIndexChatExtensionConfiguration
// - AzureSearchChatExtensionConfiguration
// - PineconeChatExtensionConfiguration
//
// See the definition of [AzureChatExtensionConfigurationClassification] for a full list.
azureSearchDataSource := &azopenai.AzureSearchChatExtensionConfiguration{
	Parameters: &azopenai.AzureSearchChatExtensionParameters{
		Endpoint:       &cognitiveSearchEndpoint,
		IndexName:      &cognitiveSearchIndexName,
		Authentication: &azopenai.OnYourDataSystemAssignedManagedIdentityAuthenticationOptions{},
	},
}

resp, err := client.Chat.Completions.New(
	context.TODO(),
	chatParams,
	azopenai.WithDataSources(azureSearchDataSource),
)

if err != nil {
	//  TODO: Update the following line with your application specific error handling logic
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

for _, chatChoice := range resp.Choices {
	// Azure-specific response data can be extracted using helpers, like [azopenai.ChatCompletionChoice].
	azureChatChoice := azopenai.ChatCompletionChoice(chatChoice)
	azureContentFilterResult, err := azureChatChoice.ContentFilterResults()

	if err != nil {
		//  TODO: Update the following line with your application specific error handling logic
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	if azureContentFilterResult != nil {
		fmt.Fprintf(os.Stderr, "ContentFilterResult: %#v\n", azureContentFilterResult)
	}

	// there are also helpers for individual types, not just top-level response types.
	azureChatCompletionMsg := azopenai.ChatCompletionMessage(chatChoice.Message)
	msgContext, err := azureChatCompletionMsg.Context()

	if err != nil {
		//  TODO: Update the following line with your application specific error handling logic
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	for _, citation := range msgContext.Citations {
		if citation.Content != nil {
			fmt.Fprintf(os.Stderr, "Citation = %s\n", *citation.Content)
		}
	}

	// the original fields from the type are also still available.
	fmt.Fprintf(os.Stderr, "Content: %s\n", azureChatCompletionMsg.Content)
}

fmt.Fprintf(os.Stderr, "Example complete\n")
Example (UsingAzurePromptFilteringWithStreaming)

Example_usingAzurePromptFilteringWithStreaming demonstrates how to use Azure OpenAI's prompt filtering with streaming responses. This example shows how to: - Create an Azure OpenAI client with token credentials - Set up a streaming chat completion request - Handle streaming responses with Azure extensions - Monitor prompt filter results in real-time - Accumulate and process streamed content

The example uses environment variables for configuration: - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_MODEL: The deployment name of your model

Streaming with prompt filtering is useful for: - Real-time content moderation - Progressive content delivery - Monitoring content safety during generation - Building responsive applications with content safety checks

if !CheckRequiredEnvVars("AOAI_ENDPOINT", "AOAI_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
	return
}

endpoint := os.Getenv("AOAI_ENDPOINT")
model := os.Getenv("AOAI_MODEL")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Example of streaming with Azure extensions
fmt.Fprintf(os.Stderr, "Streaming example:\n")
streamingParams := openai.ChatCompletionNewParams{
	Model:     openai.ChatModel(model),
	MaxTokens: openai.Int(256),
	Messages: []openai.ChatCompletionMessageParamUnion{{
		OfUser: &openai.ChatCompletionUserMessageParam{
			Content: openai.ChatCompletionUserMessageParamContentUnion{
				OfString: openai.String("List 3 benefits of renewable energy"),
			},
		},
	}},
}

stream := client.Chat.Completions.NewStreaming(
	context.TODO(),
	streamingParams,
)

var fullContent string

for stream.Next() {
	chunk := stream.Current()

	// Get Azure-specific prompt filter results, if available
	azureChunk := azopenai.ChatCompletionChunk(chunk)
	promptFilterResults, err := azureChunk.PromptFilterResults()

	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	if promptFilterResults != nil {
		fmt.Fprintf(os.Stderr, "- Prompt filter results detected\n")
	}

	if len(chunk.Choices) > 0 {
		content := chunk.Choices[0].Delta.Content
		fullContent += content
		fmt.Fprint(os.Stderr, content)
	}
}

if err := stream.Err(); err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

fmt.Fprintf(os.Stderr, "\n\nStreaming complete. Full content length: %d characters\n", len(fullContent))
Example (UsingDefaultAzureCredential)

Example_usingDefaultAzureCredential demonstrates how to authenticate with Azure OpenAI using Azure Active Directory credentials. This example shows how to: - Create an Azure OpenAI client using DefaultAzureCredential - Configure authentication options with tenant ID - Make a simple request to test the authentication

The example uses environment variables for configuration: - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_MODEL: The deployment name of your model - AZURE_TENANT_ID: Your Azure tenant ID - AZURE_CLIENT_ID: (Optional) Your Azure client ID - AZURE_CLIENT_SECRET: (Optional) Your Azure client secret

DefaultAzureCredential supports multiple authentication methods including: - Environment variables - Managed Identity - Azure CLI credentials

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/openai/openai-go"
	"github.com/openai/openai-go/azure"
)

// Example_usingDefaultAzureCredential demonstrates how to authenticate with Azure OpenAI using Azure Active Directory credentials.
// This example shows how to:
// - Create an Azure OpenAI client using DefaultAzureCredential
// - Configure authentication options with tenant ID
// - Make a simple request to test the authentication
//
// The example uses environment variables for configuration:
// - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL
// - AOAI_MODEL: The deployment name of your model
// - AZURE_TENANT_ID: Your Azure tenant ID
// - AZURE_CLIENT_ID: (Optional) Your Azure client ID
// - AZURE_CLIENT_SECRET: (Optional) Your Azure client secret
//
// DefaultAzureCredential supports multiple authentication methods including:
// - Environment variables
// - Managed Identity
// - Azure CLI credentials
func main() {
	if !CheckRequiredEnvVars("AOAI_ENDPOINT", "AOAI_MODEL") {
		fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
		return
	}

	endpoint := os.Getenv("AOAI_ENDPOINT")
	model := os.Getenv("AOAI_MODEL")
	tenantID := os.Getenv("AZURE_TENANT_ID")

	// DefaultAzureCredential automatically tries different authentication methods in order:
	// - Environment variables (AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID)
	// - Managed Identity
	// - Azure CLI credentials
	credential, err := azidentity.NewDefaultAzureCredential(&azidentity.DefaultAzureCredentialOptions{
		TenantID: tenantID,
	})
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	client := openai.NewClient(
		azure.WithEndpoint(endpoint, "2024-08-01-preview"),
		azure.WithTokenCredential(credential),
	)

	// Use the client with default credentials
	makeSimpleRequest(&client, model)
}

// Helper function to make a simple request to Azure OpenAI
func makeSimpleRequest(client *openai.Client, model string) {
	chatParams := openai.ChatCompletionNewParams{
		Model:     openai.ChatModel(model),
		MaxTokens: openai.Int(512),
		Messages: []openai.ChatCompletionMessageParamUnion{{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("Say hello!"),
				},
			},
		}},
	}

	resp, err := client.Chat.Completions.New(
		context.TODO(),
		chatParams,
	)

	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	if len(resp.Choices) > 0 {
		fmt.Fprintf(os.Stderr, "Response: %s\n", resp.Choices[0].Message.Content)
	}
}
Example (UsingEnhancements)

Example_usingEnhancements demonstrates how to use Azure OpenAI's enhanced features. This example shows how to: - Create an Azure OpenAI client with token credentials - Configure chat completion enhancements like grounding - Process Azure-specific response data including content filtering - Handle message context and citations

The example uses environment variables for configuration: - AOAI_OYD_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_OYD_MODEL: The deployment name of your model

Azure OpenAI enhancements provide additional capabilities beyond standard OpenAI features, such as improved grounding and content filtering for more accurate and controlled responses.

if !CheckRequiredEnvVars("AOAI_OYD_ENDPOINT", "AOAI_OYD_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
	return
}

endpoint := os.Getenv("AOAI_OYD_ENDPOINT")
model := os.Getenv("AOAI_OYD_MODEL")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

chatParams := openai.ChatCompletionNewParams{
	Model:     openai.ChatModel(model),
	MaxTokens: openai.Int(512),
	Messages: []openai.ChatCompletionMessageParamUnion{{
		OfUser: &openai.ChatCompletionUserMessageParam{
			Content: openai.ChatCompletionUserMessageParamContentUnion{
				OfString: openai.String("What does the OpenAI package do?"),
			},
		},
	}},
}

resp, err := client.Chat.Completions.New(
	context.TODO(),
	chatParams,
	azopenai.WithEnhancements(azopenai.AzureChatEnhancementConfiguration{
		Grounding: &azopenai.AzureChatGroundingEnhancementConfiguration{
			Enabled: to.Ptr(true),
		},
	}),
)

if err != nil {
	//  TODO: Update the following line with your application specific error handling logic
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

for _, chatChoice := range resp.Choices {
	// Azure-specific response data can be extracted using helpers, like [azopenai.ChatCompletionChoice].
	azureChatChoice := azopenai.ChatCompletionChoice(chatChoice)
	azureContentFilterResult, err := azureChatChoice.ContentFilterResults()

	if err != nil {
		//  TODO: Update the following line with your application specific error handling logic
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	if azureContentFilterResult != nil {
		fmt.Fprintf(os.Stderr, "ContentFilterResult: %#v\n", azureContentFilterResult)
	}

	// there are also helpers for individual types, not just top-level response types.
	azureChatCompletionMsg := azopenai.ChatCompletionMessage(chatChoice.Message)
	msgContext, err := azureChatCompletionMsg.Context()

	if err != nil {
		//  TODO: Update the following line with your application specific error handling logic
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	for _, citation := range msgContext.Citations {
		if citation.Content != nil {
			fmt.Fprintf(os.Stderr, "Citation = %s\n", *citation.Content)
		}
	}

	// the original fields from the type are also still available.
	fmt.Fprintf(os.Stderr, "Content: %s\n", azureChatCompletionMsg.Content)
}

fmt.Fprintf(os.Stderr, "Example complete\n")
Example (Vision)

Example_vision demonstrates how to use Azure OpenAI's Vision capabilities for image analysis. This example shows how to:

- Create an Azure OpenAI client with token credentials
- Send an image URL to the model for analysis
- Configure the chat completion request with image content
- Process the model's description of the image

The example uses environment variables for configuration:

- AOAI_VISION_MODEL: The deployment name of your vision-capable model (e.g., gpt-4-vision)
- AOAI_VISION_ENDPOINT: Your Azure OpenAI endpoint URL

Vision capabilities are useful for:

- Image description and analysis
- Visual question answering
- Content moderation
- Accessibility features
- Image-based search and retrieval

if !CheckRequiredEnvVars("AOAI_VISION_MODEL", "AOAI_VISION_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_VISION_MODEL") // e.g. "gpt-4o"
endpoint := os.Getenv("AOAI_VISION_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

imageURL := "https://www.bing.com/th?id=OHR.BradgateFallow_EN-US3932725763_1920x1080.jpg"

ctx, cancel := context.WithTimeout(context.TODO(), time.Minute)
defer cancel()

resp, err := client.Chat.Completions.New(ctx, openai.ChatCompletionNewParams{
	Model: openai.ChatModel(model),
	Messages: []openai.ChatCompletionMessageParamUnion{
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfArrayOfContentParts: []openai.ChatCompletionContentPartUnionParam{
						{
							OfText: &openai.ChatCompletionContentPartTextParam{
								Text: "Describe this image",
							},
						},
						{
							OfImageURL: &openai.ChatCompletionContentPartImageParam{
								ImageURL: openai.ChatCompletionContentPartImageImageURLParam{
									URL: imageURL,
								},
							},
						},
					},
				},
			},
		},
	},
	MaxTokens: openai.Int(512),
})

if err != nil {
	// TODO: Update the following line with your application specific error handling logic
	log.Printf("ERROR: %s", err)
	return
}

if len(resp.Choices) > 0 && resp.Choices[0].Message.Content != "" {
	// Prints "Result: The image shows two deer standing in a field of tall, autumn-colored ferns"
	fmt.Fprintf(os.Stderr, "Result: %s\n", resp.Choices[0].Message.Content)
}

Index

Examples

Constants

This section is empty.

Variables

This section is empty.

Functions

func ExtractContentFilterError added in v0.8.0

func ExtractContentFilterError(err error, contentFilterErr **ContentFilterError) bool

ExtractContentFilterError checks the error to see if it contains content filtering information. If so, it assigns the resulting information to *contentFilterErr, similar to errors.As().

Prompt filtering information will be present if you see an error message similar to this: 'The response was filtered due to the prompt triggering'. (NOTE: the error message is illustrative and can change.)

Usage looks like this:

resp, err := chatCompletionsService.New(args)

var contentFilterErr *azopenai.ContentFilterError

if azopenai.ExtractContentFilterError(err, &contentFilterErr) {
	// contentFilterErr.Hate, contentFilterErr.SelfHarm, contentFilterErr.Sexual or contentFilterErr.Violence
	// contain information about why content was flagged.
}

func WithDataSources added in v0.8.0

WithDataSources adds in Azure data sources to be used with the "Azure OpenAI On Your Data" feature.

func WithEnhancements added in v0.8.0

WithEnhancements configures Azure OpenAI enhancements, such as grounding and optical character recognition (OCR).

Types

type AzureChatEnhancementConfiguration added in v0.4.0

type AzureChatEnhancementConfiguration struct {
	// A representation of the available options for the Azure OpenAI grounding enhancement.
	Grounding *AzureChatGroundingEnhancementConfiguration

	// A representation of the available options for the Azure OpenAI optical character recognition (OCR) enhancement.
	Ocr *AzureChatOCREnhancementConfiguration
}

AzureChatEnhancementConfiguration - A representation of the available Azure OpenAI enhancement configurations.

func (AzureChatEnhancementConfiguration) MarshalJSON added in v0.4.0

func (a AzureChatEnhancementConfiguration) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type AzureChatEnhancementConfiguration.

func (*AzureChatEnhancementConfiguration) UnmarshalJSON added in v0.4.0

func (a *AzureChatEnhancementConfiguration) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureChatEnhancementConfiguration.

type AzureChatEnhancements added in v0.4.0

type AzureChatEnhancements struct {
	// The grounding enhancement that returns the bounding box of the objects detected in the image.
	Grounding *AzureGroundingEnhancement
}

AzureChatEnhancements - Represents the output results of Azure enhancements to chat completions, as configured via the matching input provided in the request.

func (AzureChatEnhancements) MarshalJSON added in v0.4.0

func (a AzureChatEnhancements) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type AzureChatEnhancements.

func (*AzureChatEnhancements) UnmarshalJSON added in v0.4.0

func (a *AzureChatEnhancements) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureChatEnhancements.

type AzureChatExtensionConfiguration added in v0.2.0

type AzureChatExtensionConfiguration struct {
	// REQUIRED; The label for the type of an Azure chat extension. This typically corresponds to a matching Azure resource. Azure
	// chat extensions are only compatible with Azure OpenAI.
	Type *AzureChatExtensionType
}

AzureChatExtensionConfiguration - A representation of configuration data for a single Azure OpenAI chat extension. This will be used by a chat completions request that should use Azure OpenAI chat extensions to augment the response behavior. The use of this configuration is compatible only with Azure OpenAI.

func (*AzureChatExtensionConfiguration) GetAzureChatExtensionConfiguration added in v0.4.0

func (a *AzureChatExtensionConfiguration) GetAzureChatExtensionConfiguration() *AzureChatExtensionConfiguration

GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type AzureChatExtensionConfiguration.

func (AzureChatExtensionConfiguration) MarshalJSON added in v0.2.0

func (a AzureChatExtensionConfiguration) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type AzureChatExtensionConfiguration.

func (*AzureChatExtensionConfiguration) UnmarshalJSON added in v0.2.0

func (a *AzureChatExtensionConfiguration) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureChatExtensionConfiguration.

type AzureChatExtensionConfigurationClassification added in v0.4.0

type AzureChatExtensionConfigurationClassification interface {
	// GetAzureChatExtensionConfiguration returns the AzureChatExtensionConfiguration content of the underlying type.
	GetAzureChatExtensionConfiguration() *AzureChatExtensionConfiguration
}

AzureChatExtensionConfigurationClassification provides polymorphic access to related types. Call the interface's GetAzureChatExtensionConfiguration() method to access the common type. Use a type switch to determine the concrete type. The possible types are:

- *AzureChatExtensionConfiguration
- *AzureCosmosDBChatExtensionConfiguration
- *AzureSearchChatExtensionConfiguration
- *ElasticsearchChatExtensionConfiguration
- *MongoDBChatExtensionConfiguration
- *PineconeChatExtensionConfiguration

type AzureChatExtensionDataSourceResponseCitation added in v0.5.0

type AzureChatExtensionDataSourceResponseCitation struct {
	// REQUIRED; The content of the citation.
	Content *string

	// The chunk ID of the citation.
	ChunkID *string

	// The file path of the citation.
	Filepath *string

	// The rerank score of the retrieved document.
	RerankScore *float64

	// The title of the citation.
	Title *string

	// The URL of the citation.
	URL *string
}

AzureChatExtensionDataSourceResponseCitation - A single instance of additional context information available when Azure OpenAI chat extensions are involved in the generation of a corresponding chat completions response. This context information is only populated when using an Azure OpenAI request configured to use a matching extension.

func (AzureChatExtensionDataSourceResponseCitation) MarshalJSON added in v0.5.0

MarshalJSON implements the json.Marshaller interface for type AzureChatExtensionDataSourceResponseCitation.

func (*AzureChatExtensionDataSourceResponseCitation) UnmarshalJSON added in v0.5.0

func (a *AzureChatExtensionDataSourceResponseCitation) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureChatExtensionDataSourceResponseCitation.

type AzureChatExtensionRetrieveDocumentFilterReason added in v0.6.0

type AzureChatExtensionRetrieveDocumentFilterReason string

AzureChatExtensionRetrieveDocumentFilterReason - The reason for filtering the retrieved document.

const (
	// AzureChatExtensionRetrieveDocumentFilterReasonRerank - The document is not filtered by the original search score threshold,
	// but is filtered by the rerank score and the `top_n_documents` configuration.
	AzureChatExtensionRetrieveDocumentFilterReasonRerank AzureChatExtensionRetrieveDocumentFilterReason = "rerank"
	// AzureChatExtensionRetrieveDocumentFilterReasonScore - The document is filtered by the original search score threshold defined
	// by the `strictness` configuration.
	AzureChatExtensionRetrieveDocumentFilterReasonScore AzureChatExtensionRetrieveDocumentFilterReason = "score"
)

func PossibleAzureChatExtensionRetrieveDocumentFilterReasonValues added in v0.6.0

func PossibleAzureChatExtensionRetrieveDocumentFilterReasonValues() []AzureChatExtensionRetrieveDocumentFilterReason

PossibleAzureChatExtensionRetrieveDocumentFilterReasonValues returns the possible values for the AzureChatExtensionRetrieveDocumentFilterReason const type.

type AzureChatExtensionRetrievedDocument added in v0.6.0

type AzureChatExtensionRetrievedDocument struct {
	// REQUIRED; The content of the citation.
	Content *string

	// REQUIRED; The index of the data source.
	DataSourceIndex *int32

	// REQUIRED; The search queries used to retrieve the document.
	SearchQueries []string

	// The chunk ID of the citation.
	ChunkID *string

	// The file path of the citation.
	Filepath *string

	// Represents the rationale for filtering the document. If the document does not undergo filtering, this field will remain
	// unset.
	FilterReason *AzureChatExtensionRetrieveDocumentFilterReason

	// The original search score of the retrieved document.
	OriginalSearchScore *float64

	// The rerank score of the retrieved document.
	RerankScore *float64

	// The title of the citation.
	Title *string

	// The URL of the citation.
	URL *string
}

AzureChatExtensionRetrievedDocument - The retrieved document.

func (AzureChatExtensionRetrievedDocument) MarshalJSON added in v0.6.0

func (a AzureChatExtensionRetrievedDocument) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type AzureChatExtensionRetrievedDocument.

func (*AzureChatExtensionRetrievedDocument) UnmarshalJSON added in v0.6.0

func (a *AzureChatExtensionRetrievedDocument) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureChatExtensionRetrievedDocument.

type AzureChatExtensionType added in v0.2.0

type AzureChatExtensionType string

AzureChatExtensionType - A representation of configuration data for a single Azure OpenAI chat extension. This will be used by a chat completions request that should use Azure OpenAI chat extensions to augment the response behavior. The use of this configuration is compatible only with Azure OpenAI.

const (
	// AzureChatExtensionTypeAzureCosmosDB - Represents the use of Azure Cosmos DB as an Azure OpenAI chat extension.
	AzureChatExtensionTypeAzureCosmosDB AzureChatExtensionType = "azure_cosmos_db"
	// AzureChatExtensionTypeAzureSearch - Represents the use of Azure AI Search as an Azure OpenAI chat extension.
	AzureChatExtensionTypeAzureSearch AzureChatExtensionType = "azure_search"
	// AzureChatExtensionTypeElasticsearch - Represents the use of Elasticsearch® index as an Azure OpenAI chat extension.
	AzureChatExtensionTypeElasticsearch AzureChatExtensionType = "elasticsearch"
	// AzureChatExtensionTypeMongoDB - Represents the use of a MongoDB chat extension.
	AzureChatExtensionTypeMongoDB AzureChatExtensionType = "mongo_db"
	// AzureChatExtensionTypePinecone - Represents the use of Pinecone index as an Azure OpenAI chat extension.
	AzureChatExtensionTypePinecone AzureChatExtensionType = "pinecone"
)

func PossibleAzureChatExtensionTypeValues added in v0.2.0

func PossibleAzureChatExtensionTypeValues() []AzureChatExtensionType

PossibleAzureChatExtensionTypeValues returns the possible values for the AzureChatExtensionType const type.

type AzureChatExtensionsMessageContext added in v0.2.0

type AzureChatExtensionsMessageContext struct {
	// All the retrieved documents.
	AllRetrievedDocuments []AzureChatExtensionRetrievedDocument

	// The contextual information associated with the Azure chat extensions used for a chat completions request. These messages
	// describe the data source retrievals, plugin invocations, and other intermediate
	// steps taken in the course of generating a chat completions response that was augmented by capabilities from Azure OpenAI
	// chat extensions.
	Citations []AzureChatExtensionDataSourceResponseCitation

	// The detected intent from the chat history, used to pass to the next turn to carry over the context.
	Intent *string
}

AzureChatExtensionsMessageContext - A representation of the additional context information available when Azure OpenAI chat extensions are involved in the generation of a corresponding chat completions response. This context information is only populated when using an Azure OpenAI request configured to use a matching extension.

func (AzureChatExtensionsMessageContext) MarshalJSON added in v0.2.0

func (a AzureChatExtensionsMessageContext) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type AzureChatExtensionsMessageContext.

func (*AzureChatExtensionsMessageContext) UnmarshalJSON added in v0.2.0

func (a *AzureChatExtensionsMessageContext) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureChatExtensionsMessageContext.

type AzureChatGroundingEnhancementConfiguration added in v0.4.0

type AzureChatGroundingEnhancementConfiguration struct {
	// REQUIRED; Specifies whether the enhancement is enabled.
	Enabled *bool
}

AzureChatGroundingEnhancementConfiguration - A representation of the available options for the Azure OpenAI grounding enhancement.

func (AzureChatGroundingEnhancementConfiguration) MarshalJSON added in v0.4.0

MarshalJSON implements the json.Marshaller interface for type AzureChatGroundingEnhancementConfiguration.

func (*AzureChatGroundingEnhancementConfiguration) UnmarshalJSON added in v0.4.0

func (a *AzureChatGroundingEnhancementConfiguration) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureChatGroundingEnhancementConfiguration.

type AzureChatOCREnhancementConfiguration added in v0.4.0

type AzureChatOCREnhancementConfiguration struct {
	// REQUIRED; Specifies whether the enhancement is enabled.
	Enabled *bool
}

AzureChatOCREnhancementConfiguration - A representation of the available options for the Azure OpenAI optical character recognition (OCR) enhancement.

func (AzureChatOCREnhancementConfiguration) MarshalJSON added in v0.4.0

func (a AzureChatOCREnhancementConfiguration) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type AzureChatOCREnhancementConfiguration.

func (*AzureChatOCREnhancementConfiguration) UnmarshalJSON added in v0.4.0

func (a *AzureChatOCREnhancementConfiguration) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureChatOCREnhancementConfiguration.

type AzureCosmosDBChatExtensionConfiguration added in v0.4.0

type AzureCosmosDBChatExtensionConfiguration struct {
	// REQUIRED; The parameters to use when configuring Azure OpenAI CosmosDB chat extensions.
	Parameters *AzureCosmosDBChatExtensionParameters

	// REQUIRED; The label for the type of an Azure chat extension. This typically corresponds to a matching Azure resource. Azure
	// chat extensions are only compatible with Azure OpenAI.
	Type *AzureChatExtensionType
}

AzureCosmosDBChatExtensionConfiguration - A specific representation of configurable options for Azure Cosmos DB when using it as an Azure OpenAI chat extension.

func (*AzureCosmosDBChatExtensionConfiguration) GetAzureChatExtensionConfiguration added in v0.4.0

func (a *AzureCosmosDBChatExtensionConfiguration) GetAzureChatExtensionConfiguration() *AzureChatExtensionConfiguration

GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type AzureCosmosDBChatExtensionConfiguration.

func (AzureCosmosDBChatExtensionConfiguration) MarshalJSON added in v0.4.0

func (a AzureCosmosDBChatExtensionConfiguration) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type AzureCosmosDBChatExtensionConfiguration.

func (*AzureCosmosDBChatExtensionConfiguration) UnmarshalJSON added in v0.4.0

func (a *AzureCosmosDBChatExtensionConfiguration) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureCosmosDBChatExtensionConfiguration.

type AzureCosmosDBChatExtensionParameters added in v0.4.0

type AzureCosmosDBChatExtensionParameters struct {
	// REQUIRED; The name of the Azure Cosmos DB resource container.
	ContainerName *string

	// REQUIRED; The MongoDB vCore database name to use with Azure Cosmos DB.
	DatabaseName *string

	// REQUIRED; The embedding dependency for vector search.
	EmbeddingDependency OnYourDataVectorizationSourceClassification

	// REQUIRED; Customized field mapping behavior to use when interacting with the search index.
	FieldsMapping *AzureCosmosDBFieldMappingOptions

	// REQUIRED; The MongoDB vCore index name to use with Azure Cosmos DB.
	IndexName *string

	// If specified as true, the system will allow partial search results to be used, and the request fails only if all of the
	// queries fail. If not specified, or specified as false, the request will fail if any search query fails.
	AllowPartialResult *bool

	// The authentication method to use when accessing the defined data source. Each data source type supports a specific set
	// of available authentication methods; please see the documentation of the data
	// source for supported mechanisms. If not otherwise provided, On Your Data will attempt to use System Managed Identity (default
	// credential) authentication.
	Authentication OnYourDataAuthenticationOptionsClassification

	// Whether queries should be restricted to use of indexed data.
	InScope *bool

	// The included properties of the output context. If not specified, the default value is citations and intent.
	IncludeContexts []OnYourDataContextProperty

	// The maximum number of rewritten queries that should be sent to the search provider for one user message. If not specified,
	// the system will decide the number of queries to send.
	MaxSearchQueries *int32

	// The configured strictness of the search relevance filtering. The higher the strictness, the higher the precision but the
	// lower the recall of the answer.
	Strictness *int32

	// The configured top number of documents to feature for the configured query.
	TopNDocuments *int32
}

AzureCosmosDBChatExtensionParameters - Parameters to use when configuring Azure OpenAI On Your Data chat extensions when using Azure Cosmos DB for MongoDB vCore. The supported authentication type is ConnectionString.

func (AzureCosmosDBChatExtensionParameters) MarshalJSON added in v0.4.0

func (a AzureCosmosDBChatExtensionParameters) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type AzureCosmosDBChatExtensionParameters.

func (*AzureCosmosDBChatExtensionParameters) UnmarshalJSON added in v0.4.0

func (a *AzureCosmosDBChatExtensionParameters) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureCosmosDBChatExtensionParameters.

type AzureCosmosDBFieldMappingOptions added in v0.4.0

type AzureCosmosDBFieldMappingOptions struct {
	// REQUIRED; The names of index fields that should be treated as content.
	ContentFields []string

	// REQUIRED; The names of fields that represent vector data.
	VectorFields []string

	// The separator pattern that content fields should use.
	ContentFieldsSeparator *string

	// The name of the index field to use as a filepath.
	FilepathField *string

	// The name of the index field to use as a title.
	TitleField *string

	// The name of the index field to use as a URL.
	URLField *string
}

AzureCosmosDBFieldMappingOptions - Optional settings to control how fields are processed when using a configured Azure Cosmos DB resource.

func (AzureCosmosDBFieldMappingOptions) MarshalJSON added in v0.4.0

func (a AzureCosmosDBFieldMappingOptions) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type AzureCosmosDBFieldMappingOptions.

func (*AzureCosmosDBFieldMappingOptions) UnmarshalJSON added in v0.4.0

func (a *AzureCosmosDBFieldMappingOptions) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureCosmosDBFieldMappingOptions.

type AzureGroundingEnhancement added in v0.4.0

type AzureGroundingEnhancement struct {
	// REQUIRED; The lines of text detected by the grounding enhancement.
	Lines []AzureGroundingEnhancementLine
}

AzureGroundingEnhancement - The grounding enhancement that returns the bounding box of the objects detected in the image.

func (AzureGroundingEnhancement) MarshalJSON added in v0.4.0

func (a AzureGroundingEnhancement) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type AzureGroundingEnhancement.

func (*AzureGroundingEnhancement) UnmarshalJSON added in v0.4.0

func (a *AzureGroundingEnhancement) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureGroundingEnhancement.

type AzureGroundingEnhancementCoordinatePoint added in v0.4.0

type AzureGroundingEnhancementCoordinatePoint struct {
	// REQUIRED; The x-coordinate (horizontal axis) of the point.
	X *float32

	// REQUIRED; The y-coordinate (vertical axis) of the point.
	Y *float32
}

AzureGroundingEnhancementCoordinatePoint - A representation of a single polygon point as used by the Azure grounding enhancement.

func (AzureGroundingEnhancementCoordinatePoint) MarshalJSON added in v0.4.0

MarshalJSON implements the json.Marshaller interface for type AzureGroundingEnhancementCoordinatePoint.

func (*AzureGroundingEnhancementCoordinatePoint) UnmarshalJSON added in v0.4.0

func (a *AzureGroundingEnhancementCoordinatePoint) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureGroundingEnhancementCoordinatePoint.

type AzureGroundingEnhancementLine added in v0.4.0

type AzureGroundingEnhancementLine struct {
	// REQUIRED; An array of spans that represent detected objects and their bounding box information.
	Spans []AzureGroundingEnhancementLineSpan

	// REQUIRED; The text within the line.
	Text *string
}

AzureGroundingEnhancementLine - A content line object consisting of an adjacent sequence of content elements, such as words and selection marks.

func (AzureGroundingEnhancementLine) MarshalJSON added in v0.4.0

func (a AzureGroundingEnhancementLine) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type AzureGroundingEnhancementLine.

func (*AzureGroundingEnhancementLine) UnmarshalJSON added in v0.4.0

func (a *AzureGroundingEnhancementLine) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureGroundingEnhancementLine.

type AzureGroundingEnhancementLineSpan added in v0.4.0

type AzureGroundingEnhancementLineSpan struct {
	// REQUIRED; The length of the span in characters, measured in Unicode codepoints.
	Length *int32

	// REQUIRED; The character offset within the text where the span begins. This offset is defined as the position of the first
	// character of the span, counting from the start of the text as Unicode codepoints.
	Offset *int32

	// REQUIRED; An array of objects representing points in the polygon that encloses the detected object.
	Polygon []AzureGroundingEnhancementCoordinatePoint

	// REQUIRED; The text content of the span that represents the detected object.
	Text *string
}

AzureGroundingEnhancementLineSpan - A span object that represents a detected object and its bounding box information.

func (AzureGroundingEnhancementLineSpan) MarshalJSON added in v0.4.0

func (a AzureGroundingEnhancementLineSpan) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type AzureGroundingEnhancementLineSpan.

func (*AzureGroundingEnhancementLineSpan) UnmarshalJSON added in v0.4.0

func (a *AzureGroundingEnhancementLineSpan) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureGroundingEnhancementLineSpan.

type AzureSearchChatExtensionConfiguration added in v0.5.0

type AzureSearchChatExtensionConfiguration struct {
	// REQUIRED; The parameters to use when configuring Azure Search.
	Parameters *AzureSearchChatExtensionParameters

	// REQUIRED; The label for the type of an Azure chat extension. This typically corresponds to a matching Azure resource. Azure
	// chat extensions are only compatible with Azure OpenAI.
	Type *AzureChatExtensionType
}

AzureSearchChatExtensionConfiguration - A specific representation of configurable options for Azure Search when using it as an Azure OpenAI chat extension.

func (*AzureSearchChatExtensionConfiguration) GetAzureChatExtensionConfiguration added in v0.5.0

func (a *AzureSearchChatExtensionConfiguration) GetAzureChatExtensionConfiguration() *AzureChatExtensionConfiguration

GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type AzureSearchChatExtensionConfiguration.

func (AzureSearchChatExtensionConfiguration) MarshalJSON added in v0.5.0

func (a AzureSearchChatExtensionConfiguration) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type AzureSearchChatExtensionConfiguration.

func (*AzureSearchChatExtensionConfiguration) UnmarshalJSON added in v0.5.0

func (a *AzureSearchChatExtensionConfiguration) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureSearchChatExtensionConfiguration.

type AzureSearchChatExtensionParameters added in v0.5.0

type AzureSearchChatExtensionParameters struct {
	// REQUIRED; The absolute endpoint path for the Azure Cognitive Search resource to use.
	Endpoint *string

	// REQUIRED; The name of the index to use as available in the referenced Azure Cognitive Search resource.
	IndexName *string

	// If specified as true, the system will allow partial search results to be used, and the request fails only if all of the
	// queries fail. If not specified, or specified as false, the request will fail if any search query fails.
	AllowPartialResult *bool

	// The authentication method to use when accessing the defined data source. Each data source type supports a specific set
	// of available authentication methods; please see the documentation of the data
	// source for supported mechanisms. If not otherwise provided, On Your Data will attempt to use System Managed Identity (default
	// credential) authentication.
	Authentication OnYourDataAuthenticationOptionsClassification

	// The embedding dependency for vector search.
	EmbeddingDependency OnYourDataVectorizationSourceClassification

	// Customized field mapping behavior to use when interacting with the search index.
	FieldsMapping *AzureSearchIndexFieldMappingOptions

	// Search filter.
	Filter *string

	// Whether queries should be restricted to use of indexed data.
	InScope *bool

	// The included properties of the output context. If not specified, the default value is citations and intent.
	IncludeContexts []OnYourDataContextProperty

	// The maximum number of rewritten queries that should be sent to the search provider for one user message. If not specified,
	// the system will decide the number of queries to send.
	MaxSearchQueries *int32

	// The query type to use with Azure Cognitive Search.
	QueryType *AzureSearchQueryType

	// The additional semantic configuration for the query.
	SemanticConfiguration *string

	// The configured strictness of the search relevance filtering. The higher the strictness, the higher the precision but the
	// lower the recall of the answer.
	Strictness *int32

	// The configured top number of documents to feature for the configured query.
	TopNDocuments *int32
}

AzureSearchChatExtensionParameters - Parameters for Azure Cognitive Search when used as an Azure OpenAI chat extension. The supported authentication types are APIKey, SystemAssignedManagedIdentity and UserAssignedManagedIdentity.

func (AzureSearchChatExtensionParameters) MarshalJSON added in v0.5.0

func (a AzureSearchChatExtensionParameters) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type AzureSearchChatExtensionParameters.

func (*AzureSearchChatExtensionParameters) UnmarshalJSON added in v0.5.0

func (a *AzureSearchChatExtensionParameters) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureSearchChatExtensionParameters.

type AzureSearchIndexFieldMappingOptions added in v0.5.0

type AzureSearchIndexFieldMappingOptions struct {
	// The names of index fields that should be treated as content.
	ContentFields []string

	// The separator pattern that content fields should use.
	ContentFieldsSeparator *string

	// The name of the index field to use as a filepath.
	FilepathField *string

	// The names of fields that represent image vector data.
	ImageVectorFields []string

	// The name of the index field to use as a title.
	TitleField *string

	// The name of the index field to use as a URL.
	URLField *string

	// The names of fields that represent vector data.
	VectorFields []string
}

AzureSearchIndexFieldMappingOptions - Optional settings to control how fields are processed when using a configured Azure Search resource.

func (AzureSearchIndexFieldMappingOptions) MarshalJSON added in v0.5.0

func (a AzureSearchIndexFieldMappingOptions) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type AzureSearchIndexFieldMappingOptions.

func (*AzureSearchIndexFieldMappingOptions) UnmarshalJSON added in v0.5.0

func (a *AzureSearchIndexFieldMappingOptions) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type AzureSearchIndexFieldMappingOptions.

type AzureSearchQueryType added in v0.5.0

type AzureSearchQueryType string

AzureSearchQueryType - The type of Azure Search retrieval query that should be executed when using it as an Azure OpenAI chat extension.

const (
	// AzureSearchQueryTypeSemantic - Represents the semantic query parser for advanced semantic modeling.
	AzureSearchQueryTypeSemantic AzureSearchQueryType = "semantic"
	// AzureSearchQueryTypeSimple - Represents the default, simple query parser.
	AzureSearchQueryTypeSimple AzureSearchQueryType = "simple"
	// AzureSearchQueryTypeVector - Represents vector search over computed data.
	AzureSearchQueryTypeVector AzureSearchQueryType = "vector"
	// AzureSearchQueryTypeVectorSemanticHybrid - Represents a combination of semantic search and vector data querying.
	AzureSearchQueryTypeVectorSemanticHybrid AzureSearchQueryType = "vector_semantic_hybrid"
	// AzureSearchQueryTypeVectorSimpleHybrid - Represents a combination of the simple query strategy with vector data.
	AzureSearchQueryTypeVectorSimpleHybrid AzureSearchQueryType = "vector_simple_hybrid"
)

func PossibleAzureSearchQueryTypeValues added in v0.5.0

func PossibleAzureSearchQueryTypeValues() []AzureSearchQueryType

PossibleAzureSearchQueryTypeValues returns the possible values for the AzureSearchQueryType const type.
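Every const type in the package has a PossibleXValues function like this one, which is convenient for validating configuration read from outside Go. A self-contained sketch of the idiom, with the query-type strings mirrored from the constants above:

```go
package main

import "fmt"

type AzureSearchQueryType string

// possibleQueryTypes mirrors the constant values listed above.
func possibleQueryTypes() []AzureSearchQueryType {
	return []AzureSearchQueryType{
		"semantic", "simple", "vector",
		"vector_semantic_hybrid", "vector_simple_hybrid",
	}
}

// isValidQueryType reports whether a string read from configuration
// names a known query type.
func isValidQueryType(v string) bool {
	for _, q := range possibleQueryTypes() {
		if string(q) == v {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isValidQueryType("vector"))
	fmt.Println(isValidQueryType("fuzzy"))
}
```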

type ChatCompletion added in v0.8.0

type ChatCompletion openai.ChatCompletion

ChatCompletion wraps an openai.ChatCompletion, allowing access to Azure specific properties.

func (ChatCompletion) PromptFilterResults added in v0.8.0

func (c ChatCompletion) PromptFilterResults() ([]ContentFilterResultsForPrompt, error)

PromptFilterResults contains content filtering results for zero or more prompts in the request.
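Because ChatCompletion is declared as a named type over openai.ChatCompletion, a plain Go conversion (not a field-by-field copy) is all that is needed to reach the Azure-specific accessors. A self-contained stand-in for the pattern, with local types substituting for the real openai and azopenai ones:

```go
package main

import "fmt"

// upstreamCompletion stands in for openai.ChatCompletion.
type upstreamCompletion struct {
	ID        string
	rawExtras map[string]string // service-specific fields kept raw upstream
}

// wrappedCompletion stands in for azopenai.ChatCompletion: the same
// underlying type, so conversion is free, plus Azure-specific accessors.
type wrappedCompletion upstreamCompletion

func (c wrappedCompletion) PromptFilterResult() (string, bool) {
	v, ok := c.rawExtras["prompt_filter_results"]
	return v, ok
}

func main() {
	resp := upstreamCompletion{
		ID:        "chatcmpl-123",
		rawExtras: map[string]string{"prompt_filter_results": "[]"},
	}
	// Convert the upstream response to the wrapper to read the extra field.
	if v, ok := wrappedCompletion(resp).PromptFilterResult(); ok {
		fmt.Println("prompt_filter_results:", v)
	}
}
```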

type ChatCompletionChoice added in v0.8.0

type ChatCompletionChoice openai.ChatCompletionChoice

ChatCompletionChoice wraps an openai.ChatCompletionChoice, allowing access to Azure specific properties.

func (ChatCompletionChoice) ContentFilterResults added in v0.8.0

func (c ChatCompletionChoice) ContentFilterResults() (*ContentFilterResultsForChoice, error)

ContentFilterResults contains content filtering information for this choice.

type ChatCompletionChunk added in v0.8.0

type ChatCompletionChunk openai.ChatCompletionChunk

ChatCompletionChunk wraps an openai.ChatCompletionChunk, allowing access to Azure specific properties.

func (ChatCompletionChunk) PromptFilterResults added in v0.8.0

func (c ChatCompletionChunk) PromptFilterResults() ([]ContentFilterResultsForPrompt, error)

PromptFilterResults contains content filtering results for zero or more prompts in the request. In a streaming request, results for different prompts may arrive at different times or in different orders.

type ChatCompletionChunkChoiceDelta added in v0.8.0

type ChatCompletionChunkChoiceDelta openai.ChatCompletionChunkChoiceDelta

ChatCompletionChunkChoiceDelta wraps an openai.ChatCompletionChunkChoiceDelta, allowing access to Azure specific properties.

func (ChatCompletionChunkChoiceDelta) Context added in v0.8.0

Context contains additional context information available when Azure OpenAI chat extensions are involved in the generation of a corresponding chat completions response.

type ChatCompletionMessage added in v0.8.0

type ChatCompletionMessage openai.ChatCompletionMessage

ChatCompletionMessage wraps an openai.ChatCompletionMessage, allowing access to Azure specific properties.

func (ChatCompletionMessage) Context added in v0.8.0

Context contains additional context information available when Azure OpenAI chat extensions are involved in the generation of a corresponding chat completions response.

type Completion added in v0.8.0

type Completion openai.Completion

Completion wraps an openai.Completion, allowing access to Azure specific properties.

func (Completion) PromptFilterResults added in v0.8.0

func (c Completion) PromptFilterResults() ([]ContentFilterResultsForPrompt, error)

PromptFilterResults contains content filtering results for zero or more prompts in the request.

type CompletionChoice added in v0.8.0

type CompletionChoice openai.CompletionChoice

CompletionChoice wraps an openai.CompletionChoice, allowing access to Azure specific properties.

func (CompletionChoice) ContentFilterResults added in v0.8.0

func (c CompletionChoice) ContentFilterResults() (*ContentFilterResultsForChoice, error)

ContentFilterResults contains content filtering information for this choice.

type ContentFilterBlocklistIDResult added in v0.4.0

type ContentFilterBlocklistIDResult struct {
	// REQUIRED; A value indicating whether or not the content has been filtered.
	Filtered *bool

	// REQUIRED; The ID of the custom blocklist evaluated.
	ID *string
}

ContentFilterBlocklistIDResult - Represents the outcome of an evaluation against a custom blocklist as performed by content filtering.

func (ContentFilterBlocklistIDResult) MarshalJSON added in v0.4.0

func (c ContentFilterBlocklistIDResult) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type ContentFilterBlocklistIDResult.

func (*ContentFilterBlocklistIDResult) UnmarshalJSON added in v0.4.0

func (c *ContentFilterBlocklistIDResult) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type ContentFilterBlocklistIDResult.

type ContentFilterCitedDetectionResult added in v0.4.0

type ContentFilterCitedDetectionResult struct {
	// REQUIRED; A value indicating whether detection occurred, irrespective of severity or whether the content was filtered.
	Detected *bool

	// REQUIRED; A value indicating whether or not the content has been filtered.
	Filtered *bool

	// The license description associated with the detection.
	License *string

	// The internet location associated with the detection.
	URL *string
}

ContentFilterCitedDetectionResult - Represents the outcome of a detection operation against protected resources as performed by content filtering.

func (ContentFilterCitedDetectionResult) MarshalJSON added in v0.4.0

func (c ContentFilterCitedDetectionResult) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type ContentFilterCitedDetectionResult.

func (*ContentFilterCitedDetectionResult) UnmarshalJSON added in v0.4.0

func (c *ContentFilterCitedDetectionResult) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type ContentFilterCitedDetectionResult.

type ContentFilterDetailedResults added in v0.6.0

type ContentFilterDetailedResults struct {
	// REQUIRED; The collection of detailed blocklist result information.
	Details []ContentFilterBlocklistIDResult

	// REQUIRED; A value indicating whether or not the content has been filtered.
	Filtered *bool
}

ContentFilterDetailedResults - Represents a structured collection of result details for content filtering.

func (ContentFilterDetailedResults) MarshalJSON added in v0.6.0

func (c ContentFilterDetailedResults) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type ContentFilterDetailedResults.

func (*ContentFilterDetailedResults) UnmarshalJSON added in v0.6.0

func (c *ContentFilterDetailedResults) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type ContentFilterDetailedResults.

type ContentFilterDetectionResult added in v0.4.0

type ContentFilterDetectionResult struct {
	// REQUIRED; A value indicating whether detection occurred, irrespective of severity or whether the content was filtered.
	Detected *bool

	// REQUIRED; A value indicating whether or not the content has been filtered.
	Filtered *bool
}

ContentFilterDetectionResult - Represents the outcome of a detection operation performed by content filtering.

func (ContentFilterDetectionResult) MarshalJSON added in v0.4.0

func (c ContentFilterDetectionResult) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type ContentFilterDetectionResult.

func (*ContentFilterDetectionResult) UnmarshalJSON added in v0.4.0

func (c *ContentFilterDetectionResult) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type ContentFilterDetectionResult.

type ContentFilterError added in v0.8.0

type ContentFilterError struct {
	OpenAIError *openai.Error
	ContentFilterResultDetailsForPrompt
}

ContentFilterError can be extracted from an openai.Error using ExtractContentFilterError.

func (*ContentFilterError) Error added in v0.8.0

func (c *ContentFilterError) Error() string

Error implements the error interface for type ContentFilterError.

func (*ContentFilterError) NonRetriable added in v0.8.0

func (c *ContentFilterError) NonRetriable()

NonRetriable is a marker method, indicating the request failure is terminal.

func (*ContentFilterError) Unwrap added in v0.8.0

func (c *ContentFilterError) Unwrap() error

Unwrap returns the inner error for this error.

type ContentFilterResult added in v0.3.0

type ContentFilterResult struct {
	// REQUIRED; A value indicating whether or not the content has been filtered.
	Filtered *bool

	// REQUIRED; Ratings for the intensity and risk level of filtered content.
	Severity *ContentFilterSeverity
}

ContentFilterResult - Information about the severity level of content and whether it has been filtered.

func (ContentFilterResult) MarshalJSON added in v0.3.0

func (c ContentFilterResult) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type ContentFilterResult.

func (*ContentFilterResult) UnmarshalJSON added in v0.3.0

func (c *ContentFilterResult) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type ContentFilterResult.

type ContentFilterResultDetailsForPrompt added in v0.4.0

type ContentFilterResultDetailsForPrompt struct {
	// Describes detection results against configured custom blocklists.
	CustomBlocklists *ContentFilterDetailedResults

	// Describes an error returned if the content filtering system is down or otherwise unable to complete the operation in time.
	Error *Error

	// Describes language attacks or uses that include pejorative or discriminatory language with reference to a person or identity
	// group on the basis of certain differentiating attributes of these groups
	// including but not limited to race, ethnicity, nationality, gender identity and expression, sexual orientation, religion,
	// immigration status, ability status, personal appearance, and body size.
	Hate *ContentFilterResult

	// Whether an indirect attack was detected in the prompt.
	IndirectAttack *ContentFilterDetectionResult

	// Whether a jailbreak attempt was detected in the prompt.
	Jailbreak *ContentFilterDetectionResult

	// Describes whether profanity was detected.
	Profanity *ContentFilterDetectionResult

	// Describes language related to physical actions intended to purposely hurt, injure, or damage one’s body, or kill oneself.
	SelfHarm *ContentFilterResult

	// Describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate
	// terms, physical sexual acts, including those portrayed as an assault or a
	// forced sexual violent act against one’s will, prostitution, pornography, and abuse.
	Sexual *ContentFilterResult

	// Describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes
	// weapons, etc.
	Violence *ContentFilterResult
}

ContentFilterResultDetailsForPrompt - Information about content filtering evaluated against input data to Azure OpenAI.

func (ContentFilterResultDetailsForPrompt) MarshalJSON added in v0.4.0

func (c ContentFilterResultDetailsForPrompt) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type ContentFilterResultDetailsForPrompt.

func (*ContentFilterResultDetailsForPrompt) UnmarshalJSON added in v0.4.0

func (c *ContentFilterResultDetailsForPrompt) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type ContentFilterResultDetailsForPrompt.

type ContentFilterResultsForChoice added in v0.4.0

type ContentFilterResultsForChoice struct {
	// Describes detection results against configured custom blocklists.
	CustomBlocklists *ContentFilterDetailedResults

	// Describes an error returned if the content filtering system is down or otherwise unable to complete the operation in time.
	Error *Error

	// Describes language attacks or uses that include pejorative or discriminatory language with reference to a person or identity
	// group on the basis of certain differentiating attributes of these groups
	// including but not limited to race, ethnicity, nationality, gender identity and expression, sexual orientation, religion,
	// immigration status, ability status, personal appearance, and body size.
	Hate *ContentFilterResult

	// Describes whether profanity was detected.
	Profanity *ContentFilterDetectionResult

	// Information about detection of protected code material.
	ProtectedMaterialCode *ContentFilterCitedDetectionResult

	// Information about detection of protected text material.
	ProtectedMaterialText *ContentFilterDetectionResult

	// Describes language related to physical actions intended to purposely hurt, injure, or damage one’s body, or kill oneself.
	SelfHarm *ContentFilterResult

	// Describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate
	// terms, physical sexual acts, including those portrayed as an assault or a
	// forced sexual violent act against one’s will, prostitution, pornography, and abuse.
	Sexual *ContentFilterResult

	// Describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes
	// weapons, etc.
	Violence *ContentFilterResult
}

ContentFilterResultsForChoice - Information about content filtering evaluated against generated model output.

func (ContentFilterResultsForChoice) MarshalJSON added in v0.4.0

func (c ContentFilterResultsForChoice) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type ContentFilterResultsForChoice.

func (*ContentFilterResultsForChoice) UnmarshalJSON added in v0.4.0

func (c *ContentFilterResultsForChoice) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type ContentFilterResultsForChoice.

type ContentFilterResultsForPrompt added in v0.4.0

type ContentFilterResultsForPrompt struct {
	// REQUIRED; Content filtering results for this prompt
	ContentFilterResults *ContentFilterResultDetailsForPrompt

	// REQUIRED; The index of this prompt in the set of prompt results
	PromptIndex *int32
}

ContentFilterResultsForPrompt - Content filtering results for a single prompt in the request.

func (ContentFilterResultsForPrompt) MarshalJSON added in v0.4.0

func (c ContentFilterResultsForPrompt) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type ContentFilterResultsForPrompt.

func (*ContentFilterResultsForPrompt) UnmarshalJSON added in v0.4.0

func (c *ContentFilterResultsForPrompt) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type ContentFilterResultsForPrompt.

type ContentFilterSeverity

type ContentFilterSeverity string

ContentFilterSeverity - Ratings for the intensity and risk level of harmful content.

const (
	// ContentFilterSeverityHigh - Content that displays explicit and severe harmful instructions, actions,
	// damage, or abuse; includes endorsement, glorification, or promotion of severe
	// harmful acts, extreme or illegal forms of harm, radicalization, or non-consensual
	// power exchange or abuse.
	ContentFilterSeverityHigh ContentFilterSeverity = "high"
	// ContentFilterSeverityLow - Content that expresses prejudiced, judgmental, or opinionated views, includes offensive
	// use of language, stereotyping, use cases exploring a fictional world (for example, gaming,
	// literature) and depictions at low intensity.
	ContentFilterSeverityLow ContentFilterSeverity = "low"
	// ContentFilterSeverityMedium - Content that uses offensive, insulting, mocking, intimidating, or demeaning language
	// towards specific identity groups, includes depictions of seeking and executing harmful
	// instructions, fantasies, glorification, promotion of harm at medium intensity.
	ContentFilterSeverityMedium ContentFilterSeverity = "medium"
	// ContentFilterSeveritySafe - Content may be related to violence, self-harm, sexual, or hate categories but the terms
	// are used in general, journalistic, scientific, medical, and similar professional contexts,
	// which are appropriate for most audiences.
	ContentFilterSeveritySafe ContentFilterSeverity = "safe"
)

func PossibleContentFilterSeverityValues

func PossibleContentFilterSeverityValues() []ContentFilterSeverity

PossibleContentFilterSeverityValues returns the possible values for the ContentFilterSeverity const type.
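The severity values are unordered strings on the wire, so a threshold check ("medium or worse") needs an ordering imposed by the caller. A self-contained sketch mirroring the four values above:

```go
package main

import "fmt"

type ContentFilterSeverity string

// The values mirror the azopenai constants shown above.
const (
	SeveritySafe   ContentFilterSeverity = "safe"
	SeverityLow    ContentFilterSeverity = "low"
	SeverityMedium ContentFilterSeverity = "medium"
	SeverityHigh   ContentFilterSeverity = "high"
)

// rank orders severities so callers can apply a threshold; the wire
// values themselves carry no ordering.
var rank = map[ContentFilterSeverity]int{
	SeveritySafe:   0,
	SeverityLow:    1,
	SeverityMedium: 2,
	SeverityHigh:   3,
}

// atLeast reports whether s meets or exceeds the given threshold.
func atLeast(s, threshold ContentFilterSeverity) bool {
	return rank[s] >= rank[threshold]
}

func main() {
	fmt.Println(atLeast(SeverityMedium, SeverityLow))
	fmt.Println(atLeast(SeveritySafe, SeverityMedium))
}
```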

type ElasticsearchChatExtensionConfiguration added in v0.4.0

type ElasticsearchChatExtensionConfiguration struct {
	// REQUIRED; The parameters to use when configuring Elasticsearch®.
	Parameters *ElasticsearchChatExtensionParameters

	// REQUIRED; The label for the type of an Azure chat extension. This typically corresponds to a matching Azure resource. Azure
	// chat extensions are only compatible with Azure OpenAI.
	Type *AzureChatExtensionType
}

ElasticsearchChatExtensionConfiguration - A specific representation of configurable options for Elasticsearch when using it as an Azure OpenAI chat extension.

func (*ElasticsearchChatExtensionConfiguration) GetAzureChatExtensionConfiguration added in v0.4.0

func (e *ElasticsearchChatExtensionConfiguration) GetAzureChatExtensionConfiguration() *AzureChatExtensionConfiguration

GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type ElasticsearchChatExtensionConfiguration.

func (ElasticsearchChatExtensionConfiguration) MarshalJSON added in v0.4.0

func (e ElasticsearchChatExtensionConfiguration) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type ElasticsearchChatExtensionConfiguration.

func (*ElasticsearchChatExtensionConfiguration) UnmarshalJSON added in v0.4.0

func (e *ElasticsearchChatExtensionConfiguration) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type ElasticsearchChatExtensionConfiguration.

type ElasticsearchChatExtensionParameters added in v0.4.0

type ElasticsearchChatExtensionParameters struct {
	// REQUIRED; The endpoint of Elasticsearch®.
	Endpoint *string

	// REQUIRED; The index name of Elasticsearch®.
	IndexName *string

	// If set to true, the system will allow partial search results to be used, and the request will fail only if all of the
	// queries fail. If not specified, or set to false, the request will fail if any
	// search query fails.
	AllowPartialResult *bool

	// The authentication method to use when accessing the defined data source. Each data source type supports a specific set
	// of available authentication methods; please see the documentation of the data
	// source for supported mechanisms. If not otherwise provided, On Your Data will attempt to use System Managed Identity (default
	// credential) authentication.
	Authentication OnYourDataAuthenticationOptionsClassification

	// The embedding dependency for vector search.
	EmbeddingDependency OnYourDataVectorizationSourceClassification

	// The index field mapping options of Elasticsearch®.
	FieldsMapping *ElasticsearchIndexFieldMappingOptions

	// Whether queries should be restricted to use of indexed data.
	InScope *bool

	// The included properties of the output context. If not specified, the default value is citations and intent.
	IncludeContexts []OnYourDataContextProperty

	// The maximum number of rewritten queries that should be sent to the search provider for one user message. If not specified,
	// the system will decide the number of queries to send.

	// The query type of Elasticsearch®.
	QueryType *ElasticsearchQueryType

	// The configured strictness of the search relevance filtering. Higher strictness increases precision but lowers the recall
	// of the answer.

	// The configured top number of documents to feature for the configured query.
	TopNDocuments *int32
}

ElasticsearchChatExtensionParameters - Parameters to use when configuring Elasticsearch® as an Azure OpenAI chat extension. The supported authentication types are KeyAndKeyId and EncodedAPIKey.

func (ElasticsearchChatExtensionParameters) MarshalJSON added in v0.4.0

func (e ElasticsearchChatExtensionParameters) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type ElasticsearchChatExtensionParameters.

func (*ElasticsearchChatExtensionParameters) UnmarshalJSON added in v0.4.0

func (e *ElasticsearchChatExtensionParameters) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type ElasticsearchChatExtensionParameters.

type ElasticsearchIndexFieldMappingOptions added in v0.4.0

type ElasticsearchIndexFieldMappingOptions struct {
	// The names of index fields that should be treated as content.
	ContentFields []string

	// The separator pattern that content fields should use.
	ContentFieldsSeparator *string

	// The name of the index field to use as a filepath.
	FilepathField *string

	// The name of the index field to use as a title.
	TitleField *string

	// The name of the index field to use as a URL.
	URLField *string

	// The names of fields that represent vector data.
	VectorFields []string
}

ElasticsearchIndexFieldMappingOptions - Optional settings to control how fields are processed when using a configured Elasticsearch® resource.

func (ElasticsearchIndexFieldMappingOptions) MarshalJSON added in v0.4.0

func (e ElasticsearchIndexFieldMappingOptions) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type ElasticsearchIndexFieldMappingOptions.

func (*ElasticsearchIndexFieldMappingOptions) UnmarshalJSON added in v0.4.0

func (e *ElasticsearchIndexFieldMappingOptions) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type ElasticsearchIndexFieldMappingOptions.

type ElasticsearchQueryType added in v0.4.0

type ElasticsearchQueryType string

ElasticsearchQueryType - The type of Elasticsearch® retrieval query that should be executed when using it as an Azure OpenAI chat extension.

const (
	// ElasticsearchQueryTypeSimple - Represents the default, simple query parser.
	ElasticsearchQueryTypeSimple ElasticsearchQueryType = "simple"
	// ElasticsearchQueryTypeVector - Represents vector search over computed data.
	ElasticsearchQueryTypeVector ElasticsearchQueryType = "vector"
)

func PossibleElasticsearchQueryTypeValues added in v0.4.0

func PossibleElasticsearchQueryTypeValues() []ElasticsearchQueryType

PossibleElasticsearchQueryTypeValues returns the possible values for the ElasticsearchQueryType const type.

type Error added in v0.3.0

type Error struct {
	// REQUIRED; One of a server-defined set of error codes.
	Code *string

	// REQUIRED; A human-readable representation of the error.
	Message *string
}

Error - The error object.

func (Error) MarshalJSON added in v0.3.0

func (e Error) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type Error.

func (*Error) UnmarshalJSON added in v0.3.0

func (e *Error) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type Error.

type MongoDBChatExtensionConfiguration added in v0.7.0

type MongoDBChatExtensionConfiguration struct {
	// REQUIRED; The parameters for the MongoDB chat extension.
	Parameters *MongoDBChatExtensionParameters

	// REQUIRED; The label for the type of an Azure chat extension. This typically corresponds to a matching Azure resource. Azure
	// chat extensions are only compatible with Azure OpenAI.
	Type *AzureChatExtensionType
}

MongoDBChatExtensionConfiguration - A specific representation of configurable options for a MongoDB chat extension configuration.

func (*MongoDBChatExtensionConfiguration) GetAzureChatExtensionConfiguration added in v0.7.0

func (m *MongoDBChatExtensionConfiguration) GetAzureChatExtensionConfiguration() *AzureChatExtensionConfiguration

GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type MongoDBChatExtensionConfiguration.

func (MongoDBChatExtensionConfiguration) MarshalJSON added in v0.7.0

func (m MongoDBChatExtensionConfiguration) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type MongoDBChatExtensionConfiguration.

func (*MongoDBChatExtensionConfiguration) UnmarshalJSON added in v0.7.0

func (m *MongoDBChatExtensionConfiguration) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type MongoDBChatExtensionConfiguration.

type MongoDBChatExtensionParameters added in v0.7.0

type MongoDBChatExtensionParameters struct {
	// REQUIRED; The app name for MongoDB.
	AppName *string

	// REQUIRED; The collection name for MongoDB.
	CollectionName *string

	// REQUIRED; The database name for MongoDB.
	DatabaseName *string

	// REQUIRED; The vectorization source to use with the MongoDB chat extension.
	EmbeddingDependency *MongoDBChatExtensionParametersEmbeddingDependency

	// REQUIRED; The endpoint name for MongoDB.
	Endpoint *string

	// REQUIRED; Field mappings to apply to data used by the MongoDB data source. Note that content and vector field mappings
	// are required for MongoDB.
	FieldsMapping *MongoDBChatExtensionParametersFieldsMapping

	// REQUIRED; The name of the MongoDB index.
	IndexName *string

	// If set to true, the system will allow partial search results to be used, and the request will fail only if all of the
	// queries fail. If not specified, or set to false, the request will fail if any
	// search query fails.
	AllowPartialResult *bool

	// The authentication method to use when accessing the defined data source. Each data source type supports a specific set
	// of available authentication methods; please see the documentation of the data
	// source for supported mechanisms. If not otherwise provided, On Your Data will attempt to use System Managed Identity (default
	// credential) authentication.
	Authentication OnYourDataAuthenticationOptionsClassification

	// Whether queries should be restricted to use of indexed data.
	InScope *bool

	// The included properties of the output context. If not specified, the default value is citations and intent.
	IncludeContexts []OnYourDataContextProperty

	// The maximum number of rewritten queries that should be sent to the search provider for one user message. If not specified,
	// the system will decide the number of queries to send.

	// The configured strictness of the search relevance filtering. Higher strictness increases precision but lowers the recall
	// of the answer.

	// The configured top number of documents to feature for the configured query.
	TopNDocuments *int32
}

MongoDBChatExtensionParameters - Parameters for the MongoDB chat extension. The supported authentication types are AccessToken, SystemAssignedManagedIdentity and UserAssignedManagedIdentity.

func (MongoDBChatExtensionParameters) MarshalJSON added in v0.7.0

func (m MongoDBChatExtensionParameters) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type MongoDBChatExtensionParameters.

func (*MongoDBChatExtensionParameters) UnmarshalJSON added in v0.7.0

func (m *MongoDBChatExtensionParameters) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type MongoDBChatExtensionParameters.

type MongoDBChatExtensionParametersEmbeddingDependency added in v0.7.0

type MongoDBChatExtensionParametersEmbeddingDependency struct {
	// contains filtered or unexported fields
}

MongoDBChatExtensionParametersEmbeddingDependency contains the embedding dependency for the MongoDBChatExtensionParameters. NOTE: This should be created using azopenai.NewMongoDBChatExtensionParametersEmbeddingDependency

func NewMongoDBChatExtensionParametersEmbeddingDependency added in v0.7.0

NewMongoDBChatExtensionParametersEmbeddingDependency creates an azopenai.MongoDBChatExtensionParametersEmbeddingDependency.

func (MongoDBChatExtensionParametersEmbeddingDependency) MarshalJSON added in v0.7.0

MarshalJSON implements the json.Marshaller interface for type MongoDBChatExtensionParametersEmbeddingDependency.

type MongoDBChatExtensionParametersFieldsMapping added in v0.7.0

type MongoDBChatExtensionParametersFieldsMapping struct {
	// REQUIRED
	ContentFields []string

	// REQUIRED
	VectorFields           []string
	ContentFieldsSeparator *string
	FilepathField          *string
	TitleField             *string
	URLField               *string
}

MongoDBChatExtensionParametersFieldsMapping - Field mappings to apply to data used by the MongoDB data source. Note that content and vector field mappings are required for MongoDB.

func (MongoDBChatExtensionParametersFieldsMapping) MarshalJSON added in v0.7.0

MarshalJSON implements the json.Marshaller interface for type MongoDBChatExtensionParametersFieldsMapping.

func (*MongoDBChatExtensionParametersFieldsMapping) UnmarshalJSON added in v0.7.0

func (m *MongoDBChatExtensionParametersFieldsMapping) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type MongoDBChatExtensionParametersFieldsMapping.
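Since content and vector field mappings are required for MongoDB while the remaining mappings are optional, it can be worth validating the mapping before sending a request. A self-contained sketch with a local stand-in for the fields-mapping type:

```go
package main

import (
	"errors"
	"fmt"
)

// fieldsMapping is a local stand-in for the MongoDB fields mapping;
// content and vector field mappings are required, the rest are optional.
type fieldsMapping struct {
	ContentFields []string
	VectorFields  []string
	TitleField    *string
}

// validate reports an error when a required mapping is missing.
func (f fieldsMapping) validate() error {
	if len(f.ContentFields) == 0 || len(f.VectorFields) == 0 {
		return errors.New("content and vector field mappings are required for MongoDB")
	}
	return nil
}

func main() {
	fmt.Println(fieldsMapping{}.validate() != nil)
	fmt.Println(fieldsMapping{
		ContentFields: []string{"text"},
		VectorFields:  []string{"embedding"},
	}.validate())
}
```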

type OnYourDataAPIKeyAuthenticationOptions added in v0.4.0

type OnYourDataAPIKeyAuthenticationOptions struct {
	// REQUIRED; The API key to use for authentication.
	Key *string

	// REQUIRED; The authentication type.
	Type *OnYourDataAuthenticationType
}

OnYourDataAPIKeyAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using an API key.

func (*OnYourDataAPIKeyAuthenticationOptions) GetOnYourDataAuthenticationOptions added in v0.4.0

func (o *OnYourDataAPIKeyAuthenticationOptions) GetOnYourDataAuthenticationOptions() *OnYourDataAuthenticationOptions

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataAPIKeyAuthenticationOptions.

func (OnYourDataAPIKeyAuthenticationOptions) MarshalJSON added in v0.4.0

func (o OnYourDataAPIKeyAuthenticationOptions) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type OnYourDataAPIKeyAuthenticationOptions.

func (*OnYourDataAPIKeyAuthenticationOptions) UnmarshalJSON added in v0.4.0

func (o *OnYourDataAPIKeyAuthenticationOptions) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataAPIKeyAuthenticationOptions.

type OnYourDataAccessTokenAuthenticationOptions added in v0.5.0

type OnYourDataAccessTokenAuthenticationOptions struct {
	// REQUIRED; The access token to use for authentication.
	AccessToken *string

	// REQUIRED; The authentication type.
	Type *OnYourDataAuthenticationType
}

OnYourDataAccessTokenAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using an access token.

func (*OnYourDataAccessTokenAuthenticationOptions) GetOnYourDataAuthenticationOptions added in v0.5.0

func (o *OnYourDataAccessTokenAuthenticationOptions) GetOnYourDataAuthenticationOptions() *OnYourDataAuthenticationOptions

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataAccessTokenAuthenticationOptions.

func (OnYourDataAccessTokenAuthenticationOptions) MarshalJSON added in v0.5.0

MarshalJSON implements the json.Marshaller interface for type OnYourDataAccessTokenAuthenticationOptions.

func (*OnYourDataAccessTokenAuthenticationOptions) UnmarshalJSON added in v0.5.0

func (o *OnYourDataAccessTokenAuthenticationOptions) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataAccessTokenAuthenticationOptions.

type OnYourDataAuthenticationOptions added in v0.4.0

type OnYourDataAuthenticationOptions struct {
	// REQUIRED; The authentication type.
	Type *OnYourDataAuthenticationType
}

OnYourDataAuthenticationOptions - The authentication options for Azure OpenAI On Your Data.

func (*OnYourDataAuthenticationOptions) GetOnYourDataAuthenticationOptions added in v0.4.0

func (o *OnYourDataAuthenticationOptions) GetOnYourDataAuthenticationOptions() *OnYourDataAuthenticationOptions

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataAuthenticationOptions.

func (OnYourDataAuthenticationOptions) MarshalJSON added in v0.4.0

func (o OnYourDataAuthenticationOptions) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type OnYourDataAuthenticationOptions.

func (*OnYourDataAuthenticationOptions) UnmarshalJSON added in v0.4.0

func (o *OnYourDataAuthenticationOptions) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataAuthenticationOptions.

type OnYourDataAuthenticationOptionsClassification added in v0.4.0

type OnYourDataAuthenticationOptionsClassification interface {
	// GetOnYourDataAuthenticationOptions returns the OnYourDataAuthenticationOptions content of the underlying type.
	GetOnYourDataAuthenticationOptions() *OnYourDataAuthenticationOptions
}

OnYourDataAuthenticationOptionsClassification provides polymorphic access to related types. Call the interface's GetOnYourDataAuthenticationOptions() method to access the common type. Use a type switch to determine the concrete type. The possible types are: *OnYourDataAPIKeyAuthenticationOptions, *OnYourDataAccessTokenAuthenticationOptions, *OnYourDataAuthenticationOptions, *OnYourDataConnectionStringAuthenticationOptions, *OnYourDataEncodedAPIKeyAuthenticationOptions, *OnYourDataKeyAndKeyIDAuthenticationOptions, *OnYourDataSystemAssignedManagedIdentityAuthenticationOptions, *OnYourDataUserAssignedManagedIdentityAuthenticationOptions, *OnYourDataUsernameAndPasswordAuthenticationOptions

type OnYourDataAuthenticationType added in v0.4.0

type OnYourDataAuthenticationType string

OnYourDataAuthenticationType - The authentication types supported with Azure OpenAI On Your Data.

const (
	// OnYourDataAuthenticationTypeAPIKey - Authentication via API key.
	OnYourDataAuthenticationTypeAPIKey OnYourDataAuthenticationType = "api_key"
	// OnYourDataAuthenticationTypeAccessToken - Authentication via access token.
	OnYourDataAuthenticationTypeAccessToken OnYourDataAuthenticationType = "access_token"
	// OnYourDataAuthenticationTypeConnectionString - Authentication via connection string.
	OnYourDataAuthenticationTypeConnectionString OnYourDataAuthenticationType = "connection_string"
	// OnYourDataAuthenticationTypeEncodedAPIKey - Authentication via encoded API key.
	OnYourDataAuthenticationTypeEncodedAPIKey OnYourDataAuthenticationType = "encoded_api_key"
	// OnYourDataAuthenticationTypeKeyAndKeyID - Authentication via key and key ID pair.
	OnYourDataAuthenticationTypeKeyAndKeyID OnYourDataAuthenticationType = "key_and_key_id"
	// OnYourDataAuthenticationTypeSystemAssignedManagedIdentity - Authentication via system-assigned managed identity.
	OnYourDataAuthenticationTypeSystemAssignedManagedIdentity OnYourDataAuthenticationType = "system_assigned_managed_identity"
	// OnYourDataAuthenticationTypeUserAssignedManagedIdentity - Authentication via user-assigned managed identity.
	OnYourDataAuthenticationTypeUserAssignedManagedIdentity OnYourDataAuthenticationType = "user_assigned_managed_identity"
	// OnYourDataAuthenticationTypeUsernameAndPassword - Authentication via username and password.
	OnYourDataAuthenticationTypeUsernameAndPassword OnYourDataAuthenticationType = "username_and_password"
)

func PossibleOnYourDataAuthenticationTypeValues added in v0.4.0

func PossibleOnYourDataAuthenticationTypeValues() []OnYourDataAuthenticationType

PossibleOnYourDataAuthenticationTypeValues returns the possible values for the OnYourDataAuthenticationType const type.

type OnYourDataConnectionStringAuthenticationOptions added in v0.4.0

type OnYourDataConnectionStringAuthenticationOptions struct {
	// REQUIRED; The connection string to use for authentication.
	ConnectionString *string

	// REQUIRED; The authentication type.
	Type *OnYourDataAuthenticationType
}

OnYourDataConnectionStringAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using a connection string.

func (*OnYourDataConnectionStringAuthenticationOptions) GetOnYourDataAuthenticationOptions added in v0.4.0

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataConnectionStringAuthenticationOptions.

func (OnYourDataConnectionStringAuthenticationOptions) MarshalJSON added in v0.4.0

MarshalJSON implements the json.Marshaller interface for type OnYourDataConnectionStringAuthenticationOptions.

func (*OnYourDataConnectionStringAuthenticationOptions) UnmarshalJSON added in v0.4.0

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataConnectionStringAuthenticationOptions.

type OnYourDataContextProperty added in v0.6.0

type OnYourDataContextProperty string

OnYourDataContextProperty - The context property.

const (
	// OnYourDataContextPropertyAllRetrievedDocuments - The `all_retrieved_documents` property.
	OnYourDataContextPropertyAllRetrievedDocuments OnYourDataContextProperty = "all_retrieved_documents"
	// OnYourDataContextPropertyCitations - The `citations` property.
	OnYourDataContextPropertyCitations OnYourDataContextProperty = "citations"
	// OnYourDataContextPropertyIntent - The `intent` property.
	OnYourDataContextPropertyIntent OnYourDataContextProperty = "intent"
)

func PossibleOnYourDataContextPropertyValues added in v0.6.0

func PossibleOnYourDataContextPropertyValues() []OnYourDataContextProperty

PossibleOnYourDataContextPropertyValues returns the possible values for the OnYourDataContextProperty const type.

type OnYourDataDeploymentNameVectorizationSource added in v0.4.0

type OnYourDataDeploymentNameVectorizationSource struct {
	// REQUIRED; The embedding model deployment name within the same Azure OpenAI resource. This enables you to use vector search
	// without an Azure OpenAI API key and without Azure OpenAI public network access.
	DeploymentName *string

	// REQUIRED; The type of vectorization source to use.
	Type *OnYourDataVectorizationSourceType

	// The number of dimensions the embeddings should have. Only supported in text-embedding-3 and later models.
	Dimensions *int32
}

OnYourDataDeploymentNameVectorizationSource - The details of a vectorization source, used by Azure OpenAI On Your Data when applying vector search, that is based on an internal embeddings model deployment name in the same Azure OpenAI resource.

func (*OnYourDataDeploymentNameVectorizationSource) GetOnYourDataVectorizationSource added in v0.4.0

func (o *OnYourDataDeploymentNameVectorizationSource) GetOnYourDataVectorizationSource() *OnYourDataVectorizationSource

GetOnYourDataVectorizationSource implements the OnYourDataVectorizationSourceClassification interface for type OnYourDataDeploymentNameVectorizationSource.

func (OnYourDataDeploymentNameVectorizationSource) MarshalJSON added in v0.4.0

MarshalJSON implements the json.Marshaller interface for type OnYourDataDeploymentNameVectorizationSource.

func (*OnYourDataDeploymentNameVectorizationSource) UnmarshalJSON added in v0.4.0

func (o *OnYourDataDeploymentNameVectorizationSource) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataDeploymentNameVectorizationSource.
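As a sketch of configuring this source (deployment name and dimension count are placeholders; `to.Ptr` is the pointer helper from `sdk/azcore/to`):

```go
package main

import (
	"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
)

func main() {
	// Vectorize via an embeddings deployment in the same Azure OpenAI
	// resource; no API key or public network access is required.
	embedding := &azopenai.OnYourDataDeploymentNameVectorizationSource{
		DeploymentName: to.Ptr("<embeddings-deployment>"),
		Type:           to.Ptr(azopenai.OnYourDataVectorizationSourceTypeDeploymentName),
		Dimensions:     to.Ptr[int32](1536), // optional; text-embedding-3 and later only
	}

	// Any field typed OnYourDataVectorizationSourceClassification accepts it.
	var dep azopenai.OnYourDataVectorizationSourceClassification = embedding
	_ = dep
}
```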

type OnYourDataEncodedAPIKeyAuthenticationOptions added in v0.5.0

type OnYourDataEncodedAPIKeyAuthenticationOptions struct {
	// REQUIRED; The encoded API key to use for authentication.
	EncodedAPIKey *string

	// REQUIRED; The authentication type.
	Type *OnYourDataAuthenticationType
}

OnYourDataEncodedAPIKeyAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using an Elasticsearch encoded API key.

func (*OnYourDataEncodedAPIKeyAuthenticationOptions) GetOnYourDataAuthenticationOptions added in v0.5.0

func (o *OnYourDataEncodedAPIKeyAuthenticationOptions) GetOnYourDataAuthenticationOptions() *OnYourDataAuthenticationOptions

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataEncodedAPIKeyAuthenticationOptions.

func (OnYourDataEncodedAPIKeyAuthenticationOptions) MarshalJSON added in v0.5.0

MarshalJSON implements the json.Marshaller interface for type OnYourDataEncodedAPIKeyAuthenticationOptions.

func (*OnYourDataEncodedAPIKeyAuthenticationOptions) UnmarshalJSON added in v0.5.0

func (o *OnYourDataEncodedAPIKeyAuthenticationOptions) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataEncodedAPIKeyAuthenticationOptions.

type OnYourDataEndpointVectorizationSource added in v0.4.0

type OnYourDataEndpointVectorizationSource struct {
	// REQUIRED; Specifies the authentication options to use when retrieving embeddings from the specified endpoint.
	Authentication OnYourDataVectorSearchAuthenticationOptionsClassification

	// REQUIRED; Specifies the resource endpoint URL from which embeddings should be retrieved. It should be in the format of
	// https://YOURRESOURCENAME.openai.azure.com/openai/deployments/YOURDEPLOYMENTNAME/embeddings.
	// The api-version query parameter is not allowed.
	Endpoint *string

	// REQUIRED; The type of vectorization source to use.
	Type *OnYourDataVectorizationSourceType
}

OnYourDataEndpointVectorizationSource - The details of a vectorization source, used by Azure OpenAI On Your Data when applying vector search, that is based on a public Azure OpenAI endpoint call for embeddings.

func (*OnYourDataEndpointVectorizationSource) GetOnYourDataVectorizationSource added in v0.4.0

func (o *OnYourDataEndpointVectorizationSource) GetOnYourDataVectorizationSource() *OnYourDataVectorizationSource

GetOnYourDataVectorizationSource implements the OnYourDataVectorizationSourceClassification interface for type OnYourDataEndpointVectorizationSource.

func (OnYourDataEndpointVectorizationSource) MarshalJSON added in v0.4.0

func (o OnYourDataEndpointVectorizationSource) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type OnYourDataEndpointVectorizationSource.

func (*OnYourDataEndpointVectorizationSource) UnmarshalJSON added in v0.4.0

func (o *OnYourDataEndpointVectorizationSource) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataEndpointVectorizationSource.
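A sketch combining this source with vector-search access-token authentication (the endpoint URL and token are placeholders; `to.Ptr` comes from `sdk/azcore/to`):

```go
package main

import (
	"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
)

func main() {
	// Retrieve embeddings from a public Azure OpenAI endpoint, authenticating
	// the embeddings call with an access token.
	src := &azopenai.OnYourDataEndpointVectorizationSource{
		Endpoint: to.Ptr("https://<resource>.openai.azure.com/openai/deployments/<deployment>/embeddings"),
		Type:     to.Ptr(azopenai.OnYourDataVectorizationSourceTypeEndpoint),
		Authentication: &azopenai.OnYourDataVectorSearchAccessTokenAuthenticationOptions{
			AccessToken: to.Ptr("<access-token>"),
			Type:        to.Ptr(azopenai.OnYourDataVectorSearchAuthenticationTypeAccessToken),
		},
	}
	_ = src
}
```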

type OnYourDataIntegratedVectorizationSource added in v0.7.0

type OnYourDataIntegratedVectorizationSource struct {
	// REQUIRED; The type of vectorization source to use.
	Type *OnYourDataVectorizationSourceType
}

OnYourDataIntegratedVectorizationSource - Represents the integrated vectorizer defined within the search resource.

func (*OnYourDataIntegratedVectorizationSource) GetOnYourDataVectorizationSource added in v0.7.0

func (o *OnYourDataIntegratedVectorizationSource) GetOnYourDataVectorizationSource() *OnYourDataVectorizationSource

GetOnYourDataVectorizationSource implements the OnYourDataVectorizationSourceClassification interface for type OnYourDataIntegratedVectorizationSource.

func (OnYourDataIntegratedVectorizationSource) MarshalJSON added in v0.7.0

func (o OnYourDataIntegratedVectorizationSource) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type OnYourDataIntegratedVectorizationSource.

func (*OnYourDataIntegratedVectorizationSource) UnmarshalJSON added in v0.7.0

func (o *OnYourDataIntegratedVectorizationSource) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataIntegratedVectorizationSource.

type OnYourDataKeyAndKeyIDAuthenticationOptions added in v0.4.0

type OnYourDataKeyAndKeyIDAuthenticationOptions struct {
	// REQUIRED; The key to use for authentication.
	Key *string

	// REQUIRED; The key ID to use for authentication.
	KeyID *string

	// REQUIRED; The authentication type.
	Type *OnYourDataAuthenticationType
}

OnYourDataKeyAndKeyIDAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using an Elasticsearch key and key ID pair.

func (*OnYourDataKeyAndKeyIDAuthenticationOptions) GetOnYourDataAuthenticationOptions added in v0.4.0

func (o *OnYourDataKeyAndKeyIDAuthenticationOptions) GetOnYourDataAuthenticationOptions() *OnYourDataAuthenticationOptions

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataKeyAndKeyIDAuthenticationOptions.

func (OnYourDataKeyAndKeyIDAuthenticationOptions) MarshalJSON added in v0.4.0

MarshalJSON implements the json.Marshaller interface for type OnYourDataKeyAndKeyIDAuthenticationOptions.

func (*OnYourDataKeyAndKeyIDAuthenticationOptions) UnmarshalJSON added in v0.4.0

func (o *OnYourDataKeyAndKeyIDAuthenticationOptions) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataKeyAndKeyIDAuthenticationOptions.

type OnYourDataModelIDVectorizationSource added in v0.4.0

type OnYourDataModelIDVectorizationSource struct {
	// REQUIRED; The embedding model ID built inside the search service. Currently only supported by Elasticsearch®.
	ModelID *string

	// REQUIRED; The type of vectorization source to use.
	Type *OnYourDataVectorizationSourceType
}

OnYourDataModelIDVectorizationSource - The details of a vectorization source, used by Azure OpenAI On Your Data when applying vector search, that is based on a search service model ID. Currently only supported by Elasticsearch®.

func (*OnYourDataModelIDVectorizationSource) GetOnYourDataVectorizationSource added in v0.4.0

func (o *OnYourDataModelIDVectorizationSource) GetOnYourDataVectorizationSource() *OnYourDataVectorizationSource

GetOnYourDataVectorizationSource implements the OnYourDataVectorizationSourceClassification interface for type OnYourDataModelIDVectorizationSource.

func (OnYourDataModelIDVectorizationSource) MarshalJSON added in v0.4.0

func (o OnYourDataModelIDVectorizationSource) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type OnYourDataModelIDVectorizationSource.

func (*OnYourDataModelIDVectorizationSource) UnmarshalJSON added in v0.4.0

func (o *OnYourDataModelIDVectorizationSource) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataModelIDVectorizationSource.

type OnYourDataSystemAssignedManagedIdentityAuthenticationOptions added in v0.4.0

type OnYourDataSystemAssignedManagedIdentityAuthenticationOptions struct {
	// REQUIRED; The authentication type.
	Type *OnYourDataAuthenticationType
}

OnYourDataSystemAssignedManagedIdentityAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using a system-assigned managed identity.

func (*OnYourDataSystemAssignedManagedIdentityAuthenticationOptions) GetOnYourDataAuthenticationOptions added in v0.4.0

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataSystemAssignedManagedIdentityAuthenticationOptions.

func (OnYourDataSystemAssignedManagedIdentityAuthenticationOptions) MarshalJSON added in v0.4.0

MarshalJSON implements the json.Marshaller interface for type OnYourDataSystemAssignedManagedIdentityAuthenticationOptions.

func (*OnYourDataSystemAssignedManagedIdentityAuthenticationOptions) UnmarshalJSON added in v0.4.0

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataSystemAssignedManagedIdentityAuthenticationOptions.

type OnYourDataUserAssignedManagedIdentityAuthenticationOptions added in v0.4.0

type OnYourDataUserAssignedManagedIdentityAuthenticationOptions struct {
	// REQUIRED; The resource ID of the user-assigned managed identity to use for authentication.
	ManagedIdentityResourceID *string

	// REQUIRED; The authentication type.
	Type *OnYourDataAuthenticationType
}

OnYourDataUserAssignedManagedIdentityAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using a user-assigned managed identity.

func (*OnYourDataUserAssignedManagedIdentityAuthenticationOptions) GetOnYourDataAuthenticationOptions added in v0.4.0

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataUserAssignedManagedIdentityAuthenticationOptions.

func (OnYourDataUserAssignedManagedIdentityAuthenticationOptions) MarshalJSON added in v0.4.0

MarshalJSON implements the json.Marshaller interface for type OnYourDataUserAssignedManagedIdentityAuthenticationOptions.

func (*OnYourDataUserAssignedManagedIdentityAuthenticationOptions) UnmarshalJSON added in v0.4.0

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataUserAssignedManagedIdentityAuthenticationOptions.

type OnYourDataUsernameAndPasswordAuthenticationOptions added in v0.7.0

type OnYourDataUsernameAndPasswordAuthenticationOptions struct {
	// REQUIRED; The password.
	Password *string

	// REQUIRED; The authentication type.
	Type *OnYourDataAuthenticationType

	// REQUIRED; The username.
	Username *string
}

OnYourDataUsernameAndPasswordAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using a username and password.

func (*OnYourDataUsernameAndPasswordAuthenticationOptions) GetOnYourDataAuthenticationOptions added in v0.7.0

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataUsernameAndPasswordAuthenticationOptions.

func (OnYourDataUsernameAndPasswordAuthenticationOptions) MarshalJSON added in v0.7.0

MarshalJSON implements the json.Marshaller interface for type OnYourDataUsernameAndPasswordAuthenticationOptions.

func (*OnYourDataUsernameAndPasswordAuthenticationOptions) UnmarshalJSON added in v0.7.0

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataUsernameAndPasswordAuthenticationOptions.

type OnYourDataVectorSearchAPIKeyAuthenticationOptions added in v0.6.0

type OnYourDataVectorSearchAPIKeyAuthenticationOptions struct {
	// REQUIRED; The API key to use for authentication.
	Key *string

	// REQUIRED; The type of authentication to use.
	Type *OnYourDataVectorSearchAuthenticationType
}

OnYourDataVectorSearchAPIKeyAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using an API key.

func (*OnYourDataVectorSearchAPIKeyAuthenticationOptions) GetOnYourDataVectorSearchAuthenticationOptions added in v0.6.0

func (o *OnYourDataVectorSearchAPIKeyAuthenticationOptions) GetOnYourDataVectorSearchAuthenticationOptions() *OnYourDataVectorSearchAuthenticationOptions

GetOnYourDataVectorSearchAuthenticationOptions implements the OnYourDataVectorSearchAuthenticationOptionsClassification interface for type OnYourDataVectorSearchAPIKeyAuthenticationOptions.

func (OnYourDataVectorSearchAPIKeyAuthenticationOptions) MarshalJSON added in v0.6.0

MarshalJSON implements the json.Marshaller interface for type OnYourDataVectorSearchAPIKeyAuthenticationOptions.

func (*OnYourDataVectorSearchAPIKeyAuthenticationOptions) UnmarshalJSON added in v0.6.0

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataVectorSearchAPIKeyAuthenticationOptions.

type OnYourDataVectorSearchAccessTokenAuthenticationOptions added in v0.6.0

type OnYourDataVectorSearchAccessTokenAuthenticationOptions struct {
	// REQUIRED; The access token to use for authentication.
	AccessToken *string

	// REQUIRED; The type of authentication to use.
	Type *OnYourDataVectorSearchAuthenticationType
}

OnYourDataVectorSearchAccessTokenAuthenticationOptions - The authentication options for Azure OpenAI On Your Data vector search when using an access token.

func (*OnYourDataVectorSearchAccessTokenAuthenticationOptions) GetOnYourDataVectorSearchAuthenticationOptions added in v0.6.0

GetOnYourDataVectorSearchAuthenticationOptions implements the OnYourDataVectorSearchAuthenticationOptionsClassification interface for type OnYourDataVectorSearchAccessTokenAuthenticationOptions.

func (OnYourDataVectorSearchAccessTokenAuthenticationOptions) MarshalJSON added in v0.6.0

MarshalJSON implements the json.Marshaller interface for type OnYourDataVectorSearchAccessTokenAuthenticationOptions.

func (*OnYourDataVectorSearchAccessTokenAuthenticationOptions) UnmarshalJSON added in v0.6.0

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataVectorSearchAccessTokenAuthenticationOptions.

type OnYourDataVectorSearchAuthenticationOptions added in v0.6.0

type OnYourDataVectorSearchAuthenticationOptions struct {
	// REQUIRED; The type of authentication to use.
	Type *OnYourDataVectorSearchAuthenticationType
}

OnYourDataVectorSearchAuthenticationOptions - The authentication options for Azure OpenAI On Your Data vector search.

func (*OnYourDataVectorSearchAuthenticationOptions) GetOnYourDataVectorSearchAuthenticationOptions added in v0.6.0

func (o *OnYourDataVectorSearchAuthenticationOptions) GetOnYourDataVectorSearchAuthenticationOptions() *OnYourDataVectorSearchAuthenticationOptions

GetOnYourDataVectorSearchAuthenticationOptions implements the OnYourDataVectorSearchAuthenticationOptionsClassification interface for type OnYourDataVectorSearchAuthenticationOptions.

func (OnYourDataVectorSearchAuthenticationOptions) MarshalJSON added in v0.6.0

MarshalJSON implements the json.Marshaller interface for type OnYourDataVectorSearchAuthenticationOptions.

func (*OnYourDataVectorSearchAuthenticationOptions) UnmarshalJSON added in v0.6.0

func (o *OnYourDataVectorSearchAuthenticationOptions) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataVectorSearchAuthenticationOptions.

type OnYourDataVectorSearchAuthenticationOptionsClassification added in v0.6.0

type OnYourDataVectorSearchAuthenticationOptionsClassification interface {
	// GetOnYourDataVectorSearchAuthenticationOptions returns the OnYourDataVectorSearchAuthenticationOptions content of the underlying type.
	GetOnYourDataVectorSearchAuthenticationOptions() *OnYourDataVectorSearchAuthenticationOptions
}

OnYourDataVectorSearchAuthenticationOptionsClassification provides polymorphic access to related types. Call the interface's GetOnYourDataVectorSearchAuthenticationOptions() method to access the common type. Use a type switch to determine the concrete type. The possible types are: *OnYourDataVectorSearchAPIKeyAuthenticationOptions, *OnYourDataVectorSearchAccessTokenAuthenticationOptions, *OnYourDataVectorSearchAuthenticationOptions

type OnYourDataVectorSearchAuthenticationType added in v0.6.0

type OnYourDataVectorSearchAuthenticationType string

OnYourDataVectorSearchAuthenticationType - The authentication types supported with Azure OpenAI On Your Data vector search.

const (
	// OnYourDataVectorSearchAuthenticationTypeAPIKey - Authentication via API key.
	OnYourDataVectorSearchAuthenticationTypeAPIKey OnYourDataVectorSearchAuthenticationType = "api_key"
	// OnYourDataVectorSearchAuthenticationTypeAccessToken - Authentication via access token.
	OnYourDataVectorSearchAuthenticationTypeAccessToken OnYourDataVectorSearchAuthenticationType = "access_token"
)

func PossibleOnYourDataVectorSearchAuthenticationTypeValues added in v0.6.0

func PossibleOnYourDataVectorSearchAuthenticationTypeValues() []OnYourDataVectorSearchAuthenticationType

PossibleOnYourDataVectorSearchAuthenticationTypeValues returns the possible values for the OnYourDataVectorSearchAuthenticationType const type.

type OnYourDataVectorizationSource added in v0.4.0

type OnYourDataVectorizationSource struct {
	// REQUIRED; The type of vectorization source to use.
	Type *OnYourDataVectorizationSourceType
}

OnYourDataVectorizationSource - An abstract representation of a vectorization source for Azure OpenAI On Your Data with vector search.

func (*OnYourDataVectorizationSource) GetOnYourDataVectorizationSource added in v0.4.0

func (o *OnYourDataVectorizationSource) GetOnYourDataVectorizationSource() *OnYourDataVectorizationSource

GetOnYourDataVectorizationSource implements the OnYourDataVectorizationSourceClassification interface for type OnYourDataVectorizationSource.

func (OnYourDataVectorizationSource) MarshalJSON added in v0.4.0

func (o OnYourDataVectorizationSource) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type OnYourDataVectorizationSource.

func (*OnYourDataVectorizationSource) UnmarshalJSON added in v0.4.0

func (o *OnYourDataVectorizationSource) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataVectorizationSource.

type OnYourDataVectorizationSourceClassification added in v0.4.0

type OnYourDataVectorizationSourceClassification interface {
	// GetOnYourDataVectorizationSource returns the OnYourDataVectorizationSource content of the underlying type.
	GetOnYourDataVectorizationSource() *OnYourDataVectorizationSource
}

OnYourDataVectorizationSourceClassification provides polymorphic access to related types. Call the interface's GetOnYourDataVectorizationSource() method to access the common type. Use a type switch to determine the concrete type. The possible types are: *OnYourDataDeploymentNameVectorizationSource, *OnYourDataEndpointVectorizationSource, *OnYourDataIntegratedVectorizationSource, *OnYourDataModelIDVectorizationSource, *OnYourDataVectorizationSource

type OnYourDataVectorizationSourceType added in v0.4.0

type OnYourDataVectorizationSourceType string

OnYourDataVectorizationSourceType - Represents the available sources Azure OpenAI On Your Data can use to configure vectorization of data for use with vector search.

const (
	// OnYourDataVectorizationSourceTypeDeploymentName - Represents an Ada model deployment name to use. This model deployment
	// must be in the same Azure OpenAI resource, but
	// On Your Data will use this model deployment via an internal call rather than a public one, which enables vector
	// search even in private networks.
	OnYourDataVectorizationSourceTypeDeploymentName OnYourDataVectorizationSourceType = "deployment_name"
	// OnYourDataVectorizationSourceTypeEndpoint - Represents vectorization performed by public service calls to an Azure OpenAI
	// embedding model.
	OnYourDataVectorizationSourceTypeEndpoint OnYourDataVectorizationSourceType = "endpoint"
	// OnYourDataVectorizationSourceTypeIntegrated - Represents the integrated vectorizer defined within the search resource.
	OnYourDataVectorizationSourceTypeIntegrated OnYourDataVectorizationSourceType = "integrated"
	// OnYourDataVectorizationSourceTypeModelID - Represents a specific embedding model ID as defined in the search service.
	// Currently only supported by Elasticsearch®.
	OnYourDataVectorizationSourceTypeModelID OnYourDataVectorizationSourceType = "model_id"
)

func PossibleOnYourDataVectorizationSourceTypeValues added in v0.4.0

func PossibleOnYourDataVectorizationSourceTypeValues() []OnYourDataVectorizationSourceType

PossibleOnYourDataVectorizationSourceTypeValues returns the possible values for the OnYourDataVectorizationSourceType const type.

type PineconeChatExtensionConfiguration added in v0.4.0

type PineconeChatExtensionConfiguration struct {
	// REQUIRED; The parameters to use when configuring Azure OpenAI chat extensions.
	Parameters *PineconeChatExtensionParameters

	// REQUIRED; The label for the type of an Azure chat extension. This typically corresponds to a matching Azure resource. Azure
	// chat extensions are only compatible with Azure OpenAI.
	Type *AzureChatExtensionType
}

PineconeChatExtensionConfiguration - A specific representation of configurable options for Pinecone when using it as an Azure OpenAI chat extension.

func (*PineconeChatExtensionConfiguration) GetAzureChatExtensionConfiguration added in v0.4.0

func (p *PineconeChatExtensionConfiguration) GetAzureChatExtensionConfiguration() *AzureChatExtensionConfiguration

GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type PineconeChatExtensionConfiguration.

func (PineconeChatExtensionConfiguration) MarshalJSON added in v0.4.0

func (p PineconeChatExtensionConfiguration) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type PineconeChatExtensionConfiguration.

func (*PineconeChatExtensionConfiguration) UnmarshalJSON added in v0.4.0

func (p *PineconeChatExtensionConfiguration) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type PineconeChatExtensionConfiguration.

type PineconeChatExtensionParameters added in v0.4.0

type PineconeChatExtensionParameters struct {
	// REQUIRED; The embedding dependency for vector search.
	EmbeddingDependency OnYourDataVectorizationSourceClassification

	// REQUIRED; The environment name of Pinecone.
	Environment *string

	// REQUIRED; Customized field mapping behavior to use when interacting with the search index.
	FieldsMapping *PineconeFieldMappingOptions

	// REQUIRED; The name of the Pinecone database index.
	IndexName *string

	// If set to true, the system allows partial search results to be used, and the request fails only if all of the
	// search queries fail. If not specified, or set to false, the request fails if any search query fails.
	AllowPartialResult *bool

	// The authentication method to use when accessing the defined data source. Each data source type supports a specific
	// set of authentication methods; see the documentation of the data source for supported mechanisms. If not otherwise
	// provided, On Your Data will attempt to use system-assigned managed identity (default credential) authentication.
	Authentication OnYourDataAuthenticationOptionsClassification

	// Whether queries should be restricted to use of indexed data.
	InScope *bool

	// The included properties of the output context. If not specified, the default value is citations and intent.
	IncludeContexts []OnYourDataContextProperty

	// The maximum number of rewritten queries that should be sent to the search provider for one user message. If not
	// specified, the system will decide the number of queries to send.
	MaxSearchQueries *int32

	// The configured strictness of the search relevance filtering. The higher the strictness,
	// the higher the precision but the lower the recall of the answer.
	Strictness *int32

	// The configured top number of documents to feature for the configured query.
	TopNDocuments *int32
}

PineconeChatExtensionParameters - Parameters for configuring Azure OpenAI Pinecone chat extensions. The supported authentication type is APIKey.

func (PineconeChatExtensionParameters) MarshalJSON added in v0.4.0

func (p PineconeChatExtensionParameters) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type PineconeChatExtensionParameters.

func (*PineconeChatExtensionParameters) UnmarshalJSON added in v0.4.0

func (p *PineconeChatExtensionParameters) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type PineconeChatExtensionParameters.

type PineconeFieldMappingOptions added in v0.4.0

type PineconeFieldMappingOptions struct {
	// REQUIRED; The names of index fields that should be treated as content.
	ContentFields []string

	// The separator pattern that content fields should use.
	ContentFieldsSeparator *string

	// The name of the index field to use as a filepath.
	FilepathField *string

	// The name of the index field to use as a title.
	TitleField *string

	// The name of the index field to use as a URL.
	URLField *string
}

PineconeFieldMappingOptions - Optional settings to control how fields are processed when using a configured Pinecone resource.

func (PineconeFieldMappingOptions) MarshalJSON added in v0.4.0

func (p PineconeFieldMappingOptions) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaller interface for type PineconeFieldMappingOptions.

func (*PineconeFieldMappingOptions) UnmarshalJSON added in v0.4.0

func (p *PineconeFieldMappingOptions) UnmarshalJSON(data []byte) error

UnmarshalJSON implements the json.Unmarshaller interface for type PineconeFieldMappingOptions.
