konnect 2.5.0 published on Tuesday, Apr 15, 2025 by kong

konnect.getGatewayPluginAiResponseTransformer


Using getGatewayPluginAiResponseTransformer

Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

function getGatewayPluginAiResponseTransformer(args: GetGatewayPluginAiResponseTransformerArgs, opts?: InvokeOptions): Promise<GetGatewayPluginAiResponseTransformerResult>
function getGatewayPluginAiResponseTransformerOutput(args: GetGatewayPluginAiResponseTransformerOutputArgs, opts?: InvokeOptions): Output<GetGatewayPluginAiResponseTransformerResult>
def get_gateway_plugin_ai_response_transformer(control_plane_id: Optional[str] = None,
                                               opts: Optional[InvokeOptions] = None) -> GetGatewayPluginAiResponseTransformerResult
def get_gateway_plugin_ai_response_transformer_output(control_plane_id: Optional[pulumi.Input[str]] = None,
                                                      opts: Optional[InvokeOptions] = None) -> Output[GetGatewayPluginAiResponseTransformerResult]
func LookupGatewayPluginAiResponseTransformer(ctx *Context, args *LookupGatewayPluginAiResponseTransformerArgs, opts ...InvokeOption) (*LookupGatewayPluginAiResponseTransformerResult, error)
func LookupGatewayPluginAiResponseTransformerOutput(ctx *Context, args *LookupGatewayPluginAiResponseTransformerOutputArgs, opts ...InvokeOption) LookupGatewayPluginAiResponseTransformerResultOutput

> Note: This function is named LookupGatewayPluginAiResponseTransformer in the Go SDK.

public static class GetGatewayPluginAiResponseTransformer 
{
    public static Task<GetGatewayPluginAiResponseTransformerResult> InvokeAsync(GetGatewayPluginAiResponseTransformerArgs args, InvokeOptions? opts = null)
    public static Output<GetGatewayPluginAiResponseTransformerResult> Invoke(GetGatewayPluginAiResponseTransformerInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetGatewayPluginAiResponseTransformerResult> getGatewayPluginAiResponseTransformer(GetGatewayPluginAiResponseTransformerArgs args, InvokeOptions options)
public static Output<GetGatewayPluginAiResponseTransformerResult> getGatewayPluginAiResponseTransformer(GetGatewayPluginAiResponseTransformerArgs args, InvokeOptions options)
fn::invoke:
  function: konnect:index/getGatewayPluginAiResponseTransformer:getGatewayPluginAiResponseTransformer
  arguments:
    # arguments dictionary
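In TypeScript, an invocation might look like the following sketch. The control-plane ID is a placeholder, and the `config` property on the result is inferred from the supporting types on this page; verify both against your SDK version.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as konnect from "@pulumi/konnect";

// Direct form: plain arguments, Promise-wrapped result.
// "<control-plane-id>" is a placeholder; substitute your control plane's ID.
const plugin = konnect.getGatewayPluginAiResponseTransformer({
    controlPlaneId: "<control-plane-id>",
});

// Output form: Input-wrapped arguments (for example an ID exported by
// another resource), Output-wrapped result.
const pluginOutput = konnect.getGatewayPluginAiResponseTransformerOutput({
    controlPlaneId: pulumi.output("<control-plane-id>"),
});

// Assumed result shape, per the supporting types on this page.
export const httpTimeout = pluginOutput.config.httpTimeout;
```

Running this requires a configured Konnect provider and valid credentials; it is a configuration sketch rather than a standalone program.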

The following arguments are supported:

ControlPlaneId This property is required. string
ControlPlaneId This property is required. string
controlPlaneId This property is required. String
controlPlaneId This property is required. string
control_plane_id This property is required. str
controlPlaneId This property is required. String

getGatewayPluginAiResponseTransformer Result

The following output properties are available:

Supporting Types

GetGatewayPluginAiResponseTransformerConfig

HttpProxyHost This property is required. string
A string representing a host name, such as example.com.
HttpProxyPort This property is required. double
An integer representing a port number between 0 and 65535, inclusive.
HttpTimeout This property is required. double
Timeout in milliseconds for the AI upstream service.
HttpsProxyHost This property is required. string
A string representing a host name, such as example.com.
HttpsProxyPort This property is required. double
An integer representing a port number between 0 and 65535, inclusive.
HttpsVerify This property is required. bool
Verify the TLS certificate of the AI upstream service.
Llm This property is required. GetGatewayPluginAiResponseTransformerConfigLlm
MaxRequestBodySize This property is required. double
The maximum request body size allowed to be introspected.
ParseLlmResponseJsonInstructions This property is required. bool
Set true to read specific response format from the LLM, and accordingly set the status code / body / headers that proxy back to the client. You need to engineer your LLM prompt to return the correct format, see plugin docs 'Overview' page for usage instructions.
Prompt This property is required. string
Use this prompt to tune the LLM system/assistant message for the proxy response returned from the upstream, and to indicate what response format you are expecting.
TransformationExtractPattern This property is required. string
Defines the regular expression that must match to indicate a successful AI transformation at the response phase. The first match will be set as the returning body. If the AI service's response doesn't match this pattern, a failure is returned to the client.
HttpProxyHost This property is required. string
A string representing a host name, such as example.com.
HttpProxyPort This property is required. float64
An integer representing a port number between 0 and 65535, inclusive.
HttpTimeout This property is required. float64
Timeout in milliseconds for the AI upstream service.
HttpsProxyHost This property is required. string
A string representing a host name, such as example.com.
HttpsProxyPort This property is required. float64
An integer representing a port number between 0 and 65535, inclusive.
HttpsVerify This property is required. bool
Verify the TLS certificate of the AI upstream service.
Llm This property is required. GetGatewayPluginAiResponseTransformerConfigLlm
MaxRequestBodySize This property is required. float64
The maximum request body size allowed to be introspected.
ParseLlmResponseJsonInstructions This property is required. bool
Set true to read specific response format from the LLM, and accordingly set the status code / body / headers that proxy back to the client. You need to engineer your LLM prompt to return the correct format, see plugin docs 'Overview' page for usage instructions.
Prompt This property is required. string
Use this prompt to tune the LLM system/assistant message for the proxy response returned from the upstream, and to indicate what response format you are expecting.
TransformationExtractPattern This property is required. string
Defines the regular expression that must match to indicate a successful AI transformation at the response phase. The first match will be set as the returning body. If the AI service's response doesn't match this pattern, a failure is returned to the client.
httpProxyHost This property is required. String
A string representing a host name, such as example.com.
httpProxyPort This property is required. Double
An integer representing a port number between 0 and 65535, inclusive.
httpTimeout This property is required. Double
Timeout in milliseconds for the AI upstream service.
httpsProxyHost This property is required. String
A string representing a host name, such as example.com.
httpsProxyPort This property is required. Double
An integer representing a port number between 0 and 65535, inclusive.
httpsVerify This property is required. Boolean
Verify the TLS certificate of the AI upstream service.
llm This property is required. GetGatewayPluginAiResponseTransformerConfigLlm
maxRequestBodySize This property is required. Double
The maximum request body size allowed to be introspected.
parseLlmResponseJsonInstructions This property is required. Boolean
Set true to read specific response format from the LLM, and accordingly set the status code / body / headers that proxy back to the client. You need to engineer your LLM prompt to return the correct format, see plugin docs 'Overview' page for usage instructions.
prompt This property is required. String
Use this prompt to tune the LLM system/assistant message for the proxy response returned from the upstream, and to indicate what response format you are expecting.
transformationExtractPattern This property is required. String
Defines the regular expression that must match to indicate a successful AI transformation at the response phase. The first match will be set as the returning body. If the AI service's response doesn't match this pattern, a failure is returned to the client.
httpProxyHost This property is required. string
A string representing a host name, such as example.com.
httpProxyPort This property is required. number
An integer representing a port number between 0 and 65535, inclusive.
httpTimeout This property is required. number
Timeout in milliseconds for the AI upstream service.
httpsProxyHost This property is required. string
A string representing a host name, such as example.com.
httpsProxyPort This property is required. number
An integer representing a port number between 0 and 65535, inclusive.
httpsVerify This property is required. boolean
Verify the TLS certificate of the AI upstream service.
llm This property is required. GetGatewayPluginAiResponseTransformerConfigLlm
maxRequestBodySize This property is required. number
The maximum request body size allowed to be introspected.
parseLlmResponseJsonInstructions This property is required. boolean
Set true to read specific response format from the LLM, and accordingly set the status code / body / headers that proxy back to the client. You need to engineer your LLM prompt to return the correct format, see plugin docs 'Overview' page for usage instructions.
prompt This property is required. string
Use this prompt to tune the LLM system/assistant message for the proxy response returned from the upstream, and to indicate what response format you are expecting.
transformationExtractPattern This property is required. string
Defines the regular expression that must match to indicate a successful AI transformation at the response phase. The first match will be set as the returning body. If the AI service's response doesn't match this pattern, a failure is returned to the client.
http_proxy_host This property is required. str
A string representing a host name, such as example.com.
http_proxy_port This property is required. float
An integer representing a port number between 0 and 65535, inclusive.
http_timeout This property is required. float
Timeout in milliseconds for the AI upstream service.
https_proxy_host This property is required. str
A string representing a host name, such as example.com.
https_proxy_port This property is required. float
An integer representing a port number between 0 and 65535, inclusive.
https_verify This property is required. bool
Verify the TLS certificate of the AI upstream service.
llm This property is required. GetGatewayPluginAiResponseTransformerConfigLlm
max_request_body_size This property is required. float
The maximum request body size allowed to be introspected.
parse_llm_response_json_instructions This property is required. bool
Set true to read specific response format from the LLM, and accordingly set the status code / body / headers that proxy back to the client. You need to engineer your LLM prompt to return the correct format, see plugin docs 'Overview' page for usage instructions.
prompt This property is required. str
Use this prompt to tune the LLM system/assistant message for the proxy response returned from the upstream, and to indicate what response format you are expecting.
transformation_extract_pattern This property is required. str
Defines the regular expression that must match to indicate a successful AI transformation at the response phase. The first match will be set as the returning body. If the AI service's response doesn't match this pattern, a failure is returned to the client.
httpProxyHost This property is required. String
A string representing a host name, such as example.com.
httpProxyPort This property is required. Number
An integer representing a port number between 0 and 65535, inclusive.
httpTimeout This property is required. Number
Timeout in milliseconds for the AI upstream service.
httpsProxyHost This property is required. String
A string representing a host name, such as example.com.
httpsProxyPort This property is required. Number
An integer representing a port number between 0 and 65535, inclusive.
httpsVerify This property is required. Boolean
Verify the TLS certificate of the AI upstream service.
llm This property is required. Property Map
maxRequestBodySize This property is required. Number
The maximum request body size allowed to be introspected.
parseLlmResponseJsonInstructions This property is required. Boolean
Set true to read specific response format from the LLM, and accordingly set the status code / body / headers that proxy back to the client. You need to engineer your LLM prompt to return the correct format, see plugin docs 'Overview' page for usage instructions.
prompt This property is required. String
Use this prompt to tune the LLM system/assistant message for the proxy response returned from the upstream, and to indicate what response format you are expecting.
transformationExtractPattern This property is required. String
Defines the regular expression that must match to indicate a successful AI transformation at the response phase. The first match will be set as the returning body. If the AI service's response doesn't match this pattern, a failure is returned to the client.
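The transformationExtractPattern behaviour described above (the first match becomes the returned body; no match yields a failure to the client) can be sketched outside the gateway. This is an illustration only, not Kong's implementation, and JavaScript's RegExp stands in for the gateway's own pattern dialect:

```typescript
// Illustrative only: apply an extraction pattern the way the plugin's
// docs describe. The first match becomes the response body; no match
// would be surfaced as a failure to the client.
function extractTransformedBody(
    llmResponse: string,
    pattern: string,
): { ok: true; body: string } | { ok: false } {
    const match = llmResponse.match(new RegExp(pattern));
    if (match === null) {
        return { ok: false };
    }
    // Prefer the first capture group if the pattern defines one,
    // otherwise fall back to the whole match.
    return { ok: true, body: match[1] ?? match[0] };
}

// Here the LLM was prompted to wrap its answer in <answer> tags.
const response = 'Sure! <answer>{"sentiment":"positive"}</answer> Anything else?';
console.log(extractTransformedBody(response, "<answer>(.*)</answer>"));
```

This also shows why the prompt and the pattern must be engineered together: the pattern only succeeds if the model reliably emits the delimiters the pattern expects.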

GetGatewayPluginAiResponseTransformerConfigLlm

Auth This property is required. GetGatewayPluginAiResponseTransformerConfigLlmAuth
Logging This property is required. GetGatewayPluginAiResponseTransformerConfigLlmLogging
Model This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModel
RouteType This property is required. string
The model's operation implementation, for this provider. Set to preserve to pass through without transformation.
Auth This property is required. GetGatewayPluginAiResponseTransformerConfigLlmAuth
Logging This property is required. GetGatewayPluginAiResponseTransformerConfigLlmLogging
Model This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModel
RouteType This property is required. string
The model's operation implementation, for this provider. Set to preserve to pass through without transformation.
auth This property is required. GetGatewayPluginAiResponseTransformerConfigLlmAuth
logging This property is required. GetGatewayPluginAiResponseTransformerConfigLlmLogging
model This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModel
routeType This property is required. String
The model's operation implementation, for this provider. Set to preserve to pass through without transformation.
auth This property is required. GetGatewayPluginAiResponseTransformerConfigLlmAuth
logging This property is required. GetGatewayPluginAiResponseTransformerConfigLlmLogging
model This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModel
routeType This property is required. string
The model's operation implementation, for this provider. Set to preserve to pass through without transformation.
auth This property is required. GetGatewayPluginAiResponseTransformerConfigLlmAuth
logging This property is required. GetGatewayPluginAiResponseTransformerConfigLlmLogging
model This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModel
route_type This property is required. str
The model's operation implementation, for this provider. Set to preserve to pass through without transformation.
auth This property is required. Property Map
logging This property is required. Property Map
model This property is required. Property Map
routeType This property is required. String
The model's operation implementation, for this provider. Set to preserve to pass through without transformation.

GetGatewayPluginAiResponseTransformerConfigLlmAuth

AllowOverride This property is required. bool
If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
AwsAccessKeyId This property is required. string
Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
AwsSecretAccessKey This property is required. string
Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
AzureClientId This property is required. string
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
AzureClientSecret This property is required. string
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
AzureTenantId This property is required. string
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
AzureUseManagedIdentity This property is required. bool
Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
GcpServiceAccountJson This property is required. string
Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable GCP_SERVICE_ACCOUNT.
GcpUseServiceAccount This property is required. bool
Use service account auth for GCP-based providers and models.
HeaderName This property is required. string
If AI model requires authentication via Authorization or API key header, specify its name here.
HeaderValue This property is required. string
Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
ParamLocation This property is required. string
Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
ParamName This property is required. string
If AI model requires authentication via query parameter, specify its name here.
ParamValue This property is required. string
Specify the full parameter value for 'param_name'.
AllowOverride This property is required. bool
If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
AwsAccessKeyId This property is required. string
Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
AwsSecretAccessKey This property is required. string
Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
AzureClientId This property is required. string
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
AzureClientSecret This property is required. string
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
AzureTenantId This property is required. string
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
AzureUseManagedIdentity This property is required. bool
Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
GcpServiceAccountJson This property is required. string
Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable GCP_SERVICE_ACCOUNT.
GcpUseServiceAccount This property is required. bool
Use service account auth for GCP-based providers and models.
HeaderName This property is required. string
If AI model requires authentication via Authorization or API key header, specify its name here.
HeaderValue This property is required. string
Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
ParamLocation This property is required. string
Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
ParamName This property is required. string
If AI model requires authentication via query parameter, specify its name here.
ParamValue This property is required. string
Specify the full parameter value for 'param_name'.
allowOverride This property is required. Boolean
If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
awsAccessKeyId This property is required. String
Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
awsSecretAccessKey This property is required. String
Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
azureClientId This property is required. String
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
azureClientSecret This property is required. String
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
azureTenantId This property is required. String
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
azureUseManagedIdentity This property is required. Boolean
Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
gcpServiceAccountJson This property is required. String
Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable GCP_SERVICE_ACCOUNT.
gcpUseServiceAccount This property is required. Boolean
Use service account auth for GCP-based providers and models.
headerName This property is required. String
If AI model requires authentication via Authorization or API key header, specify its name here.
headerValue This property is required. String
Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
paramLocation This property is required. String
Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
paramName This property is required. String
If AI model requires authentication via query parameter, specify its name here.
paramValue This property is required. String
Specify the full parameter value for 'param_name'.
allowOverride This property is required. boolean
If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
awsAccessKeyId This property is required. string
Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
awsSecretAccessKey This property is required. string
Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
azureClientId This property is required. string
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
azureClientSecret This property is required. string
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
azureTenantId This property is required. string
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
azureUseManagedIdentity This property is required. boolean
Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
gcpServiceAccountJson This property is required. string
Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable GCP_SERVICE_ACCOUNT.
gcpUseServiceAccount This property is required. boolean
Use service account auth for GCP-based providers and models.
headerName This property is required. string
If AI model requires authentication via Authorization or API key header, specify its name here.
headerValue This property is required. string
Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
paramLocation This property is required. string
Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
paramName This property is required. string
If AI model requires authentication via query parameter, specify its name here.
paramValue This property is required. string
Specify the full parameter value for 'param_name'.
allow_override This property is required. bool
If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
aws_access_key_id This property is required. str
Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
aws_secret_access_key This property is required. str
Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
azure_client_id This property is required. str
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
azure_client_secret This property is required. str
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
azure_tenant_id This property is required. str
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
azure_use_managed_identity This property is required. bool
Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
gcp_service_account_json This property is required. str
Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable GCP_SERVICE_ACCOUNT.
gcp_use_service_account This property is required. bool
Use service account auth for GCP-based providers and models.
header_name This property is required. str
If AI model requires authentication via Authorization or API key header, specify its name here.
header_value This property is required. str
Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
param_location This property is required. str
Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
param_name This property is required. str
If AI model requires authentication via query parameter, specify its name here.
param_value This property is required. str
Specify the full parameter value for 'param_name'.
allowOverride This property is required. Boolean
If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
awsAccessKeyId This property is required. String
Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
awsSecretAccessKey This property is required. String
Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
azureClientId This property is required. String
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
azureClientSecret This property is required. String
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
azureTenantId This property is required. String
If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
azureUseManagedIdentity This property is required. Boolean
Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
gcpServiceAccountJson This property is required. String
Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable GCP_SERVICE_ACCOUNT.
gcpUseServiceAccount This property is required. Boolean
Use service account auth for GCP-based providers and models.
headerName This property is required. String
If AI model requires authentication via Authorization or API key header, specify its name here.
headerValue This property is required. String
Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
paramLocation This property is required. String
Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
paramName This property is required. String
If AI model requires authentication via query parameter, specify its name here.
paramValue This property is required. String
Specify the full parameter value for 'param_name'.
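The header and parameter options above can be summarized in a small sketch: credentials land in a request header, in the query string, or in the body, depending on which fields are set. The interface and the 'query'/'body' location values here are assumptions for illustration, not the plugin's source:

```typescript
// Illustrative sketch of the auth placement options documented above.
interface LlmAuth {
    headerName?: string;
    headerValue?: string;
    paramName?: string;
    paramValue?: string;
    paramLocation?: "query" | "body"; // assumed values; check the plugin docs
}

// Mutates url, headers, and body to carry the configured credentials.
function applyAuth(
    auth: LlmAuth,
    url: URL,
    headers: Record<string, string>,
    body: Record<string, unknown>,
): void {
    if (auth.headerName && auth.headerValue) {
        // e.g. headerName "Authorization", headerValue "Bearer key"
        headers[auth.headerName] = auth.headerValue;
    }
    if (auth.paramName && auth.paramValue) {
        if (auth.paramLocation === "query") {
            url.searchParams.set(auth.paramName, auth.paramValue);
        } else {
            body[auth.paramName] = auth.paramValue;
        }
    }
}
```

With allow_override enabled, a value already present on the incoming request would take precedence over the configured one; the sketch above covers only the plugin-configured side.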

GetGatewayPluginAiResponseTransformerConfigLlmLogging

LogPayloads This property is required. bool
If enabled, will log the request and response body into the Kong log plugin(s) output.
LogStatistics This property is required. bool
If enabled and supported by the driver, will add model usage and token metrics into the Kong log plugin(s) output.
LogPayloads This property is required. bool
If enabled, will log the request and response body into the Kong log plugin(s) output.
LogStatistics This property is required. bool
If enabled and supported by the driver, will add model usage and token metrics into the Kong log plugin(s) output.
logPayloads This property is required. Boolean
If enabled, the request and response bodies are logged in the Kong log plugin(s) output.
logStatistics This property is required. Boolean
If enabled and supported by the driver, model usage and token metrics are added to the Kong log plugin(s) output.
logPayloads This property is required. boolean
If enabled, the request and response bodies are logged in the Kong log plugin(s) output.
logStatistics This property is required. boolean
If enabled and supported by the driver, model usage and token metrics are added to the Kong log plugin(s) output.
log_payloads This property is required. bool
If enabled, the request and response bodies are logged in the Kong log plugin(s) output.
log_statistics This property is required. bool
If enabled and supported by the driver, model usage and token metrics are added to the Kong log plugin(s) output.

GetGatewayPluginAiResponseTransformerConfigLlmModel

Name This property is required. string
Model name to execute.
Options This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptions
Key/value settings for the model.
Provider This property is required. string
AI provider request format. Kong translates requests to and from the specified backend-compatible formats.
name This property is required. String
Model name to execute.
options This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptions
Key/value settings for the model.
provider This property is required. String
AI provider request format. Kong translates requests to and from the specified backend-compatible formats.
name This property is required. string
Model name to execute.
options This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptions
Key/value settings for the model.
provider This property is required. string
AI provider request format. Kong translates requests to and from the specified backend-compatible formats.
name This property is required. str
Model name to execute.
options This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptions
Key/value settings for the model.
provider This property is required. str
AI provider request format. Kong translates requests to and from the specified backend-compatible formats.
name This property is required. String
Model name to execute.
options This property is required. Property Map
Key/value settings for the model.
provider This property is required. String
AI provider request format. Kong translates requests to and from the specified backend-compatible formats.

GetGatewayPluginAiResponseTransformerConfigLlmModelOptions

AnthropicVersion This property is required. string
Defines the schema/API version, if using Anthropic provider.
AzureApiVersion This property is required. string
'api-version' for Azure OpenAI instances.
AzureDeploymentId This property is required. string
Deployment ID for Azure OpenAI instances.
AzureInstance This property is required. string
Instance name for Azure OpenAI hosted models.
Bedrock This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsBedrock
Gemini This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsGemini
Huggingface This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsHuggingface
InputCost This property is required. double
Defines the cost per 1M tokens in your prompt.
Llama2Format This property is required. string
If using llama2 provider, select the upstream message format.
MaxTokens This property is required. double
Defines the max_tokens, if using chat or completion models.
MistralFormat This property is required. string
If using mistral provider, select the upstream message format.
OutputCost This property is required. double
Defines the cost per 1M tokens in the output of the AI.
Temperature This property is required. double
Defines the matching temperature, if using chat or completion models.
TopK This property is required. double
Defines the top-k most likely tokens, if supported.
TopP This property is required. double
Defines the top-p probability mass, if supported.
UpstreamPath This property is required. string
Manually specify or override the AI operation path; used, for example, with the 'preserve' route_type.
UpstreamUrl This property is required. string
Manually specify or override the full URL to the AI operation endpoints, when calling (self-)hosted models, or for running via a private endpoint.
AnthropicVersion This property is required. string
Defines the schema/API version, if using Anthropic provider.
AzureApiVersion This property is required. string
'api-version' for Azure OpenAI instances.
AzureDeploymentId This property is required. string
Deployment ID for Azure OpenAI instances.
AzureInstance This property is required. string
Instance name for Azure OpenAI hosted models.
Bedrock This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsBedrock
Gemini This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsGemini
Huggingface This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsHuggingface
InputCost This property is required. float64
Defines the cost per 1M tokens in your prompt.
Llama2Format This property is required. string
If using llama2 provider, select the upstream message format.
MaxTokens This property is required. float64
Defines the max_tokens, if using chat or completion models.
MistralFormat This property is required. string
If using mistral provider, select the upstream message format.
OutputCost This property is required. float64
Defines the cost per 1M tokens in the output of the AI.
Temperature This property is required. float64
Defines the matching temperature, if using chat or completion models.
TopK This property is required. float64
Defines the top-k most likely tokens, if supported.
TopP This property is required. float64
Defines the top-p probability mass, if supported.
UpstreamPath This property is required. string
Manually specify or override the AI operation path; used, for example, with the 'preserve' route_type.
UpstreamUrl This property is required. string
Manually specify or override the full URL to the AI operation endpoints, when calling (self-)hosted models, or for running via a private endpoint.
anthropicVersion This property is required. String
Defines the schema/API version, if using Anthropic provider.
azureApiVersion This property is required. String
'api-version' for Azure OpenAI instances.
azureDeploymentId This property is required. String
Deployment ID for Azure OpenAI instances.
azureInstance This property is required. String
Instance name for Azure OpenAI hosted models.
bedrock This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsBedrock
gemini This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsGemini
huggingface This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsHuggingface
inputCost This property is required. Double
Defines the cost per 1M tokens in your prompt.
llama2Format This property is required. String
If using llama2 provider, select the upstream message format.
maxTokens This property is required. Double
Defines the max_tokens, if using chat or completion models.
mistralFormat This property is required. String
If using mistral provider, select the upstream message format.
outputCost This property is required. Double
Defines the cost per 1M tokens in the output of the AI.
temperature This property is required. Double
Defines the matching temperature, if using chat or completion models.
topK This property is required. Double
Defines the top-k most likely tokens, if supported.
topP This property is required. Double
Defines the top-p probability mass, if supported.
upstreamPath This property is required. String
Manually specify or override the AI operation path; used, for example, with the 'preserve' route_type.
upstreamUrl This property is required. String
Manually specify or override the full URL to the AI operation endpoints, when calling (self-)hosted models, or for running via a private endpoint.
anthropicVersion This property is required. string
Defines the schema/API version, if using Anthropic provider.
azureApiVersion This property is required. string
'api-version' for Azure OpenAI instances.
azureDeploymentId This property is required. string
Deployment ID for Azure OpenAI instances.
azureInstance This property is required. string
Instance name for Azure OpenAI hosted models.
bedrock This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsBedrock
gemini This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsGemini
huggingface This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsHuggingface
inputCost This property is required. number
Defines the cost per 1M tokens in your prompt.
llama2Format This property is required. string
If using llama2 provider, select the upstream message format.
maxTokens This property is required. number
Defines the max_tokens, if using chat or completion models.
mistralFormat This property is required. string
If using mistral provider, select the upstream message format.
outputCost This property is required. number
Defines the cost per 1M tokens in the output of the AI.
temperature This property is required. number
Defines the matching temperature, if using chat or completion models.
topK This property is required. number
Defines the top-k most likely tokens, if supported.
topP This property is required. number
Defines the top-p probability mass, if supported.
upstreamPath This property is required. string
Manually specify or override the AI operation path; used, for example, with the 'preserve' route_type.
upstreamUrl This property is required. string
Manually specify or override the full URL to the AI operation endpoints, when calling (self-)hosted models, or for running via a private endpoint.
anthropic_version This property is required. str
Defines the schema/API version, if using Anthropic provider.
azure_api_version This property is required. str
'api-version' for Azure OpenAI instances.
azure_deployment_id This property is required. str
Deployment ID for Azure OpenAI instances.
azure_instance This property is required. str
Instance name for Azure OpenAI hosted models.
bedrock This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsBedrock
gemini This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsGemini
huggingface This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsHuggingface
input_cost This property is required. float
Defines the cost per 1M tokens in your prompt.
llama2_format This property is required. str
If using llama2 provider, select the upstream message format.
max_tokens This property is required. float
Defines the max_tokens, if using chat or completion models.
mistral_format This property is required. str
If using mistral provider, select the upstream message format.
output_cost This property is required. float
Defines the cost per 1M tokens in the output of the AI.
temperature This property is required. float
Defines the matching temperature, if using chat or completion models.
top_k This property is required. float
Defines the top-k most likely tokens, if supported.
top_p This property is required. float
Defines the top-p probability mass, if supported.
upstream_path This property is required. str
Manually specify or override the AI operation path; used, for example, with the 'preserve' route_type.
upstream_url This property is required. str
Manually specify or override the full URL to the AI operation endpoints, when calling (self-)hosted models, or for running via a private endpoint.
anthropicVersion This property is required. String
Defines the schema/API version, if using Anthropic provider.
azureApiVersion This property is required. String
'api-version' for Azure OpenAI instances.
azureDeploymentId This property is required. String
Deployment ID for Azure OpenAI instances.
azureInstance This property is required. String
Instance name for Azure OpenAI hosted models.
bedrock This property is required. Property Map
gemini This property is required. Property Map
huggingface This property is required. Property Map
inputCost This property is required. Number
Defines the cost per 1M tokens in your prompt.
llama2Format This property is required. String
If using llama2 provider, select the upstream message format.
maxTokens This property is required. Number
Defines the max_tokens, if using chat or completion models.
mistralFormat This property is required. String
If using mistral provider, select the upstream message format.
outputCost This property is required. Number
Defines the cost per 1M tokens in the output of the AI.
temperature This property is required. Number
Defines the matching temperature, if using chat or completion models.
topK This property is required. Number
Defines the top-k most likely tokens, if supported.
topP This property is required. Number
Defines the top-p probability mass, if supported.
upstreamPath This property is required. String
Manually specify or override the AI operation path; used, for example, with the 'preserve' route_type.
upstreamUrl This property is required. String
Manually specify or override the full URL to the AI operation endpoints, when calling (self-)hosted models, or for running via a private endpoint.
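InputCost and OutputCost are expressed per 1M tokens, so estimating the spend for a single request is a simple proportion. A small illustrative Python sketch (the token counts and prices are hypothetical, not defaults from this plugin):

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  input_cost: float, output_cost: float) -> float:
    """Estimate request cost; input_cost and output_cost are prices per 1M tokens."""
    return (prompt_tokens * input_cost + completion_tokens * output_cost) / 1_000_000

# e.g. 2,000 prompt tokens at 10.0 per 1M plus 500 completion tokens at 30.0 per 1M
cost = estimate_cost(2_000, 500, input_cost=10.0, output_cost=30.0)  # 0.02 + 0.015 = 0.035
```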

GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsBedrock

AwsRegion This property is required. string
If using AWS providers (Bedrock), you can override the AWS_REGION environment variable by setting this option.
awsRegion This property is required. String
If using AWS providers (Bedrock), you can override the AWS_REGION environment variable by setting this option.
awsRegion This property is required. string
If using AWS providers (Bedrock), you can override the AWS_REGION environment variable by setting this option.
aws_region This property is required. str
If using AWS providers (Bedrock), you can override the AWS_REGION environment variable by setting this option.
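Because the Bedrock option overrides the AWS_REGION environment variable, the effective region resolves in a fixed order. A Python sketch of that precedence (a paraphrase of the described behavior, not the actual Kong implementation):

```python
import os
from typing import Optional

def resolve_aws_region(config_region: Optional[str]) -> Optional[str]:
    # The plugin option, when set, takes precedence over AWS_REGION.
    return config_region or os.environ.get("AWS_REGION")

os.environ["AWS_REGION"] = "us-east-1"
resolve_aws_region("eu-west-1")  # config wins: 'eu-west-1'
resolve_aws_region(None)         # falls back to the env var: 'us-east-1'
```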

GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsGemini

ApiEndpoint This property is required. string
If running Gemini on Vertex, specify the regional API endpoint (hostname only).
LocationId This property is required. string
If running Gemini on Vertex, specify the location ID.
ProjectId This property is required. string
If running Gemini on Vertex, specify the project ID.
apiEndpoint This property is required. String
If running Gemini on Vertex, specify the regional API endpoint (hostname only).
locationId This property is required. String
If running Gemini on Vertex, specify the location ID.
projectId This property is required. String
If running Gemini on Vertex, specify the project ID.
apiEndpoint This property is required. string
If running Gemini on Vertex, specify the regional API endpoint (hostname only).
locationId This property is required. string
If running Gemini on Vertex, specify the location ID.
projectId This property is required. string
If running Gemini on Vertex, specify the project ID.
api_endpoint This property is required. str
If running Gemini on Vertex, specify the regional API endpoint (hostname only).
location_id This property is required. str
If running Gemini on Vertex, specify the location ID.
project_id This property is required. str
If running Gemini on Vertex, specify the project ID.
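The three Gemini-on-Vertex settings combine into a regional endpoint URL. A hedged Python sketch of that assembly; the exact path shape is an assumption based on Google's public Vertex AI API layout, not taken from this provider's documentation:

```python
def vertex_url(api_endpoint: str, project_id: str, location_id: str, model: str) -> str:
    # api_endpoint is the hostname only, e.g. 'us-central1-aiplatform.googleapis.com'
    return (f"https://{api_endpoint}/v1/projects/{project_id}"
            f"/locations/{location_id}/publishers/google/models/{model}:generateContent")

url = vertex_url("us-central1-aiplatform.googleapis.com",
                 "my-project", "us-central1", "gemini-pro")
```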

GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsHuggingface

UseCache This property is required. bool
Use the cache layer on the inference API.
WaitForModel This property is required. bool
Wait for the model if it is not ready.
useCache This property is required. Boolean
Use the cache layer on the inference API.
waitForModel This property is required. Boolean
Wait for the model if it is not ready.
useCache This property is required. boolean
Use the cache layer on the inference API.
waitForModel This property is required. boolean
Wait for the model if it is not ready.
use_cache This property is required. bool
Use the cache layer on the inference API.
wait_for_model This property is required. bool
Wait for the model if it is not ready.
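These two flags map onto the 'options' object of a Hugging Face Inference API request body (use_cache and wait_for_model are documented HF request options). A sketch of the resulting payload shape in Python; the helper name is hypothetical:

```python
# Sketch: how use_cache / wait_for_model could appear in a Hugging Face
# Inference API request body, via the documented 'options' object.
def build_hf_payload(inputs: str, use_cache: bool, wait_for_model: bool) -> dict:
    return {
        "inputs": inputs,
        "options": {"use_cache": use_cache, "wait_for_model": wait_for_model},
    }

payload = build_hf_payload("Hello", use_cache=False, wait_for_model=True)
```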

GetGatewayPluginAiResponseTransformerConsumer

Id This property is required. string
id This property is required. String
id This property is required. string
id This property is required. str

GetGatewayPluginAiResponseTransformerConsumerGroup

Id This property is required. string
id This property is required. String
id This property is required. string
id This property is required. str

GetGatewayPluginAiResponseTransformerOrdering

after This property is required. Property Map
before This property is required. Property Map

GetGatewayPluginAiResponseTransformerOrderingAfter

Accesses This property is required. List<string>
Accesses This property is required. []string
accesses This property is required. List<String>
accesses This property is required. string[]
accesses This property is required. Sequence[str]

GetGatewayPluginAiResponseTransformerOrderingBefore

Accesses This property is required. List<string>
Accesses This property is required. []string
accesses This property is required. List<String>
accesses This property is required. string[]
accesses This property is required. Sequence[str]
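Each accesses list names other plugins relative to which this plugin should run in the access phase (Kong's dynamic plugin ordering). A hypothetical shape, sketched as a plain Python dict with example plugin names:

```python
# Hypothetical ordering block: run this plugin in the access phase
# before 'ai-request-transformer' and after 'rate-limiting'.
ordering = {
    "before": {"accesses": ["ai-request-transformer"]},
    "after": {"accesses": ["rate-limiting"]},
}
```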

GetGatewayPluginAiResponseTransformerRoute

Id This property is required. string
id This property is required. String
id This property is required. string
id This property is required. str

GetGatewayPluginAiResponseTransformerService

Id This property is required. string
id This property is required. String
id This property is required. string
id This property is required. str

Package Details

Repository
konnect kong/terraform-provider-konnect
License
Notes
This Pulumi package is based on the konnect Terraform Provider.