konnect 2.5.0 published on Tuesday, Apr 15, 2025 by kong
konnect.getGatewayPluginAiResponseTransformer
Using getGatewayPluginAiResponseTransformer
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getGatewayPluginAiResponseTransformer(args: GetGatewayPluginAiResponseTransformerArgs, opts?: InvokeOptions): Promise<GetGatewayPluginAiResponseTransformerResult>
function getGatewayPluginAiResponseTransformerOutput(args: GetGatewayPluginAiResponseTransformerOutputArgs, opts?: InvokeOptions): Output<GetGatewayPluginAiResponseTransformerResult>
def get_gateway_plugin_ai_response_transformer(control_plane_id: Optional[str] = None,
opts: Optional[InvokeOptions] = None) -> GetGatewayPluginAiResponseTransformerResult
def get_gateway_plugin_ai_response_transformer_output(control_plane_id: Optional[pulumi.Input[str]] = None,
opts: Optional[InvokeOptions] = None) -> Output[GetGatewayPluginAiResponseTransformerResult]
func LookupGatewayPluginAiResponseTransformer(ctx *Context, args *LookupGatewayPluginAiResponseTransformerArgs, opts ...InvokeOption) (*LookupGatewayPluginAiResponseTransformerResult, error)
func LookupGatewayPluginAiResponseTransformerOutput(ctx *Context, args *LookupGatewayPluginAiResponseTransformerOutputArgs, opts ...InvokeOption) LookupGatewayPluginAiResponseTransformerResultOutput
> Note: This function is named LookupGatewayPluginAiResponseTransformer in the Go SDK.
public static class GetGatewayPluginAiResponseTransformer
{
public static Task<GetGatewayPluginAiResponseTransformerResult> InvokeAsync(GetGatewayPluginAiResponseTransformerArgs args, InvokeOptions? opts = null)
public static Output<GetGatewayPluginAiResponseTransformerResult> Invoke(GetGatewayPluginAiResponseTransformerInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetGatewayPluginAiResponseTransformerResult> getGatewayPluginAiResponseTransformer(GetGatewayPluginAiResponseTransformerArgs args, InvokeOptions options)
public static Output<GetGatewayPluginAiResponseTransformerResult> getGatewayPluginAiResponseTransformer(GetGatewayPluginAiResponseTransformerArgs args, InvokeOptions options)
fn::invoke:
  function: konnect:index/getGatewayPluginAiResponseTransformer:getGatewayPluginAiResponseTransformer
  arguments:
    # arguments dictionary
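As a sketch, the invoke form above can be used from a Pulumi YAML program like this (the control-plane ID and output name below are placeholders, not values from this page):

```yaml
variables:
  aiResponseTransformer:
    fn::invoke:
      function: konnect:index/getGatewayPluginAiResponseTransformer:getGatewayPluginAiResponseTransformer
      arguments:
        # Placeholder UUID; use your own Konnect control plane ID.
        controlPlaneId: 00000000-0000-0000-0000-000000000000
outputs:
  # Expose whether the plugin instance is enabled.
  pluginEnabled: ${aiResponseTransformer.enabled}
```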
The following arguments are supported:
- controlPlaneId. This property is required. string.

(In the Python SDK the argument is control_plane_id; other SDKs use their usual casing.)
getGatewayPluginAiResponseTransformer Result
The following output properties are available:
- Config GetGatewayPluginAiResponseTransformerConfig
- Consumer GetGatewayPluginAiResponseTransformerConsumer
- ConsumerGroup GetGatewayPluginAiResponseTransformerConsumerGroup
- ControlPlaneId string
- CreatedAt double
- Enabled bool
- Id string
- InstanceName string
- Ordering GetGatewayPluginAiResponseTransformerOrdering
- Protocols List<string>
- Route GetGatewayPluginAiResponseTransformerRoute
- Service GetGatewayPluginAiResponseTransformerService
- Tags List<string>
- UpdatedAt double

(Property names and types follow each SDK's conventions: camelCase with number and string[] in TypeScript, snake_case with float and Sequence[str] in Python, float64 and []string in Go, Double and List<String> in Java, and Number with Property Map values in YAML.)
Supporting Types
GetGatewayPluginAiResponseTransformerConfig
- HttpProxyHost. This property is required. string. A string representing a host name, such as example.com.
- HttpProxyPort. This property is required. double. An integer representing a port number between 0 and 65535, inclusive.
- HttpTimeout. This property is required. double. Timeout in milliseconds for the AI upstream service.
- HttpsProxyHost. This property is required. string. A string representing a host name, such as example.com.
- HttpsProxyPort. This property is required. double. An integer representing a port number between 0 and 65535, inclusive.
- HttpsVerify. This property is required. bool. Verify the TLS certificate of the AI upstream service.
- Llm. This property is required. GetGatewayPluginAiResponseTransformerConfigLlm.
- MaxRequestBodySize. This property is required. double. Maximum allowed body size to be introspected.
- ParseLlmResponseJsonInstructions. This property is required. bool. Set true to read a specific response format from the LLM, and accordingly set the status code, body, and headers that proxy back to the client. You need to engineer your LLM prompt to return the correct format; see the plugin docs 'Overview' page for usage instructions.
- Prompt. This property is required. string. Use this prompt to tune the LLM system/assistant message for the returning proxy response (from the upstream), and what response format you are expecting.
- TransformationExtractPattern. This property is required. string. Defines the regular expression that must match to indicate a successful AI transformation at the response phase. The first match will be set as the returning body. If the AI service's response doesn't match this pattern, a failure is returned to the client.

(Names and numeric types follow each SDK's conventions, as above.)
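TransformationExtractPattern's first-match semantics can be illustrated with a small Python sketch. This is illustrative only: the plugin evaluates the pattern inside Kong, and the sample pattern below is an assumption, not a recommended value.

```python
import re


def extract_body(llm_response: str, pattern: str):
    """Mimic transformation_extract_pattern: return the first match from the
    LLM response, or None when it doesn't match (the plugin would then
    return a failure to the client)."""
    match = re.search(pattern, llm_response)
    return match.group(0) if match else None


# Hypothetical pattern that pulls a JSON object out of a chatty LLM reply.
print(extract_body('Sure! Here you go: {"status": 200}', r"\{.*\}"))  # → {"status": 200}
```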
GetGatewayPluginAiResponseTransformerConfigLlm
- Auth. This property is required. GetGatewayPluginAiResponseTransformerConfigLlmAuth.
- Logging. This property is required. GetGatewayPluginAiResponseTransformerConfigLlmLogging.
- Model. This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModel.
- RouteType. This property is required. string. The model's operation implementation, for this provider. Set to preserve to pass through without transformation.
GetGatewayPluginAiResponseTransformerConfigLlmAuth
- Allow
Override This property is required. bool - If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
- Aws
Access Key Id This property is required. string - Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
- Aws
Secret Access Key This property is required. string - Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
- Azure
Client Id This property is required. string - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
- Azure
Client Secret This property is required. string - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
- Azure
Tenant Id This property is required. string - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
- Azure
Use Managed Identity This property is required. bool - Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
- Gcp
Service Account Json This property is required. string - Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable
GCP_SERVICE_ACCOUNT
. - Gcp
Use Service Account This property is required. bool - Use service account auth for GCP-based providers and models.
- Header
Name This property is required. string - If AI model requires authentication via Authorization or API key header, specify its name here.
- Header
Value This property is required. string - Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
- Param
Location This property is required. string - Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
- Param
Name This property is required. string - If AI model requires authentication via query parameter, specify its name here.
- Param
Value This property is required. string - Specify the full parameter value for 'param_name'.
- Allow
Override This property is required. bool - If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
- Aws
Access Key Id This property is required. string - Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
- Aws
Secret Access Key This property is required. string - Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
- Azure
Client Id This property is required. string - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
- Azure
Client Secret This property is required. string - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
- Azure
Tenant Id This property is required. string - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
- Azure
Use Managed Identity This property is required. bool - Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
- Gcp
Service Account Json This property is required. string - Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable
GCP_SERVICE_ACCOUNT
. - Gcp
Use Service Account This property is required. bool - Use service account auth for GCP-based providers and models.
- Header
Name This property is required. string - If AI model requires authentication via Authorization or API key header, specify its name here.
- Header
Value This property is required. string - Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
- Param
Location This property is required. string - Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
- Param
Name This property is required. string - If AI model requires authentication via query parameter, specify its name here.
- Param
Value This property is required. string - Specify the full parameter value for 'param_name'.
- allow
Override This property is required. Boolean - If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
- aws
Access Key Id This property is required. String - Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
- aws
Secret Access Key This property is required. String - Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
- azure
Client Id This property is required. String - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
- azure
Client Secret This property is required. String - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
- azure
Tenant Id This property is required. String - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
- azure
Use Managed Identity This property is required. Boolean - Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
- gcp
Service Account Json This property is required. String - Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable
GCP_SERVICE_ACCOUNT
. - gcp
Use Service Account This property is required. Boolean - Use service account auth for GCP-based providers and models.
- header
Name This property is required. String - If AI model requires authentication via Authorization or API key header, specify its name here.
- header
Value This property is required. String - Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
- param
Location This property is required. String - Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
- param
Name This property is required. String - If AI model requires authentication via query parameter, specify its name here.
- param
Value This property is required. String - Specify the full parameter value for 'param_name'.
- allow
Override This property is required. boolean - If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
- aws
Access Key Id This property is required. string - Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
- aws
Secret Access Key This property is required. string - Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
- azure
Client Id This property is required. string - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
- azure
Client Secret This property is required. string - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
- azure
Tenant Id This property is required. string - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
- azure
Use Managed Identity This property is required. boolean - Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
- gcp
Service Account Json This property is required. string - Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable
GCP_SERVICE_ACCOUNT
. - gcp
Use Service Account This property is required. boolean - Use service account auth for GCP-based providers and models.
- header
Name This property is required. string - If AI model requires authentication via Authorization or API key header, specify its name here.
- header
Value This property is required. string - Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
- param
Location This property is required. string - Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
- param
Name This property is required. string - If AI model requires authentication via query parameter, specify its name here.
- param
Value This property is required. string - Specify the full parameter value for 'param_name'.
- allow_
override This property is required. bool - If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
- aws_
access_ key_ id This property is required. str - Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
- aws_
secret_ access_ key This property is required. str - Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
- azure_
client_ id This property is required. str - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
- azure_
client_ secret This property is required. str - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
- azure_
tenant_ id This property is required. str - If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
- azure_
use_ managed_ identity This property is required. bool - Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
- gcp_
service_ account_ json This property is required. str - Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable
GCP_SERVICE_ACCOUNT
. - gcp_
use_ service_ account This property is required. bool - Use service account auth for GCP-based providers and models.
- header_
name This property is required. str - If AI model requires authentication via Authorization or API key header, specify its name here.
- header_
value This property is required. str - Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
- param_
location This property is required. str - Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
- param_
name This property is required. str - If AI model requires authentication via query parameter, specify its name here.
- param_
value This property is required. str - Specify the full parameter value for 'param_name'.
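As a rough illustration of how the header_* and param_* auth options map onto an outgoing upstream request, the sketch below (plain Python with made-up values; not Kong's actual implementation) shows where header-based versus parameter-based credentials land, depending on param_location:

```python
# Sketch: where the auth options land on an upstream request (illustrative only;
# the URL, header name, and key values are hypothetical, not Kong internals).
from urllib.parse import urlencode

def apply_auth(url, body, headers, *, header_name=None, header_value=None,
               param_name=None, param_value=None, param_location="query"):
    """Attach credentials the way the header_*/param_* options describe."""
    if header_name:
        # e.g. header_name="Authorization", header_value="Bearer key"
        headers[header_name] = header_value
    if param_name:
        if param_location == "query":
            sep = "&" if "?" in url else "?"
            url = url + sep + urlencode({param_name: param_value})
        else:  # otherwise: merged into the POST form/JSON body
            body[param_name] = param_value
    return url, body, headers

url, body, headers = apply_auth(
    "https://llm.example.com/v1/chat", {}, {},
    header_name="Authorization", header_value="Bearer key",
    param_name="api_key", param_value="key", param_location="query",
)
```

With allowOverride enabled, a value supplied in the incoming request would take precedence over the configured one; the sketch above only shows the configured path.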
GetGatewayPluginAiResponseTransformerConfigLlmLogging
- LogPayloads This property is required. bool - If enabled, will log the request and response body into the Kong log plugin(s) output.
- LogStatistics This property is required. bool - If enabled and supported by the driver, will add model usage and token metrics into the Kong log plugin(s) output.

The same properties exist in every SDK under its own naming conventions (PascalCase in C# and Go, camelCase in Java and Node.js, snake_case in Python).
GetGatewayPluginAiResponseTransformerConfigLlmModel
- Name This property is required. string - Model name to execute.
- Options This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptions - Key/value settings for the model.
- Provider This property is required. string - AI provider request format - Kong translates requests to and from the specified backend-compatible formats.
GetGatewayPluginAiResponseTransformerConfigLlmModelOptions
- AnthropicVersion This property is required. string - Defines the schema/API version, if using the Anthropic provider.
- AzureApiVersion This property is required. string - 'api-version' for Azure OpenAI instances.
- AzureDeploymentId This property is required. string - Deployment ID for Azure OpenAI instances.
- AzureInstance This property is required. string - Instance name for Azure OpenAI hosted models.
- Bedrock This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsBedrock
- Gemini This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsGemini
- Huggingface This property is required. GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsHuggingface
- InputCost This property is required. double - Defines the cost per 1M tokens in your prompt.
- Llama2Format This property is required. string - If using the llama2 provider, select the upstream message format.
- MaxTokens This property is required. double - Defines the max_tokens, if using chat or completion models.
- MistralFormat This property is required. string - If using the mistral provider, select the upstream message format.
- OutputCost This property is required. double - Defines the cost per 1M tokens in the output of the AI.
- Temperature This property is required. double - Defines the matching temperature, if using chat or completion models.
- TopK This property is required. double - Defines the top-k most likely tokens, if supported.
- TopP This property is required. double - Defines the top-p probability mass, if supported.
- UpstreamPath This property is required. string - Manually specify or override the AI operation path, used when e.g. using the 'preserve' route_type.
- UpstreamUrl This property is required. string - Manually specify or override the full URL to the AI operation endpoints, when calling (self-)hosted models, or for running via a private endpoint.
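InputCost and OutputCost are expressed per 1M tokens, so an estimated spend for a request works out as sketched below (a back-of-the-envelope illustration following the stated units; the prices and token counts are invented, and this is not any Kong-internal formula):

```python
# Estimate spend from per-1M-token prices (all values here are hypothetical).
def estimate_cost(prompt_tokens, completion_tokens, input_cost, output_cost):
    """input_cost/output_cost are per 1,000,000 tokens, as in the options above."""
    return (prompt_tokens * input_cost + completion_tokens * output_cost) / 1_000_000

# e.g. 2,000 prompt tokens at 3.0/1M plus 500 completion tokens at 15.0/1M
cost = estimate_cost(2_000, 500, input_cost=3.0, output_cost=15.0)
```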
GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsBedrock
- AwsRegion This property is required. string - If using AWS providers (Bedrock), you can override the AWS_REGION environment variable by setting this option.
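The override semantics described above amount to: a configured region, when set, wins over the AWS_REGION environment variable. A minimal sketch (not Kong code; the region values are examples):

```python
import os

def effective_aws_region(config_region=None):
    # A configured aws_region takes precedence over the AWS_REGION env var.
    return config_region or os.environ.get("AWS_REGION")

os.environ["AWS_REGION"] = "us-east-1"
region = effective_aws_region("eu-west-1")   # configured value wins
fallback = effective_aws_region(None)        # falls back to the env var
```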
GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsGemini
- ApiEndpoint This property is required. string - If running Gemini on Vertex, specify the regional API endpoint (hostname only).
- LocationId This property is required. string - If running Gemini on Vertex, specify the location ID.
- ProjectId This property is required. string - If running Gemini on Vertex, specify the project ID.
GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsHuggingface
- UseCache This property is required. bool - Use the cache layer on the inference API.
- WaitForModel This property is required. bool - Wait for the model if it is not ready.
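These two flags correspond to options the Hugging Face Inference API accepts as request headers (x-use-cache and x-wait-for-model, as documented by Hugging Face; how Kong itself transmits them is not shown here). A hedged sketch of how a client might translate them:

```python
# Map the Huggingface options onto Hugging Face Inference API request headers.
# (Header names are from Hugging Face's API docs; this is an illustrative
# client-side translation, not Kong's implementation.)
def huggingface_headers(use_cache: bool, wait_for_model: bool) -> dict:
    return {
        "x-use-cache": "true" if use_cache else "false",
        "x-wait-for-model": "true" if wait_for_model else "false",
    }

headers = huggingface_headers(use_cache=False, wait_for_model=True)
```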
GetGatewayPluginAiResponseTransformerConsumer
- Id This property is required. string
GetGatewayPluginAiResponseTransformerConsumerGroup
- Id This property is required. string
GetGatewayPluginAiResponseTransformerOrdering
- After This property is required. GetGatewayPluginAiResponseTransformerOrderingAfter
- Before This property is required. GetGatewayPluginAiResponseTransformerOrderingBefore
GetGatewayPluginAiResponseTransformerOrderingAfter
- Accesses This property is required. List<string>
GetGatewayPluginAiResponseTransformerOrderingBefore
- Accesses This property is required. List<string>
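Taken together, the Ordering, OrderingAfter, and OrderingBefore shapes nest as follows, sketched here as a plain dict (the plugin names in the accesses lists are made-up examples, not values this data source necessarily returns):

```python
# Shape of the ordering result: "after" and "before" each hold an "accesses"
# list of plugin names (the names below are hypothetical examples).
ordering = {
    "after": {"accesses": ["rate-limiting"]},
    "before": {"accesses": ["request-termination"]},
}

def access_phase_neighbours(ordering):
    """Plugins this plugin is ordered after / before in the access phase."""
    return ordering["after"]["accesses"], ordering["before"]["accesses"]

runs_after, runs_before = access_phase_neighbours(ordering)
```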
GetGatewayPluginAiResponseTransformerRoute
- Id This property is required. string
GetGatewayPluginAiResponseTransformerService
- Id This property is required. string
Package Details
- Repository: konnect kong/terraform-provider-konnect
- License
- Notes: This Pulumi package is based on the konnect Terraform Provider.