Aiven v6.37.0 published on Thursday, Apr 10, 2025 by Pulumi

aiven.getKafkaConnect


Gets information about an Aiven for Apache Kafka® Connect service.

Example Usage

import * as pulumi from "@pulumi/pulumi";
import * as aiven from "@pulumi/aiven";

const exampleKafkaConnect = aiven.getKafkaConnect({
    project: exampleProject.project,
    serviceName: "example-connect-service",
});
import pulumi
import pulumi_aiven as aiven

example_kafka_connect = aiven.get_kafka_connect(project=example_project["project"],
    service_name="example-connect-service")
package main

import (
	"github.com/pulumi/pulumi-aiven/sdk/v6/go/aiven"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		_, err := aiven.LookupKafkaConnect(ctx, &aiven.LookupKafkaConnectArgs{
			Project:     exampleProject.Project,
			ServiceName: "example-connect-service",
		}, nil)
		if err != nil {
			return err
		}
		return nil
	})
}
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using Aiven = Pulumi.Aiven;

return await Deployment.RunAsync(() => 
{
    var exampleKafkaConnect = Aiven.GetKafkaConnect.Invoke(new()
    {
        Project = exampleProject.Project,
        ServiceName = "example-connect-service",
    });

});
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.aiven.AivenFunctions;
import com.pulumi.aiven.inputs.GetKafkaConnectArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        final var exampleKafkaConnect = AivenFunctions.getKafkaConnect(GetKafkaConnectArgs.builder()
            .project(exampleProject.project())
            .serviceName("example-connect-service")
            .build());

    }
}
variables:
  exampleKafkaConnect:
    fn::invoke:
      function: aiven:getKafkaConnect
      arguments:
        project: ${exampleProject.project}
        serviceName: example-connect-service

Using getKafkaConnect

Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

function getKafkaConnect(args: GetKafkaConnectArgs, opts?: InvokeOptions): Promise<GetKafkaConnectResult>
function getKafkaConnectOutput(args: GetKafkaConnectOutputArgs, opts?: InvokeOptions): Output<GetKafkaConnectResult>
def get_kafka_connect(project: Optional[str] = None,
                      service_name: Optional[str] = None,
                      opts: Optional[InvokeOptions] = None) -> GetKafkaConnectResult
def get_kafka_connect_output(project: Optional[pulumi.Input[str]] = None,
                      service_name: Optional[pulumi.Input[str]] = None,
                      opts: Optional[InvokeOptions] = None) -> Output[GetKafkaConnectResult]
func LookupKafkaConnect(ctx *Context, args *LookupKafkaConnectArgs, opts ...InvokeOption) (*LookupKafkaConnectResult, error)
func LookupKafkaConnectOutput(ctx *Context, args *LookupKafkaConnectOutputArgs, opts ...InvokeOption) LookupKafkaConnectResultOutput

> Note: This function is named LookupKafkaConnect in the Go SDK.

public static class GetKafkaConnect 
{
    public static Task<GetKafkaConnectResult> InvokeAsync(GetKafkaConnectArgs args, InvokeOptions? opts = null)
    public static Output<GetKafkaConnectResult> Invoke(GetKafkaConnectInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetKafkaConnectResult> getKafkaConnect(GetKafkaConnectArgs args, InvokeOptions options)
public static Output<GetKafkaConnectResult> getKafkaConnect(GetKafkaConnectArgs args, InvokeOptions options)
fn::invoke:
  function: aiven:index/getKafkaConnect:getKafkaConnect
  arguments:
    # arguments dictionary
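In Pulumi YAML, the result of fn::invoke can feed other parts of the program once it resolves. A minimal sketch building on the example above (the project name and output name are illustrative, not part of the provider's schema):

```yaml
variables:
  exampleKafkaConnect:
    fn::invoke:
      function: aiven:getKafkaConnect
      arguments:
        project: my-project
        serviceName: example-connect-service
outputs:
  # Expose the Kafka Connect service URI as a stack output.
  connectUri: ${exampleKafkaConnect.serviceUri}
```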

The following arguments are supported:

Project This property is required. string
The name of the project this resource belongs to. To set up proper dependencies please refer to this variable as a reference. Changing this property forces recreation of the resource.
ServiceName This property is required. string
Specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be picked based on intended service usage rather than current attributes.
project This property is required. String
The name of the project this resource belongs to. To set up proper dependencies please refer to this variable as a reference. Changing this property forces recreation of the resource.
serviceName This property is required. String
Specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be picked based on intended service usage rather than current attributes.
project This property is required. string
The name of the project this resource belongs to. To set up proper dependencies please refer to this variable as a reference. Changing this property forces recreation of the resource.
serviceName This property is required. string
Specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be picked based on intended service usage rather than current attributes.
project This property is required. str
The name of the project this resource belongs to. To set up proper dependencies please refer to this variable as a reference. Changing this property forces recreation of the resource.
service_name This property is required. str
Specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be picked based on intended service usage rather than current attributes.
project This property is required. String
The name of the project this resource belongs to. To set up proper dependencies please refer to this variable as a reference. Changing this property forces recreation of the resource.
serviceName This property is required. String
Specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be picked based on intended service usage rather than current attributes.

getKafkaConnect Result

The following output properties are available:

AdditionalDiskSpace string
Add disk storage in increments of 30 GiB to scale your service. The maximum value depends on the service type and cloud provider. Removing additional storage causes the service nodes to go through a rolling restart, and there might be a short downtime for services without an autoscaler integration or high availability capabilities. The field can be safely removed when autoscaler is enabled without causing any changes.
CloudName string
The cloud provider and region the service is hosted in. The format is provider-region, for example: google-europe-west1. The available cloud regions can differ per project and service. Changing this value migrates the service to another cloud provider or region. The migration runs in the background and includes a DNS update to redirect traffic to the new region. Most services experience no downtime, but some databases may have a brief interruption during DNS propagation.
Components List<GetKafkaConnectComponent>
Service component information objects
DiskSpace string
Service disk space. Possible values depend on the service type, the cloud provider and the project. Reducing the disk space will result in the service rebalancing.
DiskSpaceCap string
The maximum disk space of the service, possible values depend on the service type, the cloud provider and the project.
DiskSpaceDefault string
The default disk space of the service; possible values depend on the service type, the cloud provider and the project. It's also the minimum value for disk_space.
DiskSpaceStep string
The default disk space step of the service; possible values depend on the service type, the cloud provider and the project. disk_space must be increased from disk_space_default in increments of this size.
DiskSpaceUsed string
Disk space that the service is currently using.
Id string
The provider-assigned unique ID for this managed resource.
KafkaConnectUserConfigs List<GetKafkaConnectKafkaConnectUserConfig>
Kafka Connect user configurable settings. Warning: there's no way to reset advanced configuration options to their defaults. Options that you add cannot be removed later.
MaintenanceWindowDow string
Day of week when maintenance operations should be performed. One of: monday, tuesday, wednesday, etc.
MaintenanceWindowTime string
Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
Plan string
Defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when going to a smaller plan such as the new plan must have sufficient amount of disk space to store all current data and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x where x is (roughly) the amount of memory on each node (also other attributes like number of CPUs and amount of disk space varies but naming is based on memory). The available options can be seen from the Aiven pricing page.
Project string
The name of the project this resource belongs to. To set up proper dependencies please refer to this variable as a reference. Changing this property forces recreation of the resource.
ProjectVpcId string
Specifies the VPC the service should run in. If the value is not set the service is not run inside a VPC. When set, the value should be given as a reference to set up dependencies correctly and the VPC must be in the same cloud and region as the service itself. Project can be freely moved to and from VPC after creation but doing so triggers migration to new servers so the operation can take significant amount of time to complete if the service has a lot of data.
ServiceHost string
The hostname of the service.
ServiceIntegrations List<GetKafkaConnectServiceIntegration>
Service integrations to specify when creating a service. Not applied after initial service creation
ServiceName string
Specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be picked based on intended service usage rather than current attributes.
ServicePassword string
Password used for connecting to the service, if applicable
ServicePort int
The port of the service
ServiceType string
Aiven internal service type code
ServiceUri string
URI for connecting to the service. Service specific info is under "kafka", "pg", etc.
ServiceUsername string
Username used for connecting to the service, if applicable
State string
StaticIps List<string>
Static IPs that are going to be associated with this service. Please assign a value using the 'toset' function. Once a static ip resource is in the 'assigned' state it cannot be unbound from the node again
Tags List<GetKafkaConnectTag>
Tags are key-value pairs that allow you to categorize services.
TechEmails List<GetKafkaConnectTechEmail>
The email addresses for service contacts, who will receive important alerts and updates about this service. You can also set email contacts at the project level.
TerminationProtection bool
Prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics but for services with backups much of the content can at least be restored from backup in case accidental deletion is done.
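The disk-space fields above fit together arithmetically: disk_space starts at disk_space_default, grows in increments of disk_space_step, and is bounded by disk_space_cap. A minimal sketch of that relationship (the numbers are illustrative only; real limits depend on the service type, cloud provider and project):

```python
def valid_disk_space_values(default_gib: int, step_gib: int, cap_gib: int) -> list[int]:
    """Enumerate the disk_space values a service could take:
    from disk_space_default up to disk_space_cap, in steps of disk_space_step."""
    return list(range(default_gib, cap_gib + 1, step_gib))

# Illustrative numbers, not actual Aiven plan limits.
print(valid_disk_space_values(80, 30, 200))  # [80, 110, 140, 170, 200]
```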
AdditionalDiskSpace string
Add disk storage in increments of 30 GiB to scale your service. The maximum value depends on the service type and cloud provider. Removing additional storage causes the service nodes to go through a rolling restart, and there might be a short downtime for services without an autoscaler integration or high availability capabilities. The field can be safely removed when autoscaler is enabled without causing any changes.
CloudName string
The cloud provider and region the service is hosted in. The format is provider-region, for example: google-europe-west1. The available cloud regions can differ per project and service. Changing this value migrates the service to another cloud provider or region. The migration runs in the background and includes a DNS update to redirect traffic to the new region. Most services experience no downtime, but some databases may have a brief interruption during DNS propagation.
Components []GetKafkaConnectComponent
Service component information objects
DiskSpace string
Service disk space. Possible values depend on the service type, the cloud provider and the project. Reducing the disk space will result in the service rebalancing.
DiskSpaceCap string
The maximum disk space of the service, possible values depend on the service type, the cloud provider and the project.
DiskSpaceDefault string
The default disk space of the service; possible values depend on the service type, the cloud provider and the project. It's also the minimum value for disk_space.
DiskSpaceStep string
The default disk space step of the service; possible values depend on the service type, the cloud provider and the project. disk_space must be increased from disk_space_default in increments of this size.
DiskSpaceUsed string
Disk space that the service is currently using.
Id string
The provider-assigned unique ID for this managed resource.
KafkaConnectUserConfigs []GetKafkaConnectKafkaConnectUserConfig
Kafka Connect user configurable settings. Warning: there's no way to reset advanced configuration options to their defaults. Options that you add cannot be removed later.
MaintenanceWindowDow string
Day of week when maintenance operations should be performed. One of: monday, tuesday, wednesday, etc.
MaintenanceWindowTime string
Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
Plan string
Defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when going to a smaller plan such as the new plan must have sufficient amount of disk space to store all current data and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x where x is (roughly) the amount of memory on each node (also other attributes like number of CPUs and amount of disk space varies but naming is based on memory). The available options can be seen from the Aiven pricing page.
Project string
The name of the project this resource belongs to. To set up proper dependencies please refer to this variable as a reference. Changing this property forces recreation of the resource.
ProjectVpcId string
Specifies the VPC the service should run in. If the value is not set the service is not run inside a VPC. When set, the value should be given as a reference to set up dependencies correctly and the VPC must be in the same cloud and region as the service itself. Project can be freely moved to and from VPC after creation but doing so triggers migration to new servers so the operation can take significant amount of time to complete if the service has a lot of data.
ServiceHost string
The hostname of the service.
ServiceIntegrations []GetKafkaConnectServiceIntegration
Service integrations to specify when creating a service. Not applied after initial service creation
ServiceName string
Specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be picked based on intended service usage rather than current attributes.
ServicePassword string
Password used for connecting to the service, if applicable
ServicePort int
The port of the service
ServiceType string
Aiven internal service type code
ServiceUri string
URI for connecting to the service. Service specific info is under "kafka", "pg", etc.
ServiceUsername string
Username used for connecting to the service, if applicable
State string
StaticIps []string
Static IPs that are going to be associated with this service. Please assign a value using the 'toset' function. Once a static ip resource is in the 'assigned' state it cannot be unbound from the node again
Tags []GetKafkaConnectTag
Tags are key-value pairs that allow you to categorize services.
TechEmails []GetKafkaConnectTechEmail
The email addresses for service contacts, who will receive important alerts and updates about this service. You can also set email contacts at the project level.
TerminationProtection bool
Prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics but for services with backups much of the content can at least be restored from backup in case accidental deletion is done.
additionalDiskSpace String
Add disk storage in increments of 30 GiB to scale your service. The maximum value depends on the service type and cloud provider. Removing additional storage causes the service nodes to go through a rolling restart, and there might be a short downtime for services without an autoscaler integration or high availability capabilities. The field can be safely removed when autoscaler is enabled without causing any changes.
cloudName String
The cloud provider and region the service is hosted in. The format is provider-region, for example: google-europe-west1. The available cloud regions can differ per project and service. Changing this value migrates the service to another cloud provider or region. The migration runs in the background and includes a DNS update to redirect traffic to the new region. Most services experience no downtime, but some databases may have a brief interruption during DNS propagation.
components List<GetKafkaConnectComponent>
Service component information objects
diskSpace String
Service disk space. Possible values depend on the service type, the cloud provider and the project. Reducing the disk space will result in the service rebalancing.
diskSpaceCap String
The maximum disk space of the service, possible values depend on the service type, the cloud provider and the project.
diskSpaceDefault String
The default disk space of the service; possible values depend on the service type, the cloud provider and the project. It's also the minimum value for disk_space.
diskSpaceStep String
The default disk space step of the service; possible values depend on the service type, the cloud provider and the project. disk_space must be increased from disk_space_default in increments of this size.
diskSpaceUsed String
Disk space that the service is currently using.
id String
The provider-assigned unique ID for this managed resource.
kafkaConnectUserConfigs List<GetKafkaConnectKafkaConnectUserConfig>
Kafka Connect user configurable settings. Warning: there's no way to reset advanced configuration options to their defaults. Options that you add cannot be removed later.
maintenanceWindowDow String
Day of week when maintenance operations should be performed. One of: monday, tuesday, wednesday, etc.
maintenanceWindowTime String
Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
plan String
Defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when going to a smaller plan such as the new plan must have sufficient amount of disk space to store all current data and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x where x is (roughly) the amount of memory on each node (also other attributes like number of CPUs and amount of disk space varies but naming is based on memory). The available options can be seen from the Aiven pricing page.
project String
The name of the project this resource belongs to. To set up proper dependencies please refer to this variable as a reference. Changing this property forces recreation of the resource.
projectVpcId String
Specifies the VPC the service should run in. If the value is not set the service is not run inside a VPC. When set, the value should be given as a reference to set up dependencies correctly and the VPC must be in the same cloud and region as the service itself. Project can be freely moved to and from VPC after creation but doing so triggers migration to new servers so the operation can take significant amount of time to complete if the service has a lot of data.
serviceHost String
The hostname of the service.
serviceIntegrations List<GetKafkaConnectServiceIntegration>
Service integrations to specify when creating a service. Not applied after initial service creation
serviceName String
Specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be picked based on intended service usage rather than current attributes.
servicePassword String
Password used for connecting to the service, if applicable
servicePort Integer
The port of the service
serviceType String
Aiven internal service type code
serviceUri String
URI for connecting to the service. Service specific info is under "kafka", "pg", etc.
serviceUsername String
Username used for connecting to the service, if applicable
state String
staticIps List<String>
Static IPs that are going to be associated with this service. Please assign a value using the 'toset' function. Once a static ip resource is in the 'assigned' state it cannot be unbound from the node again
tags List<GetKafkaConnectTag>
Tags are key-value pairs that allow you to categorize services.
techEmails List<GetKafkaConnectTechEmail>
The email addresses for service contacts, who will receive important alerts and updates about this service. You can also set email contacts at the project level.
terminationProtection Boolean
Prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics but for services with backups much of the content can at least be restored from backup in case accidental deletion is done.
additionalDiskSpace string
Add disk storage in increments of 30 GiB to scale your service. The maximum value depends on the service type and cloud provider. Removing additional storage causes the service nodes to go through a rolling restart, and there might be a short downtime for services without an autoscaler integration or high availability capabilities. The field can be safely removed when autoscaler is enabled without causing any changes.
cloudName string
The cloud provider and region the service is hosted in. The format is provider-region, for example: google-europe-west1. The available cloud regions can differ per project and service. Changing this value migrates the service to another cloud provider or region. The migration runs in the background and includes a DNS update to redirect traffic to the new region. Most services experience no downtime, but some databases may have a brief interruption during DNS propagation.
components GetKafkaConnectComponent[]
Service component information objects
diskSpace string
Service disk space. Possible values depend on the service type, the cloud provider and the project. Reducing the disk space will result in the service rebalancing.
diskSpaceCap string
The maximum disk space of the service, possible values depend on the service type, the cloud provider and the project.
diskSpaceDefault string
The default disk space of the service; possible values depend on the service type, the cloud provider and the project. It's also the minimum value for disk_space.
diskSpaceStep string
The default disk space step of the service; possible values depend on the service type, the cloud provider and the project. disk_space must be increased from disk_space_default in increments of this size.
diskSpaceUsed string
Disk space that the service is currently using.
id string
The provider-assigned unique ID for this managed resource.
kafkaConnectUserConfigs GetKafkaConnectKafkaConnectUserConfig[]
Kafka Connect user configurable settings. Warning: there's no way to reset advanced configuration options to their defaults. Options that you add cannot be removed later.
maintenanceWindowDow string
Day of week when maintenance operations should be performed. One of: monday, tuesday, wednesday, etc.
maintenanceWindowTime string
Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
plan string
Defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when going to a smaller plan such as the new plan must have sufficient amount of disk space to store all current data and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x where x is (roughly) the amount of memory on each node (also other attributes like number of CPUs and amount of disk space varies but naming is based on memory). The available options can be seen from the Aiven pricing page.
project string
The name of the project this resource belongs to. To set up proper dependencies please refer to this variable as a reference. Changing this property forces recreation of the resource.
projectVpcId string
Specifies the VPC the service should run in. If the value is not set the service is not run inside a VPC. When set, the value should be given as a reference to set up dependencies correctly and the VPC must be in the same cloud and region as the service itself. Project can be freely moved to and from VPC after creation but doing so triggers migration to new servers so the operation can take significant amount of time to complete if the service has a lot of data.
serviceHost string
The hostname of the service.
serviceIntegrations GetKafkaConnectServiceIntegration[]
Service integrations to specify when creating a service. Not applied after initial service creation
serviceName string
Specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service, so the name should be picked based on intended service usage rather than current attributes.
servicePassword string
Password used for connecting to the service, if applicable
servicePort number
The port of the service
serviceType string
Aiven internal service type code
serviceUri string
URI for connecting to the service. Service specific info is under "kafka", "pg", etc.
serviceUsername string
Username used for connecting to the service, if applicable
state string
staticIps string[]
Static IPs that are going to be associated with this service. Please assign a value using the 'toset' function. Once a static ip resource is in the 'assigned' state it cannot be unbound from the node again
tags GetKafkaConnectTag[]
Tags are key-value pairs that allow you to categorize services.
techEmails GetKafkaConnectTechEmail[]
The email addresses for service contacts, who will receive important alerts and updates about this service. You can also set email contacts at the project level.
terminationProtection boolean
Prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics but for services with backups much of the content can at least be restored from backup in case accidental deletion is done.
additional_disk_space str
Add disk storage in increments of 30 GiB to scale your service. The maximum value depends on the service type and cloud provider. Removing additional storage causes the service nodes to go through a rolling restart, and there might be a short downtime for services without an autoscaler integration or high availability capabilities. The field can be safely removed when autoscaler is enabled without causing any changes.
cloud_name str
The cloud provider and region the service is hosted in. The format is provider-region, for example: google-europe-west1. The available cloud regions can differ per project and service. Changing this value migrates the service to another cloud provider or region. The migration runs in the background and includes a DNS update to redirect traffic to the new region. Most services experience no downtime, but some databases may have a brief interruption during DNS propagation.
components Sequence[GetKafkaConnectComponent]
Service component information objects
disk_space str
Service disk space. Possible values depend on the service type, the cloud provider and the project. Reducing the disk space will result in the service rebalancing.
disk_space_cap str
The maximum disk space of the service, possible values depend on the service type, the cloud provider and the project.
disk_space_default str
The default disk space of the service; possible values depend on the service type, the cloud provider and the project. It's also the minimum value for disk_space.
disk_space_step str
The default disk space step of the service; possible values depend on the service type, the cloud provider and the project. disk_space must be increased from disk_space_default in increments of this size.
disk_space_used str
Disk space that the service is currently using.
id str
The provider-assigned unique ID for this managed resource.
kafka_connect_user_configs Sequence[GetKafkaConnectKafkaConnectUserConfig]
Kafka Connect user configurable settings. Warning: there's no way to reset advanced configuration options to their defaults. Options that you add cannot be removed later.
maintenance_window_dow str
Day of week when maintenance operations should be performed. One of: monday, tuesday, wednesday, etc.
maintenance_window_time str
Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
plan str
Defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when going to a smaller plan such as the new plan must have sufficient amount of disk space to store all current data and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x where x is (roughly) the amount of memory on each node (also other attributes like number of CPUs and amount of disk space varies but naming is based on memory). The available options can be seen from the Aiven pricing page.
project str
The name of the project this resource belongs to. To set up proper dependencies please refer to this variable as a reference. Changing this property forces recreation of the resource.
project_vpc_id str
Specifies the VPC the service should run in. If the value is not set the service is not run inside a VPC. When set, the value should be given as a reference to set up dependencies correctly and the VPC must be in the same cloud and region as the service itself. Project can be freely moved to and from VPC after creation but doing so triggers migration to new servers so the operation can take significant amount of time to complete if the service has a lot of data.
service_host str
The hostname of the service.
service_integrations Sequence[GetKafkaConnectServiceIntegration]
Service integrations to specify when creating a service. Not applied after initial service creation
service_name str
Specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service so name should be picked based on intended service usage rather than current attributes.
service_password str
Password used for connecting to the service, if applicable
service_port int
The port of the service
service_type str
Aiven internal service type code
service_uri str
URI for connecting to the service. Service specific info is under "kafka", "pg", etc.
service_username str
Username used for connecting to the service, if applicable
state str
static_ips Sequence[str]
Static IPs that are going to be associated with this service. Please assign a value using the 'toset' function. Once a static IP resource is in the 'assigned' state, it cannot be unbound from the node again.
tags Sequence[GetKafkaConnectTag]
Tags are key-value pairs that allow you to categorize services.
tech_emails Sequence[GetKafkaConnectTechEmail]
The email addresses for service contacts, who will receive important alerts and updates about this service. You can also set email contacts at the project level.
termination_protection bool
Prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup in case of accidental deletion.
additionalDiskSpace String
Add disk storage in increments of 30 GiB to scale your service. The maximum value depends on the service type and cloud provider. Removing additional storage causes the service nodes to go through a rolling restart, and there might be a short downtime for services without an autoscaler integration or high availability capabilities. The field can be safely removed when autoscaler is enabled without causing any changes.
cloudName String
The cloud provider and region the service is hosted in. The format is provider-region, for example: google-europe-west1. The available cloud regions can differ per project and service. Changing this value migrates the service to another cloud provider or region. The migration runs in the background and includes a DNS update to redirect traffic to the new region. Most services experience no downtime, but some databases may have a brief interruption during DNS propagation.
components List<Property Map>
Service component information objects
diskSpace String
Service disk space. Possible values depend on the service type, the cloud provider and the project. Reducing disk space will result in the service rebalancing.
diskSpaceCap String
The maximum disk space of the service, possible values depend on the service type, the cloud provider and the project.
diskSpaceDefault String
The default disk space of the service; possible values depend on the service type, the cloud provider and the project. It's also the minimum value for disk_space.
diskSpaceStep String
The default disk space step of the service. Possible values depend on the service type, the cloud provider and the project. disk_space must be incremented from disk_space_default in multiples of this size.
diskSpaceUsed String
Disk space that the service is currently using.
id String
The provider-assigned unique ID for this managed resource.
kafkaConnectUserConfigs List<Property Map>
KafkaConnect user configurable settings. Warning: There's no way to reset advanced configuration options to default. Options that you add cannot be removed later
maintenanceWindowDow String
Day of week when maintenance operations should be performed. One of monday, tuesday, wednesday, etc.
maintenanceWindowTime String
Time of day when maintenance operations should be performed. UTC time in HH:mm:ss format.
plan String
Defines what kind of computing resources are allocated for the service. It can be changed after creation, though there are some restrictions when moving to a smaller plan: the new plan must have enough disk space to store all current data, and switching to a plan with fewer nodes might not be supported. The basic plan names are hobbyist, startup-x, business-x and premium-x, where x is (roughly) the amount of memory on each node (other attributes such as the number of CPUs and the amount of disk space also vary, but the naming is based on memory). The available options can be seen on the Aiven pricing page.
project String
The name of the project this resource belongs to. To set up proper dependencies please refer to this variable as a reference. Changing this property forces recreation of the resource.
projectVpcId String
Specifies the VPC the service should run in. If the value is not set, the service does not run inside a VPC. When set, the value should be given as a reference to set up dependencies correctly, and the VPC must be in the same cloud and region as the service itself. A project can be freely moved to and from a VPC after creation, but doing so triggers a migration to new servers, so the operation can take a significant amount of time to complete if the service has a lot of data.
serviceHost String
The hostname of the service.
serviceIntegrations List<Property Map>
Service integrations to specify when creating a service. Not applied after initial service creation
serviceName String
Specifies the actual name of the service. The name cannot be changed later without destroying and re-creating the service so name should be picked based on intended service usage rather than current attributes.
servicePassword String
Password used for connecting to the service, if applicable
servicePort Number
The port of the service
serviceType String
Aiven internal service type code
serviceUri String
URI for connecting to the service. Service specific info is under "kafka", "pg", etc.
serviceUsername String
Username used for connecting to the service, if applicable
state String
staticIps List<String>
Static IPs that are going to be associated with this service. Please assign a value using the 'toset' function. Once a static IP resource is in the 'assigned' state, it cannot be unbound from the node again.
tags List<Property Map>
Tags are key-value pairs that allow you to categorize services.
techEmails List<Property Map>
The email addresses for service contacts, who will receive important alerts and updates about this service. You can also set email contacts at the project level.
terminationProtection Boolean
Prevents the service from being deleted. It is recommended to set this to true for all production services to prevent unintentional service deletion. This does not shield against deleting databases or topics, but for services with backups much of the content can at least be restored from backup in case of accidental deletion.
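The disk space attributes above follow a simple rule: a valid disk_space value must lie between disk_space_default and disk_space_cap and differ from the default by a whole number of disk_space_step increments. A minimal sketch of that check, assuming GiB-denominated size strings as shown in the Aiven docs (the helper names here are illustrative, not part of the provider):

```python
# Hypothetical helper, not provider code: validates a requested disk_space
# against disk_space_default, disk_space_step and disk_space_cap.
import re

def parse_gib(value: str) -> int:
    """Parse a size string such as '80GiB' into whole GiB."""
    match = re.fullmatch(r"(\d+)\s*(GiB|G)", value.strip())
    if not match:
        raise ValueError(f"unrecognized disk size: {value!r}")
    return int(match.group(1))

def is_valid_disk_space(requested: str, default: str, step: str, cap: str) -> bool:
    req, base, inc, top = (parse_gib(v) for v in (requested, default, step, cap))
    if req < base or req > top:
        return False
    # disk_space must differ from disk_space_default by a multiple of the step.
    return (req - base) % inc == 0
```

For example, with an 80GiB default and a 30GiB step, 110GiB is valid but 100GiB is not.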

Supporting Types

GetKafkaConnectComponent

Component This property is required. string
Service component name
ConnectionUri This property is required. string
Connection info for connecting to the service component. This is a combination of host and port.
Host This property is required. string
Host name for connecting to the service component
KafkaAuthenticationMethod This property is required. string
Kafka authentication method. This is a value specific to the 'kafka' service component
KafkaSslCa This property is required. string
Kafka certificate used. The possible values are letsencrypt and project_ca.
Port This property is required. int
Port number for connecting to the service component
Route This property is required. string
Network access route
Ssl This property is required. bool
Whether the endpoint is encrypted or accepts plaintext. By default, endpoints are always encrypted; this property is only included for service components that may disable encryption.
Usage This property is required. string
DNS usage name
Component This property is required. string
Service component name
ConnectionUri This property is required. string
Connection info for connecting to the service component. This is a combination of host and port.
Host This property is required. string
Host name for connecting to the service component
KafkaAuthenticationMethod This property is required. string
Kafka authentication method. This is a value specific to the 'kafka' service component
KafkaSslCa This property is required. string
Kafka certificate used. The possible values are letsencrypt and project_ca.
Port This property is required. int
Port number for connecting to the service component
Route This property is required. string
Network access route
Ssl This property is required. bool
Whether the endpoint is encrypted or accepts plaintext. By default, endpoints are always encrypted; this property is only included for service components that may disable encryption.
Usage This property is required. string
DNS usage name
component This property is required. String
Service component name
connectionUri This property is required. String
Connection info for connecting to the service component. This is a combination of host and port.
host This property is required. String
Host name for connecting to the service component
kafkaAuthenticationMethod This property is required. String
Kafka authentication method. This is a value specific to the 'kafka' service component
kafkaSslCa This property is required. String
Kafka certificate used. The possible values are letsencrypt and project_ca.
port This property is required. Integer
Port number for connecting to the service component
route This property is required. String
Network access route
ssl This property is required. Boolean
Whether the endpoint is encrypted or accepts plaintext. By default, endpoints are always encrypted; this property is only included for service components that may disable encryption.
usage This property is required. String
DNS usage name
component This property is required. string
Service component name
connectionUri This property is required. string
Connection info for connecting to the service component. This is a combination of host and port.
host This property is required. string
Host name for connecting to the service component
kafkaAuthenticationMethod This property is required. string
Kafka authentication method. This is a value specific to the 'kafka' service component
kafkaSslCa This property is required. string
Kafka certificate used. The possible values are letsencrypt and project_ca.
port This property is required. number
Port number for connecting to the service component
route This property is required. string
Network access route
ssl This property is required. boolean
Whether the endpoint is encrypted or accepts plaintext. By default, endpoints are always encrypted; this property is only included for service components that may disable encryption.
usage This property is required. string
DNS usage name
component This property is required. str
Service component name
connection_uri This property is required. str
Connection info for connecting to the service component. This is a combination of host and port.
host This property is required. str
Host name for connecting to the service component
kafka_authentication_method This property is required. str
Kafka authentication method. This is a value specific to the 'kafka' service component
kafka_ssl_ca This property is required. str
Kafka certificate used. The possible values are letsencrypt and project_ca.
port This property is required. int
Port number for connecting to the service component
route This property is required. str
Network access route
ssl This property is required. bool
Whether the endpoint is encrypted or accepts plaintext. By default, endpoints are always encrypted; this property is only included for service components that may disable encryption.
usage This property is required. str
DNS usage name
component This property is required. String
Service component name
connectionUri This property is required. String
Connection info for connecting to the service component. This is a combination of host and port.
host This property is required. String
Host name for connecting to the service component
kafkaAuthenticationMethod This property is required. String
Kafka authentication method. This is a value specific to the 'kafka' service component
kafkaSslCa This property is required. String
Kafka certificate used. The possible values are letsencrypt and project_ca.
port This property is required. Number
Port number for connecting to the service component
route This property is required. String
Network access route
ssl This property is required. Boolean
Whether the endpoint is encrypted or accepts plaintext. By default, endpoints are always encrypted; this property is only included for service components that may disable encryption.
usage This property is required. String
DNS usage name
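The component's connection_uri is described above as a combination of host and port. A minimal sketch of that assembly (illustrative only, not how the provider computes the value internally), including the bracket convention needed for bare IPv6 hosts:

```python
# Illustrative helper: join a component's host and port into a host:port
# connection string, bracketing bare IPv6 addresses.
def connection_uri(host: str, port: int) -> str:
    if ":" in host:  # a colon in the host means a bare IPv6 address
        return f"[{host}]:{port}"
    return f"{host}:{port}"
```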

GetKafkaConnectKafkaConnectUserConfig

AdditionalBackupRegions string
Additional Cloud Regions for Backup Replication.

Deprecated: This property is deprecated.

IpFilterObjects List<GetKafkaConnectKafkaConnectUserConfigIpFilterObject>
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
IpFilterStrings List<string>
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.
IpFilters List<string>
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.

Deprecated: Use ip_filter_string instead.

KafkaConnect GetKafkaConnectKafkaConnectUserConfigKafkaConnect
Kafka Connect configuration values
PluginVersions List<GetKafkaConnectKafkaConnectUserConfigPluginVersion>
The plugin selected by the user
PrivateAccess GetKafkaConnectKafkaConnectUserConfigPrivateAccess
Allow access to selected service ports from private networks
PrivatelinkAccess GetKafkaConnectKafkaConnectUserConfigPrivatelinkAccess
Allow access to selected service components through Privatelink
PublicAccess GetKafkaConnectKafkaConnectUserConfigPublicAccess
Allow access to selected service ports from the public Internet
SecretProviders List<GetKafkaConnectKafkaConnectUserConfigSecretProvider>
ServiceLog bool
Store logs for the service so that they are available in the HTTP API and console.
StaticIps bool
Use static public IP addresses.
AdditionalBackupRegions string
Additional Cloud Regions for Backup Replication.

Deprecated: This property is deprecated.

IpFilterObjects []GetKafkaConnectKafkaConnectUserConfigIpFilterObject
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
IpFilterStrings []string
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.
IpFilters []string
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.

Deprecated: Use ip_filter_string instead.

KafkaConnect GetKafkaConnectKafkaConnectUserConfigKafkaConnect
Kafka Connect configuration values
PluginVersions []GetKafkaConnectKafkaConnectUserConfigPluginVersion
The plugin selected by the user
PrivateAccess GetKafkaConnectKafkaConnectUserConfigPrivateAccess
Allow access to selected service ports from private networks
PrivatelinkAccess GetKafkaConnectKafkaConnectUserConfigPrivatelinkAccess
Allow access to selected service components through Privatelink
PublicAccess GetKafkaConnectKafkaConnectUserConfigPublicAccess
Allow access to selected service ports from the public Internet
SecretProviders []GetKafkaConnectKafkaConnectUserConfigSecretProvider
ServiceLog bool
Store logs for the service so that they are available in the HTTP API and console.
StaticIps bool
Use static public IP addresses.
additionalBackupRegions String
Additional Cloud Regions for Backup Replication.

Deprecated: This property is deprecated.

ipFilterObjects List<GetKafkaConnectKafkaConnectUserConfigIpFilterObject>
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
ipFilterStrings List<String>
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.
ipFilters List<String>
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.

Deprecated: Use ip_filter_string instead.

kafkaConnect GetKafkaConnectKafkaConnectUserConfigKafkaConnect
Kafka Connect configuration values
pluginVersions List<GetKafkaConnectKafkaConnectUserConfigPluginVersion>
The plugin selected by the user
privateAccess GetKafkaConnectKafkaConnectUserConfigPrivateAccess
Allow access to selected service ports from private networks
privatelinkAccess GetKafkaConnectKafkaConnectUserConfigPrivatelinkAccess
Allow access to selected service components through Privatelink
publicAccess GetKafkaConnectKafkaConnectUserConfigPublicAccess
Allow access to selected service ports from the public Internet
secretProviders List<GetKafkaConnectKafkaConnectUserConfigSecretProvider>
serviceLog Boolean
Store logs for the service so that they are available in the HTTP API and console.
staticIps Boolean
Use static public IP addresses.
additionalBackupRegions string
Additional Cloud Regions for Backup Replication.

Deprecated: This property is deprecated.

ipFilterObjects GetKafkaConnectKafkaConnectUserConfigIpFilterObject[]
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
ipFilterStrings string[]
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.
ipFilters string[]
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.

Deprecated: Use ip_filter_string instead.

kafkaConnect GetKafkaConnectKafkaConnectUserConfigKafkaConnect
Kafka Connect configuration values
pluginVersions GetKafkaConnectKafkaConnectUserConfigPluginVersion[]
The plugin selected by the user
privateAccess GetKafkaConnectKafkaConnectUserConfigPrivateAccess
Allow access to selected service ports from private networks
privatelinkAccess GetKafkaConnectKafkaConnectUserConfigPrivatelinkAccess
Allow access to selected service components through Privatelink
publicAccess GetKafkaConnectKafkaConnectUserConfigPublicAccess
Allow access to selected service ports from the public Internet
secretProviders GetKafkaConnectKafkaConnectUserConfigSecretProvider[]
serviceLog boolean
Store logs for the service so that they are available in the HTTP API and console.
staticIps boolean
Use static public IP addresses.
additional_backup_regions str
Additional Cloud Regions for Backup Replication.

Deprecated: This property is deprecated.

ip_filter_objects Sequence[GetKafkaConnectKafkaConnectUserConfigIpFilterObject]
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
ip_filter_strings Sequence[str]
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.
ip_filters Sequence[str]
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.

Deprecated: Use ip_filter_string instead.

kafka_connect GetKafkaConnectKafkaConnectUserConfigKafkaConnect
Kafka Connect configuration values
plugin_versions Sequence[GetKafkaConnectKafkaConnectUserConfigPluginVersion]
The plugin selected by the user
private_access GetKafkaConnectKafkaConnectUserConfigPrivateAccess
Allow access to selected service ports from private networks
privatelink_access GetKafkaConnectKafkaConnectUserConfigPrivatelinkAccess
Allow access to selected service components through Privatelink
public_access GetKafkaConnectKafkaConnectUserConfigPublicAccess
Allow access to selected service ports from the public Internet
secret_providers Sequence[GetKafkaConnectKafkaConnectUserConfigSecretProvider]
service_log bool
Store logs for the service so that they are available in the HTTP API and console.
static_ips bool
Use static public IP addresses.
additionalBackupRegions String
Additional Cloud Regions for Backup Replication.

Deprecated: This property is deprecated.

ipFilterObjects List<Property Map>
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16
ipFilterStrings List<String>
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.
ipFilters List<String>
Allow incoming connections from CIDR address block, e.g. 10.20.0.0/16.

Deprecated: Use ip_filter_string instead.

kafkaConnect Property Map
Kafka Connect configuration values
pluginVersions List<Property Map>
The plugin selected by the user
privateAccess Property Map
Allow access to selected service ports from private networks
privatelinkAccess Property Map
Allow access to selected service components through Privatelink
publicAccess Property Map
Allow access to selected service ports from the public Internet
secretProviders List<Property Map>
serviceLog Boolean
Store logs for the service so that they are available in the HTTP API and console.
staticIps Boolean
Use static public IP addresses.
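Taken together, the user-config fields above form a nested structure. A hedged sketch of that shape as a plain Python dict, using the snake_case names from the Python attribute listing (the values are illustrative examples, not defaults):

```python
# Illustrative user-config shape for an Aiven Kafka Connect service.
# Key names follow the Python attributes documented above; values are
# example choices, not provider defaults.
kafka_connect_user_config = {
    "ip_filter_strings": ["10.20.0.0/16", "192.168.1.0/24"],
    "public_access": {"kafka_connect": False, "prometheus": False},
    "kafka_connect": {
        "consumer_isolation_level": "read_committed",
        "offset_flush_interval_ms": 60000,
    },
    "service_log": True,
    "static_ips": False,
}
```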

GetKafkaConnectKafkaConnectUserConfigIpFilterObject

Network This property is required. string
CIDR address block. Example: 10.20.0.0/16.
Description string
Description for IP filter list entry. Example: Production service IP range.
Network This property is required. string
CIDR address block. Example: 10.20.0.0/16.
Description string
Description for IP filter list entry. Example: Production service IP range.
network This property is required. String
CIDR address block. Example: 10.20.0.0/16.
description String
Description for IP filter list entry. Example: Production service IP range.
network This property is required. string
CIDR address block. Example: 10.20.0.0/16.
description string
Description for IP filter list entry. Example: Production service IP range.
network This property is required. str
CIDR address block. Example: 10.20.0.0/16.
description str
Description for IP filter list entry. Example: Production service IP range.
network This property is required. String
CIDR address block. Example: 10.20.0.0/16.
description String
Description for IP filter list entry. Example: Production service IP range.
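An IP filter entry's network is a CIDR block such as 10.20.0.0/16, and an incoming connection is allowed when its source address falls inside any listed block. A small illustration of that membership test using the standard library:

```python
# Illustration only: how a CIDR network entry admits or rejects a client
# address, using Python's standard ipaddress module.
import ipaddress

def ip_allowed(client_ip: str, networks: list[str]) -> bool:
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(net) for net in networks)
```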

GetKafkaConnectKafkaConnectUserConfigKafkaConnect

ConnectorClientConfigOverridePolicy string
Enum: All, None. Defines what client configurations can be overridden by the connector. Default is None.
ConsumerAutoOffsetReset string
Enum: earliest, latest. What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.
ConsumerFetchMaxBytes int
Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. Example: 52428800.
ConsumerIsolationLevel string
Enum: read_committed, read_uncommitted. Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.
ConsumerMaxPartitionFetchBytes int
Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. Example: 1048576.
ConsumerMaxPollIntervalMs int
The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).
ConsumerMaxPollRecords int
The maximum number of records returned in a single call to poll() (defaults to 500).
OffsetFlushIntervalMs int
The interval at which to try committing offsets for tasks (defaults to 60000).
OffsetFlushTimeoutMs int
Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).
ProducerBatchSize int
This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will linger for the linger.ms time waiting for more records to show up. A batch size of zero will disable batching entirely (defaults to 16384).
ProducerBufferMemory int
The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (defaults to 33554432).
ProducerCompressionType string
Enum: gzip, lz4, none, snappy, zstd. Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip, snappy, lz4, zstd). It additionally accepts none which is the default and equivalent to no compression.
ProducerLingerMs int
This setting gives the upper bound on the delay for batching: once there is batch.size worth of records for a partition, it will be sent immediately regardless of this setting; however, if there are fewer than this many bytes accumulated for this partition, the producer will linger for the specified time waiting for more records to show up. Defaults to 0.
ProducerMaxRequestSize int
This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. Example: 1048576.
ScheduledRebalanceMaxDelayMs int
The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.
SessionTimeoutMs int
The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).
ConnectorClientConfigOverridePolicy string
Enum: All, None. Defines what client configurations can be overridden by the connector. Default is None.
ConsumerAutoOffsetReset string
Enum: earliest, latest. What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.
ConsumerFetchMaxBytes int
Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. Example: 52428800.
ConsumerIsolationLevel string
Enum: read_committed, read_uncommitted. Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.
ConsumerMaxPartitionFetchBytes int
Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. Example: 1048576.
ConsumerMaxPollIntervalMs int
The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).
ConsumerMaxPollRecords int
The maximum number of records returned in a single call to poll() (defaults to 500).
OffsetFlushIntervalMs int
The interval at which to try committing offsets for tasks (defaults to 60000).
OffsetFlushTimeoutMs int
Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).
ProducerBatchSize int
This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will linger for the linger.ms time waiting for more records to show up. A batch size of zero will disable batching entirely (defaults to 16384).
ProducerBufferMemory int
The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (defaults to 33554432).
ProducerCompressionType string
Enum: gzip, lz4, none, snappy, zstd. Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip, snappy, lz4, zstd). It additionally accepts none which is the default and equivalent to no compression.
ProducerLingerMs int
This setting gives the upper bound on the delay for batching: once there is batch.size worth of records for a partition, it will be sent immediately regardless of this setting; however, if there are fewer than this many bytes accumulated for this partition, the producer will linger for the specified time waiting for more records to show up. Defaults to 0.
ProducerMaxRequestSize int
This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. Example: 1048576.
ScheduledRebalanceMaxDelayMs int
The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.
SessionTimeoutMs int
The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).
connectorClientConfigOverridePolicy String
Enum: All, None. Defines what client configurations can be overridden by the connector. Default is None.
consumerAutoOffsetReset String
Enum: earliest, latest. What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.
consumerFetchMaxBytes Integer
Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. Example: 52428800.
consumerIsolationLevel String
Enum: read_committed, read_uncommitted. Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.
consumerMaxPartitionFetchBytes Integer
Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. Example: 1048576.
consumerMaxPollIntervalMs Integer
The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).
consumerMaxPollRecords Integer
The maximum number of records returned in a single call to poll() (defaults to 500).
offsetFlushIntervalMs Integer
The interval at which to try committing offsets for tasks (defaults to 60000).
offsetFlushTimeoutMs Integer
Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).
producerBatchSize Integer
This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will linger for the linger.ms time waiting for more records to show up. A batch size of zero will disable batching entirely (defaults to 16384).
producerBufferMemory Integer
The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (defaults to 33554432).
producerCompressionType String
Enum: gzip, lz4, none, snappy, zstd. Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip, snappy, lz4, zstd). It additionally accepts none which is the default and equivalent to no compression.
producerLingerMs Integer
This setting gives the upper bound on the delay for batching: once there is batch.size worth of records for a partition, it will be sent immediately regardless of this setting; however, if there are fewer than this many bytes accumulated for this partition, the producer will linger for the specified time waiting for more records to show up. Defaults to 0.
producerMaxRequestSize Integer
This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. Example: 1048576.
scheduledRebalanceMaxDelayMs Integer
The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.
sessionTimeoutMs Integer
The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).
connectorClientConfigOverridePolicy string
Enum: All, None. Defines what client configurations can be overridden by the connector. Default is None.
consumerAutoOffsetReset string
Enum: earliest, latest. What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.
consumerFetchMaxBytes number
Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. Example: 52428800.
consumerIsolationLevel string
Enum: read_committed, read_uncommitted. Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.
consumerMaxPartitionFetchBytes number
Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. Example: 1048576.
consumerMaxPollIntervalMs number
The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).
consumerMaxPollRecords number
The maximum number of records returned in a single call to poll() (defaults to 500).
offsetFlushIntervalMs number
The interval at which to try committing offsets for tasks (defaults to 60000).
offsetFlushTimeoutMs number
Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).
producerBatchSize number
This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will linger for the linger.ms time waiting for more records to show up. A batch size of zero will disable batching entirely (defaults to 16384).
producerBufferMemory number
The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (defaults to 33554432).
producerCompressionType string
Enum: gzip, lz4, none, snappy, zstd. Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip, snappy, lz4, zstd). It additionally accepts none which is the default and equivalent to no compression.
producerLingerMs number
This setting gives the upper bound on the delay for batching: once there is batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if there are fewer than this many bytes accumulated for this partition the producer will linger for the specified time waiting for more records to show up. Defaults to 0.
producerMaxRequestSize number
This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. Example: 1048576.
scheduledRebalanceMaxDelayMs number
The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.
sessionTimeoutMs number
The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).
connector_client_config_override_policy str
Enum: All, None. Defines what client configurations can be overridden by the connector. Default is None.
consumer_auto_offset_reset str
Enum: earliest, latest. What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.
consumer_fetch_max_bytes int
Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. Example: 52428800.
consumer_isolation_level str
Enum: read_committed, read_uncommitted. Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.
consumer_max_partition_fetch_bytes int
Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. Example: 1048576.
consumer_max_poll_interval_ms int
The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).
consumer_max_poll_records int
The maximum number of records returned in a single call to poll() (defaults to 500).
offset_flush_interval_ms int
The interval at which to try committing offsets for tasks (defaults to 60000).
offset_flush_timeout_ms int
Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).
producer_batch_size int
This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will linger for the linger.ms time waiting for more records to show up. A batch size of zero will disable batching entirely (defaults to 16384).
producer_buffer_memory int
The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (defaults to 33554432).
producer_compression_type str
Enum: gzip, lz4, none, snappy, zstd. Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip, snappy, lz4, zstd). It additionally accepts none which is the default and equivalent to no compression.
producer_linger_ms int
This setting gives the upper bound on the delay for batching: once there is batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if there are fewer than this many bytes accumulated for this partition the producer will linger for the specified time waiting for more records to show up. Defaults to 0.
producer_max_request_size int
This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. Example: 1048576.
scheduled_rebalance_max_delay_ms int
The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.
session_timeout_ms int
The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).
connectorClientConfigOverridePolicy String
Enum: All, None. Defines what client configurations can be overridden by the connector. Default is None.
consumerAutoOffsetReset String
Enum: earliest, latest. What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server. Default is earliest.
consumerFetchMaxBytes Number
Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. Example: 52428800.
consumerIsolationLevel String
Enum: read_committed, read_uncommitted. Transaction read isolation level. read_uncommitted is the default, but read_committed can be used if consume-exactly-once behavior is desired.
consumerMaxPartitionFetchBytes Number
Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. Example: 1048576.
consumerMaxPollIntervalMs Number
The maximum delay in milliseconds between invocations of poll() when using consumer group management (defaults to 300000).
consumerMaxPollRecords Number
The maximum number of records returned in a single call to poll() (defaults to 500).
offsetFlushIntervalMs Number
The interval at which to try committing offsets for tasks (defaults to 60000).
offsetFlushTimeoutMs Number
Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt (defaults to 5000).
producerBatchSize Number
This setting gives the upper bound of the batch size to be sent. If there are fewer than this many bytes accumulated for this partition, the producer will linger for the linger.ms time waiting for more records to show up. A batch size of zero will disable batching entirely (defaults to 16384).
producerBufferMemory Number
The total bytes of memory the producer can use to buffer records waiting to be sent to the broker (defaults to 33554432).
producerCompressionType String
Enum: gzip, lz4, none, snappy, zstd. Specify the default compression type for producers. This configuration accepts the standard compression codecs (gzip, snappy, lz4, zstd). It additionally accepts none which is the default and equivalent to no compression.
producerLingerMs Number
This setting gives the upper bound on the delay for batching: once there is batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if there are fewer than this many bytes accumulated for this partition the producer will linger for the specified time waiting for more records to show up. Defaults to 0.
producerMaxRequestSize Number
This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. Example: 1048576.
scheduledRebalanceMaxDelayMs Number
The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned. Defaults to 5 minutes.
sessionTimeoutMs Number
The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).
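All six SDK variants above describe the same set of worker-level Kafka client settings. As a rough illustration of how the documented defaults and enums fit together, the dictionary and helper below are hypothetical: they mirror a subset of the snake_case keys and the defaults stated in the descriptions, and are not a call into the Aiven provider.

```python
# Hypothetical mirror of a subset of the documented kafka_connect worker
# defaults; key names follow the Python SDK's snake_case attributes above.
WORKER_DEFAULTS = {
    "connector_client_config_override_policy": "None",
    "consumer_auto_offset_reset": "earliest",
    "consumer_isolation_level": "read_uncommitted",
    "consumer_max_poll_interval_ms": 300_000,
    "consumer_max_poll_records": 500,
    "offset_flush_interval_ms": 60_000,
    "offset_flush_timeout_ms": 5_000,
    "producer_batch_size": 16_384,
    "producer_buffer_memory": 33_554_432,
    "producer_compression_type": "none",
    "producer_linger_ms": 0,
    "scheduled_rebalance_max_delay_ms": 300_000,  # documented default: 5 minutes
    "session_timeout_ms": 10_000,
}

# The enum documented for producer_compression_type.
COMPRESSION_TYPES = {"gzip", "lz4", "none", "snappy", "zstd"}


def merged_worker_config(overrides: dict) -> dict:
    """Overlay user overrides on the documented defaults, checking one enum."""
    compression = overrides.get("producer_compression_type")
    if compression is not None and compression not in COMPRESSION_TYPES:
        raise ValueError(f"unsupported producer_compression_type: {compression}")
    return {**WORKER_DEFAULTS, **overrides}
```

In a real program these values would arrive nested under the data source's `kafka_connect_user_configs` output rather than a flat dict.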

GetKafkaConnectKafkaConnectUserConfigPluginVersion

PluginName This property is required. string
The name of the plugin. Example: debezium-connector.
Version This property is required. string
The version of the plugin. Example: 2.5.0.
PluginName This property is required. string
The name of the plugin. Example: debezium-connector.
Version This property is required. string
The version of the plugin. Example: 2.5.0.
pluginName This property is required. String
The name of the plugin. Example: debezium-connector.
version This property is required. String
The version of the plugin. Example: 2.5.0.
pluginName This property is required. string
The name of the plugin. Example: debezium-connector.
version This property is required. string
The version of the plugin. Example: 2.5.0.
plugin_name This property is required. str
The name of the plugin. Example: debezium-connector.
version This property is required. str
The version of the plugin. Example: 2.5.0.
pluginName This property is required. String
The name of the plugin. Example: debezium-connector.
version This property is required. String
The version of the plugin. Example: 2.5.0.

GetKafkaConnectKafkaConnectUserConfigPrivateAccess

KafkaConnect bool
Allow clients to connect to kafka_connect with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
Prometheus bool
Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
KafkaConnect bool
Allow clients to connect to kafka_connect with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
Prometheus bool
Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
kafkaConnect Boolean
Allow clients to connect to kafka_connect with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
prometheus Boolean
Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
kafkaConnect boolean
Allow clients to connect to kafka_connect with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
prometheus boolean
Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
kafka_connect bool
Allow clients to connect to kafka_connect with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
prometheus bool
Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
kafkaConnect Boolean
Allow clients to connect to kafka_connect with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.
prometheus Boolean
Allow clients to connect to prometheus with a DNS name that always resolves to the service's private IP addresses. Only available in certain network locations.

GetKafkaConnectKafkaConnectUserConfigPrivatelinkAccess

Jolokia bool
Enable jolokia.
KafkaConnect bool
Enable kafka_connect.
Prometheus bool
Enable prometheus.
Jolokia bool
Enable jolokia.
KafkaConnect bool
Enable kafka_connect.
Prometheus bool
Enable prometheus.
jolokia Boolean
Enable jolokia.
kafkaConnect Boolean
Enable kafka_connect.
prometheus Boolean
Enable prometheus.
jolokia boolean
Enable jolokia.
kafkaConnect boolean
Enable kafka_connect.
prometheus boolean
Enable prometheus.
jolokia bool
Enable jolokia.
kafka_connect bool
Enable kafka_connect.
prometheus bool
Enable prometheus.
jolokia Boolean
Enable jolokia.
kafkaConnect Boolean
Enable kafka_connect.
prometheus Boolean
Enable prometheus.

GetKafkaConnectKafkaConnectUserConfigPublicAccess

KafkaConnect bool
Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network.
Prometheus bool
Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
KafkaConnect bool
Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network.
Prometheus bool
Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
kafkaConnect Boolean
Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network.
prometheus Boolean
Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
kafkaConnect boolean
Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network.
prometheus boolean
Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
kafka_connect bool
Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network.
prometheus bool
Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
kafkaConnect Boolean
Allow clients to connect to kafka_connect from the public internet for service nodes that are in a project VPC or another type of private network.
prometheus Boolean
Allow clients to connect to prometheus from the public internet for service nodes that are in a project VPC or another type of private network.
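The PrivateAccess, PrivatelinkAccess, and PublicAccess blocks above select which network scope each component is reachable from: private DNS inside the VPC, PrivateLink endpoints, or the public internet. A minimal sketch, using a plain dictionary shaped like the snake_case schema above (a hypothetical helper, not a provider call):

```python
# Hypothetical sketch of the three access scopes documented above, expressed
# as the nested maps a user config carries; names mirror the schema.
def access_config(private: bool = False, privatelink: bool = False,
                  public: bool = False) -> dict:
    """Expose kafka_connect and prometheus on the selected network scopes."""
    return {
        "private_access": {"kafka_connect": private, "prometheus": private},
        # Note the PrivateLink scope additionally covers jolokia.
        "privatelink_access": {
            "jolokia": False,
            "kafka_connect": privatelink,
            "prometheus": privatelink,
        },
        "public_access": {"kafka_connect": public, "prometheus": public},
    }
```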

GetKafkaConnectKafkaConnectUserConfigSecretProvider

Name This property is required. string
Name of the secret provider. Used to reference secrets in connector config.
Aws GetKafkaConnectKafkaConnectUserConfigSecretProviderAws
AWS secret provider configuration
Vault GetKafkaConnectKafkaConnectUserConfigSecretProviderVault
Vault secret provider configuration
Name This property is required. string
Name of the secret provider. Used to reference secrets in connector config.
Aws GetKafkaConnectKafkaConnectUserConfigSecretProviderAws
AWS secret provider configuration
Vault GetKafkaConnectKafkaConnectUserConfigSecretProviderVault
Vault secret provider configuration
name This property is required. String
Name of the secret provider. Used to reference secrets in connector config.
aws GetKafkaConnectKafkaConnectUserConfigSecretProviderAws
AWS secret provider configuration
vault GetKafkaConnectKafkaConnectUserConfigSecretProviderVault
Vault secret provider configuration
name This property is required. string
Name of the secret provider. Used to reference secrets in connector config.
aws GetKafkaConnectKafkaConnectUserConfigSecretProviderAws
AWS secret provider configuration
vault GetKafkaConnectKafkaConnectUserConfigSecretProviderVault
Vault secret provider configuration
name This property is required. str
Name of the secret provider. Used to reference secrets in connector config.
aws GetKafkaConnectKafkaConnectUserConfigSecretProviderAws
AWS secret provider configuration
vault GetKafkaConnectKafkaConnectUserConfigSecretProviderVault
Vault secret provider configuration
name This property is required. String
Name of the secret provider. Used to reference secrets in connector config.
aws Property Map
AWS secret provider configuration
vault Property Map
Vault secret provider configuration

GetKafkaConnectKafkaConnectUserConfigSecretProviderAws

AuthMethod This property is required. string
Enum: credentials. Auth method of the AWS secret provider.
Region This property is required. string
Region used to look up secrets with AWS Secrets Manager.
AccessKey string
Access key used to authenticate with AWS.
SecretKey string
Secret key used to authenticate with AWS.
AuthMethod This property is required. string
Enum: credentials. Auth method of the AWS secret provider.
Region This property is required. string
Region used to look up secrets with AWS Secrets Manager.
AccessKey string
Access key used to authenticate with AWS.
SecretKey string
Secret key used to authenticate with AWS.
authMethod This property is required. String
Enum: credentials. Auth method of the AWS secret provider.
region This property is required. String
Region used to look up secrets with AWS Secrets Manager.
accessKey String
Access key used to authenticate with AWS.
secretKey String
Secret key used to authenticate with AWS.
authMethod This property is required. string
Enum: credentials. Auth method of the AWS secret provider.
region This property is required. string
Region used to look up secrets with AWS Secrets Manager.
accessKey string
Access key used to authenticate with AWS.
secretKey string
Secret key used to authenticate with AWS.
auth_method This property is required. str
Enum: credentials. Auth method of the AWS secret provider.
region This property is required. str
Region used to look up secrets with AWS Secrets Manager.
access_key str
Access key used to authenticate with AWS.
secret_key str
Secret key used to authenticate with AWS.
authMethod This property is required. String
Enum: credentials. Auth method of the AWS secret provider.
region This property is required. String
Region used to look up secrets with AWS Secrets Manager.
accessKey String
Access key used to authenticate with AWS.
secretKey String
Secret key used to authenticate with AWS.

GetKafkaConnectKafkaConnectUserConfigSecretProviderVault

Address This property is required. string
Address of the Vault server.
AuthMethod This property is required. string
Enum: token. Auth method of the vault secret provider.
EngineVersion int
Enum: 1, 2, and newer. KV Secrets Engine version of the Vault server instance.
PrefixPathDepth int
Prefix path depth of the secrets engine. Default is 1. If the secrets engine path has more than one segment, it must be increased to the number of segments.
Token string
Token used to authenticate with Vault when using the token auth method.
Address This property is required. string
Address of the Vault server.
AuthMethod This property is required. string
Enum: token. Auth method of the vault secret provider.
EngineVersion int
Enum: 1, 2, and newer. KV Secrets Engine version of the Vault server instance.
PrefixPathDepth int
Prefix path depth of the secrets engine. Default is 1. If the secrets engine path has more than one segment, it must be increased to the number of segments.
Token string
Token used to authenticate with Vault when using the token auth method.
address This property is required. String
Address of the Vault server.
authMethod This property is required. String
Enum: token. Auth method of the vault secret provider.
engineVersion Integer
Enum: 1, 2, and newer. KV Secrets Engine version of the Vault server instance.
prefixPathDepth Integer
Prefix path depth of the secrets engine. Default is 1. If the secrets engine path has more than one segment, it must be increased to the number of segments.
token String
Token used to authenticate with Vault when using the token auth method.
address This property is required. string
Address of the Vault server.
authMethod This property is required. string
Enum: token. Auth method of the vault secret provider.
engineVersion number
Enum: 1, 2, and newer. KV Secrets Engine version of the Vault server instance.
prefixPathDepth number
Prefix path depth of the secrets engine. Default is 1. If the secrets engine path has more than one segment, it must be increased to the number of segments.
token string
Token used to authenticate with Vault when using the token auth method.
address This property is required. str
Address of the Vault server.
auth_method This property is required. str
Enum: token. Auth method of the vault secret provider.
engine_version int
Enum: 1, 2, and newer. KV Secrets Engine version of the Vault server instance.
prefix_path_depth int
Prefix path depth of the secrets engine. Default is 1. If the secrets engine path has more than one segment, it must be increased to the number of segments.
token str
Token used to authenticate with Vault when using the token auth method.
address This property is required. String
Address of the Vault server.
authMethod This property is required. String
Enum: token. Auth method of the vault secret provider.
engineVersion Number
Enum: 1, 2, and newer. KV Secrets Engine version of the Vault server instance.
prefixPathDepth Number
Prefix path depth of the secrets engine. Default is 1. If the secrets engine path has more than one segment, it must be increased to the number of segments.
token String
Token used to authenticate with Vault when using the token auth method.
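Taken together, the secret-provider blocks require `name` plus, per backend, the fields marked required above (`auth_method` and `region` for AWS; `address` and `auth_method` for Vault). The hypothetical validator below encodes just those "This property is required" and enum annotations; it is a sketch of the schema, not provider behavior.

```python
# Hypothetical validator for the secret_provider blocks documented above.
# Field names mirror the snake_case schema.
def validate_secret_provider(sp: dict) -> None:
    if "name" not in sp:
        raise ValueError("secret provider requires a name")
    if "aws" in sp:
        aws = sp["aws"]
        if aws.get("auth_method") != "credentials":  # only documented enum value
            raise ValueError("aws auth_method must be 'credentials'")
        if "region" not in aws:
            raise ValueError("aws secret provider requires a region")
    if "vault" in sp:
        vault = sp["vault"]
        if "address" not in vault:
            raise ValueError("vault secret provider requires an address")
        if vault.get("auth_method") != "token":  # only documented enum value
            raise ValueError("vault auth_method must be 'token'")
```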

GetKafkaConnectServiceIntegration

IntegrationType This property is required. string
Type of the service integration
SourceServiceName This property is required. string
Name of the source service
IntegrationType This property is required. string
Type of the service integration
SourceServiceName This property is required. string
Name of the source service
integrationType This property is required. String
Type of the service integration
sourceServiceName This property is required. String
Name of the source service
integrationType This property is required. string
Type of the service integration
sourceServiceName This property is required. string
Name of the source service
integration_type This property is required. str
Type of the service integration
source_service_name This property is required. str
Name of the source service
integrationType This property is required. String
Type of the service integration
sourceServiceName This property is required. String
Name of the source service

GetKafkaConnectTag

Key This property is required. string
Service tag key
Value This property is required. string
Service tag value
Key This property is required. string
Service tag key
Value This property is required. string
Service tag value
key This property is required. String
Service tag key
value This property is required. String
Service tag value
key This property is required. string
Service tag key
value This property is required. string
Service tag value
key This property is required. str
Service tag key
value This property is required. str
Service tag value
key This property is required. String
Service tag key
value This property is required. String
Service tag value

GetKafkaConnectTechEmail

Email This property is required. string
An email address to contact for technical issues
Email This property is required. string
An email address to contact for technical issues
email This property is required. String
An email address to contact for technical issues
email This property is required. string
An email address to contact for technical issues
email This property is required. str
An email address to contact for technical issues
email This property is required. String
An email address to contact for technical issues

Package Details

Repository
Aiven pulumi/pulumi-aiven
License
Apache-2.0
Notes
This Pulumi package is based on the aiven Terraform Provider.