ockam_kafka

# Common config fields, showing default values
output:
  label: ""
  ockam_kafka:
    kafka:
      seed_brokers: [] # No default (optional)
      topic: "" # No default (required)
      key: "" # No default (optional)
      partition: ${! meta("partition") } # No default (optional)
      metadata:
        include_prefixes: []
        include_patterns: []
      max_in_flight: 10
      batching:
        count: 0
        byte_size: 0
        period: ""
        check: ""
    seed_brokers: [] # No default (optional)
    disable_content_encryption: false
    enrollment_ticket: "" # No default (optional)
    identity_name: "" # No default (optional)
    allow: self
    route_to_kafka_outlet: self
    allow_consumer: self
    route_to_consumer: /ip4/127.0.0.1/tcp/6262
    encrypted_fields: []
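
For orientation, here is a minimal pipeline sketch using only fields documented below; the stdin input, topic name, and encrypted field name are illustrative assumptions, not defaults.

# A hypothetical minimal pipeline: read lines from stdin and produce them
# through ockam_kafka, encrypting a single JSON field.
input:
  stdin: {}

output:
  ockam_kafka:
    kafka:
      seed_brokers:
        - localhost:9092
      topic: example-topic
    encrypted_fields:
      - secret_value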

Fields

kafka

The configuration of the underlying Kafka client used by this output; its fields are documented below.

Type: object

kafka.seed_brokers

A list of broker addresses to connect to. If an item of the list contains commas, it will be expanded into multiple addresses.

Type: array

# Examples
seed_brokers:
  - localhost:9092

seed_brokers:
  - foo:9092
  - bar:9092

seed_brokers:
  - foo:9092,bar:9092

kafka.topic

A topic to write messages to. This field supports interpolation functions.

Type: string

kafka.key

An optional key to populate for each message. This field supports interpolation functions.

Type: string
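
Since both topic and this field support interpolation functions, topic routing and message keying can be derived per message. A small sketch, where the tenant metadata key and user.id JSON path are assumptions for illustration:

# Hypothetical metadata key and JSON path.
topic: events-${! meta("tenant") }
key: ${! json("user.id") }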

kafka.partitioner

Override the default murmur2 hashing partitioner.

Type: string

Option | Summary
least_backup | Chooses the least backed up partition (the partition with the fewest buffered records). Partitions are selected per batch.
manual | Manually select a partition for each message; requires the field partition to be specified.
murmur2_hash | Kafka’s default hash algorithm that uses a 32-bit murmur2 hash of the key to compute which partition the record will be on.
round_robin | Round-robins messages through all available partitions. This algorithm has lower throughput and causes higher CPU load on brokers, but can be useful if you want to ensure an even distribution of records to partitions.

kafka.partition

An optional explicit partition to set for each message. This field is only relevant when the partitioner is set to manual. The provided interpolation string must be a valid integer. This field supports interpolation functions.

Type: string

# Examples
partition: ${! meta("partition") }
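
When partitioner is set to manual, these two fields are used together; a minimal sketch, assuming an upstream stage sets a partition metadata key that resolves to a valid integer:

# The metadata value must resolve to a valid integer partition.
partitioner: manual
partition: ${! meta("partition") }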

kafka.client_id

An identifier for the client connection.

Type: string

Default: "benthos"

kafka.rack_id

A rack identifier for this client.

Type: string

Default: ""

kafka.idempotent_write

Enable the idempotent write producer option. This requires the IDEMPOTENT_WRITE permission on CLUSTER and can be disabled if this permission is not available.

Type: bool

Default: true

kafka.metadata

Determine which (if any) metadata values should be added to messages as headers.

Type: object

kafka.metadata.include_prefixes

Provide a list of explicit metadata key prefixes to match against.

Type: array

Default: []

# Examples
include_prefixes:
  - foo_
  - bar_

include_prefixes:
  - kafka_

include_prefixes:
  - content-

kafka.metadata.include_patterns

Provide a list of explicit metadata key regular expression (re2) patterns to match against.

Type: array

Default: []

# Examples
include_patterns:
  - .*

include_patterns:
  - _timestamp_unix$

kafka.max_in_flight

The maximum number of batches to send in parallel at any given time.

Type: int

Default: 10

kafka.timeout

The maximum period of time to wait for message sends before abandoning the request and retrying.

Type: string

Default: "10s"

kafka.batching

Allows you to configure a batching policy.

Type: object

# Examples
batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m

kafka.batching.count

A number of messages at which the batch should be flushed. If 0, count-based batching is disabled.

Type: int

Default: 0

kafka.batching.byte_size

An amount of bytes at which the batch should be flushed. If 0, size-based batching is disabled.

Type: int

Default: 0

kafka.batching.period

A period in which an incomplete batch should be flushed regardless of its size.

Type: string

Default: ""

# Examples
period: 1s
period: 1m
period: 500ms

kafka.batching.check

A Bloblang query that should return a boolean value indicating whether a message should end a batch.

Type: string

Default: ""

# Examples
check: this.type == "end_of_transaction"

kafka.batching.processors

A list of processors to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch; therefore, splitting the batch into smaller batches using these processors is a no-op.

Type: array

# Examples
processors:
  - archive:
      format: concatenate

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array

kafka.max_message_bytes

The maximum space in bytes that an individual message may take; messages larger than this value will be rejected. This field corresponds to Kafka’s max.message.bytes.

Type: string

Default: "1MB"

# Examples
max_message_bytes: 100MB
max_message_bytes: 50mib

kafka.broker_write_max_bytes

The upper bound for the number of bytes written to a broker connection in a single write. This field corresponds to Kafka’s socket.request.max.bytes.

Type: string

Default: "100MB"

# Examples
broker_write_max_bytes: 128MB
broker_write_max_bytes: 50mib

kafka.compression

Optionally set an explicit compression type. The default preference is to use snappy when the broker supports it, and fall back to none if not.

Type: string

Options: lz4 , snappy , gzip , none , zstd .

kafka.tls

Custom TLS settings can be used to override system defaults.

Type: object
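
A minimal sketch that enables custom TLS against a private root CA; the file path is an assumption for illustration:

# Assumed CA bundle path.
tls:
  enabled: true
  root_cas_file: ./ca.pem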

kafka.tls.enabled

Whether custom TLS settings are enabled.

Type: bool

Default: false

kafka.tls.skip_cert_verify

Whether to skip server side certificate verification.

Type: bool

Default: false

kafka.tls.enable_renegotiation

Whether to allow the remote server to repeatedly request renegotiation. Enable this option if you’re seeing the error message local error: tls: no renegotiation.

Type: bool

Default: false

Requires version 3.45.0 or newer

kafka.tls.root_cas

An optional root certificate authority to use. This is a string, representing a certificate chain from the parent trusted root certificate, to possible intermediate signing certificates, to the host certificate.

Type: string

Default: ""

# Examples
root_cas: |-
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----

kafka.tls.root_cas_file

An optional path of a root certificate authority file to use. This is a file, often with a .pem extension, containing a certificate chain from the parent trusted root certificate, to possible intermediate signing certificates, to the host certificate.

Type: string

Default: ""

# Examples
root_cas_file: ./root_cas.pem

kafka.tls.client_certs

A list of client certificates to use. For each certificate either the fields cert and key, or cert_file and key_file should be specified, but not both.

Type: array

Default: []

# Examples
client_certs:
  - cert: foo
    key: bar

client_certs:
  - cert_file: ./example.pem
    key_file: ./example.key

kafka.tls.client_certs[].cert

A plain text certificate to use.

Type: string

Default: ""

kafka.tls.client_certs[].key

A plain text certificate key to use.

Type: string

Default: ""

kafka.tls.client_certs[].cert_file

The path of a certificate to use.

Type: string

Default: ""

kafka.tls.client_certs[].key_file

The path of a certificate key to use.

Type: string

Default: ""

kafka.tls.client_certs[].password

A plain text password for when the private key is password encrypted in PKCS#1 or PKCS#8 format. The obsolete pbeWithMD5AndDES-CBC algorithm is not supported for the PKCS#8 format.

Because the obsolete pbeWithMD5AndDES-CBC algorithm does not authenticate the ciphertext, it is vulnerable to padding oracle attacks that can let an attacker recover the plaintext.

Type: string

Default: ""

# Examples
password: foo
password: ${KEY_PASSWORD}

kafka.sasl

Specify one or more methods of SASL authentication. SASL is tried in order; if the broker supports the first mechanism, all connections will use that mechanism. If the first mechanism fails, the client will pick the first supported mechanism. If the broker does not support any client mechanisms, connections will fail.

Type: array

# Examples
sasl:
  - mechanism: SCRAM-SHA-512
    password: bar
    username: foo
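
Because mechanisms are tried in order, a preferred mechanism can be listed ahead of a fallback; a sketch with placeholder credentials:

# Placeholder credentials; SCRAM-SHA-512 is preferred, PLAIN is the fallback.
sasl:
  - mechanism: SCRAM-SHA-512
    username: foo
    password: bar
  - mechanism: PLAIN
    username: foo
    password: bar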

kafka.sasl[].mechanism

The SASL mechanism to use.

Type: string

Option | Summary
AWS_MSK_IAM | AWS IAM based authentication as specified by the ‘aws-msk-iam-auth’ java library.
OAUTHBEARER | OAuth Bearer based authentication.
PLAIN | Plain text authentication.
SCRAM-SHA-256 | SCRAM based authentication as specified in RFC5802.
SCRAM-SHA-512 | SCRAM based authentication as specified in RFC5802.
none | Disable SASL authentication.

kafka.sasl[].username

A username to provide for PLAIN or SCRAM-* authentication.

Type: string

Default: ""

kafka.sasl[].password

A password to provide for PLAIN or SCRAM-* authentication.

Type: string

Default: ""

kafka.sasl[].token

The token to use for a single session’s OAUTHBEARER authentication.

Type: string

Default: ""

kafka.sasl[].extensions

Key/value pairs to add to OAUTHBEARER authentication requests.

Type: object

kafka.sasl[].aws

Contains AWS specific fields for when the mechanism is set to AWS_MSK_IAM.

Type: object

kafka.sasl[].aws.region

The AWS region to target.

Type: string

Default: ""

kafka.sasl[].aws.endpoint

Allows you to specify a custom endpoint for the AWS API.

Type: string

Default: ""

kafka.sasl[].aws.credentials

Optional manual configuration of AWS credentials to use. More information can be found in the AWS guide.

Type: object
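
A sketch of an AWS_MSK_IAM entry using a named profile; the region and profile values are assumptions:

# Assumed region and profile names.
sasl:
  - mechanism: AWS_MSK_IAM
    aws:
      region: us-east-1
      credentials:
        profile: example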

kafka.sasl[].aws.credentials.profile

A profile from ~/.aws/credentials to use.

Type: string

Default: ""

kafka.sasl[].aws.credentials.id

The ID of credentials to use.

Type: string

Default: ""

kafka.sasl[].aws.credentials.secret

The secret for the credentials being used.

Type: string

Default: ""

kafka.sasl[].aws.credentials.token

The token for the credentials being used, required when using short term credentials.

Type: string

Default: ""

kafka.sasl[].aws.credentials.from_ec2_role

Use the credentials of a host EC2 machine configured to assume an IAM role associated with the instance.

Type: bool

Default: false

Requires version 4.2.0 or newer

kafka.sasl[].aws.credentials.role

A role ARN to assume.

Type: string

Default: ""

kafka.sasl[].aws.credentials.role_external_id

An external ID to provide when assuming a role.

Type: string

Default: ""

kafka.timestamp

An optional timestamp to set for each message. When left empty, the current timestamp is used. This field supports interpolation functions.

Type: string

# Examples
timestamp: ${! timestamp_unix() }
timestamp: ${! metadata("kafka_timestamp_unix") }

seed_brokers

A list of broker addresses to connect to. If an item of the list contains commas, it will be expanded into multiple addresses.

Type: array

# Examples
seed_brokers:
  - localhost:9092

seed_brokers:
  - foo:9092
  - bar:9092

seed_brokers:
  - foo:9092,bar:9092

disable_content_encryption

When set to true, the content of messages is not encrypted before being published.

Type: bool

Default: false

enrollment_ticket

An Ockam enrollment ticket used to enroll this node with an Ockam project.

Type: string

identity_name

The name of the Ockam identity to use.

Type: string

allow

Sorry! This field is missing documentation.

Type: string

Default: "self"

route_to_kafka_outlet

Sorry! This field is missing documentation.

Type: string

Default: "self"

allow_consumer

Sorry! This field is missing documentation.

Type: string

Default: "self"

route_to_consumer

Sorry! This field is missing documentation.

Type: string

Default: "/ip4/127.0.0.1/tcp/6262"

encrypted_fields

The fields to encrypt in the Kafka messages, assuming the record is a valid JSON map. By default, the whole record is encrypted.

Type: array

Default: []
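
A sketch with hypothetical field names, encrypting only selected parts of a JSON record:

# Hypothetical field names; only these fields are encrypted and the rest of
# the record remains in plain text.
encrypted_fields:
  - credit_card_number
  - home_address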