ockam_kafka


# Common config fields, showing default values
input:
  label: ""
  ockam_kafka:
    kafka:
      seed_brokers: [] # No default (optional)
      topics: [] # No default (required)
      regexp_topics: false
      consumer_group: "" # No default (optional)
      auto_replay_nacks: true
    disable_content_encryption: false
    enrollment_ticket: "" # No default (optional)
    identity_name: "" # No default (optional)
    allow: self
    route_to_kafka_outlet: self
    allow_producer: self
    relay: "" # No default (optional)
    node_address: 127.0.0.1:6262
    encrypted_fields: []
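
As a minimal sketch of how these fields fit together, the following consumes a single topic through Ockam; the broker address, topic name, and ticket variable are illustrative placeholders rather than defaults:

input:
  ockam_kafka:
    kafka:
      seed_brokers:
        - localhost:9092
      topics:
        - example_topic
    enrollment_ticket: ${ENROLLMENT_TICKET}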

Fields

kafka

The configuration of the underlying Kafka consumer, detailed by the kafka.* fields documented below.

Type: object

kafka.seed_brokers

A list of broker addresses to connect to in order to establish connections. If an item of the list contains commas it will be expanded into multiple addresses.

Type: array

# Examples
seed_brokers:
  - localhost:9092

seed_brokers:
  - foo:9092
  - bar:9092

seed_brokers:
  - foo:9092,bar:9092

kafka.topics

A list of topics to consume from. Multiple comma-separated topics can be listed in a single element. When a consumer_group is specified, partitions are automatically distributed across consumers of a topic; otherwise all partitions are consumed.

Alternatively, it’s possible to specify explicit partitions to consume from with a colon after the topic name, e.g. foo:0 would consume partition 0 of the topic foo. This syntax supports ranges, e.g. foo:0-10 would consume partitions 0 through 10 inclusive.

Finally, it’s also possible to specify an explicit offset to consume from by adding another colon after the partition, e.g. foo:0:10 would consume partition 0 of the topic foo starting from offset 10. If the offset is not present (or remains unspecified) then the field start_from_oldest determines which offset to start from.

Type: array

# Examples
topics:
  - foo
  - bar

topics:
  - things.*

topics:
  - foo,bar

topics:
  - foo:0
  - bar:1
  - bar:3

topics:
  - foo:0,bar:1,bar:3

topics:
  - foo:0-5
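
The explicit offset form described above is not shown in the examples; as an illustration with the same hypothetical topic, this starts partition 0 of foo at offset 10:

topics:
  - foo:0:10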

kafka.regexp_topics

Whether listed topics should be interpreted as regular expression patterns for matching multiple topics. When topics are specified with explicit partitions this field must remain set to false.

Type: bool

Default: false
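
A sketch of how this flag pairs with pattern topics, reusing the illustrative pattern from the examples above:

topics:
  - things.*
regexp_topics: true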

kafka.consumer_group

An optional consumer group to consume as. When specified the partitions of specified topics are automatically distributed across consumers sharing a consumer group, and partition offsets are automatically committed and resumed under this name. Consumer groups are not supported when specifying explicit partitions to consume from in the topics field.

Type: string
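
A minimal sketch, with an arbitrary group name, of spreading a topic's partitions across instances that share the same group:

topics:
  - foo
consumer_group: example_group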

kafka.client_id

An identifier for the client connection.

Type: string

Default: "benthos"

kafka.rack_id

A rack identifier for this client.

Type: string

Default: ""

kafka.checkpoint_limit

Determines how many messages of the same partition can be processed in parallel before applying back pressure. When a message of a given offset is delivered to the output, the offset is only allowed to be committed once all messages of prior offsets have also been delivered; this ensures at-least-once delivery guarantees. However, this mechanism also increases the likelihood of duplicates in the event of crashes or server faults, and reducing the checkpoint limit will mitigate this.

Type: int

Default: 1024
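
If duplicates after a crash are a bigger concern than throughput, a lower limit shrinks the replay window; the value here is purely illustrative:

checkpoint_limit: 128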

kafka.auto_replay_nacks

Whether messages that are rejected (nacked) at the output level should be automatically replayed indefinitely, eventually resulting in back pressure if the cause of the rejections is persistent. If set to false these messages will instead be deleted. Disabling auto replays can greatly improve memory efficiency of high throughput streams as the original shape of the data can be discarded immediately upon consumption and mutation.

Type: bool

Default: true

kafka.commit_period

The period of time between each commit of the current partition offsets. Offsets are always committed during shutdown.

Type: string

Default: "5s"

kafka.start_from_oldest

Determines whether to consume from the oldest available offset; otherwise messages are consumed from the latest offset. The setting is applied when creating a new consumer group or when the saved offset no longer exists.

Type: bool

Default: true

kafka.tls

Custom TLS settings can be used to override system defaults.

Type: object
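
A sketch combining the sub-fields documented below; the file paths are placeholders:

tls:
  enabled: true
  root_cas_file: ./root_cas.pem
  client_certs:
    - cert_file: ./example.pem
      key_file: ./example.key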

kafka.tls.enabled

Whether custom TLS settings are enabled.

Type: bool

Default: false

kafka.tls.skip_cert_verify

Whether to skip server side certificate verification.

Type: bool

Default: false

kafka.tls.enable_renegotiation

Whether to allow the remote server to repeatedly request renegotiation. Enable this option if you’re seeing the error message "local error: tls: no renegotiation".

Type: bool

Default: false

Requires version 3.45.0 or newer

kafka.tls.root_cas

An optional root certificate authority to use. This is a string, representing a certificate chain from the parent trusted root certificate, to possible intermediate signing certificates, to the host certificate.

Type: string

Default: ""

# Examples
root_cas: |-
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----

kafka.tls.root_cas_file

An optional path of a root certificate authority file to use. This is a file, often with a .pem extension, containing a certificate chain from the parent trusted root certificate, to possible intermediate signing certificates, to the host certificate.

Type: string

Default: ""

# Examples
root_cas_file: ./root_cas.pem

kafka.tls.client_certs

A list of client certificates to use. For each certificate either the fields cert and key, or cert_file and key_file should be specified, but not both.

Type: array

Default: []

# Examples
client_certs:
  - cert: foo
    key: bar

client_certs:
  - cert_file: ./example.pem
    key_file: ./example.key

kafka.tls.client_certs[].cert

A plain text certificate to use.

Type: string

Default: ""

kafka.tls.client_certs[].key

A plain text certificate key to use.

Type: string

Default: ""

kafka.tls.client_certs[].cert_file

The path of a certificate to use.

Type: string

Default: ""

kafka.tls.client_certs[].key_file

The path of a certificate key to use.

Type: string

Default: ""

kafka.tls.client_certs[].password

A plain text password for when the private key is password encrypted in PKCS#1 or PKCS#8 format. The obsolete pbeWithMD5AndDES-CBC algorithm is not supported for the PKCS#8 format.

Because the obsolete pbeWithMD5AndDES-CBC algorithm does not authenticate the ciphertext, it is vulnerable to padding oracle attacks that can let an attacker recover the plaintext.

Type: string

Default: ""

# Examples
password: foo
password: ${KEY_PASSWORD}

kafka.sasl

Specify one or more methods of SASL authentication. SASL is tried in order; if the broker supports the first mechanism, all connections will use that mechanism. If the first mechanism fails, the client will pick the first supported mechanism. If the broker does not support any client mechanisms, connections will fail.

Type: array

# Examples
sasl:
  - mechanism: SCRAM-SHA-512
    password: bar
    username: foo

kafka.sasl[].mechanism

The SASL mechanism to use.

Type: string

Option         Summary
AWS_MSK_IAM    AWS IAM based authentication as specified by the 'aws-msk-iam-auth' java library.
OAUTHBEARER    OAuth Bearer based authentication.
PLAIN          Plain text authentication.
SCRAM-SHA-256  SCRAM based authentication as specified in RFC5802.
SCRAM-SHA-512  SCRAM based authentication as specified in RFC5802.
none           Disable SASL authentication.
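
For the AWS_MSK_IAM mechanism, credentials are carried by the nested aws object documented below; the region value is illustrative:

sasl:
  - mechanism: AWS_MSK_IAM
    aws:
      region: us-east-1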

kafka.sasl[].username

A username to provide for PLAIN or SCRAM-* authentication.

Type: string

Default: ""

kafka.sasl[].password

A password to provide for PLAIN or SCRAM-* authentication.

Type: string

Default: ""

kafka.sasl[].token

The token to use for a single session’s OAUTHBEARER authentication.

Type: string

Default: ""

kafka.sasl[].extensions

Key/value pairs to add to OAUTHBEARER authentication requests.

Type: object

kafka.sasl[].aws

Contains AWS specific fields for when the mechanism is set to AWS_MSK_IAM.

Type: object

kafka.sasl[].aws.region

The AWS region to target.

Type: string

Default: ""

kafka.sasl[].aws.endpoint

Allows you to specify a custom endpoint for the AWS API.

Type: string

Default: ""

kafka.sasl[].aws.credentials

Optional manual configuration of AWS credentials to use. More information can be found in the AWS cloud guide.

Type: object

kafka.sasl[].aws.credentials.profile

A profile from ~/.aws/credentials to use.

Type: string

Default: ""

kafka.sasl[].aws.credentials.id

The ID of credentials to use.

Type: string

Default: ""

kafka.sasl[].aws.credentials.secret

The secret for the credentials being used.

Type: string

Default: ""

kafka.sasl[].aws.credentials.token

The token for the credentials being used, required when using short term credentials.

Type: string

Default: ""

kafka.sasl[].aws.credentials.from_ec2_role

Use the credentials of a host EC2 machine configured to assume an IAM role associated with the instance.

Type: bool

Default: false

Requires version 4.2.0 or newer

kafka.sasl[].aws.credentials.role

A role ARN to assume.

Type: string

Default: ""

kafka.sasl[].aws.credentials.role_external_id

An external ID to provide when assuming a role.

Type: string

Default: ""

kafka.multi_header

Decode headers into lists to allow handling of multiple values with the same key.

Type: bool

Default: false

kafka.batching

Allows you to configure a batching policy that applies to individual topic partitions in order to batch messages together before flushing them for processing. Batching can be beneficial for performance as well as useful for windowed processing, and doing so this way preserves the ordering of topic partitions.

Type: object

# Examples
batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m

kafka.batching.count

A number of messages at which the batch should be flushed. If 0, count-based batching is disabled.

Type: int

Default: 0

kafka.batching.byte_size

An amount of bytes at which the batch should be flushed. If 0, size-based batching is disabled.

Type: int

Default: 0

kafka.batching.period

A period in which an incomplete batch should be flushed regardless of its size.

Type: string

Default: ""

# Examples
period: 1s
period: 1m
period: 500ms

kafka.batching.check

A Bloblang query that should return a boolean value indicating whether a message should end a batch.

Type: string

Default: ""

# Examples
check: this.type == "end_of_transaction"

kafka.batching.processors

A list of processors to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, therefore splitting the batch into smaller batches using these processors is a no-op.

Type: array

# Examples
processors:
  - archive:
      format: concatenate

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array

kafka.metadata_max_age

The maximum age of metadata before it is refreshed.

Type: string

Default: "5m"

disable_content_encryption

Whether to disable the encryption of message content, which is otherwise applied by default (see encrypted_fields).

Type: bool

Default: false

enrollment_ticket

An enrollment ticket that the underlying Ockam node uses to enroll with an Ockam project and obtain credentials.

Type: string

identity_name

The name of the Ockam identity that the node should use.

Type: string

allow

Sorry! This field is missing documentation.

Type: string

Default: "self"

route_to_kafka_outlet

The route to the Kafka outlet node; the default of self routes to an outlet running on this same node.

Type: string

Default: "self"

allow_producer

Sorry! This field is missing documentation.

Type: string

Default: "self"

relay

Sorry! This field is missing documentation.

Type: string

node_address

The TCP address on which the local Ockam node listens.

Type: string

Default: "127.0.0.1:6262"

encrypted_fields

The fields to encrypt in the Kafka messages, assuming the record is a valid JSON map. By default, the whole record is encrypted.

Type: array

Default: []
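
For instance, to encrypt only selected fields of a JSON record instead of the whole payload (the field names are hypothetical):

encrypted_fields:
  - credit_card_number
  - email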