ockam_kafka
Fields
kafka
Configuration for the underlying Kafka consumer.
Type: object
kafka.seed_brokers
A list of broker addresses to connect to. If an item of the list contains commas it will be expanded into multiple addresses.
Type: array
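For example, a minimal sketch (the broker addresses are placeholders); the second element contains a comma and expands into two addresses:

  kafka:
    seed_brokers:
      - broker-1:9092
      - broker-2:9092,broker-3:9092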
kafka.tls
Custom TLS settings can be used to override system defaults.
Type: object
kafka.tls.enabled
Whether custom TLS settings are enabled.
Type: bool
Default: false
kafka.tls.skip_cert_verify
Whether to skip server-side certificate verification.
Type: bool
Default: false
kafka.tls.enable_renegotiation
Whether to allow the remote server to repeatedly request renegotiation. Enable this option if you’re seeing the error message local error: tls: no renegotiation.
Type: bool
Default: false
Requires version 3.45.0 or newer
kafka.tls.root_cas
An optional root certificate authority to use. This is a string, representing a certificate chain from the parent trusted root certificate, to possible intermediate signing certificates, to the host certificate.
Type: string
Default: ""
kafka.tls.root_cas_file
An optional path of a root certificate authority file to use. This is a file, often with a .pem extension, containing a certificate chain from the parent trusted root certificate, to possible intermediate signing certificates, to the host certificate.
Type: string
Default: ""
kafka.tls.client_certs
A list of client certificates to use. For each certificate either the fields cert and key, or cert_file and key_file should be specified, but not both (a sketch follows the certificate fields below).
Type: array
Default: []
kafka.tls.client_certs[].cert
A plain text certificate to use.
Type: string
Default: ""
kafka.tls.client_certs[].key
A plain text certificate key to use.
Type: string
Default: ""
kafka.tls.client_certs[].cert_file
The path of a certificate to use.
Type: string
Default: ""
kafka.tls.client_certs[].key_file
The path of a certificate key to use.
Type: string
Default: ""
kafka.tls.client_certs[].password
A plain text password for when the private key is password encrypted in PKCS#1 or PKCS#8 format. The obsolete pbeWithMD5AndDES-CBC algorithm is not supported for the PKCS#8 format. Because pbeWithMD5AndDES-CBC does not authenticate the ciphertext, it is vulnerable to padding oracle attacks that can let an attacker recover the plaintext.
Type: string
Default: ""
kafka.topics
A list of topics to consume from. Multiple comma-separated topics can be listed in a single element. When a consumer_group is specified, partitions are automatically distributed across consumers of a topic; otherwise all partitions are consumed.
Alternatively, it’s possible to specify explicit partitions to consume from by adding a colon after the topic name, e.g. foo:0 would consume partition 0 of the topic foo. This syntax supports ranges, e.g. foo:0-10 would consume partitions 0 through 10 inclusive.
Finally, it’s also possible to specify an explicit offset to consume from by adding another colon after the partition, e.g. foo:0:10 would consume partition 0 of the topic foo starting from offset 10. If an offset is not specified then the field start_from_oldest determines which offset to start from.
Type: array
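As a sketch, the four addressing forms described above (the topic names, partitions and offsets are placeholders); note that the explicit partition forms cannot be combined with a consumer_group:

  kafka:
    topics:
      - foo          # every partition of foo
      - bar:0        # only partition 0 of bar
      - baz:0-10     # partitions 0 through 10 of baz, inclusive
      - qux:0:10     # partition 0 of qux, starting from offset 10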
kafka.regexp_topics
Whether listed topics should be interpreted as regular expression patterns for matching multiple topics. When topics are specified with explicit partitions this field must remain set to false.
Type: bool
Default: false
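For example, a sketch that subscribes to every topic matching a prefix (the pattern is a placeholder):

  kafka:
    regexp_topics: true
    topics:
      - ^foo-.*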
kafka.rack_id
Specifies the rack in which the client is physically located. When set, fetch requests consume from the closest replica rather than the leader replica.
Type: string
Default: ""
kafka.start_from_oldest
Determines whether to consume from the oldest available offset; otherwise messages are consumed from the latest offset. The setting is only applied when creating a new consumer group or when the saved offset no longer exists.
Type: bool
Default: true
kafka.fetch_max_bytes
Sets the maximum number of bytes a broker will try to send during a fetch. Note that brokers may not obey this limit if they have records larger than it. This is equivalent to the Java fetch.max.bytes setting.
Type: string
Default: "50MiB"
kafka.fetch_max_wait
Sets the maximum amount of time a broker will wait for a fetch response to reach the minimum number of required bytes. This is equivalent to the Java fetch.max.wait.ms setting.
Type: string
Default: "5s"
kafka.fetch_min_bytes
Sets the minimum number of bytes a broker will try to send during a fetch. This is equivalent to the Java fetch.min.bytes setting.
Type: string
Default: "1B"
kafka.fetch_max_partition_bytes
Sets the maximum number of bytes that will be consumed from a single partition in a fetch request. Note that if a single batch is larger than this number, that batch will still be returned so the client can make progress. This is equivalent to the Java max.partition.fetch.bytes setting.
Type: string
Default: "1MiB"
kafka.consumer_group
An optional consumer group to consume as. When specified the partitions of specified topics are automatically distributed across consumers sharing a consumer group, and partition offsets are automatically committed and resumed under this name. Consumer groups are not supported when specifying explicit partitions to consume from in the topics field.
Type: string
kafka.checkpoint_limit
Determines how many messages of the same partition can be processed in parallel before applying back pressure. When a message of a given offset is delivered to the output, the offset is only allowed to be committed once all messages of prior offsets have also been delivered; this ensures at-least-once delivery guarantees. However, this mechanism also increases the likelihood of duplicates in the event of crashes or server faults; reducing the checkpoint limit mitigates this.
Type: int
Default: 1024
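For example, a sketch that consumes as part of a group and lowers the checkpoint limit to reduce potential duplicates after a crash (the group name is a placeholder):

  kafka:
    consumer_group: example_group
    checkpoint_limit: 128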
kafka.commit_period
The period of time between each commit of the current partition offsets. Offsets are always committed during shutdown.
Type: string
Default: "5s"
kafka.multi_header
Decode headers into lists to allow handling of multiple values with the same key.
Type: bool
Default: false
kafka.batching
Allows you to configure a batching policy that applies to individual topic partitions in order to batch messages together before flushing them for processing. Batching can be beneficial for performance as well as useful for windowed processing, and batching at this level preserves the ordering of topic partitions.
Type: object
kafka.batching.count
The number of messages at which the batch should be flushed. A value of 0 disables count-based batching.
Type: int
Default: 0
kafka.batching.byte_size
The number of bytes at which the batch should be flushed. A value of 0 disables size-based batching.
Type: int
Default: 0
kafka.batching.period
A period in which an incomplete batch should be flushed regardless of its size.
Type: string
Default: ""
kafka.batching.check
A Bloblang query that should return a boolean value indicating whether a message should end a batch.
Type: string
Default: ""
kafka.batching.processors
A list of processors to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, therefore splitting the batch into smaller batches using these processors is a no-op.
Type: array
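A sketch of a batching policy that flushes at 100 messages, after one second, or when a Bloblang check passes (the kind field in the check is hypothetical):

  kafka:
    batching:
      count: 100
      period: 1s
      check: 'this.kind == "end_of_window"'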
disable_content_encryption
Sorry! This field is missing documentation.
Type: bool
Default: false
enrollment_ticket
Sorry! This field is missing documentation.
Type: string
identity_name
Sorry! This field is missing documentation.
Type: string
allow
Sorry! This field is missing documentation.
Type: string
Default: "self"
route_to_kafka_outlet
Sorry! This field is missing documentation.
Type: string
Default: "self"
allow_producer
Sorry! This field is missing documentation.
Type: string
Default: "self"
relay
Sorry! This field is missing documentation.
Type: string
node_address
Sorry! This field is missing documentation.
Type: string
Default: "127.0.0.1:6262"
encrypted_fields
The fields to encrypt in the Kafka messages, assuming the record is a valid JSON map. By default, the whole record is encrypted.
Type: array
Default: []
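For example, a sketch that encrypts only two fields of each JSON record and leaves the rest readable (the field names are placeholders):

  encrypted_fields:
    - credit_card_number
    - shipping_address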