The `decoding` block tells Tangent how to interpret message bytes coming from a source (Kafka/MSK, sockets, files, SQS, etc.). It has two knobs:
- `format` — how to parse the bytes (JSON/NDJSON/MsgPack/Text)
- `compression` — whether the bytes are compressed (auto-detect by default)
You can set `decoding` on any source (e.g., `msk`, `socket`, `file`, `sqs`).
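For reference, the block's shape looks like this (both keys and their values are documented below; placement inside a concrete source is shown in the Examples):

```yaml
decoding:
  format: ndjson      # ndjson | json | json-array | msgpack | text
  compression: auto   # auto | none | gzip | zstd
```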
## Schema
### format

Parsing format for the message payload.

Options:
- `ndjson` — Each message is a JSON object per line (newline-delimited JSON).
- `json` — Each message is one JSON object (no newlines required).
- `json-array` — Each message is a JSON array; Tangent emits each element as an individual record.
- `msgpack` — MessagePack-encoded payloads.
- `text` — Treat the payload as plain text; Tangent wraps it in a minimal JSON object.
- Use `ndjson` for most streaming log pipelines.
- Use `json-array` if a single message contains many records (batch); see the sketch below.
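To make the `json-array` expansion concrete, here is a hypothetical payload and the records it would produce (the input and output shown in comments are illustrative):

```yaml
decoding:
  format: json-array
# Incoming message:  [{"user":"a"},{"user":"b"},{"user":"c"}]
# Emitted records:   {"user":"a"}, {"user":"b"}, {"user":"c"}  (one per element)
```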
### compression

Controls decompression. Defaults to `auto`.

Options:
- `auto` — Detect from metadata, filename, or magic bytes.
- `none` — Do not decompress.
- `gzip` — Force gzip.
- `zstd` — Force Zstandard.
With `auto`, detection runs in this order:
- Checks metadata (e.g., `Content-Encoding`) for `gzip`, `zstd`, `identity`/`none`.
- Falls back to filename suffixes: `.gz`/`.gzip` → gzip, `.zst`/`.zstd` → zstd.
- Finally, inspects magic bytes:
  - Gzip: `1F 8B`
  - Zstd: `28 B5 2F FD`
 
- If nothing matches, falls back to `none`.
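When the codec is known in advance, you can force it and skip detection; forcing `none` likewise guards against uncompressed payloads whose leading bytes coincidentally match a magic number. A minimal sketch:

```yaml
decoding:
  format: ndjson
  compression: zstd   # force Zstandard; headers, suffixes, and magic bytes are not consulted
```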
## Examples

### MSK (Kafka)
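A sketch of an MSK source using NDJSON decoding; the surrounding `sources` layout, `type` key, and connection settings are assumptions about Tangent's schema, not confirmed syntax:

`tangent.yaml`:

```yaml
sources:
  app_logs:
    type: msk                # hypothetical source wrapper
    # broker/topic settings elided ...
    decoding:
      format: ndjson         # one JSON object per line
      compression: auto      # detect gzip/zstd if producers compress
```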
### Socket (Unix domain socket)
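Assuming the same layout for a Unix domain socket source (the `path` key is hypothetical):

`tangent.yaml`:

```yaml
sources:
  local_agent:
    type: socket             # hypothetical source wrapper
    # e.g. path: /var/run/tangent.sock (hypothetical key)
    decoding:
      format: ndjson         # local agents write one JSON line per event
      compression: none      # local writers send uncompressed bytes
```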
### File (compressed batch JSON)
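A sketch for batch files, again with the keys outside `decoding` assumed:

`tangent.yaml`:

```yaml
sources:
  nightly_batches:
    type: file               # hypothetical source wrapper
    # e.g. path: /data/batches/*.json.gz (hypothetical key)
    decoding:
      format: json-array     # each file is one JSON array; one record per element
      compression: auto      # .gz suffix and gzip magic bytes are both detected
```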
### SQS (MessagePack)
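And a sketch for SQS with MessagePack bodies (queue settings elided; wrapper keys assumed):

`tangent.yaml`:

```yaml
sources:
  events_queue:
    type: sqs                # hypothetical source wrapper
    # queue settings elided ...
    decoding:
      format: msgpack        # MessagePack-encoded message bodies
      compression: none      # bodies arrive uncompressed
```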
## Behavior & Tips
- **`ndjson` vs `json`** — If your producer writes one JSON object per line, use `ndjson`. If each message is a single JSON object (no newline contract), use `json`.
- **Batch ingestion with `json-array`** — When a payload is a JSON array, Tangent emits one record per element. Great for files or batched Kafka messages.
- **`compression: auto` is production-friendly** — It works reliably across mixed inputs: headers, filenames, or magic bytes will be used to detect gzip/zstd.
- **Text inputs** — With `text`, Tangent wraps each line as a minimal JSON event (e.g., `{ "message": "..." }`) so your plugins receive consistent JSON; see the sketch below.
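A sketch of that wrapping, with the emitted record shown as a comment (the envelope beyond the `message` field is not specified here):

```yaml
decoding:
  format: text
# Incoming line:   error: connection refused
# Emitted record:  { "message": "error: connection refused" }
```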