Safe Haskell | Safe-Inferred |
---|---|
Language | Haskell2010 |
Synopsis
- data LogRecordExporter
- data LogRecordExporterArguments = LogRecordExporterArguments {}
- mkLogRecordExporter :: LogRecordExporterArguments -> IO LogRecordExporter
- logRecordExporterExport :: LogRecordExporter -> Vector ReadableLogRecord -> IO ExportResult
- logRecordExporterForceFlush :: LogRecordExporter -> IO ()
- logRecordExporterShutdown :: LogRecordExporter -> IO ()
- data LogRecordProcessor = LogRecordProcessor {}
- data LoggerProvider = LoggerProvider {}
- data Logger = Logger {}
- data ReadWriteLogRecord
- mkReadWriteLogRecord :: Logger -> ImmutableLogRecord -> IO ReadWriteLogRecord
- data ReadableLogRecord
- mkReadableLogRecord :: ReadWriteLogRecord -> ReadableLogRecord
- class IsReadableLogRecord r where
- class IsReadableLogRecord r => IsReadWriteLogRecord r where
- readLogRecordAttributeLimits :: r -> AttributeLimits
- modifyLogRecord :: r -> (ImmutableLogRecord -> ImmutableLogRecord) -> IO ()
- atomicModifyLogRecord :: r -> (ImmutableLogRecord -> (ImmutableLogRecord, b)) -> IO b
- data ImmutableLogRecord = ImmutableLogRecord {}
- data LogRecordArguments = LogRecordArguments {}
- emptyLogRecordArguments :: LogRecordArguments
- data SeverityNumber
- toShortName :: SeverityNumber -> Maybe Text
Documentation
data LogRecordExporter Source #
LogRecordExporter defines the interface that protocol-specific exporters must implement so that they can be plugged into the OpenTelemetry SDK and support sending of telemetry data.
The goal of the interface is to minimize the burden of implementation for protocol-dependent telemetry exporters. The protocol exporter is expected to be primarily a simple telemetry data encoder and transmitter.
LogRecordExporters provide thread safety when calling logRecordExporterExport.
data LogRecordExporterArguments Source #
See LogRecordExporter for documentation.
Constructors: LogRecordExporterArguments
logRecordExporterExport :: LogRecordExporter -> Vector ReadableLogRecord -> IO ExportResult Source #
Exports a batch of ReadableLogRecords. Protocol exporters that will implement this function are typically expected to serialize and transmit the data to the destination.
Export will never be called concurrently for the same exporter instance. Depending on the implementation, the result of the export may be returned to the Processor not in the return value of the call to Export, but in a language-specific way for signaling completion of an asynchronous task. This means that while an instance of an exporter will never have its Export called concurrently, it does not mean that the task of exporting cannot be done concurrently. How this is done is outside the scope of this specification. Each implementation MUST document the concurrency characteristics the SDK requires of the exporter.
Export MUST NOT block indefinitely, there MUST be a reasonable upper limit after which the call must time out with an error result (Failure).
Concurrent requests and retry logic are the responsibility of the exporter. The default SDK's LogRecordProcessors SHOULD NOT implement retry logic, as the required logic is likely to depend heavily on the specific protocol and backend the logs are being sent to. For example, the OpenTelemetry Protocol (OTLP) specification defines logic for both sending concurrent requests and retrying requests.
Result: Success - The batch has been successfully exported. For protocol exporters this typically means that the data is sent over the wire and delivered to the destination server. Failure - exporting failed. The batch must be dropped. For example, this can happen when the batch contains bad data and cannot be serialized.
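To make the Success/Failure contract concrete, here is a minimal, self-contained sketch of a batch export that drops the whole batch and reports Failure when any record cannot be serialized. ExportResult, FakeRecord, and exportBatch are simplified stand-ins invented for illustration, not the library's actual types.

```haskell
import Data.Either (partitionEithers)

-- Stand-in for the SDK's export result type (illustrative only).
data ExportResult = Success | Failure (Maybe String)
  deriving Show

-- A hypothetical record payload; the real ReadableLogRecord is far richer.
newtype FakeRecord = FakeRecord { body :: String }

-- Serialize every record; if any record fails to serialize, drop the
-- batch and report Failure, as the documentation above describes.
exportBatch :: [FakeRecord] -> IO ExportResult
exportBatch records =
  case partitionEithers (map serialize records) of
    ([], payloads) -> do
      mapM_ putStrLn payloads  -- stand-in for transmitting over the wire
      pure Success
    (err : _, _) -> pure (Failure (Just err))
  where
    serialize (FakeRecord b)
      | null b    = Left "empty body cannot be serialized"
      | otherwise = Right ("LOG: " <> b)
```

The all-or-nothing handling mirrors the spec's note that a bad batch "must be dropped"; a real exporter would transmit the serialized payloads instead of printing them.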
logRecordExporterForceFlush :: LogRecordExporter -> IO () Source #
This is a hint to ensure that the export of any ReadableLogRecords the exporter has received prior to the call to ForceFlush SHOULD be completed as soon as possible, preferably before returning from this method.
ForceFlush SHOULD provide a way to let the caller know whether it succeeded, failed or timed out.
ForceFlush SHOULD only be called in cases where it is absolutely necessary, such as when using some FaaS providers that may suspend the process after an invocation, but before the exporter exports the ReadableLogRecords.
ForceFlush SHOULD complete or abort within some timeout. ForceFlush can be implemented as a blocking API or an asynchronous API which notifies the caller via a callback or an event. OpenTelemetry SDK authors MAY decide if they want to make the flush timeout configurable.
logRecordExporterShutdown :: LogRecordExporter -> IO () Source #
Shuts down the exporter. Called when SDK is shut down. This is an opportunity for exporter to do any cleanup required.
Shutdown SHOULD be called only once for each LogRecordExporter instance. After the call to Shutdown subsequent calls to Export are not allowed and SHOULD return a Failure result.
Shutdown SHOULD NOT block indefinitely (e.g. if it attempts to flush the data and the destination is unavailable). OpenTelemetry SDK authors MAY decide if they want to make the shutdown timeout configurable.
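The rule that Export after Shutdown must return Failure can be implemented with a simple mutable flag. The sketch below is a self-contained toy (ToyExporter, ExportResult, and the function names are hypothetical stand-ins), showing one way an exporter might guard its export path.

```haskell
import Data.IORef

-- Stand-in for the SDK's export result type (illustrative only).
data ExportResult = Success | Failure (Maybe String)
  deriving Show

-- A minimal exporter skeleton: a shutdown flag guards export calls.
newtype ToyExporter = ToyExporter { shutdownRef :: IORef Bool }

newToyExporter :: IO ToyExporter
newToyExporter = ToyExporter <$> newIORef False

-- Export checks the flag first, so calls after shutdown fail fast.
toyExport :: ToyExporter -> [String] -> IO ExportResult
toyExport e batch = do
  stopped <- readIORef (shutdownRef e)
  if stopped
    then pure (Failure (Just "exporter already shut down"))
    else do
      mapM_ putStrLn batch  -- stand-in for real transmission
      pure Success

-- Shutdown flips the flag; a real exporter would also flush and clean up.
toyShutdown :: ToyExporter -> IO ()
toyShutdown e = writeIORef (shutdownRef e) True
```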
data LogRecordProcessor Source #
LogRecordProcessor is an interface which allows hooks for LogRecord emitting.
Built-in processors are responsible for batching and conversion of LogRecords to exportable representation and passing batches to exporters.
LogRecordProcessors can be registered directly on SDK LoggerProvider and they are invoked in the same order as they were registered.
Each processor registered on LoggerProvider is part of a pipeline that consists of a processor and optional exporter. The SDK MUST allow each pipeline to end with an individual exporter.
The SDK MUST allow users to implement and configure custom processors and decorate built-in processors for advanced scenarios such as enriching with attributes.
The following diagram shows LogRecordProcessor’s relationship to other components in the SDK:
+-----+------------------------+   +------------------------------+   +-------------------------+
|     |                        |   |                              |   |                         |
|     |                        |   | Batching LogRecordProcessor  |   |    LogRecordExporter    |
|     |                        +---> Simple LogRecordProcessor    +--->     (OtlpExporter)      |
|     |                        |   |                              |   |                         |
| SDK | Logger.emit(LogRecord) |   +------------------------------+   +-------------------------+
|     |                        |
|     |                        |
+-----+------------------------+
Constructors: LogRecordProcessor
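The emit-through-processors flow in the diagram can be sketched with self-contained toy types (ToyProcessor, simpleProcessor, and emitThrough are hypothetical stand-ins, not the library's API): a "simple" processor hands each record straight to its exporter, and processors run in registration order.

```haskell
import Data.IORef

-- A toy processor: just a hook invoked for every emitted record.
newtype ToyProcessor = ToyProcessor { onEmit :: String -> IO () }

-- A "simple" processor forwards each record directly to an exporter action.
simpleProcessor :: (String -> IO ()) -> ToyProcessor
simpleProcessor export = ToyProcessor { onEmit = export }

-- Emitting a record invokes every registered processor, in order.
emitThrough :: [ToyProcessor] -> String -> IO ()
emitThrough processors record = mapM_ (\p -> onEmit p record) processors
```

A batching processor would differ only in its hook: instead of exporting immediately, it would append to a buffer and flush batches on a timer or size threshold.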
data LoggerProvider Source #
Loggers can be created from LoggerProviders.
Constructors: LoggerProvider
data Logger Source #
LogRecords can be created from Loggers. Loggers are uniquely identified by the libraryName, libraryVersion, and schemaUrl fields of InstrumentationLibrary.
Creating two Loggers with the same identity but different libraryAttributes is a user error.
Constructors: Logger
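One way to picture the identity rule is a provider that caches loggers keyed by the (libraryName, libraryVersion, schemaUrl) triple. The sketch below uses hypothetical stand-in types (LoggerKey, ToyLogger, getOrCreate); note that attributes are deliberately not part of the key, which is why requesting the same identity with different libraryAttributes is a user error.

```haskell
import qualified Data.Map.Strict as Map

-- The identity triple: (libraryName, libraryVersion, schemaUrl).
type LoggerKey = (String, String, String)

newtype ToyLogger = ToyLogger LoggerKey
  deriving (Eq, Show)

-- Look up a cached logger by identity, creating it on first request.
getOrCreate :: LoggerKey -> Map.Map LoggerKey ToyLogger
            -> (ToyLogger, Map.Map LoggerKey ToyLogger)
getOrCreate key cache =
  case Map.lookup key cache of
    Just l  -> (l, cache)                      -- same identity: reuse
    Nothing -> let l = ToyLogger key
               in (l, Map.insert key l cache)  -- new identity: create
```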
data ReadWriteLogRecord Source #
This is a data type that can represent logs from various sources: application log files, machine-generated events, system logs, etc., following the OpenTelemetry log data model specification. Existing log formats can be unambiguously mapped to this data type. Reverse mapping from this data type is also possible to the extent that the target log format has equivalent capabilities. It uses an IORef under the hood to allow mutability.
Instances
data ReadableLogRecord Source #
Instances
class IsReadableLogRecord r where Source #
This is a typeclass representing LogRecords that can be read from.
A function receiving this as an argument MUST be able to access all the information added to the LogRecord. It MUST also be able to access the Instrumentation Scope and Resource information (implicitly) associated with the LogRecord.
The trace context fields MUST be populated from the resolved Context (either the explicitly passed Context or the current Context) when emitted.
Counts for attributes due to collection limits MUST be available for exporters to report as described in the transformation to non-OTLP formats specification.
readLogRecord :: r -> IO ImmutableLogRecord Source #
Reads the current state of the LogRecord from its internal IORef. The implementation mirrors readIORef.
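Since the record state lives in an IORef, reading it is essentially readIORef. A minimal self-contained sketch (ToyImmutableLogRecord and ToyReadWriteLogRecord are hypothetical stand-ins with an invented toyBody field):

```haskell
import Data.IORef

-- A stand-in immutable snapshot; the real ImmutableLogRecord is richer.
newtype ToyImmutableLogRecord = ToyImmutableLogRecord { toyBody :: String }

-- A stand-in mutable record: the snapshot lives behind an IORef.
newtype ToyReadWriteLogRecord =
  ToyReadWriteLogRecord (IORef ToyImmutableLogRecord)

-- Reading the current state is literally readIORef on the wrapped ref.
toyReadLogRecord :: ToyReadWriteLogRecord -> IO ToyImmutableLogRecord
toyReadLogRecord (ToyReadWriteLogRecord ref) = readIORef ref
```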
readLogRecordInstrumentationScope :: r -> InstrumentationLibrary Source #
Reads the InstrumentationScope from the Logger that emitted the LogRecord.
readLogRecordResource :: r -> MaterializedResources Source #
Reads the Resource from the LoggerProvider that emitted the LogRecord.
class IsReadableLogRecord r => IsReadWriteLogRecord r where Source #
This is a typeclass representing LogRecords that can be read from or written to. All ReadWriteLogRecords are ReadableLogRecords.
A function receiving this as an argument MUST additionally be able to modify the following information added to the LogRecord:
- Timestamp
- ObservedTimestamp
- SeverityText
- SeverityNumber
- Body
- Attributes (addition, modification, removal)
- TraceId
- SpanId
- TraceFlags
readLogRecordAttributeLimits :: r -> AttributeLimits Source #
Reads the attribute limits from the LoggerProvider that emitted the LogRecord. These limits are needed when adding more attributes.
modifyLogRecord :: r -> (ImmutableLogRecord -> ImmutableLogRecord) -> IO () Source #
Modifies the LogRecord using its internal IORef. This is lazy and not an atomic operation. The implementation mirrors modifyIORef.
atomicModifyLogRecord :: r -> (ImmutableLogRecord -> (ImmutableLogRecord, b)) -> IO b Source #
An atomic version of modifyLogRecord. This function is lazy. The implementation mirrors atomicModifyIORef.
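The contrast between the two modify operations maps directly onto Data.IORef. In the self-contained sketch below (ToyRecord and its toySeverityText field are hypothetical stand-ins), the non-atomic version mirrors modifyIORef and the atomic version mirrors atomicModifyIORef, which also lets the caller extract a value derived from the old state.

```haskell
import Data.IORef

-- A stand-in record with one hypothetical mutable field.
newtype ToyRecord = ToyRecord { toySeverityText :: Maybe String }

-- Non-atomic modify, mirroring modifyIORef: adequate without
-- concurrent writers, but two racing updates can lose one of them.
toyModify :: IORef ToyRecord -> (ToyRecord -> ToyRecord) -> IO ()
toyModify = modifyIORef

-- Atomic modify, mirroring atomicModifyIORef: safe under concurrent
-- writers, and returns a value computed from the previous state.
toyAtomicModify :: IORef ToyRecord -> (ToyRecord -> (ToyRecord, b)) -> IO b
toyAtomicModify = atomicModifyIORef
```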
Instances
data ImmutableLogRecord Source #
Constructors: ImmutableLogRecord
data LogRecordArguments Source #
Arguments that may be set on LogRecord creation. If observedTimestamp is not set, it will default to the current timestamp.
If context is not specified, it will default to the current context. Refer to the documentation of LogRecord for descriptions of the fields.
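The arguments-record-with-defaults pattern that emptyLogRecordArguments enables can be sketched as follows. The field names here (toyTimestamp, toyObservedTimestamp, toyBody) are hypothetical stand-ins; consult LogRecordArguments for the real fields.

```haskell
-- A stand-in arguments record: optional fields default to Nothing,
-- which the SDK would resolve at emit time (e.g. to the current time).
data ToyLogRecordArguments = ToyLogRecordArguments
  { toyTimestamp         :: Maybe Int  -- Nothing => resolved at emit time
  , toyObservedTimestamp :: Maybe Int  -- Nothing => defaults to now
  , toyBody              :: String
  }

-- The empty value supplies every default.
emptyToyArguments :: ToyLogRecordArguments
emptyToyArguments = ToyLogRecordArguments
  { toyTimestamp         = Nothing
  , toyObservedTimestamp = Nothing
  , toyBody              = ""
  }

-- Usage: record-update syntax overrides only the fields you care about.
myArgs :: ToyLogRecordArguments
myArgs = emptyToyArguments { toyBody = "user logged in" }
```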
data SeverityNumber Source #
Constructors:
- Trace
- Trace2
- Trace3
- Trace4
- Debug
- Debug2
- Debug3
- Debug4
- Info
- Info2
- Info3
- Info4
- Warn
- Warn2
- Warn3
- Warn4
- Error
- Error2
- Error3
- Error4
- Fatal
- Fatal2
- Fatal3
- Fatal4
- Unknown !Int
Instances
toShortName :: SeverityNumber -> Maybe Text Source #
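A sketch of what a short-name mapping of this shape might look like, using the OpenTelemetry data model's severity short names (TRACE, DEBUG, INFO, WARN, ERROR, FATAL, with numbered variants). ToySeverity and toyShortName are illustrative stand-ins over a few constructors, not the library's actual implementation, and the exact strings toShortName returns are an assumption.

```haskell
-- A reduced stand-in for SeverityNumber (a few constructors only).
data ToySeverity = ToyTrace | ToyDebug | ToyInfo | ToyWarn2 | ToyUnknown Int

-- Known severities map to a short name; Unknown has none.
toyShortName :: ToySeverity -> Maybe String
toyShortName s = case s of
  ToyTrace     -> Just "TRACE"
  ToyDebug     -> Just "DEBUG"
  ToyInfo      -> Just "INFO"
  ToyWarn2     -> Just "WARN2"
  ToyUnknown _ -> Nothing
```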