How to Build a Logs Pipeline in .NET with OpenTelemetry

March 14, 2023
Written by
Rahul Rai
Opinions expressed by Twilio contributors are their own

OpenTelemetry is an open-source observability framework that enables developers to collect, process, and export telemetry data from their applications, systems, and infrastructure. It provides a unified API and SDKs in multiple programming languages for capturing telemetry data such as traces, metrics, and logs from telemetry sources such as applications and platforms. With OpenTelemetry, developers can instrument their applications with ease and flexibility, and then send the collected data to various backends for analysis and visualization. The framework is highly customizable, extensible, and vendor-agnostic, allowing users to choose their preferred telemetry collection and analysis tools.

One of the essential components of observability is logging. Logging is the process of capturing and storing information about an application or system's behavior, performance, and errors. It helps developers diagnose and debug issues, monitor system health, and gain insights into user behavior.

In this article, you will learn the step-by-step process of building an efficient OpenTelemetry logs pipeline with .NET. You will learn the key concepts of OpenTelemetry logs, how to integrate OpenTelemetry into your .NET application, and how to configure the logs pipeline to send the logs to a preferred backend for analysis and visualization.

Logging with .NET

The Microsoft.Extensions.Logging library provides native support for logging in .NET applications. Developers can utilize the simple and extensible logging API provided by this library to log messages of varying severity levels, filter and format log messages according to their preferences, and configure logging providers to suit their needs. A logging provider is responsible for handling the logged messages and storing them in different destinations such as the console, a file, a database, or a remote server.
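As a quick refresher, here is a minimal console-app sketch of this API (assuming the Microsoft.Extensions.Logging and Microsoft.Extensions.Logging.Console packages are referenced); the provider registered with AddConsole is what ultimately writes each message out:

```csharp
using Microsoft.Extensions.Logging;

// Create a logger factory and register the built-in console provider.
// Other providers (AddDebug, AddEventSourceLogger, ...) are added the
// same way, and each registered provider receives every log message.
using var loggerFactory = LoggerFactory.Create(logging =>
{
    logging.AddConsole();
    // Filter out anything below Information severity.
    logging.SetMinimumLevel(LogLevel.Information);
});

var logger = loggerFactory.CreateLogger("Demo");
logger.LogInformation("Application started at {StartTime}", DateTimeOffset.UtcNow);
logger.LogDebug("This message is filtered out by the minimum level");
```

The same ILogger abstraction is what OpenTelemetry hooks into later in this article, which is why no application code needs to change when the pipeline is added.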

The OpenTelemetry logging signal does not aim to standardize the logging interface. Although OpenTelemetry has its own logging API, the logging signal is designed to hook into existing logging facilities already available in programming languages. In addition, OpenTelemetry focuses on augmenting the logs produced by applications and provides a mechanism to correlate the logs with other signals. The logs collected by OpenTelemetry can be exported to different destinations, including centralized logging systems such as Elasticsearch, Grafana Loki, and Azure Monitor.

Building an OpenTelemetry logs pipeline

OpenTelemetry components can be arranged in the form of a pipeline in which they work together to collect, process, and export telemetry data from an application or system. The pipeline offers flexibility and customizability, enabling users to tailor the pipeline to their specific requirements and integrate it with their preferred telemetry analysis tools. The components of the pipeline will vary with each signal - metrics, traces, and logs. The following diagram illustrates the core components of the OpenTelemetry logging pipeline:

The components of a log pipeline showing the control flow between the components. The logger provider connects to the logger. The logger connects to the log record. The log record connects to the log record processor. The log record processor connects to the log record exporter. The log record exporter connects to the telemetry backend.

The pipeline components perform different functions, as summarized below:

  • Logger provider: Provides a mechanism for instantiating one or more loggers.
  • Logger: It creates a log record. A logger is associated with a resource and an instrumentation library, which is determined by the name and version of the library.
  • Log record: It represents logs from various sources, including application log files, machine-generated events, and system logs. The data model of the log record supports mapping from existing log formats. A reverse mapping from the log record model is also possible if the target format has equivalent capabilities.
  • Log record processor: It consumes log records and forwards them to a log record exporter. There are currently three built-in implementations: SimpleLogRecordExportProcessor, which exports each record as soon as it is emitted; BatchLogRecordExportProcessor, which buffers records and exports them in batches; and CompositeLogRecordExportProcessor, which combines multiple processors.
  • Log record exporter: It sends log data to the backend. The exporter implementation is specific to the backend that receives the log data.
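To make the processor-exporter relationship concrete, here is a sketch, assuming the OpenTelemetry.Exporter.Console package, of explicitly wrapping the console exporter in a batching processor rather than relying on the convenience extension method:

```csharp
using OpenTelemetry;
using OpenTelemetry.Exporter;
using OpenTelemetry.Logs;

var builder = WebApplication.CreateBuilder(args);

builder.Logging.AddOpenTelemetry(loggerOptions =>
{
    // A processor always wraps an exporter. The batch processor buffers
    // log records and exports them in the background, trading a little
    // latency for lower per-record overhead than the simple
    // (synchronous) processor.
    loggerOptions.AddProcessor(
        new BatchLogRecordExportProcessor(
            new ConsoleLogRecordExporter(new ConsoleExporterOptions())));
});
```

In the walkthrough below, you will use the simpler AddConsoleExporter extension, which performs equivalent wiring for you.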

Let's implement the logging pipeline in an ASP.NET Core application. Use the instructions outlined in the Microsoft Learn guide to create a new ASP.NET Core Web API project named LogsDemoApi, and install the following NuGet package, which adds support for OpenTelemetry and the OpenTelemetry console exporter to your project. You will use this exporter to write application logs to the console.

dotnet add package OpenTelemetry.Exporter.Console --prerelease

It is now time to define the logging pipeline. First, you need to define the attributes of the resource representing your application. The attributes are defined as key-value pairs, with the keys recommended to follow the OpenTelemetry semantic conventions. To define the attributes of your application, edit the Program.cs file and use the following code:

using System.Runtime.InteropServices;
using OpenTelemetry.Logs;
using OpenTelemetry.Resources;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.

// Define attributes for your application
var resourceBuilder = ResourceBuilder.CreateDefault()
    // add attributes for the name and version of the service
    .AddService(serviceName: "MyCompany.MyProduct.LogsDemoApi", serviceVersion: "1.0.0")
    // add attributes for the OpenTelemetry SDK version
    .AddTelemetrySdk()
    // add custom attributes
    .AddAttributes(new Dictionary<string, object>
    {
        ["host.name"] = Environment.MachineName,
        ["os.description"] = RuntimeInformation.OSDescription,
        ["deployment.environment"] = builder.Environment.EnvironmentName.ToLowerInvariant()
    });

// ...
// ...

Then, add the following code after the previous code to define the logging pipeline:

builder.Logging
    // remove the default logging providers
    .ClearProviders()
    .AddOpenTelemetry(loggerOptions =>
    {
        loggerOptions
            // define the resource
            .SetResourceBuilder(resourceBuilder)
            // add custom processor
            .AddProcessor(new CustomLogProcessor())
            // send logs to the console using exporter
            .AddConsoleExporter();

        loggerOptions.IncludeFormattedMessage = true;
        loggerOptions.IncludeScopes = true;
        loggerOptions.ParseStateValues = true;
    });

Let's take a closer look at the code you just wrote. In the default configuration, your application writes logs to the console, debug, and event source outputs. The ClearProviders method deletes all logger providers from the application to ensure that your application uses only the OpenTelemetry console exporter for writing logs. We'll now take a look at the OpenTelemetry pipeline setup. First, the SetResourceBuilder method associates the logger with the resource builder you defined. Then, using the AddProcessor method, you added a custom log record processor of the type CustomLogProcessor to the pipeline to manipulate the logs before they reach the exporter. Adding a processor to the logging pipeline is optional. You will, however, use the CustomLogProcessor to include state information in the application logs. Your final component in the pipeline is a console exporter that prints all logs to the console.

You can further customize log records by adding additional information, such as scopes and states, through the logger options. You can enable or disable these settings depending on your complexity and storage needs.
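For example, with IncludeScopes enabled, any key-value pairs attached with BeginScope travel with every log record written inside that scope. A hypothetical minimal API endpoint (the route and the OrderId key are illustrative) shows the pattern:

```csharp
app.MapGet("/orders/{id}", (ILogger<Program> logger, string id) =>
{
    // Every log record written inside this scope carries the OrderId
    // pair, which the exporter emits alongside the record's own state.
    using (logger.BeginScope(new Dictionary<string, object> { ["OrderId"] = id }))
    {
        logger.LogInformation("Fetching order");
        return Results.Ok();
    }
});
```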

Log enrichment

In the previous section, you configured the OpenTelemetry logger to use the CustomLogProcessor to process logs. By adding your own processor, you can update the log records to add additional state information or remove sensitive information before they are sent to their destination. In your project, create a file named CustomLogProcessor.cs and define the CustomLogProcessor class as follows:

using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using OpenTelemetry;
using OpenTelemetry.Logs;

public class CustomLogProcessor : BaseProcessor<LogRecord>
{
    public override void OnEnd(LogRecord data)
    {
        // Custom state information
        var logState = new List<KeyValuePair<string, object?>>
        {
            new("ProcessID", Environment.ProcessId),
            new("DotnetFramework", RuntimeInformation.FrameworkDescription),
            new("Runtime", RuntimeInformation.RuntimeIdentifier)
        };

        // Example of masking sensitive data
        if (data.StateValues != null)
        {
            var state = data.StateValues.ToList();

            // Find a key-value pair with the key "password" and update its value to "masked value"
            var foundPair = state.Find(kvp => kvp.Key.Equals("password", StringComparison.OrdinalIgnoreCase));
            if (!foundPair.Equals(default(KeyValuePair<string, object?>)))
            {
                // Find the index of the original pair in the list
                var index = state.IndexOf(foundPair);

                // Replace the original pair with the updated pair at the same index
                state[index] = new(foundPair.Key, "masked value");
                data.FormattedMessage = "Message masked due to sensitive data";
            }

            // Append the custom state information to the record's state
            data.StateValues = new ReadOnlyCollectionBuilder<KeyValuePair<string, object?>>(state.Concat(logState))
                .ToReadOnlyCollection();
        }

        base.OnEnd(data);
    }
}


Any custom log processor must inherit from the BaseProcessor<LogRecord> class. You can override different base class methods depending on where in a log record's lifecycle you want to modify it. Here, you captured the LogRecord object just before the logger passed it to the exporter, removed sensitive data, and added some runtime information to its state.
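For reference, BaseProcessor<LogRecord> exposes lifecycle hooks beyond OnEnd. A minimal sketch (the class name AuditLogProcessor is illustrative, not part of the project):

```csharp
using OpenTelemetry;
using OpenTelemetry.Logs;

public class AuditLogProcessor : BaseProcessor<LogRecord>
{
    // OnEnd is invoked synchronously for every log record, so any work
    // done here should be cheap to avoid slowing down the logging path.
    public override void OnEnd(LogRecord data)
    {
        // inspect or mutate the record here
    }

    // These hooks can be overridden to flush buffers or release
    // resources; return true to signal success.
    protected override bool OnForceFlush(int timeoutMilliseconds) => true;

    protected override bool OnShutdown(int timeoutMilliseconds) => true;
}
```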

Writing logs

OpenTelemetry's logger provider, OpenTelemetryLoggerProvider, plugs into the .NET logging framework. Therefore, you can use the familiar .NET logging API to log messages, and these messages will be collected by OpenTelemetry and exported through the configured exporters. To illustrate, add the following API endpoints to your Program.cs file to log messages with different severity levels:

// requires: using Microsoft.AspNetCore.Mvc; at the top of Program.cs

app.MapGet("/", (ILogger<Program> logger) =>
{
    logger.LogInformation("Hello user!");
    return Results.Ok();
});

app.MapPost("/login", (ILogger<Program> logger, [FromBody] LoginData data) =>
{
    logger.LogInformation("User login attempted: Username {Username}, Password {Password}", data.Username, data.Password);
    logger.LogWarning("User login failed: Username {Username}", data.Username);
    return Results.Unauthorized();
});

internal record LoginData(string Username, string Password);

Now you can launch the application and send requests to both endpoints. Here are the cURL commands for your convenience:

curl http://localhost:5199/

curl -X POST \
     -H "Content-Type: application/json" \
     -d '{"username":"Foo","password":"Bar"}' \
     http://localhost:5199/login
As a result of this operation, you can view the logs you defined in the application console. For example, the screenshot below shows the logs recorded from a GET request to the default endpoint.

Console output from a request to the default API endpoint showing the automatically generated information used to correlate logs with traces, the log level, custom state values, and resource information

You can gather the following information from the recorded output:

  • Timestamp: The time associated with the log record.
  • Trace id and span id: Identifiers of the trace to correlate with the log record.
  • Log level: String representation of the severity level of the log.
  • State values: State information stored in the log.
  • Resource: The resource associated with the producer of the log record.

Finally, here is the console output from the request sent to the /login endpoint. The screenshot below shows the custom log message that replaces the original log message with sensitive information. Note that the state key-value pair, which contains the password, has been updated as well:

Console output from request to the /login API endpoint showing custom log message replacing the original log message with sensitive information and the password state value modified.

Below is the screenshot of the second log statement containing the warning message from the same API endpoint:

Console output from request to the /login API endpoint showing a warning message.


This exercise concludes the discussion on building an OpenTelemetry logs pipeline for a .NET application. You installed the OpenTelemetry .NET console exporter, configured the OpenTelemetry logs pipeline, and built a custom log processor to add state information and manipulate log records before export. The application is also available on my GitHub repository for reference.

In the next part of this series, you will configure this application to export logs to Azure Monitor.

Also, check out this tutorial on how to automatically instrument containerized .NET applications with OpenTelemetry.

Outside of his day job, Rahul ensures he's still contributing to the cloud ecosystem by authoring books, offering free workshops, and frequently posting on his blog to share his insights and help break down complex topics for current and aspiring professionals.