Python Logging: A Beginner-Friendly Tutorial and Practical Reference

25 Mar 2025 - tsp
Last update 25 Mar 2025
Reading time 20 mins

Introduction

Logging is the process of recording information about a program’s execution. This information can include debug messages, informational events, warnings, errors, and critical issues. Python’s logging module provides a robust and configurable system to write these messages to various outputs such as the console, files, or system log services.

The built-in module is similar in purpose to well-known systems like Log4j in Java, log4c in C-based environments, or the Logger class in .NET applications. Whether you’re debugging a script, monitoring a running service, or building a reusable component library, Python logging offers a professional, scalable solution for integrating structured, controllable, and environment-appropriate diagnostic output into your code.

In this short overview tutorial, we’ll cover:

What Is Logging and Why Use It?
First Steps: Basic Logging
Logging Levels Explained
Creating and Using Named Loggers
Using Logging in Classes and Functions
Hierarchical Loggers
Logger Types: Handlers and Output Destinations
Custom Formatters and Handlers
Logging Configuration with dictConfig

What Is Logging and Why Use It?

Logging refers to the structured recording of events, diagnostics, and execution details within a program. Unlike ad-hoc print() statements, which are suitable only for quick-and-dirty debugging, logging offers a scalable, configurable, and long-term solution for understanding and monitoring program behavior. It provides developers with insight into how their applications behave under normal conditions, as well as when unexpected situations occur. Crucially, logging can remain active in deployed applications, serving as a permanent instrumentation layer for tracing, diagnosing, and resolving bugs in production environments without altering the codebase itself.

With logging, messages can be categorized by severity levels such as DEBUG, INFO, WARNING, ERROR, and CRITICAL. These logs can be automatically routed to different destinations such as standard output, files, system logs, or even over the network. Logging also enables the inclusion of contextual metadata like timestamps, log levels, and the module or function from which the log originated. Importantly, Python’s logging system can be configured centrally and hierarchically, giving fine-grained control over how messages are filtered and where they are sent throughout large applications.

Hierarchical logging means that loggers are organized in a tree-like structure that reflects the module or package structure of your codebase. For example, a logger named myapp.module.submodule is a child of myapp.module, which in turn is a child of myapp. Messages sent to child loggers propagate upward unless explicitly blocked, and handlers can be attached at any level in the hierarchy to control the formatting and destination of log messages. This design allows you to configure logging behavior globally while still enabling module-specific overrides when needed.

First Steps: Basic Logging

Let’s begin with a minimal example that shows how to activate Python’s logging system and produce a simple message. This example configures the logging module to display informational messages and above (INFO, WARNING, ERROR, and CRITICAL) and then writes a single informational message to standard output:

import logging

logging.basicConfig(level=logging.INFO)
logging.info("Hello, world!")

Explanation

In this example, we start by importing Python’s built-in logging module. The call to logging.basicConfig(level=logging.INFO) sets up a default logging configuration that includes a basic handler which writes to the console. It also sets the log level threshold to INFO, meaning that all messages at the INFO level or higher will be shown, while DEBUG messages will be ignored.

The call to logging.info("Hello, world!") produces a log message at the INFO level. This message will appear in the terminal output because it meets the severity threshold specified in the configuration. If we wanted to see debug messages as well, we could change the level to logging.DEBUG. Conversely, raising the level to WARNING would hide the INFO message.

This minimal setup demonstrates how logging can replace print statements in a more robust and configurable way, laying the foundation for more advanced usage as your application grows.

Logging Levels Explained

So what are the logging levels, and why do they matter? Logging levels are categories that indicate the severity or importance of a log message. By assigning a level to each message, you can control which messages are recorded or displayed depending on the configuration. This allows you to reduce noise in production environments while still capturing fine-grained detail when debugging. The Python logging module defines five standard levels, ranging from DEBUG (the most verbose) to CRITICAL (the most severe):

Level     Description and Example

DEBUG     Fine-grained details useful for diagnosing problems during development. E.g., “User input received: ‘42’” or “Fetching data from API at URL…”
INFO      General events showing normal operation. E.g., “Application started” or “User logged in successfully”
WARNING   Something unexpected happened, but the program can still proceed. E.g., “Configuration file not found, using defaults”
ERROR     A serious issue that occurred during execution but didn’t crash the program. E.g., “Failed to connect to database”
CRITICAL  A severe error that may cause the program to terminate. E.g., “Unrecoverable system failure”
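
To see the threshold in action, here is a minimal sketch: with the level set to WARNING, only the last three messages are actually emitted.

import logging

logging.basicConfig(level=logging.WARNING)

logging.debug("Not shown: below the WARNING threshold")
logging.info("Not shown: below the WARNING threshold")
logging.warning("Shown: meets the threshold")
logging.error("Shown: exceeds the threshold")
logging.critical("Shown: exceeds the threshold")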

Creating and Using Named Loggers

As your projects grow in size and complexity, it’s often useful to distinguish where log messages come from. This is where named loggers come into play. Rather than using the root logger for everything, you can create named loggers using logging.getLogger(name). This makes it easier to trace logs back to specific modules or components of your application. Additionally, named loggers participate in the logging hierarchy, which means their configuration can inherit from parent loggers — a concept we’ll explore in detail in the next section.

import logging

logger = logging.getLogger("myapp.module")
logger.setLevel(logging.DEBUG)

logger.debug("This is a debug message.")
logger.warning("This is a warning.")

In this example, we create a logger explicitly named myapp.module using logging.getLogger("myapp.module"). This name not only identifies the source of the log messages, but also places the logger within a hierarchy — the logger myapp.module is a child of myapp, and both are ultimately children of the root logger. We set the log level to DEBUG, which allows all messages of level DEBUG and above to be emitted.

We then emit two messages: one at the DEBUG level and one at the WARNING level. Depending on the logger’s configuration and handlers, these messages will be shown or ignored. If no specific handlers are attached to the logger, messages propagate up to the root logger, which by default outputs to the console. This behavior can be fine-tuned later using handlers, formatters, and the propagate flag.
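
As a small illustration of the propagate flag mentioned above, the following sketch gives the module logger its own handler and then disables propagation, so each message is handled exactly once rather than also being passed to the root logger:

import logging

logging.basicConfig(level=logging.DEBUG)  # root logger writes to the console

logger = logging.getLogger("myapp.module")
logger.addHandler(logging.StreamHandler())  # module-specific handler
logger.propagate = False  # stop records from also reaching the root logger

logger.warning("Handled once, by the module's own handler")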

Using Logging in Classes and Functions

When using logging inside classes and functions, there are a few helpful patterns that make life easier. One of them is using the built-in __name__ variable, which refers to the current module’s name. This allows you to automatically generate meaningful logger names without hardcoding strings. Similarly, when writing reusable classes, you might want to either create a logger specifically for that class or accept an external logger instance for better integration in larger applications.

This design also helps when writing libraries or components that may be reused elsewhere. By supporting logger injection, you’re allowing the surrounding application to control how logging is handled—whether it’s printed to the console, written to a file, or completely silenced.

The example below demonstrates this pattern: if no logger is passed in, a default one is created using the module’s __name__. This allows the logger name to reflect the module where the class is defined, making log messages easier to trace.

import logging

class MyWorker:
    def __init__(self, logger=None):
        self.logger = logger or logging.getLogger(__name__)

    def do_work(self):
        self.logger.info("Doing some work")

# Usage
worker = MyWorker()
worker.do_work()

In this example, the MyWorker class accepts an optional logger argument in its constructor. If no logger is provided, it falls back to creating a module-level logger using logging.getLogger(__name__). This allows each class or component to log messages under a consistent, hierarchical namespace without hardcoding names.

Calling do_work() triggers a call to self.logger.info(...), which emits a log message using the configured logger. This pattern makes it easy to test and reuse the class in different contexts, while maintaining full control over logging behavior.
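
When embedding the class in a larger application, the caller can inject its own logger instead; the name myapp.workers below is just an illustrative choice:

import logging

app_logger = logging.getLogger("myapp.workers")  # hypothetical application namespace
worker = MyWorker(logger=app_logger)
worker.do_work()  # now logs under the "myapp.workers" name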

Hierarchical Loggers

As your application grows and includes multiple components or modules, it becomes useful to structure your logging setup hierarchically. Hierarchical loggers allow you to organize loggers in a tree-like fashion, where each logger name reflects its place in the project structure. For example, a logger named myapp.module.submodule is considered a child of myapp.module, which in turn is a child of myapp.

This structure not only helps in identifying the source of each log message, but also enables powerful configuration. Messages generated by a child logger will automatically propagate or “bubble up” to its parent, unless configured otherwise. This means that you can attach handlers at a higher level (like myapp) and automatically collect logs from all submodules without repeating the configuration.

import logging

# Logging hierarchy is dot-separated:
root = logging.getLogger()
parent = logging.getLogger("myapp")
child = logging.getLogger("myapp.module.sub")

child.warning("I inherit handlers from parent")

In this example, we create three loggers to demonstrate the hierarchy: root, myapp, and myapp.module.sub. The root logger is always present and sits at the top of the hierarchy. The logger myapp.module.sub is a child of myapp, and unless explicitly configured not to, it will propagate its messages upward.

When we log a warning message using the child logger (myapp.module.sub), that message will propagate up the hierarchy and be handled by any handlers attached to parent loggers, including myapp and root. This mechanism provides a scalable way to manage logging behavior across an entire application, while still allowing individual modules to override or extend the behavior if necessary.
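
As a concrete sketch of this, attaching a handler only to the myapp logger is enough to collect messages from any of its descendants:

import logging

parent = logging.getLogger("myapp")
parent.addHandler(logging.StreamHandler())
parent.setLevel(logging.INFO)

child = logging.getLogger("myapp.module.sub")
child.info("Collected by the handler attached to 'myapp'")  # propagates upward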

Logger Types: Handlers and Output Destinations

Loggers themselves do not output messages—they rely on handlers to determine what should be done with log records. A handler is responsible for sending the log message to a destination: this could be standard output, a file, or a system logging service. You can attach multiple handlers to a single logger, and each handler can have its own level and formatting.

In the context of hierarchical logging, when a logger emits a message, that message is passed to all of its own handlers. If the logger is configured to propagate messages (which is the default), the same message is then passed up the hierarchy to the parent logger, and so on, until it reaches the root. Each logger along the path may have one or more handlers, and all of them will be triggered unless propagation is disabled or a level filter suppresses the message.
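
For instance, a single logger can feed two handlers with different thresholds; in this sketch (the file name debug.log is arbitrary), the console shows only warnings while the file captures everything:

import logging

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)  # the logger itself passes everything through

console = logging.StreamHandler()
console.setLevel(logging.WARNING)  # console shows only WARNING and above

logfile = logging.FileHandler("debug.log")
logfile.setLevel(logging.DEBUG)  # the file receives all messages

logger.addHandler(console)
logger.addHandler(logfile)

logger.debug("Written to debug.log only")
logger.warning("Written to debug.log and shown on the console")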

The standard library ships with a number of ready-made handlers. Below are examples of commonly used ones and how they are configured and attached to a logger:

StreamHandler (stdout / stderr)

import logging
import sys

handler = logging.StreamHandler(sys.stdout)
logger = logging.getLogger()
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("This message is sent to standard output")

This handler writes log messages to a stream, such as sys.stdout or sys.stderr. If no handler is configured explicitly, Python falls back to a “last resort” handler that writes messages of level WARNING and above to stderr. Stream handlers are ideal for command-line applications and development use, where logs are immediately visible in the console.

FileHandler

import logging

handler = logging.FileHandler("app.log")
logger = logging.getLogger()
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("This message is written to a file")

The FileHandler writes log messages to a specified file. This is especially useful for long-running services, background jobs, or any application where persistent records are needed. In more advanced setups, file handlers can be extended with log rotation via RotatingFileHandler or TimedRotatingFileHandler to avoid uncontrolled file growth.
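
As a sketch of such a rotating setup (size limit and backup count chosen arbitrarily), the handler below starts a new file once app.log reaches roughly 1 MB and keeps three old copies:

import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler("app.log", maxBytes=1_000_000, backupCount=3)
logger = logging.getLogger()
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("Rotated copies are kept as app.log.1, app.log.2, app.log.3")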

SysLogHandler

import logging
from logging.handlers import SysLogHandler

handler = SysLogHandler(address='/dev/log')  # On Unix; or use ('localhost', 514) for UDP
logger = logging.getLogger()
logger.addHandler(handler)
logger.setLevel(logging.WARNING)
logger.warning("This message is sent to the system log")

The SysLogHandler allows integration with the system-wide logging facility available on most Unix-like operating systems. It can send logs via a Unix domain socket or UDP. This is often used in production deployments where centralized logging and monitoring systems aggregate messages from multiple sources.

NullHandler (for libraries)

import logging

logger = logging.getLogger(__name__)
logger.addHandler(logging.NullHandler())

The NullHandler is commonly used in libraries to keep them silent unless the end user explicitly configures logging. Note that it does not block propagation: it simply gives the library’s logger a handler that discards records, which prevents Python’s last-resort fallback (which prints WARNING and above to stderr when no handler is configured anywhere) from producing unwanted output.

HTTPHandler

import logging
from logging.handlers import HTTPHandler

handler = HTTPHandler('localhost:8000', '/log', method='POST')
logger = logging.getLogger()
logger.addHandler(handler)
logger.setLevel(logging.ERROR)
logger.error("This message is sent via HTTP POST")

The HTTPHandler sends log records to a remote web server using HTTP GET or POST requests. It is useful for forwarding logs to centralized APIs or log collection endpoints. When using the POST method, the log record is URL-encoded and sent as form data, with a Content-Type of application/x-www-form-urlencoded. The payload includes fields such as name (the logger name), levelname, msg (the formatted message), and other attributes of the log record. On the server side, this data can be processed like any form submission, typically by reading POST parameters from the request body.
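
For illustration, a minimal receiving server, built only on the standard library and matching the host, port, and path used above, might look like this sketch:

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

class LogReceiver(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and decode the URL-encoded form body sent by HTTPHandler
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode("utf-8"))
        # parse_qs maps each field name to a list of values
        name = fields.get("name", ["?"])[0]
        level = fields.get("levelname", ["?"])[0]
        msg = fields.get("msg", [""])[0]
        print(f"[{level}] {name}: {msg}")
        self.send_response(200)
        self.end_headers()

HTTPServer(("localhost", 8000), LogReceiver).serve_forever()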

SMTPHandler

import logging
from logging.handlers import SMTPHandler

handler = SMTPHandler(
    mailhost=('smtp.example.com', 587),
    fromaddr='error@example.com',
    toaddrs=['admin@example.com'],
    subject='Application Error',
    credentials=('user', 'password'),
    secure=()
)
logger = logging.getLogger()
logger.addHandler(handler)
logger.setLevel(logging.CRITICAL)
logger.critical("This critical error was emailed")

The SMTPHandler sends log messages via email. It’s typically used for high-severity alerts such as uncaught exceptions or critical failures that require immediate attention.

SocketHandler and DatagramHandler

import logging
from logging.handlers import SocketHandler, DatagramHandler

# TCP logging
tcp_handler = SocketHandler('localhost', 9020)
# UDP logging
udp_handler = DatagramHandler('localhost', 9021)

logger = logging.getLogger()
logger.addHandler(tcp_handler)
logger.addHandler(udp_handler)
logger.setLevel(logging.WARNING)
logger.warning("This message is sent over TCP and UDP")

The SocketHandler sends log records over a TCP connection, while the DatagramHandler uses UDP. These handlers are useful for transmitting logs to remote servers or log aggregators, often in centralized or distributed systems.

When a message is emitted through either handler, the log record is serialized using Python’s pickle module and transmitted over the network. The format is a length-prefixed binary stream, where the first 4 bytes specify the length of the pickled payload (in network byte order), followed by the actual pickled data. Both handlers use this format; DatagramHandler simply sends each length-prefixed record as a single UDP datagram instead of writing it to a TCP stream.

The server that receives these messages must therefore be capable of unpickling the incoming data. This implies that the receiving server should be implemented in Python or another system capable of deserializing Python pickle data, which is inherently insecure if not protected — always ensure only trusted clients can send log messages to your log collector.
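
A minimal receiving server, following the pattern from the official logging cookbook, reads the 4-byte length prefix, unpickles the payload, and rebuilds the record with logging.makeLogRecord. Only run something like this with trusted clients, since unpickling arbitrary data can execute code:

import logging
import pickle
import socketserver
import struct

class LogRecordStreamHandler(socketserver.StreamRequestHandler):
    def handle(self):
        while True:
            # Read the 4-byte, big-endian length prefix
            chunk = self.connection.recv(4)
            if len(chunk) < 4:
                break
            slen = struct.unpack(">L", chunk)[0]
            # Read the pickled payload (it may arrive in several chunks)
            data = self.connection.recv(slen)
            while len(data) < slen:
                data += self.connection.recv(slen - len(data))
            # WARNING: only unpickle data from trusted sources
            record = logging.makeLogRecord(pickle.loads(data))
            logging.getLogger(record.name).handle(record)

logging.basicConfig(level=logging.DEBUG)
socketserver.TCPServer(("localhost", 9020), LogRecordStreamHandler).serve_forever()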

Custom Formatters and Handlers

While the default logging format provides essential information, many real-world applications benefit from customizing how log messages are displayed or processed. Custom formatters allow you to control the structure and content of each log message—this might include timestamps, log levels, source module names, or even user-defined fields. Similarly, custom handlers let you define exactly how and where log messages are delivered, whether that’s a rotating file, an external monitoring service, or a custom alerting pipeline.

Custom formatters are applied to handlers and define the string representation of the log record. By tailoring the format string, you can make your logs more readable or easier to parse by automated tools.

The example below shows how to apply a formatter to a stream handler and attach it to a logger:

import logging

handler = logging.StreamHandler()
formatter = logging.Formatter('[%(asctime)s] %(levelname)s - %(message)s')
handler.setFormatter(formatter)

logger = logging.getLogger("custom")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.debug("Custom formatted message")

Here, we create a StreamHandler and apply a Formatter to it. The format string "[%(asctime)s] %(levelname)s - %(message)s" includes a timestamp, the log level, and the actual log message. This formatted output is then applied to the handler and used by the logger to emit structured logs.

The format string can include a wide range of placeholders taken from the LogRecord object. Below is a table with commonly used format fields:

Placeholder          Description

%(asctime)s          Time the log message was created
%(levelname)s        Text logging level (e.g., ‘INFO’, ‘ERROR’)
%(name)s             Name of the logger that emitted the record
%(message)s          The actual logged message (after argument substitution)
%(filename)s         Filename portion of pathname
%(pathname)s         Full pathname of the source file
%(module)s           Module name of the caller
%(funcName)s         Name of the function that called the logger
%(lineno)d           Source line number where the logging call was made
%(threadName)s       Name of the thread in which the log message was issued
%(process)d          Process ID (PID) of the process that issued the log
%(levelno)s          Numeric logging level (e.g., 20 for INFO)
%(created)f          Time the LogRecord was created (as a UNIX timestamp)
%(relativeCreated)d  Time in milliseconds since the logging module was loaded

You can combine these placeholders into any format string depending on your needs. Handlers may also filter messages using custom filter classes if you need additional control beyond formatting.
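
As a sketch of such a filter, the hypothetical KeywordFilter below suppresses any record whose message contains a given substring:

import logging

class KeywordFilter(logging.Filter):
    """Drops records whose message contains the given keyword."""

    def __init__(self, keyword):
        super().__init__()
        self.keyword = keyword

    def filter(self, record):
        # Returning False suppresses the record
        return self.keyword not in record.getMessage()

handler = logging.StreamHandler()
handler.addFilter(KeywordFilter("heartbeat"))

logger = logging.getLogger("custom")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.info("heartbeat tick")     # suppressed by the filter
logger.info("request completed")  # emitted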

Logging Configuration with dictConfig

For small scripts or simple applications, configuring logging directly in code may be sufficient. However, for larger or more configurable applications, it’s often better to define your logging configuration using structured data. Python’s dictConfig system allows you to describe the entire logging setup using dictionaries. This enables you to store your logging configuration externally as JSON or YAML, making it easy to adjust without touching the source code.

Using dictConfig, you can define formatters, handlers, filters, and loggers in a declarative way. This becomes particularly powerful when deploying applications across environments where different logging behavior is needed (e.g., more verbose in development, silent in production).

Once configured using dictConfig, you can retrieve loggers in your application as usual with logging.getLogger("myapp"), and they will behave according to the configuration specified. If a logger is not explicitly defined in the configuration, it will fall back to default behavior or inherit from parent loggers, depending on the hierarchy.

import logging.config

logging.config.dictConfig({
    "version": 1,
    "formatters": {
        "standard": {
            "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "standard",
        },
    },
    "loggers": {
        "myapp": {
            "handlers": ["console"],
            "level": "INFO",
        },
    },
})
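
With this configuration in place, loggers are retrieved and used as usual; a child such as the hypothetical myapp.database inherits the console handler through propagation:

logger = logging.getLogger("myapp")
logger.info("Configured via dictConfig")

sub = logging.getLogger("myapp.database")
sub.warning("Inherited the console handler from 'myapp'")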

Conclusion

Python’s logging module is a powerful and scalable tool for capturing, analyzing, and persisting application behavior. Unlike simple print statements, logging enables structured, multi-level message reporting that can be routed to different outputs and filtered by context, severity, or component. This makes it not only a superior tool for debugging but also a long-term investment in application observability.

Even in programs that may appear simple at first, incorporating proper logging can pay off later when tracing unexpected behaviors, reproducing user reports, or monitoring system health in production. With flexible configuration options, hierarchical logger design, and support for custom handlers and formats, Python logging provides a solid instrumentation foundation that grows with your software.

By integrating logging thoughtfully into every component—no matter how trivial—it becomes significantly easier to diagnose and resolve problems both during development and long after deployment.
