Introduction
Are you still using print() for debugging and production monitoring in Python? While print is simple, it lacks log level control, file output, timestamps, and the ability to silence logs in production.
Python’s built-in logging module handles all of this declaratively. This article covers everything from basics to production-ready patterns.
Log Levels
logging provides five severity levels:
| Level | Value | Use Case |
|---|---|---|
| DEBUG | 10 | Detailed diagnostic information (dev only) |
| INFO | 20 | Confirmation that things are working normally |
| WARNING | 30 | Something unexpected, but processing continues |
| ERROR | 40 | A serious problem; an operation failed |
| CRITICAL | 50 | A fatal error; the program may not continue |
The root logger outputs WARNING and above by default.
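You can verify this default interactively; the check below is a quick sketch, assuming a fresh interpreter with no prior configuration:

```python
import logging

# In a fresh interpreter the root logger's level is WARNING,
# so DEBUG and INFO records are filtered out.
root = logging.getLogger()
print(root.getEffectiveLevel() == logging.WARNING)  # True
print(root.isEnabledFor(logging.INFO))              # False
print(root.isEnabledFor(logging.ERROR))             # True
```

This is why a bare logging.info("...") appears to do nothing until you configure a level.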
Basic Usage
Configuration with basicConfig
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)

logging.debug("Debug info")
logging.info("Normal operation")
logging.warning("Warning")
logging.error("Error occurred")
logging.critical("Critical failure")
Sample output:
2026-03-12 10:00:00 DEBUG Debug info
2026-03-12 10:00:00 INFO Normal operation
2026-03-12 10:00:00 WARNING Warning
2026-03-12 10:00:00 ERROR Error occurred
2026-03-12 10:00:00 CRITICAL Critical failure
Per-Module Loggers
In production code, use logging.getLogger(__name__) to create a module-specific logger. This makes it easy to identify the source of each log message.
import logging

logger = logging.getLogger(__name__)

def process_data(data):
    logger.info("Processing started: %d items", len(data))
    try:
        result = [x * 2 for x in data]
        logger.debug("Result: %s", result)
        return result
    except Exception as e:
        logger.error("Processing failed: %s", e, exc_info=True)
        raise
exc_info=True automatically appends the stack trace to the log entry.
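To see exactly what exc_info=True adds, the sketch below (the logger name exc_demo is illustrative) routes output to an in-memory stream and checks that the traceback text is included:

```python
import io
import logging

# exc_info=True attaches the formatted traceback to the emitted record.
stream = io.StringIO()
logger = logging.getLogger("exc_demo")
logger.addHandler(logging.StreamHandler(stream))
logger.setLevel(logging.DEBUG)
logger.propagate = False  # keep output away from the root logger

try:
    int("not a number")
except ValueError as e:
    logger.error("Parse failed: %s", e, exc_info=True)

output = stream.getvalue()
print("Traceback (most recent call last)" in output)  # True
print("ValueError" in output)                         # True
```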
Customizing Formats
Use %(...)s-style format strings to control how log entries look:
import logging

formatter = logging.Formatter(
    fmt="%(asctime)s [%(levelname)-8s] %(name)s:%(lineno)d - %(message)s",
    datefmt="%Y-%m-%dT%H:%M:%S",
)
handler = logging.StreamHandler()
handler.setFormatter(formatter)

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.info("Service started")
Sample output:
2026-03-12T10:00:00 [INFO ] myapp:10 - Service started
Key format variables:
| Variable | Content |
|---|---|
| %(asctime)s | Timestamp |
| %(levelname)s | Log level name |
| %(name)s | Logger name |
| %(filename)s | Source filename |
| %(lineno)d | Line number |
| %(funcName)s | Function name |
| %(message)s | Log message |
| %(process)d | Process ID |
| %(thread)d | Thread ID |
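As a quick check, the sketch below (the logger name fmt_demo is illustrative) combines several of these variables and logs to an in-memory stream:

```python
import io
import logging

# Exercise %(levelname)s, %(name)s, %(funcName)s, and %(message)s together.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(
    "%(levelname)s %(name)s %(funcName)s %(message)s"
))

logger = logging.getLogger("fmt_demo")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.propagate = False

def do_work():
    logger.warning("low disk space")

do_work()
print(stream.getvalue().strip())  # WARNING fmt_demo do_work low disk space
```

Note that %(funcName)s is filled in automatically from the calling frame; no extra arguments are needed at the call site.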
Handler Types
Handlers determine where logs are sent. You can attach multiple handlers to a single logger.
StreamHandler (stdout/stderr)
import logging
import sys
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(logging.INFO)
FileHandler (file output)
file_handler = logging.FileHandler("app.log", encoding="utf-8")
file_handler.setLevel(logging.DEBUG)
RotatingFileHandler (size-based rotation)
In production, prevent log files from growing indefinitely using automatic rotation:
from logging.handlers import RotatingFileHandler

rotating_handler = RotatingFileHandler(
    "app.log",
    maxBytes=10 * 1024 * 1024,  # 10 MB
    backupCount=5,              # Keep up to 5 backup files
    encoding="utf-8",
)
When app.log reaches 10 MB, it is renamed to app.log.1, and a new app.log is created.
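The rotation behavior is easy to observe with an artificially small maxBytes; the sketch below (temporary directory and 200-byte limit are both illustrative) forces several rollovers:

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

# A tiny maxBytes forces frequent rollovers so the effect is visible.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "app.log")
handler = RotatingFileHandler(path, maxBytes=200, backupCount=2)

logger = logging.getLogger("rotate_demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False

for i in range(100):
    logger.info("message number %d", i)
handler.close()

# backupCount=2 means older backups beyond app.log.2 are deleted.
files = sorted(os.listdir(tmpdir))
print(files)  # ['app.log', 'app.log.1', 'app.log.2']
```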
TimedRotatingFileHandler (time-based rotation)
For daily rotation:
from logging.handlers import TimedRotatingFileHandler

timed_handler = TimedRotatingFileHandler(
    "app.log",
    when="midnight",  # Rotate at midnight each day
    interval=1,
    backupCount=30,   # Keep 30 days of logs
    encoding="utf-8",
)
Rotated files receive a date suffix, e.g. app.log.2026-03-11.
Production Setup Pattern
For multi-module applications, use a dedicated setup function:
import logging
import sys
from logging.handlers import RotatingFileHandler

def setup_logging(log_level: str = "INFO", log_file: str = "app.log") -> None:
    """Initialize application-wide logging."""
    level = getattr(logging, log_level.upper(), logging.INFO)

    formatter = logging.Formatter(
        fmt="%(asctime)s [%(levelname)-8s] %(name)s - %(message)s",
        datefmt="%Y-%m-%dT%H:%M:%S",
    )

    # Console: INFO and above
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setLevel(logging.INFO)
    console_handler.setFormatter(formatter)

    # File: DEBUG and above (with rotation)
    file_handler = RotatingFileHandler(
        log_file,
        maxBytes=10 * 1024 * 1024,
        backupCount=5,
        encoding="utf-8",
    )
    file_handler.setLevel(logging.DEBUG)
    file_handler.setFormatter(formatter)

    root_logger = logging.getLogger()
    root_logger.setLevel(level)
    root_logger.addHandler(console_handler)
    root_logger.addHandler(file_handler)

if __name__ == "__main__":
    setup_logging(log_level="DEBUG")
    logger = logging.getLogger(__name__)
    logger.info("Application started")
    logger.debug("Debug info (file only)")
Logging Exceptions
Use logger.exception() inside an except block to log both the message and full stack trace in one call:
import logging

logger = logging.getLogger(__name__)

def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        logger.exception("Division by zero: a=%s, b=%s", a, b)
        return None

divide(10, 0)
Sample output:
2026-03-12T10:00:00 [ERROR ] __main__ - Division by zero: a=10, b=0
Traceback (most recent call last):
File "example.py", line 7, in divide
return a / b
ZeroDivisionError: division by zero
Dictionary-based Configuration (dictConfig)
For large applications, manage logging configuration as a dictionary (or YAML/JSON):
import logging
import logging.config

LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "standard": {
            "format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s",
        },
        "detailed": {
            "format": "%(asctime)s [%(levelname)-8s] %(name)s:%(lineno)d %(funcName)s() - %(message)s",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "formatter": "standard",
            "stream": "ext://sys.stdout",
        },
        "file": {
            "class": "logging.handlers.RotatingFileHandler",
            "level": "DEBUG",
            "formatter": "detailed",
            "filename": "app.log",
            "maxBytes": 10485760,
            "backupCount": 5,
            "encoding": "utf-8",
        },
    },
    "loggers": {
        "myapp": {
            "handlers": ["console", "file"],
            "level": "DEBUG",
            "propagate": False,
        },
    },
    "root": {
        "handlers": ["console"],
        "level": "WARNING",
    },
}

logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger("myapp")
logger.info("Logger configured via dictConfig")
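Because the configuration is plain data, it can also live in a JSON (or YAML) file and be loaded at startup. A minimal sketch with an embedded JSON string:

```python
import json
import logging
import logging.config

# A trimmed-down config as JSON; in practice this would be read from a file.
CONFIG_JSON = """
{
  "version": 1,
  "disable_existing_loggers": false,
  "handlers": {
    "console": {"class": "logging.StreamHandler", "level": "INFO"}
  },
  "root": {"handlers": ["console"], "level": "WARNING"}
}
"""

logging.config.dictConfig(json.loads(CONFIG_JSON))
print(logging.getLogger().level == logging.WARNING)  # True
```

For YAML, the only difference is swapping json.loads for yaml.safe_load (which requires the third-party PyYAML package).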
Structured Logging with structlog
JSON logs integrate easily with log aggregation platforms (Datadog, CloudWatch, ELK). The structlog library makes structured logging straightforward:
# pip install structlog
import structlog

structlog.configure(
    processors=[
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.stdlib.add_log_level,
        structlog.processors.JSONRenderer(),
    ],
)

log = structlog.get_logger()
log.info("User logged in", user_id=42, ip="192.168.1.1")
Sample output (JSON):
{
  "timestamp": "2026-03-12T10:00:00Z",
  "level": "info",
  "event": "User logged in",
  "user_id": 42,
  "ip": "192.168.1.1"
}
Without structlog, you can achieve JSON logging with a custom formatter:
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record):
        log_data = {
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%S"),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        if record.exc_info:
            log_data["exception"] = self.formatException(record.exc_info)
        return json.dumps(log_data, ensure_ascii=False)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)  # without this, the effective level is WARNING and info() is dropped
logger.info("JSON formatted log entry")
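A compact way to verify a custom formatter is to log to an in-memory stream and parse the result back. The sketch below uses a trimmed-down formatter (MiniJsonFormatter is an illustrative name, not a library class):

```python
import io
import json
import logging

# A minimal JSON formatter, verified by parsing its own output.
class MiniJsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(MiniJsonFormatter())

logger = logging.getLogger("json_demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False

logger.info("hello %s", "world")
parsed = json.loads(stream.getvalue())
print(parsed["message"])  # hello world
```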
print vs logging
| Aspect | print | logging |
|---|---|---|
| Level control | Not available | 5 levels |
| File output | Shell redirection only | FileHandler |
| Timestamps | Manual | Automatic via formatter |
| Stack traces | Manual with traceback | exc_info=True |
| Silence in production | Delete or conditionals | Set level to WARNING+ |
| Log rotation | Not available | RotatingFileHandler |
| Structured logging | Not available | dictConfig + structlog |
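The "silence in production" row is worth demonstrating: with logging, the same code runs in both environments and only the level changes. A sketch (the logger name silence_demo is illustrative):

```python
import io
import logging

# "Deleting prints" vs. raising the level: the logging way.
stream = io.StringIO()
handler = logging.StreamHandler(stream)

logger = logging.getLogger("silence_demo")
logger.addHandler(handler)
logger.propagate = False

logger.setLevel(logging.DEBUG)    # development: everything through
logger.debug("verbose detail")

logger.setLevel(logging.WARNING)  # production: debug/info silenced
logger.debug("hidden in production")
logger.warning("still visible")

print(stream.getvalue().splitlines())  # ['verbose detail', 'still visible']
```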
Related Articles
- Python Decorator Patterns - Combining @timer and @retry decorators with logging
- Python asyncio Introduction - Logging considerations in async code
- Building a Progress Bar in Python - When to use logging vs progress indicators in CLI tools
- How to Overwrite Print Output in Python - When print is still the right tool