The "logging" Module

OOTB (Out of the Box)

import logging as log

log.info('99 bottles of beer on the wall')
log.error('one of them happen to fall')

# Output 1
# ERROR:root:one of them happen to fall 2
1 To standard error
2 Default level is WARNING, default format %(levelname)s:%(name)s:%(message)s

Is it Thread Safe?

Lucky for us, the logging module is thread safe.

The Easy Way

import logging as log

log.basicConfig(
    filename='/tmp/basic.log',  # 1
    format='[%(asctime)s] %(levelname)s %(filename)s:%(lineno)s %(message)s',
    level=log.INFO,
)

log.info('How are you?')

# Output (in /tmp/basic.log)
# [2013-09-30 16:02:58,927] INFO basiccfg.py:9 How are you?
1 Will be appended

The Hard Way

import logging

log = logging.getLogger('looney')
log.setLevel(logging.INFO)
handler = logging.FileHandler('/tmp/hard.log')
log.addHandler(handler)

fmt = '[%(asctime)s] %(levelname)s %(filename)s:%(lineno)s %(message)s'
formatter = logging.Formatter(fmt)
handler.setFormatter(formatter)

log.info('How are you?')

# Output (in /tmp/hard.log)
# [2013-09-30 16:39:24,810] INFO hard_way.py:12 How are you?

What should we log?

Everything?

Srsly Though

logging.handlers

Provides many handlers, including RotatingFileHandler, TimedRotatingFileHandler, WatchedFileHandler, SocketHandler, SysLogHandler, HTTPHandler, and SMTPHandler.

And please don’t use SMTPHandler :)
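For example, RotatingFileHandler caps disk usage by rolling the file over. A minimal sketch (the path and logger name here are made up for the demo):

```python
import logging
import logging.handlers
import os
import tempfile

# Hypothetical log path for the demo
fd, path = tempfile.mkstemp(suffix='.log')
os.close(fd)

log = logging.getLogger('rotating-demo')
log.setLevel(logging.INFO)

# Roll over at ~1 MB, keeping 3 old copies: <path>.1, <path>.2, <path>.3
handler = logging.handlers.RotatingFileHandler(
    path, maxBytes=1024 * 1024, backupCount=3)
fmt = '[%(asctime)s] %(levelname)s %(message)s'
handler.setFormatter(logging.Formatter(fmt))
log.addHandler(handler)

log.info('rotation keeps disk usage bounded')
handler.close()
```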

logging.config

isEnabledFor

import logging
from time import sleep

def slow():
    sleep(100)
    return 7

log = logging.getLogger()

if log.isEnabledFor(logging.DEBUG):
    log.debug('slow() returned {}'.format(slow()))
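One caveat worth spelling out (not on the original slide): lazy %-style formatting only defers building the string, not evaluating the arguments, so the isEnabledFor guard is what actually skips the expensive call. A small sketch:

```python
import logging

calls = []

def expensive():
    # Stand-in for slow(); records every invocation
    calls.append(1)
    return 7

log = logging.getLogger('lazy-demo')
log.setLevel(logging.INFO)  # DEBUG is disabled

# Lazy formatting: the message string is never built,
# but expensive() still runs to produce the argument
log.debug('value: %s', expensive())

# The guard skips the call entirely
if log.isEnabledFor(logging.DEBUG):
    log.debug('value: %s', expensive())

print(len(calls))  # 1 - expensive() ran exactly once
```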

Changing Configuration on-the-fly

Use logging.config.listen, then send over an .ini file prefixed with its size, encoded as a big-endian, 4-byte unsigned long - easy peasy :)

Huh?

miracle.jpg

log.conf

[loggers]
keys = root

[handlers]
keys = console

[formatters]
keys = console

[logger_root]
level = ERROR
handlers = console

[handler_console]
class = StreamHandler
formatter = console
args = (sys.stderr,)

[formatter_console]
format = %(asctime)s - %(levelname)s - %(filename)s:%(lineno)d: %(message)s

logsrv.py

#!/usr/bin/env python

import logging as log
import logging.config as logcfg

port = 9999

if __name__ == '__main__':
    logcfg.fileConfig('src/log.conf')
    thr = logcfg.listen(port)
    thr.daemon = True  # Don't keep the process alive on exit
    thr.start()

    while True:
        line = raw_input('>>> ')
        level, message = line.split(' ', 1)
        fn = getattr(log, level, None)
        fn(message)

chcfg.py

#!/usr/bin/env python

import struct, socket
from logsrv import port

with open('src/log.conf') as fo:
    data = fo.read().replace('level = ERROR', 'level = INFO')  # 1

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('localhost', port))
sock.send(struct.pack('>L', len(data)) + data)  # 2
sock.close()
1 Change level to INFO
2 Size of data

Demo

$ ./src/logsrv.py
>>> error ouch
2013-10-01 05:51:06,901 - ERROR - logsrv.py:17: ouch
>>> info hello  1
>>>
1 Default level is ERROR
$ ./src/chcfg.py
>>> info hello
2013-10-01 05:51:27,876 - INFO - logsrv.py:17: hello

JSON Logging

JSON (structured) logging is excellent when you build some tooling around your logs. Usually a combination of "traditional" logs and structured logs is used.

import logging, json

class JSONFormatter(logging.Formatter):
    def format(self, record):
        # Serialize the whole LogRecord (vars(record) is its __dict__)
        return json.dumps(vars(record))

handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
log = logging.getLogger()
log.addHandler(handler)
log.fatal('Please reinstall universe and reboot')

logstash + kibana

logstash

A prayer for the demo gods … :)

What Didn’t We Cover?

Resources

Thank You

question.png

This presentation was made with asciidoc using the slidy backend and Pygments syntax highlighter.