Anand Sudhanaboina

Remote Logging With Python

Debugging from logs can be a formidable task if you run the same service on multiple production nodes behind a load balancer, each logging locally: your only option is to SSH into each server and dig through its logs.

Logging from multiple servers to a single server can simplify debugging. Python provides built-in functionality for this: by adding a few lines to the logging config you can send logs to a remote server, which then handles the request. On the remote server you can store the logs in flat files or a NoSQL database.

A rudimentary architecture would be:

[Architecture diagram: application servers forwarding logs to a central logging server]

I’ve created a few code samples to get this done:

Configure an HTTPHandler as the logging handler to send logs to the remote server instead of the local tty:

import logging
import logging.handlers
logger = logging.getLogger('Synchronous Logging')
http_handler = logging.handlers.HTTPHandler(
    '127.0.0.1:3000',
    '/log',
    method='POST',
)
logger.addHandler(http_handler)

# Log messages:
logger.warning('Hey log a warning')
logger.error('Hey log an error')
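The same handler can also be attached declaratively. As a sketch (reusing the host and path from the example above), `logging.config.dictConfig` accepts the handler's constructor arguments as extra keys on the handler entry:

```python
import logging
import logging.config

# One possible dictConfig equivalent of the imperative setup above;
# keys other than 'class' are passed to HTTPHandler's constructor.
LOGGING = {
    'version': 1,
    'handlers': {
        'remote': {
            'class': 'logging.handlers.HTTPHandler',
            'host': '127.0.0.1:3000',
            'url': '/log',
            'method': 'POST',
        },
    },
    'loggers': {
        'Synchronous Logging': {
            'handlers': ['remote'],
            'level': 'WARNING',
        },
    },
}

logging.config.dictConfig(LOGGING)
logger = logging.getLogger('Synchronous Logging')
```

This keeps the destination out of application code, so the log server address can live in configuration.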

On the logging server, I’ve created a simple Flask application that can handle the POST request:

from flask import Flask, request
import json

app = Flask(__name__)

@app.route('/log', methods=['POST'])
def index():
    print(json.dumps(request.form))
    return ""

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=3000, debug=True)

Assuming the server is up and you send a log request, this is how the log structure looks:

{
    "relativeCreated": "52.1631240845",
    "process": "10204",
    "args": "()",
    "module": "km",
    "funcName": "<module>",
    "exc_text": "None",
    "name": "Synchronous Logging",
    "thread": "139819818469184",
    "created": "1446532937.04",
    "threadName": "MainThread",
    "msecs": "37.367105484",
    "filename": "km.py",
    "levelno": "40",
    "processName": "MainProcess",
    "pathname": "km.py",
    "lineno": "13",
    "msg": "Hey log a error",
    "exc_info": "None",
    "levelname": "ERROR"
}

The important properties of this structure are msg, name, and levelno. The name property is what you pass to the getLogger function, and levelno is the numeric logging level (ERROR = 40, WARNING = 30, etc.).
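Those numbers come straight from the logging module’s level constants, and getLevelName converts between the numeric and named forms:

```python
import logging

# The standard level constants behind the levelno field:
assert logging.ERROR == 40
assert logging.WARNING == 30

# getLevelName maps in both directions:
print(logging.getLevelName(40))       # prints "ERROR"
print(logging.getLevelName('ERROR'))  # prints 40
```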

This approach is synchronous; if you want logging to be asynchronous, use threads:

import logging
import logging.handlers
import threading

logger = logging.getLogger('Asynchronous Logging') # Name
http_handler = logging.handlers.HTTPHandler(
    '127.0.0.1:3000',
    '/log',
    method='POST',
)
logger.addHandler(http_handler)

# Fire the log call in a separate thread so the main thread isn't blocked.
t = threading.Thread(target=logger.error, args=('Log error',))
t.start()
t.join()  # Wait for the log request to finish before exiting.

This way we don’t need to worry about storage on the application servers (if you aren’t writing any other data to the filesystem, logs would be the only thing there), and debugging becomes easier.
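On Python 3.2+, the standard library’s QueueHandler/QueueListener pair gives the same asynchrony without spawning a thread per log call. A sketch, reusing the same server address as above:

```python
import logging
import logging.handlers
import queue

log_queue = queue.Queue(-1)  # unbounded queue

# The logger only pays the cost of a queue put; a background listener
# thread forwards each record to the (slow) HTTP handler.
logger = logging.getLogger('Asynchronous Logging')
logger.addHandler(logging.handlers.QueueHandler(log_queue))

http_handler = logging.handlers.HTTPHandler(
    '127.0.0.1:3000',
    '/log',
    method='POST',
)
listener = logging.handlers.QueueListener(log_queue, http_handler)
listener.start()

logger.error('Log error')  # returns immediately

listener.stop()  # drains the queue before the program exits
```

The queue decouples the application from network latency, and listener.stop() makes sure no records are lost at shutdown.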

Save the logs to MongoDB to perform analytics and/or quick queries:

from flask import Flask, request
from pymongo import MongoClient

app = Flask(__name__)

# Mongo setup:
client = MongoClient()
db = client['logs']
collection = db['testlog']

@app.route('/log', methods=['POST'])
def index():
    # Convert the form POST object into a plain dict suitable for MongoDB
    data = request.form.to_dict()
    response = collection.insert_one(data)
    print(response.inserted_id)
    return ""

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=3000, debug=True)
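One wrinkle with storing the form data directly: every POSTed value arrives as a string, so Mongo range queries such as {'levelno': {'$gte': 40}} won’t match. A small helper (field names taken from the record structure shown earlier) can cast the numeric fields before insertion:

```python
# Record fields that should be stored as numbers rather than strings.
INT_FIELDS = {'levelno', 'lineno', 'process', 'thread'}
FLOAT_FIELDS = {'created', 'msecs', 'relativeCreated'}

def normalize_record(form):
    """Return a copy of the form dict with numeric fields cast."""
    doc = dict(form)
    for key in INT_FIELDS & doc.keys():
        doc[key] = int(doc[key])
    for key in FLOAT_FIELDS & doc.keys():
        doc[key] = float(doc[key])
    return doc
```

In the Flask view, passing request.form through this helper before insert_one would make level- and time-based queries work as expected.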
