# jog

JSON document logging & reporting inspired by Loggly for node.js. View the project on GitHub: visionmedia/jog.


## Installation

    $ npm install jog



## API

### log.write(level, msg[, obj])

Write to the logs:

    log.write(level, msg[, obj])
    log.debug(msg[, obj])
    log.info(msg[, obj])
    log.warn(msg[, obj])
    log.error(msg[, obj])
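Jog merges the `.level`, `.msg`, and `.timestamp` properties into the given object, as the example output later in this README shows. A plain-JS sketch of the resulting document shape; `writeDoc` is a hypothetical helper for illustration, not part of jog's API:

```javascript
// sketch of the document a call like log.info('something happened', { id: 1 })
// produces; writeDoc is a hypothetical name, not part of jog
function writeDoc(level, msg, obj) {
  obj = obj || {};
  obj.level = level;
  obj.msg = msg;
  obj.timestamp = Date.now();
  return obj;
}

var doc = writeDoc('info', 'something happened', { id: 1 });
// doc resembles { id: 1, level: 'info', msg: 'something happened', timestamp: ... }
```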


### log.ns(obj)

Namespace with the given `obj`, returning a new `Jog` instance that inherits the previous properties. You may call this several times to produce increasingly specific loggers.

    var log = jog(new jog.FileStore('/tmp/log'));

    // log user 5
    log = log.ns({ uid: 5 });

    // log video 99 for user 5
    log = log.ns({ vid: 99 });

    // or both at once
    log = log.ns({ uid: 5, vid: 99 });
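The layering described above can be sketched in plain JavaScript: each `ns()` call combines its properties with the inherited context. The `merge` helper below is illustrative only, not jog's implementation:

```javascript
// illustrative sketch of ns() property inheritance: each call layers
// its properties over the previous context (merge is a hypothetical name)
function merge(context, obj) {
  var merged = {}, key;
  for (key in context) merged[key] = context[key];
  for (key in obj) merged[key] = obj[key];
  return merged;
}

var ctx = merge({}, { uid: 5 });   // logger for user 5
ctx = merge(ctx, { vid: 99 });     // more specific: user 5, video 99
// ctx is now { uid: 5, vid: 99 }
```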

### log.stream([options])

Return an `EventEmitter` emitting "data" and "end" events. Pass `{ end: false }` to stream indefinitely, tailing the store for new documents.


### log.clear(fn)

Clear the logs and invoke the callback `fn`.


## Example

Log random data using the `FileStore` and tail the file for changes (typically from separate processes). Jog will add the `.level` and `.msg` properties for you.

    var jog = require('jog')
      , log = jog(new jog.FileStore('/tmp/tail'))
      , id = 0;

    // generate random log data
    function again() {
      log.info('something happened', { id: ++id, user: 'Tobi' });
      setTimeout(again, Math.random() * 100 | 0);
    }

    again();

    // tail the json "documents"
    log.stream({ end: false, interval: 500 })
      .on('data', function(obj){
        console.log(obj);
      });

yielding:


    { id: 1,
      level: 'info',
      msg: 'something happened',
      timestamp: 1332907641734 }
    { id: 2,
      level: 'info',
      msg: 'something happened',
      timestamp: 1332907641771 }


## jog(1)

      Usage: jog [options]

      Options:

        -h, --help         output usage information
        -V, --version      output the version number
        -F, --file <path>  load from the given <path>
        -R, --redis        load from redis store
        -s, --select <fn>  use the given <fn> for filtering
        -m, --map <fn>     use the given <fn> for mapping
        -c, --color        enable colors for json output


View all logs from "tobi". Within the function bodies given to `--select` and `--map`, the `_` object represents the current document; it's all just JavaScript.

    $ jog --file /tmp/jog --select "_.user == 'tobi'"
    [ { user: 'tobi',
        duration: 1000,
        level: 'info',
        msg: 'rendering video',
        timestamp: 1332861272100 },
      { user: 'tobi',
        duration: 2000,
        level: 'info',
        msg: 'compiling video',
        timestamp: 1332861272100 } ]

Filter video compilation durations from "tobi" only:

    $ jog --file /var/log/videos.log --select "_.user == 'tobi'" --map _.duration
    [ 1000, 2000, 1200, 1000, 2000, 1200 ]

The --map flag can be used several times:

    $ jog --file /var/log/videos.log --select "_.vid < 5" --map _.msg --map "_.split(' ')"
    [ [ 'compiling', 'video' ],
      [ 'compiling', 'video' ],
      [ 'compiling', 'video' ],
      [ 'compiling', 'video' ] ]
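The `--select` and `--map` expressions behave like JavaScript's `Array#filter` and `Array#map` applied over the parsed log documents. A plain-JS sketch of the duration pipeline above, using illustrative data:

```javascript
// illustrative documents; real ones come from a jog store
var docs = [
  { user: 'tobi', vid: 1, msg: 'compiling video', duration: 1000 },
  { user: 'loki', vid: 2, msg: 'compiling video', duration: 5000 },
  { user: 'tobi', vid: 3, msg: 'rendering video', duration: 2000 }
];

// equivalent of: --select "_.user == 'tobi'" --map _.duration
var durations = docs
  .filter(function(_) { return _.user == 'tobi'; })
  .map(function(_) { return _.duration; });
// durations is [ 1000, 2000 ]
```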


## Stores

By default Jog ships with the `FileStore` and `RedisStore`, however any object implementing the following methods will work:

- `add(obj)` to add a log object
- `stream() => EventEmitter` to stream data
- `stream({ end: false }) => EventEmitter` to stream data indefinitely
- `clear(fn)` to clear the logs


### jog.FileStore(path)

Store logs on disk at the given `path`.

    var jog = require('jog');
    var log = jog(new jog.FileStore('/var/log/videos.log'));


### jog.RedisStore

Store logs in Redis.

    var jog = require('jog');
    var log = jog(new jog.RedisStore);


## Performance

No profiling or optimizations yet, but the `FileStore` can stream back 250,000 documents (~21MB) in 1.2 seconds on my MacBook Air. The `RedisStore` streams the same 250,000 documents back in 2.8 seconds.

## Running tests

    $ npm install
    $ redis-server &
    $ make test