cherrypy.lib.cpstats module

CPStats, a package for collecting and reporting on program statistics.

Overview

Statistics about program operation are an invaluable monitoring and debugging tool. Unfortunately, the gathering and reporting of these critical values is usually ad-hoc. This package aims to provide a centralized place for gathering statistical performance data, a structure for recording that data which allows it to be extrapolated into more useful information, and a method of serving that data to both human investigators and monitoring software. Let’s examine each of those in more detail.

Data Gathering

Just as Python’s logging module provides a common, importable mechanism for gathering and emitting messages, performance statistics would benefit from a similar shared mechanism, one that does not require each package that wishes to collect stats to import a third-party module. Therefore, we choose to reuse the logging module by adding a statistics object to it.

That logging.statistics object is a nested dict. It is not a custom class, because that would:

  1. require libraries and applications to import a third-party module in order to participate

  2. inhibit innovation in extrapolation approaches and in reporting tools, and

  3. be slow.

There are, however, some specifications regarding the structure of the dict:

{
  +----"SQLAlchemy": {
  |        "Inserts": 4389745,
  |        "Inserts per Second":
  |            lambda s: s["Inserts"] / (time() - s["Start"]),
  |  C +---"Table Statistics": {
  |  o |        "widgets": {-----------+
N |  l |            "Rows": 1.3M,      | Record
a |  l |            "Inserts": 400,    |
m |  e |        },---------------------+
e |  c |        "froobles": {
s |  t |            "Rows": 7845,
p |  i |            "Inserts": 0,
a |  o |        },
c |  n +---},
e |        "Slow Queries":
  |            [{"Query": "SELECT * FROM widgets;",
  |              "Processing Time": 47.840923343,
  |              },
  |             ],
  +----},
}

The logging.statistics dict has four levels. The topmost level is nothing more than a set of names to introduce modularity, usually along the lines of package names. If the SQLAlchemy project wanted to participate, for example, it might populate the item logging.statistics['SQLAlchemy'], whose value would be a second-layer dict we call a “namespace”. Namespaces help multiple packages to avoid collisions over key names, and make reports easier to read, to boot. The maintainers of SQLAlchemy should feel free to use more than one namespace if needed (such as ‘SQLAlchemy ORM’). Note that there are no case or other syntax constraints on the namespace names; they should be chosen to be maximally readable by humans (neither too short nor too long).

Each namespace, then, is a dict of named statistical values, such as ‘Requests/sec’ or ‘Uptime’. You should choose names which will look good on a report: spaces and capitalization are just fine.

In addition to scalars, values in a namespace MAY be a (third-layer) dict, or a list, called a “collection”. For example, the CherryPy StatsTool keeps track of what each request is doing (or has most recently done) in a ‘Requests’ collection, where each key is a thread ID; each value in the subdict MUST be a fourth-layer dict (whew!) of statistical data about that thread. We call each subdict in the collection a “record”. Similarly, the StatsTool keeps a list of slow queries, where each record contains data about one slow query, in order.
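
For illustration, here is a minimal sketch of that shape in code. The ‘My App’ namespace and its key names are invented for this example and are not part of the spec:

import logging
import threading
import time

# Make sure the shared repository exists (see 'Usage' below).
if not hasattr(logging, 'statistics'):
    logging.statistics = {}

# Hypothetical namespace; the names below are illustrative only.
ns = logging.statistics.setdefault('My App', {})
ns['Enabled'] = True                # scalar value
ns['Start Time'] = time.time()      # scalar value
ns['Requests'] = {}                 # dict collection: records keyed by thread id
ns['Slow Queries'] = []             # list collection: one record per slow query

# A record (fourth-layer dict) for the current thread.
ns['Requests'][threading.get_ident()] = {
    'Start Time': time.time(),
    'Bytes Read': 0,
}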

Values in a namespace or record may also be functions, which brings us to:

Extrapolation

The collection of statistical data needs to be fast, as close to unnoticeable as possible to the host program. That requires us to minimize I/O, for example, but in Python it also means we need to minimize function calls. So when you are designing your namespace and record values, try to insert the most basic scalar values you already have on hand.

When it comes time to report on the gathered data, however, we usually have much more freedom in what we can calculate. Therefore, whenever reporting tools (like the provided StatsPage CherryPy class) fetch the contents of logging.statistics for reporting, they first call extrapolate_statistics (passing the whole statistics dict as the only argument). This makes a deep copy of the statistics dict, so that the reporting tool can iterate over it, and even change it, without harming the original. But it also expands any functions in the dict by calling them. For example, you might have a ‘Current Time’ entry in the namespace with the value “lambda scope: time.time()”. The “scope” parameter is the current namespace dict (or record, if we’re currently expanding one of those instead), allowing you access to existing static entries. If you’re truly evil, you can even modify more than one entry at a time.
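
A small sketch of how this looks in practice (the ‘My App’ namespace and its entries are invented for this example):

import logging
import time

from cherrypy.lib import cpstats

if not hasattr(logging, 'statistics'):
    logging.statistics = {}

# Collection side: store cheap scalars plus deferred calculations.
ns = logging.statistics.setdefault('My App', {})
ns['Start Time'] = time.time()
ns['Current Time'] = lambda scope: time.time()
ns['Uptime'] = lambda scope: time.time() - scope['Start Time']

# Reporting side: extrapolate_statistics deep-copies the whole dict and
# replaces each function with the result of calling it on its scope.
report = cpstats.extrapolate_statistics(logging.statistics)
print(report['My App']['Uptime'])   # a number now, not a lambda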

However, don’t try to calculate an entry and then use its value in further extrapolations; the order in which the functions are called is not guaranteed. This can lead to a certain amount of duplicated work (or a redesign of your schema), but that’s better than complicating the spec.

After the whole thing has been extrapolated, it’s time for:

Reporting

The StatsPage class grabs the logging.statistics dict, extrapolates it all, and then transforms it to HTML for easy viewing. Each namespace gets its own header and attribute table, plus an extra table for each collection. This is NOT part of the statistics specification; other tools can format how they like.

You can control which columns are output and how they are formatted by updating StatsPage.formatting, which is a dict that mirrors the keys and nesting of logging.statistics. The difference is that, instead of data values, it has formatting values. Use None for a given key to indicate to the StatsPage that a given column should not be output. Use a string with formatting (such as ‘%.3f’) to interpolate the value(s), or use a callable (such as lambda v: v.isoformat()) for more advanced formatting. Any entry which is not mentioned in the formatting dict is output unchanged.
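
For example, a hypothetical set of overrides for a namespace named ‘My App’ (the names here are invented; the nesting mirrors logging.statistics):

from cherrypy.lib import cpstats

cpstats.StatsPage.formatting['My App'] = {
    'Uptime': '%.1f',                  # format string applied to the value
    'Start Time': cpstats.iso_format,  # callable for advanced formatting
    'Enabled': None,                   # suppress this column entirely
    'Requests': {                      # per-collection column formatting
        'Bytes Read': '%s',
    },
}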

Monitoring

Although the HTML output takes pains to assign unique ids to each <td> with statistical data, you’re probably better off fetching /cpstats/data, which outputs the whole (extrapolated) logging.statistics dict in JSON format. That is probably easier to parse, and doesn’t have any formatting controls, so you get the “original” data in a consistently serialized format. Note: there’s no treatment yet for datetime objects. Try time.time() instead for now if you can. Nagios will probably thank you.
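
A minimal sketch of polling that endpoint, assuming a StatsPage instance is mounted at /cpstats as shown under ‘Usage’ below (host and port are placeholders for your deployment):

import json
import urllib.request

with urllib.request.urlopen('http://localhost:8080/cpstats/data') as resp:
    stats = json.load(resp)

# Each top-level key is a namespace; each value is its (extrapolated) dict.
for namespace, values in stats.items():
    print(namespace, sorted(values))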

Turning Collection Off

It is recommended that each namespace have an “Enabled” item which, if False, stops collection (but not reporting) of statistical data. Applications SHOULD provide controls to pause and resume collection by setting these entries to False or True, if present.
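
For example (the ‘My App’ namespace is illustrative; the provided StatsPage also exposes pause(namespace) and resume(namespace) handlers for toggling this flag):

import logging

ns = logging.statistics.get('My App', {})
ns['Enabled'] = False   # pause collection; reporting still works
# ... later ...
ns['Enabled'] = True    # resume collection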

Usage

To collect statistics on CherryPy applications:

from cherrypy.lib import cpstats
# 'appconfig' here is your application's config dict, for example the one
# passed to cherrypy.tree.mount() or cherrypy.quickstart().
appconfig['/']['tools.cpstats.on'] = True

To collect statistics on your own code:

import logging
import time

# Initialize the repository
if not hasattr(logging, 'statistics'):
    logging.statistics = {}
# Initialize my namespace
mystats = logging.statistics.setdefault('My Stuff', {})
# Initialize my namespace's scalars and collections
mystats.update({
    'Enabled': True,
    'Start Time': time.time(),
    'Important Events': 0,
    'Events/Second': lambda s: (
        (s['Important Events'] / (time.time() - s['Start Time']))),
    })
...
for event in events:
    ...
    # Collect stats
    if mystats.get('Enabled', False):
        mystats['Important Events'] += 1

To report statistics:

root.cpstats = cpstats.StatsPage()
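
Putting the pieces together, a minimal, hypothetical application with both collection and reporting enabled (the Root class and mount points are invented for this example):

import cherrypy
from cherrypy.lib import cpstats

class Root:
    @cherrypy.expose
    def index(self):
        return 'Hello, world!'

root = Root()
root.cpstats = cpstats.StatsPage()   # HTML report at /cpstats, JSON at /cpstats/data

cherrypy.quickstart(root, '/', {'/': {'tools.cpstats.on': True}})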

To format statistics reports:

See 'Reporting', above.
class cherrypy.lib.cpstats.ByteCountWrapper(rfile)[source]

Bases: object

Wraps a file-like object, counting the number of bytes read.

close()[source]
next()[source]
read(size=-1)[source]
readline(size=-1)[source]
readlines(sizehint=0)[source]
class cherrypy.lib.cpstats.StatsPage[source]

Bases: object

data()[source]
formatting = {
    'CherryPy Applications': {
        'Bytes Read/Request': '%.3f',
        'Bytes Read/Second': '%.3f',
        'Bytes Written/Request': '%.3f',
        'Bytes Written/Second': '%.3f',
        'Current Time': <function iso_format>,
        'Enabled': <function pause_resume.<locals>._pause_resume>,
        'Requests': {
            'Bytes Read': '%s',
            'Bytes Written': '%s',
            'End Time': None,
            'Processing Time': '%.3f',
            'Start Time': None,
        },
        'Requests/Second': '%.3f',
        'Slow Queries': {
            'End Time': None,
            'Processing Time': '%.3f',
            'Start Time': <function iso_format>,
        },
        'Start Time': <function iso_format>,
        'Total Time': '%.3f',
        'URI Set Tracking': {
            'Avg': '%.3f',
            'Max': '%.3f',
            'Min': '%.3f',
            'Sum': '%.3f',
        },
        'Uptime': '%.3f',
    },
    'CherryPy WSGIServer': {
        'Connections/second': '%.3f',
        'Enabled': <function pause_resume.<locals>._pause_resume>,
        'Start time': <function iso_format>,
    },
}
get_dict_collection(v, formatting)[source]

Return ([headers], [rows]) for the given collection.

get_list_collection(v, formatting)[source]

Return ([headers], [subrows]) for the given collection.

get_namespaces()[source]

Yield (title, scalars, collections) for each namespace.

index()[source]
pause(namespace)[source]
resume(namespace)[source]
class cherrypy.lib.cpstats.StatsTool[source]

Bases: Tool

Record various information about the current request.

_setup()[source]

Hook this tool into cherrypy.request.

The standard CherryPy request object will automatically call this method when the tool is “turned on” in config.

record_start()[source]

Record the beginning of a request.

record_stop(uriset=None, slow_queries=1.0, slow_queries_count=100, debug=False, **kwargs)[source]

Record the end of a request.
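
Because these keyword arguments come from the tool's merged configuration, they can usually be tuned through ordinary tool config. A hedged sketch (the threshold values below are invented for illustration):

config = {
    '/': {
        'tools.cpstats.on': True,
        # Hypothetical tuning: flag queries slower than 2 seconds and keep
        # at most 50 of them; these are passed through to record_stop.
        'tools.cpstats.slow_queries': 2.0,
        'tools.cpstats.slow_queries_count': 50,
    },
}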

cherrypy.lib.cpstats._get_threading_ident()[source]
cherrypy.lib.cpstats.average_uriset_time(s)[source]
cherrypy.lib.cpstats.extrapolate_statistics(scope)[source]

Return an extrapolated copy of the given scope.

cherrypy.lib.cpstats.iso_format(v)[source]
cherrypy.lib.cpstats.locale_date(v)[source]
cherrypy.lib.cpstats.pause_resume(ns)[source]
cherrypy.lib.cpstats.proc_time(s)[source]