Tracing Python — An API
New Python Instrumentation

We’re pleased to announce a new Python instrumentation version — Oboeware 1.1!

We’ve added a few new libraries recently, but we’re really excited about the new customization API we’ve introduced in this version. More than just a Python bump, this is the first package we’re releasing with an implementation of our new Oboe API. The Oboe API is a common set of idioms and metaphors for extending Tracelytics instrumentation or quickly writing your own from the ground up. We’re excited to get it out there, and we’re even more excited to see what you build with it!

Conceptually, the Oboe API is a multi-tiered system that allows instrumentation of everything from simple function calls to crazy distributed asynchronous event-driven applications. There are three parts: the low-level API, the high-level API, and language-specific functions.

Language-specific functions
The language-specific functions are just that: language-specific idioms that give you the most tracing bang for your buck. In Python, we’ve put together four:

  • The @trace decorator allows you to start a trace wherever you want. While we catch most web requests, sometimes it can be valuable to trace your backend processes as well. This can give you visibility into Celery jobs, command-line scripts, or cron jobs.
  • In an existing trace, the @log_method decorator creates a new layer. We use this extensively in oboeware to instrument standard Python libraries like urllib2, pylibmc, or pymongo. If you’ve got an internal API client, or even just a wrapper around an existing library, use this to make sure you’re seeing exactly how performant that library is!
  • The @profile_block and @profile_function decorators allow you to mark particular blocks of code in your app, and we’ll keep track of them in Tracelytics. If you’ve got a particularly thorny algorithm, or just a lot of log parsing to do, throw it inside a @profile_block and make sure you’re never losing sight of how long that job is taking.

[Screenshot: PythonAPI-1]
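
To make this concrete, here is a rough sketch of how these decorators might be applied to a background job. The import path, the decorator arguments, and the context-manager form of profile_block are assumptions for illustration; check the oboeware documentation for the exact signatures.

    # Illustrative sketch only: the import path, decorator arguments, and the
    # context-manager use of profile_block are assumptions, not documented API.
    import oboe

    @oboe.trace()                        # start a new trace for a backend job
    def nightly_report():
        parse_logs(fetch_report_data())

    @oboe.log_method('report-fetch')     # add a layer inside the existing trace
    def fetch_report_data():
        return ["line one", "line two"]  # stands in for an internal API client

    @oboe.profile_function('parse-logs')             # track this function in Tracelytics
    def parse_logs(lines):
        with oboe.profile_block('thorny-algorithm'):  # profile just this block
            return [line.upper() for line in lines]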

We understand that modifying your code with performance annotations can be trying. These functions aim to be one-line modifications that open up whole new areas of visibility in your app, with no change in functionality.

High-Level API
Do you want to trace your custom-built, high-performance, NumPy simulation library? Are you looking for a simple way to instrument your internal-but-open-sourced RPC protocol? Or do you maybe want to trace all 7 custom tiers in your 7-tier web app?

The high-level API provides a logging-like set of functions that report to the Tracelyzer agent. With these functions, you can control exactly what, where, and how you report your performance information. Additionally, this lets you control the key/value pairs reported, which can be used to mark certain layers for special treatment in Tracelytics. For instance, adding the KVOp key identifies the layer as cache. Adding these keys to a fictional hybrid Cassandra / memcache cache would allow visualization in the same place.
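
As a rough illustration, instrumenting that hybrid cache might look something like the sketch below. The entry/exit function names and their arguments are hypothetical stand-ins, not the exact high-level API.

    # Hypothetical sketch of logging-like instrumentation calls; the function
    # names and arguments are assumptions, not the exact oboe high-level API.
    import oboe

    _memcache = {}
    _cassandra = {'user:42': 'Ada'}   # stand-ins for the real cache clients

    def hybrid_cache_get(key):
        # Report an entry event for a custom 'hybrid-cache' layer; the KVOp
        # key/value pair marks the layer as a cache in Tracelytics.
        oboe.log_entry('hybrid-cache', keys={'KVOp': 'get', 'KVKey': key})
        try:
            return _memcache.get(key, _cassandra.get(key))
        finally:
            oboe.log_exit('hybrid-cache')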

[Screenshot: PythonAPI-2]

Layers introduced by the high-level API aren’t an afterthought — they’re treated exactly the same as any other layer within the Tracelytics interface. Each layer gets its own visualization and set of filters, and you can set Alerts on it just as you would on full requests or application times.

Low-Level API
For the truly hardcore, we also have a low-level API. In addition to all of the power and flexibility of the high-level API, this level adds the concept of a Context — an object that encapsulates the request context needed to trace a request through your full stack, whatever evented, microthreaded, or serialized paradigm you’re working under.
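
Here is a very rough sketch of the idea: handing trace state to a worker thread so that work done there is attributed to the original request. The Context class and its methods shown are illustrative assumptions rather than the documented low-level API.

    # Sketch of propagating a trace Context across an async boundary. The
    # Context class and its get/set methods are assumptions for illustration.
    import queue
    import threading
    import oboe

    work_queue = queue.Queue()

    def handle_request(request):
        ctx = oboe.Context.get_default()   # hypothetical: capture the current trace context
        work_queue.put((ctx, request))     # hand the context to a worker

    def worker():
        while True:
            ctx, request = work_queue.get()
            ctx.set_as_default()           # hypothetical: resume the trace in this thread
            process(request)               # events logged here join the original trace

    def process(request):
        pass  # real work, instrumented with the high-level API

    threading.Thread(target=worker, daemon=True).start()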

Build it out!
We hope this release is the start of a long and varied journey to tracing everything under the sun. Are you building something with it? Let us know! Are you thinking about it? Sign up for a TraceView account, and get started today.

About TR Jordan
A veteran of MIT’s Lincoln Labs, TR is a reformed physicist and full-stack hacker – for some limited definition of full stack. After a few years as Software Development Lead with Thermopylae Sciences and Technology, he left to join Tracelytics as its first engineer. Following Tracelytics’ merger with AppNeta, TR was tapped to run all of its developer and market evangelism efforts. TR still harbors a not-so-secret love for Matlab-esque graphs and half-baked statistics, as well as elegant and highly performant code. Read more of his articles at www.appneta.com/blog or visit www.appneta.com.


