We're commonly asked to provide benchmarking results to prospective customers who are concerned about processing data in a timely fashion. Bearing in mind Mark Twain's famous dictum about statistics, we thought we would blog briefly about the complexities of ASN.1 benchmarking; we provide a link below to a paper with some extended discussion.

Benchmarking software performance is difficult to begin with: there is no standard hardware configuration against which to measure it. ASN.1 applications complicate benchmarking further by introducing many additional variables:

  • encoding rules — BER allows two length forms (definite and indefinite) and has two canonical subsets, CER and DER. PER supports aligned and unaligned variants, each with standard and canonical forms (ASN1C uses canonical PER in all applications). XER has canonical and non-canonical forms.

  • specification complexity — more complex specifications produce larger generated code and generally slower processing.

  • message complexity — large messages may strain available memory, and memory management can come to dominate processing time.

  • code generation options — ASN1C supports many options that can improve or degrade performance. Strict constraint checking, for example, is more computationally expensive than lax constraint checking.

  • runtime compilation options — ASN1C comes with debugging libraries that are larger and slower than the optimized runtime libraries intended for deployment. Some debugging features, such as bit trace handling, are very computationally expensive.

  • programming language choice — while modern programming languages often perform comparably (especially by fuzzier measures such as user experience), we have found that lower-level languages still enjoy a perceptible advantage.

  • user implementation — in many cases, performance can be improved by unwrapping outer types and processing the inner data in batches (see, for example, the sample_ber/tap3batch program we ship with each language; a sketch of the pattern follows this list).
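
To make that batching idea concrete, here is a minimal C sketch that walks a BER file one top-level record at a time instead of materializing the entire outer type in memory. It assumes definite-length encodings and a file whose outer element directly wraps the records (real TAP3 files have more structure), and it merely skips each element where a real application would call a generated decode function; it illustrates the pattern rather than the ASN1C API.

    #include <stdio.h>

    /* Read a BER tag; returns the first tag byte, or EOF. High-tag-number
     * (multi-byte) tags are consumed but not returned in full. */
    static int read_tag(FILE *fp)
    {
        int b = fgetc(fp);
        if (b == EOF) return EOF;
        if ((b & 0x1F) == 0x1F) {            /* high tag number form */
            int c;
            do { c = fgetc(fp); } while (c != EOF && (c & 0x80));
            if (c == EOF) return EOF;
        }
        return b;
    }

    /* Read a definite-form BER length; returns -1 on error or on the
     * indefinite form, which this sketch does not handle. */
    static long read_length(FILE *fp)
    {
        int b = fgetc(fp);
        if (b == EOF || b == 0x80) return -1;
        if (b < 0x80) return b;              /* short form */
        long len = 0;
        for (int i = b & 0x7F; i > 0; i--) { /* long form */
            int c = fgetc(fp);
            if (c == EOF) return -1;
            len = (len << 8) | c;
        }
        return len;
    }

    int main(int argc, char **argv)
    {
        FILE *fp = fopen(argc > 1 ? argv[1] : "cdrs.ber", "rb");
        if (!fp) { perror("fopen"); return 1; }

        /* Consume the outer header (tag + length) but not its content,
         * then walk the inner elements one record at a time. */
        if (read_tag(fp) == EOF || read_length(fp) < 0) { fclose(fp); return 1; }

        long count = 0;
        while (read_tag(fp) != EOF) {
            long len = read_length(fp);
            if (len < 0) break;
            /* A real application would decode this element with a
             * generated decoder, process it, and release it; here we
             * just skip it. Memory use stays proportional to one
             * record, not to the whole transfer file. */
            fseek(fp, len, SEEK_CUR);
            count++;
        }
        printf("records seen: %ld\n", count);
        fclose(fp);
        return 0;
    }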

Our users are normally interested in "records-per-second" metrics: how many Ericsson R12 CDRs we can process in a second, for example, or whether we can handle the data coming from a switch in real time. This sort of metric can be deceiving, though: decoding 3,000 records per second does not mean much if the messages are only two bytes long, since that works out to less than 6 KB of data per second. Looking at overall throughput (in kilobytes per second, for example) is often a better way to evaluate needs and performance.
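
As a toy illustration of the difference, the following C snippet expresses that same hypothetical run both ways; the figures are the example above, not measured data.

    #include <stdio.h>

    int main(void)
    {
        long   records     = 3000;          /* records decoded in the run */
        long   total_bytes = records * 2;   /* two-byte messages          */
        double elapsed     = 1.0;           /* wall-clock seconds         */

        printf("%.0f records/sec\n", records / elapsed);           /* 3000 */
        printf("%.1f KB/sec\n", total_bytes / 1024.0 / elapsed);   /* ~5.9 */
        return 0;
    }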

In conclusion, then, we would make the following recommendations:

  1. First, identify the performance need, preferably using a metric that is consistent across all invocations of the application.
  2. Second, identify likely bottlenecks in performance: message size and memory use are the most common in our experience. If needed, adjust your interface code to reduce memory use.
  3. Third, deploy your applications with optimized runtime libraries instead of non-optimized libraries.
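
For the first recommendation, a small harness that reports both metrics for the same timed run keeps the numbers comparable across machines and message sizes. The C sketch below is a skeleton under that assumption: decode_one() is a hypothetical stand-in for a generated decode function, stubbed with the two-byte example so the program runs as written.

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical stand-in for a generated decode call: returns the
     * number of bytes consumed, or 0 when the input is exhausted.
     * Stubbed with the two-byte example so the program runs as-is. */
    static long decode_one(void)
    {
        static long calls = 0;
        return (calls++ < 3000) ? 2 : 0;
    }

    int main(void)
    {
        long records = 0, total_bytes = 0, n;
        clock_t t0 = clock();               /* CPU time; use a wall clock
                                               for I/O-bound workloads   */
        while ((n = decode_one()) > 0) {    /* decode until input ends   */
            records++;
            total_bytes += n;
        }

        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        if (secs <= 0.0) secs = 1e-9;       /* avoid dividing by zero    */

        printf("%ld records in %.6f s: %.0f records/sec, %.1f KB/sec\n",
               records, secs, records / secs, total_bytes / 1024.0 / secs);
        return 0;
    }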

While performance will certainly vary from application to application, our runtime libraries have been used in real-time applications as well as large-scale data clearing houses. If you have questions about how well ASN1C might perform for you, feel free to drop us a line; we'll be glad to chat. Click here for a longer discussion of benchmarking, including data points collected against our Java runtime libraries.

