Factor/Optimization

Data structures

A good choice of data structures is critical.

For example, when constructing a sequence in a loop, it is better to push elements onto a vector with push and push-all than to use suffix or append, since the latter allocate a new sequence and copy all existing elements on every call, turning a linear loop into a quadratic one.
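A minimal sketch of the idiom (the word name build-squares is hypothetical):

USING: kernel math sequences vectors ;

: build-squares ( n -- vec )
    V{ } clone swap [ sq over push ] each-integer ;

Each iteration pushes onto the same vector, so the loop does no copying; building the same sequence with suffix would copy the growing sequence on every pass.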

When taking sequences apart, using slices instead of subsequence operations can yield a performance boost. For example, calling rest on a one-million-element array allocates a new array with 999,999 elements, whereas calling rest-slice allocates only a small slice object that shares storage with the original array, a constant-time operation.
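Side by side, the two approaches look like this:

USING: sequences ;

{ 1 2 3 4 } rest        ! allocates a fresh 3-element array
{ 1 2 3 4 } rest-slice  ! allocates only a slice viewing the original

The slice supports the usual sequence operations, so downstream code rarely needs to care which one it received.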

Static stack effects

The first step when optimizing Factor code is to ensure that it is compiled with the optimizing compiler. A word will be optimized if and only if it has a static stack effect. To check this, run:

[ my-word ] infer.

You can define unit tests which assert this is true,

\ my-word must-infer
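In a test file this might look as follows (double is a hypothetical word; must-infer fails the test if the stack effect cannot be inferred):

USING: math tools.test ;

: double ( x -- y ) 2 * ;

\ double must-infer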

High-level optimizer

To see how the high level optimizer treats a word, you can use the following tool:

USE: compiler.tree.debugger
\ foo optimized.

The output shows whether generic dispatch has been eliminated, whether generic arithmetic has been converted to machine arithmetic, and which words have been inlined.

You can also pass a quotation to optimized.:

[ 100 [ ] times ] optimized.

The high-level compiler can be given hints and inline declarations, which affect the output of the above.
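As a sketch (the word names are hypothetical): an inline declaration merges a word's definition into its callers, and HINTS: asks the compiler to additionally compile specialized versions of a word for the listed input classes:

USING: arrays hints kernel math math.functions sequences ;

: sum-squares ( seq -- n ) [ sq ] map-sum ; inline

: norm ( seq -- x ) sum-squares sqrt ;

HINTS: norm { array } ;

Comparing \ norm optimized. before and after the HINTS: line shows the effect of the specialization.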

Big picture: the optimizing compiler works on single words. Use inline declarations to gather code into those words, and give the compiler assistance in figuring out what types things are. It does well when it knows the types, but often lacks that information.

Low-level optimizer

To see how the low level optimizer treats a word,

USE: compiler.cfg.debugger
\ foo test-optimizer cfgs.

Disassembly

To see the final code generated for a word, use the disassemble word.
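Assuming a word my-word, this looks like:

USE: tools.disassembler
\ my-word disassemble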

Disassembling words is accomplished using libudis86 on x86, and gdb on PowerPC.

Note that libudis86 needs to be compiled with the --enable-shared option.

Additional advice

  • Take a look at the code in the extra/benchmark/ directory. These programs are heavily optimized, yet still written in a high-level style.
  • Avoid using dynamic variables in tight loops; use locals or the stack instead.
  • Immutable locals and stack shuffling are equivalent semantically, and compile to the same code; pick whichever one makes for clearer code.
  • Mutable locals, on the other hand, incur a performance penalty because a heap-allocated container is made to hold the mutable value.
  • Core language features such as generic words, the case combinator, and so on, are highly-optimized, and might be faster than something you'd cook up on your own. For example, case with integer keys turns into a jump table, member? with a literal sequence becomes a hashed dispatch or a bit array lookup, generic word dispatch scales very well with a large number of methods, and so on.
  • Don't use low-level words like fixnum+ directly. Given the right set of inlining declarations and HINTS:, the compiler can eliminate generic arithmetic itself, and the result will be safer and more readable than if you attempt these transformations by hand. See also TYPED: and the (dangerous) declare word.
  • On the other hand, unsafe sequence operations are sometimes useful when accessing sequence elements in tight loops; however, try to express your code using map, reduce, or similar combinators instead, since they already use unsafe operations under the hood.
  • The optimizing and non-optimizing compilers have different characteristics. Some abstractions have zero cost in one, but not in the other (e.g., locals).
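As an illustration of the TYPED: approach mentioned above (the word dist-sq is hypothetical), input and output classes are declared in the stack effect, letting the compiler use machine float arithmetic throughout:

USING: kernel math typed ;

TYPED: dist-sq ( x: float y: float -- d: float ) [ sq ] bi@ + ;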

Todo

  • the math.vectors.simd and specialized-arrays docs have some optimization advice that could be folded into this article. Slava's blog posts also explain much of this.
  • Page needs discussion of tuple issues: read-only slots, escape analysis, open coding.

This revision created on Thu, 9 Sep 2010 18:24:56 by mncharity (Todo: +additional sources of advice.)