The future is tree-shaped

I've been thinking and reading more about parallelism recently. This set of slides from Guy Steele distilled a lot for me.

To realise the performance of parallel hardware we need to optimize our programs for computational bandwidth rather than latency. In programming terms this means deprecating accumulation (cons, fold, streams, sequences) and favouring divide-and-conquer, which in turn suggests trees as the fundamental abstract building block for data.
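
As a toy illustration of the difference (my example, not from the slides), here's the same reduction written two ways in C++: as a left fold, which is inherently sequential because each step depends on the previous one, and as a divide-and-conquer over a binary split, where the two halves are independent and could run on separate cores.

#include <cstddef>
#include <vector>

// Sequential accumulation: each step depends on the previous one,
// so there is nothing for extra cores to do.
long fold_sum(const std::vector<long>& xs) {
    long acc = 0;
    for (long x : xs) acc += x;
    return acc;
}

// Divide-and-conquer over the index range [lo, hi): the two halves share no
// state, so a scheduler is free to evaluate them on different cores and
// combine the results at the join.
long tree_sum(const std::vector<long>& xs, std::size_t lo, std::size_t hi) {
    if (hi - lo == 0) return 0;
    if (hi - lo == 1) return xs[lo];
    std::size_t mid = lo + (hi - lo) / 2;
    return tree_sum(xs, lo, mid) + tree_sum(xs, mid, hi);
}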

Idea for a global interpreter lock optimized for low contention

Slava is thinking about adding a global interpreter lock to Factor, much like the Python GIL, as a step on the path to a full multithreaded VM. This would allow Factor code to run while blocking FFI calls (e.g. C library calls) execute. As part of this, each FFI call would need to release the lock before calling into C code and re-acquire it before returning to Factor.

One of the problems with adding a GIL is that it penalizes the common single-threaded case. This got me thinking about how a mutex could be implemented that is optimized for the low-contention case, so as to minimize the performance impact on single-threaded apps. Here's the best approach I've come up with so far:

You need:
  1. a spinlock (implemented inline via cpu primitives)
  2. a boolean to represent the GIL
  3. an int to represent contention on the GIL
  4. an OS semaphore (a Win32 semaphore or a pthread condition variable)

The goal is that when there is no contention, FFI calls just use inline assembler primitives to acquire/release the spinlock and flip the GIL; only under contention do they fall back on the extra overhead of library calls to an OS semaphore.

Acquiring the lock

The idea is that if there's no contention then acquiring the lock is just a case of obtaining the spinlock and flipping the GIL boolean.

(code is half-baked pseudocode, sorry!)

- acquire spinlock
- if GIL == false:
    - GIL = true                  // acquire GIL
    - release spinlock
    - DONE!
- else:
    - increment contention-counter
    - release spinlock
    - goto LOOP

LOOP:                             // contention counter has been incremented by this thread
- acquire spinlock
- if GIL == false:
    - GIL = true                  // acquire GIL
    - decrement contention-counter
    - release spinlock
    - DONE!
- else:
    - release spinlock
    - wait on semaphore
    - goto LOOP

Releasing the lock

- acquire spinlock
- release GIL
- read contention counter
- release spinlock
- if contention counter is non-zero:
     notify semaphore
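
To pin the idea down, here's a rough, untested C++ sketch of the scheme (the names are made up and this is not how Factor would actually implement it). I've used a C++20 counting semaphore for the slow path; with a bare pthread condition variable you'd have to be careful about a wakeup getting lost between releasing the spinlock and starting to wait.

#include <atomic>
#include <semaphore>   // C++20

class low_contention_gil {
    std::atomic_flag spin;                // the spinlock (clear on construction in C++20)
    bool gil = false;                     // the GIL boolean, protected by 'spin'
    int contention = 0;                   // number of threads waiting for the GIL
    std::counting_semaphore<> sem{0};     // touched only on the contended path

    void spin_lock()   { while (spin.test_and_set(std::memory_order_acquire)) { /* spin */ } }
    void spin_unlock() { spin.clear(std::memory_order_release); }

public:
    void acquire() {
        spin_lock();
        if (!gil) { gil = true; spin_unlock(); return; }   // no contention: done
        ++contention;
        spin_unlock();
        for (;;) {                                         // LOOP
            spin_lock();
            if (!gil) { gil = true; --contention; spin_unlock(); return; }
            spin_unlock();
            sem.acquire();                                 // wait on semaphore
        }
    }

    void release() {
        spin_lock();
        gil = false;                                       // release GIL
        int waiters = contention;                          // read contention counter
        spin_unlock();
        if (waiters != 0) sem.release();                   // notify a waiter
    }
};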

This is just an idea at this stage. There was no working wifi on my train home today so I haven't done any decent research on this, and I haven't done any empirical testing yet. Also I'm not a multithreading expert, so if there's a glaring error in the idea or the implementation I'd be very pleased if somebody could point it out - thanks!

Adding atomic CAS instruction support to Factor's compiler

I said in the last post that I'd write a bit about adding a new machine instruction to the Factor compiler once I'd got round to actually doing it, so here it is:

If you've been following my blog you'll know that I wanted to utilise multiple cpu cores for a personal database project. Unfortunately Factor doesn't have an OS-threaded runtime yet, so to work around this I modified the Factor VM code to allow multiple VMs to run simultaneously on separate threads.

I'm now writing a concurrent queue so that messages can be passed internally between the running VMs. To implement the queue I wanted some fast atomic primitives, so I set about adding a compare-and-swap (CAS) instruction to Factor's compiler.

To implement CAS on x86 you basically need the CMPXCHG instruction and the LOCK prefix, so my first job was to get these added to Factor's x86 assembler DSL. I located the machine-code opcodes in the Intel manuals and added CMPXCHG and LOCK to the x86 assembler vocabulary thus:

basis/cpu/x86/assembler/assembler.factor:

: CMPXCHG ( dst src -- ) { HEX: 0f HEX: B1 } 2-operand ;

: LOCK ( -- ) HEX: f0 , ;

With the new x86 instructions in place I was all set to add a new low-level IR 'virtual' instruction to Factor's compiler. There are basically three steps to this:

  1. Declare the new low-level IR instruction along with the number and types of registers it requires
  2. Tell the code generator where to dispatch the calls to generate the 'real' cpu machine code for the instruction
  3. Write methods that emit both X86 and PPC versions of the machine code for the instruction.
I'll explain each step below:

Step one: Declaring the new instruction

Thanks to the recent addition of a low-level instruction DSL, adding a new instruction to the compiler backend is just a case of declaring the instruction name and the argument registers it requires:

basis/compiler/cfg/instructions/instructions.factor:

INSN: ##atomic-compare-exchange
      def: dst/int-rep
      use: ptr/int-rep old/int-rep new/int-rep
      temp: temp/int-rep ;

##atomic-compare-exchange is my new virtual CAS instruction that receives 3 input arguments: a pointer to a word in memory, an expected 'old' value and a new value.

The implementation of ##atomic-compare-exchange will do the following: compare the value pointed to by the ptr register with the value in old; if they're equal, replace the value in memory with the value in new, otherwise leave it as it is. Finally, put the resulting value in the destination register dst. In case you haven't guessed, this is pretty much exactly what CMPXCHG does on x86.
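
As a point of reference, here's a hedged C++ sketch (my code, not Factor's) of the same semantics expressed against a raw machine word; on x86 a strong compare-exchange like this compiles down to the LOCK CMPXCHG sequence used later in this post.

#include <atomic>
#include <cstdint>

// Returns whatever was in *ptr before the operation: the old value if the swap
// happened, or the actual current value if it didn't -- which is what ends up
// in the dst register described above.
inline std::intptr_t atomic_compare_exchange(std::intptr_t* ptr,
                                             std::intptr_t old_val,
                                             std::intptr_t new_val) {
    std::atomic_ref<std::intptr_t> cell(*ptr);         // C++20 view over the raw word
    std::intptr_t expected = old_val;
    cell.compare_exchange_strong(expected, new_val);    // on failure, 'expected' is updated
    return expected;
}

// Example use: retry until our update wins against concurrent writers.
inline void atomic_increment(std::intptr_t* counter) {
    std::intptr_t seen = *counter;
    for (;;) {
        std::intptr_t prev = atomic_compare_exchange(counter, seen, seen + 1);
        if (prev == seen) return;   // our swap succeeded
        seen = prev;                // somebody else got there first; retry
    }
}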

At compile time Factor's register allocator allocates the real (e.g. x86) cpu registers and passes them to our code generator.

As an added complication in this particular case, the x86 CMPXCHG instruction uses the EAX/RAX register as an implicit argument, and unfortunately Factor's code generation doesn't support tying particular cpu registers to parameters yet (though Slava assures me it will soon). To work around this I'm asking the compiler to pass me an extra 'temp' register so we can use it if any of the others happens to be EAX/RAX.

With the low-level instruction declared I now want to get some interactive testing going, so I write a high-level word called 'compare-swap' and a compiler intrinsic which uses the ##atomic-compare-exchange instruction. (See the previous post for an overview of Factor compiler intrinsics.) The point of this is that we can dump out and check the low-level IR instructions emitted by the compiler. Here's the compare-swap word and compiler intrinsic:

: compare-swap ( ptr old new -- ? )
    2drop "compare-swap needs to be compiled" throw ;

: emit-compare-swap ( node -- )
    drop
    3inputs
    ^^atomic-compare-exchange
    ds-push ;

\ compare-swap [ emit-compare-swap ] "intrinsic" set-word-prop

Now we can use the 'test-mr.' debugger word to see the new instruction in action. We'll just pass in a bunch of junk arguments for now so we can see what the low-level MR instructions look like in context:

( scratchpad ) USE: compiler.cfg.debugger
( scratchpad ) [ ALIEN: 20 1 2 compare-swap ] test-mr.
=== word: ( gensym ), label: ( gensym )

_label 0 
_label 1 
##load-reference RAX ALIEN: 20 
##load-immediate RCX 8                            ! 1 << 3
##load-immediate RDX 16                           ! 2 << 3
##atomic-compare-exchange RAX RAX RCX RDX RBX     ! Oops! - RAX allocated as a register
##inc-d 1 
##replace RAX D 0 
_label 2 
##return 
_spill-area-size 0

In this example you can see that the register allocator has allocated RAX as both the destination register and one of the input registers, so our X86 implementation of ##atomic-compare-exchange will need to work around that.

Step two: Wiring up the code generator

Ok, now that we have low-level IR working, the next step is to tell the compiler how to generate the real cpu machine code for the new instruction. There's a convention that all machine-code-emitting words start with a '%', so I'm going to create a generic word %atomic-compare-exchange with method implementations for each CPU. Here's the generic word declaration:

/basis/cpu/architecture/architecture.factor:

HOOK: %atomic-compare-exchange cpu ( dst cptr old new temp -- )

N.B. Factor has a generic dispatch mechanism called 'HOOK:' which dispatches polymorphically based on the value of a variable at compile time. In this case it's the cpu variable, which is set to a singleton representing the target architecture (x86.32, x86.64, ppc), so essentially this generic word is polymorphic based on CPU architecture.

Now I tell the compiler to use this generic word for code generation using the CODEGEN: DSL word (again, see Slava's post on the new compiler DSL words):

/basis/compiler/codegen/codegen.factor:

CODEGEN: ##atomic-compare-exchange %atomic-compare-exchange

Step three: Doing the x86 code generation

All that's left now is to implement %atomic-compare-exchange for our cpu architectures. Below is my method implementation for x86. To make the example more straightforward I've omitted the code that works around the implicit EAX/RAX register, which I abstracted into a 'with-protected-accumulator' combinator.

/basis/cpu/x86/x86.factor:

:: (%atomic-compare-exchange) ( dst cptr old new -- )
    accumulator-reg old MOV
    LOCK
    cptr [] new CMPXCHG
    dst accumulator-reg MOV ;

! CMPXCHG implicitly uses EAX/RAX (accumulator) so need to remove
! EAX from arguments and protect it from being stomped
M: x86 %atomic-compare-exchange ( dst cptr old new temp -- )
    [ (%atomic-compare-exchange) ] with-protected-accumulator ;

The (%atomic-compare-exchange) word contains the actual machine-code generation: you can see I simply output four lines of assembler using the Factor x86 assembler DSL and the registers passed to me by the compiler. (N.B. 'accumulator-reg' is my helper word that returns EAX or RAX depending on whether the architecture is 32- or 64-bit.)

Now that the x86 implementation is written we can check the output machine code with the disassemble word (which uses either the Udis86 library or GDB under the hood to do the disassembling):

( scratchpad ) [ ALIEN: 20 1 2 compare-swap ] disassemble
00007f85690413a0: 48b8fe3ac566857f0000  mov rax, 0x7f8566c53afe
00007f85690413aa: 48b90800000000000000  mov rcx, 0x8
00007f85690413b4: 48ba1000000000000000  mov rdx, 0x10
00007f85690413be: 4889c3                mov rbx, rax
00007f85690413c1: 4889c8                mov rax, rcx
00007f85690413c4: f0480fb113            lock cmpxchg [rbx], rdx
00007f85690413c9: 4889c0                mov rax, rax
00007f85690413cc: 4983c608              add r14, 0x8
00007f85690413d0: 498906                mov [r14], rax
00007f85690413d3: c3                    ret

The disassembler output verifies that the cmpxchg instruction is being compiled correctly. You can also see that I'm doing some juggling with the rax register to manage using it as an implicit argument to cmpxchg.

Hopefully that gives a good overview of how to get new low-level instructions added to Factor's compiler, and also illustrates how machine-code generation works in Factor.

Hand-coding multi-platform assembler using Factor compiler intrinsics

Disclaimer: I'm not a Factor compiler expert and am just getting to grips with compiler intrinsics so some of this might be a bit iffy.

Compiler intrinsics are a mechanism by which you can insert your own low-level implementation of a subroutine into the compiler output. This is useful in a couple of scenarios:

  • if the compiler doesn't support the desired functionality - e.g. it does something hardwarey that Factor can't do yet
  • if the subroutine is performance critical and the compiler isn't generating the most efficient code

The old way of doing compiler intrinsics in Factor was to hand-code some assembler using one of Factor's assembler DSLs (PPC or X86) and then attach it to an existing word as a word-property along with an argument type pattern. When the compiler compiled calls to the word it would compare the input parameters to the pattern and on match would insert the assembler directly into the generated code.

Since my last post about Factor's compiler over a year ago Slava has pretty much re-written the whole thing. It now has two intermediate stages:

The first, frontend, stage transforms the Factor code into an intermediate representation called 'high-level IR'. This is basically a decomposition of Factor code into primitive word-calls and control nodes through various optimization passes. It's very similar to the dataflow IR in the original Factor compiler that I described in the previous blog post.

The second, backend, stage is the new bit. It converts the high-level IR into low-level IR, which is basically a platform-independent assembler language. An optimization stage then runs and cpu registers are allocated, resulting in 'machine IR' (abbreviated to 'MR' in the debug tools). The real machine-code generation is then done from this MR.

The new way of doing compiler intrinsics allows you to insert low-level IR code at the beginning of the 'backend' stage. Differences to the old way include:

  • You now code using the platform independent instructions defined in compiler.cfg.instructions
  • Instructions operate on virtual registers. There are an infinite number of those
  • Subroutine arguments don't appear in registers. Instead you manually insert code to get them in and out of the data stack using ds-push, ds-pop
  • You still have to box and unbox values manually (just as before)
  • There's an optimization stage that runs after you've emitted the low level IR instructions from your compiler intrinsic

As a really simple example here's a word which is going to add 35 to the fixnum on the top of the stack and push the result. To make sure that we're executing the intrinsic assembler I'll give it a default implementation that throws an error.

: add-35 ( n -- n' ) 
    drop "shouldn't call this" throw  ;

Incidentally, here are the MR instructions generated from this default implementation:

( scratchpad ) USE: compiler.cfg.debugger
( scratchpad ) \ add-35 test-mr.
=== word: add-35, label: add-35

_label 0 
_prologue T{ stack-frame { total-size 32 } } 
_label 1 
##load-reference RAX "shouldn't call this" 
##replace RAX D 0 
_label 2 
##call M\ object throw 
_label 3 
##no-tco 
_spill-area-size 0

A couple of things to notice:

  • The instructions are prefixed with ##. E.g. ##load-reference, ##replace

  • This MR output is displayed after cpu register allocation has been done: RAX is an x86.64 register. Also D is a pseudo-register that points to the data stack. If you look at the disassembled machine code (just below the callstack juggling) you can see that D actually becomes R14:

( scratchpad ) \ add-35 disassemble
00007f6d98780ce0: 49b8e00c78986d7f0000  mov r8, 0x7f6d98780ce0 (add-35)
00007f6d98780cea: 6820000000            push dword 0x20
00007f6d98780cef: 4150                  push r8
00007f6d98780cf1: 4883ec08              sub rsp, 0x8
00007f6d98780cf5: 48b8e6866ca76d7f0000  mov rax, 0x7f6da76c86e6
00007f6d98780cff: 498906                mov [r14], rax
00007f6d98780d02: e859a385ff            call 0x7f6d97fdb060

Ok, so instead of an implementation that throws an error I want to insert my own instructions into the output. I can do this by attaching some low-level-IR emitting code to the word using the "intrinsic" word property:

: emit-add-35 ( node -- )
    drop              ! don't need to inspect the compiler node
    ds-pop            ! insert instruction to pop value off the stack
    ^^untag-fixnum    ! insert code to untag the value in the register
    35 ^^add-imm      ! insert instruction to add 35 to it (add-imm = add immediate)
    ^^tag-fixnum      ! insert code to tag the result
    ds-push ;         ! insert code to push the result onto the data stack

\ add-35 [ emit-add-35 ] "intrinsic" set-word-prop

The emit-add-35 word just pops a value off the stack, untags (unboxes) it, adds 35 to it and tags the result. A couple of points:

  • 'Hats' - The ^^ form of an instruction is the same as the ## form, except that after emitting the instruction the ^^ form returns the (new) destination register so that it can be used by the next instruction.

  • 'tag/untag' - Factor aligns all its heap data to the nearest 8 byte boundary, which leaves the bottom 3 bits of each pointer free for runtime type identification (RTTI). These 3 RTTI bits are called the 'tag', and in the case of a fixnum the tag is '000' and the other bits store the actual value rather than a pointer to the value. So instead of unboxing fixnums we simply untag them, which equates to shifting them 3 bits to the right.

  • node parameter - You'll notice that the emit-add-35 word takes a node parameter. This parameter is a structure passed by the compiler and contains information about the inferred types and value-ranges of the arguments at compile time. This is handy if you're dispatching based on type or you want to decide whether to include overflow logic. In this example I'm doing neither so I discard it.

Now that the add-35 word has a compiler intrinsic we can see the emitted code by compiling it within a quotation (code block) and displaying the MR:

( scratchpad ) [ add-35 ] test-mr.
=== word: ( gensym ), label: ( gensym )

_label 0 
_label 1 
##peek RAX D 0                     ! - load value from stack
##sar-imm RAX RAX 3                ! - untag
##add-imm RAX RAX 35               ! - add 35
##shl-imm RAX RAX 3                ! - tag
##replace RAX D 0                  ! - replace top stack elem with result
_label 2 
##return 
_spill-area-size 0

I've annotated this output but you could probably guess what it was doing anyway.

I mentioned earlier that a backend optimizer stage runs after the intrinsic word is called. To illustrate this here's a compilation of the add-35 word with a supplied constant argument:

( scratchpad ) [ 4 add-35 ] test-mr.
=== word: ( gensym ), label: ( gensym )

_label 0 
_label 1 
##load-immediate RAX 312 
##inc-d 1 
##replace RAX D 0 
_label 2 
##return 
_spill-area-size 0

You can see that the Factor compiler dispensed with our hand-coded add instruction and instead just stuck the fixnum-tagged result in the RAX register. It did this because it could perform the evaluation and boxing at compile time (312 = (35 + 4) << 3). Here's the resulting x86 assembler:

( scratchpad ) [ 4 add-35 ] disassemble
00007feac680e0c0: 48b83801000000000000  mov rax, 0x138
00007feac680e0ca: 4983c608              add r14, 0x8
00007feac680e0ce: 498906                mov [r14], rax
00007feac680e0d1: c3                    ret

So that leaves the question: how do I code actual x86 assembler into a subroutine?

To do that you need to create a new low-level instruction tuple and emit your X86 assembler from a generate-insn method on that instruction. This is a lot easier than it sounds thanks to the INSN: and CODEGEN: words.

I've got to add some CAS instructions soon so I'll probably write a bit about it then.

Making a C codebase reentrant by turning it into a big C++ object

Over the last couple of months I've been spending my spare time working on making the Factor VM codebase reentrant. The Factor VM is mostly a C codebase with global variables holding the runtime state, and I wanted to be able to run multiple VMs in a single process. I thought I'd document the approach I used to make the C portions reentrant because it's one of those things that's obvious and easy in hindsight but took me a few abortive attempts and some wasted time to find the best way.

The output of this process is one big vm object with all the global variables and functions in it. I originally spent some time trying to refactor the vm codebase into an OO model, but this turned out to be a really subjective exercise and I ended up thinking I'd do more harm than good attempting it. Ultimately I opted for the one-big-vm-object approach, with the proviso that it can be refactored into an object model later if that's deemed a good idea.

Anyway, here's the recipe for moving all the variables and functions into the object. The point of the technique is to have a working, running build at every step:

  1. create an empty vm class and singleton instance
  2. move the functions into the class one by one, leaving a forwarding function behind (the forwarding function calls the method through the singleton pointer, so all existing references to the function still work - see the sketch after this list)
  3. once all the functions are converted to methods, remove the forwarding functions
  4. then move the global variables into the class
  5. finally remove the singleton pointer
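
To make step 2 concrete, here's a minimal C++ sketch with made-up names (the real Factor VM code looks different):

struct factor_vm {
    int active_contexts = 0;            // a global variable, moved in step 4
    void primitive_gc();                // a former free function, now a method
};

static factor_vm* the_vm = new factor_vm;   // temporary singleton (removed in step 5)

void factor_vm::primitive_gc() {
    // original function body, now reading members instead of globals
    ++active_contexts;
}

// forwarding function left behind so existing call sites keep compiling
void primitive_gc() { the_vm->primitive_gc(); }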

The reason for moving the variables at the end is that once the functions are in the object it doesn't matter whether a variable is local to the object or global: the code referring to it in the functions doesn't change. This means you can incrementally move variables in and out (for testing) and everything builds OK at each step.

I should mention that it really helps if you've got a good editor with macro support. I wielded Emacs' macro and register features to pretty much automate the whole thing, which is a godsend if you've only got about an hour a night to spend on hacking. (I have kids.)

Obviously there was a lot more to making the vm reentrant than simply putting all the C code in an object, but doing that really broke the back of the work and motivated me to press on with the assembler and compiler changes. Hopefully I'll get around to writing something about the vm internals soon.

BTriples - a model for aggregating structured data

Things have settled down a bit after the birth of baby #2 and I'm starting to get a bit of time to program again: about an hour a night. That means I'm thinking a lot about indexing structured data again.

Here are my most up-to-date thoughts on a model for representing aggregated structured data which I'm tentatively calling 'BTriples'. I'm writing this down mainly so I can refer to it in future writing.

The purpose of BTriples is to be an internal model for an OLAP database, such that it can represent structured data from a variety of popular formats (JSON, XML, CSV, relational) and can index and query across heterogeneous data sources.

A good candidate for such a model would appear to be RDF, but it falls short on a couple of counts for my requirements:

  • The first issue is that in order to represent vanilla data as RDF there's a certain amount of manual mapping that needs to be done. You need to come up with a URI scheme for your imported data, and you then need to do some schema and ontology work so that the data can be semantically joined with other RDF data. This manual import overhead removes the ability to do one-click database imports, which is something I'd like to achieve with my database tool.

  • The second issue is that the RDF model has strict semantic constraints that are difficult to manage over a large set of disconnected parties. Specifically the RDF model says that "URI references have the same meaning whenever they occur". This 'same meaning' is difficult to enforce without central control and makes RDF brittle in the face of merging data from globally disconnected teams.

TagTriples was my first attempt at creating a simplified RDF-like model, but it suffers from the problem that it can't represent anonymous nodes. This makes importing tree structures like XML or JSON a tricky exercise, as you need some way to generate branch-node labels from data that has none. When I was designing TagTriples I was also thinking in terms of an interchange format (like RDF). I no longer think creating an interchange format is important - the world already has plenty of those.

BTriples is basically my attempt at fixing the problems with TagTriples. The model is triple-based like RDF, so I borrow a bunch of terms from the RDF model.

BTriples Specification

The BTriples universe consists of a set of distinct graphs (think: documents). Each graph consists of an ordered set of statements. A statement is intended to convey some information about a subject. Each statement has three parts: a subject, a predicate (or property) and an object.

  • A subject identity is anonymous and is local to the graph. This means you can't refer to it outside the graph. (This is similar to a 'blank node' in RDF.)
  • A predicate is a literal symbol (e.g. a string or a number).
  • An object is either a literal symbol or an internal reference to a subject in the same graph. (One possible in-memory shape is sketched just after this list.)
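
Just to make the spec concrete, here's one possible in-memory shape sketched in C++ - the names and representation are mine and aren't part of the model:

#include <string>
#include <variant>
#include <vector>

using Symbol  = std::string;                    // literal symbol (string, number, ...)
using Subject = int;                            // anonymous id, meaningful only inside its graph
using Object  = std::variant<Symbol, Subject>;  // literal, or internal reference to a subject

struct Statement {
    Subject subject;
    Symbol  predicate;
    Object  object;
};

struct Graph {
    std::vector<Statement> statements;          // ordered, so data comes out as it went in
};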

Example (logical) statements:

  // row data
#1 name "Phil Dawes"
#1 "hair colour" Brown
#1 plays "French Horn"

  // array
#2 elem "Item 1"
#2 elem "Item 2"
#2 elem "Item 3"
#2 elem "Item 4"

  // tree
#3 type feed
#3 entry #4
#4 title "BTriples - a model for aggregating structured data"
#4 content "blah blah ..RDF... blah"

That's it.

Notes:

  • BTriples is not an interchange format. I have deliberately not defined a serialization of BTriples.

  • BTriples graphs are disconnected: BTriples does not define a method for them to refer to each other.

  • Perhaps the biggest departure from RDF is that there are no formal semantics in BTriples. The BTriples model cannot tell you if a subject in one graph denotes the same thing as a subject in another.

  • Also the semantic meaning of symbols is not defined by BTriples and is up to the user to decide. Two identical symbols do not necessarily 'mean' the same thing.

  • The statements in a BTriples graph are *ordered*, so you can get data out in the same order it went in.

  • I'm not crazy about the BTriples name. Maybe I'll change it.

Speed reading using RSVP

I discovered RSVP (Rapid Serial Visual Presentation) today. It's a method of increasing your reading speed by flashing the words sequentially in one fixed place.

This speedreading test measured my normal reading speed at 466 words per minute, but with RSVP I found it comfortable to read at over 600 words per minute using the Reasy Firefox extension, without any training. I suspect I can increase that further with practice.

Aside from reading performance, the other interesting feature of RSVP is that it requires hardly any screen real estate, making it really practical for portable devices.

Intuitive overview of principal components analysis (PCA)

I found an excellent and short introductory tutorial PDF on principal components analysis (PCA). It provides a good overview of the following concepts in a particularly intuitive manner:

  1. Mean Average
  2. Standard Deviation
  3. Variance
  4. Covariance
  5. Matrix Transformations
  6. Eigenvectors & Eigenvalues
  7. Principal Component Analysis

Unfortunately I found the eigenvectors bit a little heavy going. Luckily the Wikipedia page for eigenvectors has a fantastic illustration on the right that gave me an instant feel for what was happening.

Factor makes you write better code

I program in Python, Javascript and Factor on a roughly daily basis. My experience is that I can write functions/methods quicker in Python and Javascript than I can in Factor, but that my Factor code ends up being of considerably higher quality. By higher quality I mean that it's better factored and easier to pull apart and change. In this post I'm making the claim that Factor forces me to write better code, and I'm going to illustrate this with an example.

(I also use Perl, Ruby, Scheme and Java, but not nearly as often.)

I've recently been writing a trading simulator in my spare time so that I can test my trading ideas on historical data. As part of this project I've written some of the same functionality in both Javascript and Factor, and this experience gave me a good basis from which to compare the languages.

The example I'm going to use to illustrate the comparison is coding a simple moving average (SMA) function.

A simple moving average involves stepping along an array of numbers, generating at each step the average (mean) of the last p elements of the sequence (where p is the period). The output of the function is the sequence/array of averages.

E.g. an SMA with period 4 on a six-element array:

sma([0,1,2,3,4,5],4) => [0,0,0,1.5,2.5,3.5]

(I padded the start of the array with zeros in the javascript version)

For the Javascript implementation I built the SMA as two nested 'for' loops, with the inner loop summing the last period elements at each step. This isn't the most efficient way of computing a moving average, but it's what I thought of and implemented first:


function sma (arr,period) {
    var out = [];
    // fill initial space with zeros
    for(var i=0;i<period -1;i++) { out.push(0);}  
    // fill rest with averages
    for (var i=period-1; i<arr.length;i++) {
        var sum = 0;
        for (var j=i-(period-1); j<=i; j++){
           sum += arr[j];
        }
        out.push(sum / period) ;   
    }
    return out;
}

When I went to code the Factor version the idea of coding up nested loops made my head hurt. Factor's stack-based approach effectively means serial access to state - you have to shuffle the right variables into the right order at the right time. This makes it very hard to write functions that manage more than ~3-4 variables at a time.

Javascript by comparison has random access to local variables*, and my Javascript version uses 'arr', 'out', 'i', 'j', 'period' and 'sum', not to mention a bunch of unnamed temporaries like 'length', arr[j], 'period-1' etc.

Shuffling all these variables manually on a stack while mentally keeping tabs on the order and position of each variable is a pretty tough challenge. I suspect the resultant code would be the sort of thing only a compiler could love.

So faced with this problem I used my traditional factor problem-hammer, which is to step away from the screen, walk around a bit and ask myself the question: 'What abstraction could there be that would make this easier?'.

I came up with 'map-window' which implements a sliding window across the input sequence and applies a block of code to each subset in turn. The code to implement the moving average is then:


[ mean ] map-window

Which is clearly a much cleaner implementation of SMA.

Before I continue I should mention that I could also have written the map-window abstraction in Javascript (it has good higher-order function facilities), but the point of this post is that Factor forced me to come up with the approach.

Once I'd had the 'map-window' idea I could easily see how to compute the moving average. I also had an idea of how I could build map-window using 'head' and 'tail', or at least I had enough of an idea to motivate my trying it.

Ok, so here's my full implementation for comparison with the Javascript:


: window ( seq start window-width -- subseq )
    [ 1+ head ] dip short tail* ; 

: map-i ( seq quot: ( seq i -- elt ) -- seq' )
    [ dup length ] dip with map ; inline

: map-window ( seq window-width quot -- seq )
    '[ _ window @ ] map-i ; inline

: sma ( seq period -- seq' ) 
    [ mean ] map-window ;

To my eyes the Factor implementation is quite a bit more complex than the Javascript one, at least when consumed in its entirety. This might be because the concept of a for-loop is deeply ingrained in my brain, whereas the Factor implementation invents both map-i and map-window to build sma.

However the individual parts of the Factor implementation are both generic and composable, and once you know what each bit does the whole thing pretty elegantly describes itself.

A big advantage of all this abstraction is that when you discover an implementation pattern occurring more than once, the chances are that the pattern is already factored out to some extent and is ripe for reuse with very little modification. I find this makes refactoring quicker and easier than with Python and keeps the codebase relatively lean. This in turn means that the codebase doesn't drag as much as it gets bigger. The tradeoff is that I spend more time upfront finding and creating abstractions in the first place.

Of course if the right abstractions already exist then coding performance improves dramatically - e.g. if map-window had already existed then sma would have been a slam dunk. I'd assume that as the Factor library improves, the likelihood of this happening will increase, maybe at the expense of more time required to learn the core vocabularies. Programming in Factor is already more about the libraries than the native language and I'd imagine this trend will continue, especially when you consider that in a lot of cases the libraries implement the core language.

Aside: I was surprised to discover last year that genuinely new and important stack-language abstractions like 'fry' and the cleave/spread combinators were only just being conceived, despite Factor being quite a few years old and stack languages in general being many decades old. When you consider that very few languages actually 'invent' new features, this makes Factor quite an interesting language in itself. Also interesting is that, apart from a small bootstrapping core, the Factor language is actually implemented in libraries, meaning that anybody can build and experiment with new language constructs.

Anyway I'm diverging from the subject so I ought to sum up. The takeaway is: Whereas other languages provide the ability to create good abstractions, Factor pretty much forces you to create good abstractions because it is so bloody difficult to write any code without them.

--

Update: while writing this post I realised that what I'm doing with map-window is actually very similar to an abstraction in the Factor library called <clumps>, which constructs a virtual array of overlapping subsequences. That's the nature of Factor programming: you keep finding that somebody else has built a similar abstraction to yours, and it would have saved you a ton of time if only you'd realised!

* Factor actually has support for efficient lexical local variables via the 'locals' vocabulary (library), which is a pretty impressive feat. However I only tend to use this when the problem I'm solving doesn't factor well (or sometimes temporarily out of desperation when I can't come up with the right abstraction).

Spread Betting

Over the last few years I’ve read a number of books on stock trading but despite this I never really felt compelled to risk any of my own money on the markets.

Then in October I discovered that spread betting offers a cheap way to test the water on a small budget. Spread betting on markets is treated as gambling under UK law, so there's no stamp duty or capital gains tax to pay. Also, the competition between firms is enough that pretty much all of them provide £1-per-point betting on the major stocks and indices. Competition keeps the spreads narrow too - usually less than 1%.

Finally a lot of the firms offer a period of training where for a few weeks you can put on bets at 10 or 20p a point with guaranteed stops, which means it’s feasible to be putting £3-£6 total risk on each trade, especially for stocks priced under 500p.

I got started with IGIndex, which provides an impressive range of UK stocks, including all of the FTSE 100 and 250. The online web software is really very good: the trading platform is totally Ajax, using Yahoo's YUI libraries. Unfortunately the charting stuff is a Java applet, but that appears to be par for the course with spread betting platforms.

I funded the account with £100 of risk capital and started betting. As an employee of an investment bank I need to get clearance for each stock from our compliance department, but this turned out to be not as onerous as it sounds. Anyway, I had the misfortune of my first trade in October winning big: a £6 short bet on the FTSE 100 index at 10p a point. I trailed a stop and made £30 before the trade was stopped out. 30% appreciation in capital in one trade!

Of course this was pure fluke, but it made me over-confident and I quickly lost almost everything over the next three weeks: I got down from £130 to about £35. This was exactly what I needed: in my opinion you want to lose big during the training period, when your capital exposure is small. During this first month I learnt a bunch of things:

  • Money management and risk management are really important!

    I suspect this is more important than stock picking skills. Trading is a probabilistic thing and so you have to expect to lose a (maybe large) percentage of trades. For example I currently get stopped out on approximately two thirds of my trades. Apportioning risk capital between trades and using calculated stops ensures that a run of bad trades doesn’t destroy your capital account and allows you to stop and re-consider.

  • I was confusing short term oscillations with trends

    I did this quite a bit in the first few trades, and the lesson is: always look at longer term graphs for overall patterns. My trades usually last a couple of weeks when they don’t get stopped out. This means I need to look at charts over a year to check for trends.

  • Plan the trade, trade the plan

    I’ve discovered that trading can be a stressful affair and it’s easy to make silly mistakes in front of the 5 minute charts. I suspect this is because for a beginner like me there’s a big perception of time pressure, and also because you’re always in a position to do something with your stocks which makes it difficult look away.

    To mitigate the background stress and the opportunity for mistakes I've found it much better to make plans when the markets are closed (e.g. in the evening or at the weekend) and not in my lunch break. This is one reason I prefer to trade UK stocks, which trade during the day, rather than e.g. forex or commodities, which trade around the clock.

    For the stocks I have open at a point in time I make notes about what I expect to happen and what to do if it does or doesn't happen. IG Index offers an SMS alert service which I use to alert me to price breaks. The alerts and the written plan keep me from worrying about my stocks while I'm working.

  • Record everything

    This is super important! I keep a spreadsheet of trades, a diary of notes about each entry/exit and screenshots of the intraday charts. This was especially important during the early weeks as I was able to learn from the many beginner mistakes I made. A couple of months later and I’m still making plenty of mistakes that I can learn from and I don’t expect that to change any time soon.

I’m out of the training period at IGIndex now and so am betting £1 a point on uk stocks and I’m currently apportioning a maximum risk of £20 per trade. I’m starting to become more consistent and confident but I’m wary that it’s difficult to tell at this early stage whether this is more luck or skill. If you’re thinking about trying your hand at trading I’d definitely recommend spread betting as a first step before committing any real money to the markets.