Racer on Rust Beta!

I'm very pleased to announce that Racer now compiles with the Rust 1.0 beta releases! This means that hopefully Racer should be ready to go for Rust 1.0 when it launches later this month.

The big blocker for this release was rustc's libsyntax. I'd like to say a big thanks to Erick Tryzelaar for porting libsyntax into the syntex_syntax crate, and for working hard to remove unstable features from libsyntax. Also I'd like to say a special thanks to Tamir Duberstein for helping to remove the dependency on rust unstable features.

And of course thanks to all the Racer contributors.

(BTW, a side effect of relying on a separate syntax crate is that build times for Racer have gone up dramatically. This is a shame, but I think it's a small price to pay for Rust 1.0 compatibility.)

Rustc noodling for code completion

Background: Racer is a code completion tool for the Rust programming language. This is going to be a bit of a meandering post as I try to decide what to do about type inference.

As I wrote in the previous post, I'm pretty convinced that racer should be using rustc to interrogate the type of expressions. This would enable racer to perform more accurate completions (and potentially complex refactorings in the future). Currently racer does some of its own type inference, but getting this right is error prone and I think long-term probably a dead end.

However the nice thing about racer doing its own type inference is that it can still work in the face of unfinished code, and also be very fast (because it can work incrementally, take shortcuts and avoid doing a lot of the work that rustc does). The downside is that hand-rolling rust type inference is a lot of work and I don't have much spare time.

Initially I was wondering if racer could approach the type inference problem by isolating the target expression and constructing a smaller representative program around it. The program would be quicker for rustc to compile but would yield the same type for the target expression. This is essentially the approach racer currently takes with parsing.

Unfortunately after attempting a few hand written examples I realised this would be a very complex approach. E.g. consider this piece of code from libsyntax parse/parser.rs and imagine you want to construct a program to extract the type of the |p| argument:

                let tts = try!(self.parse_seq_to_before_end(
                    |p| p.parse_token_tree()

Racer would need to incrementally resolve the types of a whole bunch of expressions before it could reasonably construct a representative program to run through rustc in order to find the type of 'p'. I suspect this work is similar in scope to completing racer's existing hand-rolled type analysis, which kind of defeats the point.

So the other approach is to run the rustc type inference over whole crates, including imports, and somehow get this fast enough for a good user experience. I'm trying to figure out how best to accomplish this. (Ignoring unfinished code for now)

I started by putting together a spike which attempts to use rustc to run a full type-inference pass on some code. I had to create a copy of the rustc_typeck crate (called mytypeck) in order to get access to the private innards.

(N.B. rustc changes frequently so it is likely that the mytypeck crate will get out-of-date very fast)

The approach I currently intend to pursue is to maintain an ast and side tables of all the items in a crate, but no function bodies (a 'skeleton ast' if you will), and then somehow patch the ast with an individual function body and get the side tables updated in order to obtain the type of an expression contained within. The function body may need to be artificially constructed by racer to handle unfinished code. The skeleton ast would need to be constantly updated as the item signatures change.

This is all uninformed speculation, but I'm a bit limited with spare time and wanted to get something written down to encourage some chat and input from people with more experience. Am I on the right track?

Racer Progress Update 5

Racer is a code completion tool for the Rust programming language. Time for another progress update!

1. Cargo support

The big news is that I recently added some cargo support to racer:

When searching for crates racer now interrogates the nearest Cargo.lock and Cargo.toml file and then searches git checkouts and crates.io repos in the ~/.cargo directory. This was a bit fiddly but luckily I had the source-code to Daniel Trstenjak's excellent rusty-tags project to refer to. Thanks very much Daniel!
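The lookup can be sketched roughly like this in today's Rust. Everything here is an illustrative assumption rather than racer's actual code: the `pinned_version` helper, the registry directory layout and the mirror name are all made up.

```rust
use std::path::{Path, PathBuf};

// Hypothetical sketch: pull the version pinned in Cargo.lock for a
// dependency, then guess where cargo unpacked its source under ~/.cargo.
fn pinned_version(lock_contents: &str, krate: &str) -> Option<String> {
    let mut lines = lock_contents.lines();
    while let Some(line) = lines.next() {
        // Cargo.lock records packages as `name = "..."` / `version = "..."` pairs
        if line.trim() == format!("name = \"{}\"", krate) {
            return lines.next().and_then(|l| {
                l.trim()
                    .strip_prefix("version = \"")
                    .map(|v| v.trim_end_matches('"').to_string())
            });
        }
    }
    None
}

fn registry_src_dir(cargo_home: &Path, mirror: &str, krate: &str, version: &str) -> PathBuf {
    // e.g. ~/.cargo/registry/src/<mirror>/libc-0.1.8
    cargo_home
        .join("registry")
        .join("src")
        .join(mirror)
        .join(format!("{}-{}", krate, version))
}

fn main() {
    let lock = "[[package]]\nname = \"libc\"\nversion = \"0.1.8\"\n";
    let version = pinned_version(lock, "libc").unwrap();
    let dir = registry_src_dir(Path::new("/home/me/.cargo"), "github.com-mirror", "libc", &version);
    assert_eq!(version, "0.1.8");
    assert!(dir.ends_with("libc-0.1.8"));
}
```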

2. Rust Beta

Racer relies on some unstable libraries at the moment, which unfortunately means it won't currently build with the Rust Beta channel. Hopefully this situation will be remedied over the next few weeks before 1.0, but in the meantime you need a rust nightly install to build racer.

(Unstable features used are: collections, core, rustc_private, str_char, path_ext, exit_status. Patches very welcome!)

3. Performance improvements

Back in February Renato Zannon noticed that racer was spending a lot of time spawning threads. The spawning was originally designed to protect against libsyntax panics, and preventing crashes has become an important issue recently because there is a lot of demand to provide racer as a library for linking with IDEs. Unfortunately the thread spawning also generates a lot of overhead and removing it more than doubled the performance of racer in a lot of cases.

The thread spawning is currently removed from the codebase, which means that racer will sometimes panic and fail to produce matches. As an alternative to wrapping libsyntax calls, I've spent the last few weeks removing panics from Rust's libsyntax itself. If you want to follow along, the PR-in-progress is here.
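The guard pattern being described is roughly this: run a possibly-panicking parse on a separate thread so a panic poisons only that thread, not the whole process. A simplified sketch, not racer's actual code; `parse_guarded` and the fake parse are invented.

```rust
use std::thread;

// Run a "parse" that may panic on a throwaway thread; a panic there
// becomes an Err from join() instead of killing the process.
fn parse_guarded(src: String) -> Option<usize> {
    thread::spawn(move || {
        // stand-in for a libsyntax call that panics on broken input
        if src.contains("fn") { src.len() } else { panic!("parse error") }
    })
    .join()
    .ok()
}

fn main() {
    assert_eq!(parse_guarded("fn main() {}".to_string()), Some(12));
    assert_eq!(parse_guarded("@@@".to_string()), None);
}
```

Spawning a fresh thread per call is exactly the overhead Renato noticed: the safety is paid for on every parse, not just the failing ones.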

4. Type Inference

Not much progress on the type inference front I'm afraid. I'm now more convinced than ever that racer should be using rustc for type inference rather than continuing to roll its own. The question is how to get there: the rust compiler is currently not the right shape to do incremental type inference. There is plenty of desire in the community to have a compiler library that can support IDEs, so I plan to help out there where I can.

As a start I'm going to try having racer construct small representative pieces of code to compile (in the same way as it does with using libsyntax to parse), and see where that takes me.

5. Thanks!

Once again, a big thanks to all the Racer contributors. Your help and support is much appreciated:
Andrew Andkjar, awdavies, Ben Batha, Björn Steinbrink, Björn Zeutzheim, Dabo Ross, Darin Morrison, Derek Chiang (Enchi Jiang), Emil Lauridsen, Henrik Johansson, Heorhi Valakhanovich, Jake Kerr, Johann Tuffe, Jorge Aparicio, Justin Harper, Keiichiro Ui, krzat, Leonids Maslovs, Marvel Mathew, mathieu _alkama_ m, Michael Gehring, Michael Maurizi, nlordell, oakes, Pyry Kontio, Renato Zannon, rsw0x, Saurabh Rawat, Sebastian Thiel, Siddharth Bhat, Vincent Huang, Vlad Svoka, Zbigniew Siciarz, Ziad Hatahet.

Racer Progress Update 4

Time for another Racer update! Most of the work on racer in the last few months has been about keeping up with the flood of changes to the rust language and internal apis. Aside from that there were a couple of improvements:

  • Racer got some support for generics (connect() returns IoResult<TcpStream>)
  • It got some support for destructuring
  • Racer got better at handling half-finished code, dangling scopes and match statements
  • It also got a bit better at working on Windows (in particular handling carriage returns and BOMs (byte-order marks) in files).


Racer is starting to get some heavier use by people now, which is great, but has thrown up a few deficiencies that need addressing in the coming weeks/months.

  • Cargo support

    Racer works well for a small project using the rust standard libraries. Unfortunately using it with a modern cargo project is a bit clunky and sometimes completely unworkable, involving fiddling with the RUST_SRC_PATH variable and in some cases making a copy of external sources in a separate directory. Racer needs to be able to read Cargo.toml files and locate the sources referenced in them.

  • Racer as a library (+ daemon)

    Some people would like racer to operate as a long-running daemon for performance reasons. Current Racer performance isn't a big deal for me personally (a complete or find-definition is a few hundred ms on my laptop in most cases), but @valloric thinks it will be and he has a lot more experience in this area due to his YouCompleteMe project. The plan is to create a racer library that editors and daemons can embed.

  • Racer only reads code from files on the file system

    This is somewhat connected to the library thing: Currently racer only consumes code from the file system - you cannot pass it text to work on. This simplifies the internals but means that if you're half way through editing a file and you need racer, your editor must save the contents of your file before invoking it. This is currently handled by emacs and vim by saving a temporary file in the current directory. If racer is to work as a library in a long running process it will need the capability to allow editors to pass unsaved code to it directly and not get confused when it doesn't match what's on disk.

  • Type inference + rustc

    This is probably Racer's biggest hurdle: Racer currently does a bunch of custom type inference in order to resolve method and field completions and definitions. Unfortunately this inference is nowhere near 'complete' (ha!), and the rust language is a fast moving target making this a bunch of effort. I could easily spend the next year or two of my spare time replicating rust type inference and getting nothing else done.

    Instead I'd ideally like to leverage the rustc compiler to do the type inference. Unfortunately rustc currently isn't designed for type-inferring anything other than a full, complete crate, which is slow and also won't work for half-finished programs like the code you typically autocomplete. I need to spend some time researching what can be done in this area and I'm open to any ideas or advice.

    I noticed @sanxiyn has been experimenting with adding an ast node to represent a completion ('ExprCompletion') to enable this to go through the various compiler passes even though the code is 'broken'. This is cool and really promising - I think the meat here is in allowing incomplete programs to get through rustc's type-inference pass. I'm not sure whether having a 'completion' ast node is as important (once you can evaluate the type of an expression, providing completions is possible without the compiler support), but maybe this approach could be generalised to allow the compiler to skip over sections of code that don't parse correctly.

    In the short term I'm wondering if it might be possible/reasonable to generate small programs that represent the inputs to a specific type query but are quicker for rustc to type-infer. Racer currently does this for parsing (it constructs an artificial statement similar to the original and then uses rustc's libsyntax) and it works really well there, but parsing is considerably less brittle than a full type-inference pass. I'm unsure of the best place to discuss this sort of thing; Reddit?
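The "pass unsaved code" idea a couple of bullets up could be sketched along these lines in today's Rust (`FileCache` and its methods are invented names, not a real racer API): the library keeps a map of editor-supplied buffers and consults it before touching the file system.

```rust
use std::collections::HashMap;

// Invented sketch: editor-supplied buffers override what's on disk.
struct FileCache {
    overrides: HashMap<String, String>, // path -> unsaved buffer contents
}

impl FileCache {
    fn new() -> FileCache {
        FileCache { overrides: HashMap::new() }
    }

    fn cache_file_contents(&mut self, path: &str, text: &str) {
        self.overrides.insert(path.to_string(), text.to_string());
    }

    fn load(&self, path: &str) -> std::io::Result<String> {
        match self.overrides.get(path) {
            Some(text) => Ok(text.clone()),        // unsaved editor buffer
            None => std::fs::read_to_string(path), // fall back to disk
        }
    }
}

fn main() {
    let mut cache = FileCache::new();
    cache.cache_file_contents("src/main.rs", "fn main() { let v = Vec:: }");
    let text = cache.load("src/main.rs").unwrap();
    assert!(text.contains("Vec::"));
}
```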

Finally I'd like to thank everybody that has contributed fixes and code to Racer so far:

Michael Gehring, Johann Tuffe, rsw0x, Darin Morrison, Zbigniew Siciarz, Michael Maurizi, Björn Steinbrink, Leonids Maslovs, Vincent Huang, mathieu _alkama_ m, Ben Batha, Björn Zeutzheim, Derek Chiang (Enchi Jiang), Emil Lauridsen, Henrik Johansson, Heorhi Valakhanovich, Jorge Aparicio, Justin Harper, Keiichiro Ui, Pyry Kontio, Renato Zannon, Saurabh Rawat, Vlad Svoka, Ziad Hatahet, awdavies, nlordell

I have very limited spare time so it's a real joy to wake up in the morning and find that somebody has submitted a patch to e.g. bring racer up to date with the latest rust. Thanks everyone!

Another Racer progress update

Racer has come on a bit since my last post. Here's a quick rundown of the main changes:

  • Method resolution got better. Racer now hunts around traits for default method implementations. This means that e.g. a completion on a Vec method gives you all the methods available, including 'is_exists()' and other methods implemented in the sub traits (see above). Racer can also resolve the type of 'self' so that it can match methods and fields when you're writing method implementations.
  • More completion information Racer now outputs some contextual info which gets included in matches - see the image above.
    (N.B. unfortunately there's a bug in my emacs bindings somewhere in the interaction with company mode which means that the extra stuff only crops up after you move the cursor. I'm not really sure why this happens, I'll try and fix it at some point.)
  • Module searching got better: Racer does a more accurate job of following 'extern crate' and nested modules. It turned out that modules didn't quite work the way I originally thought they did.
  • Racer now recognises explicit types in let expressions
    let v : Vec<uint> = ...
    This gives you a way to 'tell' Racer the type if it isn't smart enough to deduce it.
  • Internally Racer now uses iterators everywhere instead of callback functions. (I mentioned callbacks as something I didn't like about the design in a previous post).

    I originally thought iterators would solve the 'don't do the whole search if I only need the first match' problem. Unfortunately writing lazy iterators in Rust is non-trivial when the thing you're iterating is nested and recursive (like a code search). For now I've replaced the callbacks with a simpler eager solution: 'collect the results in a vector and return vec.move_iter()'.

    N.B. I did manage to cobble together a lazy iterator in one place by chaining together a bunch of procs in the central 'resolve_name' function; This has improved performance in racer - I might do the same trick for sub-trait searches.
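In today's Rust the eager approach looks roughly like this (`Match` and `search_scope` are invented for illustration; `into_iter()` is the modern spelling of `move_iter()`):

```rust
// Gather all matches eagerly into a Vec, then hand back an owning iterator.
#[derive(Debug, PartialEq)]
struct Match {
    name: String,
}

fn search_scope(scope: &str, prefix: &str) -> std::vec::IntoIter<Match> {
    let results: Vec<Match> = scope
        .split_whitespace()
        .filter(|w| w.starts_with(prefix))
        .map(|w| Match { name: w.to_string() })
        .collect(); // the whole search runs here, even if the caller
                    // only wants the first match
    results.into_iter()
}

fn main() {
    let first = search_scope("push pop purge len", "pu").next();
    assert_eq!(first.map(|m| m.name), Some("push".to_string()));
}
```

The caller gets a normal iterator interface, but the laziness is lost: stopping after `.next()` doesn't save any of the search work.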

Basically Racer is now pretty good at navigating and completing anything that does not involve lambdas, destructuring or generics. I'm planning to work on generics next.

I should say a big thanks to the people who committed patches to racer in the last couple of months, in particular rsw0x who posted a couple of fixes and has also been working on the vim plugin, Jorge Aparicio who got racer integrated with the travis build system and Michael Gehring who diligently kept Racer compiling in the face of breaking Rust changes.

While on the subject of contributors: I've been feeling the need for somewhere to discuss Racer recently, in particular for things that I don't have much experience with - e.g. windows support, vim + other ide plugins. I've created a google group racer-discuss; please join if you're interested in Racer.

Racer progress update (+ some vim support!)

It's been a few weeks since I posted about Racer, my rust code-autocompletion project, so here's another update.

First off, thanks to Michael Gehring who has been regularly submitting patches to bring Racer up to date with breaking Rust master changes. I have a very limited amount of time to work on Racer so I really appreciate this, thanks Michael.

The main news is that I've cobbled together a Racer vim plugin to go with the emacs one. I don't use vim much myself and I was kinda hoping somebody with expertise would show up and do this but I guess racer isn't sufficiently advanced for people to be interested yet. I'm a bit of a vim n00b so there's probably a much better way of structuring the plugin - any feedback or help would be gratefully received.

The other big improvement from last time is that racer can now perform some code completion of fields and methods (in addition to the static lookups I talked about last time). It turns out that type deduction is pretty involved. As an illustration here are the steps racer took in the screenshot above:

  1. In the screenshot the completion is being performed on fields of the 'v' variable. The plan is to find the type of 'v' and then locate all the fields and methods connected with that type beginning with 'pu'.
  2. Racer starts by searching backwards in the current scope, and parses the statement 'let v = Vec::new();'. It needs to evaluate the type of the right hand side of the expression so that it can know the type of 'v'. 'Vec::new()' parses to an 'ExprCall' AST type with path 'Vec::new'.
  3. Racer splits the path 'Vec::new' and starts searching for 'Vec'. 'Vec' is not defined in the screenshot and there are no 'use' or 'extern crate' statements in the file, so racer takes a look in libstd/prelude.rs. (The prelude module is implicitly included in all rust files).
  4. libstd/prelude.rs contains the line:
           #[doc(no_inline)] pub use vec::Vec;
    So now Racer needs to find the path 'vec::Vec'. It splits the path and starts searching for 'vec'
  5. 'vec' isn't defined anywhere in the prelude module. The next place to search is in the crate root, since paths in use statements are implicitly relative to the crate root. Racer hunts for the crate root using some heuristics and finds libstd/lib.rs.
  6. libstd/lib.rs (the std lib crate root) contains the line:
           pub use core_collections::vec;
    So Racer needs to find 'core_collections::vec'. It starts searching for 'core_collections' in the same file.
  7. The std lib crate root also contains the line:
           extern crate core_collections = "collections";
    Which means in the std lib crate 'core_collections' is an alias for the collections crate. Racer searches for the collections crate root and finds it in libcollections/lib.rs.
  8. The collections crate root contains:
           pub mod vec;
    Which means the source for the vec module should be in either vec.rs or vec/mod.rs under the libcollections directory. Racer finds vec.rs and searches for 'Vec::new' under it. It splits the path and searches for 'Vec'
  9. The vec module contains (amongst other things):
           impl<T> Vec<T> {
    Which means racer has located an impl block for Vec. It now searches for 'new' in the impl scope and finds
        pub fn new() -> Vec<T> {
            Vec { len: 0, cap: 0, ptr: 0 as *mut T }
  10. Rather than parse the whole method Racer extracts the method type signature of new() and parses that. 'new()' returns a 'Vec<T>' type. Racer now has the type of the right hand side of the 'let v = Vec::new();' statement from step (2) and so knows that the type of the 'v' variable is 'Vec<T>'. All it needs to do now is look for fields and methods belonging to the Vec type. It starts by looking for the definition of the Vec type.
  11. Racer locates the Vec struct in the vec module:
    pub struct Vec<T> {
        len: uint,
        cap: uint,
        ptr: *mut T
  12. This is starting to get a bit boring so I'll cut the rest short: Racer looks for public fields in the struct (there are none), and then hunts for traits implemented for Vec and lists the methods (which it identifies as functions with 'self' as the first argument) starting with 'pu'. Phew!
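The walk above boils down to: split the path, resolve the first segment in the current scope, then chase re-exports and aliases until you land on a definition. A toy sketch of that loop over a fake module table (the `"def:"` encoding is entirely made up):

```rust
use std::collections::HashMap;

// Follow aliases like `Vec` -> `vec::Vec` until the table yields a
// definition, marked here with a "def:" prefix.
fn resolve<'a>(table: &HashMap<&'a str, &'a str>, path: &str) -> Option<&'a str> {
    let mut current = path;
    // (a real resolver would also guard against cyclic re-exports)
    while let Some(next) = table.get(current) {
        if let Some(def) = next.strip_prefix("def:") {
            return Some(def);
        }
        current = *next;
    }
    None
}

fn main() {
    let mut table: HashMap<&str, &str> = HashMap::new();
    table.insert("Vec", "vec::Vec"); // prelude re-export
    table.insert("vec::Vec", "def:struct Vec"); // definition in the vec module
    assert_eq!(resolve(&table, "Vec"), Some("struct Vec"));
    assert_eq!(resolve(&table, "Missing"), None);
}
```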

Despite this elaborate field completing capability, Racer's type inference facilities are very far from er.. complete. At this point in time it is easy to confuse Racer with generic types, destructuring, lambdas and many other constructs that pop up all the time in rust code.

My plan now is to grind through each of these in some sort of order based on usage, probably starting with generics. My hope is that by incrementally adding smarts Racer will gradually become more and more useful and evolve into a fully fledged decent ide library by the time Rust hits 1.0.

Incidentally, this Thursday (26th June) is the first Rust London meetup, which looks like it is going to be awesome. (I'm not involved with organising it, just excited to be going!)

Racer progress update (Code autocompletion for Rust)

I've recently become a fan of the Rust Language. I think memory safety without garbage collection is an important innovation in the field and I'm excited by the prospect of using a safer alternative to C++ for fast low level code in my job.

Unfortunately the language is currently in heavy development, with major features changing every week or so. This is exciting but makes it unsuitable for production code at the moment. Instead I've started a project in my evenings and weekends to help me get up to speed with the language.

'Racer' is intended to bring code completion facilities to Rust editors and IDEs. It works as a standalone program that takes source code coordinates (filename, character position) and outputs completions. Eventually I hope it will also provide other search and editing capabilities. At present though it provides auto-complete of functions, structs + enums, and also does 'find-definition' for jumping around code.

It works like this:

  1. User presses <tab> or whatever to complete some code
  2. Editor writes out a temporary copy of the edited text and invokes Racer on it
  3. Racer searches the source code and prints the matching completion options
  4. The editor then reads the output and displays nice drop down list or whatever
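Steps 2-4 from the editor's side amount to spawning a child process and reading its stdout, roughly like this (`echo` stands in for the racer binary so the sketch runs anywhere):

```rust
use std::process::Command;

// Spawn the completer as a child process and capture whatever it prints.
fn run_completer(cmd: &str, args: &[&str]) -> String {
    let out = Command::new(cmd)
        .args(args)
        .output()
        .expect("failed to spawn completer");
    String::from_utf8_lossy(&out.stdout).into_owned()
}

fn main() {
    // An editor would pass the temp file path and cursor position here.
    let out = run_completer("echo", &["push"]);
    assert_eq!(out.trim(), "push");
}
```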

The only editor support bundled with Racer at the moment is emacs. I also heard that Eric West has written support for the Atom editor. It appears that a lot of the core Rust developers use vi so I'm privately hoping that somebody will come along and add some vi support too.

Under the hood Racer works by parsing the local expression to be completed and then traversing the source code, doing text searches to narrow scope and parsing individual statements and blocks with libsyntax to perform type inference.


The code is very 'work in progress' (i.e. a bit crappy), mainly because I'm still getting to grips with how best to do things in Rust. There are some good bits and some less good bits:

  • Racer has a comment and string scrubbing iterator that works well. This is important for text-searching code without getting tricked by doc strings and comments.
  • It also has a statement iterator that steps through blocks of code quickly without requiring the parser (which is relatively expensive). This means statements can be text searched and then sent to libsyntax for full-blown parsing if they look like a match.
  • I'm now up to speed with Rust's parser package libsyntax and use it to parse various statements and expressions. The Rust language has changed a bunch over the last year and libsyntax still has a bunch of old idioms and GC based code so I feel like this is an achievement in itself.
  • I'm quite pleased with the decision to invoke racer as a command for each search rather than running a daemon that the editor talks to. It makes things easier to test from the command line and I hope will make integration with editors easier too. The downside is that I'll have to make code searching fast enough to work without building an up front in-memory model of the code. If it gets too slow I'll probably consider writing a code cache file rather than running Racer as a daemon.
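The scrubbing idea can be sketched like this: blank out comments with spaces so text searches can't be fooled by them, while keeping byte offsets identical so positions in the scrubbed text still map back to the original file. This is a deliberately naive version that only handles `//` line comments and ignores strings and block comments, unlike the real thing.

```rust
// Replace each `//` comment with spaces of the same length.
fn scrub_line_comments(src: &str) -> String {
    src.lines()
        .map(|line| match line.find("//") {
            Some(i) => format!("{}{}", &line[..i], " ".repeat(line.len() - i)),
            None => line.to_string(),
        })
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let src = "let x = 1; // let y = 2;";
    let scrubbed = scrub_line_comments(src);
    assert!(scrubbed.contains("let x"));
    assert!(!scrubbed.contains("let y")); // the comment is gone
    assert_eq!(src.len(), scrubbed.len()); // offsets still line up
}
```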
Less good:
  • Internally the various searching functions should really return iterators so that the calling client can stop consuming whenever it wants. Instead I'm passing a callback arg to the methods which they then invoke when they find something, e.g.
    pub fn complete_from_file(src: &str,
                              filepath: &Path,
                              position: uint,
                              callback: &mut |Match|) {
        // search and invoke callback on each match
    }

    // usage ('match' is a keyword, so the closure arg is named 'm'):
    complete_from_file("foo", &Path::new("myfile.rs"), 3, &mut |m| {
        // do something with each match
    });

    This means the whole search happens regardless of whether you just want the first match. The main reason for doing this is that iterators are somewhat harder for my brain to write. I'm hoping that yield functionality shows up to help at some point. In the meantime converting Racer's search functions into iterators is on the todo list.
  • Lots of debug!() calls. I'm using logging as my primary debug tool. This is a habit I've got in practically every language I use, even with java+intellij which has a great debugging environment. Rust's debug story isn't bad actually and I've used GDB a bit to debug stuff in Rust.
  • I'm still not completely comfortable with Option types in Rust; for my brain they still generate a little friction every time I need to think about unpacking a result. In Racer the majority case is to just ignore the None type (because we are searching so 'None' usually means nothing showed up). My current go-to method of doing this in code is .map():
    do_something_that_might_work().map(|res| { /* use res */ });
    I feel I should be using .iter() instead:
    for res in do_something_that_might_work().iter() { /* use res */ }
    but I can't look at 'for' without thinking 'loop'. I find this particularly unclear for methods that are named as if they could return plural results (e.g. StrSlice::find_str(), which looks like it could return multiple results, but in fact stops at the first one and returns an Option<uint>).
  • Code positions are referenced by byte offset in the start of the file. Unfortunately some days I prefer calling this the 'point' (like emacs does), and some days it's 'pos'. I really should just pick one.
  • Actually the biggest problem with Racer at the moment is that I'm tracking Rust master and am often a step behind new language changes. This means the Racer build is frequently broken with respect to the latest Rust nightly. I'm not sure how best to address this; Maybe I'll release a version that tracks the next Rust release when it gets done.

That's enough brain dumping for now. Racer's completion is pretty immature at the moment but it is still somewhat useful. I personally use the find-definition feature a lot for jumping around code to look up function arguments and I'd miss coding in Rust without it. The next phase for Racer is beefing up the type inference to better track types and handle methods on traits.

Clojure + emacs/slime without the magic package install stuff

Sometimes I just want to install stuff into emacs manually so I know what's going on. Here's a minimal setup to use emacs slime and connect to a swank server with a repl.

I stick everything in the /opt directory. I'm using clojure 1.3.0-alpha3

  1. git clone git://git.boinkor.net/slime.git
  2. git clone https://github.com/technomancy/swank-clojure.git
  3. setup emacs:

    (add-to-list 'load-path "/opt/clojure-mode")
    (require 'clojure-mode)
    (add-to-list 'load-path "/opt/slime")
    (eval-after-load "slime"
      '(slime-setup '(slime-repl)))
    (require 'slime)

    (That last 'eval-after-load' bit is to enable the slime-repl. Recent versions of slime come without it enabled by default)

  4. Go to your project directory and start the swank server:

    java -cp "lib/*:.:/opt/swank-clojure/src:/opt/clojure-1.3.0-alpha3/clojure.jar" clojure.main -e "(use 'swank.swank)(apply -main *command-line-args*)" /dev/null "$@"

  5. In emacs, connect to swank server:
    M-x slime-connect

The future is tree shaped

Have been thinking and reading more about parallelism recently. This set of slides from Guy Steele distilled a lot for me.

In order to realise parallel hardware performance we need to optimize our programs for computational bandwidth rather than latency. In terms of programming this means deprecating accumulation (cons, fold, streams, sequences) and favouring divide-and-conquer. This suggests a move to trees as the fundamental abstract building block for data.
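A tiny Rust illustration of the shape change: a left-to-right fold is inherently sequential (each step needs the previous accumulator), while a divide-and-conquer reduction splits into independent halves that could run in parallel.

```rust
// Sequential accumulation: each step depends on the one before it.
fn sum_fold(xs: &[i64]) -> i64 {
    xs.iter().fold(0, |acc, x| acc + x)
}

// Tree-shaped reduction: the two halves share no state, so a real
// implementation could hand them to separate threads or tasks.
fn sum_tree(xs: &[i64]) -> i64 {
    match xs.len() {
        0 => 0,
        1 => xs[0],
        n => {
            let (lo, hi) = xs.split_at(n / 2);
            sum_tree(lo) + sum_tree(hi)
        }
    }
}

fn main() {
    let xs: Vec<i64> = (1..=100).collect();
    assert_eq!(sum_fold(&xs), 5050);
    assert_eq!(sum_tree(&xs), 5050);
}
```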

Making tests less brittle

I do most of my interesting programming at home currently. Limited time in the evenings means that development is very stop-starty: projects get periods of mad enthusiasm and then are dropped for a few months while I concentrate on something else.

In this context I've found that having a large suite of automated tests can be a double edged sword: Usually when I come back to a project after a few months I have slightly different goals and so want to change the codebase, often drastically.

If the tests are tightly coupled to the implementation this adds significant drag, and is sometimes enough to cause me to delete a ton of code and start again. Worse than that, occasionally I just stop running tests and hack.

So over the years I've come to spend an increasing amount of effort isolating tests to make them less brittle in the face of change. This is not an exact science but there's often some way to re-jig a test to make it achieve most of the same purpose without being quite so tightly coupled to the implementation. I guess for my purposes I'm advocating mostly blackbox over whitebox testing at some level. Here are a couple of examples:

System test data in 'native' formats

I like to code 'top-down' as it keeps me focused. I usually start with some data that I want presenting or transforming or storing or mining or whatever. I write some small system tests and code down from there.

I learnt this tip the hard way: it's very advantageous to have test data in a native external format that's not likely to change.

On my first data-aggregation project I had most of the tests using an internal triples format of my own design, which meant that when I changed my mind a few months later I had a ton of testdata and tests to change. I ended up deleting a lot of the code and starting again.

The second time I picked up the project I converted all the inline testdata to CSV and JSON and made the tests run an implicit import transform before invoking the top-level functions. The tests became slightly more complex but also less brittle and I'm now much less likely to delete them.

Inputs/outputs as language primitives

Inevitably as a codebase gets bigger I find that adding top-level blackbox tests isn't enough to drive development and that I need whitebox tests at a unit-level to help with algorithmically intense parts of the project. These tests increase motivation and speed up my coding but unfortunately are a lot more brittle during change and tend to be the ones that get deleted first when I come back to a project.

To combat this I often find it's worth refactoring important algorithms into functions that take language-primitive arguments (e.g. ints, lists etc..), separate from the object graph of the application.

  • A totally contrived illustration. Replace:

        Foo::do-something-clever-with-Bar-Objects( objects )

    with:

        do-something-clever-with-id->name-pairs( id/name-pairs )

    and have 'Foo' callers unpack the Bar objects into a list of id,name pairs before calling the function.

The tests checking 'do-something-clever' functionality are now less coupled to the internal object graph and are passing only the data required to fulfill the operation.
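A Rust-flavoured sketch of such a refactor (all names invented): the algorithm takes plain (id, name) pairs rather than the application's object graph, so its tests need no domain objects at all.

```rust
// The "clever" algorithm, written against language primitives: find the
// id whose name is longest. No Foo, no Bar, nothing from the app's
// object graph.
fn longest_name(pairs: &[(u32, String)]) -> Option<u32> {
    pairs
        .iter()
        .max_by_key(|(_, name)| name.len())
        .map(|(id, _)| *id)
}

fn main() {
    // A test can construct its input inline, with no domain setup.
    let pairs = vec![(1, "ann".to_string()), (2, "beatrice".to_string())];
    assert_eq!(longest_name(&pairs), Some(2));
    assert_eq!(longest_name(&[]), None);
}
```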

Now this is obviously a tradeoff: The additional unpacking may add overhead (sometimes not). It might make the function interface unnecessarily complicated. Sometimes the tradeoff works well, sometimes it doesn't, but I always at least consider trying to separate out domain-objects from an algorithmically intense function. Often the algorithm is central to the application but the layout and interaction of the object graph is contrived.


It might be that I'm missing some important piece of the testing puzzle - I've mostly coded test-first for as long as I can remember but I've always had a mixed relationship with the outcome. Hopefully there's a silver bullet somewhere that I just haven't been told about yet.