How to Think Like a Mathematician
During our work on the hardware-based, motion-tracking side of EliteForm, we started out in a heavy R&D mode. We went through a number of possible solutions and the process was prototype, prototype, prototype. Once our current technology passed our smoke tests, we needed to put it through several rigorous testing and validation cycles to see if it would hold up in the field. Luckily, that’s about the time we hired our Applied Mathematician, Josh. Yes, that Josh from MinutesPlease.
Josh came from an academic background and, to put it simply, he thinks differently (no, that’s not an Apple pun). He comes from a world where a “working” prototype isn’t enough for something to be accepted by the community. Josh instilled several of these values in our team and ended up having a huge positive impact on our project. Here are some of the takeaways that might help you on your own projects:
See the data, not just what you want to see
The human brain is an interesting device. Its primary function is to take in evidence and draw conclusions from that evidence. As it turns out, it doesn’t always do this very well. Often, when trying a new solution or re-running a set of tests that recently failed, your brain will look for evidence that confirms your hopes rather than weighing all of the evidence. This is a form of confirmation bias, which is the more scientific name for wishful thinking. On his blog You Are Not So Smart (and in his book of the same name), David McRaney has some great explanations of other ways your brain can trick you.
At one point in the project, we had an objective test set up to assess the quality of the system. After the first run, we had an idea that seemed like it would improve the system. After about 30 minutes, the 15 test runs were completed. The person running the test came back and said, “Look at how much better it did!” We asked for a better look at the numbers and realized that there was actually no improvement over the previous set of tests. There were some individual runs that were slightly better, but there were an equal number that were worse. The person running the tests was seeing what they wanted to see.
Control the variables
In a non-trivial system, there are often more than a few moving parts. You may have thresholds, environmental variables, and even variation in which user is running the test. If you want to figure out which of these variables actually improves the system, it’s important to change only one at a time whenever possible. If you run a test after changing two variables, you’ll never know which one caused your new results (or whether both did).
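Here’s a minimal sketch of that discipline in C#. The TestConfig fields (DetectionThreshold, SmoothingWindow) and the RunTrial method are hypothetical stand-ins for whatever knobs and test harness your own system has; the only point is that exactly one field differs from the baseline in each run.

    using System;

    // Hypothetical configuration for a test run; the parameter names are made up.
    public record TestConfig(double DetectionThreshold, int SmoothingWindow);

    public static class Experiment
    {
        // Stand-in for a real test harness: run the system with this config
        // and return some measured quality score.
        public static double RunTrial(TestConfig config) => /* measure the system here */ 0.0;

        public static void Main()
        {
            var baseline = new TestConfig(DetectionThreshold: 0.5, SmoothingWindow: 5);
            double baselineScore = RunTrial(baseline);

            // Vary the threshold while the smoothing window stays at its baseline value.
            foreach (var threshold in new[] { 0.4, 0.6 })
            {
                var candidate = baseline with { DetectionThreshold = threshold };
                Console.WriteLine($"threshold={threshold}: {RunTrial(candidate) - baselineScore:+0.000;-0.000}");
            }

            // Only after the threshold question is settled would we start varying
            // SmoothingWindow, so any change in score points at a single variable.
        }
    }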
Statistical significance is significant
The real world has a lot of randomness in it. In software, maybe the OS scheduled your program slightly differently because another program was using the CPU. In the physical world, maybe the weather is slightly different from the last time you tested. It’s important to collect enough samples to be sure that a change, for better or worse, is not just a random anomaly.
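As a back-of-the-envelope example in C#, here’s one rough way to ask whether a new batch of runs is actually better than the old one. The sample numbers are invented for illustration, and the two-standard-errors rule of thumb is a stand-in for the proper significance test a statistics library would give you.

    using System;
    using System.Linq;

    public static class SignificanceCheck
    {
        public static void Main()
        {
            // Scores from repeated test runs before and after a change (made-up values).
            double[] before = { 0.81, 0.84, 0.79, 0.83, 0.80, 0.82, 0.78, 0.85 };
            double[] after  = { 0.83, 0.80, 0.84, 0.82, 0.79, 0.86, 0.81, 0.83 };

            double meanDiff = after.Average() - before.Average();

            // Standard error of the difference between the two means.
            double se = Math.Sqrt(Variance(before) / before.Length + Variance(after) / after.Length);

            Console.WriteLine($"difference = {meanDiff:0.000}, standard error = {se:0.000}");
            Console.WriteLine(Math.Abs(meanDiff) > 2 * se
                ? "The difference looks real; worth keeping the change."
                : "The difference is within the noise; gather more samples or drop the change.");
        }

        // Sample variance (n - 1 in the denominator).
        private static double Variance(double[] xs)
        {
            double mean = xs.Average();
            return xs.Sum(x => (x - mean) * (x - mean)) / (xs.Length - 1);
        }
    }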
Every bit counts
Another fun aspect of bringing Josh on was his approach to programming. He’s been programming for a long time, but hadn’t used .NET before, which let him ask some very interesting questions. For example: “How is List&lt;T&gt; implemented in .NET?” Most CS grads won’t know and will never care, but when performance is an important factor, it can matter a lot. A linked list performs very differently from an array-backed list on insertions and random indexing.
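For the curious: List&lt;T&gt; is in fact backed by an array, and the difference is easy to see in practice. The sketch below contrasts it with LinkedList&lt;T&gt;; the element count is arbitrary and the timings will vary by machine, but the shape of the cost won’t.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    public static class ListVsLinkedList
    {
        public static void Main()
        {
            const int n = 50_000;

            // Inserting at the front of a List<T> shifts every existing element.
            var list = new List<int>();
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < n; i++)
                list.Insert(0, i);
            Console.WriteLine($"List<T> front insertions:       {sw.ElapsedMilliseconds} ms");

            // Inserting at the front of a LinkedList<T> just rewires a couple of nodes.
            var linked = new LinkedList<int>();
            sw.Restart();
            for (int i = 0; i < n; i++)
                linked.AddFirst(i);
            Console.WriteLine($"LinkedList<T> front insertions: {sw.ElapsedMilliseconds} ms");

            // Random indexing is the reverse: List<T> does a direct array lookup,
            // while LinkedList<T> has no indexer at all and must be walked node by node.
            sw.Restart();
            long sum = 0;
            for (int i = 0; i < n; i++)
                sum += list[i];
            Console.WriteLine($"List<T> indexed reads:          {sw.ElapsedMilliseconds} ms (checksum {sum})");
        }
    }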
Most of us will never be as objective as a well-trained mathematician, and that’s exactly why surrounding yourself with different kinds of people is such a good thing.