I was thinking more like compile time. That would make it easy to handle differences between the two boards as they come up. For example, the LED on the Bean+ is a LOT brighter than the Bean's, bright enough to be uncomfortable. At compile time you could do something like:
#ifndef BEAN_PLUS   // hypothetical macro; whatever the toolchain would define for the Bean+
const int dimLED = 1;
const int midLED = 8;
const int brightLED = 64;
#else               // building for Bean+
const int dimLED = 32;
const int midLED = 128;
const int brightLED = 256;
#endif
The exact values don't really matter.
Right now, that is the big thing. OK, not that big, but others will come. For example, we are starting to suspect there is something wrong with how scratch banks are handled, and with the reliability and integrity of that data. I understand that if it is true, it could be a difficult thing to fix and might take many months. But by knowing the target ahead of time, we could program a workaround if it is the Bean+.
It could be argued whether compile time or run time would be better. As I think about it, I lean towards compile time, as it would reduce code size and, more importantly, dynamic memory usage.
And dynamic memory is a wall I've already run up against. And it hurt.
Just thought of something: I don't know how much control you guys have over the compiler. We might have to go with run time after all.