Yerushalmi: Does anyone know if this is accurate?
I don't know, but one could check it with a debugger (use a heavy debug build of DOSBox, or run a different version of the game in an emulator that has a debugger): find where a specific stat is stored (that shouldn't be hard), set a breakpoint on that address, and see what the code that writes it is doing.
Of course, to do this you will need some knowledge of the assembly language for the relevant CPU (x86 for the DOS version; the non-DOS versions are built for other CPUs, for obvious reasons), and at least some familiarity with a debugger, but with some time and effort it could be done.
Note that, for the algorithm you suggest, the game would have to store the flips somewhere in memory.
By the way, here's a slightly different algorithm that gives the same result, but would, I think, be easier to code:
* Game flips 12 coins, and adds the number of heads. (Alternatively, game rolls 12d2 - 12; that works the same way but has an added subtraction.)
* Game caps the result at 10.
* Then, the game adds 8.
Alternatively, one could just say that each stat is 12d2 - 4, capped at 18; that's the same algorithm expressed in standard dice notation.
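To make the equivalence concrete, here's a quick Python sketch of both versions (function names are mine, not from the game; this is just an illustration of the algorithm, not the game's actual code):

```python
import random

def roll_stat_flips(rng=random):
    """Flip 12 coins, count heads, cap at 10, then add 8."""
    heads = sum(rng.randint(0, 1) for _ in range(12))
    return min(heads, 10) + 8

def roll_stat_dice(rng=random):
    """12d2 - 4, capped at 18."""
    total = sum(rng.randint(1, 2) for _ in range(12)) - 4
    return min(total, 18)

# Why they match: with h heads, 12d2 sums to 12 + h,
# so 12d2 - 4 = h + 8, and min(h + 8, 18) == min(h, 10) + 8
# for every possible h from 0 to 12.
assert all(min(h, 10) + 8 == min(h + 8, 18) for h in range(13))
```

Either way, each stat lands in the 8-18 range with a binomial distribution squashed at the top.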
(By the way, some games of its era, like some Wizardry games, actually *did* use non-standard die types; I have seen games that would actually use d7 for the damage of certain weapons.)