I am working on building a combat modelling program in Excel for the WoW expansion due out in December. The only really tricky part of building such a program is modelling exactly what happens in combat so the user can see which sequences of abilities will be most effective. The hardest part by far is handling the random elements that are sometimes involved. For example, in the last expansion I had to model an effect where 40% of the time a weapon swing would make an ability usable right away. I ended up modelling it by having the program track a counter that started at 1, incremented each time it was checked, and reset to 1 when it reached 21. I preset 8 of those numbers (1, 2, 4, 5, 8, 9, 13, 15) to trigger a 'yes' result; otherwise the program assumed a 'no'. This guaranteed that over time 8/20, or 40%, of the results would be yes, and the system worked out well. It wasn't random, so it generated the exact same results every time for every user, but it gave results extremely close to actual game values.
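Roughly, that old system boiled down to something like this (sketched here in Python just to show the logic; the real thing is Excel formulas, and only the counter and trigger numbers are taken from above):

```python
# Rough sketch of the fixed-40% system: a counter that cycles 1..20 and a
# preset set of 8 "trigger" values, so 8 of every 20 checks say yes.
TRIGGERS = {1, 2, 4, 5, 8, 9, 13, 15}

counter = 0  # advances 1..20, then wraps back to 1 on the next check

def swing_procs():
    """Return True if this weapon swing triggers the ability."""
    global counter
    counter = counter + 1 if counter < 20 else 1
    return counter in TRIGGERS
```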
The trick this time is that I have % chances for specific things to occur that are not fixed. For example, my Hand of Light (HoL) ability has a chance to go off on each attack, but that chance starts at 8% and goes up with the gear I wear. I need a modelling system that can handle any particular % chance, including decimal places, and I want that system to produce identical results each time it is run. Unfortunately, the only way I know to do this is to make the program run the simulation over huge time periods so that the really large sample sizes smooth out the data; otherwise the results will be different on each run and it will be very difficult to draw useful conclusions. As I understand it I can control the seed the program uses to generate random numbers, but if that seed happens to be really wonky in some way my data would still be useless.
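For reference, the seeded-RNG approach I'm describing would look something like this (again just a Python sketch; the seed value, function name, and 8.37% figure are purely illustrative):

```python
import random

# A fixed seed makes every run identical, and any chance with decimals can be
# fed in directly. The worry above still applies: a short run with one
# particular seed may not land anywhere near the theoretical rate.
rng = random.Random(12345)  # arbitrary fixed seed -> same sequence every run

def hol_fires(chance_percent):
    """True if Hand of Light procs on this attack at the given % chance."""
    return rng.random() * 100.0 < chance_percent

procs = sum(hol_fires(8.37) for _ in range(300))
print(procs, "procs out of 300")  # reproducible, but can sit well off 8.37%
```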
The only kludge I have been able to think up is to have a preset start sequence and then have the program check whether the overall percentage so far is over or under the theoretical one, and pick the next result based on that. For example, I just have the ability go off on the 4th, 14th and 24th try of the first 30. On the 31st try we know the current result is 3/30, or 10%, so if the theoretical chance is over 10% it goes off on the 31st try, and if the theoretical chance is below 10% it does not. Either way it follows up by calculating either 3/31 or 4/31, checking that against the theoretical chance, and going again. This would at least give me identical results each run, but it has big problems with plateaus. If I run a test with 300 attacks, I might raise the chance from 9.99% to 10.01% and see a significant gain because that change flipped the value of the 300th result, whereas going from 10.01% to 10.03% would record no difference at all. I suppose I really have no way to avoid plateaus without using *immense* simulations, so this may well be the best way to make this work. If anyone else has any suggestions, please feel free to let me know.
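In pseudocode the kludge would look roughly like this (Python sketch; the function and variable names are mine, the preset 4/14/24 opening and the 10.01% example are from above):

```python
# Preset opening sequence, then each new attack fires only when the observed
# proc rate so far has fallen below the theoretical chance.
def proc_sequence(chance_percent, n_attacks, preset=(4, 14, 24)):
    """Return a deterministic list of True/False proc results."""
    results = [(i + 1) in preset for i in range(min(30, n_attacks))]
    hits = sum(results)
    for i in range(len(results), n_attacks):
        fire = (hits / i) * 100.0 < chance_percent  # e.g. 3/30 vs 10.01%
        results.append(fire)
        hits += fire
    return results

# The plateau problem in action: nearby chances can yield identical totals.
for pct in (9.99, 10.01, 10.03):
    print(pct, sum(proc_sequence(pct, 300)))
```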