Corticon's integrity checking tools can help you make sure you have no rule conflicts and that the rules are complete.
However, you still need to create test data and expected results to prove that the rules produce the correct answers.
It would be nice if we could automate the creation of test scenarios. At the very least, there ought to be a test case that causes every rule to fire. It's fairly easy to determine what will cause a rule to fire - you just read down the conditions in its rule column.
When the value in a rule column is a literal (e.g., 'red' or 98.6), it can be used directly to create test cases automatically.
If the value is a range (21..65) or a comparison (>65), then we can copy that expression to the test value and allow the user to select a specific value.
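The value-derivation step above can be sketched in a few lines. This is a minimal illustration, not Corticon tooling: it assumes condition cells arrive as plain strings in the three shapes just described (literal, range, comparison) and proposes boundary values for the non-literal cases.

```python
import re

def candidate_values(cell: str):
    """Derive candidate test values from a rule-column condition cell.

    Handles the three cell shapes described above:
      - literal    ('red', 98.6)  -> use directly
      - range      (21..65)       -> test both endpoints
      - comparison (>65, >=65)    -> test the boundary, nudged off
                                     by one when the bound is excluded
    """
    cell = cell.strip()
    m = re.fullmatch(r"(-?\d+(?:\.\d+)?)\.\.(-?\d+(?:\.\d+)?)", cell)
    if m:  # range: both endpoints are interesting test values
        return [float(m.group(1)), float(m.group(2))]
    m = re.fullmatch(r"(>=|<=|>|<)\s*(-?\d+(?:\.\d+)?)", cell)
    if m:  # comparison: pick a value just inside the condition
        op, bound = m.group(1), float(m.group(2))
        if op in (">", ">="):
            return [bound + (0 if "=" in op else 1)]
        return [bound - (0 if "=" in op else 1)]
    return [cell.strip("'")]  # literal: use as-is
```

For example, `candidate_values("21..65")` suggests both endpoints, while `candidate_values(">65")` suggests 66. A user would still pick or refine the final value, as suggested above.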
I started to write such a generator because I saw it as so valuable. However, I was unable to parse the rulesheet XML clearly enough to identify the cases to generate.
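For anyone attempting the same thing, the general shape of the XML-walking step might look like the sketch below. To be clear, the tag and attribute names here ("Condition", "ruleColumn") are placeholders, not the actual Corticon rulesheet schema - you would need to inspect a real rulesheet file and substitute the element names it actually uses.

```python
import xml.etree.ElementTree as ET

def condition_cells(root):
    """Collect condition-cell text per rule column from a parsed rulesheet.

    NOTE: 'Condition' and 'ruleColumn' are hypothetical names standing in
    for whatever the real rulesheet XML uses; adjust after inspecting
    an actual file.
    """
    cells = {}
    for cond in root.iter():
        if cond.tag.endswith("Condition"):        # placeholder tag name
            for cell in cond.iter():
                col = cell.get("ruleColumn")      # placeholder attribute
                if col is not None and cell.text:
                    cells.setdefault(col, []).append(cell.text.strip())
    return cells

# Usage against a file would be:
#   root = ET.parse("rulesheet.ers").getroot()
#   print(condition_cells(root))
```

The output maps each rule column to the list of condition expressions that must hold for that column's rule to fire, which is exactly the input a test-case generator needs.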
This tool would be a great complement to the looping/conflict/completeness checks!!