Thanks for the feedback. What I've done for now is copy the specific functions I want stubbed out by default onto the particular instance mock, which roughly achieves what I'm looking for.
I should probably explain a little better what I'm trying to achieve: I wrote a C++ mocking framework that lets you specify a default return value for any function. If you then specifically record one of those defaulted function calls, the recorded version takes priority. After about six months of using both TypeMock and the C++ framework, we noticed that the C++ tests were far easier to read, debug and maintain. It turns out the reason for this is the default stubs, and being able to override them.
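To make that concrete, here is a minimal sketch of the dispatch idea. The class and method names, and the int-only return type, are my simplification for this post, not the real framework (which generates the equivalent of this per mocked function):

#include <map>
#include <string>

// Minimal sketch: each mocked function asks the registry what to return.
// A recorded expectation takes priority; otherwise the default stub value
// registered for that function is used.
class StubRegistry
{
public:
    // REGISTER_STUB equivalent: set a default return value.
    void RegisterStub(const std::string& function, int defaultReturn)
    {
        defaults[function] = defaultReturn;
    }

    // EXPECT_RETURN equivalent: a recorded value overrides the default.
    void ExpectAndReturn(const std::string& function, int value)
    {
        recorded[function] = value;
    }

    // Called from the mocked function body.
    int Dispatch(const std::string& function) const
    {
        auto it = recorded.find(function);
        if (it != recorded.end())
            return it->second;         // recorded expectation wins
        return defaults.at(function);  // fall back to the default stub
    }

private:
    std::map<std::string, int> defaults;
    std::map<std::string, int> recorded;
};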
In C#, we may have a test like this:
[Test]
[VerifyMocks]
public void TestMyFunctionDamagesBarIfCalculateSomethingElseIsPositive()
{
    Bar bar = new Bar();
    using (RecordExpectations recorder = RecorderManager.StartRecording())
    {
        recorder.DefaultBehavior.CheckArguments();
        recorder.ExpectAndReturn(bar.Property1, 5);
        recorder.ExpectAndReturn(bar.Property2, 6);
        recorder.ExpectAndReturn(bar.Property3, 7);
        recorder.ExpectAndReturn(bar.Property4, 8);
        bar.CalculateSomething(5, 6, 7);
        recorder.CheckArguments();
        recorder.Return(123);
        bar.CalculateSomethingElse(123, 8);
        recorder.CheckArguments();
        recorder.Return(5);
        bar.Damage();
    }
    Foo foo = new Foo();
    foo.MyFunction(bar);
}
[Test]
[VerifyMocks]
public void TestMyFunctionKillsBarIfCalculateSomethingElseIsNegative()
{
    Bar bar = new Bar();
    using (RecordExpectations recorder = RecorderManager.StartRecording())
    {
        recorder.DefaultBehavior.CheckArguments();
        recorder.ExpectAndReturn(bar.Property1, 5);
        recorder.ExpectAndReturn(bar.Property2, 6);
        recorder.ExpectAndReturn(bar.Property3, 7);
        recorder.ExpectAndReturn(bar.Property4, 8);
        bar.CalculateSomething(5, 6, 7);
        recorder.Return(123);
        bar.CalculateSomethingElse(123, 8);
        recorder.Return(-5);
        bar.Kill();
    }
    Foo foo = new Foo();
    foo.MyFunction(bar);
}
In C++, the tests end up looking more like this:
// REGISTER_STUB(Function, optional ReturnValue)
// EXPECT_RETURN(FunctionCall, ReturnValue)
FIXTURE(...)
{
    REGISTER_STUB(Bar::GetProperty1, 5);
    REGISTER_STUB(Bar::GetProperty2, 6);
    REGISTER_STUB(Bar::GetProperty3, 7);
    REGISTER_STUB(Bar::GetProperty4, 8);
    REGISTER_STUB(Bar::CalculateSomething, 1);
    REGISTER_STUB(Bar::CalculateSomethingElse, 1);
    REGISTER_STUB(Bar::Damage);
    REGISTER_STUB(Bar::Kill);
}
TEST_FIXTURE (MyFunctionUsesResultOfFirstCalculationInSecondCalculation)
{
    Bar bar;
    RECORD
    {
        EXPECT_RETURN(bar.CalculateSomething(5, 6, 7), 123);
        EXPECT_RETURN(bar.CalculateSomethingElse(123, 8), 5);
    }
    Foo foo;
    foo.MyFunction(bar);
}
TEST_FIXTURE (MyFunctionDamagesBarIfFinalCalculationIsPositive)
{
    Bar bar;
    RECORD
    {
        EXPECT_RETURN(bar.CalculateSomethingElse(0, 0), 1).IgnoreArguments();
        bar.Damage();
    }
    Foo foo;
    foo.MyFunction(bar);
}
TEST_FIXTURE (MyFunctionKillsBarIfFinalCalculationIsNegative)
{
    Bar bar;
    RECORD
    {
        EXPECT_RETURN(bar.CalculateSomethingElse(0, 0), -1).IgnoreArguments();
        bar.Kill();
    }
    Foo foo;
    foo.MyFunction(bar);
}
This is a very trivial example, but I hope it makes my point about the tests being more explicit. If I'm testing some conditional behaviour at the end of a function, and I've already tested the code up to that point, then I don't need to test it again. More than that, what I am testing becomes much clearer when I don't have the clutter of setting expectations on the code I don't care about.
In the example above, I don't need to explicitly test that I call PropertyN/GetPropertyN(), since I use the results in CalculateSomething() and CalculateSomethingElse(), which I am testing. In the last two tests, the result of CalculateSomething() is irrelevant, but with TypeMock I'm still forced to set an expectation on it (assuming that running the real code would do something nasty).
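For reference, the shape of Foo::MyFunction that the tests above imply would be something like this (my reconstruction; the real implementation isn't shown here):

void Foo::MyFunction(Bar& bar)
{
    // The first calculation feeds the second; the sign of the second
    // result decides whether we Damage() or Kill() the Bar.
    int first = bar.CalculateSomething(bar.GetProperty1(),
                                       bar.GetProperty2(),
                                       bar.GetProperty3());
    int second = bar.CalculateSomethingElse(first, bar.GetProperty4());
    if (second > 0)
        bar.Damage();
    else if (second < 0)
        bar.Kill();
}

With the default stubs registered in the fixture, the last two tests only have to mention CalculateSomethingElse() and the expected Damage()/Kill() call; everything else is already taken care of.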
Although we try to keep our functions as small as possible, some of our tests still need a lot of setup code. When one of those tests fails, it becomes really hard to work out which code is relevant to the behaviour under test and which is just there to let the test run at all.
Anyway, thanks again for the help on this one. I'd love to hear any other thoughts or suggestions.