A couple of days ago, I posted my thoughts on the potentially harmful effects that the new VS11 Fakes framework may have on the testing community. The post seems to have generated more interest than I expected with lots of comments, retweets and a request to discuss the framework with Peter Provost, a Microsoft Visual Studio Program Manager Lead.
I think there have been a number of insightful comments on the blog and rather than respond to all of them I thought it’d be easier and more concise to write a follow-up post. Plus I’d like to share some thoughts on my conversation with Provost.
Firstly, some of the comments on the blog post have added to my fears. And let me be the first to admit that the more I learn the more I realize how little I know. The comments have been insightful and I love the collaboration.
A common response to the post was that shims will allow us to test previously untestable code, with 3rd party libraries being a common example. I assume what is meant by testing 3rd party libraries is actually "testing my code that calls 3rd party libraries". If what was meant was actually testing the 3rd party libraries themselves, for that I prefer integration tests which hit the 3rd party libraries and execute the real code. With regard to testing my code which uses 3rd party libraries, I think there is a bigger code smell at play. I prefer to wrap my 3rd party libraries behind an anti-corruption layer (see Domain Driven Design, Evans, p. 364). This means wrapping all calls to a 3rd party library with my own delegating implementation, which implements an interface I define. This has benefits beyond testing: it also keeps the 3rd party library (over which you have little control) from bleeding into the rest of your code. If the author of the library changes its implementation, you only need to change your adapter. Writing these adapters is really quite simple, and easier than it seems when you first start considering it.
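To make the idea concrete, here's a minimal sketch of such an adapter. The vendor class and all the names below are illustrative, not a real API; the point is that the 3rd party type appears in exactly one place.

```csharp
using System;

// Stand-in for a hypothetical 3rd party class you don't control:
// concrete, no interface, hard to fake directly.
public class VendorEmailClient
{
    public int DispatchMessage(string to, string body)
    {
        // imagine real SMTP work here
        return 0; // the vendor's success code
    }
}

// Your own interface, shaped by what *your* code needs.
public interface IEmailSender
{
    bool Send(string to, string body);
}

// The adapter: the only place the vendor type is referenced.
public class VendorEmailSenderAdapter : IEmailSender
{
    private readonly VendorEmailClient _client = new VendorEmailClient();

    public bool Send(string to, string body)
    {
        // Translate the vendor's convention into your own.
        return _client.DispatchMessage(to, body) == 0;
    }
}
```

The rest of your code depends only on IEmailSender, so a test can supply a trivial hand-rolled fake, and a breaking change in the vendor library touches only the adapter.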
And this is a great example of why I'm frustrated with the VS11 Fakes, particularly shims. Instead of continuing to look for solutions that fix the code smells, it becomes too easy to just "shim it". In my conversation with Provost, even he sees the evil in shims and does not intend for shims to be used this way. He suggested that shims should be used as a temporary means of refactoring untestable code, and by temporary I mean very temporary. He reiterated (attributing the point to Martin Fowler) that untested code should not be refactored, and that shims are meant to let you write temporary tests so you can immediately refactor the method while it is under test. Once refactored, and with the tests passing, you should immediately rewrite the unit tests using something other than shims. Michael Feathers refers to tests used this way as a software vise (Feathers, Working Effectively With Legacy Code, p. 10).
The problem is, even Microsoft's documentation does not state that shims are evil and should only be used for this purpose. OK, I wouldn't expect them to use the word evil (that's just the first thing that comes to my mind), but I would expect something more marketable that lets us know they are not intended to be used except for unusual and temporary purposes. That this intent is not clear is underscored by TigerShark's blog comment: "However, that doesn't seem to be the reason Microsoft created this framework to begin with (but I don't know)". It actually is Microsoft's intent, and to Provost's credit he seemed surprised that the documentation didn't say so. He felt it was important for Microsoft to make the point that shims should be used to enable refactoring toward a more testable state. BTW, he posted this morning on this very topic, and he does actually call them evil.
But, honestly, how much of a difference is the documentation going to make anyhow? How many of us read the documentation rather than just playing with the framework? And the real issue still remains: MS is releasing this without mocks. There were a couple of comments on my first post stating that MS doesn't need to (and shouldn't) attempt to create a replacement for RhinoMocks/Moq since those are already widely used. However, I think MS including mocks would not be so much an attempt to pull people away from other mock frameworks as an endorsement of this way of testing. By providing the fakes framework without mocks, there will be many who argue that Microsoft doesn't intend for us to use mocks at all. BTW, Provost did say that he is interested in the mock/verify space and that he would like to see it included at a later date.
This post is getting long, so let me just make one more point. I stated in my first blog post that you can't do Arrange-Act-Assert with the fakes framework. This is actually incorrect. Peter pointed out that the right way to use stubs is to capture values in the stub delegates and then assert on them later, in the Assert phase of AAA. He also stated that the stubs are better because they are just C#; you don't have to learn a new framework. By inference, this means you have the power to do whatever you want inside your stub delegates, within the bounds of C#. In my mind that's part of the problem, and it was highlighted in the original InfoQ article I referenced, where the author stated, "Mocks are missing, but you can do assertions within the stub method implementations to overcome this in some scenarios". Obviously this isn't what Microsoft intended, as Provost confirmed, but the problem is the framework doesn't force you to do it right, so you have all sorts of power to do it wrong.
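To illustrate Peter's point, here is a hand-rolled sketch of the pattern in plain C#. The Fakes framework generates stub classes for you whose behavior you set through delegate properties; since those generated names depend on your own assemblies, all the types and names below are illustrative stand-ins, not the generated API. The key move is that the delegate only records what happened, and all assertions live in the Assert phase.

```csharp
using System;

// A stub in the spirit of the Fakes stub delegates:
// behavior is injected as an ordinary C# delegate.
public interface IPriceService
{
    decimal GetPrice(string sku);
}

public class StubPriceService : IPriceService
{
    // Assign this delegate to control the stub's behavior.
    public Func<string, decimal> GetPriceString;
    public decimal GetPrice(string sku) => GetPriceString(sku);
}

// A hypothetical class under test.
public class Checkout
{
    private readonly IPriceService _prices;
    public Checkout(IPriceService prices) { _prices = prices; }
    public decimal Total(string sku, int qty) => _prices.GetPrice(sku) * qty;
}

public static class CheckoutTests
{
    public static void Total_MultipliesPriceByQuantity()
    {
        // Arrange: configure the stub; the delegate records the call
        // rather than asserting inside the stub body.
        string requestedSku = null;
        var stub = new StubPriceService
        {
            GetPriceString = sku => { requestedSku = sku; return 2.50m; }
        };
        var checkout = new Checkout(stub);

        // Act
        var total = checkout.Total("ABC", 4);

        // Assert: the captured value and the result are checked here.
        if (requestedSku != "ABC") throw new Exception("wrong sku requested");
        if (total != 10.00m) throw new Exception("wrong total");
    }
}
```

Done this way, AAA stays intact: the stub delegate is set in Arrange, observations are captured during Act, and every check happens in Assert.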
I don’t argue that there is some value in the fakes framework, I just think it is too easily misconstrued and will be frequently misused.