Shadow copies? More like “phantom” copies…

I’ve just run into a strange issue.  It’s the end of the day, so I’ll leave working it out until tomorrow.

I have some code that is reading a config file, but when I open the file in TextPad, I see different values than my application does.  I checked it with Notepad.  Notepad agrees with my application, TextPad shows something else.

So I’ll tackle this in the morning, but it is pretty weird.  I think it might be a shadow copy issue (I’m running Vista).

Interesting times.


My Opinions on “Test Driven Development: By Example” by Kent Beck

I’m trying to get into TDD, and to this end I’m halfway through reading this book: http://www.amazon.com/Test-Driven-Development-Kent-Beck/dp/0321146530

I’m not sure I agree with *all* the principles in it though.

Beck proposes a roughly three-step method:

  1. Write a test.
  2. Write whatever it takes, in the least amount of time, to get the stuff to compile and run successfully (a.k.a. “get to a green bar as quickly as possible”).  This apparently can include hard-coding values (gulp).
  3. Refactor to fix up your deliberately dodgy code (a.k.a. “remove duplication”).
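To make step 2 concrete, here’s a minimal sketch in Python of what “fake it” looks like (my own toy `times` function, not an example taken from the book):

```python
import unittest

# Step 2 in miniature: a deliberately hard-coded "fake it" implementation.
def times(amount, multiplier):
    return 10  # just enough to turn the bar green (gulp)

class TestTimes(unittest.TestCase):
    def test_five_times_two(self):
        # Step 1: this test existed before `times` did.
        self.assertEqual(times(5, 2), 10)

# Run with: python -m unittest <this file>
```

The bar goes green, but only because the hard-coded value happens to match the one test we have so far.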

I understand the concept of test-first, but as a singularly scatterbrained individual, I can’t bring myself to go with “fake it ’til you make it” and “speed is more important than correct code (but only for a short while)”.  How will I remember/find those horrible bits of hard-coded code and ensure I’ve made them correct?  I think the trick (though it’s not said in as many words) is to write more tests that attack the code from different angles until one of them fails against the hard-coded values, then fix them.  As a TDD beginner, I don’t think I can be sure I’d do this effectively.  Essentially, it’s suggested that I deliberately write bugs into my code.  Eeeek!

What’s the point of a successfully run unit test if you know it’s testing broken code?  If you write any old code that will get the test working, you’ve totally negated the reason for the test in the first place.

Unit Test + Correct Code should equal Green Bar

Unit Test + Incorrect Code should equal Red Bar

not

Unit Test + Broken Code = Green Bar

I want tests that fail unless the code is correct.  While this isn’t always easy, I can’t bring myself to deliberately write incorrect code just to get a successful test.  If I have a green bar, what’s left to tell me what I need to do next?  As I said, apparently you just write more tests.
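To be fair, Beck’s escape hatch here is “triangulation”: a second test from a different angle turns the hard-coded fake red and forces a real implementation.  A sketch of what that ends up looking like (same toy `times` function, my own example):

```python
import unittest

# After triangulation: the second test below would fail against a
# hard-coded `return 10`, forcing the general implementation.
def times(amount, multiplier):
    return amount * multiplier

class TestTimes(unittest.TestCase):
    def test_five_times_two(self):
        self.assertEqual(times(5, 2), 10)

    def test_three_times_three(self):
        # The triangulating test: `return 10` cannot survive this one.
        self.assertEqual(times(3, 3), 9)

# Run with: python -m unittest <this file>
```

My worry stands, though: you have to remember to write that second test, or the fake lives on behind a green bar.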

I prefer a different method:

  1. Write a test.
  2. Write the code that makes the test work.
  3. Refactor if I change my mind about that code, safe in the knowledge that as long as the test passes, the code is okay.

To his credit, Beck does suggest in his book that the whole idea is that you gradually learn how big your changes between test runs should be; it’s a learning process.  Once you know what you need to make a given test pass, you’re allowed to write that “Obvious Implementation”.
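For what it’s worth, here’s how I picture the “Obvious Implementation” route, which is really just my preferred three steps above: write the test, then immediately write the code you already know is right (again a toy example of my own):

```python
import unittest

# "Obvious Implementation": the real code, written in one step,
# because we already know what it needs to be.
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

class TestConversion(unittest.TestCase):
    def test_freezing_point(self):
        self.assertEqual(celsius_to_fahrenheit(0), 32)

    def test_boiling_point(self):
        self.assertEqual(celsius_to_fahrenheit(100), 212)

# Run with: python -m unittest <this file>
```

No deliberate bugs, and the green bar actually means what I want it to mean.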

I’m basically not sure I like the recommendation that the process be quite so iterative.  If you have to run your tests after every tiny change, you’re effectively introducing a compilation tax that will quickly get frustrating.

I’d love to hear how you’ve interpreted Beck’s advice for your own TDD process.  I’m sure I’m just missing the point, so enlightenment is always welcome :).