Wednesday, October 29, 2014

Pluralsight Learning Path: Getting to Great with C#

Pluralsight has a lot of courses (like hundreds and hundreds). So it can be daunting to figure out where you need to start. To help get people pointed in a direction that works for them, Pluralsight has a collection of Learning Paths that highlight a set of courses to help you meet a goal -- whether you're working toward certification or want to learn to build a particular type of application.

A new learning path was published today: Getting to Great with C#.

Getting to Great with C#
  • Object-Oriented Programming Fundamentals in C#
  • Defensive Coding in C#
  • Clean Code: Writing Code for Humans
  • C# Interfaces*
  • Abstract Art: Getting Things "Just Right"*
  • Dependency Injection On-Ramp*
  • SOLID Principles of Object Oriented Design
  • Design Patterns On-Ramp*
  • Design Patterns Library
Be sure to follow the link (here it is again) to get the details on the goals of the learning path and descriptions of all the courses.

The really cool part: I authored 4 of these courses (marked with *). I've had courses included in other learning paths, but I'm excited and honored to have so many of my courses included in a single collection (plus, I know the other authors and courses, and I'm in very good company).

Happy Coding!

Tuesday, October 28, 2014

Rewriting a Legacy App - Part 4: Completing the MVP with Scheduling

It's time to wrap things up for the re-write of the legacy application (at least the immediate features). In Part 1, we looked at the existing home automation application and came up with a minimum viable product (MVP). As a reminder, here are the features that we need:

Needed for MVP:
  1. Send commands through the serial dongle
    This is the purpose of the system.
  2. Send commands for 8 devices on House Code "A"
    These are the only devices that are actively used.
  3. Fire on/off/dim events on a schedule
    To maintain the air conditioner functionality (and current lighting schedule).
In Part 2, we built a test library to send commands through the serial port to the home automation hardware (thus fulfilling requirement #1). In Part 3, we looked at generalizing the functionality so that we could send commands to various devices (thus fulfilling requirement #2).

All that's left is to create a scheduler. Sounds pretty simple, but I also needed to create some wrappers and test objects along the way.

[Update: Code download and links to the entire series of articles: Rewriting a Legacy App]

Concealing Details
As mentioned in Part 3, I wasn't really happy with the way that I left the objects. Here's how we interacted with the SerialCommander and MessageGenerator:


Rather than having the application need to know the details of how to generate the message and then call into the commander, we'll wrap things up into a single class that handles those details. Here's what I want my application code to look like:


So, I created a HouseController object that wraps up the message generation and serial port interaction. This has a SendCommand method that takes a device number and a command. From the HouseController class:


This method uses the static MessageGenerator class to get the message, and then it uses a private instance of the SerialCommander object (the "Commander" property) to interact with the serial port.
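Here's a rough sketch of what that method might look like (the DeviceCommands type and the MessageGenerator signature are assumptions based on the description):

```csharp
// Sketch of HouseController.SendCommand -- the MessageGenerator and
// Commander details are assumptions based on the description above.
public void SendCommand(int device, DeviceCommands command)
{
    string message = MessageGenerator.GetMessage(device, command);
    Commander.SendCommand(message);
}
```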

Faking Hardware Interaction
As I was going through creating these objects and moving code around, I ran into a restriction that I found to be a bit of a hassle. I needed to have the serial dongle plugged into my development machine in order for the code to run. If I ran the code without the dongle plugged in, I got the following error when calling into the SerialCommander object:


Since I was comfortable that the hardware interaction was working, I wanted to be able to continue to develop even when the hardware was not plugged in to my dev machine -- specifically, I wanted to keep it plugged in to the machine running the legacy application so that it would continue to function.

In order to do this, I created a simple interface: ICommander. Since the SerialCommander only has one publicly-exposed method, this interface is pretty simple:
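As a sketch, the interface might look something like this (the exact parameter name and type are assumptions):

```csharp
public interface ICommander
{
    void SendCommand(string message);
}
```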


With this interface in place, I could add the abstraction to the HouseController class (through the Commander property) and provide a fake implementation that didn't require the hardware.

The FakeCommander implementation is pretty simple:


It doesn't need to actually do anything, but I echo the message to the console window so that I could see that the method was getting called.
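A minimal fake might look like this (the message parameter type is an assumption):

```csharp
public class FakeCommander : ICommander
{
    public void SendCommand(string message)
    {
        // No hardware interaction -- just echo so we can see the call happen
        Console.WriteLine("Command sent: {0}", message);
    }
}
```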

Inside the HouseController class, I set up the property so that we could easily use Property Injection to swap out a fake commander:


For more information on Property Injection, check out my materials on Dependency Injection (and specifically a blog article on the Property Injection pattern).

If we do nothing (that is, we do not set this property directly), then the first time it is used, it will automatically create an instance of the SerialCommander object. This is the default behavior that we would like to have in our production environment.
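A sketch of that lazy-default Property Injection setup (the field name is an assumption):

```csharp
private ICommander commander;
public ICommander Commander
{
    get
    {
        // Default to the real hardware implementation if nothing was injected
        if (commander == null)
            commander = new SerialCommander();
        return commander;
    }
    set { commander = value; }
}
```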

But for testing, we can override the property with a fake or test implementation. Here's our updated application code that injects the FakeCommander:


Notice that after we create the HouseController object, but before we call any methods on it, we set the "Commander" property to our fake object. This lets us run our application without actually interacting with a serial port.
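A sketch of that application code (device number 5 and the console messages come from the test run described later in the article; the rest is an assumption):

```csharp
static void Main(string[] args)
{
    Console.WriteLine("Starting Test");

    var controller = new HouseController();
    // Inject the fake before calling any methods on the controller
    controller.Commander = new FakeCommander();

    controller.SendCommand(5, DeviceCommands.On);
    controller.SendCommand(5, DeviceCommands.Off);

    Console.WriteLine("Test Completed");
    Console.ReadLine();
}
```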

Scheduling Data
What we really need to do is implement a schedule. For that, we'll need to be able to read the data from a persistent location (and in the future, add some UI that will make it easy to manage this data). I decided to keep things pretty simple. I created a ScheduleItem object that represents the data that we need:


And I created a Schedule object which is a collection of ScheduleItems. In addition, this object is able to load data from a CSV file from the file system. I decided on a CSV file because it is fairly simple, and I already had code that I could borrow from another project.
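A sketch of these data objects (the property names are assumptions based on the fields described here):

```csharp
public class ScheduleItem
{
    public DateTime Time { get; set; }
    public int Device { get; set; }
    public DeviceCommands Command { get; set; }
    public bool IsEnabled { get; set; }
}

public class Schedule : List<ScheduleItem>
{
    // Populates the list from a CSV file on the file system
    public void LoadSchedule(string fileName) { /* ... */ }
}
```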

Here's what the schedule file looks like:


And here's what the loading code looks like (in the Schedule class):


There isn't any error handling here, so if the data is bad, we'll run into problems. But we're good for the MVP. We'll work on making this more robust as we need to.
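As a sketch, the loading code might look something like this (the column order in the CSV is an assumption, and -- as noted -- there's no error handling):

```csharp
public void LoadSchedule(string fileName)
{
    foreach (string line in File.ReadAllLines(fileName))
    {
        string[] fields = line.Split(',');
        Add(new ScheduleItem
        {
            Time = DateTime.Parse(fields[0]),
            Device = int.Parse(fields[1]),
            Command = (DeviceCommands)Enum.Parse(typeof(DeviceCommands), fields[2]),
            IsEnabled = bool.Parse(fields[3]),
        });
    }
}
```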

Now that we have the schedule data, we need to execute it somehow.

Executing Scheduled Commands
I added the scheduling functionality into the HouseController class. I'm just using a Timer for this (the Timer is coming from System.Timers -- this seemed to be the best for my needs):


In this code, we set up a timer that is set to fire every 30 seconds. In the constructor, we hook up the event handler and start the timer. In addition, notice the "Schedule" property. This gets instantiated when this object is constructed (and the constructor code of the Schedule loads up the data from the file -- we'll look at this in just a bit).
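A sketch of that setup (field names and the exact constructor shape are assumptions):

```csharp
using System.Timers;

public class HouseController
{
    // Fires every 30 seconds
    private Timer scheduleTimer = new Timer(30000);

    public Schedule Schedule { get; private set; }

    public HouseController()
    {
        // Loading the schedule data happens in the Schedule constructor
        Schedule = new Schedule();
        scheduleTimer.Elapsed += ScheduleTimer_Elapsed;
        scheduleTimer.Start();
    }
}
```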

When the timer fires, it runs the following code:


This code looks at the schedule items and filters the list based on items that are (1) enabled and (2) scheduled to occur within 1 minute of the current time (I'll talk about this 1 minute window in just a bit). This calculation is done in the TimeDurationFromNow helper method:


This code is dealing with "TimeOfDay" because we want to ignore the date portion of the datetime values and just look at the times. The "Duration" method will give us the absolute value of our subtraction (so we'll always end up with a positive number).
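Putting those pieces together, a sketch of the handler and the helper (method names are assumptions; the filter, the Duration call, and the debug output follow the description):

```csharp
private void ScheduleTimer_Elapsed(object sender, ElapsedEventArgs e)
{
    var itemsToProcess = Schedule
        .Where(item => item.IsEnabled &&
                       TimeDurationFromNow(item.Time) < TimeSpan.FromMinutes(1))
        .ToList();

    foreach (var item in itemsToProcess)
        SendCommand(item.Device, item.Command);

    // Debug output so we can verify the filter is running correctly
    Console.WriteLine("{0:T} - Schedule Items Processed: {1}",
        DateTime.Now, itemsToProcess.Count);
}

private static TimeSpan TimeDurationFromNow(DateTime scheduledTime)
{
    // Compare times of day only, and take the absolute value
    return (scheduledTime.TimeOfDay - DateTime.Now.TimeOfDay).Duration();
}
```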

Once we have a list of items that we need to process based on the current time, the scheduled time, and whether they are enabled, we just loop through them and call the "SendCommand" method for each one.

The last step in the timer event handler is to spit out some debug code to our console. This way we can check to make sure that our filter is running correctly.

Testing the Schedule
Testing schedulers is always fun. One way is to update the data file so that the scheduled time is in the not-too-distant future, then start the application and wait. This is not a fun way of doing things, so I cheat just a little bit.

Here's the constructor of our Schedule class:


In addition to loading the schedule from the file, I manually add 3 new schedule items for the not-too-distant future. This way, I don't have to wait very long, and I don't have to constantly update the data file.
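A sketch of that constructor (the file name and the test item details are assumptions; the 1-minute spacing matches the output below):

```csharp
public Schedule()
{
    LoadSchedule("ScheduleData.csv");

    // Temporary test records -- one each at 1, 2, and 3 minutes from now
    for (int i = 1; i <= 3; i++)
    {
        Add(new ScheduleItem
        {
            Time = DateTime.Now.AddMinutes(i),
            Device = 5,
            Command = DeviceCommands.On,
            IsEnabled = true,
        });
    }
}
```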

Now, if we run our console application (and wait several minutes), we see the following output:


This may look a bit strange, so let's walk through this. The first 4 lines are coming straight from the Main method that we saw earlier. It outputs "Starting Test", turns on device #5, turns off device #5, and then outputs "Test Completed". Everything after that is from the Timer event running.

The first time the timer "ticks", it processes one record (our first test record from above). Remember that the timer fires 30 seconds after our application starts, and our first schedule item is set for 1 minute after the application starts. Since this is within the 1 minute window that we have defined above, that first command is run.

Then we have the output that shows us that 1 schedule item was processed, that we have 17 total items in our schedule (which includes 14 items from the file and our 3 items in code), and that 8 of the schedule items are active (in our file, the "Summer" items are marked as inactive).

If we look at the next set of records (from 5:35), we see that 2 schedule items are processed. The first command is sent (again) since it is still within the 1 minute window, and the second is sent since it enters that 1 minute window.

And if we follow this through, we can see all 3 of our test records go through the process.

Why the 1 Minute Window?
So you've probably noticed that we end up sending each command 3 times (30 seconds apart). This is a result of having the 1 minute window set up in our scheduler. But why do we need this?

This is based on my experience with the hardware. What I've found is that not every command gets registered by the system. I don't know if this is because the commands aren't received or if there is temporary interference in the power lines that prevents the transmission to the module. I just know that it is not 100% reliable.

Because of this, in the legacy application, I set up this processing window so that schedule commands would be sent multiple times. In practice this isn't a problem. If I send a command to turn off the air conditioner and it is already off, then nothing happens. And this is better than a command getting "missed" and a device staying on longer than intended (or not coming on as intended).

I'll need to do some long-term testing to see if this is still needed. But since the problems are intermittent, there's no way that I'll really know until the code is running for a while.

This Completes the MVP
The code isn't pretty (but it's not ugly, either). And there are still a lot of features that I'd like to add. But this completes our minimum viable product -- that is, a product that we can put into production to replace the current legacy system. Let's quickly review:

Needed for MVP:
  1. Send commands through the serial dongle
    This is the purpose of the system.
  2. Send commands for 8 devices on House Code "A"
    These are the only devices that are actively used.
  3. Fire on/off/dim events on a schedule
    To maintain the air conditioner functionality (and current lighting schedule).
We have fulfilled all of these requirements. We can send commands through our serial dongle; we can communicate with 8 devices on house code "A"; and we can send commands based on a persistable schedule. So that's all we need!

More To Come
But even though this is all I *need* to replace the existing application, this is not all that I want from the system. I would like to have a UI to set the schedule (and enable/disable schedule sets). I also want to code up the "dimming" commands (I deferred this for the MVP since I decided it was not critical to initial implementation).

I would also like to have a network-aware interface so that I can send commands remotely. And I would also like something that would keep a record of the state of each device. This last one is a bit difficult to handle. There is no way for us to query the system, which means that we need to maintain state information ourselves. And there's the added difficulty that if someone turns a device on or off using the physical remote, there's no way for our application to know about that.

Wrap Up
This has been an interesting exercise to look at an existing application and work through implementing a minimum viable product. What I discovered was that the "minimum" really wasn't that much compared to what existed in the legacy system. There were a lot of features that aren't needed (but were nice to have) and some features that aren't used at all.

The result is a very small set of features that could be implemented very quickly. It took just a few days to get everything running. Being able to release software so quickly leaves us with a good sense of accomplishment (a usable product) and gives us motivation to keep moving forward with other features.

Running through these types of exercises is important practice for us. It gives us better perspective when we're dealing with our business users. There are a lot of things that are wanted, but we can focus in on one or two items that are really needed and implement them quickly. This makes our users happy. And ultimately, this loops back around to my musing on No Estimates and Partnering with the Business.

What goes around, comes around.

Happy Coding!


November 2014 Speaking Engagements

It's hard to believe that it's getting close to November already (I'm not sure what happened to October). I had a lot of fun at several developer events this past month, and I'm looking forward to more coming up.

Saturday & Sunday, November 15 & 16, 2014
So Cal Code Camp
Los Angeles, CA
Session List
o Abstract Art: Getting Things "Just Right"
o Dependency Injection: A Practical Introduction
o Clean Code: Homicidal Maniacs Read Code, Too!
o Learn the Lingo: Design Patterns
o Learn to Love Lambdas

I missed the LA Code Camp last year (I was already signed up to speak at another event), so I'm looking forward to it. USC is a beautiful campus, and it's great to simply sit out on the lawn at lunch time and talk with other developers. If you're in the LA/OC area, be sure to grab your developer friends and co-workers for a great weekend.

Wednesday, November 19, 2014
Northwest Valley .NET User Group
Glendale, AZ
Group Web Site
o Dependency Injection: A Practical Introduction

I'm looking forward to heading out to NWVDNUG. I haven't been there in person, but apparently I've spoken there before (they watched one of my Pluralsight courses a while back). Dependency Injection is one of my favorite topics because it's not as hard as people think it is. The simplicity of showing loose coupling and the benefits from it is awesome to see. And the user group version of the talk is a bit longer and lets us spend a bit of time in conversation.

Thursday, November 20, 2014
Southeast Valley .NET User Group
Chandler, AZ
Meetup Event
o Get Func<>-y: Delegates in .NET

I'm also looking forward to heading back to SEVDNUG (the last 2 years I was out there in July and August, so I'm looking forward to cooler weather than I've seen before). Delegates are awesome. And as I learn more about functional programming, the more I see that delegates play a key role in bringing functional concepts into C#.

I was out at the Desert Code Camp in Chandler a few weeks back, so I'm looking forward to spending some more time with folks from the Phoenix area. So, if you're in the northwest valley or the southeast valley, come on out. It's bound to be fun and informative.

As a reminder, if you'd like me to come speak at your user group or developer event, you can drop me a note or send a request through INETA: Jeremy's INETA Profile.

Happy Coding!

Monday, October 27, 2014

Parameterized Tests with NUnit

In the last article, we implemented the rules for Conway's Game of Life using Test-Driven Development. For our testing framework, we used MSTest, but we did run into a limitation -- we had to create separate test methods for each test case even when the only difference was an input value.


I like MSTest because it's easy to get started with: (1) it's there -- nothing else to install, (2) creating a test project is as easy as "File | New Project", (3) the test runner is integrated with Visual Studio, and (4) it has the "Run Tests After Build" button (in some versions of Visual Studio). This automatically re-runs affected tests when you rebuild a project.

But it does lack parameterized tests (unless you're building Windows Store Apps). So, let's use a different testing framework that *does* support this: NUnit.

Completed code can be downloaded here: http://www.jeremybytes.com/Downloads.aspx#ConwayTDD
[Editor's Note: I fixed the bad link here - it was pointing to "localhost". D'oh]

Installing NUnit
I'm going to give the "NUnit noob" instructions for getting started since I was a "noob" not long ago. A big barrier to using a different framework is "where do I start?" Fortunately, we can get things through NuGet, and it integrates fairly well with our projects.

NuGetting NUnit
We'll start by adding a new project to our solution. This will be a class library project called "Conway.Library.NUnit". We'll call the initial class "LifeRulesTests" (this is the same as we used for the MSTest project):


Now, we just need to add NUnit. For this, I'll right-click on the new project and select "Manage NuGet Packages". Then we'll search online for NUnit:


And we'll install the first 2 items that come up: NUnit (the testing framework) and NUnit.Runners (which will give us the test runner).

When we install NUnit for the current project (Conway.Library.NUnit), it automatically adds a project reference to "nunit.framework" as well as a "packages.config" that makes sure we have the right package available to the project.

The Test Runner
When we install NUnit.Runners, it creates a solution-level package configuration. It also puts the test runner into our solution folder. If we right-click on the solution and choose "Open Folder in File Explorer", we can navigate down to the "tools" folder:


That's where we'll find "nunit.exe". Run this, and we'll end up with the NUnit test runner GUI:


We'll come back to this in just a bit.

Now, you're probably thinking that having the test runner in the solution folder is not a good idea. After all, if we want to use NUnit with multiple solutions, we would end up with lots of copies of this test runner.

And you're absolutely right. For long-term usage, I would move the test runner folder to a common location that can be easily shared. But we won't worry about that for now.

Building Our First Test
To make sure that everything is working, we'll just bring over one of our existing tests. We'll use the second test that we wrote for MSTest.


The syntax is a little bit different. Notice that our class is decorated with the "TestFixture" attribute (instead of "TestClass"), and our method is decorated with "TestCase" (instead of "TestMethod").

Let's build our new project and run the test.

Running the Test
To run the test, we just need to open the assembly for our test project in the NUnit runner. For this, we'll just navigate to the bin/Debug folder of the "Conway.Library.NUnit" project and open the "Conway.Library.NUnit.dll" file:


We've got quite a big tree structure here. The top of the tree is navigating through the "dots" of our namespace: Conway.Library.NUnit. Then we have the name of the test class: LifeRulesTests. Then we have the tests themselves.

Now this looks strange because "LiveCells_TwoOrThreeLiveNeighbors2_Lives" is there twice. This will make sense once we add some parameters.

When we run the test, it passes.


Now that we have things set up, we can start to look at parameterization.

Parameterizing Tests
Here's the problem with our current test: even though the test case says "TwoOrThreeLiveNeighbors", we are only testing it with the value of "2". But instead of adding another test for the "3" case, we can simply parameterize this test.

And parameterization is just what it sounds like: we add a parameter to our test method. Here's our parameterized test with some test cases configured:


Let's walk through this. First, notice that we have a parameter for our test: "liveNeighbors". Since we have this parameter, we removed the line from the code that created and initialized the variable of the same name. This tells us that the number of live neighbors will be passed in to the test.

In order to pass in these values, we add attributes for the different test cases. Notice that our "TestCase" attribute now has a parameter. And we have 2 attributes so that we can cover each of our values.

Finally, I removed the "2" from the test name since we are using this same method for both test cases.
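A sketch of the parameterized test (the body follows the Arrange/Act/Assert pattern from the MSTest version):

```csharp
[TestCase(2)]
[TestCase(3)]
public void LiveCells_TwoOrThreeLiveNeighbors_Lives(int liveNeighbors)
{
    var currentState = CellState.Alive;

    CellState newState = LifeRules.GetNewState(currentState, liveNeighbors);

    Assert.AreEqual(CellState.Alive, newState);
}
```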

Running the Parameterized Test
Now let's build, go back to the test runner, and re-run the test:


The tree makes a bit more sense now. We have a single test method with 2 different test cases. We can see that this one test was run with both "2" and "3" as parameters.

Additional Parameterized Tests
We can do this for our other tests as well. Here's the "over-crowding" test:


This one method works for all 5 test cases.
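As a sketch, that test might look like this (same pattern, with one TestCase attribute per value):

```csharp
[TestCase(4)]
[TestCase(5)]
[TestCase(6)]
[TestCase(7)]
[TestCase(8)]
public void LiveCells_MoreThanThreeLiveNeighbors_Dies(int liveNeighbors)
{
    var currentState = CellState.Alive;

    CellState newState = LifeRules.GetNewState(currentState, liveNeighbors);

    Assert.AreEqual(CellState.Dead, newState);
}
```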

If we continue this, we end up with 6 test methods total (down from 18 originally):


This shows the tree collapsed to our 6 methods. But notice that the results say "Passed: 18". And we can expand our tree to see all of the test cases:


This shows all 18 test cases even though we only have 6 unique test methods.

Why So Many Tests?
This is way better than writing 18 tests. But does 6 still seem like a lot? It seems like we could combine the tests a bit more.

For example, if we look at the "less than 2" and "greater than 3" tests on the living cell, we see that the starting state is the same ("Alive") and the expected result state is the same ("Dead"). So maybe we could create a "LiveCell_LessThanTwoOrMoreThanThreeLiveNeighbors_Dies" test. This would have 7 test cases for a single test method.

Or we could get even more creative by adding more parameters. What if we passed in the initial state, the number of live neighbors, and the expected result state? Then we could technically have a single test method that could work for all 18 test cases.

And this is where we need to think about finding the right balance. We can't only think about "how can I write the fewest tests possible". We also need to think about how we react when a test fails. If we have clearly named tests, then when it fails, we have a good idea of what part of the code we need to fix. The tests are not simply to help us write the code, they are also to help us fix things when we break the code or when we need to make enhancements to the system.

My Reasoning
The reason that I settled on 6 tests is that the test methods correspond to the rules of our system.

A reminder of the rules:
  1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.
  2. Any live cell with two or three live neighbours lives on to the next generation.
  3. Any live cell with more than three live neighbours dies, as if by overcrowding.
  4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
For our first 3 rules (acting on living cells), we have 3 corresponding test methods. For the last rule, we do have a test for a dead cell coming back to life, but we also need a test for the negative of this case (where a cell does not come back to life).

I might reduce things down to 5 tests by combining the dead cell tests. This would be "exactly 3 live neighbors comes to life" and "not equal to 3 live neighbors stays dead". But since we have the "greater than" and "less than" values on the live cell tests, it makes sense to me to have similar methods / naming for the dead cell tests.

Wrap Up
Parameterized tests are pretty awesome. They allow us to write fewer tests when we have multiple inputs that result in the same expected output. And they make it easier for us to test different inputs. In the case of the rules for Conway's Game of Life, this is a natural fit.

We saw that NUnit allows us to easily set up parameters for our test methods and then use attributes to set up the various test cases for each test. And NUnit is not difficult to include in our project -- we can use NuGet to add the framework to our project and also bring down the test runner (if we don't already have it).

NUnit also has a lot more features. And the test runner has plenty of options, too. So, take a bit of time to explore what else is available.

But just thinking about parameterized tests should give us some ideas on how we can make our tests easier to write and navigate without an overwhelming amount of copy/paste between methods.

Happy Coding!

Sunday, October 26, 2014

TDD & Conway's Game of Life

We all need coding practice -- this is one way that we get better as programmers. Recently, I picked Conway's Game of Life (it's pretty cool when you Google it: there's an in-browser version of it). I've always been interested in this -- primarily because I find watching the patterns to be pretty mesmerizing. Check out the Wikipedia article to see some of these patterns.

I'm looking for some TDD practice, so we'll create a method that implements the rules. We won't hook up a UI to this (at least not yet), but we'll get the "business logic" working.

If you're not familiar with TDD (Test-Driven Development), we're going to follow a Red-Green-Refactor pattern. First we'll write a failing unit test (Red), then we'll write just enough code to get the test to pass (Green), and then we'll Refactor our code to make things easier to understand.

We'll use MSTest as our test platform. I like to start here because it's built in and the test runner integrates well with Visual Studio. But we'll see some limitations as we go. In a future article, we'll see that NUnit doesn't have these same limitations and lets us do some cool stuff (but wait for next time for that).

Completed code can be downloaded here: http://www.jeremybytes.com/Downloads.aspx#ConwayTDD
[Editor's Note: I fixed the bad link here - it was pointing to "localhost". D'oh]

[Note: If you'd rather watch a video, check out TDD: Don't Turn Off Your Brain.]

Conway's Game of Life
The rules for Conway's Game of Life are pretty simple:
  1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.
  2. Any live cell with two or three live neighbours lives on to the next generation.
  3. Any live cell with more than three live neighbours dies, as if by overcrowding.
  4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
We can create a method that takes in the current cell state (live/dead) and the number of live neighbors and then spits out a new state (live/dead). As a note, the rules have the British spelling "neighbours", but I'll be using the American spelling "neighbors" throughout the article.

Getting Started
To get started, I created a class library to hold our method and an MSTest project to hold our unit tests. I know that technically I'm supposed to write a failing test before writing *any* code, but I like to break rules from time to time. So, I created a starting class:


This shows us the "LifeRules.cs" file that will hold our code (again, this is just an empty class right now), and the "LifeRulesTests.cs" file that will hold our unit tests.

Let's implement our rules one by one, and we'll see how things go. For this, I'll just copy the rules into the test file:

Now let's write some tests.

Test #1
For our first test, we need to test that a live cell with fewer than two live neighbors dies. Now, I'm going to "cheat" a bit more in my TDD process. I'm going to stub out the method and parameters before writing my first test. This doesn't bother me in this case. I like the idea of building tests first, but there's a bit of practical stubbing that makes things easier.


Here are the decisions I made. First, I created a CellState enum to handle the "Alive" or "Dead" state for the current cell. Then I created a method that takes the state of the current cell along with the number of live neighbors and returns a new state.

As a starting point, we just return the current state of the cell unchanged.
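A sketch of that starting point (whether LifeRules and GetNewState are static is an assumption):

```csharp
public enum CellState
{
    Alive,
    Dead
}

public static class LifeRules
{
    public static CellState GetNewState(CellState currentState, int liveNeighbors)
    {
        // Starting point: return the current state unchanged
        return currentState;
    }
}
```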

Now that we have a little bit of a framework, it's easy to write a test for our first rule:


I like to name my tests based on a 3-part scheme recommended by Roy Osherove (who wrote The Art of Unit Testing). The first part is the unit under test -- in this case we're testing a living cell. The second part is the action we're performing -- calling our method with less than 2 living neighbors. The third part is the expected result -- the cell dies. This matches the first rule that we're testing.

[Grammatical Note: The English major in me wants to say "fewer than" since we're dealing with countable items, but the programmer in me says "less than" since that's the name of our comparison operator.]

To arrange this, we set up the current state as "Alive" and say that we have "1" live neighbor. For our action, we call "GetNewState" with these parameters. And then we check our result. In this case, we're expecting that the result will be "Dead".
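A sketch of that first test, following the naming scheme and the Arrange/Act/Assert steps described here:

```csharp
[TestMethod]
public void LiveCell_LessThanTwoLiveNeighbors_Dies()
{
    // Arrange
    var currentState = CellState.Alive;
    int liveNeighbors = 1;

    // Act
    CellState newState = LifeRules.GetNewState(currentState, liveNeighbors);

    // Assert
    Assert.AreEqual(CellState.Dead, newState);
}
```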

This test fails:


Before going any further, you're probably saying that my test doesn't really match the description. Although the test says "less than 2", our actual test case is 1. We'll loop back around to this in a bit. For now, let's get our test to pass.

Updating the Method
We're supposed to put in the bare minimum of code to get things to pass. This means that we should probably put in something like this (which tests the "1" case):


But I'm going to put in the entire rule:
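Something along these lines (a sketch based on the rule as described):

```csharp
public static CellState GetNewState(CellState currentState, int liveNeighbors)
{
    // Rule 1: a live cell with fewer than 2 live neighbors dies
    if (currentState == CellState.Alive && liveNeighbors < 2)
        return CellState.Dead;

    return currentState;
}
```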


When we're doing TDD, it's up to us to decide how big or small our chunks are. I'm comfortable biting off fairly large chunks to start off with. If I find myself getting into trouble, then I'll break things down a bit smaller.

With this in place, our test passes:

Success!

Test #2
Let's move on to the next rule: any live cell with 2 or 3 neighbors stays alive.


Just like our other test, we set up the initial state, run the method, and check the results.

Let's run the tests:


Well this is awkward. When we're doing TDD, we expect Red-Green-Refactor. This test went green immediately.

And that's okay. In our method, if we don't hit the conditional, it simply mirrors the incoming state. So our test immediately passes. We'll accept that and move on.

Test #3
For the third rule, we expect that a living cell with more than 3 neighbors will die from overcrowding.


Again, our test looks pretty similar to the others. We're just changing our input state (4 live neighbors) and the expected result (Dead).

And this test fails:


That's good. Let's adjust our code so that this passes.

Updated Code
We'll just make a few tweaks here.


We've added another conditional to check if the cell is currently alive and has more than 3 live neighbors. If this is the case, then the cell dies.

And our test now passes:


Let's move on to the last rule.

Test #4
I'm sure you're getting the idea now. The last rule states that if a cell is dead and there are exactly 3 neighbors, it comes back to life. The test is pretty easy to write:


Not much different than the other tests -- just our initial state and expected results.

And the test fails:


Updated Code
Let's fix our method so that it can handle this new condition.
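With the new conditional added, the method might look like this:

```csharp
public static CellState GetNewState(CellState currentState, int liveNeighbors)
{
    // Rule #1: a live cell with fewer than 2 live neighbors dies.
    if (currentState == CellState.Alive && liveNeighbors < 2)
        return CellState.Dead;

    // Rule #3: a live cell with more than 3 live neighbors dies.
    if (currentState == CellState.Alive && liveNeighbors > 3)
        return CellState.Dead;

    // Rule #4: a dead cell with exactly 3 live neighbors comes back to life.
    if (currentState == CellState.Dead && liveNeighbors == 3)
        return CellState.Alive;

    // Everything else drops through unchanged (covers rule #2).
    return currentState;
}
```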


We just added another conditional. And now our test passes.


Refactoring
The parts of TDD are Red-Green-Refactor. So far we've seen plenty of Red (initial failing tests) and Green (passing tests after updating code), but we haven't done any Refactoring yet.

I'm going to change our conditionals around a bit. The first two conditionals deal with changing "Alive" cells to "Dead" cells. The third conditional deals with a "Dead" cell becoming "Alive". Everything else just drops through, meaning the state remains unchanged.

Let's add a switch statement and combine some of the conditions:
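One way the refactored method could read, behavior unchanged from the three separate conditionals:

```csharp
public static CellState GetNewState(CellState currentState, int liveNeighbors)
{
    switch (currentState)
    {
        case CellState.Alive:
            // Rules #1 and #3 combined: too few or too many neighbors kills the cell.
            if (liveNeighbors < 2 || liveNeighbors > 3)
                return CellState.Dead;
            break;
        case CellState.Dead:
            // Rule #4: exactly 3 live neighbors brings a dead cell back to life.
            if (liveNeighbors == 3)
                return CellState.Alive;
            break;
    }

    // Rule #2 and all remaining cases: the state is unchanged.
    return currentState;
}
```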


I think this makes things a bit easier to follow. Not everyone may agree with that, but I like the switch with the conditionals better than the 3 separate conditionals.

We need to re-run our tests to make sure our refactoring didn't break anything:


And everything still passes.

Incomplete Tests
We've implemented all 4 of the rules, but we're not really testing all of the possible states at this point. For Test #1 (live cell with less than 2 neighbors dies), we're only testing for 1 neighbor. We really should add another test for 0 neighbors to make sure we're covering the possibilities.

In addition, for Test #2 (live cell with 2 or 3 neighbors lives), we're only testing for 3 neighbors. We should add another test for 2.

The same is true for Test #3 (live cell with more than 3 neighbors dies). We're testing for 4 neighbors, but what about the others? The way the Game of Life is set up, a cell has 8 neighbors, so we should really test for 5, 6, 7, & 8 live neighbors, too.

And for Test #4 (dead cell with 3 neighbors comes to life), we should really test the negative conditions as well: for all other states (fewer than 3 or more than 3 live neighbors), the dead cell stays dead.

That's a lot of tests.

Limitation with MSTest
Many testing frameworks have the concept of data-driven or [see update] parameterized tests -- that is, tests where we can vary the inputs without writing new tests. Unfortunately, MSTest is a bit incomplete in this regard. MSTest for Windows Store Apps does have this capability, but MSTest for Web/Desktop applications does not.

This is where I have to say "You've got to be kidding me, Microsoft." They obviously saw the value of parameterized tests, but not for all environments. (Now I do say this with a lot of love. Visual Studio is an awesome development environment. It's easy to complain about a lot of Microsoft products, but Visual Studio isn't one of them.)

NUnit is another testing framework (available via NuGet) that does offer parameterized tests, and we'll look at it in the next article. For now, we'll just implement the tests manually.

[Update 10/28: As Chris points out in the comments, MSTest does offer data-driven tests using the [DataSource] attribute. This allows you to hook up XML, CSV, SQL, or any other data source to hold test data. Here's an MSDN article, but you can find some easier tutorials online as well. With my memory jogged, I remember looking into this a few years back and not liking it because of the weird syntax to access the data inside the test methods themselves. So, I must have blocked it out. To make things interesting, data-driven tests are *not* available for Windows Store Apps (they use parameterized tests as mentioned above). I'd still like to see parameterized tests come in to all versions of MSTest so we can use the same methods across the board.]
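For reference, a hedged sketch of what that data-driven syntax looks like with a CSV source. The file name and column names here are made up for illustration, and the `LifeRules`/`CellState` names are assumptions:

```csharp
[TestClass]
public class LifeRulesDataDrivenTests
{
    // MSTest injects this property so the test can read the current data row.
    public TestContext TestContext { get; set; }

    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                "|DataDirectory|\\LifeRulesTestData.csv",
                "LifeRulesTestData#csv",
                DataAccessMethod.Sequential)]
    [TestMethod]
    public void GetNewState_DataDriven()
    {
        // The "weird syntax": every value comes out of the DataRow as an object
        // and has to be converted by hand.
        var currentState = (CellState)Convert.ToInt32(TestContext.DataRow["CurrentState"]);
        int liveNeighbors = Convert.ToInt32(TestContext.DataRow["LiveNeighbors"]);
        var expected = (CellState)Convert.ToInt32(TestContext.DataRow["Expected"]);

        Assert.AreEqual(expected, LifeRules.GetNewState(currentState, liveNeighbors));
    }
}
```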

Completing the Test Scenarios
Since we're using MSTest here, we'll have to create separate test methods for each of these cases. I won't show you all of the code, but I will show you the Test Explorer with all of these tests.


The good news is that these additional test cases pass with our existing code. For the tests themselves, I renamed things a bit: the middle part of each method name now includes a number that specifies the actual value being tested.
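As an example of that naming, one of the added tests might look like this (one method per neighbor count; names assumed as before):

```csharp
// The number in the middle of the name is the neighbor count being tested.
[TestMethod]
public void LiveCellWith0LiveNeighbors_Dies()
{
    var currentState = CellState.Alive;

    var newState = LifeRules.GetNewState(currentState, 0);

    Assert.AreEqual(CellState.Dead, newState);
}
```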

This isn't the greatest way of doing this -- it's really brute force. But it does get the job done, and it covers all 18 test cases that we need.

Wrap Up
So we reached our goal: creating a method to process the rules of Conway's Game of Life. And we used TDD to get there. But our current set of tests isn't the most efficient way of testing things.

Since we do have so many scenarios that have the same expected outcome, it's really a shame to have separate test methods. It would be better to parameterize the tests in some way. We could probably do this ourselves (if we're really creative), but it would be better to use a testing framework that supports this. Next time, we'll take a look at NUnit. This allows us to test the same 18 cases with only 6 actual tests (technically, we could have fewer tests, but we'll talk about that, too). So, come back next time to see that in action.

Happy Coding!