Wednesday, August 31, 2016

Code Coverage Should Not Be The Goal

Metrics (such as code coverage) are useful tools. They can tell us if we're headed in the right direction. But when a particular metric becomes the goal, we won't get the results that we expect.

I was reminded of this yesterday as I went through the drive-thru of a restaurant. So, it's time to take a closer look at the real problem of "metrics as a goal".

A Good Metric Becomes Useless
Yesterday, I was in the drive-thru of a local fast food restaurant. After paying at the window, the employee asked me to pull around to the front of the building, and they would bring me my food there. They have asked me to do this three times over the last couple months, so it stuck out a bit more this time.

Here's the problem: The employees at the drive-thru were being judged on the length of each transaction. The restaurant has sensors set up to see how long each car is at the window (and the shorter, the better). To get me off of the sensor, they asked me to drive around to the front of the restaurant. At this point, the employee has to walk around the counter and come outside to bring me the food.

This sounds like a good metric to check ("how long does it take to serve each customer?"). But the metric became the goal. The effect is that the employees were actually working *harder* to meet that goal. It takes them longer to walk out to the front of the restaurant (and it is work that is completely unnecessary). And this also means that it takes longer to actually serve the customer.

Because the metric became the goal, the employees were working harder to meet the metric, and the actual numbers lost their value -- the restaurant no longer knows how long it *really* takes to serve each customer.

Code Coverage as a Goal
Bosses love metrics because they are something to grab on to. This is especially true in the programming world where we try to get a handle on subjective things like "quality" and "maintainability".

Code Coverage is one of the metrics that can easily get misused. Having our code 100% covered by unit tests (meaning every line of code is executed by at least one test) sounds like a really good quality to have in our projects. But when the number becomes the goal, we run into problems.

I worked with a group that believed if they had 100% code coverage, they would have 0 defects in the code. Because of this, they mandated that all projects would have to have 100% coverage.

And that's where we run into a problem.

100% Coverage, 0% Useful
As a quick example, let's look at a method that I use in my presentation "Unit Testing Makes Me Faster" (you can get code samples and other info on that talk on my website). The project contains a method called "PassesLuhnCheck" that we want to test.

As a little background, the Luhn algorithm is a way to sanity-check a credit card number. It's designed to catch digit transposition when people type in numbers manually. You can read more about it on Wikipedia: Luhn Algorithm.

So let's write a test:
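
Something along these lines (the NUnit-style attribute, class name, and card number here are placeholders for illustration, not the exact code from the talk):

    [Test]
    public void PassesLuhnCheck_WithValidCardNumber()
    {
        // Call the method under test...
        var result = LuhnChecker.PassesLuhnCheck("4111111111111111");
        // ...and never check the result.
    }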


This test is (almost) 100% useless. It calls our "PassesLuhnCheck" method, but there are no assertions -- meaning, it doesn't check the results.

The bad part is that this is a passing test:


This doesn't really "pass", but most unit testing frameworks are looking for failures. If something doesn't fail, then it's considered a "pass".

Note: I said that this test is *almost* useless because if the "PassesLuhnCheck" method throws an exception, then this test will fail.

Analyzing Code Coverage
Things get a bit worse when we run our code coverage metrics against this code. By using the built-in Visual Studio "Analyze Code Coverage" tool, we get this result:


This says that with this one test, we get 92% code coverage! It's a bit easier to see when we turn on the coverage coloring and look at the method itself:
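
The method has roughly this shape (a sketch of a typical Luhn check wrapped in a try/catch -- the actual function from the article mentioned below may differ in the details):

    public static bool PassesLuhnCheck(string value)
    {
        try
        {
            int sum = 0;
            bool doubleDigit = false;
            // Walk the digits from right to left, doubling every second one
            for (int i = value.Length - 1; i >= 0; i--)
            {
                int digit = int.Parse(value[i].ToString());
                if (doubleDigit)
                {
                    digit *= 2;
                    if (digit > 9)
                        digit -= 9;
                }
                sum += digit;
                doubleDigit = !doubleDigit;
            }
            return sum % 10 == 0;
        }
        catch (FormatException)
        {
            // Non-numeric input ends up here -- this is the branch
            // that the first test never reaches.
            return false;
        }
    }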


Note: I didn't write this method; I took it from this article: Extremely Fast Luhn Function for C# (Credit Card Validation).

The blue represents the code that the tool says is "covered" by the tests. The red shows the code that is not covered. (My colors are a bit more obnoxious than the defaults -- I picked bold colors that show up well on a projector when I'm showing this during presentations.)

So this shows that everything is covered except for a catch block. Can we fix that?

From my experience with this method, I know that if we pass a non-numeric parameter, it will throw an exception. So all we have to do is add another method call to our "test":
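
The updated "test" looks something like this (the non-numeric string is just an arbitrary example):

    [Test]
    public void PassesLuhnCheck_WithValidCardNumber()
    {
        var result = LuhnChecker.PassesLuhnCheck("4111111111111111");
        // Non-numeric input sends execution into the method's catch block.
        // The exception is handled inside the method, so the test still "passes".
        var result2 = LuhnChecker.PassesLuhnCheck("NotANumber");
    }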


This test also passes (since it does not throw an unhandled exception). And our code coverage gets a bit better:


We now have 100% code coverage. Success! Except our number means absolutely nothing.

When a number becomes the goal rather than a guide, that number can easily become useless.

Code coverage is a very useful metric. It can tell us that we're headed in the right direction. If we have 0% code coverage, then we know that we don't have any useful tests. As that number gets higher (assuming that we care about the tests and not just the number), we know that we have more and more useful tests. We just have to be careful that the number doesn't become the goal.

Overly Cynical?
Some people might think that I'm overly cynical when it comes to this topic. But I've unfortunately seen this behavior in a couple different situations. In addition to the restaurant employees that I mentioned above, I ran into someone who worked only for the metric -- to the detriment of the customer.

Many years ago, I worked in a call center that took hotel reservations. The manager of that department *loved* metrics. Every day, she would print out the reports from the previous day and hang them up outside her office with the name at the top of the list highlighted.

There were 2 key metrics on that report: number of calls answered and average talk time. "Number of calls answered" means what you think it means: the number of calls that a particular agent answers in an hour. "Average talk time" tracked how long the agent was on the phone with each customer.

There was a particular agent who was consistently on the top of the report whenever she had a shift. But there was something that the manager didn't know: the agent was working strictly for the metrics.

This agent always took a desk at the far end of the room (away from the manager's office). Then she would only answer every 3rd call -- meaning, she would answer and then immediately hang up on 2 out of 3 customers. This got the "number of calls answered" number up -- she was answering 3 times more calls than otherwise. This got the "average talk time" number down -- 2 out of 3 calls had "0" talk time, so the average went down. Since the metrics were up, she could take extra breaks and no one would notice.

Not The Goal
So maybe I am overly cynical when it comes to metrics. But I have seen them horribly misused. We can have "short" drive-thru times while making the experience longer for the customer. We can have "100%" code coverage without actually having useful tests. We can have "short" talk time because we hang up on our customers.

Measuring is important. This is how we can objectively track progress. But when the measurement becomes the goal, we only care about that number and not the reason we're tracking it.

Use those numbers wisely.

Happy Coding!

Monday, August 29, 2016

Being Present - Mid-Year Review

At the beginning of this year, I made a commitment to Being Present at events where I'm speaking. I've been thinking about this the last couple days, so it's probably time to put down some specifics. (And yes, I know it's a bit past mid-year, but we'll just ignore that.)

Lots of Time Away from Home
In case you haven't figured it out, I really like speaking. I like helping developers take an easier path around the hurdles that I had to get over when I was learning these topics. Since January I've spoken at 22 events. These range from local user groups that are 8 miles from home to conferences that are 9 time zones away from where I live.

In all, I've spent 54 days on the road. A regular job would advertise that as 25% travel. That's a lot more than I've done in the past (and I've still got several more trips before the year is out). Fortunately, I don't have much that keeps me from traveling (the cats pretend to not get lonely), so I'm taking advantage of the opportunity while I can.

So how are things going?

Awesome Interactions
I've had a ton of awesome interactions this year. I first made it to the central time zone last year, and I've made some really good friends who make the rounds in that area.

Music City Code is freshest in my mind (since I was there a little over a week ago). It was really great to spend some time with Eric Potter (@pottereric), who I think I first met at Nebraska.Code() last year, and with Cameron Presley (@pcameronpresley), who I hung out with at Code PaLOUsa in Kentucky earlier this year. I also had some good conversations with Chris Gardner (@freestylecoder) and Heather Tooill (@HeatherTooill) -- I've seen both of them at other events, but never really sat down to talk. It was great to get to know them better.

Other people I got to know for the first time included Hussein Farran (@Idgewoo), Jesse Phelps (@jessephelps), Paul Gower (@paulmgower), and Spencer Schneidenbach (@schneidenbach).

In addition, I got to catch up with people who I know well from other events, including (but not limited to) Ondrej Balas, Justin James, Jim Wooley, Jeff Strauss, James Bender, Duane Newman, Kirsten Hunter, David Neal, Phil Japikse, and Paul Sheriff. (Sorry, I'm too lazy to include links; I know I've mentioned them in the past.)

And this is really just from Music City Code. If I look back at the other events I've been to, I've met some great people and been able to get to know them better (as an example, I met Matthew Renze (@MatthewRenze) when we shared a ride from the airport to CodeMash; we both went to NDC London; and we hung out again at KCDC). And speaking of KCDC, it was great to spend some time with Cori Drew (@coridrew) who I first met at That Conference last year, and Heather Downing (@quorralyne) who I first met at Nebraska.Code() last year and got to hang out with again at Code PaLOUsa. (More on KCDC: A Look Back at June 2016.)

This makes it sound like I only hang out with other speakers, but that's definitely not the case. I tend to spend a bit of additional time with speakers because we're often staying at the same hotel and/or looking for things to do after all the local folks go home for the night. And repeated interactions at different events reinforce these relationships.

I have great conversations with the non-speaker folks, too.

Other Interactions
I'm always surprised at the folks that I end up running into over and over at an event. At Music City Code, I had a conversation with Eric Anderson one morning, and we kept running into each other throughout the event.

At Visual Studio Live! in Austin, I ended up having dinner with Andrew, Mike, and Mike (differentiated as "Jersey Mike" and "Baltimore Mike"). None of us knew each other before the event, but we walked over to an event dinner together, ended up talking throughout the week, and even rounded things out with really excellent barbecue on the last night.

I made a ton of new friends at NDC Oslo (I mentioned just a few of them previously). CodeMash was awesome because I got to sit down with Maggie Pint to get to know her better (and you can read more about that in the NDC article).

Okay, so I'm going to stop now. I've been going through my notes from the various conferences and there are too many awesome people to mention. I've met a ton of great people. The conversations are useful because I get to hear what other people are successful with and what they are struggling with. Even if those relationships don't continue, we're still the better for having had the conversation.

And when the relationships do continue, it's a great thing.

Being Present
I credit these conversations and these relationships to "being present" at the event. I'm around during the morning coffee time before the event. I'm around during lunch time. I'm around at the breaks. I'm around for the dinners and after parties (with some caveats). And because I know that I can sleep when I get home, I try to be around for the hotel lobby conversations late in the evening.

This gives me a lot of opportunities to interact. I'm not always successful, but the more I'm available, the more conversations I have.

Stepping Out Early
I have stepped out early from parts of events. This is actually something that I put into my original commitment:
  • This also means that I will be available at the noisy, crowded receptions that clash with my introvert nature (although I reserve the right to find a small group of people to go have coffee with in a quieter location).
I don't usually last very long at receptions or after parties. As an introvert, the noise and activity are overwhelming and suck out all of my energy. So I usually try to find a group of folks where I can "anchor". Sometimes this lets me stay at the party, sometimes it means that we go off somewhere else.

For example, at CodeMash there was a reception at a bar that was *very* loud. But I managed to get into a circle of 4 or 5 people (and stay in that circle), so I was able to cope by focusing on the conversation with the people around me. I did the same thing at the KCDC party. I walked around the venue a little bit and had some good (short) conversations. But when I saw that I was running out of energy (I even stepped outside for a bit), I found a table of folks where I could "anchor". I could focus on the 5 or 6 people at the table and block out the rest of the activity.

Other events played out a bit differently. At the Music City Code party, things were extremely loud. I had a couple good conversations, but it was overwhelming. A few of us ended up going upstairs to the restaurant (which was a bit quieter) -- our group kept getting bigger as more people stepped out for a "break". I think we ended up with 6 folks having dinner. I went back down to the party for a little while to make sure I had a chance to say goodbye to folks I wouldn't be seeing again. And I ended up talking with Erin Orstrom (@eeyorestrom) about introvert & extrovert developers.

The party at NDC Oslo had a couple bands. I kind of wanted to stay for a little while to hear them, but I ran into a group of folks who were going out to dinner. Since I knew I wouldn't last long at the party, I decided to take the opportunity to go to dinner with Evelina Gabasova (@evelgab), Tomas Petricek (@tomaspetricek), and Jamie Dixon (@jamie_dixon).

I'm still working on how I can best deal with the overwhelming situations. I'd like to be present for the entirety of those, but I know that I need to take care of myself as well.

Tough Decisions
As expected, there have been some tough decisions this year. This is the first year that I've had to decline events because I was accepted at more than one for the same time period. That's a good problem to have, but I want to avoid it as much as possible. It's hard enough for organizers to select speakers and put together a conference schedule; it's even worse when one of the selected speakers can't make it.

When there are multiple events during a particular week, I've decided to submit to only one of them. This has been tough because I don't always make the right decision. I've "held" a week for a particular event (that I was pretty sure I'd get selected for), and then didn't get selected. By that time, it was too late to submit to the other event for that week. The result is that I had some gaps in my schedule that I would rather not have had. But I'm just playing things by ear at this point. I'm not sure what the "right" events are.

As an example, I would really like to be part of the first TechBash (particularly since Alvin Ashcraft (@alvinashcraft) has been such a great support in sending folks to my blog). But I held that week for another event that I had submitted to (actually 2 more local events that were back-to-back). One of those events didn't accept me; had I known that, I would have planned differently. But it also opened up an opportunity for me to do a workshop at Code Stars Summit (there's still space available), so I've got something good to look forward to that week.

It has been hard getting rejected by events that I really wanted to be at. And it's even harder when the event is happening, and I'm watching folks online talk about how awesome it is. Rejection is part of the process, though. It's normal, and it doesn't reflect on who you are as a person -- at least I keep telling myself that :)

There are some events that I went to this year (and I'm really glad that I did), but I won't be submitting again next year. These are also tough decisions. If you really want me to be at your event, contact me personally, and I'll see what I can arrange. I try to move stuff around for people who send me an invitation.

Falling into Success
I think that my decision to only submit for events that I really want to attend helps me stick with my commitment. If I'm at an event that I want to be at, then I'm more likely to be engaged and excited about it.

I've had some awesome opportunities as a speaker this year. I'm very thankful to everyone who comes to hear me speak and for those who tell me that it was useful to them. I'm looking forward to the opportunities that are still coming this year (Upcoming Events). And I'm also excited about some events that are coming up next year -- I'll announce those as soon as they are official.

In the meantime, I'm glad that I'm conscious about "being present" at the events I'm speaking at. It gives me lots of opportunities to meet new people, catch up with old friends, and expand the amount of awesome that I get from each event. And hopefully it expands the amount of awesome for the folks I talk to as well.

Happy Coding!

Monday, August 15, 2016

Recognizing Hand-Written Digits: Getting Worse Before Getting Better

I took a stab at improving some machine-learning functions for recognizing hand-written digits. I actually made things less accurate, but it's pointing in a promising direction.

It's been a long time since I first took a look at recognizing hand-written digits using machine learning. Back when I first ran across the problem, I had no idea where to start. So rather than tackling the machine learning bits, I did some visualization instead.

Then I got my hands on Mathias Brandewinder's book Machine Learning Projects for .NET Developers, and he showed some basics that I incorporated into my visualization. I still didn't know where to go from there. Recently, I've been doing some more F# exploration, and that inspired some ideas on how I might improve the digit recognizers.

To take a look at the history of the Digit Display & Recognition project, check out the "Machine Learning (sort of)" articles listed here: Jeremy Explores Functional Programming.

Blurring the Results
My first stab at trying to improve the recognizers came from reading Tomas Petricek's book Real-World Functional Programming. In the book, he shows a simple function for "blurring" an array:
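
The function goes something like this (a paraphrase of the idea; the listing in the book may differ slightly):

    // Average each element with its immediate neighbors
    // (the first and last elements are averaged with their single neighbor).
    let blurArray (arr : float[]) =
        let res = Array.copy arr
        res.[0] <- (arr.[0] + arr.[1]) / 2.0
        res.[arr.Length - 1] <- (arr.[arr.Length - 2] + arr.[arr.Length - 1]) / 2.0
        for i in 1 .. arr.Length - 2 do
            res.[i] <- (arr.[i - 1] + arr.[i] + arr.[i + 1]) / 3.0
        res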


There's a lot going on here, and I won't walk through it. But this takes an array of values and then averages each item with its neighbors.

Here's an example that creates an array of random values and then runs it through the "blurArray" function:
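
In F# Interactive, that might look like this (the array size and values are arbitrary):

    let rnd = System.Random()
    let data = Array.init 10 (fun _ -> rnd.NextDouble() * 100.0)

    printfn "%A" data                                          // original random values
    printfn "%A" (blurArray data)                              // blurred once
    printfn "%A" (data |> blurArray |> blurArray |> blurArray) // blurred three times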


If we look at the output, the first array is a set of random numbers. The second output shows the result of running it through our blur function one time.

The last result shows the result of running through the blur function three times. And we can see that the values get "smoother" (or "blurrier") with each step.

Applying Blur to the Digit Recognizer
When I saw this, I thought of the digit recognition problem. Our data was simply an array of numbers. What would happen if I ran a similar "blur" over the digit data?

Note: this code is available in the "BlurClassifier" branch of the "digit-display" project on GitHub: jeremybytes/digit-display "Blur Classifier".

I thought of this because the current algorithms are doing strict comparisons between 2 images (one pixel at a time). But if the images are offset (meaning translated horizontally or vertically by several pixels), then the current recognizers would not pick that up. If I added a "blur", then it's possible that it would account for situations like this.

Blurring the Data
Here's my function to blur the data that we have:
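
It has roughly this shape (a sketch that assumes the 28x28 pixel layout of the digit data):

    // Blur a flattened 28x28 digit image. Edge pixels are skipped to keep
    // things simple, and the center pixel is weighted 4x so it counts four
    // times as much as each of its 8 neighbors (a divisor of 12 overall).
    let blurPixels (pixels : int[]) =
        let size = 28
        let idx r c = r * size + c
        let result = Array.copy pixels
        for row in 1 .. size - 2 do
            for col in 1 .. size - 2 do
                let neighbors =
                    [ for dr in -1 .. 1 do
                        for dc in -1 .. 1 do
                            if dr <> 0 || dc <> 0 then
                                yield pixels.[idx (row + dr) (col + dc)] ]
                result.[idx row col] <-
                    (pixels.[idx row col] * 4 + List.sum neighbors) / 12
        result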


This is a bit more complex than the function we have above. That's because we're really dealing with 2-dimensional data. Each pixel has 8 adjacent pixels (including the row above and below).

I won't go into the details here. I skipped over the edges to make things a bit simpler, and I also weighted the "center" pixel so that it was averaged in 4 times more than the other pixels.

The New Distance Function
With this in place, I could create a new "distance" function:
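
It's a small composition (assuming the existing Manhattan Distance function is named manhattanDistance):

    // Blur both images, then compare them with the existing
    // Manhattan Distance calculation.
    let blurDistance (pixels1 : int[]) (pixels2 : int[]) =
        manhattanDistance (blurPixels pixels1) (blurPixels pixels2)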


This takes 2 pixel arrays, blurs them, and then passes them to our Manhattan Distance function that we already have in place. This means that we can do a direct comparison between our Manhattan Distance recognizer and our new Blur Distance recognizer.

The Results
Unfortunately, the results were less than stellar. Here's the output using our Digit Display application:


Note: When comparing the results, the numbers aren't in the same order due to the parallelization in the application. But they should be in the same general area in both sets of data.

There is both good and bad in the results. The good news is that we correctly identified several of the digits that the Manhattan Classifier got wrong.

The bad news is that there are new errors that the original classifier got right. But even with the new errors, it didn't perform any "worse" overall than the original. That tells me that there may be some good things that we can grab from this technique.

But now let's look at another approach.

Adding Some Weight
The other idea that I came up with had to do with how the "best" match was selected. Here's the basic function:
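
It goes something like this (the Observation record and field names here are assumptions for the sketch):

    type Observation = { Label : string; Pixels : int[] }

    // Find the single training item with the smallest distance
    // to the target and return its label.
    let classify (dist : int[] -> int[] -> int) (trainingSet : Observation[]) (pixels : int[]) =
        trainingSet
        |> Array.minBy (fun x -> dist x.Pixels pixels)
        |> fun x -> x.Label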


This runs the "distance" function (the "dist" right in the middle) to compare our target item against every item in the training set. In the distance calculation, smaller is better, so this just takes the smallest one that it can find.

But the "best" match isn't always the correct one. So I decided to look at the 5 closest matches and let them come to a consensus.

Note: this code is available in the "WeightedClassification" branch of the "digit-display" project on GitHub: jeremybytes/digit-display "Weighted Classification".

Here's that function:
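
Broken into steps, it looks something like this (again, the record and field names are assumptions):

    // Take the 5 closest matches and let them vote on the label.
    let weightedClassify (dist : int[] -> int[] -> int) (trainingSet : Observation[]) (pixels : int[]) =
        let closest =
            trainingSet
            |> Array.sortBy (fun x -> dist x.Pixels pixels)
            |> Array.take 5
        let counted =
            closest
            |> Array.countBy (fun x -> x.Label)
        let winner =
            counted
            |> Array.maxBy snd
        fst winner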


This has quite a few steps to it. There's probably a much shorter way of doing this, but this makes it easy to run step-by-step using F# Interactive.

Instead of pulling the smallest value (using "minBy" in the original), it gets the 5 smallest values. It looks something like this (there's some bits left out to make it more readable):


Then it counts up how many of each value. In this case, we have three 6s and two 5s. Then it pulled out the one with the most values in the list. (And 6 is correct in this case.)
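
Concretely, the counting and consensus steps work like this (using just the labels from that example):

    ["6"; "5"; "6"; "6"; "5"]
    |> List.countBy id      // [("6", 3); ("5", 2)]
    |> List.maxBy snd       // ("6", 3)
    |> fst                  // "6"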

To put this into the application, I composed the functions a bit differently to come up with a "weighted" classifier that still used the Manhattan Distance.

The results were not very good:


This actually makes things less accurate overall. But looking at these results, there are still some promising items.

First, several of the items that the standard Manhattan Classifier got wrong were correctly identified by the weighted classifier. This reinforced that the closest single match is not always the correct one.

But there were also a lot of items that this new classifier identified incorrectly. So overall, the performance was worse than the original.

More Refinement
Although this looks like a failure, I think I'm actually headed in the right direction. One thing that I can do to make this more accurate is to add a true "weight" to the calculation. Here's another example from our current approach:


If we look at these values, the distance calculations are fairly close together (within about 1500 of each other). In that case, we can pretty confidently take the value that shows up most often (which is 2 here).

But compare that to this:


Here we have a much bigger gap between our best value and our worst value (over 5000). And there is even a big gap between the first value and the next best value (over 4000). Because of this, I really want to weight the first value higher. A simple consensus doesn't work in this case (especially since we have a "tie").

So even though we get worse results with the current implementation, I think this really shows some promise.

If I can add some "weight" to each value (rather than simply counting them), I think it can improve the accuracy by eliminating some of the outliers in the data.

Wrap Up
I really like having the visualization for what the machine-learning algorithms are doing. This gives me a good idea of where things are going right and where they are going wrong. This is not something that I could get just from looking at "percentage correct" values.

These two approaches to improving the results didn't have the intended effect. But because we could see where they went right and where they went wrong, it's possible to refine these into something better.

I'll be working on adding actual weights to the weighted classifier. I think this holds the most promise right now. And maybe adding a bit of "blur" will help as well. More experimentation is needed. That means more fun for me to explore!

Happy Coding!

Sunday, August 14, 2016

Recognizing Hand-Written Digits: Easier Side-By-Side Comparison

I've been working some more on my code that recognizes hand-written digits. I've actually been trying out a few different approaches to try to improve the machine learning algorithm. But before talking about those, we'll take a look at a few improvements that I've made to the UI.

Note: Articles in this series are collected under the "Machine Learning (sort of)" heading here: Jeremy Explores Functional Programming.

New Features
I added a couple of features to the application. Here's a screenshot:


This code is available on GitHub in the "DetailedComparison" branch of the "digit-display" project: jeremybytes/digit-display "DetailedComparison". (This has been rolled into the "master" branch as well, but the code there may have been updated by the time you read this.)

Record Count
There's now an input field for the number of records to process. Previously, this was a value in the code. This makes it much easier to pick new values based on the screen size.

(Note: I just noticed the typo in the header. D'oh.)

Offset
Previously, we were only able to use records from the beginning of our data set. This offset allows us to start at an arbitrary location in our data file (we'll see why this is important in just a bit).

"Go" Button
Instead of processing things when the application starts up, we now have a button to kick things off. This also means that we can change the parameters and re-run the process without needing to restart the application.

Separate Duration
Each classifier now has its own duration timer. (The previous version just had a single timer for the entire process.)

Error Counts
This is actually the same as our previous version. But I wanted to point out that we can go through and click on the incorrect items. This changes the color of the item and increments our error count. This makes it easy to compare our two different recognizers.

This is still a manual process. We need a human to make a determination on whether the computer was correct. If I can't make out the hand-written digit, then I do not count it as an error.

Different Offsets
I wanted to have the offset available because I knew that if you keep trying to improve a recognizer against a small dataset, eventually you get really good at that small set. But that doesn't necessarily translate into being a good general-purpose recognizer.

I had done some experimentation by changing the offset in code. But having the parameter available makes things much easier. Here's an example of how different data makes a big difference:


Both of our recognizers performed significantly better with this set of data (which starts at item 500) compared to the data above (which starts at 0).

This is why it's important for us to look at different sets of data. When we start changing things, it may get better in some areas but worse in others.

Code Updates
I made a fairly significant change in the UI: I created a user control to run the recognizer and process the data. This moved a bunch of code out of the main form, and it also reduced quite a bit of the duplication.

The application displays 2 of the user controls side-by-side, but we could display 3 or more (as many as we'd like, really). The user control makes that really easy.

Here's the code behind the "Go" button:
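
It looks something like this (a sketch assuming the WPF setup; the control names, field names, and specific classifiers here are approximations, not an exact copy of the project code):

    private void GoButton_Click(object sender, RoutedEventArgs e)
    {
        // Clear out the results from any previous run
        LeftPanel.Children.Clear();
        RightPanel.Children.Clear();

        // Grab the parameters from the input fields and load the data once
        int recordCount = int.Parse(RecordCountTextBox.Text);
        int offset = int.Parse(OffsetTextBox.Text);
        string[] rawData = LoadDataStrings(recordCount, offset);

        // One user control per recognizer: header text, classifier, and data
        // (the classifier fields are placeholders for whatever recognizers
        // are configured elsewhere in the window)
        var leftControl = new RecognizerControl(
            "Manhattan Classifier", manhattanClassifier, rawData);
        var rightControl = new RecognizerControl(
            "Euclidean Classifier", euclideanClassifier, rawData);

        LeftPanel.Children.Add(leftControl);
        RightPanel.Children.Add(rightControl);
    }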


In our UI, we have 2 panels: LeftPanel and RightPanel. We start by clearing these out.

Then we grab the data. Since we have the parameters in the UI, I figured it was best to get the data in this central location (and only do it one time), and then we can pass the data to each of our user controls. The "LoadDataStrings" method grabs the data from the file based on our parameters.

Then we create 2 "RecognizerControl" objects (this is our user control). This has three parameters: (1) the string for the header, (2) the classifier (our recognizer), and (3) the data that will be processed.

We just create a user control for each of our recognizers and then add them to the panels in the UI. I'll be tweaking this part a bit in the future, but this works for us for now.

As a reminder, this code is available on GitHub in the "DetailedComparison" branch of the "digit-display" project: jeremybytes/digit-display "DetailedComparison".

Wrap Up
These changes aren't really exciting. But they do put us in a place that makes it very easy to swap out different recognizers. I originally wanted to add some drop-downs so that we could pick different recognizers, but I wanted to prioritize some other things before tweaking the UI further. That may get added in the future.

I've been playing with a couple of ideas to improve the recognizers. These have been inspired by some of the F# reading that I've been doing as well as some ideas of my own. We'll take a look at those next time.

[Update 08/15/2016: Here's the experimentation with those ideas: Getting Worse Before Getting Better.]

Happy Coding!

Tuesday, August 2, 2016

Jeremy at Live! 360 Orlando 2016

I'll be speaking at Live! 360 in Orlando, FL in December. It will be a great time. I had the chance to speak there last year, and it was a week packed with lots of great sessions, lots of great people, and a bit of fun, too.

I've got three sessions scheduled for Live! 360 Orlando in December. Check out my talks along with all the other great speakers here: Speakers - Live! 360.


If you're not convinced that this is an event you want to attend, you can get a preview by taking a look at the Live! 360 Learning Library.

If you head out there right now, you'll see me! To watch my video for FREE, just fill in your name and email address and click the "Access Now" button.


This recording is from Visual Studio Live! in Austin, TX this past May. You'll be able to see this talk (plus, tons of others) by signing up to go to Live! 360 Orlando in December.

Happy Coding!

Monday, August 1, 2016

August 2016 Speaking Engagements

I'm back on the road in August, and the rest of the year is filling in as well. If you'd like me to come to your event, be sure to drop me a line. Here are some of the things that I'm most passionate about: Presentation Topics.

Thu-Sat, Aug 18-20, 2016
Music City Code
Nashville, TN
Conference Site
o DI Why? Getting a Grip on Dependency Injection
o Unit Testing Makes Me Faster: Convincing Your Boss, Your Co-Workers, and Yourself

I'm really excited about heading to Music City Code in a couple weeks. It's my first visit to Nashville, so I'm looking forward to some new sights and the chance to share with a new group of people. I'm also looking forward to seeing some of my friends from the area.

I'm talking about two things that have been really useful to me: Dependency Injection and Unit Testing. I love talking about DI because I get to watch the light bulbs go on as people finally "get it". It's not a difficult topic; it's just that we're generally introduced to it completely backwards. When we look at it from the front, it makes a lot more sense.

I also love talking about Unit Testing. Lots of people have had bad experiences with unit testing. But if we pay attention to what we're doing, we can get some amazing benefits from it -- it's made me a faster developer.

A Full Day with Jeremy
If you've wanted to spend a full day with me, here's your chance. On September 30, I'll be conducting a full-day workshop as part of the Code Stars Summit in San Jose, CA.

Friday, Sep 30, 2016
Code Stars Summit
San Jose, CA
Workshop Site
o Getting Better at C#: Interfaces and Dependency Injection

Loosely coupled code is easier to maintain, extend, and test. Interfaces and Dependency Injection (DI) help us get there. In this workshop, we'll see how interfaces add "seams" to our code that make it easier to swap out functionality. We'll also see how DI gives us loose coupling for extensibility and testing. And this doesn't have to be complicated; just a few simple changes to our constructors and properties give us huge benefits. More...

Sign up soon to take advantage of Early Bird discounts.

Coming Soon
This fall will be pretty busy. In September, I'll be at AIM hdc in Nebraska. At the end of the month, I'll be speaking at the SouthBay.NET User Group in Mountain View, CA. And I'll also be doing a full-day workshop as part of Code Stars Summit in San Jose (as already mentioned).

October is filling up, including trips to San Jose, CA for the Silicon Valley Code Camp, to Chandler, AZ for the Desert Code Camp, and to St. Louis, MO for DevUp.

To get details and see a full list of places you can come see me, just take a look at my website: Upcoming Events.

Happy Coding!