I’ve always tried to avoid bug tracking tools, and I have several times deleted their entire contents (all reported bugs) to great success. However, that’s not what I want to talk about here; if this sounds weird to you, go check out Gojko’s post on the subject.
What I want to talk about here is one issue that came up during my team’s retrospective meeting on Friday. One of the improvement points was bugs. Not that there were too many, or that we should find more, but that some bugs were fixed and then reappeared as bugs again at a later stage. The suggested solution was: instead of having the tester pull the buggy story from test back into dev, he would write a test (integration or unit) that replicates the bug and check it into source control.
Since we’re doing Kanban, what would the effect of this be? It would effectively stop the “assembly line”: CI reports a failing test, and the team stops what it is doing to fix it.
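To make the idea concrete, here is a minimal sketch of such a bug-replicating regression test. It’s in Python with invented names (`parse_quantity` and the whitespace bug are hypothetical, purely for illustration); the real test would be written against whatever code actually contained the bug:

```python
def parse_quantity(text):
    """Hypothetical production code: parse an order quantity from user input.

    The (invented) bug: surrounding whitespace used to crash parsing.
    """
    return int(text.strip())  # .strip() is the fix the regression test protects


def test_quantity_with_surrounding_whitespace():
    """Regression test replicating the reported bug.

    Checked into source control: if the bug ever reappears,
    CI fails and the whole line stops until it is fixed again.
    """
    assert parse_quantity(" 3 ") == 3
```

The point is not the test itself but where it lives: once it is in source control, the bug can never silently come back, because reintroducing it breaks the build.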
This is of course something we’ll adopt for all bugs, not just bugs that get reported twice. I’ll leave it up to you to figure out all the benefits (and drawbacks, if any), as well as why this solves the problem at hand. I’m looking forward to seeing how this works out in practice.
If everything goes as planned, NNUG Bergen will have two great speakers in August: Dan North from ThoughtWorks and Christian Weyer from Thinktecture! Is that a great lineup or what?! And the best part… it’s FREE!
Currently the plan is to have Dan North on stage the 25th of August and Christian Weyer the 27th (Monday and Wednesday). Here’s a bit about the two speakers and what they’re going to talk about:
Dan is a principal consultant with ThoughtWorks, where he writes software and coaches teams in agile and lean methods. He believes in putting people first and writing simple, pragmatic software. He believes that most problems that teams face are about communication, and all the others are too. This is why he puts so much emphasis on “getting the words right”, and why he is so passionate about behaviour-driven development, communication and how people learn. He has been working in the IT industry since he graduated in 1991, and he occasionally blogs at dannorth.net.
At NNUG Dan is going to talk about The relationship between Domain-Driven Design and Behaviour-Driven Development.
Christian is co-founder of Thinktecture, a European software development support company. He has been modeling and implementing distributed applications with Java, COM, DCOM, COM+, Web Services and other technologies for many, many years. Recently his focus has been on the ideas and concepts of service orientation and their practical application in customer projects, with Windows Communication Foundation (WCF) and Windows Workflow Foundation (WF) as the two main technologies applied. In particular, the natural marriage of WF and WCF currently has his attention.
Christian’s talk will be about WCF, but other than that he’s quite open to suggestions. I’m thinking it would be interesting to hear about why we should move from Asmx to WCF and the benefits (and any drawbacks) we get from that move. What do you want to know about WCF? Drop me a comment and we’ll see what we can do… Be quick though, we need a decision soon.
Lately I’ve been trying to focus on how to make testing work better at our company. We’ve fully integrated our testers into the Scrum teams, but there are still some things I feel are missing, especially when it comes to the tools and frameworks we use for testing. One of the things I’m looking into is Fit (Framework for Integrated Test), created by Ward Cunningham in 2002, and FitNesse. I first got an introduction to Fit in Nils Christian Haugen’s presentation at JavaBin back in March. That got me very excited, but I haven’t had time to look into it properly until now.
In essence, Fit is a framework that lets the story tests (or acceptance tests) for your user stories be automatically verified. You do this by using a table structure (as shown on the left) to specify input values and expected outcomes. This is a very nice way of working with tests from a customer perspective: anyone can understand it after a short introduction to how it works.
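To illustrate the table idea (this is not Fit’s actual API, just a Python sketch of the same input/expected-output checking), here is the kind of table a customer might write, verified row by row the way Fit checks table cells:

```python
def divide(numerator, denominator):
    """Hypothetical system under test."""
    return numerator / denominator


# A Fit-style table: input columns followed by an expected-output column.
# In Fit the customer writes this in a document (or a FitNesse wiki page).
division_table = [
    # numerator, denominator, expected quotient
    (10, 2, 5.0),
    (12.6, 3, 4.2),
    (100, 4, 25.0),
]


def run_table(table):
    """Verify every row; return the failures (Fit would color those cells red)."""
    failures = []
    for numerator, denominator, expected in table:
        actual = divide(numerator, denominator)
        if abs(actual - expected) > 1e-9:
            failures.append((numerator, denominator, expected, actual))
    return failures
```

In real Fit a fixture class maps the table’s columns onto the system under test; the sketch only mimics the row-by-row checking that makes the tables executable.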
Much as with unit tests, this process is automated. The preferred way of authoring unit tests is Test-Driven Development (TDD); similarly, with Fit you can use Story Test-Driven Development (STDD). I find this way of working very interesting and hope to try it out live soon. Hopefully I can post more articles on this later, when I have some actual experience with it :-)