Test-Driven Development is a Software Engineering practice gaining increasing popularity within the software industry. Many studies have been done to determine the effectiveness of Test-Driven Development, but most of them evaluate effectiveness according to a reduction in defects. This kind of evaluation ignores one of the primary claimed benefits of Test-Driven Development: that it improves the design of code. To evaluate this claim of Test-Driven Development advocates, it is important to evaluate the effect of Test-Driven Development upon object-oriented metrics that measure the quality of the code itself. Some studies have measured code in this manner, but they generally have not worked with real-world code written in a natural, industrial setting. Thus, this work utilizes Open Source Software as a sample for evaluation, separating Open Source projects that were written using Test-Driven Development from those that were not. These projects are then evaluated according to various object-oriented metrics to determine the overall effectiveness of Test-Driven Development as a practice for improving the quality of code. This study finds that Test-Driven Development provides a substantial improvement in code quality in the categories of cohesion, coupling, and code complexity.
The key point is that there are reasons for Uncle Bob’s approach to katas. Rather than getting to the point directly, I’m going to begin with a couple of rambling digressions, as per my usual style. If you are tl;dr (too lazy; don’t read much) then feel free to skip ahead a bit. Specifically, skip ahead to “Code Kata — How It Started.”
After waterfall failure upon failure, even the government has decided to make Agile development a priority. Some people have a hard time believing this, so I want to walk you through what happened in one department, the biggest one there is: the Department of Defense. Back in 2009, someone inserted this section into the 2010 Defense Acquisition Bill.
Dan North's recent blog post on software craftsmanship has unleashed a lot of blog discussions (which I summarize below, if you're interested). There's a lot in there, but one of his themes particularly resonated with me, hence this post.
Maybe it’s all the time I spend with startups, but while I strongly value Scrum’s ideas of self-organizing teams and continual feedback, I can’t help but feel Kanban represents the next level of agility, giving us more flexibility and capitalizing on the lessons we’ve learned from Lean.
Paul Dolman-Darral over at Value Flow Quality recently published a list of The Top 20 Most Influential Agile People. This is a great list of established Agile Bloggers, many of whom are book authors. But these people get all of the attention.
I believe estimation, and the way it’s regularly misused, is at the root of the majority of software project failures. In this article I will try to explain why we’re so bad at it, and also why, once you’ve fallen into the trap of trying to do anything but near-future estimation, no amount of TDD, Continuous Delivery or *insert your latest favourite practice here* will help. The simple truth is we’ll never be very good at estimation, mostly for reasons outside of our control. However, many prominent people in the Agile community still talk about estimation as if it’s a problem we can solve, and whilst some of us are fortunate to work in organisations that appreciate the unpredictable nature of software development, most are not.
A lot of managers don’t think twice about the everyday things that slow down developers…
• Slow or unstable development machines
• Delays from business folks in clarifying requirements
• Little to no feedback on the product until it’s already "done"
• Loss of focus from repeated interruptions