DEVELOPER PRODUCTIVITY REPORT 2013

HOW ENGINEERING TOOLS & PRACTICES IMPACT SOFTWARE QUALITY & DELIVERY

All rights reserved. 2013 © ZeroTurnaround OÜ


TABLE OF CONTENTS

INTRODUCTION - WHO CARES ABOUT THE QUALITY AND PREDICTABILITY OF SOFTWARE RELEASES?

PART I - METRICS: HOW TO MEASURE QUALITY & PREDICTABILITY

PART II - PRACTICES: HOW THE THINGS YOU DO AFFECT QUALITY & PREDICTABILITY

PART III - TOOLS: HOW THE TOOLS YOU USE AFFECT QUALITY & PREDICTABILITY

TL;DR - LET’S CLOSE THIS OUT! SUMMARY, CONCLUSIONS & OBSERVATIONS AND A GOODBYE COMIC ;-)


INTRODUCTION: WHO CARES ABOUT QUALITY SOFTWARE AND PREDICTABLE DELIVERIES?

In truth, we all should. You might be “Developer of the Year”, but if the team around you fails to deliver quality software on time, then it pays to review your team’s practices and tools to see if that is somehow related. So that’s what we did--and guess what: the things you do and the tools you use DO have an effect on your software quality and predictability of delivery. Let’s see how...


What do we want to achieve with this report?

How do the practices, tools and decisions of development teams affect the quality and predictability of software releases? Seems a bit abstract, right?

In fact, this is one of the most frustrating aspects of talking about the quality and predictability of software releases. We all hear a lot of noise about best practices in software engineering, but a lot of it is based on anecdotal evidence and abstractions. Do we ever see much data related to the impact of so-called best practices on teams?

With our survey, the goal was to collect data to prove or disprove the effectiveness of these best practices--including the methodologies, tools and company size & industry within the context of these practices.


Our data and metrics

In the end, we collected 1006 responses, which is reasonable for a survey where all questions are required--last year over 1800 developers did at least half of our survey on tools and technologies.

Note: It seems that getting good responses to surveys isn’t easy--most people find a 2-3 question survey palatable, but beyond raw numbers it’s not easy to learn much from someone in just a few seconds. We narrowed our scope down to 20 questions, and it took our development team about 5 minutes to finish the one-page form. Still, we didn’t see a flood of respondent participation.

So what metrics did we decide to track in order to understand how best practices actually work? After ascertaining that Quality and Predictability were two areas in which data could be gathered, we continued with further analysis based on tools used (e.g. VCS or CI servers), practices employed (i.e. testing, measuring, reviewing) and industry & company size.

1. Quality of software - determined by the frequency of critical or blocker bugs discovered after release.

2. Predictability of delivery - determined by delays in releases, execution of planned requirements, and in-process changes (aka “scope creep”).

Quick note about bias: When analyzing the data, we discovered a couple of areas where bias was present. Compared to the software industry as a whole, our respondents represent a disproportionate bias towards Software/Technology companies as well as Java as a programming language.


A little history: ZeroTurnaround’s Java and Developer Productivity Reports from 2009 - Present

If you’ve been following ZeroTurnaround and RebelLabs for a while, you’ll know that this is our fourth report in as many years. It started back in 2009, when we began our quest to understand developer productivity by looking at which Java EE application servers/containers 1100+ developers were using and how much time drain from redeploys is associated with each one (we discovered that between 3-7 work weeks each year were lost to this process).

In 2011 we expanded our research efforts and this time asked approximately 1000 developers about Build Tools, IDEs, Frameworks and Java EE standards in addition to App Servers, and again asked how much of each hour was lost to restarts. We also asked ~1250 developers in India about tools and productivity, and saw some interesting differences between India and the rest of the world.

By 2012, we wanted to go even further. Our Developer Productivity Report 2012 focused on the vast array of tools & technologies that developers use each day, and looked deeper into what makes developers tick, asking about developers’ work week, stress and efficiency. Releasing this report was, in many ways, the unofficial birth of RebelLabs and the idea that high-quality, educational technical content is something we should continue to focus on.

So where does that leave us for 2013 and beyond? Issuing another report on the popularity of IDEs, Build Tools, CI servers, Web Frameworks and Application Servers was one idea--people loved our 2012 report. But would learning that Vaadin jumped 1% in popularity from 2012 to 2013, or confirming that Eclipse, Subversion, Jenkins, Spring MVC and Tomcat are still #1 in their respective fields truly be of value to the Java community as a whole?

Instead, we looked to cover the more difficult areas, looking at how tools and practices affect organizations as a whole--namely the Quality and Predictability of software releases. It’s our goal to be the premier provider of Java development productivity statistics outside of dedicated research agencies, and we’re completely transparent and honest about our data. We admit bias. We publish our raw data for your own analysis. So we set down some goals for how we would proceed.

Moving forward, let’s go to Part I, where we discuss why it’s hard to measure quality and predictability, and what we did to quantify these metrics.


PART I - METRICS: HOW TO MEASURE QUALITY AND PREDICTABILITY

“How much quality goes into your app?” is a question we did not ask, since it’s ridiculous. But the answer to that question is what we were looking to learn. How do you track down abstract measures of quality and predictability and put numbers to them? Read on...


On Judging Quality

“People forget how fast you did a job - but they remember how well you did it”
- some guy named Howard Newton

While an end user can quickly see whether your software is “good quality” or not, it’s not easy to gauge this at the development level. How can you tell if your app is high quality or low?

We felt the most honest responses would be objective, self-reported metrics, so we decided that a good measure of quality is to ask “How often do you find critical or blocker bugs after release?” This seems like a good way to judge at least minimum requirements, and it’s the key metric of quality that is most likely to negatively impact the largest group of end users.

It’s a pretty simple formula we’re looking at in this case as well: quality is 100% if no critical/blocker bugs are found post-release. So we converted plain-language answers into percentages for this question, keeping the need for assumptions as low as possible:

Do you find critical or blocker bugs after a release?
A. No - 100%
B. Almost never - 75%
C. Sometimes - 50%
D. Often - 25%
E. Always - 0%

Based on this question alone, we were able to determine a bunch of yummy stats for you, and some interesting findings. The table below tells us how much of the time software is released into production without critical or blocker bugs:

The % of time that software is released without any critical bugs
Average: 58%
Median: 50%
Mode: 50%
Standard Deviation: 19%
Laggards (Bottom 10%): 25%
Rock stars (Top 10%): 75%
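For the statistically curious, here is a minimal sketch of how stats like these can be computed from the coded answers. The answer-to-percentage mapping is the one from the survey question above; the helper name and the sample responses are hypothetical, not our raw data.

```python
import statistics

# Coding from the survey question above: plain-language answer -> quality score
QUALITY_SCORE = {"No": 1.00, "Almost never": 0.75, "Sometimes": 0.50,
                 "Often": 0.25, "Always": 0.00}

def quality_stats(answers):
    """Summarize coded quality scores like the table above (hypothetical helper)."""
    scores = sorted(QUALITY_SCORE[a] for a in answers)
    n = len(scores)
    return {
        "average": statistics.mean(scores),
        "median": statistics.median(scores),
        "mode": statistics.mode(scores),
        "stdev": statistics.pstdev(scores),
        # Rough 10th/90th percentile cut-offs for laggards and rock stars
        "laggards": scores[max(0, n // 10 - 1)],
        "rock_stars": scores[min(n - 1, n - max(1, n // 10))],
    }

# Hypothetical responses, not the report's 1006 actual replies
print(quality_stats(["Sometimes", "Almost never", "Often", "Sometimes", "No"]))
```

Population standard deviation fits here, since the table summarizes the respondent pool itself rather than estimating a larger one.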

FINDINGS AND OBSERVATIONS

1. Most respondents admitted to “Sometimes” releasing apps with bugs, so users have a 50-50 chance of the app they’re using containing a critical bug--maybe this is when you start calling it a “feature”? ;)

2. On average, nearly 60% of releases go to production critical-bug free, so let’s say great job to those folks!

3. The laggards of the group (bottom 10% of respondents) get apps out the door bug-free only 25% of the time.

4. The rock stars of the group (top 10% of respondents) enjoy releasing their apps into production without critical bugs 75% of the time.


On Judging Predictability

“Do you miss me?” - Release Dates

Not to beat a dead horse, but we already know software engineering productivity is hard to measure and impossible to compare. If quality of software is one half of the coin, then predictability of software delivery is the other. After all, you can build the world’s best software, but if you cannot get it to users--then who cares!

We see predictability as a good associative measure of productivity in software, since if you can predict your delivery well, it is likely that you understand it and can take steps to improve it.

Our thinking is that predictability is 100% if releases are delivered on time and as planned. Again, we’re looking for objective, self-reported answers for measuring predictability, so we asked in simple language three questions that could be quantified:

How late are your releases (vs initial planned time)?
A. On time or early - 0%
B. A bit - 10%
C. Moderately - 25%
D. A lot - 50%

How much of the original plans get done?
A. Everything
B. All but 10%
C. All but 25%
D. Only 50%
E. Only 25%

How much do plans change/expand during development? (scope creep)
A. None
B. 10% creep
C. 25% creep
D. 50% creep
E. 75% or more creep

In truth, predictability is more complicated to measure than the simple frequency/number of bugs, like we did for quality. We overheard some people suggest that including change in scope here is controversial, but it definitely affects your ability to predict your delivery, which is what we are trying to measure. Plus, we checked; omitting it does not change any trends!

Of course, we are looking at trends here and trying to represent them with concrete numbers, so statistical analysis of this type may impact the accuracy of absolute numbers, but not the relative trends.


Before we show you the stats, let’s see how we combined the three survey questions above into a mathematical representation of predictability:

PREDICTABILITY = (1 / (1 + % late)) x (% of plans delivered) x (1 - % scope creep)

Example: Simon’s team is pretty good at delivering on time--they only release late 10% of the time. On average, they get 75% of their development plans done, and they’ve been able to limit scope creep to just 10% as well. Based on that, we can calculate that Simon’s team releases software with 61% predictability.

MATH! (1 / (1 + 0.10)) x (0.75) x (1 - 0.10) = 0.61 = 61%

Note: This number isn’t quite a probability (it’s not normalized to 100%, but very close to that). We could normalize over delivery, but chose not to as it didn’t affect trends, but made the number harder to interpret.
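In code, the whole metric fits in one line. A minimal sketch, assuming fractional inputs; the function name is ours, and the example values are Simon’s team from above.

```python
def predictability(pct_late, pct_plans_delivered, pct_scope_creep):
    """Predictability score per the formula above; all inputs are fractions in [0, 1]."""
    return (1 / (1 + pct_late)) * pct_plans_delivered * (1 - pct_scope_creep)

# Simon's team: 10% late, 75% of plans delivered, 10% scope creep
print(f"{predictability(0.10, 0.75, 0.10):.0%}")  # -> 61%
```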

Based on the replies to these questions, we generated some data that tells us how predictable releases are for development teams.

How predictable are software releases?
Average: 56%
Median: 61%
Mode: 61%
Standard Deviation: 24%
Laggards (Bottom 10%): 17%
Rock stars (Top 10%): 82%

FINDINGS AND OBSERVATIONS

1. It looks like the industry can predict deliveries within a margin of 60%. This is in line with anecdotal data on late releases, cut features and unplanned work creeping up.

2. The rock stars reach the 80% margin, which is probably the reasonable limit in being able to predict your delivery. So shoot for 80% predictability and you won’t be disappointed.

3. There is a large spread between the rock stars and laggards, which gives high hopes that the worst and the average can improve a lot.

4. When we looked into Predictability by Industry, we saw no significant relationship, and in fact Predictability by Company Size increases slightly (3%) for larger companies. We theorize that this is due to a greater level of organizational requirements, thus more non-development staff are available to coordinate projects and release plans as teams scale.


PART II - PRACTICES: HOW THE THINGS YOU DO AFFECT QUALITY & PREDICTABILITY

Ok, you’re probably ready for some numbers and good-looking graphs now, so let’s get into it. In this section, we look at how the methodologies (i.e. these so-called “best practices”) affect the quality and predictability of software releases.


When looking at Practices, we want to know what methods and team activities work best to improve productivity for the team as a whole. So we’ll be looking into things like communication methods, testing and reviews, branching, etc. We asked questions to give us a good view on each of the following:

- Technical debt
- Code quality
- Test coverage
- Pairing up
- Code reviews
- Task estimation
- Branching
- Code testing
- Communication methods

Next you’ll find the questions as the respondents saw them, with our analysis of the responses. So let’s go!


Do you work on solving technical debt? (refactoring, performance, builds, etc.)

As we wrote in a blog post showing some early results of this survey, technical debt is a key metric in determining developer productivity. In the same way that the visible part of an iceberg often conceals a massive, hidden underbelly, technical debt cannot be measured at first glance, but has long-term, negative impacts on your software. Here we can see the breakdown of how the respondents react to technical debt.


If we compare these findings based on the measurements of 1) Predictability and 2) Quality of software releases, we can see some trends.

The effects of solving technical debt on predictability and quality

There isn’t a strong relationship with either Quality or Predictability. The only thing that can be said with any certainty is that not doing any technical debt maintenance does impact them negatively. But there isn’t a lot of difference from that point on. This is slightly controversial, and could be unrelated altogether.


Do you monitor and fix code quality problems? (e.g. with Sonar)

Code quality is often a hotly debated topic among developers. Based on the responses of over 1000 engineers, we see that half of those asked don’t fix code quality issues, and most of those people don’t even monitor code quality.


When applying this question to differences in predictability and quality, we see a clear increase in both with teams that fix code quality issues. A nearly 10-point increase in predictability comes with fixing all code quality issues--seems kind of like a no-brainer now, huh?

The effects of fixing code quality issues on predictability and quality


How much functionality (not code) is covered by automated tests?

Automating functional tests is a good way to add another layer of stability to your software and make sure that what works in dev also works in production! Here is what the respondents told us about how much automated testing is used to verify functionality.

In plainer numbers, we can see that an average of one-third (33%) of software functionality is covered by automated tests--but the standard deviation is huge, and the most common response was only 10% coverage.

The laggards literally did no automated testing (0%), and the rock stars covered 75% of functionality with automated tests. Way to go, rock stars!

How much of your functional testing is automated?
Average: 33%
Median: 25%
Mode: 10%
Standard Deviation: 30%
Laggards (Bottom 10%): 0%
Rock stars (Top 10%): 75%


When relating functional test automation with predictability and quality, we see a significant gain in both metrics as the % of (self-reported) test automation increases. It’s interesting that folks who don’t do any automated testing (0%) are better off than folks who do a little (10%). We theorize that teams who do no automated testing are probably doing more manual testing, and those who do just a little trust their automated tests more than they should. Beyond that point, we see a steady increase.

The effects of automated functional testing on predictability and quality


Do you pair up? (programming, debugging, support, etc.; ad hoc or policy)

For a page on wikiHow, this explains the benefits of pair programming remarkably well:

“Some benefits you can expect: better code (simpler design, fewer bugs, more maintainable), higher morale (more fun!), shared knowledge throughout your team (both specific knowledge of your codebase and general programming knowledge), better time management, higher productivity.”
http://www.wikihow.com/Pair-Program

While two-thirds (66%) of respondents pair up at least sometimes, one in three teams don’t.


Well, we see a significant increase in quality and a slight increase in predictability with pairing up. This is one place where we do wish we could measure productivity as well, as the main argument against pairing up is that two people working separately can do more work. We can’t disprove that, but we can put numbers behind the increase in quality.

The effects of pairing up on quality and predictability


Do you do code reviews?

A solid majority (76%) of respondents review code for at least “Some commits”. Reviewing code has become more or less a standard practice for development teams. Only a tiny sliver of about 2% of teams do multiple code reviews for all commits.


The impact on predictability of releases is very high! However, the low effect on software quality is surprising. It looks like programmers are bad at spotting bugs in code, but good at spotting software design issues and code smells.

The effects of code reviews on quality and predictability


Who is involved in the time estimation for the tasks?

When it comes to estimating the type and timeline of development tasks, we discovered a direct relationship between quality & predictability and the job role of the individual or people assigning the tasks. The results for ‘job role’ were pretty evenly distributed, so we decided to relate each job role with our metrics for predictability and quality. Check this out.

Put simply, when the person or group estimating the task is less technical, there is a decrease in the predictability and quality of the software release. Interestingly, team leads or architects have a null effect on both, whereas estimating with the team as a whole, along with the healthy discussion that entails, generates the most gains in the key metrics. If 6% and 4% gains in predictability and quality respectively are powerful enough drivers to involve the team as a whole, then here is the data to back your move.


How do you branch?

The different styles of branching can always generate a discussion, both in-house and in public forums. We asked respondents to check off the different styles they use, and surprisingly, they were pretty evenly represented. Looking at the relations to key metrics, we see little strong association either way.

The only clear trend is a slightly negative one for the single trunk and a positive one for feature branches. Go figure.


Who tests your code?

Are developers or team leads testing your code? If you have a QA/testing department, they are most likely doing it, but perhaps you should consider automating your tests based on these results.

If there ever was a good case for automation, it would be this. Automated tests showed the largest overall improvements in both the predictability and quality of software deliveries. Quality goes up most when Developers are testing the code (also discussed in Sven Peter’s talk “How to do Kickass Software Development” at GeekOut 2013), which means that you shouldn’t just leave testing to the QA team, but bake it into the development process as well. The rest of the measurements were more or less insignificant, although we don’t recommend letting your customers/users test your software for you.


How do you communicate?

We asked respondents how they communicate most often, based on a list of these choices:

- Ad-hoc conversations
- Daily standup
- Meeting several times a week
- Weekly meeting
- Email
- Chat
- Phone/Voice chat
- Teleconferencing
- Forums
- Trackers
- Wikis

Email and Ad hoc conversations were the most common selections, but there was a fairly even distribution among the rest of the choices. However, when we relate these findings with Predictability and Quality of software delivery, we see some interesting trends.

The effects of different communication methods on predictability of software delivery

With regards to Predictability, having Daily standups and Team chat seem to be the best ways to communicate, while relying on Email to communicate is even slightly detrimental to predictability.


When looking at software quality with regards to communication methods, the positive trends are not very strong--Ad hoc conversations and Weekly meetings both score a 2% increase in quality. However, the negative trends shed a little light on how teams communicate:

The effects of different communication methods on software quality

Teleconferencing naturally implies working with remote teams, which is expected to decrease quality slightly. And if you’ve ever heard the phrase “too much of a good thing”, then the law of diminishing returns should be applied to Meeting several times a week. Whereas a Weekly meeting is positive, meeting too often appears to cause “meeting fatigue” and impacts quality worse than any other communication method, according to these results.


Summary of main takeaways

- Technical debt - Most teams work on technical debt at least sometimes, but we saw no significant increases in quality and predictability of software releases; however, a negative trend appears when teams do no technical debt maintenance at all.

- Code quality - Over 50% of respondents do not fix code quality issues. We saw a significant increase in both predictability and quality in teams that regularly fix code quality issues.

- Test coverage - More than half of respondents have less than 25% test coverage. There is a significant increase in both predictability and quality as test coverage increases.

- Pairing up - 2/3 of respondents pair up at least sometimes, and there is a significant increase in software quality associated with this, with a slight increase in predictability. We have no data about increased/decreased productivity from pairing up.

- Code reviews - We see that 76% of respondents do code reviews for at least some commits, which significantly increases predictability of releases, but does little for software quality. More is better.

- Task estimation - Simply put, keep managers out of task estimation. Estimation by the team as a whole significantly increases software quality and predictability of releases, but even having individual developers do it doesn’t hurt anything.

- Branching - There was no significant effect on predictability or quality from branching, but there was a slight negative trend for the single trunk and a slight positive trend for feature branches.

- Testing code - Of all the choices, automated tests showed the largest overall improvements in both the predictability and quality of software deliveries. Quality goes up most when Developers are testing the code.

- Communication methods - The results here show that daily standups, team group chat online, and weekly meetings all improve predictability, while meeting multiple times a week and having a remote team (teleconferencing) are associated with decreased quality.


PART III - TOOLS: HOW THE TOOLS YOU USE AFFECT QUALITY & PREDICTABILITY

The previous section showed some pretty interesting results related to how practices influence the quality and predictability of software delivery. Now, let’s check out the relationship between the tools development teams use and how they affect quality and predictability. Gonna be fun on a bun!


Different teams, different motivations

In past Developer Productivity and Java EE Productivity reports, we became well-known for our reporting of tool and technology usage/popularity. But we covered a lot of this in 2012 and decided that one year wasn’t enough time to wait before reporting these statistics again. So in this section, we not only show tool and technology popularity, but also how they correspond to increases or decreases in predictability and quality. It’s possible that this has never been done before, so strap on your geek helmets and get ready for some cool stats.

Here is what we will look at:

- Popularity of tool types in use
- Increase in Predictability based on tool type usage
- Popularity of Version Control Systems (VCS) used by respondents
- Popularity of Continuous Integration (CI) servers used by respondents
- Popularity of Issue Trackers used by respondents
- Popularity of Communication tools used by respondents
- The effect of tools on predictability and quality (only tools with 10%+ popularity shown)
- Tool usage by Rock Stars (Top 10%) compared to All Respondents

For the sake of comparison, we will check these tool/technology popularity numbers against our previous Developer Productivity Report 2012 -- (SPOILER: If you don’t already, start using Continuous Integration and Version Control ASAP!)


What kind of tools do you use in development?

Nothing truly groundbreaking in this chart, but it is surprising that less than half of respondents are using text editors, since only a decade ago the question was “Vi/Vim or Emacs?”, not “Should a text editor be used?” We’ve come a long way. We can see that Infrastructure as a Service tools (e.g. Amazon Web Services) are just breaking the 10% usage area. The biggest mystery to us is that there are still people out there not using IDEs or Version Control. What the heck are you developing? Cloned hamburger meat? ;-)

Tool type usage


How does predictability change based on tool type usage?

Which technologies and tool types influence how predictable your releases can be? Believe it or not, the same relation set against quality measurements showed no significant trends--it looks like quality is affected by development practices, but not development tools.

Increase in predictability per tool type

Using Version Control and an IDE will significantly improve the predictability of your deliveries, and we see a reasonable increase in predictability for users of Code Quality Analysis, Continuous Integration, Issue Tracker, Profiler and IaaS solutions. Use of a Text editor or Debugger (surprisingly) has little or no effect on predictability.


Popularity of Version Control Systems (VCS) used by respondents

Subversion (58%) is being threatened by Git (47%) for de facto leadership of the Version Control space. Compared to our Developer Productivity Report 2012, where the ranking was Subversion - 66%, Git - 33%, CVS - 12%, Mercurial - 10%, we can see a definite trend of moving to distributed VCS.

In this report, we also gathered some numbers on smaller players like Perforce (3.8%), TFS (3.7%), ClearCase (3.4%) and a few others.


Popularity of Continuous Integration (CI) servers used by respondents

The last time we asked about Continuous Integration servers, we saw the following numbers: Jenkins (Hudson) - 49%, Bamboo - 7%, TeamCity - 5%, CruiseControl - 4%. Jenkins (Hudson) remains firmly in the leading position, and we’ve seen growth for both Bamboo and TeamCity, plus the introduction of Travis CI.


Popularity of Issue Trackers used by respondents

This is the first time we asked about Issue Trackers, but we can see that they have a reasonable effect on the predictability of software releases. Atlassian’s JIRA (57%) is the most used tool, but it’s interesting to see GitHub (21%), which is better known as a very cool VCS tool provider, coming in at #2 as an Issue Tracker. Other players that popped onto this radar include Redmine, Bugzilla, Bitbucket and Mantis, along with a half dozen smaller providers.


Popularity of communication tools used by respondents

We wanted to know which communication tools were most used by respondents--after all, we saw that communication methods can significantly affect both predictability and quality. These tools span the realm of collaboration, from household video/audio conferencing tools to tools made for software teams and grouchy coders rockin’ out on IRC.

Skype takes the lead (39.3%), but it’s good to see that Confluence (29.7%), a tool made for technology professionals, is coming in at #2. The Google consortium of Google Docs, Google Hangout & Google+ takes a nice chunk of the communication tool space as well.


The effect of tools on predictability (only tools with 10%+ popularity shown)

Probably one of the coolest parts of this analysis is that we were able to connect specific tools with predictability of releases. In general, we didn’t see many significant trends; however, it would seem that if you pick a CI server, go for Bamboo or Jenkins over TeamCity.

Among the main competitors in the Version Control space, Git clearly takes out Subversion once again. Users of Confluence show a positive trend in the area of predictability, and JRebel, that time-saving tool we’ve heard about before, shows a statistically significant increase of 8% in predictability of software delivery.

*Note: In order to maintain objectivity, we didn’t originally include JRebel in the list of tools. However, we couldn’t help ourselves and matched the emails provided by 47% of all respondents against our client list. We identified that one-third of respondents with email addresses used JRebel, two-thirds didn’t, and used their data for comparison. The results were too cool to omit :)

TOOL - EFFECT ON PREDICTABILITY
TeamCity: -2%
Google Docs: -1%
Subversion: -1%
Skype: -1%
Git: 0%
GitHub: 0%
Google Hangout: 0%
CVS: 2%
JIRA: 2%
Confluence: 3%
Bamboo: 4%
Jenkins (Hudson): 4%
JRebel*: 8%


Tool usage by rock stars (top 10%) compared to all respondents

As our final analysis, we wanted to see what tools the rock stars of this group are using. As a reminder, rock stars are respondents in the top 10% for both predictability and quality of software delivery. By seeing what these geek gurus are using in terms of specific tools, we might be able to figure out what we can do to improve. We only chose those tools that had a large enough base of users to serve for analysis.

In terms of statistically significant figures, we can only really say that the rock stars of this sample population vouch for Jenkins and don’t like Google Drive very much (we have firsthand experience of the diminishing returns of a bazillion Google Docs!).

Other trends represented include a definite move towards Git over Subversion, and a preference for Google Hangouts over Skype conversations. Atlassian products are hit or miss here as well--whereas Confluence is preferred by the rock star group, JIRA is shunned and Bamboo comes in with no discernible change.

TOOL | ROCK STARS | ALL RESPONDENTS | DIFFERENCE
Google Drive | 22.31% | 26.94% | -4.63%
Subversion | 54.55% | 58.15% | -3.60%
JIRA | 54.55% | 56.96% | -2.41%
Skype | 38.84% | 39.26% | -0.42%
GitHub | 21.49% | 21.67% | -0.18%
Bamboo | 10.74% | 10.44% | +0.30%
TeamCity | 9.09% | 7.75% | +1.34%
Confluence | 32.23% | 29.72% | +2.51%
Git | 50.41% | 47.02% | +3.39%
Google Hangout | 20.66% | 16.70% | +3.96%
Jenkins | 66.94% | 56.06% | +10.88%


Summary of main takeaways

- Tool type usage - Some people still aren’t using IDEs or Version Control; however, the all-powerful text editor dropped to below 50% usage.

- Increase in predictability of release per tool type - The biggest winners are Version Control and IDEs, but predictability also increases with the use of Code Quality Analysis, Continuous Integration, Issue Tracker, Profiler and IaaS tools.

- VCS popularity - The domination of Subversion (58%) is being threatened by Git (47%) for de facto leadership of the Version Control space. Mercurial seems to be losing ground to Git as well.

- CI popularity - Jenkins (56%) remains firmly in the leading position, followed by Bamboo (10%) and TeamCity (8%).

- Issue Tracker popularity - The space is effectively controlled by JIRA (57%) as the most used tool, but it’s interesting to see GitHub (21%) at #2.

- Communication tools popularity - Skype (39%) is the most used communication tool, but Confluence (30%), the tool for software companies, is #2. Google Docs, Google Hangout and Google+ comprise the rest of the top 5.

- How specific tools affect release predictability - In CI, take Jenkins or Bamboo over TeamCity; in VCS, select Git over Subversion; and use tools like Confluence and JRebel.

- What the rock stars do differently from the average respondent - The only trends to speak of here are that this group likes Jenkins (a lot!), doesn’t like Google Drive, prefers Git over Subversion and Google Hangout over Skype.


TL;DR - LET’S CLOSE THIS OUT!

For those of you too lazy to read the entire report, you can come to this section to see all the juiciest statistical morsels from the entire report, and the observations & conclusions we made based on the data we collected from over 1000 engineers.


Summary of overall findings and a goodbye comic :-)

PART I - METRICS: HOW TO MEASURE QUALITY & PREDICTABILITY

Neither quality nor predictability was significantly affected by industry or company size, which is good to see.

In terms of Quality, nearly 60% of releases on average go to production free of critical bugs, so those teams can feel proud. The bottom 10% get apps out the door bug-free only 25% of the time, whereas the rock stars of the group enjoy releasing their apps into production without critical bugs 75% of the time.

When it comes to Predictability, the industry can predict deliveries within a margin of 60%, which matches up with the anecdotal data on late releases, features that needed to be cut and unplanned scope creep. The rock stars get to that enviable 80% for predictability, which as we said is probably the reasonable limit in being able to predict your delivery.

PART II - PRACTICES: HOW THE THINGS YOU DO AFFECT QUALITY & PREDICTABILITY

There are certain practices that significantly influence the quality and predictability of software releases. Fixing code quality issues (up to +7% and +9% respectively) and automating tests (up to +9% and +12% respectively) are excellent practices to utilize across the board.

In terms of improving software quality alone, having developers pair up (up to +7%), allowing developers to test code (+5%) and avoiding too many meetings each week (-4% otherwise) do the most to increase quality.

In terms of predictability of releases alone, doing code reviews for commits (up to +11%) is the largest single beneficial practice, along with estimating tasks as a team (+6%); however, involving management drops it by -6%.


PART III - TOOLS: HOW THE TOOLS YOU USE AFFECT QUALITY & PREDICTABILITY

We can also see a relationship between the tools respondents use and how these tools affect predictability (quality was not significantly affected by any tool sets, prompting us to remember that quality is based mainly on practices, not tools).

The tool types that increase the predictability of releases most are Version Control (+9%) and IDEs (+8%), but Code Quality Monitoring, CI, Profiler, Issue Tracker, and IaaS solutions (up to +5% for the group) also improve release predictability. The top 3 individual tools that enhance predictability are JRebel (+8%), Jenkins / Bamboo (+4%) and Confluence (+3%).

In terms of popularity, here are the technologies used by over 50% of respondents: IDE (97%), Version Control (91%), Issue Tracker (79%), Debugger (71%) and Continuous Integration (68%).

The Top 10 tools/technologies used by respondents: Subversion (58%), JIRA (57%), Jenkins (56%), Git (47%), Skype (39%), Confluence (30%), Google Drive (27%), GitHub (22%), Google Hangout (17%) and Bamboo (10%).

Finally, we display what the rock stars of the group (those in the top 10% for quality and predictability metrics) are using. Basically, the message from these folks is: use Jenkins, look for something other than Google Drive, choose Git instead of Subversion, and opt for Google Hangout over Skype.

[Comic] Thanks XKCD! http://xkcd.com/678/


Conclusions and Observations

“To improve is to change; to be perfect is to change often.”
- Winston Churchill

STILL NO SILVER BULLET

Sorry to break it to you, but there isn’t any single practice or tool that will be a true game changer for you when it comes to improving your software quality or predictability of delivery. Teams in general should focus on a) improving various work practices, b) using the best tools available and c) continuously improving the nature of their organizational culture so that they can meet up with the rock stars hanging out in the top 10%.

RELATION != CAUSATION

We feel that it’s dangerous to think that some practice makes an organization better...it’s quite possible that it is the other way around: good organizations pick particular practices to follow! Still, it’s a good reason to try it out and see for yourselves. Looking for direction? Try following the rock stars and see if relation equals causation for you!

TEST AND MEASURE

Trying different things is the key to improving your organization. However, you need to pick your metrics carefully and measure the improvement or lack thereof. Treat it as a scientific experiment and record the results. The framework of quality and predictability is a great way to compare even different projects, and if you try one tool/practice at a time, you can compare against the rest of the organization to see improvement or lack thereof.

TOOLS MATTER

If you’re a toolmaker yourself, then you know this to be the case. Not only do good tools make you more productive, but they also serve as a focal point on which you can enforce the practices that further improve your ability to predictably deliver quality software. If you’re on a tight budget, using free stuff is much better than using nothing, but there is a tendency among developers who purchase tools to fall in love with their new toys.

More specifically, it’s time to use Continuous Integration & Version Control if you haven’t started already. You’re waaaay behind the curve. And for you Java developers out there, check out JRebel and see if it makes such a big difference for you too :)


Contact Us

Twitter: @RebelLabs
Web: http://zeroturnaround.com/rebellabs
Email: labs@zeroturnaround.com

Estonia
Ülikooli 2, 5th floor
Tartu, Estonia, 51003
Phone: +372 740 4533

USA
399 Boylston Street, Suite 300
Boston, MA, USA, 02116
Phone: 1(857)277-1199

Czech Republic
Osadní 35 - Building B
Prague, Czech Republic 170 00
Phone: +372 740 4533

This report is brought to you by:
Jevgeni Kabanov, Oliver White, Toomas Römer, Ladislava Bohacova

All rights reserved. 2013 © ZeroTurnaround OÜ
