
Designing Agile Surveys: Strategies to Improve Our Approach


This article provides insight into how to survey the agile community effectively, with the goal of improving the quality of our agile surveys.

My hope is that much of the advice is also pertinent for surveying other communities.

Are Agile Surveys Valuable?

Assuming that the survey is well designed, the results of an agile survey can be very valuable for people who are trying to make informed decisions about whether to adopt agile approaches, or whether to further expand their adoption efforts. Many people want to see industry data, and surveys are one way to get it. Of course, surveys aren’t the only source of information: actual experience with agile techniques also provides important insight, as does academic research and anecdotal evidence such as case studies, experience reports, and conversations. All of these sources of information have their place, and each has its advantages and disadvantages. One size does not fit all.

A common reason that people give for not filling out surveys is that they don’t feel that the information is valuable to them. I have to assume that they’re correct in their belief that they’re not getting value out of surveys. However, that doesn’t mean that others don’t see value in survey results. Furthermore, by not filling out a survey, other than saving a bit of time, all you accomplish is making it harder for your voice to be heard by senior decision makers within IT departments (few decision makers are trolling agile mailing lists trying to sniff out the occasional word of wisdom). And yes, despite all the talk about self-organizing teams within the agile community, the fact remains that senior management in your organization can and will make decisions which affect what you do and how you do it. As a community it behooves us to invest time to provide the best information that we possibly can to decision makers, and effective surveys are part of that strategy. We need to remember that there is a wealth of information available to decision makers showing that traditional strategies are effective in practice (a very good source, for example, is Capers Jones’s Applied Software Measurement, 3rd Edition), and we need to motivate senior management to start questioning some of the advice that they’re getting.

 

Designing An Effective Survey

Here are some quick thoughts based on my experiences over the years.

  1. Know the topic. If you don’t understand the topic that you’re exploring, there’s very little chance that you’ll design an effective survey about it. Do some reading on the topic first and understand what surveys have already been run regarding agile software development. Get involved with the community and identify what issues actually need to be explored.
  2. Let people opt out of questions. I typically make questions mandatory but will allow people to indicate that they don’t know the answer, that the question isn’t applicable to their situation, or simply to choose “other”. If you don’t allow people some way to opt out of a question then you run the risk that they will give the closest answer, or simply choose any answer just to move on to the next question, thereby reducing the quality of your data.
  3. Prefer to ask about observable facts over opinions. Ask questions that focus on observable facts that the respondent could realistically answer. For example, instead of asking whether a team is large, ask whether the number of team members was between 1-10, 11-20, and so on (a small sketch after this list illustrates this idea, combined with the opt-out options above). Of course you will often still want to explore people’s opinions in some questions, but always step back and ask yourself if it would be better to explore facts instead.
  4. Keep it short. People are very busy and don’t have the time to fill out long surveys. The longer the survey, the lower the chance that people will fill it out, and therefore the lower the applicability of your findings because you’ll have a small sample size. I realize that this is hard because you’re likely interested in a lot of issues, but in most cases it’s far better to explore a few targeted issues well.
  5. Explore new issues. It doesn’t make a lot of sense to cover the same ground that’s already been covered by others, unless your goal is to confirm their work (this can be important too). Instead, either try to extend our knowledge by exploring an issue in detail (for example, the DDJ 2008 Agile Adoption survey found that the majority of agile teams were doing some up-front requirements and architecture envisioning, so the DDJ 2008 Modeling and Documentation survey explored how people were going about doing so), or repeat an existing survey for a targeted group. For example, when I present results of various surveys at conferences (I give a presentation called Agile by the Numbers which I’ve given at conferences and to customers around the world), I’m often asked for the detailed numbers for a specific geographic region, such as Scandinavia or South Africa, or for a specific domain, such as banking or manufacturing. If you have access to a mailing list for a targeted group of people then it would be interesting to discover whether they exhibit different trends than the larger community does.
  6. Get help. Get some help designing your survey from people with both agile experience and survey experience (for example, get feedback from the Agile Survey Reviewers).
  7. Beta test it. Send it out to a small group of people that you know, ideally one which is a reasonable representation of the group that you’re targeting, to determine if they understand the questions that you’re asking. There’s nothing worse than finding out that you miswrote a question. For example, the DDJ 2006 Agile Adoption Survey asked whether people were doing Feature Driven Development (FDD), and many people responded that they did. Yet very few people seem to do FDD in practice, even though it’s a very effective approach; respondents indicated that they did because they were capturing their requirements in the form of feature statements (which is fine, but that doesn’t mean you’re doing FDD). The problem was that few people understood what was actually being asked and misinterpreted it to mean something else. If I had beta tested the survey first I likely would have noticed this abnormal result and hopefully addressed the problem appropriately.
  8. Invest some time to learn about survey design. Read some of the resources suggested at the bottom of this page.
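
To make points 2 and 3 above concrete, here is a minimal sketch, in Python, of how a fact-based question with bucketed answer ranges and explicit opt-out options might be defined and tallied. The question, option list, and responses are all hypothetical and aren’t tied to any particular survey tool; the sketch simply illustrates the idea.

    from collections import Counter

    # Hypothetical fact-based question: bucketed ranges instead of "Was your team large?"
    TEAM_SIZE_OPTIONS = [
        "1-10",
        "11-20",
        "21-50",
        "51+",
        "Don't know",        # opt-out: the respondent genuinely doesn't know
        "Not applicable",    # opt-out: the question doesn't apply to their situation
    ]

    # Hypothetical raw responses, as they might come back from a survey tool.
    responses = ["1-10", "11-20", "1-10", "Not applicable", "51+", "1-10", "Don't know"]

    # Tally every option, including the opt-outs, so you can see how often
    # respondents couldn't or wouldn't answer -- useful quality information in itself.
    counts = Counter(responses)
    total = len(responses)
    for option in TEAM_SIZE_OPTIONS:
        n = counts.get(option, 0)
        print(f"{option:>15}: {n} ({n / total:.0%})")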

 

Publishing Your Results

My advice is to:

  1. Make the source questions available. People should see what questions were asked, how they were asked, and in what order they were asked. This puts the results into context and enables people to identify any biases that you may have introduced through your wording. For all of the surveys that I run I make a PDF of the survey available online.
  2. Make your analysis available. It should be as easy as possible for people to learn about the important findings of your survey, or at least what you think is important. For all the surveys that I run I make a PowerPoint presentation file available that people can reuse in their own presentations, with proper attribution, and I often include graphic images of some results which I share on my site (usually I’m using the graphics in an article somewhere online).
  3. Make the source data available. This enables people to analyze the data for themselves; they don’t have to trust your analysis (which may also have introduced bias). For all the surveys that I run I make a CSV file of all the source data, with the exception of identifying information (due to privacy concerns), available online (a short sketch below illustrates how a reader might work with such a file). Many surveyors will not make their survey data available because they see it as a competitive resource that they shouldn’t share. My philosophy is that once I’ve used the data for my own purposes, usually to write an article about what’s going on within the IT community, I might as well share it with others and hopefully enable them to gain some value too.

Personally, I don’t trust any survey results when the surveyor doesn’t do all three of these things.
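
As an aside, one benefit of publishing the raw data as a CSV file is that anyone can repeat or extend the analysis. The following minimal sketch, in Python using only the standard csv module, shows how a reader might load such a file and tally the answers to a single question. The file name and column name are made up for illustration; substitute whatever the published survey actually uses.

    import csv
    from collections import Counter

    # Hypothetical file and column names -- substitute the real ones from the published survey.
    SURVEY_FILE = "agile_adoption_survey.csv"
    QUESTION_COLUMN = "team_size"

    counts = Counter()
    with open(SURVEY_FILE, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            answer = (row.get(QUESTION_COLUMN) or "").strip()
            if answer:  # skip rows where the question was left blank
                counts[answer] += 1

    total = sum(counts.values())
    for answer, n in counts.most_common():
        print(f"{answer}: {n} ({n / total:.0%})")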

 

Known Challenges With Surveys

I would be remiss if I didn’t discuss some of the known challenges with surveys. In addition to design and publishing-related difficulties, there are also a few inherent challenges with surveys which can be difficult to overcome:

  1. You will only get responses from people willing to be surveyed. The opinions of people not willing to be surveyed are important too. 😉 The bottom line is that this is one aspect of selection bias.
  2. You risk getting responses from people with strong feelings about the topic. Even the title of a survey can contribute to this problem, which is one of the reasons why I now run “State of the IT Union” surveys: a fairly generic title that doesn’t reveal what the specific topic is. These newer surveys also address several topics, rather than a single theme, so as to reduce the respondent drop-out rate.
  3. Very often questions capture opinions, not facts. This is perfectly fine as long as the results are presented as opinion (which can be difficult to do sometimes). For example, the 2009 Agile Practices Survey explored how people are adopting agile practices. It’s fair to indicate that certain practices are believed to be more effective than others, but it wouldn’t be fair to state that some practices are more effective than others (this is something better left to more specific research). However, recognize that it is possible to ask factual questions, such as the length of time that respondents have been working in IT, their age, and so on (yes, they may still choose to misrepresent the information).
  4. The biases of the communities will be reflected in the results. People form communities for a reason. For example, people join the TDD mailing list because they’re interested in TDD and probably even trying to learn TDD. My 2008 Test Driven Development Survey was sent out to that list because I wanted to explore what they were actually doing in practice. Because this community is biased towards TDD they wouldn’t be a good source of information about TDD adoption rates, but they would be a potentially good source of information about how people are actually doing TDD in practice. This is why I indicate who each survey went out to, so that you can determine what selection bias may have been introduced.
  5. It’s circumstantial evidence. There are other approaches for gathering data, such as ethnographic research, that can provide higher-quality results. These approaches are expensive and time consuming, and are usually performed by university researchers instead of industry practitioners such as myself. Having said that, there is clearly room for the type of research that I and others are performing via surveys. Yes, it would be wonderful to have incredible amounts of empirical evidence about the efficacy of various software development strategies. Yes, it would be wonderful to have better information than what I’m able to provide. However, the information provided by this work proves to be sufficient, or dare I say just barely good enough (JBGE), to answer many important questions that people have. Perfection is the enemy of good enough.

 

What Does the Agile Community Think About All These Surveys?

It’s really easy to run a survey using online tools such as Survey Monkey, so a lot of people do. This wouldn’t be such a bad thing if the surveys provided value, were designed well, and the results were properly published. However, this often isn’t the case, and as a result fewer people choose to fill out online surveys because they feel that their time is being wasted (and sadly it often is).

Common anti-patterns with agile surveys:

  1. Misguided students. A common problem with “agile surveys” occurs when university or college students are given an assignment to do some research pertaining to agile development: they often put together a survey covering topics that others before them have already surveyed, or they explore issues which reflect traditional (not agile) strategies to development. The students have the best of intentions, but due to lack of experience, and often lack of support from their already overworked professors, they execute the survey poorly. These surveys have almost no hope of finding out pertinent information and inadvertently make it harder for everyone else because they annoy the people they’re hoping to survey, reducing the likelihood that those people will respond to future surveys. My advice to students is to seek some help designing your survey, both from your professors and teaching assistants as well as from the agile community.
  2. Thinly disguised marketing. Every so often a survey is sent out which is nothing more than a marketing gimmick for a consultant or product vendor. My advice is to recognize that you’re not fooling anyone and that, worse yet, all you’re likely to accomplish is turning people off whatever it is you’re trying to sell.

 

Better Approaches Than Surveys

Yes, surveys clearly aren’t ideal. For example, ethnographic research, where the researcher(s) spend months and sometimes years directly observing people, is clearly more effective. And more expensive and time consuming. Many researchers will start with a survey to help them identify potential candidates to talk to and then interview them to obtain more details as to what they’re actually doing in practice. This is also more expensive and time consuming (and fraught with opportunity for the researcher’s bias to creep in unnoticed).

Every so often I run into someone who has a negative opinion about surveys. OK, everyone is entitled to their opinions, and as I indicated earlier surveys clearly aren’t perfect. Worse yet are poorly designed surveys, and there are a lot of those out there. Fair enough. But I find that there is a significant difference in the quality of the conversation when the person is a researcher who has real experience trying to actually do research in the IT space versus someone who has never done so. The experienced researchers definitely hope for better, but at the same time they are often impressed by how I was able to get the data that I did (and yes, they often leverage that data to guide their own work). The inexperienced people tend to have unrealistic expectations, wanting better quality results (fair enough) without realizing what it actually takes to get those results. Naturally they rarely have done any research themselves, nor are they willing to do so (can’t say I blame them), are not able or willing to allow researchers to come into their teams to explore what’s happening, and rarely have any viable suggestions for doing better (and sometimes they even complain that I didn’t look into topic X, sigh). Interestingly, when asked where they can find better quality information around whatever topic my survey explored, they often admit that my material is amongst the best they’ve found. It’s about this point in the conversation that I ask what type of cheese they would prefer to have with their wine.

Survey-based research isn’t perfect. That’s why I’m as careful as I can be: I reach out for help when designing a survey, and I am 100% open about the design and results. As I described earlier, I always share the questions exactly as asked, describe how I got my sample set, and share the data exactly as answered. I am 100% open because I have nothing to hide. Personally, I am skeptical about any research where the researchers don’t work in a 100% open manner. Since I started this in 2006, I’m glad to say that I’ve seen a growing trend in the IT community towards this sort of approach.

Of course there are much worse sources of information than survey results. Anecdotal evidence, for example “My experience is that 65% of developers eat junk food every day for lunch”, is an interesting observation at best. Rumours, for example “I heard an expert say that three quarters of programmers eat junk food every meal”, are even worse. The IT industry is replete with oft-repeated rumours that seem to have little basis in fact but that “everybody knows.”

 

Suggested Resources

How to Measure Anything by Douglas W. Hubbard