karldmoore development

A development oriented blog, containing anything I've been working on, reading or thinking about.

Archive for January, 2009

Google Reports The Entire Internet As Malware

Posted by karldmoore on January 31, 2009

For about twenty minutes today Google reported every single website as malware. Any search within Google returned the normal search results, but every result also included a warning that the site may harm your computer. Attempting to click on a search result and progress to the actual website returned a warning page with no way of clicking through to the resulting page. Google was effectively blocking every single search result from reaching its destination.

[Screenshot: Google search results with every result flagged as potentially harmful]

I read a thorough discussion of the malware-warning behaviour many months ago, but I don’t think this is quite what Google was aiming for when it implemented the feature. I’m sure there are a great many retailers cursing at potentially lost revenue, and users who are now unsure whether a website is harmful or not. False positives are never helpful to users who are unsure about their actions at the best of times. Thankfully, the problem seems to have sorted itself out and only lasted about twenty minutes. It’s not yet apparent whether the issue has been rectified or the feature has simply been disabled altogether. It will be very interesting to see how this one is explained!

Update: It seems the problem was bigger than I thought.


Posted in Opinion | Tagged: , , | Leave a Comment »

The Rubik’s Approach

Posted by karldmoore on January 29, 2009

Hiring new staff can be a long and drawn-out process, at the end of which you hope you’ve found the right candidate. Vetting resumes, collating a list of potential candidates, telephone screening and then eventually bringing them in for an interview… so what’s the plan? This is the most important decision you’re going to make about your interview process: do you give them the list of technical questions, some example code, or should you include the Rubik’s approach?

The List of Technical Questions

The candidate is presented with a list of technical questions that start with basic questions and slowly move towards more difficult ones. These could be about language specifics, APIs they claim to know or anything technical that is related to their potential position. Anyone with a basic knowledge of development principles stands a good chance of getting a reasonable score on the basic questions. Most people can memorise answers to the general technical questions, but does that really give you an insight into their ability?

The Example Code Test

The candidate is asked to write some general-purpose code, or possibly something resembling code they might be expected to work on. Anything general should be quite straightforward for the candidate, but anything that expects them to write code against specific APIs could produce undesirable results. If the candidate has claimed to have a good working knowledge of an API they have no excuse, but if they haven’t used the API yesterday, last month or ever, should that really sway your hiring decision? Is this candidate really better or worse than the one before?

A Different Way?

Joel Spolsky keeps his criteria for hiring staff quite simple: smart, and gets things done. If we approach hiring with similarly simple criteria, development is about solving problems, and a good developer needs to excel at this regardless of their chosen language. They need a natural aptitude to understand a problem, break it down and arrive at a solution. Presenting a candidate with technical questions or example code rarely tests those natural problem-solving abilities to any great degree.

Rubik’s Research

A recent batch of company-branded merchandise contained a single Rubik’s cube. Over the course of a couple of months, the cube was passed around the office, each member of the team having differing degrees of success. One team member was a Rubik’s cube wizard, spinning and flicking the squares around to complete the puzzle in what seemed like seconds. This team member also happens to be exceptional at their job and has amazing problem-solving skills. They are not a developer, but I have no doubt that if they decided to turn their hand to it, they would be an exceptionally productive one.

Some of the other team members just couldn’t break the problem down and struggled to find the patterns that advanced the puzzle. Even after training from the Rubik’s cube wizard and written instructions on how to solve the puzzle, some team members still couldn’t progress from the jumbled mess. Some of these individuals could be classified as average (not exceptional, but not bad) developers, and this puzzle really seemed to highlight the distinction.

The Rubik’s cube is only one example of a problem-solving challenge (some would argue one of the hardest), but even when supplied with the answers it still provides a good challenge. Fans of the classic game show The Krypton Factor might already have an idea of the kind of challenges a candidate could undertake, from the impossible to the absurd. The idea here is simply that by augmenting a normal interview with a puzzle element, it may add some insight into the candidate’s approach to solving puzzles.

Conclusion

The difference between average, good and excellent developers can often be traced back to their aptitude for solving basic problems. If team members are given the solution to a problem but still can’t progress further, does this give us an insight into their general analytical approach? Problem-solving skills can be taught to some degree, but does the rest just come naturally? Is there only so much you can teach? Typical interviews often only touch on this ability and don’t look at it in a pure form.

Puzzles like the Rubik’s cube are a great way to test an individual’s problem-solving abilities, potentially putting everyone on a level playing field. These kinds of puzzles force individuals to look for patterns, understand the process and apply it; after all, isn’t that what development is all about? Next time you have a candidate in for an interview, should you include the Rubik’s approach?

Note: I have tried to find more information on this subject, but as yet I’ve found very little real research. I’d be interested to hear about the links between problem solving and programming ability.

Posted in Development, Opinion | Tagged: , , , , , , | Leave a Comment »

Frequent Code Reviews The Key To Success?

Posted by karldmoore on January 27, 2009

Reviews are a widely used technique for analysing code for the presence of defects and potential improvements. Many successful teams continually review code to try to ensure a high level of quality and to constantly improve a developer’s ability to write good code. The arguments for and against code reviews have been made on many occasions, but one common unknown for many teams is the frequency at which they should take place.

Infrequent Reviews

Many teams operate on a feature-complete code review. At the end of the cycle of work, several developers sit down together with the author to critique the delivered work. What often transpires is that, due to the huge mass of code delivered, a thorough review is rendered utterly impossible. The sheer volume of code makes it difficult to find a starting point, so potentially problematic code isn’t allocated the time it deserves. Any recommendations that come out of this kind of review may never see the light of day given the pressing work schedule (if the code was complete, why are the changes necessary?). Most importantly for the team, reviewing code so infrequently can demoralise its members.

Code reviews may pick apart the author’s code; code that they have potentially sweated and cried over (ok, maybe not) and worked hard to complete. Does it really make sense to save up all the potential criticism and deliver it in one skull-crushing blow? Even if the criticism is perfectly valid, nobody likes to feel good about something only for it to be completely torn apart. Developers complain when customers only let them know what they actually want once they see what they don’t; is the same really acceptable for code? If infrequent code reviews aren’t the answer, how often can code be successfully reviewed?

Email Reviews

Some teams never let the author of the code commit it to the repository. Instead, they email the code (typically in patch form) to the code owner or one of the principal reviewers. The idea behind this approach is that patches are always reviewed prior to being committed, and developers can work productively, focusing only on the task at hand. This process can ensure that every change is scrutinised and verified to maintain the high quality standards that have been set down. Having had to work with a system like this in the past, I can honestly say that it was fraught with problems and was one of the most frustrating I’ve ever worked with (and I was one of the principal reviewers).

The person applying the patch quickly becomes a bottleneck in the process; how long can people really wait for the patch to be applied? What should the patch reviewer actually do with the patch: make the recommended changes themselves, or send an email back to the original author to apply them? If multiple people are working on the code base, the chances of conflicts increase. If the patches are applied in the wrong order or people’s timing is just plain unlucky, the person applying the patch has a multitude of problems to deal with. The version control system is full of only one person’s name, making it incredibly difficult to track down the original owner of a change. Lastly, should the worst happen and the build fail… guess whose change it was that broke it?

Frequent Reviews

People like to know what they are doing and how they are doing with their work. If infrequent code reviews lead to a big-bang delivery approach, then frequent code reviews take the inverse approach of small nudges in the right direction. By applying these frequent reviews, developers can deal with smaller suggestions and recommendations early in their development. Instead of developers feeling they have completed something only to be told it’s all wrong, they can be guided along the way to ensure they arrive at the right answer. The most difficult question here is: how frequent is frequent? This is a very developer-specific metric.

Some developers require frequent attention, and when I say frequent I mean every few hours (or more!). As developers become more experienced, this frequency typically reduces until the reviewed eventually becomes the reviewer. Depending on the type of project, however, it’s still quite common for very experienced developers to like frequent code reviews. If frequent reviews become very frequent reviews, you might have unwittingly found yourself participating in quasi pair programming.

Pair Programming

Pair programming is not only a great way to develop but also to implicitly review code. A second developer sitting at the keyboard provides instantaneous reviewing of code. The second developer not only reviews the code, but also looks for potential problems and improvements to the evolving code. The second developer isn’t always necessarily the reviewer, and roles can switch between either developer during the exercise.

Having someone take over the keyboard encourages the development to be of higher quality, and a second set of eyes prompts the coders to produce good code (obviously given a good pairing of developers). This instant review and feedback can actually reduce the number of bugs introduced into the system. As several developers produced the code, it also means that the team is never reliant on a single developer to address a given area of code. Pair programming is an excellent way of developing and reviewing code, but it’s not without its problems, with some people finding the feedback just too frequent.

Summary

By reducing the time between code reviews, teams can provide better guidance about the eventual quality of code and avoid storing up potential problems. The less frequent the code reviews, the more problems occur (especially with less experienced staff). Developers want to feel like they are doing a good job, and as such they need small bits of constructive criticism often, instead of lots of criticism delivered all at once. Reviewing code is best addressed frequently, providing quicker feedback and reducing the amount of rework involved. As a developer’s ability increases, these issues typically subside and they become active in reviewing other people’s code.

If infrequent code reviews are problematic, are frequent code reviews the key to success?

Posted in Development, Opinion | Tagged: , , , , , , , , , , , , | 3 Comments »

Every Time A Build Fails A Fairy Dies

Posted by karldmoore on January 22, 2009

Whenever I receive an angry email from Hudson, I start to have very mixed feelings. My first emotions are pride and relief that we work with a build that catches problems and allows us to be troubled by them early in our process. My second emotions are torment and disappointment when I drill into the failure and see what has actually broken the build.

Everyone breaks the build; that’s just a fact of life. Anyone who has never broken a build either doesn’t write code, has never committed any code, works on a project without any tests, doesn’t write tests for their code, or is just a phenomenal developer the likes of which many of us have never seen and never will. The frustrating thing about build failures, however, is the ones that are so easy to avoid. The only thing more frustrating than these build failures are the reasons given by the developer who caused the failure in the first place:

  • It was only a small change
  • Another developer had reviewed the code for me
  • Someone was harassing me to check the code in
  • The build is fixed now, isn’t it?
  • Everyone else breaks the build
  • I didn’t have time to run the tests
  • I’m not sure what tests I’m supposed to run
  • Nobody else runs the tests

The common thread through all of these statements is a lack of ownership and respect for the other developers. If you are regularly checking in code that you don’t have confidence in, or haven’t run the tests against, what does that really say about your relationship with the other developers? Is this a team, or just a bunch of developers who happen to work together?

If you broke the build, you broke the build. There are no excuses, reasons or justifications required. The only thing that needs to be done is the build needs to be fixed; not in a minute, not in an hour, but now. Everyone breaks the build, but everyone must also ensure they fix the build as well. Collective code ownership applies to every part of the project and nobody should be exempt from having to clean up their own mess.

If failing builds are continually a problem, many teams adopt punishments for developers who break the build. In the past I have found most of these punishments to be counterproductive. Developers are not stupid; if there is a way to get around the punishment or to make it work in their favour, they surely will. Punishments can actually make developers commit code less frequently, push them away from the other team members or just be downright illegal. I am pretty sure nobody agreed to public humiliation when they signed their employment contracts. The aim is to change developers’ behaviour without introducing more negative problems.

The most effective treatment I have found in the past is to make build failures more of a team game. This used to be managed manually, but recently one of our team found a Hudson plug-in called the Continuous Integration Game which takes a very similar approach (and we really thought our approach was original). Team members receive points for successful commits and lose points for failing ones (although we actually use different weightings for the points given). This provides a real incentive to change habits; there is nothing like a measurable quality to make people sit up and take notice.

At the end of our two-week sprint of work, the developer that loses the game buys cookies for the rest of the team. It’s only a small gesture (around £2 for the whole team) but a symbolic one nonetheless. The team really do like cookies, and they sure don’t want to be the developer that has to buy them for everyone else (nobody likes to lose). When the next sprint starts, the game is reset and everyone looks forward to the cookies at the end of the sprint.
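The mechanics of the game are simple enough to sketch in a few lines of Java. The class below is only an illustration of the idea; the point weightings, class and method names are my own assumptions, not the plug-in’s actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the scoring idea behind the Continuous Integration Game.
class BuildGame {
    private static final int SUCCESS_POINTS = 1;   // assumed weighting
    private static final int FAILURE_POINTS = -5;  // assumed weighting

    private final Map<String, Integer> scores = new HashMap<String, Integer>();

    // Record the outcome of a developer's commit against their running score.
    void recordBuild(String developer, boolean passed) {
        int delta = passed ? SUCCESS_POINTS : FAILURE_POINTS;
        Integer current = scores.get(developer);
        scores.put(developer, (current == null ? 0 : current) + delta);
    }

    // The developer with the lowest score buys the cookies at sprint end.
    String buysTheCookies() {
        String loser = null;
        for (Map.Entry<String, Integer> e : scores.entrySet()) {
            if (loser == null || e.getValue() < scores.get(loser)) {
                loser = e.getKey();
            }
        }
        return loser;
    }

    // The game is reset when the next sprint starts.
    void newSprint() {
        scores.clear();
    }
}
```

Weighting failures more heavily than successes is what makes the game bite: one broken build costs more than a handful of clean commits can earn back.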

This game has actually injected a healthy attitude into the development team with everyone trying to avoid that failing build. Developers still regularly commit code, but the build failures aren’t as frequent as they once were. But should the worst happen and you receive that angry email, don’t forget to remind your team, every time a build fails a fairy dies!

Posted in Development, Opinion, Testing | Tagged: , , , , , , , , | 5 Comments »

Tests Are Still Code

Posted by karldmoore on January 14, 2009

Whilst performing code reviews recently, one of the major tasks was reviewing the tests accompanying the code. One of the most surprising discoveries was the lack of attention paid to the tests, and how regularly this was becoming an issue throughout the test code.

Test code is just as likely to contain bugs as the code it is supposed to be testing. After all, if tests are just code, why wouldn’t they contain bugs? The possibility of bugs could actually be higher, as test code doesn’t have tests of its own and in many teams isn’t treated with the same respect as the rest of the code. Tests must therefore be kept as simple and easy to understand as possible, and the same tools and techniques used on the rest of the code should be applied to the tests.

PMD and FindBugs are great tools to identify potential problems within code, but I have only ever seen one project that applies them to the test code. PMD/CPD is a great tool to identify duplicate code blocks, but I have never seen a project apply it to test code. If duplicate code isn’t acceptable, why should it be tolerated in test code?
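For teams that want to try this, CPD ships with an Ant task that can be pointed at the test tree just as easily as at the main source. A sketch, assuming the PMD 4.x distribution is on the classpath; the paths and token threshold here are illustrative:

```xml
<!-- Run CPD over the test sources only; pmd.jar must be on Ant's classpath. -->
<target name="cpd-tests">
  <taskdef name="cpd" classname="net.sourceforge.pmd.cpd.CPDTask"/>
  <cpd minimumTokenCount="50" outputFile="cpd-test-report.txt">
    <fileset dir="src/test/java">
      <include name="**/*.java"/>
    </fileset>
  </cpd>
</target>
```

Running it against tests like the pair below tends to flag exactly the kind of copy-and-paste assertion blocks that reviews miss.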

@Test
public void updateFirstName() {
    ...
    User updated = getUserById(expected.getId());
    assertNotNull(updated);
    assertEquals(expected.getFirstName(), updated.getFirstName());
    assertEquals(expected.getMiddleName(), updated.getMiddleName());
    assertEquals(expected.getLastName(), updated.getLastName());
}

@Test
public void updateMiddleName() {
    ...
    User updated = getUserById(expected.getId());
    assertNotNull(updated);
    assertEquals(expected.getFirstName(), updated.getFirstName());
    assertEquals(expected.getMiddleName(), updated.getMiddleName());
    assertEquals(expected.getLastName(), updated.getLastName());
}
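Duplication like this can be removed in the same way it would be in production code: by extracting the repeated assertions into a shared helper. The sketch below is self-contained plain Java rather than JUnit, and the minimal `User` class is a stand-in for whatever domain object the real tests use:

```java
// Minimal stand-in for the domain object exercised by the tests above.
class User {
    private final String firstName, middleName, lastName;

    User(String firstName, String middleName, String lastName) {
        this.firstName = firstName;
        this.middleName = middleName;
        this.lastName = lastName;
    }

    String getFirstName()  { return firstName; }
    String getMiddleName() { return middleName; }
    String getLastName()   { return lastName; }
}

class UserAssert {
    // One place to maintain the comparison; each test just calls assertUserEquals.
    static void assertUserEquals(User expected, User actual) {
        if (actual == null) {
            throw new AssertionError("actual user was null");
        }
        assertFieldEquals("firstName", expected.getFirstName(), actual.getFirstName());
        assertFieldEquals("middleName", expected.getMiddleName(), actual.getMiddleName());
        assertFieldEquals("lastName", expected.getLastName(), actual.getLastName());
    }

    private static void assertFieldEquals(String field, String expected, String actual) {
        if (expected == null ? actual != null : !expected.equals(actual)) {
            throw new AssertionError(field + ": expected <" + expected + "> but was <" + actual + ">");
        }
    }
}
```

Each test body then shrinks to a single `UserAssert.assertUserEquals(expected, updated)` call, and adding a new field to `User` means updating one helper rather than every test.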

Over time, tests will inevitably suffer from the same problems as the rest of the code. Tests become bloated, complex, duplicated and hard to understand if they are not maintained and refactored regularly, just like the rest of the code. The potential problem with refactoring tests is that you are refactoring something which does not have tests to verify your changes; testing tests is a controversial issue, though, and is probably better addressed by conducting thorough code reviews.

Code reviews seem to be an underrated technique for test code. Having spoken to many other developers, the majority of them had never participated in any test code reviews, with only a small number having used this approach on several occasions. What makes this even stranger is the fact that many of these developers do conduct code reviews on a semi-regular basis. This is somewhat anecdotal evidence, but it would be very interesting to see some real numbers on this subject. How many teams conduct thorough code reviews of tests and treat them with the same importance as the rest of the code?

Code without documentation is considered by many not to be code complete, but tests without documentation are a pretty common occurrence. Tests should be simple, which means documentation is not necessarily required, but does that same argument apply to the rest of the code? Tests can have intent-revealing names which are sufficient to describe the intended purpose of the test, and projects like TestDox can take this and turn it into readable documentation. Is this documentation sufficient? If the test fails, does it provide enough of a clue as to what was actually being tested?

In the past I have found test documentation to be extremely useful when writing integration tests. This documentation described the tests not in developer terms, but in a use case form. By writing a simple parser, it was possible to generate documentation from the tests which allowed the QA team to understand which flows had been executed. More importantly, it also allowed them to quickly understand what an individual test was doing when it failed. This documentation was written as an experiment, but it has proved extremely useful to both developers and QA when something goes wrong.
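The parser itself needn’t be complicated. As an illustration of the general technique (this is my own sketch, not TestDox’s or our parser’s actual code), splitting an intent-revealing camel-case name into words gets you most of the way to readable documentation:

```java
// Sketch of a TestDox-style transformation: an intent-revealing test method
// name becomes a readable sentence for a generated report.
class TestNameFormatter {
    static String toSentence(String methodName) {
        StringBuilder out = new StringBuilder();
        for (char c : methodName.toCharArray()) {
            if (Character.isUpperCase(c)) {
                // Start a new word at each camel-case boundary.
                out.append(' ').append(Character.toLowerCase(c));
            } else {
                out.append(c);
            }
        }
        return out.toString().trim();
    }
}
```

So `toSentence("updateFirstName")` yields "update first name", which reads naturally alongside a pass/fail mark in a report for QA.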

When writing tests, teams need to ensure they don’t neglect this code. Tests should be maintained, refactored, reviewed and measured to ensure the same quality as they would in the rest of their code. Tests are still code.

Posted in Refactoring, Testing | Tagged: , , , , , , , , , , | Leave a Comment »