Hacker News
Design/code reviews in a startup
13 points by johnnyice213 on March 20, 2008 | hide | past | favorite | 18 comments
I'm working with 5 other developers in a startup-like environment (within a large defense company actually... go figure!). Until now our code base has been small enough that we've been able to get by without design and code reviews. However, our code base is growing as is our team.

There are several formal processes for doing this (Fagan inspection, over-the-shoulder, tool-assisted, etc.). A developer at MS even came up with a lightweight inspection process (url: http://www.stickyminds.com/getfile.asp?ot=XML&id=7152&fn=XDD7152filelistfilename1%2Epdf).

The problem I see with a lot of these processes is that they interrupt your workflow and decrease productivity. I don't want to take an hour out of every other day to go through these reviews, but would it be too much to spend half a day every other week doing them? (And what would be the best day for this -- Friday?)

I think it would be great if we could get a discussion going about what others in similar situations have tried, and what works and what doesn't.

Thanks all!!



Every formal code review I ever sat through was a painful experience that went on for longer than planned, covered way less code than planned, resulted in minor improvements at best and had zero long-term impact on the quality of the codebase.

Of the many reasons for this, I'll mention one: by the time a code review takes place, the key design decisions surrounding that part of the system have typically already been made. To do a retroactive redesign during a meeting is out of the question; that would be more work than writing the original code in the first place. So all people end up doing is scrutinizing tiny things like null checks. I'm not saying those don't matter; of course they do, but they're only a small part of making good code.

So the code-review approach boils down to saying "do the big stuff on your own and look over the small stuff together". That seems backwards to me.

(By the way, I'm talking about commercial projects. There's an entire literature around formal inspections, as you know, and good results have reportedly been obtained with them, but if you want to go that route you're talking about heavy-duty process control, slowing development to a glacial pace that is unworkable for most commercial software.)

I've personally observed two things that do work. First, pair programming. It's not for everybody (the community here, for example, is largely derisive of it), and I'm not arguing for it in general. But there's no question that it does what code reviews are supposed to (but don't).

The other thing I've observed to work is to write the code individually but review it on checkin. That's how we do it on my current project. We'll either watch the repository and look over changes as they are committed, or email a patch and ask for review. This kind of review isn't as efficient or as comprehensive as writing the code together in the first place, but it does allow major problems to be caught and dealt with quickly (at the cost of some rework).


Unfortunately I work for a big company, and our team has a ridiculous 150+ designers/developers - how anything gets done is beyond me.

Also unfortunately, it fell upon me to review as much of the code produced as possible for some time. It wasn't a horrible job, but it opened my eyes to just how green some of these developers were, and to how useful code reviews can be.

I never attempted to organise traditional 'sit-down' reviews - it wouldn't have been a good use of people's time (and 90% of the team are in India!). I just checked out a particular module, scanned it for red flags and repeated code, and tried to build up an understanding of what it was supposed to do.

Doing that caught many design flaws and simplified many chunks of spaghetti code, not to mention abuse of the database (bad queries, unnecessary queries, etc.), so I think it was a good use of time.

Catching design flaws at this post-implementation review stage is not really productive, however, as it's too expensive to fix them.

My solution to that was to do whiteboard sessions with the designers, then ask someone to write up the outcome in a couple of pages to be quickly reviewed by the other members at that meeting. Doing that was essential - the number of times misunderstandings were caught before a code editor was even opened was amazing!

So basically, I think that code reviews have their place, but my feeling is that a sit-down meeting is generally overkill. I also reckon that they are much more important (perhaps exponentially so) in a large team with many potentially mediocre programmers than in a startup with several much more competent hackers!

I have never tried pair programming, but I know a lot of people are fans of it - I can see how it would be useful in a big corporation when someone is implementing someone else's vision, but I struggle to see how it would work in a small startup. In a startup, surely people would be trying things and molding the vision in one's head until something useful comes out? I certainly hope to give it a try someday to see how it goes ...


Where I work we use a simple version of Scrum.

Following the sprint review at the end of every 2-4 week sprint, each developer either gives a short demo of the new features or changes they completed or a short review of their most significant code changes. Those who focused on features generally prefer to give demos, while those who focused on bug fixes usually ask for quick reviews of just the most important changes. In total these meetings can run a few hours, but they only happen on average once or twice a month, so they don't interrupt the daily workflow.

We mostly just use our Trac wiki for reviews but I've been thinking of suggesting something like http://www.review-board.org when we get bigger.


A couple arguments in favour of inspections, Fagan-style:

- Although it takes a brutal-seeming amount of time, many teams get a net time savings thanks to earlier bug detection.

- It's a learning experience for everyone involved, so there is a deeper preventative aspect. This can be especially valuable for the new members of your team as you grow.

If you do inspections, and find just one bug that would have been nasty down the line, it's easy to emotionally understand the benefits. Without that experience it's hard to be convinced that it really works. Then again, you feel awful when you go through a 2-hour inspection with four people and don't find anything severe.

Be careful with Fridays, especially in the afternoon. In my experience meetings of any sort don't go well at that time. I'd suggest other weekdays around 10am or 2pm. These seem to be ideal for some reason.

As for frequency of inspections, it depends on how much code you're churning out. I personally think it's reasonable to only inspect particularly challenging, complex, difficult-to-test or critical aspects of the system.


What sort of problems are you trying to catch?

Niggling little bugs or major design flaws? Both?

Code reviews, as this brutal group process done after the code is committed, are wrong IMO. It's better to talk out the design before any code is written, or circulate a design note. (Not a full design document; the only good design docs are written after the code.)

As for catching the niggling little bugs, or generally improving your approach in the small, pair programming and check-in review work well.

Check-in review is simply a convention that someone else has to read your diff before it's checked into the repo. You can make that an informal or formal part of your check-in process. It works really well; not only does it catch mistakes but it puts the incentive on the reviewee to write readable code, so they won't have their colleague complaining they don't understand what's going on.

That may sound like a lot of work but it's not. Once everyone gets to the point where they know how to write readable code, the check-in review takes just a few minutes out of each developer's day.
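To make the convention concrete, here is a minimal sketch of how a check-in review rotation might be automated - a script picks the next reviewer round-robin (never the author) and drafts a review-request email. The team names, message format, and function names are all hypothetical illustration, not any particular tool's API:

```python
# Sketch of a round-robin reviewer picker for a check-in review convention.
import itertools

TEAM = ["alice", "bob", "carol", "dave"]

def reviewer_cycle(team):
    """Yield reviewers round-robin, forever."""
    return itertools.cycle(team)

def pick_reviewer(cycle, author):
    """Return the next reviewer in the rotation who isn't the author."""
    for candidate in cycle:
        if candidate != author:
            return candidate

def review_request(author, reviewer, revision):
    """Format a plain-text review request for the chosen reviewer."""
    return (f"To: {reviewer}\n"
            f"Subject: please review r{revision} by {author}\n"
            f"Reply with comments or 'looks good' before it ships.")

cycle = reviewer_cycle(TEAM)
print(review_request("alice", pick_reviewer(cycle, "alice"), 1042))
```

In practice you'd wire something like this into a post-commit hook or a mailing list, but the point is only that the convention needs almost no machinery.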


Design and code reviews sit far toward the slow-and-formal-and-clunky end of a spectrum whose other end is pair programming, but both practices have the same aims and the same results:

1) fewer bugs (peer reviews / pair programming is the most efficient known method to reduce the number of bugs, by far - testing is a distant second)

2) better spread of knowledge around the team

It's important to realise that they're the same tool, just twisted into a different shape. Pair programming is essentially taking the peer review concept and making it continuous rather than applying it at a regular interval.

In a startup environment, I'd recommend pair programming rather than code reviews, because it tends to provide a greater return, and it feels more nimble and fresh.

To ease yourself into it, you could start with pair programming days once a week, and see how people like it...

Daniel


I've been in your position with a team that's quadrupled (including international staff). In the teams I've worked in, it tends to work on trust: if you've committed a lot of code and are senior, then your code probably isn't going to be reviewed. If you're still earning your stripes, you get a more detailed design review and an over-the-shoulder look before commit and/or a formal review after module completion.

I tend to think you have to trust your team, so I've tended, even for junior guys, to operate in permissive mode (I've yet to revoke someone's commit privileges). Small teams have a great advantage in that they can communicate more efficiently and often deliver quicker. If your team is growing and you can't trust (or grow to trust) what devs are committing, then you're going to have longer-term problems. Code reviews to me are more about mentoring and growing new staff, not enforcing 'good' code. The first question is always: how could you have done this better? Why did you do it this way? Smart teams self-correct, so getting people to think about these things themselves tends to mean they don't repeat mistakes.

What really works is automating your success conditions, especially dumb stuff like formatting and dumb bug checking. Use continuous integration: in our dev team, a check-in gets run through all the tests, including a full UI regression every night, so if it makes it through that then it's already reasonably good. CruiseControl integrated with .NET and Java tools like Checkstyle and FindBugs, plus turning warnings on as errors in the build, means the simple dumb stuff just doesn't happen. If someone consistently breaks the build or checks in broken stuff, then a switch to review-before-commit works. Breaking a build in our teams sends an email to the 'list of shame' (all the devs on the team), so people tend to avoid breaking it. We also don't allow any build to stay broken for more than 24 hours (usually < 1 during business hours). It's all about tradeoffs, and if it's a good, tightly knit team then code reviews tend to become less important.
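The warnings-as-errors idea can be sketched as a tiny build gate: scan the tool output and fail if anything is flagged. The output lines here are made up for illustration; real tools (compilers, Checkstyle, FindBugs) each report in their own formats:

```python
# Minimal sketch of a "warnings are errors" build gate: fail the build
# if any line of tool output reports a warning or an error.

def build_passes(tool_output_lines):
    """Return True only if no line reports a warning or an error."""
    flagged = [line for line in tool_output_lines
               if "warning" in line.lower() or "error" in line.lower()]
    return len(flagged) == 0

clean = ["Compiling Foo.java", "Build OK"]
dirty = ["Compiling Foo.java", "Foo.java:12: warning: unused variable x"]

print(build_passes(clean))   # True
print(build_passes(dirty))   # False
```

In a real build you'd get the same effect with compiler flags (e.g. javac's -Werror), but the principle is the same: the machine enforces the dumb stuff so reviews don't have to.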

In a distributed dev team, when working remotely or getting close to release, I've found Review Board really good: http://www.review-board.org/ People commit to the continuous integration system; if it breaks they fix it, and they upload a diff or patch to Review Board. People get emailed and can manage their time more effectively - they pick the best time for reviews themselves. A 'ship it' from any of our senior team means it's done and dusted.


You could try a cool method of random inspection. Have a daemon that listens for submissions to your tree. With a random chance (you determine probability) have it email the contextual diff (maybe even highlighted full diff) and the changelist description to a random developer.
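The random-inspection idea above can be sketched in a few lines: on each commit, flip a weighted coin; on heads, pick a random developer (never the author) to receive the diff. The team names, probability, and function names are hypothetical illustration:

```python
# Sketch of random commit inspection: sample a fraction of commits and
# assign each sampled one to a random non-author reviewer.
import random

DEVELOPERS = ["alice", "bob", "carol", "dave"]
INSPECT_PROBABILITY = 0.2   # fraction of commits that get reviewed

def maybe_pick_inspector(author, rng=random):
    """Return a random non-author reviewer, or None if this commit is skipped."""
    if rng.random() >= INSPECT_PROBABILITY:
        return None
    candidates = [d for d in DEVELOPERS if d != author]
    return rng.choice(candidates)

rng = random.Random(7)   # seeded so the sketch is reproducible
for revision in range(5):
    inspector = maybe_pick_inspector("alice", rng)
    if inspector:
        print(f"r{revision}: mail diff to {inspector}")
```

A real daemon would hang this off a post-commit hook and attach the contextual diff, but the sampling logic is all there is to it; the nice property is that nobody can predict which commits get looked at.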


We use Crucible:

http://www.atlassian.com/software/crucible/

It's all very informal and the reviewer can do the review in their own time; it works very well.


Why not make it informal - just ask some other developer to give you feedback on your code?


Try pair programming (unofficial, informal, real-time code reviews+collaboration)


Good standards make code reviews trivial.

Well-defined, zero-tolerance guidelines virtually eliminate the need for code reviews. Variable naming, standard routines, syntax, how to do iteration, branching, common functions, etc. should all be outlined clearly, agreed upon, and adhered to.

I prefer to look at it this way...

Code Review: No one gets out of this room alive until this is right (whatever that means).

Standards Compliance: Wanna play? Good. Follow the rules.

Oh, and please don't give me that age old argument about how standards limit creativity; if this small price for quality software is too limiting for you, then go play by yourself.


Code reviews should be much more to do with medium-to-big ideas, not small details. Small details should be trivial to a good hacker.

So: "You've got that wired in backwards", "Tim built one of those over here", "When we implement foo, that'll bar" & "That's a security hole because ..." are good.


We have coding standards. The developers are good about following them, and some coding standards problems can even be caught automatically by static analysis tools.

But coding standards don't prevent logic errors, or requirements misunderstandings. Even the best developers make those kinds of mistakes and we catch a significant fraction of them during code reviews.

The other major benefit of consistently doing code reviews is learning. The attendees often pick up useful new techniques. And you spread knowledge of the code base around the team, which reduces the organization's dependence on any one developer and also brings out opportunities for code reuse.


Good standards make code reviews trivial.

If that were so, then you could standardize the production of good code.


I'm totally with you, but bear in mind that standards documents (especially within small start-up teams) tend to be built slowly over time, and reviewed and modified when necessary. Code reviews are great for establishing and tweaking standards when none exist.


Yeah, variable naming standards are nice for consistency and I take it "common functions" is a euphemism for "don't cut and paste," but do you really want to dictate 'how to do iteration' in a way that's more limiting than the implementation language's idioms?


> Variable naming, standard routines, syntax, how to do iteration, branching, common functions

You're seriously wasting time if your code reviews could be done by a batch of regular expressions and a cron job, a la http://search.cpan.org/dist/Perl-Critic/lib/Perl/Critic.pm

I always thought the point was to keep things well factored: find hard to spot duplication or inconsistencies in disparate areas of the code, point out a library/api that could have done the task better, etc.



