How to quickly determine the health of a project


Often when presented with a new project, you may be asked to quickly evaluate the shape it’s in. So what should you look out for? Here is a list I use to quickly determine the state of a project. For each section, the potential scenarios you’re likely to encounter are ranked from best to worst.

1. What do the git commits look like?

  • The commit messages use a consistent format (such as conventional commits). You can get a good idea about what was done by reading the commit subject. The commit message (if needed) succinctly explains the purpose of the commit. The commit is atomic and covers only one discrete task (an example of such a commit follows this list).
  • The commit message uses a consistent format, but it’s not completely clear what has been done just by reading the subject line. The message body could have explained what was done, but it’s missing. The changes are mostly focused on one task, but also include other minor changes (typo fixes, removing dead code, etc…) that are not directly related to the task itself.
  • There is no consistent message format. The message subject is purely perfunctory and doesn’t tell you anything (“fixed some bugs”). There is no message body, ever. The commit applies all sorts of changes, most of which have nothing to do with one another. Cherry-picking this codebase is an exercise in frustration.
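
To make the best case concrete, a commit written in the conventional commits style might look something like this (the scope and wording are made up purely for illustration):

```
fix(auth): expire password reset tokens after 24 hours

Reset tokens were valid indefinitely, so old links could be reused.
Tokens now carry a creation timestamp and are rejected once they are
older than 24 hours.
```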

2. How easy is the project to set up and get running?

  • There is a README with installation instructions. You only need the .env file and a single command to get started (see the sketch after this list). The application works.
  • There is no README, or it is outdated, but the project at least follows conventions. You need multiple commands to get the project running. The application mostly works, except for one or two things.
  • You need the author to personally walk you through the setup. There are several onerous or convoluted steps that need to be done in order to get everything set up. Some steps do not work or need workarounds. The application does not work consistently or has bugs that need to be fixed before you can start developing.
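
In the best case, getting started boils down to something like the following (the exact commands are hypothetical and depend on the stack):

```
cp .env.example .env    # fill in the secrets
docker compose up       # build and start everything
# the application is now running and works
```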

3. Is there a linter? What about code styles?

  • The project contains a linter, a code formatter and a commit linter. They all work and the project passes all of them (a sketch of what this might look like follows this list).
  • The project nominally contains a code style but it’s up to each developer to follow it. Linters are run by each developer’s IDE.
  • There are only minimal code styles (if any). There is no linter.
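
For a JavaScript project, the best case might be wired up roughly like this; the tools and script names are assumptions, and the point is simply that linting, formatting and commit linting are defined in the project and actually pass:

```json
{
  "scripts": {
    "lint": "eslint .",
    "format": "prettier --check .",
    "lint:commits": "commitlint --from origin/main"
  }
}
```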

4. Are there any tests?

  • The project has thorough test coverage. The tests are reasonably quick and pass consistently (a quick way to check this follows this list).
  • The project has tests but they only cover some parts of the codebase. The tests are slow and/or flaky.
  • The project does not have any tests or the tests are not maintained.
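
A quick way to gauge this, assuming a Python project that uses pytest (both flags are standard pytest options):

```
pytest -q               # does the whole suite pass on a fresh checkout?
pytest --durations=10   # lists the slowest tests, a quick smell test for speed
```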

5. Is there any documentation?

  • There is a README with setup instructions and guides for common operations (backing up databases, publishing, etc…). There are notes about previously encountered issues (“lessons learned”) and how to handle them. The project structure and any unusual features are clearly documented. This is the level of documentation you would expect from a popular open source library (a rough outline follows this list).
  • There is a README, but it only covers setup.
  • There is no documentation or the documentation is outdated and not maintained. Knowledge is transferred on a verbal basis.
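
As a rough outline, the top tier above corresponds to a README shaped something like this (the headings are only an illustration):

```
# Project name
## Setup              - .env, dependencies, the one command to run
## Common operations  - backing up the database, publishing a release
## Lessons learned    - known pitfalls and how to work around them
## Project structure  - where things live and anything unusual
```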

6. What do the peer reviews look like?

  • The peer review is taken seriously. All code goes through peer review, even the tech lead’s.
  • Peer review is only for the benefit of the reviewer. No knowledge exchange takes place.
  • Peer review isn’t being done or is just a formality.

7. The code sample

This step is entirely optional. You’ll need a random code sample, so just pick any file that looks interesting.

  • It is almost immediately obvious what the code does [1]. The code is no more complex than it needs to be and any non-essential elements are encapsulated and hidden. Everything is neat and tidy (an example of what this looks like follows this list).
  • You can guess what the code does by reading it but you can’t be 100% sure. The solution is more complicated than it needs to be. Some stuff could probably be moved elsewhere but overall, the code is usable.
  • It’s difficult to understand what the code does or where it fits in with the rest of the project. There is commented out code and print statements left over from debugging. The code style is messy and inconsistent.
  • Only the author understands how the code works.
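
To make the footnote concrete, the kind of doc tag that makes a file immediately readable looks something like this (a hypothetical Python function, not taken from any particular project):

```python
def prorate_refund(amount_cents: int, days_used: int, days_in_period: int) -> int:
    """Return the refund (in cents) for the unused part of a billing period.

    Rounds down so we never refund more than was actually paid.
    """
    unused_days = days_in_period - days_used
    return amount_cents * unused_days // days_in_period
```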

You’ve probably noticed that only the last step actually goes into the code (and it’s optional!). Why? For at least two reasons: first, it’s very unlikely that a project that passes the first six checks with flying colors has crap code. If somebody is writing bad code, they’re definitely not going to bother with documentation, well-crafted commit messages or proper peer review. And second, it’s a lot easier (and faster!) to make an assessment if it doesn’t require you to delve into the actual codebase. Heck, you can get a lot of this information from an informal chat or via email, without knowing anything about the project, the company or the people.

I’d like to particularly mention the “should” men here. These are the kind of people that say: “Yes, we know we should do X or Y, but **insert excuse**.” There is, objectively speaking, no difference between people that know they should do something but don’t, and people that don’t see a point in doing it at all. The end results are the same.

Finally, a word of warning: there is one thing this test may not catch, and that’s overengineering. How can you tell you’re on an overengineered project? You probably won’t until you actually start working on it. If you notice that everything seems to be done “by the book” but it’s still taking you a long time to implement even trivial changes, then you’re probably on an overengineered project.

  [1] Yes, adding doc tags to your code is totally cheating. And in this case, you should definitely cheat.
