When writing software you will inevitably need to choose an external code library to assist you with some specific feature or behaviour.
You’re also likely to discover that there are many tools available that claim to solve the same problem. So how do you choose which one to use?
You should carefully compare the value of each tool under consideration, and to do that we typically use a comparison table.
A comparison matrix should take the following properties into account:
- Activeness: is the repo/source code actively maintained (e.g. new features, bug fixes)?
- Dependencies: how large is the dependency graph?
- Design: does the interface follow Go idioms and language best practices?
- Documentation: is there sufficient documentation to support the code?
- Extensibility: is it part of a larger ecosystem, and is it easy to extend?
- Performance: which implementation is more performant?
- Popularity: which tool is more popular (based on GitHub stars, so not scientific!)?
- Stability: how buggy is the software (based on GitHub issues, so not scientific!)? †
- Suitability: how effective is it (e.g. what core APIs/features does it provide)?
- Usability: how easy is it to use (warning: this is subjective)?
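The popularity and stability data points can be pulled from GitHub's public REST API: `GET /repos/{owner}/{repo}` returns, among other fields, `stargazers_count` and `open_issues_count` (note that the latter counts open pull requests as well as issues). A minimal sketch; the struct and function names are mine, and the example decodes a canned payload rather than making a live request:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// repoStats holds the two fields we care about from GitHub's
// GET /repos/{owner}/{repo} response. Beware: open_issues_count
// includes open pull requests as well as issues.
type repoStats struct {
	Stars      int `json:"stargazers_count"`
	OpenIssues int `json:"open_issues_count"`
}

// parseRepoStats decodes a repository API response body.
func parseRepoStats(body []byte) (repoStats, error) {
	var s repoStats
	err := json.Unmarshal(body, &s)
	return s, err
}

func main() {
	// In practice you would http.Get the endpoint above for each
	// candidate tool; here we decode a canned response so the
	// example runs offline.
	sample := []byte(`{"stargazers_count": 1234, "open_issues_count": 56}`)
	s, err := parseRepoStats(sample)
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("stars=%d open_issues=%d\n", s.Stars, s.OpenIssues)
}
```

Recording these numbers in the matrix (with the date they were captured) keeps the "not scientific!" caveat honest: the raw counts are a snapshot, not a verdict.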
† Stability: we can’t really judge the quality of the issues raised, nor can we assume that a higher issue count indicates poorly written software (a more popular tool will naturally accumulate more issues than a tool with very few users).
But we can at least compare the number of issues that have been closed against those left open and stale (a code base may be very active yet still show poor user engagement by ignoring opened issues).
Even this is flawed: a code base with far more users (and thus far more issues raised) may not be able to triage issues as quickly as one with very few users.
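With those caveats stated, the closed-vs-open comparison reduces to a simple ratio; a minimal sketch (the function name is mine, not from any library):

```go
package main

import "fmt"

// closureRate returns the fraction of all reported issues that have
// been closed. It is a crude signal: it ignores issue quality and,
// as noted above, penalises projects whose larger user base simply
// files issues faster than maintainers can triage them.
func closureRate(closed, open int) float64 {
	total := closed + open
	if total == 0 {
		return 0 // no issues at all; treat as "no signal"
	}
	return float64(closed) / float64(total)
}

func main() {
	// e.g. 180 issues closed vs 20 still open.
	fmt.Printf("%.0f%% closed\n", closureRate(180, 20)*100) // prints "90% closed"
}
```

A stale-issue variant (only counting open issues untouched for, say, 90+ days) would track the "poor user engagement" signal more closely, at the cost of another API call per issue.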
We will use the following emojis to identify comparative outcomes:
- ✅: clear winner
- ❌: clear loser
- ⚖️: balanced results
- 🗑: no winner/no loser
Note: I will sometimes mark both tools as neither ‘winner’ nor ‘loser’ because the comparative value isn’t necessarily indicative of either. For example, the data point might just be ‘of interest’ (such as the date the project started).
|Criterion|Tool A|Tool B|
|---------|------|------|
Winner: Tool A
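Once each criterion row has been marked, declaring the overall winner can be as simple as tallying the ✅ rows; a hypothetical sketch (balanced ⚖️ and neutral 🗑 rows count for neither tool):

```go
package main

import "fmt"

// winner tallies per-criterion outcomes ("A", "B", or "" for a
// balanced/neutral row) and names the tool with the most clear wins.
func winner(outcomes []string) string {
	wins := map[string]int{}
	for _, o := range outcomes {
		if o != "" {
			wins[o]++
		}
	}
	switch {
	case wins["A"] > wins["B"]:
		return "Tool A"
	case wins["B"] > wins["A"]:
		return "Tool B"
	default:
		return "no clear winner"
	}
}

func main() {
	// One entry per criterion row in the matrix; "" marks ⚖️/🗑 rows.
	rows := []string{"A", "A", "B", "", "A"}
	fmt.Println("Winner:", winner(rows)) // prints "Winner: Tool A"
}
```

In practice you may want to weight the rows (a ✅ on Suitability usually matters more than one on Popularity), but an unweighted count is a reasonable starting point.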
We would store these results in some kind of shared document for historical purposes. Once a winner is identified, we should also declare the proposed steps for integrating the chosen tool, as the integration effort can itself determine whether a tool wins, depending on deadlines and/or ‘time to market’.