link1 - manual code audits: reviewers read through the kernel code looking for exploitable bugs.
link2 - a dedicated team of 6 people looking for bugs in key files.
Both are talking about auditing the source code.
OK: the Linux kernel project has thousands of people auditing the code, including top academics around the world and PhD students who get to write a paper on it if they discover something ... so they are motivated.
Individual projects use different methods, but core systems get similarly close scrutiny.
The two BSDs in your example tend towards a cathedral development model.
The GNU/Linux projects are strongly biased towards the bazaar model, and so tend to be more reactive in terms of bug fixes ... the user is also the auditor. This is usually fine because, in general, bugfixes arrive very fast compared with other models.
I think the jury is still out on which method gets you the most secure code in practice.
GNU/Linux distros respond to the variability in the wider community by having hierarchies of repositories: core packages undergo additional development, including bugfixing, before they get included, while other repositories offer a range of tested third-party code if you want that stuff.