I found out recently (within the past year) that there are a good many analysts who do no tool testing or validation of their own, and tend to use tools 'blindly' (my term), often based on recommendations from others.
I can believe it. Of course, there will always be a limit to the amount of testing that any examiner can do, but if you rely on a tool which you've made no effort to verify then you're asking for trouble.
Because many analysts don't have programming skills (not a requirement) and tend not to ask questions of others, they don't really know what's available within the various data structures, and therefore can't judge the sufficiency of the data they're retrieving.
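To make that concrete, here's a minimal sketch of the kind of spot-check I have in mind: decoding a single field from a structure by hand and comparing it to what a tool reports. The file path, offset, and the assumption that an 8-byte FILETIME sits at that offset are all hypothetical stand-ins; the point is the habit, not this particular structure.

```python
import struct
from datetime import datetime, timedelta, timezone

# FILETIME values count 100-nanosecond intervals since 1601-01-01 UTC.
FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_utc(raw8: bytes) -> datetime:
    """Decode a little-endian 64-bit FILETIME into a UTC datetime."""
    (ticks,) = struct.unpack("<Q", raw8)
    return FILETIME_EPOCH + timedelta(microseconds=ticks // 10)

def spot_check(path: str, offset: int, tool_value: datetime) -> bool:
    """Read the 8 bytes at 'offset' by hand and compare to what the tool reported."""
    with open(path, "rb") as f:
        f.seek(offset)
        manual = filetime_to_utc(f.read(8))
    print(f"manual decode: {manual.isoformat()}  tool: {tool_value.isoformat()}")
    return manual == tool_value
```

Even a check that small tells you whether the tool's timestamp handling (UTC vs. local time, epoch math) matches what's actually on disk.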
I'm not sure any of us is ever perfectly placed to judge the sufficiency of the data we're using, but I'd agree that each of us could strive to be less 'imperfectly placed' by being more alive to the evolving state of knowledge about available artifacts and methods by which they may be retrieved.
Agreed. I tend to (prefer to) do this by interacting with other analysts, even ones outside of my office, whom I trust to do more than just stare blankly back at me, and then run off and use what I've shared.