behavior that looks like poor performance, when in fact the tester is doing what he thought I wanted him to do.
Sometimes, when one problem is fixed, more are created.
Whenever testers try to improve one aspect of their work, other aspects may temporarily suffer. For instance, doing more and better bug investigation for some problems may increase the chance that other problems will be missed entirely. This performance fluctuation is a normal part of self-improvement, but it can take a test manager by surprise. Just remember that testing, like any software program, is an interconnected set of activities. Any part of it may affect any other part. Overall improvement is an unfolding process that doesn't always unfold in a straight line.
Something may work well in one environment, and crash in another.
A tester may perform well with one technology, or with one group of people, yet flounder with others. This can lead those of us who spend a long time in one company to have an inflated view of our general expertise. Watch out for this. An antidote may be to attend a testing conference once in a while, or to participate in a tester discussion group, either live or online.
Problems and capabilities are not necessarily obvious and visible.
As with a software product, I won't know much about a tester's work just by dabbling with the surface or viewing a canned demonstration. I know that to test a product I must test it systematically, and the same goes when I'm evaluating a tester. This means sustained observation in a variety of circumstances. I learned long ago that I can't judge a tester from a job interview alone. All I can do is make an educated guess. Where I really learn about a tester is when I'm testing the same thing he's testing, working right next to him.
Testers are not mere software products, but I find that the parallel between complex humans and complex software helps me let go of the desire for simple measures that will tell me how good a tester is. When I manage testers, I collect information every day. I collect it from a variety of sources: bug reports, documentation, first-hand observation, and second-hand reports, to name a few. About once a week, I take mental stock of what I think I know about each tester I'm working with, triage the "bugs" I think I see, and find something that's good or recently improved about each tester's work. It's a continuous process, just like real testing, not something that works as well when pushed to the last minute before writing a performance review.