When you get down to ranking, do a hierarchical weighting. Don't give Administration the heaviest weight just because it had the most bullet items to evaluate. Decide on your evaluation weights at a macro level - perhaps using the areas I picked out above. Take 1000 points and distribute them across those areas. Then it doesn't matter how many bullet items are evaluated in each area. If necessary, go another level down in some areas. Perhaps CM Capabilities should be broken down into Workspace Management, CM Manager Functions, and Basic Change and Version Control.
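To make the arithmetic concrete, here's a minimal sketch in Python of the point-distribution idea. Only the CM Capabilities breakdown comes from the text above - the other area names, point allocations, and scores are made up for illustration, not a recommended weighting.

```python
# A minimal sketch of hierarchical weighting: distribute 1000 points across
# macro areas (and sub-areas where needed). A tool's contribution from an
# area depends on that area's points, not on how many bullet items it has.

# Illustrative allocation of 1000 points; only the CM Capabilities breakdown
# comes from the article - the other areas and numbers are hypothetical.
WEIGHTS = {
    "CM Capabilities": {
        "Workspace Management": 100,
        "CM Manager Functions": 100,
        "Basic Change and Version Control": 150,
    },
    "Administration": 100,
    "Usability": 200,
    "Process Support": 200,
    "Scalability and Performance": 150,
}

def total_points(weights):
    """Sum the leaf allocations - they should come to exactly 1000."""
    if isinstance(weights, dict):
        return sum(total_points(v) for v in weights.values())
    return weights

def weighted_score(weights, scores):
    """Roll per-area scores (0.0 to 1.0) up into a single 1000-point result."""
    if isinstance(weights, dict):
        return sum(weighted_score(w, scores[area]) for area, w in weights.items())
    return weights * scores  # leaf: points times normalized score

assert total_points(WEIGHTS) == 1000

# Hypothetical scores for one candidate tool.
tool_scores = {
    "CM Capabilities": {
        "Workspace Management": 0.8,
        "CM Manager Functions": 0.6,
        "Basic Change and Version Control": 0.9,
    },
    "Administration": 0.7,
    "Usability": 0.85,
    "Process Support": 0.5,
    "Scalability and Performance": 0.75,
}
print(f"Weighted score: {weighted_score(WEIGHTS, tool_scores):.1f} / 1000")
```

The nice property of this structure is that adding or removing bullet items inside an area never shifts the balance between areas - only a deliberate change to the point allocation does that.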
Then get some consensus on the weighting - different experts will have expertise on different areas of the ranking tree. Recognize that as part of your input gathering. Don't treat your testers' input on administration with the same weight as your SCM administrator's input. And recognize that there are 150 developers, 25 testers, 40 managers and 3 administrators - a little feature for each developer is a lot more important than a big feature for an administrator.
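Here's a rough sketch of how that might be tallied, using the headcounts above. The expertise factors and ratings are hypothetical, purely to illustrate the idea that a rating counts in proportion to both the headcount and the expertise behind it.

```python
# A rough sketch of weighting stakeholder input: each role's rating counts in
# proportion to its headcount and its expertise in the area being rated.

ROLES = {  # headcounts from the article
    "developer": 150,
    "tester": 25,
    "manager": 40,
    "administrator": 3,
}

# Hypothetical expertise factors (0.0 to 1.0) - how much each role's opinion
# should count per area. Roles not listed for an area get zero weight there.
EXPERTISE = {
    "Administration": {"administrator": 1.0, "manager": 0.1},
    "Usability": {"developer": 1.0, "tester": 1.0,
                  "manager": 0.8, "administrator": 0.5},
}

def weighted_rating(area, ratings):
    """Average per-role ratings (0 to 10) for one area, weighted by
    headcount times expertise."""
    num = den = 0.0
    for role, rating in ratings.items():
        w = ROLES[role] * EXPERTISE[area].get(role, 0.0)
        num += w * rating
        den += w
    return num / den if den else 0.0

# 150 developers rating usability a 9 swamp 3 administrators rating it a 4...
print(weighted_rating("Usability", {"developer": 9, "administrator": 4}))
# ...while on Administration, only the administrators' rating counts.
print(weighted_rating("Administration", {"developer": 9, "administrator": 4}))
```

However you tune the factors, the principle is the same: weight input by who gives it and how many people it represents, not just by how loudly it is given.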
Finally, look to the future. Don't pick a tool that's good for the next 5 years - pick one that will last you 20 years or more. You don't know where your product is going to end up. The first CM tool I designed is soon to enter its 30th year of operation on its initial project. Did I foresee this 30 years ago? Did I build a nice GUI? (Hint: the term didn't even exist, although text-based CRT displays were getting popular.) No. But I did make sure it was built on fundamental principles and that it was scalable beyond the day's technology. So look at the technology, look at the scalability, look at the performance, look at the standards - make sure it will stand the test of time.
So what are your favorite evaluation criteria? What did I miss? How do you approach your evaluations? Drop me a note or start a thread...