Functional testers often introduce steps into the test script that were recorded by accident or in error, and test managers then expect the core automators to "fix" any poorly recorded script. Instead, the test manager should separate the roles of the functional tester and the technical tester (the "core automator") and have only the technical tester record automated test scripts.
An analogous situation arises in software development, where a team of functional analysts gathers requirements and writes the project's functional specifications, and the development team then works from those specs to write the application's programs. The functional analysts would NOT start coding in an object-oriented or other programming language only to have the development team "fix" the programs they wrote. The same principle applies to the automation of test scripts: the functional testers perform the initial test planning activities and create the test scripts, test scenarios, test cycles, test sets, and so on; the technical tester then works from the fully documented test scripts and scenarios and automates them with an automated testing tool.
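One common way to realize this hand-off is a keyword-driven structure, where the functional tester's documented script is plain data and the technical tester supplies the automation behind each keyword. The sketch below is illustrative only; the step keywords, handler names, and stub responses are hypothetical stand-ins for real test-tool calls.

```python
# Minimal keyword-driven sketch of the hand-off described above.
# The functional tester authors the documented test script as data;
# the technical tester implements the automation that executes it.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class TestStep:
    action: str    # keyword chosen by the functional tester
    argument: str  # input data for the step
    expected: str  # documented expected result

# Documented test script authored by the functional tester (no coding required).
login_script: List[TestStep] = [
    TestStep("enter_username", "jsmith", "username accepted"),
    TestStep("enter_password", "secret", "password accepted"),
    TestStep("submit", "", "welcome page shown"),
]

def run_script(steps: List[TestStep],
               handlers: Dict[str, Callable[[str], str]]) -> List[bool]:
    """Execute each documented step and compare the outcome to its expected result."""
    return [handlers[step.action](step.argument) == step.expected for step in steps]

# Automation layer written by the technical tester: one handler per keyword.
# These stubs stand in for real test-tool calls (e.g., driving a GUI).
handlers: Dict[str, Callable[[str], str]] = {
    "enter_username": lambda value: "username accepted",
    "enter_password": lambda value: "password accepted",
    "submit": lambda value: "welcome page shown",
}

results = run_script(login_script, handlers)
print(all(results))  # every step matched its documented expectation
```

With this split, a change to the documented test (new steps, new data) needs no automation work, and a change to the application's interface is absorbed entirely inside the handlers.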
The functional tester and the technical tester can also complement each other's work. For instance, after the technical tester has created a robust test script, the functional tester can verify that the script plays back correctly against the application and provide sign-off on the automated test script.
This article illustrates some commonly held misconceptions about automated testing and offers insights into working effectively with automated testing tools, including guidelines for automating test scripts. The reader is also informed about techniques for selecting and acquiring an automated testing tool, how to implement the tool within a QA team, and the benefits to expect from working with automated testing tools.