< instead of <=) are often unit-level errors. But do developers commonly have the skills to design boundary tests that are both thorough and efficient? If not, the boundary tests can be done at a higher level. Or, if you aren't coordinating the various levels, you might be doing boundary testing unnecessarily at more than one level.
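As a minimal sketch of what a thorough-but-efficient unit-level boundary test looks like, consider a hypothetical discount function with a single boundary (the function, names, and threshold are illustrative, not from the article):

```python
def bulk_discount(quantity):
    """Return the discount rate for an order (hypothetical example)."""
    if quantity >= 100:  # the boundary under test; writing > here would be the classic bug
        return 0.10
    return 0.0

# An efficient boundary test checks the boundary value itself
# plus one value on each side of it -- three cases, no more.
assert bulk_discount(99) == 0.0    # just below the boundary
assert bulk_discount(100) == 0.10  # exactly on the boundary
assert bulk_discount(101) == 0.10  # just above the boundary
```

If the developer had mistakenly written `>` instead of `>=`, the test of the value 100 would catch it; a test suite that only sampled values far from the boundary (say, 10 and 500) would not.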
Who should veer off the happy path?
Who is doing negative testing? Tests whose expected result is an error can make up half of your tests, or even 90 percent of them. Inexperienced test designers at all levels of testing often focus too much on the "happy path" positive tests and don't adequately test the robustness of the system when it encounters errors. You also have to watch out for people who like the instant gratification they get from negative tests and who neglect the mundane but more important happy path tests.
Unit tests are good for making sure you exercise your hard-to-reach, error-handling code. But you also want to make sure the error handling works all the way through the system. It's not easy to decide how much negative testing should be done at each level.
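A unit-level negative test can force an error-handling path directly, without having to provoke the error from outside the system. Here is a hedged sketch; the `parse_port` function and the tiny helper are hypothetical, used only to illustrate the pattern:

```python
def parse_port(text):
    """Parse a TCP port number, raising ValueError on bad input (illustrative)."""
    port = int(text)  # raises ValueError for non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

def raises_value_error(func, *args):
    """Helper: return True if func(*args) raises ValueError."""
    try:
        func(*args)
        return False
    except ValueError:
        return True

# The happy path still gets a test...
assert parse_port("8080") == 8080
# ...but the negative cases exercise the hard-to-reach error handling.
assert raises_value_error(parse_port, "not-a-number")
assert raises_value_error(parse_port, "70000")
```

Checking that the error propagates correctly all the way through the system (for example, that the user sees a sensible message rather than a stack trace) would still need a higher-level test; the unit test only proves the error is raised.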
Write one assertion per test?
Some developers say that the best way to write unit tests is with only one assertion per test, and that anything more complex should be split into more than one test. Some system test designers, especially those who are testing against formal requirements, say the same thing. But I've found that complex tests that are more like user scenarios are much more likely to find bugs. So the challenge is balancing these two ideas and coordinating at which test levels we use the nasty but productive complex test scenarios.
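The contrast between the two styles can be sketched against a hypothetical shopping-cart class (the class and test names are invented for illustration):

```python
class Cart:
    """Hypothetical cart used only to contrast the two test styles."""
    def __init__(self):
        self.items = {}

    def add(self, sku, qty=1):
        self.items[sku] = self.items.get(sku, 0) + qty

    def remove(self, sku):
        del self.items[sku]

    def total_quantity(self):
        return sum(self.items.values())

# One-assertion-per-test style: each test isolates a single behavior,
# so a failure points directly at its cause.
def test_add_sets_quantity():
    cart = Cart()
    cart.add("apple")
    assert cart.total_quantity() == 1

# Scenario style: a longer, user-like sequence with several assertions,
# more likely to catch interactions between operations.
def test_shopping_scenario():
    cart = Cart()
    cart.add("apple", 2)
    cart.add("pear")
    assert cart.total_quantity() == 3
    cart.remove("apple")
    assert cart.total_quantity() == 1
    cart.add("apple")  # re-adding an item after it was removed
    assert cart.total_quantity() == 2

test_add_sets_quantity()
test_shopping_scenario()
```

The scenario test would catch, say, a `remove` that left stale state behind so a later `add` misbehaved, a bug no single-assertion test of either operation in isolation would see.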
What level are the bug fix regression tests?
Some organizations add new regression tests when they find a bug that wasn't caught by an existing test suite. If you have unit tests and system tests, where do you put the new regression tests? Do you analyze whether it's a simple unit-level bug or something that can only be found with a higher-level test? Most teams probably send it off to a system test team and don't think about where the test should go.
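When the analysis shows a bug was a simple unit-level mistake, the regression test can live with the unit tests. A hedged sketch, with a hypothetical pagination function and an invented bug ID:

```python
def paginate(items, page_size):
    """Split a list into pages (hypothetical example).

    The original bug, per this illustration, dropped the final
    partial page when len(items) wasn't a multiple of page_size.
    """
    pages = []
    for start in range(0, len(items), page_size):
        pages.append(items[start:start + page_size])
    return pages

def test_regression_bug_1234_final_partial_page():
    # Reproduces the original failure at the unit level:
    # five items with a page size of two must yield a final page of one.
    assert paginate([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]

test_regression_bug_1234_final_partial_page()
```

Naming the test after the bug report keeps its origin visible, and putting it at the unit level means it runs on every build rather than waiting for a system test cycle.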
Maybe you have some challenges of your own to share about coordinating testing at the various levels. Just keep in mind that the first challenge is to open a dialog between the people who are doing the different levels of testing.