During unit testing, different variables such as load, the amount of data, security, etc., can be reviewed to determine their impact on performance. In most cases, test cases are simpler and tests are shorter in unit performance testing. There are typically fewer tests with limited scope, e.g., fewer variable combinations than in a full stress and performance test.
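For illustration only, here is a minimal sketch of a unit-level performance check that varies one such variable, the amount of data, to observe its impact. The sort_orders function and the record counts are hypothetical placeholders, not part of any real system under test.

```python
import time

def sort_orders(orders):
    # Hypothetical function under test; a real test would exercise real code.
    return sorted(orders)

# Vary one variable (data volume) and observe its impact on timing.
for size in (1_000, 10_000, 100_000):
    data = list(range(size, 0, -1))        # reverse-ordered input of varying size
    start = time.perf_counter()
    sort_orders(data)
    elapsed = time.perf_counter() - start
    print(f"{size:>7} records: {elapsed:.4f}s")
```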
We shouldn't underestimate the power of the single-user performance test. If the performance of the system for a single user isn't good, it won't be any better for multiple users. Single-user testing is conducted throughout the application development lifecycle, during functional and user acceptance testing, and gathering performance data can be extremely helpful during these stages. In fact, the single-user performance test may surface performance issues earlier. Single-user performance can provide a good indication of which business functions and which application code need to be investigated further. Additionally, between single-user tests and load tests there are also functional multi-user tests, as described in Karen Johnson's article. A good test with a few users can also help identify many problems that may be very difficult to diagnose during load testing.
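As one way to picture this, a single-user timing can be folded into an ordinary functional test so that performance data is gathered as a side effect. In this sketch the endpoint URL, the use of the requests library, and the two-second budget are all assumptions made for the example.

```python
import time
import requests

def test_search_response_time():
    # Hypothetical endpoint; substitute a real business function of the system.
    start = time.perf_counter()
    response = requests.get("https://example.com/api/search", params={"q": "order"})
    elapsed = time.perf_counter() - start

    # Functional check first: the call must succeed.
    assert response.status_code == 200
    # Then check the single-user timing so regressions surface early.
    assert elapsed < 2.0, f"single-user response time {elapsed:.3f}s exceeds the budget"
```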
While early performance engineering is definitely the best approach (at least for product development) and has long been advocated, it is still far from commonplace. The main problem here is that the mindset must shift from a simplistic record/playback performance test occurring late in the product lifecycle to a true performance engineering approach that starts early in the product lifecycle. You need to translate the "business functions" performed by the end user into component- and unit-level usage, and end-user requirements into component- and unit-level requirements, etc. You need to move from the record/playback approach to using programming skills to generate the workload and to create stubs that isolate the component from other parts of the system. You need to go from "black box" performance testing to "grey box."
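To make the stub idea concrete, the sketch below isolates a hypothetical PricingService component from its downstream TaxGateway dependency and drives it programmatically. All class names, prices, and iteration counts are illustrative assumptions, not a prescribed implementation.

```python
import time

class TaxGatewayStub:
    """Stands in for a remote tax service: returns a canned answer instantly
    (a fixed delay could be added to model the real dependency's latency)."""
    def tax_rate(self, region: str) -> float:
        return 0.08

class PricingService:
    """The component under test; in production it would call the real gateway."""
    def __init__(self, tax_gateway):
        self.tax_gateway = tax_gateway

    def total(self, price: float, region: str) -> float:
        return price * (1 + self.tax_gateway.tax_rate(region))

def drive_component(iterations: int = 10_000) -> float:
    """Generate the workload programmatically against the isolated component."""
    service = PricingService(TaxGatewayStub())
    start = time.perf_counter()
    for _ in range(iterations):
        service.total(19.99, "NY")
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"component-level run took {drive_component():.3f}s for 10,000 calls")
```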
Another important kind of early performance testing is infrastructure benchmarking; the hardware and software infrastructure is also a component of the system.
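As one crude example of such a benchmark, the sketch below measures sequential disk write throughput to establish a baseline figure for the storage layer before application-level testing. The file size and block size are arbitrary and would need to be adjusted to the environment.

```python
import os
import time

def disk_write_throughput(path="bench.tmp", size_mb=256, block_kb=1024):
    """Write size_mb of zeros in block_kb chunks and return MB/s."""
    block = b"\0" * (block_kb * 1024)
    blocks = size_mb * 1024 // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())        # make sure the data actually hits the disk
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

if __name__ == "__main__":
    print(f"sequential write: {disk_write_throughput():.1f} MB/s")
```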
Despite the importance of early performance work, quite often it is just not an option. If you are around from the beginning of the project and know that you will be involved, a few guerilla-style actions can save you (and the project) a lot of time and resources later. Still, the case where you get on the project just for pre-deployment performance testing is, unfortunately, typical enough. You need to test the product for performance before going live as well as you can in the given timeframe. The following sections discuss what you can still do in such situations.
Don't Underestimate Workload Generation
I believe that the title of Andy Grove's book Only the Paranoid Survive relates even better to performance engineers than it does to executives. I can imagine an executive who isn't paranoid, but I can't imagine a good performance engineer without this trait. And it applies to the entire testing effort, from the scenarios you consider, to the scripts you create, to the results you report.
Be a Performance Test Architect
There are two large elements that require architect-type expertise:
1) Gathering and validating all requirements (first of all, the workload definition) and projecting them onto the system architecture.
Too many testers treat all the detailed information they obtain from the business people (i.e., workload descriptions, scenarios, use cases, etc.) as if it were "holy scripture." But business people know the business; they rarely know anything about performance engineering. So gathering requirements is an iterative process, and every requirement submitted should be evaluated and, if possible, validated. Sometimes performance requirements are based on reliable data, sometimes they are a pure guess, but it is important to understand how reliable they are.
The load the system can handle should be carefully scrutinized: the workload is an input to testing, while response times are the output. You may decide whether response times are acceptable even after the test, but you must define the workload before it.
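One way to picture this separation: the workload is fixed as explicit input before the run, and the acceptability of response times is judged from the output afterwards. The user count, think time, transaction mix, and the three-second budget below are invented for the example.

```python
# Workload is the input, defined before the run; all numbers and transaction
# names are hypothetical examples.
WORKLOAD = {
    "concurrent_users": 50,
    "think_time_seconds": 10,
    "transaction_mix": {        # share of each business function in the load
        "search":   0.60,
        "checkout": 0.25,
        "admin":    0.15,
    },
}

def acceptable(p90_response_times: dict, budget_seconds: float = 3.0) -> bool:
    """Response times are the output; whether they are acceptable can be
    decided after the run, but the workload above must be defined before it."""
    return all(t <= budget_seconds for t in p90_response_times.values())

# Example result of a run: 90th-percentile seconds per transaction.
print(acceptable({"search": 1.2, "checkout": 2.8, "admin": 0.9}))
```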
The gathered requirements should be projected onto the system architecture. It is important to understand whether the included test cases add value by exercising different functionality or different components of the system. On the other hand, it is important to make sure that there are test cases for every component (or, if there aren't, to know why), as sketched below.
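A trivial way to express this projection is a coverage check that maps planned test cases to the components they exercise and flags any component left uncovered. The component and test-case names here are made up purely for illustration.

```python
# Components of the system and the components each planned test case touches;
# all names are invented for illustration.
COMPONENTS = {"web_tier", "app_tier", "database", "message_queue", "search_index"}

TEST_CASE_COVERAGE = {
    "place_order":    {"web_tier", "app_tier", "database"},
    "browse_catalog": {"web_tier", "app_tier", "search_index"},
}

covered = set().union(*TEST_CASE_COVERAGE.values())
print("components without a test case:", COMPONENTS - covered or "none")
```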