The word "agile" in this paper doesn't refer to any specific development process or methodology; performance testing for agile development projects is a separate topic not covered here. Rather, "agile" means applying the agile principles to performance engineering. As stated in the "Manifesto for Agile Software Development":
We are uncovering better ways of developing software by doing it and helping others do it.
Through this work we have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more.
The goal of this paper is to demonstrate the importance of "left-side" values for performance engineering and give a few specific illustrations.
I have never read or heard anybody argue against testing early. Nevertheless, it still rarely happens in practice. Usually there are project-specific reasons, such as a tight schedule or budget, preventing such activities (if anybody thought about them at all).
Dr. Neil Gunther, in his book Guerrilla Capacity Planning, described the reasons why management (consciously or unconsciously) resists early testing. While the book presents a broader perspective on capacity planning, its methodology, including the guerrilla approach, is highly applicable to performance engineering. Few projects schedule all necessary performance engineering activities with proper time and resources allocated. It may be better to realize from the beginning that this won't be the case and proceed in a guerrilla fashion: conduct smaller performance engineering activities that don't require extensive time and resources, perhaps beginning with just a few key questions, and then expand them if time permits.
The software performance engineering (SPE) approach to developing software systems that meet performance requirements has long been advocated by Dr. Connie Smith and Dr. Lloyd Williams. While their methodology doesn't focus on testing initiatives, SPE cannot be implemented successfully without some preliminary testing and data collection to determine model inputs and parameters and to validate model results. Whether you are considering full-blown SPE or a guerrilla-style "back-of-the-envelope" approach, you still need baseline measurements on which to build your calculations. Early performance testing at any level of detail can be very valuable here.
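To make the "back-of-the-envelope" idea concrete, the sketch below turns one hypothetical baseline measurement into rough capacity estimates using standard queueing identities (maximum throughput from service demand, and Little's Law for concurrent users). All numbers are invented placeholders, not data from any real system.

```python
# Back-of-the-envelope capacity estimates from a single baseline measurement.
# Every input value here is hypothetical, for illustration only.

def max_throughput(servers: int, service_time_s: float) -> float:
    """Upper bound on throughput: each server completes 1/service_time requests per second."""
    return servers / service_time_s

def concurrent_users(throughput_rps: float, response_time_s: float,
                     think_time_s: float) -> float:
    """Little's Law for a closed system: N = X * (R + Z)."""
    return throughput_rps * (response_time_s + think_time_s)

# Suppose an early test measured 50 ms of CPU service time per request,
# and the target box has 4 cores:
x_max = max_throughput(servers=4, service_time_s=0.050)          # 80 requests/s at best
# At that throughput, with 200 ms response time and 5 s user think time:
n = concurrent_users(throughput_rps=x_max,
                     response_time_s=0.2, think_time_s=5.0)      # about 416 users
```

Such estimates are crude, but they are exactly the kind of calculation that an early measurement enables and a later full-scale test can validate.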
One rarely discussed aspect of early performance testing is unit performance testing. The unit here may be any part of the system, such as a component, service, or device. This is not a standard practice, but it should be: the later in the development cycle a change is made, the more costly and difficult it becomes. Why wait until the whole system is assembled to start performance testing? We don't wait in functional testing; why should we in performance? The predeployment performance test is the analogue of a system or integration test, but it is usually conducted without any preceding "unit testing" of performance.
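A unit performance test can be as simple as a functional unit test with a timing assertion. The sketch below times a component in isolation and fails if it exceeds a budget; `parse_order` and the 50 ms budget are hypothetical stand-ins for a real component and its requirement.

```python
# A minimal "unit performance test": time one component in isolation and
# fail the build if it exceeds its performance budget. The component and
# the budget below are hypothetical placeholders.
import time

def parse_order(raw: str) -> list:
    # Stand-in for the component under test.
    return raw.split(",")

def test_parse_order_performance() -> None:
    iterations = 1000
    start = time.perf_counter()
    for _ in range(iterations):
        parse_order("id=42,sku=A7,qty=3")
    # Average time per call, in milliseconds.
    elapsed_ms = (time.perf_counter() - start) * 1000 / iterations
    assert elapsed_ms < 50, f"parse_order too slow: {elapsed_ms:.3f} ms per call"

test_parse_order_performance()
```

Run alongside the functional unit tests, such a check catches a performance regression in a single component long before the whole system is assembled.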
The main obstacle is that many systems are fairly monolithic: if they have parts, those parts don't make much sense separately. But there may be significant advantages to test-driven development. If you can decompose the system into components in such a way that each can be tested separately for performance, then only integration issues remain to be fixed when you put the system together. Another problem is that large corporations use many third-party products, where the system appears as a "black box" that is not easily understood, making it more difficult to test effectively.