Locking Down Wireless

Closing Security Holes in Wireless Applications Through Development Practices

Mobile computing shows no sign of slowing down. Enterprise deployments of notebook PCs, tablet PCs, and PDAs continue to grow by leaps and bounds. Gartner predicts that by 2010, 80 percent of key business processes will entail the exchange of real-time information involving mobile workers. Unlike the security of fixed hardware and software, which is covered by well-established standards and practices, wireless application security presents its own set of challenges.

In order to lock down wireless devices and applications, data must be protected while it resides on mobile devices, while it travels between devices and the organization, and while it sits within the organization. Data stored on mobile devices is more at risk than data stored on servers simply because it is far more likely to fall into the wrong hands. Without adequate security measures, data could be exposed to unauthorized use, potentially disclosing trade secrets, confidential personnel records, and financial information.

Despite recent security breaches, consumers and businesses have attained a certain level of trust regarding online activity and interactions. Businesses must provide valuable, interactive services to their customers in order to compete. Consumers do not hesitate to provide financial information via their banks' Web sites, use Web applications to shop online, book flights via the Web, or access corporate intranets to communicate sensitive, internal information. Similarly, for many consumers, carrying around sensitive personal and business-related data on a wireless device has become a way of life--many simply can't imagine not having the information at their fingertips.

Tackling Security from the Beginning
Protecting customer data and information should be a top priority for every organization, but many don't know where to start. The first step is to ensure that the wireless applications used are as secure as possible. Unfortunately, security is not always a priority during the application development lifecycle. Secure application development requires a constant balancing act between functional requirements and business drivers, deadlines and limited resources, and risk and flexibility. Security needs to be incorporated into all phases of the application development lifecycle. Developers must focus on the security risks inherent in the development process and apply security principles specific to the programming languages, operating systems, and technology they use.

Historically, businesses have held third-party software providers responsible for releasing unsafe applications. However, vulnerabilities in custom in-house applications are also common and pose significant risks to sensitive material such as consumers' personal financial information. Developers on both sides must design the appropriate security features--encryption, authentication, auditing--for the level of security required.

Developers can greatly reduce security risks by following a few simple steps when designing wireless applications. To protect information housed on wireless devices, developers should consider requiring users to log in to the wireless application, which prevents anyone other than the authorized user from reaching personal and business-related data. They should limit how long a device can remain inactive before the user must log in again, or prompt users to reenter passwords at intervals. They should also consider limiting the amount of data the application retains in memory, reducing how much information could be stolen.
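The inactivity timeout described above might be handled with something like the following Java sketch, which tracks the time of the last user action and forces a fresh login once a configurable idle limit has passed. The class and method names are hypothetical; a real wireless application would wire this into its own login screen and credential store.

// SessionGuard.java -- illustrative sketch of an inactivity timeout.
// Names and the five-minute limit are hypothetical, not from the article.
public class SessionGuard {
    private static final long MAX_IDLE_MILLIS = 5 * 60 * 1000; // idle limit, tunable per policy
    private long lastActivity = System.currentTimeMillis();
    private boolean loggedIn = false;

    // Call when the user successfully authenticates.
    public void onLogin() {
        loggedIn = true;
        lastActivity = System.currentTimeMillis();
    }

    // Call on every user interaction (key press, screen tap, request).
    public void touch() {
        lastActivity = System.currentTimeMillis();
    }

    // Call before serving any sensitive data; forces re-authentication
    // if the device has sat idle too long.
    public boolean requireActiveSession() {
        if (!loggedIn || System.currentTimeMillis() - lastActivity > MAX_IDLE_MILLIS) {
            loggedIn = false; // discard the session; the caller must prompt for the password again
            return false;
        }
        touch();
        return true;
    }
}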

Developers should familiarize themselves with secure coding techniques for the programming language and platform they use. Until recently, many of these techniques were not taught in software development courses, so some engineers write code without realizing they may be creating potential problems. For example, parts of the standard C/C++ library, such as the unbounded string functions gets and strcpy, are not secure and should be used with great caution or avoided altogether.

Common Vulnerabilities
Several types of vulnerabilities affect both commercial and custom applications, and developers may fail to identify them before an application goes into production. They include authorization bypass, SQL injection, buffer overflows, and information leaks.

Authorization bypass occurs when a normal user is able to access information from a Web site or other type of application that was meant to be used by an administrator or a select group of individuals.
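The defense is to check authorization on the server for every request to a restricted resource rather than assuming only administrators know where it lives. A minimal Java sketch, with hypothetical session and role types:

// Illustrative authorization check; UserSession, Role, and the report are invented for this example.
public class AdminReportHandler {
    enum Role { USER, ADMIN }

    static class UserSession {
        final String username;
        final Role role;
        UserSession(String username, Role role) { this.username = username; this.role = role; }
    }

    // Reject the request unless the authenticated user really is an administrator.
    public String handle(UserSession session) {
        if (session == null || session.role != Role.ADMIN) {
            return "403 Forbidden"; // reveal nothing about the protected resource
        }
        return loadAdminReport();
    }

    private String loadAdminReport() {
        return "...confidential report..."; // placeholder for the protected content
    }
}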

Custom in-house applications are particularly prone to SQL injection. Many are connected to the Internet and use client-supplied data in SQL queries without first removing potentially harmful characters. In a typical scenario, data provided by a user, such as an account number and username, is used to query additional data in the SQL database. A knowledgeable attacker injects SQL commands through those inputs and can take control of the database, even reading other users' account information and details.
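The usual remedy in Java is to bind client-supplied values as query parameters rather than concatenating them into the SQL string, so input is always treated as data and never as SQL. A sketch using JDBC follows; the table and column names are invented for illustration.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AccountLookup {
    // Vulnerable pattern: "SELECT ... WHERE account_no = '" + accountNo + "'"
    // lets an attacker append his own SQL. The parameterized version below
    // keeps user input strictly as data.
    public String findOwner(Connection conn, String accountNo) throws SQLException {
        String sql = "SELECT owner_name FROM accounts WHERE account_no = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, accountNo); // bound value, never interpreted as SQL
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString("owner_name") : null;
            }
        }
    }
}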

A buffer overflow is another vulnerability that has plagued both commercial software and custom applications. It occurs when a program or process tries to store more data in a memory buffer (a temporary data-storage area) than the buffer was intended to hold. Because buffers are created to hold a limited amount of information, the extra data spills over into adjacent memory, corrupting the valid data held there. In many cases the overflow enables an attacker to execute commands of his choice on the machine. Applications written in Java and other "safer" languages are still susceptible to buffer overflows when they interact with services and libraries written in native code.
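Because the overflow itself happens in native code, the Java-level defense is to validate lengths before data crosses into a fixed-size native buffer. The following sketch assumes a hypothetical native method and buffer size:

public class NativeBridge {
    // Size of the fixed buffer the (hypothetical) native library allocates.
    private static final int NATIVE_BUFFER_SIZE = 256;

    // Hypothetical JNI entry point; a careless C implementation behind it might
    // copy the bytes into its 256-byte buffer without checking the length.
    private native void sendToRadio(byte[] payload);

    public void send(byte[] payload) {
        // Enforce the bound in Java, where array accesses are checked,
        // instead of trusting the native side to do it.
        if (payload == null || payload.length > NATIVE_BUFFER_SIZE) {
            throw new IllegalArgumentException("payload exceeds native buffer size");
        }
        sendToRadio(payload);
    }
}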

Information leaks also pose a threat to applications. A single leak is often not a serious problem in itself, but it can provide an attacker with the information necessary to launch other attacks. Error messages are a prime source of leakage. For example, if a user deliberately enters an erroneous string of text rather than an eight-digit account number, an insecure application will return the detailed error message generated by a component of the application, such as the SQL server. That message often tells the attacker what type and version of SQL database is in use and how the system is constructed, details that allow the attacker to refine the attack and more easily compromise sensitive data.
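A common safeguard, sketched below in Java, is to log the detailed exception on the server and return only a generic message to the user. The service and its database call are stand-ins for illustration.

import java.sql.SQLException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class BalanceService {
    private static final Logger LOG = Logger.getLogger(BalanceService.class.getName());

    public String lookupBalance(String accountNo) {
        try {
            return queryBalance(accountNo); // may throw SQLException on bad input
        } catch (SQLException e) {
            // Keep the database vendor, version, and query details in the server log...
            LOG.log(Level.WARNING, "balance lookup failed", e);
            // ...and give the user nothing an attacker can mine.
            return "We could not process your request. Please try again later.";
        }
    }

    private String queryBalance(String accountNo) throws SQLException {
        throw new SQLException("stand-in for a real database call");
    }
}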

Building Security into the Wireless Development Lifecycle
Each major phase of development--requirements collection, application design, and application implementation--can introduce vulnerabilities into an application. A holistic approach to building security into the development lifecycle will save tremendous amounts of time and money, because problems are identified early in the process and continually addressed during each step. Security practices should be in place during requirements planning, design time, implementation, and testing in order to catch the majority of problems as early in the cycle as possible.

It is less expensive and less disruptive to discover design-level vulnerabilities during the design phase than during implementation or testing, when they force a costly redesign of pieces of the application. For example, if proper authentication of administrators is not built into the program from the beginning, it is much more time consuming and risky to fix during the final QA phase.

Application testing should be conducted by QA people who understand the importance of testing both security and functionality. They should apply security-testing processes that verify the security features work properly. Additionally, they should perform negative testing to determine how the application handles unexpected data such as long strings, special characters, and error conditions. QA should use a problem-tracking system to prioritize security issues alongside other program defects, so that security issues can be fixed just like any other program flaw. Useful tools include load-testing tools and tools that generate input data for cross-site scripting, SQL injection, and buffer-overflow testing.
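The following Java sketch suggests what simple negative testing can look like: it feeds overly long strings, special characters, and injection-style input to a hypothetical account-number validator and checks that each one is rejected cleanly.

public class NegativeInputTest {
    // Hypothetical validation routine under test: accepts only 8-digit account numbers.
    static boolean isValidAccountNumber(String input) {
        return input != null && input.matches("\\d{8}");
    }

    public static void main(String[] args) {
        // Unexpected inputs a QA tester might generate.
        String[] hostileInputs = {
            null,
            "",
            "12345678901234567890123456789012345678901234567890", // overly long
            "'; DROP TABLE accounts; --",                          // SQL injection attempt
            "<script>alert(1)</script>",                           // cross-site scripting attempt
            "ABCDEFGH",                                            // wrong character class
        };

        for (String input : hostileInputs) {
            if (isValidAccountNumber(input)) {
                System.out.println("FAIL: accepted hostile input: " + input);
            } else {
                System.out.println("ok: rejected " + input);
            }
        }
    }
}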

Threat Modeling
Threat modeling and countermeasures are important steps in the secure development lifecycle, ideally undertaken when the wireless application's design is near completion. Threat modeling is an exercise in which developers identify the assets, that is, the pieces of sensitive information the application houses that need protecting, and then enumerate the potential threats against them. For example, what sort of data are wireless devices communicating back to the application?

Countermeasures can then be implemented and tested to ensure the application does not leave private information vulnerable to attackers. Input filtering, one example of a countermeasure, protects an application by limiting the size and format of input to exactly what the application expects. For example, if an application is designed to accept a username of at most eight alphabetic characters, it should reject any input that is longer than eight characters or that contains non-alphabetic characters. This helps keep the application from performing unintended operations in response to unexpected input. Developers should also examine the application's use of bandwidth, CPU time, and disk space to mitigate denial-of-service risks.
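The username rule described above might be enforced with a check along these lines; the class and method names are illustrative, and a production filter would likely sit inside a broader validation layer.

import java.util.regex.Pattern;

public class InputFilter {
    // Accept only one to eight alphabetic characters, exactly what the application expects.
    private static final Pattern USERNAME = Pattern.compile("[A-Za-z]{1,8}");

    public static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidUsername("alice"));       // true
        System.out.println(isValidUsername("toolongname")); // false: more than eight characters
        System.out.println(isValidUsername("bob!"));        // false: non-alphabetic character
    }
}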

Additionally, developers should imagine themselves as an attacker who knows everything about what the application can do, then enumerate and categorize the resulting threats and devise ways to mitigate them. If a threat cannot be mitigated, the design needs to be changed and re-implemented. Organizations that must comply with regulatory requirements, particularly in the financial services industry, should consider enlisting a third party for a penetration test, which provides independent validation of the application's security.

Furthermore, all software developers should obtain training on basic application security principles. They also should take a more holistic approach to application development, building countermeasures into the design process and conducting rigorous QA testing.

Conclusion
By focusing on the security risks inherent in the wireless application development process, developers can apply these principles to any programming language or technology. Architects, developers, and project managers can learn how to proactively integrate security principles into software engineering practices to prevent vulnerabilities from entering the code base. While there is not one "silver bullet" for building secure wireless applications, developers can employ multiple processes and tools that examine vulnerabilities in different ways to ensure application security before production.

With these security measures built into the application and potential security risks addressed at each stage of development, businesses can significantly reduce their vulnerability to attack. A more secure stance lets them confidently enjoy the enhanced productivity that wireless and mobile technology brings.
