Sensitive Data Exposure is leaving valuable data at risk of being stolen or altered. Leaving sensitive data easy to steal or change is a recurring problem with web services; commonly handled sensitive data such as credit card numbers, passwords, and personal information is frequently stolen due to poor security practices.
Clear-text data is exposed data. Any stored data kept in clear text is completely visible as soon as an attacker gains access to the storage system, and clear-text data in transit is not only visible but also easily intercepted and changed. The obvious solution is to encrypt or hash the data (depending on whether you will need to read it back in the future).
While protecting and encrypting data can go quite far in terms of protection, the best solution to keep data out of an attacker's hands is to keep it out of your own hands. This is a very powerful practice when dealing with user accounts; accounts that do not have any sensitive data are secure by default.
There are many ways an encryption or hashing function can be "weak" or "broken", and using such a function can be as bad a practice as leaving the data in clear text. Even strong functions, when used improperly, can leave data exposed to dictionary attacks and rainbow tables.
A dictionary attack simply attempts every string in a large list of commonly used passwords, taking advantage of the fact that many people do not use strong passwords to protect their accounts.
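The attack can be sketched in a few lines of Python; the stolen hash and the tiny wordlist below are made up for illustration:

```python
import hashlib

# Hypothetical stolen hash: the unsalted SHA-256 of a weak password.
stolen_hash = hashlib.sha256(b"letmein").hexdigest()

# A (tiny) dictionary of commonly used passwords.
dictionary = [b"password", b"123456", b"qwerty", b"letmein", b"dragon"]

def dictionary_attack(target_hash, wordlist):
    """Hash every candidate and compare it against the stolen hash."""
    for candidate in wordlist:
        if hashlib.sha256(candidate).hexdigest() == target_hash:
            return candidate
    return None

print(dictionary_attack(stolen_hash, dictionary))  # b'letmein'
```

Real attacks use wordlists with millions of entries, but the principle is identical: the attacker never "reverses" the hash, they just guess faster than the defender expects.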
A rainbow table is, in essence, a table of password hashes and their corresponding input strings. Each table is specific to one hash function, and most tables assume the hash was applied exactly once to the exact input string.
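Conceptually, such a table turns hash reversal into a single lookup. The sketch below uses a plain precomputed dictionary; real rainbow tables additionally use hash chains to compress the table, but the effect on an unsalted hash is the same:

```python
import hashlib

# Precompute hash -> plaintext for a wordlist. The table only works for
# this one hash function (MD5 here) applied exactly once to the input.
wordlist = ["password", "123456", "qwerty", "letmein"]
table = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

# "Reversing" a stolen unsalted hash is now a single dictionary lookup.
stolen = hashlib.md5(b"qwerty").hexdigest()
print(table.get(stolen))  # qwerty
```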
Sniffing is simply capturing packets on a network and doing whatever you want with them (read, modify, etc.). It is not usually considered an attack in itself, but it allows attackers to read unprotected sensitive information in transit.
When using a hash function, OWASP recommends that you use functions from the SHA-2 family, since older hash functions have been "broken" in one way or another. OWASP also recommends that you both salt and iterate your hashes.
Adding "salt" is simply concatenating a random value to the string being hashed. This makes the distribution of password hashes much more uniform; without salt, identical password strings produce identical hashes. A password's salt must be stored alongside the hash, so salting does not make a single password much harder to crack; it simply forces attackers to focus on one account at a time.
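A minimal salting sketch: two users who pick the same password still end up with different stored hashes, because each gets a fresh random salt (stored next to the hash so the password can be verified later):

```python
import hashlib
import os

def salted_hash(password: bytes):
    """Return (salt, digest); the salt is stored alongside the hash."""
    salt = os.urandom(16)  # fresh random salt per account
    digest = hashlib.sha256(salt + password).hexdigest()
    return salt, digest

# Two accounts with the identical password get different hashes.
salt_a, hash_a = salted_hash(b"hunter2")
salt_b, hash_b = salted_hash(b"hunter2")
print(hash_a != hash_b)  # True (with overwhelming probability)

# Verification recomputes the hash with the stored salt.
print(hashlib.sha256(salt_a + b"hunter2").hexdigest() == hash_a)  # True
```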
"Iterating" a hash is the practice of repeatedly applying the hash function to its own output. Iterating hardens hashes against rainbow tables and significantly increases an attacker's computation time. OWASP recommends at least 1,000 iterations, slowing any attacker down by at least a factor of 1,000.
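Salting and iterating are commonly combined by a key-derivation function such as PBKDF2, available in Python's standard `hashlib`. The iteration count below is illustrative (modern guidance tends to use far more than 1,000):

```python
import hashlib
import os

salt = os.urandom(16)
password = b"correct horse battery staple"

# PBKDF2 applies HMAC-SHA-256 repeatedly; 100,000 iterations multiply
# an attacker's per-guess cost by the same factor.
derived = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
print(derived.hex())
```

Verification simply reruns the derivation with the stored salt and iteration count and compares the results.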
It should also be noted that using a keyed function (such as an HMAC) is a good way to harden your data, as it adds a layer of knowledge (the key) that the attacker does not have.
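A short sketch with Python's standard `hmac` module; the key here is a made-up placeholder for a secret that would be kept outside the database:

```python
import hashlib
import hmac

# Hypothetical secret key, stored separately from the hashed data.
server_key = b"a-secret-key-kept-out-of-the-database"

def keyed_digest(data: bytes) -> str:
    # Without the key, an attacker cannot recompute or forge digests,
    # even with the full table of stored hashes in hand.
    return hmac.new(server_key, data, hashlib.sha256).hexdigest()

tag = keyed_digest(b"some-record")
# compare_digest does a constant-time comparison, avoiding timing leaks.
print(hmac.compare_digest(tag, keyed_digest(b"some-record")))  # True
```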
Note that encryption is only needed for data that must be reproduced (decrypted and read back) in the future. For data used only for authentication, it is better to use hashing functions.
The U.S. government uses AES-256 for highly sensitive documents. If an encryption key is exposed in the code, the encrypted data is as exposed as clear text, so keep those keys hidden.
Protocols such as TLS and SSL (often tunneled through a VPN) are what services tend to use when establishing a secure connection with a client. In most cases, they are used to establish an HTTPS connection in which both client and server encrypt all HTTP messages using an algorithm negotiated by the protocol. OWASP has a long list of recommendations for configuring TLS encryption ciphers here.
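On the client side, "properly using the protocol" mostly means not disabling the safe defaults. A sketch with Python's standard `ssl` module (the connection code is shown but not executed here, and `example.com` is just a placeholder host):

```python
import ssl

# A default client context enables certificate verification and hostname
# checking, and refuses protocol versions known to be broken.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# Wrapping a socket would then look like:
# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
```

The common mistake is the reverse of this sketch: setting `verify_mode` to `CERT_NONE` to silence certificate errors, which re-opens the door to man-in-the-middle attacks.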
While the protocols do all the heavy lifting, securing a web service means using them properly. OWASP lists several guidelines to help harden web pages:
The use of certificates helps authenticate your website and prevent Man-In-The-Middle (MITM) attacks on clients. The idea behind certificates is to have a trusted certificate authority (CA) verify the authenticity of a document. This works through a chain of trust in which each certificate's validity depends on the validity of the certificate that signed it; the chain continues back to the root CA, a trusted third party whose certificate is self-signed. In the context of websites intended for public use, OWASP recommends purchasing the TLS certificate from a recognized certificate authority. NOTE: It is always necessary to provide all intermediate certificates in the chain, or the client may not be able to complete the chain of trust.
Public key certificates (used by common protocols such as SSL and TLS) rely on hash functions to condense documents to a fixed-size digest before they are signed. While hashing documents provides a consistent signing time, it also provides an opening for collision attacks. Here is an example of a MITM attack that could result:
To put this into a real-life example, the following attack was possible while RapidSSL was signing certificates using MD5 hashes:
For full details on the exploit, click here.
To protect against MITM attacks that can steal sensitive data from users, always opt for collision-resistant hashing algorithms (ideally from the SHA-2 family). This makes forged-certificate attacks very unlikely, since finding collisions becomes infeasible for any attacker. In general, use certificate authorities that sign with collision-resistant hashes.