Wednesday, July 6, 2016

who is using my Wi-Fi !!!

Wireless networks are predominantly weak. Suppose you are wearing your hacker's hat and want to experiment with hacking into a wireless signal/network; where would you start?

Wireless means any communication without a physical connection between source and destination, like Wi-Fi, mobile networks, Bluetooth, IrDA, or satellite links. But in general terms, when we talk about wireless hacking, it is mostly Wi-Fi that we target to crack, even though other communication channels may also be victims of fraud.

A long time back, I got suspicious that my Wi-Fi at home was compromised and someone was using my bandwidth, since Comcast sent me a text message that I had crossed 300 GB. Even when all the devices at home (laptop, security cameras, smart TV, printer, phones, iPad) were switched off, my cable modem used to blink (the data-transfer blue light) continuously when connected to the Wi-Fi router, so I thought someone was using my Wi-Fi. I had bought the router from Craigslist, so I suspected its software was compromised; after a little googling I installed DD-WRT via a firmware upgrade, but it still blinked. I applied MAC filtering; still it blinked. I hid the SSID, changed the password to 41 characters long, and applied WPA2-Personal; still it blinked. For one month we survived on a direct connection from the cable modem to a single device, either the TV or a laptop, with no Wi-Fi at home at all. Later I realized the culprit was the smart TV and Netflix, so I ignored the blinking of the cable modem henceforth.

And now this discussion makes me think: all the measures I took to secure my Wi-Fi were not foolproof, and even the MAC address can be spoofed!!! hfff….

To break into Wi-Fi, I must have a virtual or actual target; if it is a virtual target (created by me), half of the story is worthless, and the moment I am 100% sure about the underlying mechanism used to secure a Wi-Fi network, half of the battle is done. Tools like Kismet and NetStumbler [page 728, Shon Harris] could help me understand where the Wi-Fi is broadcast from and what underlying technology is used. After that, I have tools like AirSnort and WEPCrack [pages 719, 728, Shon Harris]. aircrack-ng (included in BackTrack), coWPAtty, and Reaver are some other tools that may be used.

But before starting to use these tools, I must be aware of the basic terms and technologies (like network standards, how Wi-Fi actually works, WEP, WPA2-PSK, WPA2-AES, channels, and which channels can be used for rogue access points), what physical tools may be required (like Wi-Fi adapters, antennas, or simply a rooted mobile device!), and, last but not least, make sure the tools I am planning to use or have downloaded "FREE" are not backdoors into my test machine.



Security: How many $'s per Lbs. ?

I am sure all of you remember Data Theft from Home Depot, Target and OPM. This is just a short list. These organizations have fully mature IT security organizations. What do you think they were doing wrong to allow such intrusions?

We may have "fully mature" security in place, but if learning and changes in the way we tackle cyber-attacks are driven only by actual exploits, certainly there is something wrong.

1. Responsibility. Like Home Depot, Target, and OPM, we might be able to hand off the bad name to a vendor after an incident, but the actual loss will not be recovered. For example, in the case of OPM, multiple applications used shared resources; if OPM had had proper infrastructure in place, when one system was hacked, the breach would have stopped there. The same is the case with Home Depot and Target: boundaries of access were not clearly defined. All resources and information should have been given clear categories and classifications in all these cases.

2. The weakest link in the line of defense defines how secure we are. In the case of OPM, security information and event management only partially covered monitoring of the key components, and OPM allowed users to gain access without two-factor authentication in some places while it was implemented in others.

3. It's not always possible to build the whole ship in-house and assume that it won't fail. On the other hand, security is not something we can pay for from the organization's budget, extract personal benefit from, and then write off on paper. We need services from third parties, and these must not become the weak link; to endorse a third-party vendor and its services/products, we must attain holistic processes and expertise ourselves. In the case of Target, they blamed the air-conditioning firm, but who brought them in? Was Target not concerned at all that they might be vulnerable to attacks through a vendor?


4. We cannot be termed a secure IT organization merely by putting guidelines and secure architectures on paper; what also matters is how well they are followed. For example, in the case of OPM and the vulnerabilities exploited, it was not as if they were never aware of anything at all; many loopholes had been present for years and reported, but no action was taken. The reason: a lazy mentality of "the system is still running, we'll look at it later" and ignoring the warnings.

5. Spend wisely. Gaining a shield against hackers is neither cheap nor a one-time investment; we must be constantly involved in penetration testing and stay aware of where technology is moving and which new secret doors it opens for intruders. It also requires constant expenditure on hiring the right talent and upgrading the skills of existing employees. Does this mean that if we have a budget for security testing, we outsource it to the same vendor every time, who copy-pastes what he found during the last cycle, and we are done (with my cut of the kickback 100% assured)? No: findings must be given a value in terms of the loss that might occur and fixed well in time. To get deeper analysis, we might also rotate the third-party vendors who do security testing.

6. What about vulnerabilities found by penetration-testing vendors being leaked to hackers? This is one of the nightmares OPM may have experienced. Here comes the smartness of the decision maker in tackling it: involving too few employees in these review activities might mean meager reviews and the right eyes being closed with a lid of $'s, while involving too many may itself introduce vulnerabilities and risk to the organization as a whole. It is well said that the 8th layer (the human one) might be the weakest layer.

7. Fearless and full of doubts. Based on vulnerability reports, rebuilding the whole system at some extra expense may sometimes be more advisable than continuing to patch an old, insecure system.



Blame China, I forgot to zip my ass:
vs

OWASP TOP 10 - A5-Security Misconfiguration

Please focus on A5 -
Explain how you would mitigate.


Identify threat sources
An anonymous user, a low-privileged user entering the system intending to perform higher-privileged actions, or an employee seeking benefit without revealing his own identity may try to exploit security misconfigurations.
Identify events 
Using default accounts to gain access, downloading unprotected files due to misconfiguration, getting content authorized only for higher-privileged users, using a feature available due to misconfiguration, exposure of logs because they were configured to be created in the wrong place, downloading code files and reverse-engineering them, overly detailed error messages being used by hackers, hackers exploiting server technology exposed in the HTML source served to the client machine, and so on.
Identify Vulnerabilities
Presence of default access accounts, presence of unprotected files, misconfigurations that allow a lower-privileged user to reach more secure content/functions, presence of logs in a public library or available outside the server/without authentication, unnecessary ports kept open or default ports used, availability of code files for download, detailed error messages exposed to end users, server technology exposed in headers or rendered HTML, and so on.
Determine Likelihood of Occurrence 
Determine the likelihood that the detected vulnerabilities will be exploited.
Determine Magnitude of Impact 
Determine, for each exploitation, how much trouble the organization would be in.
Determine Risk 
Establish risk based on impact and likelihood. 


1. Automate the process of installation, configuration, and deployment using PowerShell or whatever is convenient for the system, to avoid human errors (preventive, technical).
2. Keep test, QA, and prod environments the same and configured with the same automated scripts; passwords and usernames should differ, and usernames should not be obvious to guess (preventive, technical).
3. All software patches must be deployed, but only after rigorous testing; establish a robust communication channel with software providers to get alerts (preventive, technical).
4. Robust architecture (preventive, technical).
5. Automated scans and penetration testing after every release (detective, technical).
6. Follow the product guidelines for system accounts, including limiting their permissions to the prescribed level (preventive, technical).
7. Make sure unnecessary functions, ports, or protocols are disabled and default ports are not used (preventive, technical).
8. Passwords used must be of legitimate strength (preventive, technical).
9. Monitor application logs by admins and trusted devs (detective, technical).
10. Redo the risk analysis triggered by 9 and 5 above.
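The checks in points 6 to 8 above can be sketched as a tiny audit script. The account names, port numbers, and config layout here are hypothetical, invented purely for illustration, not taken from any specific product:

```python
# Hedged sketch: flag common A5-style misconfigurations in a hypothetical config.
DEFAULT_ACCOUNTS = {"admin", "root", "guest", "sa"}   # assumed common default accounts
DEFAULT_PORTS = {21, 23, 8080}                        # assumed default/unnecessary ports

def audit_config(config: dict) -> list:
    findings = []
    for account in config.get("accounts", []):
        if account in DEFAULT_ACCOUNTS:
            findings.append(f"default account enabled: {account}")
    for port in config.get("open_ports", []):
        if port in DEFAULT_PORTS:
            findings.append(f"default/unnecessary port open: {port}")
    if config.get("verbose_errors", False):
        findings.append("detailed error messages exposed to end users")
    return findings

config = {"accounts": ["admin", "alice"], "open_ports": [443, 8080], "verbose_errors": True}
issues = audit_config(config)
assert len(issues) == 3   # one finding per misconfiguration above
```

Running such a script after every release (point 5) turns these guidelines from paper into a detective control.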

Reference

1. https://www.owasp.org/index.php/Top_10_2013-A5-Security_Misconfiguration
2. https://hemantrohtak.blogspot.com/2016/05/service-applications-in-sharepoint-2013.html
3. Shon Harris Book Page 1103 (Common software development weakness enumeration list)
4. Shon Harris Book Page 1109 (OWASP)
5. http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-30r1.pdf

Perfect Encryption

Explain the concept of "Perfect Encryption". Why is it not practical?


Perfect encryption is achieved when the probability of cracking a message is no better than guessing it, even if the hacker knows the encrypted message. And even if a plausible message is retrieved, there is no guarantee that it is the actual message.
In a one-time pad, a truly random (never reused) secret key, at least as long as the message, is combined with the actual message bit by bit using modular addition (XOR). So, theoretically, perfect encryption is achieved here.
But the requirement of a secure exchange of the secret key makes it perfect in theory only. If my exchange channel is secure, I might as well send the plaintext over it. Second, I doubt how one can claim true randomness of the secret key. Third, sending such a heavily secured message is itself alarming.
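A minimal one-time pad sketch in Python; note that os.urandom stands in here for the "truly random" key source, which is itself one of the doubts raised above:

```python
import os

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # The key must be random, at least as long as the message, and never reused.
    assert len(key) >= len(message)
    return bytes(m ^ k for m, k in zip(message, key))

# XOR is its own inverse, so decryption is the same operation.
otp_decrypt = otp_encrypt

key = os.urandom(16)
cipher = otp_encrypt(b"attack at dawn!", key)
assert otp_decrypt(cipher, key) == b"attack at dawn!"
```

The impractical part is invisible in the code: both sides must already share `key` over a secure channel, and a fresh key is needed for every message.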


Reference:
  1. https://www.quora.com/Cryptography-What-is-a-perfect-cipher-and-why-is-the-one-time-pad-a-perfect-cipher#
  2. Shon Harris Book - page 771

How can you show that OTP is perfect?

I do not advocate that the one-time pad is perfect in practice, but it may be considered theoretically perfect:

As per http://people.seas.harvard.edu/~salil/cs127/fall06/docs/lec3.pdf [2 Perfect Secrecy], the definition of perfect encryption is:
Perfect encryption is achieved when the probability of cracking a message is no better than guessing it, even if the hacker knows the encrypted message. And even if a plausible message is retrieved, there is no guarantee that it is the actual message.
In Proposition 4 of this paper, the author mathematically proves the one-time pad to be a perfect encryption satisfying the above.
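The core of that proof can be written out in standard Shannon notation (a sketch; here M is the message, C the ciphertext, and K a uniformly random n-bit key):

```latex
% Perfect secrecy means \Pr[M = m \mid C = c] = \Pr[M = m] for all m, c.
% For the one-time pad over n-bit strings, C = M \oplus K with K uniform:
\Pr[C = c \mid M = m] = \Pr[K = m \oplus c] = 2^{-n} \quad \text{(independent of } m\text{)}
% Hence, by Bayes' rule,
\Pr[M = m \mid C = c]
  = \frac{\Pr[C = c \mid M = m]\,\Pr[M = m]}{\sum_{m'} \Pr[C = c \mid M = m']\,\Pr[M = m']}
  = \frac{2^{-n}\,\Pr[M = m]}{2^{-n}\sum_{m'} \Pr[M = m']}
  = \Pr[M = m].
```

Since the ciphertext tells the attacker nothing about which message was sent, guessing is the best possible attack.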


Asymmetrical encryption

Symmetrical encryption requires the participants in the communication to use a shared secret key. Asymmetrical encryption does not require the sharing of keys. The process uses a public key to encrypt and a private key to decrypt in order to achieve privacy. Both processes are efficient, but symmetrical is fast as compared with asymmetrical. In this discussion, you are going to explain the details of asymmetrical encryption from the time the client issues a request for the handshake to receiving the keys. Explain how keys are generated.


how are keys generated:
Rivest-Shamir-Adleman (RSA), El Gamal, elliptic curve cryptosystems (ECC), the Digital Signature Algorithm (DSA), and elliptic curve DSA (ECDSA) are some algorithms defined and accepted to be strong and difficult to crack with the computation power available today. These algorithms define how the public/private keys are generated. For example, RSA is based on the difficulty of factoring the product of two large prime numbers; El Gamal is based on calculating discrete logarithms in a finite field; elliptic curve cryptography is based on the discrete logarithm problem over elliptic curves, giving security similar to RSA with less computation power.
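As an illustration of RSA key generation, here is a toy sketch using the small textbook primes 61 and 53 (real keys use primes hundreds of digits long; this is purely for exposition):

```python
from math import gcd

def rsa_keygen(p: int, q: int, e: int = 17):
    # In practice p and q are large random primes; these tiny values are illustrative.
    n = p * q
    phi = (p - 1) * (q - 1)      # Euler's totient of n
    assert gcd(e, phi) == 1      # public exponent must be coprime to phi
    d = pow(e, -1, phi)          # private exponent: modular inverse of e (Python 3.8+)
    return (e, n), (d, n)        # (public key, private key)

public, private = rsa_keygen(61, 53)
e, n = public
d, _ = private

m = 65                           # message, must be smaller than n
c = pow(m, e, n)                 # encrypt with the public key
assert pow(c, d, n) == m         # decrypt with the private key recovers m
```

The security rests on the fact that recovering `d` from `(e, n)` requires factoring `n`, which is infeasible at real key sizes.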

How Asymmetrical encryption works: 
1. Achieving confidentiality: the client encrypts content with the server's public key and sends the message to the server; the server uses its private key to decrypt the message.
2. Achieving nonrepudiation: the server encrypts a message with its private key and sends it to the client; the client decrypts it using the public key, which verifies that the right server sent the message. The same thing happens in the case of digital signatures.
But here is the catch: the server sends an encrypted message that is supposed to be decrypted using the public key, and everyone knows the public key, so this gives authenticity but not confidentiality. The next routes (3 & 4) give possible workarounds.
3. Achieving both together: we will still name one party the client and the other the server, since calling the sender "sender" when it becomes the receiver at some point creates confusion. The client encrypts the message with its own private key (its public key is with the server too); the resulting ciphertext is encrypted again using the server's public key and sent to the server. The server decrypts using its own private key, and the result of this first decryption is decrypted again using the client's public key.
4. Getting more practical: in practice, the client may not have a private/public key pair. If the entity named "client" in point 3 is actually another server, then point 3 makes sense; but what if the client is my desktop? So here is how we get more practical: during the handshake, the client may send a tiny symmetric 'magic word', encrypted with the server's public key, to the server. This symmetric 'magic word' is decrypted with the server's private key and stored on the server too. All future communications between client and server follow points 1 and 2, with a slight change:
a) The client encrypts the actual text with the 'magic word', then with the server's public key, and sends it to the server; the server decrypts it with its private key, then with the 'magic word', and then processes it.
b) The server first encrypts with the 'magic word', then with its private key, and sends to the client; the client decrypts with the server's public key and then with the 'magic word'.
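The handshake in point 4 can be sketched end to end. A toy RSA pair stands in for the server's real certificate keys, and XOR with the 'magic word' stands in for a real symmetric cipher such as AES; both are illustrative assumptions, not a real TLS implementation:

```python
import os

# Toy RSA keypair for the server (illustrative sizes only).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))        # server's private exponent

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Stand-in for a real symmetric cipher: XOR with a repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# 1. Handshake: the client picks a symmetric 'magic word' (session key)
#    and sends each of its bytes encrypted under the server's public key.
magic_word = os.urandom(8)
handshake = [pow(b, e, n) for b in magic_word]

# 2. The server recovers the session key with its private key.
server_magic = bytes(pow(c, d, n) for c in handshake)
assert server_magic == magic_word

# 3. All further traffic uses the fast symmetric key both sides now share.
request = xor_stream(b"GET /secret", magic_word)
assert xor_stream(request, server_magic) == b"GET /secret"
```

This is why asymmetric crypto is used only for the brief handshake: the slow public-key operations bootstrap a shared secret, and the fast symmetric cipher carries the bulk traffic.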

Reference:
1. http://robertheaton.com/2014/03/27/how-does-https-actually-work/
2. Shon Harris Book
3. Peter H. Gregory Book


For Key generation - Which one is better?

There are two ways to look at it: CIA and cost.
Confidentiality does not come in pieces; it is either 0 or 1. Given that no algorithm can be 100% foolproof, only difficult and time-consuming to break, we can only move towards perfection.
Diffie-Hellman (without authentication) is vulnerable to man-in-the-middle attacks, RSA is slow, but elliptic curves can give similar results with less computing power required.
So "which one is better" may be answered in terms of the key length required to achieve a given security level: the better algorithm may be the one that needs a smaller key to achieve the same level of security. Reference: http://www.ijcscn.com/Documents/Volumes/vol5issue1/ijcscn2015050103.pdf, page 21, Table 1.
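As a rough illustration of this key-length comparison, the commonly cited comparable-strength key sizes (along the lines of NIST SP 800-57; treat the exact figures as approximate) can be tabulated:

```python
# Approximate key sizes (bits) giving comparable security strength,
# per the widely cited NIST SP 800-57 comparable-strength tables.
comparable_strength = {
    # security bits: (symmetric cipher, RSA/DH modulus bits, ECC key bits)
    112: ("3DES",    2048,   224),
    128: ("AES-128", 3072,   256),
    192: ("AES-192", 7680,   384),
    256: ("AES-256", 15360,  512),
}

# ECC reaches the same strength with far smaller keys than RSA/DH:
for bits, (_, rsa_len, ecc_len) in comparable_strength.items():
    assert ecc_len < rsa_len
```

By the "smaller key for the same security" criterion, elliptic curves come out ahead, which is why they dominate on constrained devices.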


Monday, June 20, 2016

Security and Risk Management - Some basic terms

  1.  Risk is the probability of a threat agent exploiting a vulnerability.
  2.  Threat is the danger of a threat agent exploiting a vulnerability.
  3.  Data Classification is a way of putting information into named categories (mostly by the Data Owner) based on its worth and the loss involved if it is wiped out, leaked, or edited by an unauthorized person. Ultimately, based on the category in which the information lies, the Data Custodian may choose different controls, spend more or less to keep the data safe, and destroy it safely when no longer needed.
  4.  AV (Asset Value) is the $ worth of the entity at risk of exposure to a threat under quantitative risk analysis.
  5.  EF (Exposure Factor) is the percentage loss of asset value that a single exposure may cause.
  6.  SLE (Single Loss Expectancy) defines how much money an organization may probably lose when an exposure happens once. Under quantitative risk analysis, asset value is multiplied by exposure factor to get SLE. Say I have a laptop (asset) worth $200 (asset value), and my son (threat agent) finds the laptop on the table (not putting it away and locking it up is the vulnerability) and throws water on it (threat); based on previous experience, I know it costs $100 to replace the damaged parts. So EF (Exposure Factor) is 100/200 = 0.5. So the next time I don't use the cupboard (physical control) to lock the laptop away, there is a risk of my son (threat agent) throwing water on it (exploiting the vulnerability), and SLE will be $200 (AV) * 0.5 (EF) = $100 (SLE), the single repair cost. EF carries the uncertainty here; next time the threat agent may have more water in his glass.
  7.  ARO (Annualized Rate of Occurrence) defines the probable yearly frequency of exposure. Under quantitative risk analysis, this is multiplied by SLE (single loss expectancy) to get ALE (annualized loss expectancy). If ARO is 5, the exposure may happen five times a year; if ARO is 0.5, the threat agent may be successful once in two years.
  8.  Policy is a version-controlled and dated set of principles and concise, unambiguous statements formulated to ensure compliance with industry standards, to define the behavior and activities of subjects, or simply to inform subjects, thereby playing the role of an enabler in achieving business objectives. It should clearly define the documented consequences of noncompliance.
  9.  SLA (Service Level Agreements); as discussed under CobiT > Deliver & Support > Define service levels; is a ‘formal’ / ‘legal and formal’ agreement between customer and vendor where various essential properties of service are defined including ways to measure & report deviation and corresponding ownership is agreed upon. Customer and vendor could be two departments of same organization too. Based upon what type of service is being formally documented, SLA could include mandatory level of availability, response time to issues based on category, reporting planned downtime, who is responsible for what and who takes up ‘unforeseen things not documented here’ and so on.
  10.  CobiT (Control Objectives for Information and Related Technology) is business-focused, process-oriented, controls-based and measurement-driven IT (Information Technology) Governance framework developed and promoted by ISACA (Information Systems Audit and Control Association) and ITGI (IT Governance Institute) for IT management targeting needs of internal/external stakeholders across the enterprise.
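The quantitative relationships in items 4 to 7 above can be sketched in a few lines, reusing the laptop example:

```python
def sle(asset_value: float, exposure_factor: float) -> float:
    # Single Loss Expectancy: expected cost of one exposure.
    return asset_value * exposure_factor

def ale(single_loss_expectancy: float, aro: float) -> float:
    # Annualized Loss Expectancy: expected loss per year.
    return single_loss_expectancy * aro

# Laptop worth $200, one water spill damages half its value (EF = 0.5):
loss_once = sle(200, 0.5)
assert loss_once == 100.0

# If the spill happens about twice a year (ARO = 2), expect $200/year:
assert ale(loss_once, 2) == 200.0
# An ARO of 0.5 means once every two years, so $50/year on average:
assert ale(loss_once, 0.5) == 50.0
```

The ALE figure is what the data custodian can compare against the yearly cost of a control (the cupboard lock) to decide whether the spend is justified.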

Conducting Risk Assessment

Risk Assessment is part of the Risk Management Process. The purpose of Risk Assessment is to identify threats, internal and external vulnerabilities, potential loss, and the probability of loss, the end result being a determination of risk. Under Risk Assessment, risk is determined based on the adverse effects of an event and its likelihood of occurrence. Risk Assessment is employed at the organization level, mission/business process level, and information system level.
NIST Special Publication 800-30 Revision 1 treats Risk Assessment as an ongoing activity throughout the system development life cycle, closely interacting with the other components of Risk Management. The Risk Assessment process (within the Risk Management process) includes preparing for the assessment, conducting the assessment, communicating results, and maintaining the assessment. Maintaining the assessment and communicating results may trigger the conducting step repeatedly. The second step, conducting the assessment, may be further understood by going through the activities involved:
Identify Threat Sources
Threat sources are identified at every level: organization level, mission/business process level, and information system level. They are identified based on a taxonomy: adversarial (characterized by adversary capability, intent, and targeting) and non-adversarial (accidental, structural, and environmental).
Identify Threat Events
The purpose of this activity is to identify potential threat events, relevance of the events, and the threat sources that could initiate the events.
Identify Vulnerabilities and Predisposing Conditions
The purpose of this activity is to identify vulnerabilities and predisposing conditions that affect the likelihood that the threat events of concern will result in adverse impacts. As with the identification of threat sources, these are identified and categorized based on different levels and taxonomies and tagged for severity, quantitatively or qualitatively.
Determine Likelihood of Occurrence
In this activity, the likelihood of occurrence is formulated and determined based on the threat source, the vulnerability, and the implemented safeguards. Without diligent effort in the previous activities and proper knowledge and documentation of the safeguards/controls in place, this activity may give false results.
Determine Magnitude of Impact
The purpose of this step is to determine impact based on the first three activities and the maximum worth of the affected entity in terms of the value of its loss or unavailability.
Determine Risk
The purpose of this step is to determine risk based on the impact and likelihood determined previously.
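A minimal sketch of this final combination step; the level names and the averaging rule below are illustrative assumptions only (NIST SP 800-30 actually combines likelihood and impact through lookup tables in its appendices):

```python
LEVELS = ["very low", "low", "moderate", "high", "very high"]

def determine_risk(likelihood: str, impact: str) -> str:
    # Illustrative rule: combine qualitative likelihood and impact
    # by averaging their ranks on the five-level scale.
    li = LEVELS.index(likelihood)
    im = LEVELS.index(impact)
    return LEVELS[round((li + im) / 2)]

assert determine_risk("high", "high") == "high"
assert determine_risk("very low", "moderate") == "low"
assert determine_risk("very high", "very low") == "moderate"
```

A real assessment would replace the averaging with the organization's own agreed matrix, but the shape of the computation (risk as a function of likelihood and impact) stays the same.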



Can a risk mitigation create value to an organization based on the COBIT framework?
Risk is not something tangible, but it can be minimized with the help of the CobiT framework. Risk minimization ensures that risk does not exceed the risk appetite of the organization, thereby helping the organization survive and grow. With the help of CobiT, risk can be tied to business strategy, helping make better-informed decisions within the risk tolerance through risk mitigation. Further, even though CobiT may not help much in defining risk-analysis methods, it helps establish a link between a risk scenario and the appropriate response via enablers (controls), and also shows how to manage risk (the Risk function and Risk Management).

References

United States. Joint Task Force Transformation Initiative, & National Institute of Standards and Technology. (2012). Guide for conducting risk assessments (Revision 1, NIST Special Publication 800-30). Gaithersburg, MD: U.S. Dept. of Commerce, National Institute of Standards and Technology. doi:10.6028/NIST.SP.800-30r1