
Wednesday, October 21, 2015

What is Threat Modeling?

“If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.”
- Sun Tzu

A threat is an undesirable situation in which the possibility exists that a threat agent or actor can, intentionally or unintentionally, exploit a vulnerability that exists in a system. The vulnerability can be technical or non-technical. Threat modeling is a technique for visualising all such undesirable situations in a single frame, within the ecosystem in which the system is supposed to function. In simple terms, it is finding out “what can go wrong, and why and how”.
Threat modeling is primarily used during systems development to anticipate and neutralize threats. These threats are mostly technical, and the solutions expected are technical. This type of modeling is also called application threat modeling. Microsoft’s STRIDE categorisation (Spoofing identity, Tampering with data, Repudiation, Information disclosure, Denial of service and Elevation of privilege), used in the SDLC, is a popular model that starts off with a Data Flow Diagram (DFD) of the application. It is used by the Microsoft Threat Modeling Tool 2014. The model can also be applied to an existing application to address threats in its working environment. Threat modeling results in a comprehensive document of threat enumeration and analysis, with mitigation solutions.
In relation to the SDLC (software or system development life cycle), it does not eliminate code review but helps in focusing on critical issues so that time and effort are minimised. It is therefore a risk-based approach. The process helps in systematically documenting the attack surface. Threat modeling is not a panacea for system compromise attempts, and in no way should it be considered an encouragement to develop complex systems with spaghetti code. In my opinion, “KISS” (Keep It Simple, Stupid) is the ultimate advice for avoiding security problems. Simplicity dramatically reduces the attack surface.

Threat modeling is closely related to attack trees, which were popularised by Bruce Schneier in 1999. However, the purposes for which the two have evolved are quite different. They take the following different approaches:

Threat Modeling
  • Application or system oriented; may also be asset oriented. Threats are extrapolated from the characteristics of systems, their interfaces and connections, and the flow of data between them. Threat modeling is the general term used in such cases, and it uses DFDs. This process is also termed tool-assisted code review.
  • Primarily used for risk management during application or system development by enumerating attack surfaces.
Attack Trees
  • Attack and attacker oriented. Extrapolated from the predicted behaviour of attackers and the multiple paths they can follow. An attacker will focus on both the intended and unintended behaviour of the system. Attack trees, threat trees, and misuse or abuse trees are generally used. In an attack tree the whole attack process is synthesised and shown as a set of possible steps: child nodes are combined using AND and OR operators, and the parent (root) node is the attacker’s ultimate goal.
  • Primarily used for risk management of an already deployed and in-use application or system. This can be done periodically, or whenever there is a significant change in technology or the threat landscape.
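To make the AND/OR structure concrete, here is a minimal Python sketch of an attack tree; the goals, the tree shape and the feasibility flags are hypothetical examples, not drawn from any real assessment.

# Minimal attack-tree sketch: leaves are attack steps, internal nodes
# combine children with AND (all steps needed) or OR (any one suffices).

class AttackNode:
    def __init__(self, goal, gate="OR", children=None, feasible=False):
        self.goal = goal          # description of the (sub)goal
        self.gate = gate          # "AND" or "OR"
        self.children = children or []
        self.feasible = feasible  # for leaves: can the attacker do this step?

    def achievable(self):
        if not self.children:     # leaf node
            return self.feasible
        results = [child.achievable() for child in self.children]
        return all(results) if self.gate == "AND" else any(results)

# Root goal: read another user's data (hypothetical example)
root = AttackNode("Read another user's data", gate="OR", children=[
    AttackNode("Steal session cookie", gate="AND", children=[
        AttackNode("Find an XSS flaw", feasible=True),
        AttackNode("Lure the victim to a crafted link", feasible=True),
    ]),
    AttackNode("Guess the admin password", feasible=False),
])

print(root.achievable())  # True: the XSS branch satisfies its AND gate

Here the root is achievable because the cookie-theft branch satisfies its AND gate; marking either of its two leaves infeasible would push the attacker to the weaker OR alternative.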
We all do threat modeling every day for the risks associated with our daily activities, each in our own creative way. Then why do we need these models? Because they are organised, structured and documented activities. How useful are they? Well, the chances of some known attacks or compromises would reduce, but they would not make the system completely attack proof. Threat modeling should be holistic for best results. Brainstorming and group discussions should be used over and above the analysis provided by a tool.

Threat Categorization Frameworks
  • STRIDE - Spoofing identity, Tampering with data, Repudiation, Information disclosure, Denial of service and Elevation of privilege (high level categorisation)
  • DREAD - Damage, Reproducibility, Exploitability, Affected users and Discoverability (Attacker's point of view)
  • ASF - Application Security Frame
  • CAPEC - Common Attack Pattern Enumeration and Classification
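As an illustration of how DREAD is typically applied, the sketch below rates each of the five categories on a 0-10 scale and averages them. The scale and the sample ratings are one common convention, not a fixed standard.

# DREAD scoring sketch: rate each category 0-10, average for overall risk.

def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    ratings = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    return sum(ratings) / len(ratings)

# Hypothetical threat: SQL injection in a login form
score = dread_score(damage=8, reproducibility=9, exploitability=7,
                    affected_users=9, discoverability=8)
print("DREAD risk: %.1f / 10" % score)  # 8.2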

Pros and Cons of Threat Modeling

Pros
  • A systematic and recorded process
  • Gives a headway
  • Great for people who have just been initiated into the security world
  • A fast method to address known exploitation methods (attack vectors)

Cons
  • A false sense of confidence of having dealt with all possible threats, attack vectors and vulnerabilities
  • May limit thinking boundaries and the free flow of ideas needed to visualise all possible bad scenarios


A simple two-tier web application threat model built with Microsoft’s Threat Modeling Tool, which is based on STRIDE.
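The tool’s central idea, often called STRIDE-per-element, is that each type of DFD element attracts only a subset of the six categories. The Python sketch below shows a simplified version of that mapping applied to a hypothetical two-tier application; the mapping table is condensed from the commonly published SDL guidance and is not exhaustive.

# STRIDE-per-element sketch: each DFD element type gets a subset
# of the six STRIDE categories (simplified; e.g. Repudiation also
# applies to data stores that hold audit logs).

STRIDE = {
    "S": "Spoofing", "T": "Tampering", "R": "Repudiation",
    "I": "Information disclosure", "D": "Denial of service",
    "E": "Elevation of privilege",
}

APPLICABLE = {
    "external entity": "SR",
    "process": "STRIDE",   # processes attract all six categories
    "data store": "TID",
    "data flow": "TID",
}

# Hypothetical DFD of a two-tier web application
dfd = [
    ("Browser user", "external entity"),
    ("Web application", "process"),
    ("SQL database", "data store"),
    ("HTTP request/response", "data flow"),
]

for name, kind in dfd:
    threats = ", ".join(STRIDE[letter] for letter in APPLICABLE[kind])
    print("%s: %s" % (name, threats))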





Tools
  • Trike framework and tool for risk management 
  • Microsoft Threat Modeling Tool 2014  
  • SeaSponge
  • SeaMonster
  • SecurITree by Amenaza  
  • ThreatModeler by MyAppSecurity


References
    https://www.owasp.org/index.php/Application_Threat_Modeling

Monday, September 28, 2015

SCADA AUDIT

AUDITING SCADA SYSTEMS
Introduction

SCADA (supervisory control and data acquisition) is a system operating with coded signals over communication channels so as to provide control of remote equipment, typically using one communication channel per remote station (en.wikipedia.org/wiki/SCADA).
Industrial control systems (ICS) are computer-based systems that monitor and control industrial processes in the physical world. SCADA systems have historically distinguished themselves from other ICS by covering large-scale processes that can span multiple sites and large distances; SCADA can be considered a class of ICS.
Originally, SCADA systems were not connected to the Internet, and security was traditionally not an issue in them. The same is no longer true.
Where is SCADA used?
Electric power generation, transmission and distribution
Water and Sewage network systems
Environment and facility monitoring and management
Transportation networks
Manufacturing processes

Functions of SCADA
DATA ACQUISITION - Furnishes status information & measurements to the operator.
CONTROL - Allows the operator to control devices, e.g. circuit breakers, transformers, tap changers etc., from a remote centralised location.
DATA PROCESSING - Includes data quality & integrity checks, limit checks, analog value processing etc.
TAGGING - The operator can tag any specific device & subject it to specific operating restrictions to prevent unauthorized operation.
ALARMS - Alerts the operator to unplanned events & undesirable operating conditions, in the order of their severity & criticality.
LOGGING - Logs all operator entries, alarms & selected events.
TRENDING - Plots measurements on a selected time scale, e.g. one minute or one hour, to show trends.
HISTORICAL REPORTING - Saves historical data for reporting and analysis, typically for a period of 2 or more years, & archives it.

SCADA Components and Subsystems
SCADA has following components:
1. Operating equipment: pumps, valves, conveyors, and substation breakers that can be controlled by energizing actuators or relays.
2. Local processors: communicate with the site’s instruments and operating equipment. These include the Programmable Logic Controller (PLC), Remote Terminal Unit (RTU), Intelligent Electronic Device (IED), and Process Automation Controller (PAC). A single local processor may be responsible for dozens of inputs from instruments and outputs to operating equipment.
3. Instruments: in the field or in a facility, sensing conditions such as pH, temperature, pressure, power level, and flow rate.
4. Short-range communications: between local processors, instruments, and operating equipment. These relatively short cables or wireless connections carry analog and discrete signals using electrical characteristics such as voltage and current, or using other established industrial communications protocols.
5. Long-range communications: between local processors and host computers. This communication typically covers miles, using methods such as leased phone lines, satellite, microwave, frame relay, and cellular packet data.
6. Host computers (Human-Machine Interface or HMI): act as the central point of monitoring and control. The host computer is where a human operator can supervise the process, as well as receive alarms, review data, and exercise control.
Subsystems
Supervisory system
HMI
RTU
PLC
Communication Interface
SCADA programming


SCADA vendors
Some SCADA vendors are Asea Brown Boveri (ABB), Siemens, Alstom ESCA, Telegyr Systems, Advanced Control Systems (ACS), Harris and Bailey.




SCADA protocols
SCADA protocols are communications standards for passing control information on industrial networks. There are many such protocols, but the prominent ones are MODBUS, DNP3, EtherNet/IP, PROFIBUS, IEC 61850 and Foundation Fieldbus. The choice of protocol is based on operating requirements, industry preference, vendor and the design of the system. In an oil refinery, an operator workstation might use the MODBUS/TCP protocol to communicate with a control device such as a Programmable Logic Controller (PLC). Alternatively, in a power utility’s SCADA system, a master located in a central facility could use the DNP3 protocol to query and control slave Remote Terminal Units (RTUs) distributed in remote sub-stations. Some other protocols are ICCP, ZigBee, C37.118 and C12.22.
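As a flavour of how directly these protocols can be driven from code, the hedged sketch below polls holding registers from a Modbus/TCP device using the third-party pymodbus library. The host, port, unit id and register addresses are hypothetical, and the exact import path and keyword names vary between pymodbus versions.

# Read 10 holding registers from a (hypothetical) PLC over Modbus/TCP.
# pymodbus 2.x-era API; newer versions moved/renamed some of these calls.
from pymodbus.client.sync import ModbusTcpClient

client = ModbusTcpClient("192.168.1.10", port=502)   # example PLC address
if client.connect():
    result = client.read_holding_registers(0, 10, unit=1)
    if not result.isError():
        print("Register values:", result.registers)
    client.close()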
Known Issues
1. In a SCADA system, the programmable logic controllers (PLCs) are directly connected to in-field sensors that provide data to control critical components (e.g. centrifuges or turbines). Often, default passwords are hard-coded into the Ethernet cards the systems use. Those cards funnel commands into devices, allowing administrators to remotely log into the machinery. Hard-coded passwords are a common weakness built into many industrial control systems, including some S7-series PLCs from Siemens. Hard-coded or saved passwords are also found in the Windows registry of various host machines. PLCs and RTUs are web-enabled for remote access, and this creates a window of opportunity for attackers.

2. Lack of authentication and medium control in SCADA systems is another major issue. Investigation of past SCADA incidents has demonstrated that mobile storage media are the main vectors used to infect control systems, despite the host networks being isolated from the Internet. Establishing strong controls on every medium used could prevent serious incidents. Strong authentication must be implemented to ensure secure communication and to allow personnel to access the main functionalities provided by the systems. The administration console of every network appliance must be protected. Wireless and wired connections to the SCADA network and remote sites must be properly defended.



Steps to perform an audit
1. Identify all connections to SCADA networks. Evaluate the security of those connections. Identify systems that serve critical functions.
2. Conduct a vulnerability assessment (VA) of network connectivity by mapping all networked assets and the digital communication links that connect them (see the sketch after this list).
3. Check the default settings and configurations of all systems.
4. Check for unnecessary services.
5. Check whether the security features provided by device and system vendors are effectively activated.
6. Check authentication and medium control.
7. Check for proper network segregation.
8. Check internal and external intrusion detection systems.
9. Check whether 24-hour-a-day incident monitoring takes place.
10. Perform technical audits of SCADA devices and networks, and of any other connected networks, to identify security concerns.
11. Conduct physical security surveys and assess all remote sites connected to the SCADA network to evaluate their security.
12. Identify and evaluate possible attack scenarios.
13. Check whether cyber security roles, responsibilities and authorities for managers, system administrators and users have been defined.
14. Check for an ongoing risk management process.
15. Check for configuration management processes.
16. Scrutinise routine self-assessment reports.
17. Check for system backups, disaster recovery plans and BCP.
18. Check for the availability of policies.
19. Interview all key personnel.
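For step 2, a crude first pass at mapping networked assets can be made with plain TCP probes against well-known ICS service ports, as in the sketch below. The subnet and port list are illustrative; on a live control network, passive monitoring is preferable, as even gentle active scans have been known to upset fragile field devices.

# Probe a (hypothetical) /24 subnet for common ICS/SCADA service ports.
import socket

SCADA_PORTS = {
    502: "Modbus/TCP",
    20000: "DNP3",
    102: "IEC 61850 / MMS",
    44818: "EtherNet/IP",
}

def probe(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for n in range(1, 255):
    host = "192.168.1.%d" % n
    for port, proto in SCADA_PORTS.items():
        if probe(host, port):
            print("%s:%d open (%s)" % (host, port, proto))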
Tools

SamuraiSTFU, plcscan, modscan, Metasploit, Nessus, Nmap, Wireshark, tcpdump, modlib (a Scapy extension), Bus Pirate, CERT NetSA Security Suite, NetWitness, Lancope, Arbor, 21CT, Checkmarx, Contrast Security, Burp Suite Professional, NTOSpider, Netsparker, AppScan, sqlmap, Zulu, GPF/EFS, PFF, ImmDbg, Sulley, gdb, MSF, RTL-SDR/HackRF plus GNU Radio/RfCat, binwalk, IDA Pro, etc.

https://www.thalesgroup.com/sites/default/files/asset/document/thales-cyber-security-for-scada-systems.pdf

http://resources.infosecinstitute.com/improving-scada-system-security/

Friday, June 5, 2015

Data Loss / Leak Prevention


Strictly speaking, data “loss” can be due to machine failure, power failure, data corruption, data/media theft, etc., and the means of protection are backups, disaster recovery strategies and redundancies.

Though the “L” can stand for either “Loss” or “Leak” in the acronym DLP, as the term is understood in the industry it is actually more about “leak”: sensitive data crossing over from an authorized area to an unauthorized area through various leak vectors.

DLP is actually more of a concept or strategy with functional sub-components (e.g. email scanning, encryption of data at rest, and so on). These components are used to enforce the strategy outlined by policy statements. Such policies can be:
  • Acceptable use policy (AUP)
  • Data sharing on detachable/portable media policy
  • Data classification policy, etc.
DLP is actually part of IRM (Information Risk Management). Data sanitisation, the use of test data, and data masking are also part of a DLP strategy.
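As a tiny illustration of data masking, the sketch below blanks all but the last four digits of anything that looks like a payment card number, preserving the format while hiding the value. The pattern is deliberately naive and purely illustrative.

# Mask card-like numbers in free text, keeping only the last four digits.
import re

def mask_pan(text):
    def _mask(match):
        digits = match.group(0)
        return "*" * (len(digits) - 4) + digits[-4:]
    return re.sub(r"\b\d{13,16}\b", _mask, text)

print(mask_pan("Card on file: 4111111111111111"))
# Card on file: ************1111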

However, we have DLP tools which claim to handle all the functionalities of a DLP strategy, and some technologists therefore believe that DLP is implemented just by deploying such tools. This is grossly wrong. DLP is about strategy, awareness and training plus the technology. Data leaks more often than not happen due to poor employee discipline or awareness.

DLP addresses three areas:
  • Data at rest, e.g. data in databases, data on a laptop drive or USB storage
  • Data in transit, e.g. emails or web forum postings, uploads to the cloud, data on the network
  • Data in use, e.g. file copy or print operations
Output actions of a DLP tool:
  • Quarantine
  • Encrypt
  • Block
  • Notify

A DLP tool is deployed at endpoints, at email gateways, and at network gateways for URL filtering.
The sub-functions or processes of a DLP tool are:
  • Monitor
  • Detect
  • Prevent
DLP tools are good at monitoring structured data, such as PII (Personally Identifiable Information) and PCI (Payment Card Industry) data, but they are difficult to use for unstructured data.

For data at rest, data discovery or discovery scanning is used: pattern matching or string comparison for structured data, and, generally, hashing for unstructured data.
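Both techniques can be sketched in a few lines of Python: regular-expression patterns catch structured identifiers, while whole-file hashes are compared against a set of fingerprints of known-sensitive documents. The patterns and the sample digest below are illustrative only.

# Discovery-scanning sketch: regex matching for structured data,
# SHA-256 fingerprinting for unstructured files.
import hashlib
import re

PATTERNS = {
    "credit card": re.compile(r"\b\d{13,16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Digests of known-sensitive files (hypothetical value)
SENSITIVE_FINGERPRINTS = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def scan_text(text):
    """Return the names of the structured-data patterns found in text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

def is_fingerprinted(path):
    """Hash a whole file and compare it against the fingerprint set."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in SENSITIVE_FINGERPRINTS

print(scan_text("SSN 123-45-6789 on record"))  # ['US SSN']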

Before a DLP tool is deployed, one should clearly define and identify the sensitive data that needs protection, as well as all possible internal and external threats.

A Checklist
  1. Policies and user awareness campaigns.
  2. Encryption for data at rest and in transit.
  3. File shares mapped with access rights.
  4. Consolidation of inventories.
  5. Control of external HDDs and USB storage devices (mobile and portable storage devices).
  6. Disabling of all USB ports for USB storage devices (see the sketch after this list).
  7. Disabling of all unwanted inbuilt DVD readers/writers.
  8. Air-gap maintenance for disconnected networks.
  9. Secure file-deletion policies and procedures.
  10. Access controls on laptops and full disk encryption.
  11. Use of VLANs.
  12. Consolidation of file servers.
  13. Strict data classification policies.
  14. Data retention policies; destruction of old and unwanted data files.
  15. Deploying an RMS/DRM/IRM solution.
  16. PKI-based email.
  17. Effective identity provisioning and management.
  18. Content and gateway screening.
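As an illustration of item 6 on Windows hosts, the sketch below sets the USBSTOR service’s Start value to 4, which disables the USB mass-storage driver. It must be run as Administrator, and it is a sketch of a single control, not a complete device-control solution.

# Disable the Windows USB mass-storage driver by setting
# HKLM\SYSTEM\CurrentControlSet\Services\USBSTOR\Start to 4.
import winreg

key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Services\USBSTOR",
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 4)  # 4 = disabled, 3 = on demand
winreg.CloseKey(key)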