Pages

Monday, February 13, 2017

Selling and Buying “Cyber Security”

It is important to remember that these days "cyber security" is a commodity like any other, bought and sold in the market, and the usual marketing techniques apply.
Not all attacks are sophisticated, and not all attacks lead to catastrophic results (irrespective of whether the attack method is sophisticated or simple). But companies have to make money, and there is intense competition between well-established big companies and relatively new, small start-ups. No wonder a considerable chunk of security bug revelations comes from these smaller firms as they try to garner visibility and credibility. Their claims of vulnerabilities are based on facts, but their hypotheses about the consequences may not be entirely true. Many a time they also give fancy names and celebrity status to their discoveries, which creates a halo. There is no real data on actual compromises caused by such big-name security scares. Often we also classify vulnerabilities as critical, for lack of more details, just to be on the safer side.
The marketing world is full of wild promises; you just have to look at the cosmetics and pharma industries to understand this. Today, "cyber security" is no different.

We need to understand the degree of relevance to our organisation's context. The need of the hour is a balanced view and a realistic, risk-based approach.

Tuesday, October 4, 2016

OCTAVE Risk Management Framework in a Nutshell



OCTAVE stands for Operationally Critical Threat, Asset, and Vulnerability Evaluation.

  • It is performed by the organisation itself, using in-house domain experts and IT security resources.
  • Can be quick and flexible, and focuses on critical risks.
  • Its main focus is operational risk.
  • A collaborative effort, using workshops, questionnaires, walk-throughs, scenarios and so on.
  • It basically has three steps:
    • Organisation-wide view. This step has multiple processes:
      • Identification of organisational assets at all levels (management, operations).
      • Understanding threats to these assets and creation of threat profiles.
    • Technological view. Identification of critical assets and infrastructure vulnerabilities:
      • Vulnerability assessment and risk analysis using the threat profiles generated above.
      • Evaluation of risk against defined criteria.
    • Risk treatment strategy:
      • Categorisation of risks and deciding on their mitigation plans.
  • OCTAVE has two variants:
    • OCTAVE-S: a leaner version.
    • Allegro: focuses on information systems; it has 8 steps categorised into 4 phases.
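The evaluation and treatment steps above can be sketched as a simple scoring exercise. The scales, thresholds and category names below are illustrative assumptions for the sketch, not part of the OCTAVE specification:

```python
# Hypothetical risk evaluation sketch for OCTAVE's evaluation and
# treatment steps. Scales and thresholds are illustrative only.
def risk_score(impact, likelihood):
    """Score a threat scenario on 1-5 impact and likelihood scales."""
    return impact * likelihood

def treatment(score):
    """Map a score to a coarse treatment decision."""
    if score >= 15:
        return "mitigate"   # act immediately
    if score >= 6:
        return "plan"       # schedule treatment
    return "accept"         # monitor only
```

In a real workshop the criteria would come from the threat profiles and the organisation's own impact definitions, not a fixed 1-5 scale.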


Saturday, January 23, 2016

SMTP AUTH

Acronyms

SMTP      Simple Mail Transfer Protocol
MTA       Mail Transfer Agent
MDA       Mail Delivery Agent
MSA       Mail Submission Agent
MUA       Mail User Agent
SASL      Simple Authentication and Security Layer
GSSAPI    Generic Security Services Application Program Interface
CRAM      Challenge Response Authentication Mechanism
SCRAM     Salted Challenge Response Authentication Mechanism

Introduction

What is it?  SMTP AUTH is a mechanism that can additionally be enabled for authentication between the various elements of an email system: the MTA, MDA and MUA.  The MTA and MDA speak SMTP, while the MUA is the end-point agent, i.e. the mail client.  SMTP AUTH covers authentication both SMTP-to-SMTP and SMTP-to-mail-client.  It is provided by an SMTP service extension (ESMTP).  SMTP AUTH can work with or without TLS (STARTTLS) and is only concerned with authentication; if the whole channel is to be made confidential, then TLS should be used.

The EHLO keyword (in place of HELO) initiates ESMTP. The mechanism is started by the 'EHLO' command; a subsequent 'AUTH' keyword command with options then initiates a SASL authentication mechanism.  SASL is a generic abstraction layer, not application dependent, and hence provides a separate authentication layer for (SASL-aware) applications. RFC 4954 spells out the SMTP AUTH standard.

Advantages

a. Authentication between mail servers and other elements in the email distribution chain.
b. Gives mobile users who switch hosts the ability to use the same MTA without having to reconfigure their mail client's settings each time.
c. User-based rather than IP-based external email access.

Mechanisms for SMTP AUTH

Commonly used SASL mechanisms with ESMTP are: 

  • PLAIN :  A single string is sent from client to server: a Base64 encoding of the credentials. RFC 4954 mandates the use of TLS with this mechanism.
  • LOGIN :  Also uses Base64 encoding; however, the credentials are exchanged in a client-server dialogue.
  • GSSAPI : For use with mechanisms like Kerberos.
  • CRAM-MD5 : Better than the PLAIN and LOGIN mechanisms, but plaintext attacks are possible and it does not authenticate the server (refer RFC 4954). It also requires the password to be stored in plain text in many implementations.
  • DIGEST-MD5 : More secure than CRAM-MD5 as it uses a nonce. This mechanism also requires the password to be stored in plain text in many implementations.
  • SCRAM family (SCRAM-SHA-1 was a replacement for DIGEST-MD5).
  • EXTERNAL : For external authentication.
Other registered mechanisms are listed at http://www.iana.org/assignments/sasl-mechanisms/sasl-mechanisms.xhtml  
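As a sketch of how PLAIN builds its single string (per RFC 4616, which defines the PLAIN mechanism): the client Base64-encodes the authorisation identity, authentication identity and password separated by NUL bytes. The snippet below reproduces the credential string that appears in the RFC 4954 example quoted further down:

```python
import base64

def plain_initial_response(authcid, password, authzid=""):
    """Build the AUTH PLAIN initial response:
    base64(authzid NUL authcid NUL password)."""
    raw = f"{authzid}\0{authcid}\0{password}".encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

# The RFC 4954 example credentials (authzid "test", authcid "test",
# password "1234") encode to "dGVzdAB0ZXN0ADEyMzQ=".
```

Note this is only an encoding, not encryption, which is exactly why RFC 4954 requires TLS before PLAIN is offered.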
Below is an extract of an example from RFC 4954, where 'S:' is a server message and 'C:' a client message:

“4.1. Examples

   Here is an example of a client attempting AUTH using the [PLAIN] SASL
   mechanism under a TLS layer, and making use of the initial client
   response:

   S: 220-smtp.example.com ESMTP Server
   C: EHLO client.example.com
   S: 250-smtp.example.com Hello client.example.com
   S: 250-AUTH GSSAPI DIGEST-MD5
   S: 250-ENHANCEDSTATUSCODES
   S: 250 STARTTLS
   C: STARTTLS
   S: 220 Ready to start TLS
     ... TLS negotiation proceeds, further commands
         protected by TLS layer ...
   C: EHLO client.example.com
   S: 250-smtp.example.com Hello client.example.com
   S: 250 AUTH GSSAPI DIGEST-MD5 PLAIN
   C: AUTH PLAIN dGVzdAB0ZXN0ADEyMzQ=
   S: 235 2.7.0 Authentication successful”
TCP port 587 is generally used by the MSA and generally indicates SMTP AUTH usage; SMTP AUTH can also be used on TCP port 25.  MTA-to-MTA traffic would generally use TCP port 25.  If SMTPS is being used, then port 465 (not approved by IANA) may be used, indicating use of TLS.  SMTP AUTH may therefore impact perimeter or network security devices and may require a redrafting of access rules.  Corresponding configuration would also be required on mail clients.
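On the client side, the whole sequence (submission port, STARTTLS, then AUTH) is what mail libraries perform. A minimal sketch using Python's standard smtplib; the hostname and credentials are placeholders, not real infrastructure:

```python
import smtplib

def submit_message(host, user, password, sender, rcpt, msg, port=587):
    """Submit a message via the MSA port using STARTTLS and SMTP AUTH."""
    with smtplib.SMTP(host, port) as server:
        server.starttls()              # upgrade the channel to TLS first
        server.login(user, password)   # negotiates a SASL mechanism (AUTH)
        server.sendmail(sender, [rcpt], msg)

# Example call (placeholder values):
# submit_message("smtp.example.com", "user", "secret",
#                "user@example.com", "peer@example.org",
#                b"Subject: hi\r\n\r\nhello")
```

smtplib picks the strongest mechanism the server advertises after EHLO, which mirrors the dialogue in the RFC extract above.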

Recommendations

CRAM-MD5 is listed as 'Limited' and DIGEST-MD5 as 'Obsolete' by IANA (http://www.iana.org/assignments/sasl-mechanisms/sasl-mechanisms.xhtml).
Both CRAM-MD5 and DIGEST-MD5 require weak hashes or unsalted passwords to be stored in order to carry out the authentication, and MD5 itself has proven vulnerabilities.  Considering these facts, SCRAM-SHA-1, SCRAM-SHA-1-PLUS or better (as listed on the IANA site) should be considered if the servers support them.  'PLUS' refers to the additional feature of channel binding, which prevents MitM attacks.

References:

RFCs and IANA
  • RFC 4422: Simple Authentication and Security Layer (SASL)
  • RFC 4954: SMTP Service Extension for Authentication (obsoletes RFC 2554)
  • RFC 3848: ESMTP and LMTP Transmission Types Registration
  • RFC 3207: SMTP Service Extension for Secure SMTP over Transport Layer Security
  • https://tools.ietf.org/html/rfc6409 : Message Submission for Mail
  • http://www.ietf.org/rfc/rfc2831.txt : Using Digest Authentication as a SASL Mechanism
  • https://www.ietf.org/rfc/rfc2554.txt : SMTP Service Extension for Authentication (obsolete)
  • RFC 7677: SCRAM-SHA-256 and SCRAM-SHA-256-PLUS Simple Authentication and Security Layer (SASL) Mechanisms
  • RFC 5802: Salted Challenge Response Authentication Mechanism (SCRAM) SASL and GSS-API Mechanisms (SCRAM-SHA-1 and SCRAM-SHA-1-PLUS)

Others

Other email authentication mechanisms apart from SMTP AUTH


Wednesday, October 21, 2015

What is Threat Modeling

“If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.”
….. Sun Tzu

A threat is an undesirable situation in which the possibility exists that a threat agent or actor can intentionally or unintentionally exploit a vulnerability in a system. The vulnerability can be technical or non-technical. Threat modelling is a technique to visualise all such undesirable situations in a single frame, within the ecosystem in which the system is supposed to function. In simple terms, it is finding out "what all can go wrong, and why and how".
Threat modeling is primarily used during system development to anticipate and neutralise threats. These threats are mostly technical, and the solutions expected are technical; this type of modeling is also called application threat modeling. Microsoft's STRIDE categorisation (Spoofing identity, Tampering with data, Repudiation, Information disclosure, Denial of service and Elevation of privilege), used in the SDLC, is a popular model. It starts from a Data Flow Diagram (DFD) of the application and is used by the "Microsoft Threat Modeling Tool 2014". The model can also be applied to an existing application to address threats in its working environment. Threat modeling results in a comprehensive document of threat enumeration and analysis with mitigation solutions.
In relation to the SDLC (software or system development life cycle), it does not do away with code review but helps focus on critical issues so that time and effort are minimised; it is therefore a risk-based approach. The process helps in systematically documenting the attack surface. Threat modeling is not a panacea for system compromise attempts, and in no way should it be considered an encouragement to develop complex systems with spaghetti code. In my opinion, "KISS" (Keep It Simple, Stupid) is the ultimate advice for avoiding security problems: simplicity dramatically reduces the attack surface.

Threat modelling is closely related to attack trees, which were popularised by Bruce Schneier in 1999. However, the purposes for which the two have evolved are quite different, and they take the following different approaches:

Threat Modelling
  • Application-, system- or asset-oriented. Extrapolated from the characteristics of systems, their interfaces, connections and the flow of data between them. Threat modeling is the general term used in such cases and uses DFDs. This process is also termed tool-assisted code review.
  • Primarily used for risk management during application or system development, by enumerating attack surfaces.
Attack Trees
  • Attack- and attacker-oriented. Extrapolated from the predicted behaviour of attackers and the multiple paths they can follow. An attacker will focus on both the intended and unintended behaviour of the system. Attack trees, threat trees, misuse-case or abuse trees are generally used. In an attack tree, the whole attack process is synthesised and shown as a set of possible steps: a tree structure whose child nodes combine using AND and OR operators, with the parent node being the attacker's ultimate goal.
  • Primarily used for risk management of an already deployed, in-use application or system. This can be done periodically, whenever there is a significant change in the technology or threat landscape.
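The AND/OR node structure described above is straightforward to capture in code. A minimal evaluation sketch; the goals in the example tree are invented purely for illustration:

```python
# Minimal attack-tree evaluation: leaves are atomic attack steps marked
# feasible or not; an AND node needs all children, an OR node any one.
def achievable(node):
    if "children" not in node:                       # leaf attack step
        return node["feasible"]
    results = [achievable(child) for child in node["children"]]
    return all(results) if node["op"] == "AND" else any(results)

# Illustrative tree: the root is the attacker's ultimate goal.
tree = {
    "goal": "Obtain admin credentials", "op": "OR",
    "children": [
        {"goal": "Phish the administrator", "feasible": True},
        {"goal": "Crack the password hash", "op": "AND",
         "children": [
             {"goal": "Dump hashes from the server", "feasible": False},
             {"goal": "Brute-force offline", "feasible": True},
         ]},
    ],
}
```

Real attack trees often carry costs or probabilities on the leaves instead of a boolean, with the operators replaced by sums, minima or products.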
We all do threat modeling every day for the risks associated with our daily activities, in our own creative way. Then why do we need these models? Because they are organised, structured and documented activities. How useful are they? The chances of some known attacks or compromises would reduce, but they would not make the system completely attack-proof. Threat modeling should be holistic for best results; brainstorming and group discussions should be used over and above the analysis provided by a tool.

Threat Categorization Frameworks
  • STRIDE: Spoofing identity, Tampering with data, Repudiation, Information disclosure, Denial of service and Elevation of privilege (high-level categorisation)
  • DREAD: Damage, Reproducibility, Exploitability, Affected users and Discoverability (attacker's point of view)
  • ASF: Application Security Frame
  • CAPEC: Common Attack Pattern Enumeration and Classification
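STRIDE is usually applied per DFD element type, with a common heuristic mapping each element type to the categories that typically apply. A sketch of that enumeration; the mapping shown is the usual simplified heuristic, not an exhaustive rule, and the sample DFD is invented:

```python
# Typical STRIDE-per-element heuristic (simplified): which categories
# are usually considered for each DFD element type.
STRIDE_BY_ELEMENT = {
    "external entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"],
    "data store": ["Tampering", "Information disclosure",
                   "Denial of service"],
    "data flow": ["Tampering", "Information disclosure",
                  "Denial of service"],
}

def enumerate_threats(elements):
    """Yield (element name, threat category) pairs for a DFD."""
    for name, kind in elements:
        for threat in STRIDE_BY_ELEMENT[kind]:
            yield name, threat

dfd = [("Browser", "external entity"), ("Web app", "process"),
       ("User DB", "data store")]
threats = list(enumerate_threats(dfd))
```

Each generated pair then becomes a candidate threat for the analysis and mitigation document mentioned earlier.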

Pros and Cons of Threat Modeling

Pros
  • A systematic and recorded process
  • Gives a headway to start with
  • Great for people just getting initiated into the security world
  • A fast method to address known exploitation methods (attack vectors)

Cons
  • A false sense of confidence of having dealt with all possible threats, attack vectors and vulnerabilities
  • May limit thinking boundaries and the free flow of ideas needed to visualise all possible bad scenarios


A simple two-tier web application threat model, built using Microsoft's Threat Modelling Tool, which is based on STRIDE.





Tools
  • Trike framework and tool for risk management 
  • Microsoft Threat Modeling Tool 2014  
  • Seasponge
  • Seamonster  
  • SecurITree by Amenaza  
  • ThreatModeler by MyAppSecurity


References
    https://www.owasp.org/index.php/Application_Threat_Modeling

Monday, September 28, 2015

SCADA AUDIT

AUDITING SCADA SYSTEMS
Introduction

SCADA (supervisory control and data acquisition) is a system operating with coded signals over communication channels so as to provide control of remote equipment (using typically one communication channel per remote station)                       en.wikipedia.org/wiki/SCADA
Industrial control systems (ICS) are computer-based systems that monitor and control industrial processes in the physical world. SCADA systems have historically distinguished themselves from other ICS by covering large-scale processes that can span multiple sites and large distances; SCADA can be considered a class of ICS.
Originally, SCADA systems were not connected to the Internet, and security was traditionally not an issue in them. The same is not true now.
Where is SCADA used?
  • Electric power generation, transmission and distribution
  • Water and sewage network systems
  • Environment and facility monitoring and management
  • Transportation networks
  • Manufacturing processes

Functions of SCADA
DATA ACQUISITION - Furnishes status information and measurements to the operator.
CONTROL - Allows the operator to control devices, e.g. circuit breakers, transformers, tap changers, from a remote centralised location.
DATA PROCESSING - Includes data quality and integrity checks, limit checks, analog value processing, etc.
TAGGING - The operator can mark any specific device and subject it to specific operating restrictions to prevent unauthorised operation.
ALARMS - Alerts the operator to unplanned events and undesirable operating conditions, in order of their severity and criticality.
LOGGING - Logs all operator entries, alarms and selected events.
TRENDING - Plots measurements on a selected scale (e.g. one minute, one hour) to show trends.
HISTORICAL REPORTING - Saves historical data for reporting and analysis, typically for a period of two or more years, and archives it.

SCADA Components and Subsystems
SCADA has the following components:
1. Operating equipment: pumps, valves, conveyors, and substation breakers that can be controlled by energizing actuators or relays.
2. Local processors: communicate with the site's instruments and operating equipment. These include the Programmable Logic Controller (PLC), Remote Terminal Unit (RTU), Intelligent Electronic Device (IED), and Process Automation Controller (PAC). A single local processor may be responsible for dozens of inputs from instruments and outputs to operating equipment.
3. Instruments: in the field or in a facility, sensing conditions such as pH, temperature, pressure, power level, and flow rate.
4. Short-range communications: between local processors, instruments, and operating equipment. These relatively short cables or wireless connections carry analog and discrete signals using electrical characteristics such as voltage and current, or using other established industrial communications protocols.
5. Long-range communications: between local processors and host computers. This communication typically covers miles, using methods such as leased phone lines, satellite, microwave, frame relay, and cellular packet data.
6. Host computers (Human-Machine Interface, HMI): act as the central point of monitoring and control. The host computer is where a human operator can supervise the process, as well as receive alarms, review data, and exercise control.
                                                       
Sub Systems
Supervisory system
HMI
RTU
PLC
Communication Interface
SCADA programming


SCADA vendors
Some SCADA vendors are Asea Brown Boveri (ABB), Siemens, Alstom ESCA, Telegyr Systems, Advanced Control Systems (ACS), Harris and Bailey.




SCADA protocols
SCADA protocols are communications standards for passing control information on industrial networks. There are many such protocols, but the prominent ones are MODBUS, DNP3, EtherNet/IP, PROFIBUS, IEC 61850 and Foundation Fieldbus. The choice of protocol is based on operating requirements, industry preference, vendor and the design of the system. In an oil refinery, an operator workstation might use the MODBUS/TCP protocol to communicate with a control device such as a Programmable Logic Controller (PLC). Alternatively, in a power utility's SCADA system, a master located in a central facility could use the DNP3 protocol to query and control slave Remote Terminal Units (RTUs) distributed in remote sub-stations. Some other protocols are ICCP, ZigBee, C37.118 and C12.22.
Known Issues
1. In a SCADA system, the programmable logic controllers (PLCs) are directly connected to in-field sensors that provide data to control critical components (e.g. centrifuges or turbines). Often, default passwords are hard-coded into the Ethernet cards the systems use; those cards funnel commands into the devices, allowing administrators to log into the machinery remotely. Hard-coded passwords are a common weakness built into many industrial control systems, including some S7-series PLCs from Siemens. Hard-coded or saved passwords are also found in the Windows registry of various host machines. PLCs and RTUs are web-enabled for remote access, and this creates a window of opportunity for attackers.

2. Lack of authentication and removable media control in SCADA systems is another major issue. Investigation of past SCADA incidents has demonstrated that mobile storage media are the main vectors used to infect control systems, despite host networks being isolated from the Internet. Establishing strong controls on every medium used could prevent serious incidents. Strong authentication must be implemented to ensure secure communication and to control access to the main functionality provided by the systems. The administration console of every network appliance must be protected, and wireless and wired connections to the SCADA network and remote sites must be properly defended.



Steps to perform an audit
1.      Identify all connections to SCADA networks. Evaluate security of connections. Identify systems that serve critical functions.
2.      Conduct VA  of Network Connectivity by mapping of all networked assets and the digital communication links that connect them
3.      Check for default settings and configurations of all systems
4.      Check for unnecessary services
5.      Check if  security features provided by device and system vendors  are effectively activated.
6.      Check  authentication and medium control.
7.      Check for proper network segregations.
8.      Check internal and external intrusion detection systems
9.      Check if  24-hour-a-day incident monitoring takes place
10. Perform technical audits of SCADA devices and networks, and any other connected networks, to identify security concerns
11. Conduct physical security surveys and assess all remote sites connected to the SCADA network to evaluate their security
12. Identify and evaluate possible attack scenarios
13. Check if  cyber security roles, responsibilities, and authorities for managers, system administrators, and users  have been defined.
14. Check for ongoing risk management process
15. Check for configuration management processes
16. Scrutinise routine self-assessments  reports
17. Check for system backups and disaster recovery plans and BCP
18. Check for  availability of  policies
19. Interview all key personnel
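The asset-mapping step above can begin with a simple TCP sweep for well-known ICS service ports. The port numbers below are common protocol defaults and the function is a bare sketch; a real audit should work from the site's inventory, and scanning live control networks must be done with extreme care (ideally against offline replicas), since even a connection attempt can upset fragile devices:

```python
import socket

# Common default ports for ICS protocols (verify against site inventory).
ICS_PORTS = {102: "IEC 61850 / S7comm", 502: "Modbus/TCP",
             20000: "DNP3", 44818: "EtherNet/IP"}

def open_ics_ports(host, ports=ICS_PORTS, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = {}
    for port, proto in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:   # 0 means connected
                found[port] = proto
    return found
```

Dedicated tools such as plcscan or nmap's ICS scripts do this far more gently and thoroughly; the sketch only shows the idea.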
Tools

SamuraiSTFU, plcscan, modscan, Metasploit, Nessus, nmap, Wireshark, tcpdump, modlib (a Scapy extension), Bus Pirate, CERT NetSA Security Suite, NetWitness, Lancope, Arbor, 21CT, Checkmarx, Contrast Security, Burp Suite Professional, NTOSpider, Netsparker, AppScan, sqlmap, Zulu, GPF/EFS, PFF, ImmDbg, Sulley, gdb, MSF, RTL-SDR/HackRF with GNU Radio/RFCat, binwalk, IDA Pro, etc.

https://www.thalesgroup.com/sites/default/files/asset/document/thales-cyber-security-for-scada-systems.pdf

http://resources.infosecinstitute.com/improving-scada-system-security/

Friday, June 5, 2015

Data Loss / Leak Prevention


Strictly speaking, data "loss" can be due to machine failure, power failure, data corruption, data or media theft, etc., and the means of protection are backups, disaster recovery strategies and redundancies.

Though the "L" in the acronym DLP is used to represent either "Loss" or "Leak", as understood in the industry it is actually more about "leak": sensitive data crossing over from an authorised area to an unauthorised one via various leak vectors.

DLP is more of a concept or strategy with functional sub-components (e.g. email scanning, encryption of data at rest, and so on). These components are used to enforce the strategy outlined by policy statements. Such policies can be:
  • Acceptable use policy (AUP)
  • Data sharing on detachable/portable media policy
  • Data classification policy, etc.
DLP is actually part of IRM (Information Risk Management). Data sanitisation, the use of test data, and data masking are also part of a DLP strategy.

However, DLP tools claim to handle all the functionality of a DLP strategy, and some technologists therefore believe that merely deploying such tools implements DLP. This is grossly wrong. DLP is about strategy, awareness and training plus the technology; data leaks more often than not happen due to poor employee discipline or awareness.

DLP addresses three areas:
  • Data at rest, e.g. data in databases, data on a laptop drive or USB storage
  • Data in transit, e.g. emails or web forum postings, uploads to the cloud, data on the network
  • Data in use, e.g. file copy or print operations
Output actions of a DLP tool:
  • Quarantine
  • Encrypt
  • Block
  • Notify

A DLP tool is deployed at end points, email gateways and network gateways (for URL filtering).
The sub-functions or processes of a DLP tool are:
  • Monitor
  • Detect
  • Prevent
DLP tools are good at monitoring structured data, as in PII (Personally Identifiable Information) or PCI (payment card industry) data, but they are difficult to use for unstructured data.

For data at rest, data discovery (discovery scanning) is used. Pattern matching or string comparison is used for structured data; hashing is generally used for unstructured data.
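For structured data such as payment card numbers, the pattern matching typically combines a regular expression with a checksum to cut false positives. A small sketch; the regex and the detector as a whole are illustrative, not production-grade:

```python
import re

# Loose match: 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_valid(candidate):
    """Luhn checksum used by payment card numbers."""
    digits = [int(ch) for ch in re.sub(r"\D", "", candidate)]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d = d * 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text):
    """Return substrings that look like card numbers and pass Luhn."""
    return [m.group() for m in CARD_RE.finditer(text) if luhn_valid(m.group())]
```

The Luhn filter is what separates a real card number from an arbitrary 16-digit serial number, which a plain regex would flag equally.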

Before a DLP tool is deployed, one should clearly define and identify the sensitive data that needs protection, along with all possible internal and external threats.

A Checklist
  1. Policies and user awareness campaigns.
  2. Encryption for data at rest and in transit.
  3. File shares mapped with access rights.
  4. Consolidation of inventories.
  5. Control of external HDDs and USB storage devices (mobile and portable storage devices).
  6. Disabling of all USB ports for USB storage devices.
  7. Disabling of all unwanted built-in DVD readers/writers.
  8. Air-gap maintenance for disconnected networks.
  9. Secure file-deletion policies and procedures.
  10. Access controls on laptops and full disk encryption.
  11. Use of VLANs.
  12. Consolidation of file servers.
  13. Strict data classification policies.
  14. Data retention policies; destruction of old and unwanted data files.
  15. Deploying an RMS/DRM/IRM solution.
  16. PKI-based email.
  17. Effective identity provisioning and management.
  18. Content and gateway screening.



Sunday, August 31, 2014

How Trustworthy are Mobile Apps or Applications ?



   Have a look at the permissions being asked for by these Android apps. What do you make of these snapshots? Which app is more secure, not from the point of view of security for the app but of security for the user who downloads and uses it? The one on the extreme left (the first one) is likely to have more access to your device than the other two. The makers would argue that these permissions are required for the features and functionality of the app. The question here is who decides how much permission is required, and who guarantees that these permissions will not be abused. The one at the extreme bottom-right is more trustworthy as it requires no special permissions. But how many of us really pay any attention to such details?
    Unlike applications on a desktop, with mobile apps it is difficult to find out what they are really doing behind the scenes. The situation is compounded by the fact that mobile devices contain enormous amounts of personal, sensitive and financial data. The email apps are always online, and the device is connected to the Internet in most cases. The passwords and other credentials are just there, begging to be stolen.

   Personally, I prefer to use the browser on my mobile to access various websites and web applications rather than download an app. Re-entering usernames and passwords every time you access a web application or website should not bother you if you are worried about the safety of your data and online identity.

    Having written what I wanted to convey, I would like to clarify that I am not aware of the above-mentioned apps being untrustworthy in any way; I just picked them for illustration.

Friday, June 20, 2014

ISO 27001 : 2013 Changes

                                            



ISO 27001:2005 → ISO 27001:2013

  1. Annex A has 133 controls → Annex A has 114 controls, including 11 new controls:
    • A.6.1.5 Information security in project management
    • A.12.6.2 Restrictions on software installation
    • A.14.2.1 Secure development policy
    • A.14.2.5 Secure system engineering principles
    • A.14.2.6 Secure development environment
    • A.14.2.8 System security testing
    • A.15.1.1 Information security policy for supplier relationships
    • A.15.1.3 Information and communication technology supply chain
    • A.16.1.4 Assessment of and decision on information security events
    • A.16.1.5 Response to information security incidents
    • A.17.2.1 Availability of information processing facilities
  2. Annex A has 11 control objectives → Annex A has 14 control objectives (A.5 to A.18)
  3. Five implementation sections (4 ISMS, 5 Management responsibility, 6 Internal ISMS audits, 7 Management review of the ISMS, 8 ISMS improvement) → Seven implementation sections (4 Context, 5 Leadership, 6 Planning, 7 Support, 8 Operation, 9 Evaluation, 10 Improvement)
  4. Non-standard format → Annex SL format (the common management system standard structure)
  5. Process-based approach → Not process based
  6. Structured around the PDCA (Deming) cycle → No emphasis on the PDCA cycle
  7. Separate class of 'preventive' controls → 'Preventive' controls removed
  8. Requires 'documents' and 'records' → Requires 'documented information' instead
  9. (No equivalent) → Parts derived from ISO 31000:2009 risk management
  10. 'Control objectives and controls from Annex A shall be selected and implemented' → 'Produce a statement of applicability (SoA) that contains the necessary controls'
  11. Has Annexes A, B and C → Has Annex A only
  12. (No equivalent) → New important term: 'risk owner'
  13. Emphasis on documentation of internal audits → No requirement to document internal audits