Thursday, December 1, 2011

QNX operating system

I worked on the QNX platform while dealing with IED simulators, so I thought I would write up some information about the QNX operating system and point to some useful resources.

In an IED environment, priority scheduling is critical: breaker tripping and the related communication must get the highest priority when the relay hardware detects a fault. To achieve this fast operation, IEDs use an RTOS (real-time operating system). Its ultra-reliable nature makes QNX software a preferred choice for life-critical systems such as power grids, air traffic control systems, surgical equipment, and nuclear power plants. The QNX Neutrino RTOS is the latest incarnation of the QNX real-time operating system, which has been powering mission-critical applications for years. The QNX® Momentics® Tool Suite is a comprehensive, Eclipse-based integrated development environment. Companies rely on the QNX® Neutrino® RTOS and the QNX® Momentics® development suite to build products that reinforce their brand characteristics: innovative, high-quality, dependable.


Socket programming in QNX

When I tried to compile the NetPIPE latency measuring tool on the QNX platform, it complained that listen() and bind() were undefined. I figured out that the socket libraries were not linked in the NetPIPE makefile.
The QNX TCP/IP socket interface is a set of library functions and header files. All memory models of the socket interface library are provided; they are called socketx.lib, where x denotes the memory model. For a description of memory models, see the documentation for the cc utility.
These libraries are installed in the /usr/lib directory. The header files are installed under the /usr/include directory.

Then I added the line below to the NetPIPE makefile to link the socket library, and it compiled perfectly:

cc -l socket
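
For context, here is a minimal TCP listener of the kind NetPIPE relies on; the file name and port are illustrative, and the point on QNX is simply that bind() and listen() resolve once the socket library is linked (cc minimal_server.c -l socket):

/* minimal_server.c - minimal TCP listener sketch (illustrative only).
 * On QNX, link the socket library:  cc minimal_server.c -l socket   */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);    /* TCP socket */
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(5001);          /* arbitrary example port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");                          /* needs -l socket on QNX */
        return 1;
    }
    if (listen(fd, 5) < 0) {
        perror("listen");
        return 1;
    }

    printf("listening on port 5001\n");
    close(fd);
    return 0;
}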

Using IPERF

Iperf is a tool to measure the TCP bandwidth and the quality of a network link.

The quality of a link can be tested as follows:
- Latency (response time or RTT): can be measured with the ping command. One-way latency can be measured with NetPIPE.
- Jitter (latency variation): can be measured with an Iperf UDP test.
- Datagram loss: can be measured with an Iperf UDP test.

Iperf can be installed on any UNIX/Linux or Microsoft Windows system. One host must be set as the client, the other as the server. The client sends packets of a predefined size to the server; the server receives them and measures the time taken to collect them. By dividing the total data received by the time taken, Iperf arrives at a throughput measurement.

By default, the Iperf client connects to the Iperf server on the TCP port 5001 and the bandwidth displayed by Iperf is the bandwidth from the client to the server.
If you want to use UDP tests, use the -u argument.
The -d and -r Iperf client arguments measure the bi-directional bandwidths.

For TCP measurements I used:

Server side:
iperf -s

Client side (with the server address, input file and MSS filled in as appropriate):
iperf -c <server-ip> -F <file> -m -M <mss>

Saturday, October 22, 2011

Instrument transformers CT and VT

CT types : Bushing type, Bar type, window

VT Types: Electromagnetic VT, Capacitive VT

Directional relays

Types of directional relays:
  • Directional over current
  • Directional ground
  • Directional comparison
MTA - Maximum Torque angle

    Friday, October 21, 2011

    Over current protection

    We need relay coordination to minimize the disruption due to a fault and to operate the nearest relay first. Fuses cannot be adjusted for coordination because their melting time is fixed.

    For Fuse:
    total clearing time = pre-arcing time + arcing time
    The time-current characteristic of a fuse has two curves - the minimum melt curve and the total clearing time curve.

    A sectionalizer cannot interrupt a fault; it counts the number of times it sees the fault and operates after a preset count. Reclosers have limited fault-interrupting capability.


    Types of over current protection:
    1.  Instantaneous relays - current only (cannot discriminate between fault currents when If1 = If2)
    2.  Definite time relays - time only (faults nearer to the source have higher currents but also longer operating times)
    3. IDMT (inverse definite minimum time) relays - both current and time
    Consideration of  coordination:
    • Maximum and Minimum momentarily Short circuit current
    • Maximum and minimum ground fault current
    • Total time interval
    Coordination parameters
    • TAP value (pick up current in secondary CT) - Pick up current
    • Time Dial (TD) - time multiplier setting (TMS) or time dial setting (TDS)
    • Instantaneous TAP (IT)
    • Extremely inverse characteristic
    We use the extremely inverse relay characteristic in the industry. There should not be any crossing between the fuse characteristic and the relay characteristic (we can adjust this using the correct TDS in the relay; when the TD is increased the curve shifts to longer operating times). The fuse characteristic always has to be below the relay characteristic (a 0.2 s gap). If we coordinate two over current relays, their characteristics should have a 0.4 s gap.

    With a CT ratio of 500:5 and a TAP value of 5:
    Pick up current in CT secondary = 5A
    Pick up current in CT primary = 500 A
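
    As a rough illustration of how the TAP and TDS settings enter the trip time, here is a small calculation using the IEC extremely inverse curve t = TDS x 80 / (M^2 - 1); the TDS and fault current values are examples, not settings from an actual coordination study:

    /* idmt_trip_time.c - illustrative IDMT trip-time calculation.
     * IEC extremely inverse curve: t = TDS * 80 / (M^2 - 1),
     * where M = fault current / pickup current.                         */
    #include <stdio.h>

    int main(void)
    {
        double ct_ratio   = 500.0 / 5.0;   /* CT 500:5 as above                  */
        double tap        = 5.0;           /* pickup current on CT secondary     */
        double tds        = 0.2;           /* time dial setting (example)        */
        double fault_prim = 4000.0;        /* primary fault current, A (example) */

        double pickup_prim = tap * ct_ratio;              /* 500 A primary pickup */
        double m           = fault_prim / pickup_prim;    /* multiple of pickup   */
        double t           = tds * 80.0 / (m * m - 1.0);  /* trip time in seconds */

        printf("M = %.2f, trip time = %.3f s\n", m, t);   /* M = 8, t = 0.254 s   */
        /* Raising the TDS shifts the curve up, which is how the 0.2-0.4 s
         * coordination margins mentioned above are obtained.                     */
        return 0;
    }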

    Thursday, October 20, 2011

    Fault Calculation

    To obtain more accurate results, the calculation has to be carried out for different time ranges:
    • Sub Transient- large current 50 ms
    • Transient - after 0.5 s
    • Steady state- after 1 s
    When we analyze large systems with different voltage levels, we use base quantities and per-unit values. The decaying DC component is accounted for using the asymmetry factor. Understanding the positive, negative and zero sequence components is required for unbalanced faults.

    • For transmission lines the positive and negative sequence impedances are the same and equal to the per-unit impedance of the line. The zero sequence impedance depends on the grounding configuration, so obtain it from the manufacturer's data sheet.
    • A balanced generator only generates positive sequence voltage.
    • Motors have the same positive and negative sequence impedance. Since most motors are ungrounded, there is no zero sequence path.
    L-L-L or L-L-L-G  => positive sequence
    L-G => positive, negative, zero in series
    L-L => positive, negative in parallel
    L-L-G => positive, negative, zero in parallel

    Line to ground voltage = line to line voltage / root(3)
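
    As a small numeric sketch of the series connection for a single line-to-ground fault (I_fault = 3E / (Z1 + Z2 + Z0) for a bolted fault), with made-up per-unit impedances:

    /* slg_fault.c - single line-to-ground fault via sequence networks.
     * For an L-G fault the positive, negative and zero sequence networks
     * are in series, so I_fault = 3 * E / (Z1 + Z2 + Z0) for a bolted fault.
     * The per-unit values below are arbitrary examples.                     */
    #include <stdio.h>
    #include <complex.h>

    int main(void)
    {
        double complex e  = 1.0;          /* prefault voltage, 1.0 pu     */
        double complex z1 = 0.05 * I;     /* positive sequence impedance  */
        double complex z2 = 0.05 * I;     /* negative sequence impedance  */
        double complex z0 = 0.10 * I;     /* zero sequence impedance      */

        double complex i_fault = 3.0 * e / (z1 + z2 + z0);

        printf("|I_fault| = %.2f pu\n", cabs(i_fault));   /* 15.00 pu here */
        return 0;
    }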

      Aspects of Protection system

      Reliability
      • Dependability - protection should operate when it should operate
      • Security - protection should not operate when it should not operate
      Speed
      We need fast protection to minimize the damage

      Selectivity/Discrimination
      Zones of protections are determined by the CTs
      Understand the type of the fault and fault location

      Cost
      Cost/benefit analysis
      Fast operation and duplication require additional cost
      Back up protection

      ANSI reference numbers
      21 - Distance relay
      50 - Instantaneous over current
      51 - time delayed over current
      52 - circuit breaker
      67 - Directional over current
      87 - Differential

      Faults in power systems

      There are two categories of faults:

      • Active Faults : Current flows from one phase to another or from a phase to ground. There are two sub-categories: solid faults (complete breakdown of insulation) and incipient faults (faults that start from a very small beginning).
      • Passive Faults : These are not real faults, but conditions that stress the system towards its maximum capacity until an active fault ultimately occurs (overloading, over voltage, under frequency and power swings).

      Transient faults : do not damage the insulation permanently and allow the system to be re-energized after a short time period (lightning strike, momentary tree contact).
      Permanent faults : do not disappear when the power is disconnected; the equipment has to be repaired.

      Symmetrical faults are balanced faults: the sinusoids are symmetrical about their axis and represent a steady-state condition.
      Asymmetrical faults display a DC offset and become symmetrical after some time.

      Wednesday, October 19, 2011

      Availability of hardware in substations

      The software and hardware in the substation are designed to meet high availability requirements. This means high reliability (long mean time to fail, MTTF) and short downtime (short mean time to recover, MTTR). MTTF is the statistical time until a component needs repair. Short downtimes can be achieved by extensive diagnostic functions, modular hardware designs, fast reconfiguration and automatic restart after a power supply failure.
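
      A commonly used relationship (not stated explicitly above) is availability = MTTF / (MTTF + MTTR); a quick illustration with made-up figures:

      /* availability.c - steady-state availability from MTTF and MTTR.
       * The figures below are examples only.                            */
      #include <stdio.h>

      int main(void)
      {
          double mttf_hours = 50000.0;   /* mean time to fail (example)    */
          double mttr_hours = 2.0;       /* mean time to recover (example) */

          double availability = mttf_hours / (mttf_hours + mttr_hours);

          printf("availability = %.5f (%.4f%% downtime)\n",
                 availability, (1.0 - availability) * 100.0);
          return 0;
      }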

      The recommended redundancy:
      • Repair faulty parts at the process and bay level in less than two hours, and in less than 4 h for the station level.
      • Standby hardware exists, physically connected and pre-configured.
      • Warm standby - the standby HW constantly supervises the active HW and automatically takes over on a failure. Time stamped events may be lost; commands are usable again after 10-30 s.
      • Hot standby - the standby HW constantly supervises the active HW and takes over on a failure. No time stamped events are lost; commands are usable again after 1-5 s. At bay level the switch-over time is less than 100 ms.
      Self supervision of IEDs
      • Immunity against EMI
      • A/D conversion may be subject to aging and should be supervised with a reference signal
      • A watchdog should supervise the response time of the processing algorithms
      • Checksums are used to detect failures in memory
      • Loss of power should be checked
      Supervision of communication:
      • All communication devices ( Star couplers, routers, switches) are subjected to self supervision
      • Detection of errors, check the response time and counting lost messages






        Communication requirements of Substation Automation architecture

        The introduction of microprocessors into the substation allows process data to be handled in digital form; ADCs are used to convert the analog data into digital. This digital data is not distorted by aging of the hardware and can easily be exchanged over serial communication, but serial communication introduces additional delays. Also, the information processing hardware must withstand the harsh environment in the substation, especially EMI.
        The data is acquired at the process level by means of remote I/O units (RIO) and intelligent sensors (PISA = process interface for sensors and actuators). The process bus connects them to the bay level equipment.

        Communication requirements:
        Maximum allowed age - the worst-case response time that can be tolerated. This time must be guaranteed in normal operation.
        Data integrity - the degree of communication safety in the case of disturbances. Data that directly influences the process requires higher integrity.
        Exchange method - spontaneous means communicated as soon as it happens; on request means communicated when requested by some function or a human.

        (data type - maximum allowed age - integrity - exchange method)
        Alarms - 1 s - medium - spontaneous
        Commands - 1 s - high - spontaneous
        Process state data - 2 s (binary), 5-10 s (measured) - medium - spontaneous (gives an overview of the process state)
        Time stamped events - 10 s - low - on request (used for later analysis)
        Interlocking data - 5 ms - high - spontaneous (used to prevent dangerous commands)
        Interlocking data (state info) - 100 ms - high - on request
        Trip from protection - 3 ms - high - spontaneous (used to clear faults)

        The actual communication throughput capacity must be higher than needed for normal operation (at least 10% higher). When we design the communication system we should avoid single points of failure.
        No communication message failure shall lead to an unsafe action. This can be tackled using communication error detection mechanisms and by making the transmission medium immune to disturbances (reducing the number of bit errors). Today the process bus protocols typically have a Hamming distance of at least 4-6 to detect transmission errors, which is sufficient for medium integrity. In a substation the error rate is higher than in a telecommunication environment; therefore glass fiber is used in the process bus and special communication procedures like "select before operate" are introduced.
        No lost or late message is allowed to lead to an unsafe action. Messages could be lost due to buffer overflows or overloaded routers and switches; therefore lost messages and loss of a message source should be detected. In IEC 61850 a topical flag is used to indicate that data is up to date. Glass fiber can cover distances up to 2000 m without losing transmission speed, while plastic fiber is used for shorter lengths (tens of meters); plastic fiber also ages sooner than glass.

        Today we can place the microprocessor based relays close to the process. In the new architecture, physical signal marshalling is replaced by logical signal marshalling, which means the complexity stays the same. Electrical CAD systems are replaced by signal engineering tools.

        For redundancy we duplicate the protection devices at least in HV substations.
        To provide passive safety, at least two telegrams are sent to the logical node before a command is executed. This two-step approach is called select before operate (SBO). The HMI sends the select command to the CBC node, the CBC sends a 'selected' response back to the HMI, and only then does the HMI send the operate command to the exact switch.
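
        As a purely conceptual sketch of that two-step exchange (not an IEC 61850 implementation; the node name and functions are placeholders):

        /* sbo_sketch.c - conceptual select-before-operate (SBO) sequence. */
        #include <stdio.h>
        #include <stdbool.h>

        static bool selected = false;

        bool cbc_select(int sw)            /* step 1: HMI selects the switch    */
        {
            selected = true;
            printf("CBC: switch %d selected\n", sw);
            return true;                   /* 'selected' response to the HMI    */
        }

        bool cbc_operate(int sw)           /* step 2: operate only after select */
        {
            if (!selected) { printf("CBC: operate rejected\n"); return false; }
            printf("CBC: operating switch %d\n", sw);
            selected = false;
            return true;
        }

        int main(void)
        {
            cbc_operate(3);                /* rejected: passive safety          */
            cbc_select(3);
            cbc_operate(3);                /* accepted                          */
            return 0;
        }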

        Substation Automation Structure

        The business benefits of substation automation are:
        • Better information, Higher productivity
        • Intelligent automation, Higher productivity and higher availability

        Substation automation structure include Station level, Bay level and process level.

        Station Level provides remote communication to the network control center (NCC), HMI, station level automation, data evaluation and archiving, condition monitoring, events and alarms, station level protection and data exchange. Station level equipment is always separated into two rooms - the operation room and the communication equipment room.
        Bay Level provides bay level automation, time synchronization, condition monitoring, bay level protection, bay level control, object protection and data acquisition.

        Process Level includes GIS or AIS switchgear, instrument transformers, power transformers and surge arresters. The outputs of the VTs are 100 V or 200 V, and of the CTs 1 A or 5 A.

        Time synchronization has two general approaches:
        Separate synchronization pulse - a separate wire to all the IEDs
        Using the communication buses - a master clock broadcasts time telegrams and the slaves regularly ask for the time

        Wednesday, August 24, 2011

        Functions and Protocols in the OSI Model

         Application Layer

        The protocols at the application layer handle file transfer, virtual terminals, network management, and fulfilling networking requests of applications. A few of the protocols
        that work at this layer include:
        • File Transfer Protocol (FTP)
        • Trivial File Transfer Protocol (TFTP)
        • Simple Network Management Protocol (SNMP)
        • Simple Mail Transfer Protocol (SMTP)
        • Telnet
        • Hypertext Transfer Protocol (HTTP)

        Presentation
        The services of the presentation layer handle translation into standard formats, data compression and decompression, and data encryption and decryption. No protocols work at this layer, just services. The following lists some of the presentation layer standards:
        • American Standard Code for Information Interchange (ASCII)
        • Extended Binary-Coded Decimal Interchange Code (EBCDIC)
        • Tagged Image File Format (TIFF)
        • Joint Photographic Experts Group (JPEG)
        • Motion Picture Experts Group (MPEG)
        • Musical Instrument Digital Interface (MIDI)

        Session
        The session layer protocols set up connections between applications, maintain dialog control, and negotiate, establish, maintain, and tear down the communication channel.
        Some of the protocols that work at this layer include:
        • Network File System (NFS)
        • NetBIOS
        • Structured Query Language (SQL)
        • Remote procedure call (RPC)

        Transport
        The protocols at the transport layer handle end-to-end transmission and segmentation into a data stream. The following protocols work at this layer:
        • Transmission Control Protocol (TCP)
        • User Datagram Protocol (UDP)
        • Secure Sockets Layer (SSL)/Transport Layer Security (TLS)
        • Sequenced Packet Exchange (SPX)

        Network
        The responsibilities of the network layer protocols include internetworking service, addressing, and routing. The following lists some of the protocols that work at this layer:
        • Internet Protocol (IP)
        • Internet Control Message Protocol (ICMP)
        • Internet Group Management Protocol (IGMP)
        • Routing Information Protocol (RIP)
        • Open Shortest Path First (OSPF)
        • Novell Internetwork Packet Exchange (IPX)

        Data Link
        The protocols at the data link layer convert data into LAN or WAN frames for transmission, convert messages into bits, and define how a computer accesses a network. This layer is divided into the Logical Link Control (LLC) and the Media Access Control (MAC) sublayers. Some protocols that work at this layer include the following:
        • Address Resolution Protocol (ARP)
        • Reverse Address Resolution Protocol (RARP)
        • Point-to-Point Protocol (PPP)
        • Serial Line Internet Protocol (SLIP)

        Physical
        Network interface cards and drivers convert bits into electrical signals and control the physical aspects of data transmission, including optical, electrical, and mechanical requirements.
        The following are some of the standard interfaces at this layer:
        • High-Speed Serial Interface (HSSI)
        • X.21
        • EIA/TIA-232 and EIA/TIA-449

        Tuesday, August 9, 2011

        RTU and Substation automation

        A Remote Terminal Unit (RTU) lies between the substation and the network control center (NCC). Basically, the RTU interfaces the devices in the physical world to the SCADA system. An RTU can be connected to the central station over different communication media (usually serial - RS232, RS485, RS422 - or Ethernet), and it can support standard protocols (Modbus, IEC 60870-5-101/103/104, DNP3, ICCP, etc.) to interface with any third-party software.

        Saturday, August 6, 2011

        Common Data classes

        IEC 61850-7-3 defines common data classes for a wide range of well known applications. The
        core common data classes are classified into the following groups:

        – status information,
        – measurand information
        – controllable status information,
        – controllable analogue information,
        – status settings,
        – analogue settings
        – description information.

        There are services to exchange these data.The services defined in IEC 61850-7-2 are called abstract services.

        The four main building blocks of the Substation Automation System


        • the substation automation system specific information models(logical nodes and data)
        • the information exchange methods(interface)
        • the mapping to concrete communication protocols, (mapping to MMS and TCP/IP)
        • the configuration of a substation IED.

        Friday, August 5, 2011

        Access Control Administration

        Once an organization develops a security policy, supporting procedures, standards, and guidelines, it must choose the type of access control model: DAC, MAC, or role-based. After choosing a model, the organization must select and implement different access control technologies and techniques. Access control matrices, restricted interfaces, and content-dependent, context-dependent, and rule-based controls are just a few of the choices.

        Centralized Access Control Administration

        An AAA protocol is the authentication protocol used; AAA stands for authentication, authorization, and auditing. Depending upon the protocol, there are different ways to authenticate a user in this client/server architecture. The traditional authentication protocols are Password Authentication Protocol (PAP), Challenge Handshake Authentication Protocol (CHAP), and a newer method referred to as Extensible Authentication Protocol (EAP).

        Remote Authentication Dial-In User Service (RADIUS) is a network protocol that provides client/server authentication and authorization, and audits remote users. RADIUS uses UDP. Terminal Access Controller Access Control System (TACACS) provides the same functionality as RADIUS with a few differences in some of its characteristics; TACACS uses TCP. RADIUS encrypts only the user's password as it is being transmitted from the RADIUS client to the RADIUS server. Other information, such as the username, accounting, and authorized services, is passed in cleartext. TACACS+ encrypts all of this data between the client and server and thus does not have the vulnerabilities inherent in the RADIUS protocol.

        RADIUS is the appropriate protocol when simplistic username/password authentication can take place and users only need an Accept or Deny for obtaining access, as in ISPs. TACACS+ is the better choice for environments that require more sophisticated authentication steps and tighter control over more complex authorization activities, as in corporate networks.

        Diameter is another AAA protocol that provides the same type of functionality as RADIUS and TACACS+ but also provides more flexibility and capabilities to meet the new demands of today’s complex and diverse networks. RADIUS and TACACS+ are client/server protocols, which means the server portion cannot send unsolicited commands to the client portion.Diameter is a peer-based protocol that allows either end to initiate communication.

        Decentralized Access Control Administration

        A decentralized access control administration method gives control of access to the people closer to the resources—the people who may better understand who should and should not have access to certain files, data, and resources. But centralized Access Control Administration is recommended in implementations to maintain the privacy of the system.

        Access Control Models

        The main characteristics of the three different access control models are important to understand.
        • DAC (Discretionary Access Control) Data owners decide who has access to resources, and ACLs are used to enforce the security policy.
        • MAC (Mandatory Access Control) Operating systems enforce the system's security policy through the use of security labels, e.g. security clearances. In a military environment the classifications could be top secret, secret, confidential, and unclassified; a commercial organization might use confidential, proprietary, corporate, and sensitive.
        • RBAC(Role-Based Access Control) Access decisions are based on each subject’s role and/or functional position.
        Once an organization determines what type of access control model it is going to use, it needs to identify and refine its technologies and techniques to support that model.

        Access Control Techniques

        Access control techniques are used to support the access control models.
        • Access control matrix Table of subjects and objects that outlines their access relationships
        • ACL Bound to an object and indicates what subjects can access it
        • Capability table Bound to a subject and indicates what objects that subject can access
        • Content-based access Bases access decisions on the sensitivity of the data, not solely on subject identity
        • Context-based access Bases access decisions on the state of the situation, not solely on identity or content sensitivity
        • Restricted interface Limits the user’s environment within the system, thus limiting access to objects
        • Rule-based access Restricts subjects’ access attempts by predefined rules

        Thursday, August 4, 2011

        Single Sign On(SSO) Technologies

        If the user has to enter a different user ID and password every time he accesses a service such as a printer or file server, it becomes an overhead for the user to remember all the usernames and passwords. Users tend to write them down, and then security is exposed. Managing user passwords and renewing them is an overhead for the administrators too. If the user has to remember only one password, more security can be enforced on that password by using a longer password with higher entropy. SSO offers one-time user authentication (user ID and password), after which the user is good to access all the services. One bottleneck in achieving SSO is inadequate interoperability between services.

        Examples of Single Sign-On Technologies 
        • Kerberos Authentication protocol that uses a KDC (Key Distribution center) and tickets, and is based on symmetric key cryptography 
        • SESAME(Secure European System for Applications in a Multi-vendor Environment) Authentication protocol that uses a PAS(Privileged attribute server like KDC) and PACs(Privileged attribute certificates), and is based on symmetric and asymmetric cryptography 
        • Security domains Resources working under the same security policy and managed by the same group 
        • Thin clients Terminals that rely upon a central server for access control, processing, and storage

        Wednesday, July 27, 2011

        Access control and markup languages

        Organizations need a way to control how their information is used internally within their applications. Extensible Markup Language (XML) is the standard that provides the meta data structures to allow this expression of data. Organizations need to be able to communicate their information, and since XML is a global standard, as long as they both follow the XML rules, they can exchange data back and forth.Users on the sender’s side need to be able to access services on the receiver’s side, which the Service Provisioning Markup Language (SPML) provides. The receiving side needs to make sure the user who is making the request is properly authenticated by the sending company before allowing access to the requested service, which is provided by the Security Assertion Markup Language (SAML). To ensure that the sending and receiving companies follow the same security rules, they must follow the same security policies, which is the functionality that the extensible Access Control Markup Language (XACML) provides.

        Simple Object Access Protocol (SOAP) is a protocol specification for exchanging structured information in the implementation of Web Services in computer networks. It relies on Extensible Markup Language (XML) for its message format, and usually relies on other application layer protocols, most notably Remote Procedure Call (RPC) and Hypertext Transfer Protocol (HTTP), for message negotiation and transmission.

        This XML-based protocol consists of three parts: an envelope (which defines what is in the message and how to process it), a header, and a body.


        Access control

        Access control determines which subjects can access which objects and what types of commands and operations they can carry out.

        Access control categories:
        • Administrative controls (personnel controls, supervisory structure, security awareness training, testing)
        • Physical controls ( Network segregation, Perimeter Security, Computer controls, work area separation, cabling, control zones)
        • Technical controls (System access, network architecture, Network access, encryption and protocols, auditing)
        Access control types

        • Preventive - keep undesirable events from happening
        • Detective - identify undesirable events that have taken place
        • Corrective - correct undesirable events that have taken place
        • Deterrent - discourage security violations from taking place (we are serious about security: "Beware of dogs")
        • Recovery - restore resources and capabilities after a violation or accident
        • Compensating - provide alternatives to other controls (based on cost/benefit analysis)

        Process of getting access in to the system
        • Identification - publicly known information, but it shouldn't be descriptive (username, user ID)
        • Authentication - something you know (password, PIN), something you have (smart card, token) and something you are (biometrics). Strong authentication uses two of these authentication components.
        • Authorization - ACLs
        • Accountability
        It is important to assess your passwords by trying to crack them yourself using the tools available. Passwords can be cracked using dictionary attacks and exhaustive (brute-force) attacks. Rainbow tables make password cracking easier by matching hash values. As a solution to this we can use one-time passwords with an authentication server (challenge-response authentication).

        Smart cards are a good method of authentication. There are two types of smart cards: contact (the chip must be inserted into a reader) and contactless (a small antenna inside). Fault generation is one of the attacks against smart cards: manipulating something outside the card (e.g. the reader) to get at the data in the smart card. Then there are software attacks exploiting software flaws inside the card. A side-channel attack means we are not doing anything to the card, just watching and gathering information (emitted radiation, the time it takes to authenticate). Micro-probing is connecting to the circuits directly by peeling off the chip on the card.

        Data classification and clearance

        Data classification is really important in industry; there is a lot of news about security leaks due to poor data classification. In the military, data classification and clearance have higher importance: data is classified as unclassified, confidential, secret and top secret. Why don't we just call all the data top secret and consider it done? If we did so, we would waste money on unnecessary security measures and a lot of man power managing them. So it is really important to design a data classification model appropriate to our industry. It is also important to define the security clearances: we have to define who the data owners are, what their responsibilities are and what the data classifications in the organization are. To start, we should build a security policy which outlines everything that we decided upon; then we have our procedures, guidelines and standards to define it further.

        Too many classification levels are impractical and add confusion. Too few classification levels give the perception of little value and use, and there should be no overlap between classification levels. It is very common for companies to have three classification levels. We should also follow a standardized approach for our information classification criteria.

        The weakest link in security is people. That's why employee management is really important when you look at enterprise security. 80% of threats are internal and 20% are external (the 80/20 rule). People make mistakes. Policies should be enforced for recruiting people, terminating people and security training.

        Hiring and Firing procedures:

        Pre employment
        • Background check
        • security clearance
        • Credit check
        • drug screening
        Termination procedures:
        • Complete an exit interview (review non-disclosure agreement)
        • Individual must surrender ID, keys and company assets
        • User's accounts must be disabled

        Tuesday, July 26, 2011

        Want to be a Database administrator

        A personal database is typically maintained by the individual who owns it and uses it. However, corporate or enterprise-wide databases are typically important enough and complex enough that the task of designing and maintaining the database is entrusted to a professional, called the database administrator (DBA).

        The DBA is responsible for many critical tasks:

        • Design of the Conceptual and Physical Schemas - Based on the users' requirements, the DBA must design the conceptual schema (decide what relations to store) and the physical schema (decide how to store them).
        • Security and Authorization: The DBA is responsible for ensuring that unauthorized data access is not permitted
        • Data Availability and Recovery from Failures: The DBA must take steps to ensure that if the system fails, users can continue to access as much of the uncorrupted data as possible. The DBA is responsible for implementing procedures to back up the data periodically and maintain logs of system activity (to facilitate recovery from a crash).
        • Database Tuning: Users' needs are likely to evolve with time. The DBA is responsible for modifying the database, in particular the conceptual and physical schemas, to ensure adequate performance as requirements change.

        Features of DBMS

        A very important advantage of using a DBMS is that it offers data independence. Application programs are insulated from changes in the way the data is structured and stored. Users can be shielded from changes in the logical structure of the data, or changes in the choice of relations to be stored; this property is called logical data independence. The conceptual schema insulates users from changes in physical storage details; this property is referred to as physical data independence.

        A DBMS provides a specialized language, called the query language, in which queries can be posed.
        Query languages:

        • Relational calculus- based on mathematical logic, and queries in this language have an intuitive, precise meaning. 
        • Relational algebra-based on a collection of operators for manipulating relations
        A DBMS enables users to create, modify, and query data through a data manipulation language (DML). Thus, the query language is only one part of the DML, which also provides constructs to insert, delete, and modify data. Let's discuss the DML features of SQL in a later blog post.

        An important task of a DBMS is to schedule concurrent accesses to data so that each user can safely ignore the fact that others are accessing the data concurrently. A locking protocol is a set of rules to be followed by each transaction. A lock is a mechanism used to control access to database objects. Two kinds of locks are commonly supported by a DBMS:

        • Shared locks on an object can be held by two different transactions at the same time
        • Exclusive lock on an object ensures that no other transactions hold any lock on this object

        A DBMS must ensure that the changes made by incomplete transactions (system interruptions, crashes) are removed from the database. To do so, the DBMS maintains a log of all writes to the database. A crucial property of the log is that each write action must be recorded in the log (on disk) before the corresponding change is reflected in the database itself. This property is called write-ahead logging, or WAL.
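
        As a purely conceptual sketch of the WAL rule (not how any particular DBMS implements it; the file names and record format are made up):

        /* wal_sketch.c - conceptual write-ahead logging rule (illustrative). */
        #include <stdio.h>

        int main(void)
        {
            FILE *log  = fopen("db.log", "a");
            FILE *data = fopen("db.dat", "w+");
            if (!log || !data) return 1;

            /* 1. append the log record and force it towards stable storage  */
            fprintf(log, "txn=42 page=7 old=100 new=250\n");
            fflush(log);               /* a real DBMS would also fsync() here */

            /* 2. only now is the change applied to the data file             */
            fseek(data, 7L * 512L, SEEK_SET);
            fputs("250", data);

            fclose(data);
            fclose(log);
            return 0;
        }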

        Introduction to Database

        What is a database? A database is a collection of structured data. A database captures an abstract representation of the domain of an application.
        • Typically organized as “records” called as entities
        • and relationships between records
        A DBMS is a (usually complex) piece of software that sits in front of a collection of data, and mediates applications accesses to the data, guaranteeing many properties about the data and the accesses.A data model is a collection of high-level data description constructs that hide many low-level storage details. A DBMS allows a user to define the data to be stored in terms of a data model. Most database management systems today are based on the relational data model.A widely used semantic data model called the entity-relationship (ER) model allows us to pictorially denote entities and the relationships among them.

        A description of data in terms of a data model is called a schema.In the relational model, the schema for a relation specifies its name, the name of each field (or attribute or column), and the type of each field.

        In addition to the relational data model (which is used in numerous systems, including IBM's DB2, Informix, Oracle, Sybase, Microsoft's Access, FoxBase, Paradox, Tandem, and Teradata), other important data models include ,
        • Hierarchical model (e.g., used in IBM's IMS DBMS)
        • Network model (e.g., used in IDS and IDMS)
        • Object-oriented model (e.g., used in Objectstore and Versant)
        • Object-relational model (e.g., used in DBMS products from IBM, Informix, ObjectStore, Oracle, Versant, and others).
        The database description consists of a schema at each of these three levels of abstraction:  
        • Conceptual Schema : Describes the stored data in terms of the data model of the DBMS.Describes all relations that are stored in the database
        • Physical Schema : specifies additional storage details.summarizes how the relations described in the conceptual schema are actually stored on secondary storage devices such as disks.
        • External Schema :allows data access to be customized (and authorized) at the level of individual users or groups of users
        A data definition language (DDL) is used to define the external and conceptual schemas (SQL is a well-known DDL). The process of arriving at a good physical schema is called physical database design, and the process of arriving at a good conceptual schema is called conceptual database design.

        Enterprise security architecture

        Layered approach : provide layers of defense that the attacker has to break before accessing an asset

        Industries follow this approach and then think their system is secure, but they forget that the remote access and wireless networks don't have enough layers in place. Security requirements can be identified as functional requirements and assurance requirements. Organizations choose to be certified against the BS7799 standard to provide confidence to their customer base and partners; that is why industries make the effort to comply with these standards.

        Sometimes the numbering of the IT security standards is confusing. The BS7799 security standard has two parts; after ISO took BS7799 under its wing, it introduced its own numbering.

        BS7799 part 1 - ISO17799 outlines control objectives and a range of controls that can be used to meet those objectives
        BS7799 part 2 - ISO27001 outlines how a security program can be setup and maintained.

        COBIT (Control Objectives for Information and related Technology) defines a method for building the IT infrastructure. It is not just about security; COBIT is a whole framework for how to set up an IT infrastructure. In COBIT there are four domains:
        1. Planning and Organization
        2. Acquisition and implementation
        3. delivery and support
        4. Monitoring
        In security we are mostly looking at the delivery and support domain. COBIT is great, but it is really time consuming to implement. For security professionals there are specific things to learn from COBIT:
        • Management of IT security
        • IT security plan
        • Identity management
        • User account management
        • Security testing,surveillance and monitoring
        The whole point of COBIT is to keep IT aligned with the business. It has performance indicators and defines goals. COBIT is a very high level approach to information security, and that's how the auditors look at it: they look at the control objective and check whether the control is in place.

        Security governance means that security is controlled not just by IT but together with board members and senior management. Everybody who is supposed to be involved should be involved in security. Security policies, standards, baselines, guidelines and procedures have to act together to realize strong security.

        The data owner is the person responsible for protecting the data. The custodian (usually the IT department) does the actual security setup to make sure the data meets that protection level.

        Risk management

        Risk management is difficult because we are looking at the future. Most of the time enterprises have the question, "what is an acceptable risk level?" They have to comply with the regulations, look at the assets that they have to protect, and assess the importance of those assets to understand their sufficient security level. How much security is enough is a cost-benefit balance.

        (1) Planing the risk management:
        • Identify Teams
        • Identify Scope
        • Identify Methods (Qualitative and quantitative)
        • Identify tools
        • Understand acceptable risk level

        Every company has a different risk appetite, that is, how much risk they are willing to take. An acceptable risk level has to be set in the enterprise: the business drivers help define the acceptable risk level, and management has to set the level. The team just brings the information to management. But this is very abstract, so we then define security policies; security policies should reflect the acceptable risk level in the system.

        (2) Collect Information:
        • Identify Assets
        • Assign value to assets
        • Identify vulnerability and threats
        • Calculate risks
        • Cost/benefit analysis
        • Uncertainty analysis

        Collecting information is a time consuming process. It is really important to identify the assets that are to be protected. There are tangible (hardware) and intangible (data, reputation) assets; intangible assets are harder to protect. How do we assign a value to an asset? We have to determine its cost, its value to adversaries, its reliability and its criticality. We have to consider, if something happens to a specific asset, what it will cost the company in the near term and the long term.
        We have to determine the type of analysis we are going to carry out; whether it is qualitative or quantitative depends on the requirements of the company. Managers like to see quantitative analysis. Quantitative has to do with monetary values, and qualitative is opinion based.

        Qualitative analysis is commonly used in the industry. Experts rate the level of risk. If we define levels according to the probability of occurrence vs the consequences of occurrence, there are levels like minor risks, high incidence risks, contingency risks and significant risks. We have to address the significant risks first and then the rest.

        Single Loss expectancy (SLE) = Asset value x exposure factor


        The exposure factor is the percentage of damage that we think would take place if the vulnerability were exploited. We look at one asset and one threat, then we calculate the cost impact of this on the company.

        The probability of something taking place is called the annualized rate of occurrence (ARO). The ARO is the number of incidents expected annually; it is an annual metric, so once a year means an ARO of 1.0.

        Annual loss expectancy (ALE) = SLE x ARO

        The ALE is the potential loss that the company could go through. This is how we determine which risk we correct first; it helps us to categorize the threats and define the road map and budget allocation.

        A purely quantitative analysis is not possible, but a purely qualitative analysis is. We cannot be exact about values that lie in the future. That is why most industries choose qualitative analysis over quantitative analysis.

        Losses can be potential or delayed. We have to look at what the potential losses are and what the delayed losses are; potential means what will happen quickly. In the event of a virus attack, the potential loss would be the inaccessibility of the server, and the delayed loss would be the loss of reputation.

        The cost/benefit calculation for a countermeasure also depends on a lot of variables: cost, maintenance fees, impact on productivity and the amount of man power required.

        Value of countermeasure = (ALE before we put the counter measure) - (ALE after putting the countermeasure) - Annual cost of the countermeasure

        If this value is negative, it means implementing the countermeasure is not cost beneficial. It is not just cost; there is a whole list of things that we have to look at in a countermeasure. Does it follow least privilege, is it flexible, does it provide uniform protection, is it modular in nature, does it require human intervention, does it provide auditing functionality, has it been tested and can it be tested? Wherever people are involved, that is where mistakes take place.
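
        A quick worked example of these formulas, with purely invented figures for the asset value, exposure factor, ARO and costs:

        /* risk_example.c - SLE / ALE / countermeasure value with made-up figures. */
        #include <stdio.h>

        int main(void)
        {
            double asset_value     = 100000.0;  /* example asset value            */
            double exposure_factor = 0.25;      /* 25% damage if exploited        */
            double aro             = 0.5;       /* expected once every two years  */

            double sle        = asset_value * exposure_factor;  /* 25000          */
            double ale_before = sle * aro;                      /* 12500          */

            double ale_after           = 5000.0;  /* ALE with the countermeasure  */
            double countermeasure_cost = 4000.0;  /* annual cost                  */

            double value = ale_before - ale_after - countermeasure_cost;

            printf("SLE = %.0f, ALE = %.0f, countermeasure value = %.0f\n",
                   sle, ale_before, value);
            /* value = 12500 - 5000 - 4000 = 3500 > 0, so in this example the
             * countermeasure is cost beneficial; a negative value would not be. */
            return 0;
        }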

        The disadvantages of quantitative analysis are that it requires a large amount of preliminary work, the formulas are complex and inflexible, and there are no real standards on how to carry it out. In the qualitative approach, assigning rating values is simple, it allows for flexibility in processes and in reporting results, and it requires less preliminary work. The disadvantages of qualitative analysis are that it is subjective, it is opinion based and it is hard to map onto a budget. But it is the most used in the industry.

        The following formulas are conceptual; you cannot put actual values into them.

        Total risk = Threats x vulnerability x asset value

        Total risk is the risk when we haven't put any countermeasures in place. If we act upon the vulnerability, what remains is the residual risk: the countermeasure reduces the risk but does not get rid of all the threats.

        Residual risk = Threats x vulnerability x asset value x control gap
        (control gap = what the control can not protect against)

        Total risk - Controls = Residual risk

        When we show the results of the analysis, people need some confidence in the information that was used for the analysis. An uncertainty analysis assigns the amount of trust to be placed in the information that we are using.

        Management is liable to take action on the risks. There are four ways of dealing with risks: mitigation, transfer, acceptance and avoidance. Management needs to know what to do with the information that was collected.

        (3) Management's responses to identified risks:
        • Risk mitigation - implement countermeasures
        • Risk transfer - Third-party involvement like purchasing cyber insurance
        • Risk acceptance - Informed decision, no action taken when it is not cost beneficial
        • Risk avoidance - decide to stop activity
        Risk acceptance:
        • cost decision
        • pain decision
        • visibility decision

        RISK MANAGEMENT

        PLAN -> COLLECT INFORMATION -> DEFINE RECOMMENDATIONS

        Due diligence and due care on secure systems

        Standards are best practices, and it is better to follow open standards to build secure systems. In interconnected systems everybody depends on others; following open standards makes the interconnection easier and improves interoperability. Introducing proprietary security systems is not best practice. When building a secure system we should consider the following control categories.
        • Administrator controls (Defining policies, Awareness training, Risk management)
        • Technical controls (Routers, IDS, Encryption, Auditing)
        • Physical controls (locks, security guards)
        All of these categories have to work together to achieve holistic security. But in real enterprises there are gaps between technical people and managers. Technical people complain that top management doesn't listen to their requests, and the managers say that all they hear are requests for more money. These gaps create vulnerabilities in the system. Technical people have to learn to make a business case according to the business drivers.

        Companies consider only the technology when they are building security programs; they should consider the technology, the business processes and the people using them. Security people have to understand the regulations and legal requirements (federal laws, state laws). When laws come down to agencies (regulatory bodies), the agencies define regulations. Security people also have to understand the business drivers and the level of risk.

        Due diligence and due care are important in building security systems. Due diligence is assessing the vulnerabilities in the system, and due care is doing something about it and fixing the problem. Due diligence means uncovering potential dangers, carrying out assessments, performing analysis on the assessment data, implementing risk management and researching and understanding the vulnerabilities, threats and risks. If you are brought into court because of an attack on your enterprise security system, due diligence is your protector.

        Regulations force industries to comply with security requirements. Regulations are important to prevent corruption; the USA took a serious look at regulation after the Enron downfall.

        Sunday, July 10, 2011

        Why power needs monitoring

        Power cannot be stored like water and gas, so power has to be generated according to the demand. Power generation senses the load demand through falling frequency. We cannot always direct power along a specific path, because current always follows the path of least resistance. This means power transmission and distribution have to be monitored and controlled throughout the day. The smart grid is the solution for most of the problems in power control and monitoring. From power generation to power consumption, transformers step the voltage up and down to deliver power efficiently to the customer. The electric grid mainly consists of power generation, power transmission and power distribution. Substations transform the voltage from high to low or the reverse, and electrical power may flow through several substations on its way from the generation plant to the consumer.

        Network security for Substation automation systems

        Power system reliability has gained higher priority in equipment implementation in the power industry over the years. The power industry has paid more attention to the information infrastructure that supports monitoring and controlling the power system since the August 14, 2003 blackout. The initial problem in the 2003 blackout was a power equipment problem, but the ongoing and cascading failures were due to problems in providing the right information to the right place within the right time.

        Communication protocols are one of the most critical parts of power system operations, responsible for communication between equipment and for controlling that equipment. These protocols rarely incorporated any security measures, since they were very specialized. "Security by obscurity" has been the primary approach, because only operators were allowed to control breakers, and only from highly protected control centers. With increasing electricity market forces, security by obscurity is no longer a valid concept: the electricity market is pressuring participants to gain any edge they can, and it is all about winning and losing bids. Also, the older communication protocols are being replaced by standardized, well-documented protocols that are more susceptible to hackers and other security breaches. Since a power system failure has far greater scope and cost, it is obvious that security in the power system is a crucial factor.

        Now there are two infrastructures to be managed in the power system: one is the power system infrastructure and the other is the information infrastructure. With that said, any unreliability in the information infrastructure can make the power system unreliable, so the information system has to ensure its own reliability level in order to provide the required reliability level in the power system.

        The International Electrotechnical Commission (IEC) Technical Council (TC) 57 Power Systems Management and Associated Information Exchange is responsible for developing international standards for power system data communications protocols. IEC TC57 has developed three widely accepted protocols, and has been the source for the IEC 61850. Those three protocols are IEC 60870-5, DNP 3.0 and IEC 60870-6.

        IEC 61850 protocol security


        IEC 61850 is an Ethernet (IEEE 802.3) based communication protocol used for control and automation of electric substations using microprocessor based Intelligent Electronic Devices (IEDs). It was developed jointly by the IEC (International Electrotechnical Commission) and the IEEE with the aim of providing a flexible and interoperable communication system which could be easily integrated into the infrastructure of existing substations.

        IEC 61850 is a protocol used for control and automation of substations. In a substation automated with IEC 61850, the IEDs communicate via this protocol. An IED could be any measuring instrument which has a microprocessor, such as a current transformer or voltage transformer, or a protection device such as a relay. They communicate peer to peer, by broadcast messages and as client/server. For example, if a current transformer detects over current in the line, it broadcasts the value to the protective devices so they can act accordingly.
        The IED network within a substation contains two main buses.

        They are namely process bus and the station bus.
        • Process Bus -Transfers unprocessed power system information to the processing IED’s
        • Station Bus - integrates all process buses together and provides the interface to external networks. Human Machine Interfaces (HMI) are connected to the station bus.

        IEC61850 has been designed considering the security aspects of the communication. The existing security mechanisms of IEC61850 are mentioned in IEC62351-4 and IEC62351-6.
        These include:

        • IEC62351-4 specifies the ciphers used by IEC61850 for encryption. In addition, IEC62351-6 specifies the use of Transport Layer Security (TLS).
        • Security for IEC61850 profiles using VLAN’s. Partitioning of the network into VLAN’s prevent unauthorized access of IED’s outside the designated VLAN.
        • Security for Simple Network Time Protocol (SNTP) via the mandatory use of the authentication algorithms of RFC2030. This prevents tampering via false time stamp packets.
        • Explicit countering of man-in-the-middle attacks and tampering using the Message Authentication Code (MAC) of IEC62351-6.
        • Explicit countering of replay attacks via the specialized processing state machines mentioned in IEC62351-4.
        IEC 62351 - Data and communication security
        IEC has published a standard for data and communication security in power systems as IEC 62351, which includes parts 1 to 7.

        ·         IEC 62351-1: Data and Communication Security – Introduction
        ·         IEC 62351-2: Data and Communication Security – Glossary of Terms
        ·         IEC 62351-3: Data and Communication Security – Profiles Including TCP/IP
        ·         IEC 62351-4: Data and Communication Security – Profiles Including MMS
        ·         IEC 62351-5: Data and Communication Security – Security for IEC 60870-5 and Derivatives (i.e. DNP 3.0)
        ·         IEC 62351-6: Data and Communication Security – Security for IEC 61850 Profiles
        ·         IEC 62351-7: Data and Communication Security – Security Through Network and System Management


        IEC 61850 profiles that run over TCP/IP will use IEC 62351-3, in which the primary security measures are IPSec and TLS. It specifies the use of Transport layer security (TLS) which is commonly used over the Internet for secure interactions, covering authentication, confidentiality, and integrity. IEC 62351-4 provides security for profiles that include the Manufacturing Message Specification (MMS) with TLS.

        IEC 61850 also contains three protocols (GOOSE, GSE, and SMV) that are multicast datagrams and not routable, designed to run on a substation LAN or other non-routed network. The main protocol, GOOSE, is designed for protective relaying, where the messages need to be transmitted within 4 milliseconds peer-to-peer between intelligent controllers. Encryption or any other security measure that would affect the transmission rate is not acceptable here, so authentication is the only acceptable security measure, and IEC 62351-6 provides a mechanism by which these profiles can digitally sign the messages.

        IEC 62351 Part 5 relates to the specialties of serial communication. Here, additional security measures are defined to especially protect the integrity of the connections. This part also specifies the key management necessary for the security measures. IEC 62351 Part 7 describes security related data objects for end-to-end network and system management (NSM) and also security problem detection. These data objects support the secure control of dedicated parts of the energy automation network. IEC 62351 Part 8 addresses the integration of role-based access control mechanisms into the whole domain of power systems.

        Security in power system operation

        The security requirements of power systems are different from those of other industries. For instance, the internet environment is vastly different from the power system environment. So it is critical to have a good understanding of the security requirements and of the potential impact of security measures on the communication requirements of power system operations.

        Most security services have been developed for industries that do not have the strict performance and high-reliability requirements that the power industry does.

        • Denial of service has far more impact in the power industry than in many typical internet transactions. Preventing an authorized dispatcher from accessing substation control has far more serious consequences than preventing a customer from accessing his bank account.
        • Communication channels used in power systems are often narrowband, leaving little room for the overhead needed for encryption and key exchange.
        • In the power industry, some substations and equipment are located in unmanned remote areas, which makes many security measures difficult to implement.
        • Wireless communication is increasingly used for many applications, but it has to be implemented carefully in power systems because of the noisy electrical environment in substations.

        Power systems use a large variety of communication methods and performance characteristics, so a single security measure cannot counter all security threats. For instance, VPNs only secure the transport-level protocols, so additional security measures are needed to protect the application-level protocols. In power system communication, authentication plays a larger role in many security measures, because authenticating control actions is far more important than hiding data through encryption. Security truly is an “end-to-end” requirement to ensure authenticated access to sensitive power system equipment, reliable and timely information on equipment functioning and failures, backup of critical systems, and audit capabilities that permit reconstruction of crucial events.

        GOOSE/SMV protection

        GOOSE stands for Generic Object Oriented Substation Event. By using GOOSE over the station bus, the aim is to replace the conventional hardwired logic used for inter-relay coordination. When an IED detects an event, it multicasts the values to notify the devices that have registered to receive the data. Because this information is time critical, performance requirements are stringent: in GOOSE communication no more than 4 ms is allowed to elapse from the time an event occurs to the time the message is transmitted. In order to replace the conventional method of contacts and wires, the GOOSE transfer time should be less than 3 ms for a Trip GOOSE command and 20 ms for a Block GOOSE command, as specified in IEC 61850-5 'Communication requirements for functions and device models'. The amount of data generated after an event depends on the network topology the IEDs follow, the number of IEDs in the network and the type of event. Collisions are therefore quite possible, so GOOSE messages are retransmitted multiple times by each IED. The GOOSE model groups data values into data sets to be published, and in peer-to-peer publishing it has several attributes that can be used to control the publishing process.
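
        As a rough illustration of the repeat-with-increasing-interval behaviour described above, here is a small sketch in Python. The interval values and the publish stub are my own illustrative choices, not figures from IEC 61850.

        import time

        def publish(msg: bytes) -> None:
            # Placeholder for the actual layer-2 multicast send of a GOOSE frame.
            print(f"{time.monotonic():.3f}s  sent {msg!r}")

        def goose_burst(msg: bytes, t_min: float = 0.004, t_max: float = 1.0) -> None:
            # After an event, repeat quickly at first, then back off towards the
            # steady-state (heartbeat) interval t_max.
            interval = t_min
            while interval < t_max:
                publish(msg)
                time.sleep(interval)
                interval = min(interval * 2, t_max)
            publish(msg)    # from here on, the message would keep repeating every t_max

        goose_burst(b"breaker trip event")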

        GOOSE messaging is very important for multi-vendor interoperability. The purpose and advantages of GOOSE are:
        • Only a single LAN cable/fibre is required instead of conventional metallic wiring between protection devices or between protection devices and primary equipment. This reduces the total cost involved in building a substation system.
        • Multi-vendor interoperability: connections between IEDs from different vendors are much easier to achieve.
        • Modifications or additions to data communications between IEDs can be achieved easily by re-configuring the IEDs' GOOSE settings, rather than by complex metallic wiring.
        Virtual LAN vulnerabilities
        In the power industry, VLANs are used for layer-2 security. Virtual LAN (VLAN) technology is used to create logically separate LANs on the same physical switch; each port of the switch is assigned to a VLAN.
        VLANs alone, however, are not secure enough for GOOSE and SMV messages. VLAN switch implementations have been susceptible to a variety of denial-of-service attacks, including traffic flooding, MAC flooding and CAM table poisoning (CAM refers to the Content Addressable Memory used to list the MAC addresses reachable through each switch port).
        VLAN switch configurations and deployments have been vulnerable to a number of spoofing and man-in-the-middle attacks. The most well known exploits include the following. (Links at the end of this article lead to detailed descriptions.)

        • MAC address spoofing
        • VLAN tag spoofing (where the attack computer falsely identifies itself as a member of a VLAN by spoofing the IEEE 802.1Q tag)
        • ARP cache poisoning
        • Connection hijacking following a successful ARP attack
        • Multicast Brute Force Attack
        • Random Frame Stress Attack
        • Private VLAN Attack
        The power industry has paid increasing attention to the information infrastructure that supports monitoring and controlling the power system. Communication protocols are one of the most critical parts of power system operations, responsible for communication between equipment and for controlling it. The IEC 61850 protocol, used for control and automation of substations, has been designed with the security aspects of communication in mind, and its existing security mechanisms are specified in the IEC 62351 standard.

        The security requirements of power systems are different from those of other industries. In order to maintain security in power systems, constant vigilance and monitoring are needed, as well as adaptation to changes in the overall power system. The main purpose of security protection is to detect an attack and eliminate it from the system. Power systems use a large variety of communication methods and performance characteristics, so a single security measure cannot counter all security threats.

        In IEC 61850 there are mainly five message types used for communication: Sampled Measured Values (SMV), GOOSE, MMS, GSSE and time sync. There are four types of information exchange methods: client/server services, GOOSE/GSE management services, GSSE services and time sync exchange. IEC 61850 profiles that run over TCP/IP use the security measures IPSec and Transport Layer Security (TLS). Client/server and GSSE information exchange, which use MMS, rely on network layer and transport layer security measures to achieve secure communication. With the stringent performance requirements of GOOSE and SMV message communication, encryption or other security measures that would significantly affect transmission rates are not acceptable; therefore, authentication is the primary security measure for GOOSE and SMV. VLANs have security vulnerabilities, and a VLAN implementation alone is not enough for GOOSE/SMV communication. We should research better authentication schemes that match the substation communication requirements, and such an authentication value could be implemented as an extension to the GOOSE message.



        Performance analysis of IPSec

        With the development of web services, more social and commercial networks are being introduced onto the internet. These internet applications deal with various types of data, and securing data over networks is becoming a more critical issue. Network security should provide confidentiality, integrity and authenticity for data networks, and network-layer security protection is essential to internet communication. The IP Security (IPSec) protocol is the best-known and most widely deployed security protocol for securing data communication on the internet at the network layer. The performance evaluation of IPSec is therefore an important factor in network security: it is important to achieve network security without degrading the performance of the communication system. In this paper, we analyze IPSec performance as a security protocol for a network security gateway.

        IPSec operates at the network layer and has two modes of operation: transport mode and tunnel mode. There are two major protocols in the IPSec protocol suite: the Authentication Header (AH) protocol and the Encapsulating Security Payload (ESP) protocol. ESP provides confidentiality, integrity and authenticity for the communication; AH ensures authenticity and integrity of the protected data. IPSec consults the Security Policy Database (SPD) and the Security Association Database (SAD) to determine how to secure the IP packets. The security policy determines the security services offered to the IP flow. The Security Associations (SAs) act as the contract between the two communicating entities; they determine the IPSec protocol used in the transforms, the keys, and the duration for which the keys are valid. The Internet Key Exchange (IKE) creates SAs dynamically on behalf of IPSec and manages the SAD, providing key management for the communicating entities. Establishing an IPSec connection requires two phases: Phase 1 performs mutual authentication and produces the encryption key required to protect Phase 2 transactions, and Phase 2 negotiates the cipher and authentication algorithm used to protect the subsequent communication.
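
        As a rough, non-authoritative illustration of the kind of work an ESP transform does per packet, here is a simplified encrypt-then-authenticate sketch (AES-CBC plus HMAC-SHA1) using the third-party Python 'cryptography' package. It is not the real ESP packet layout; the key sizes and field order are illustrative only.

        import hashlib
        import hmac
        import os

        from cryptography.hazmat.primitives import padding
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        enc_key = os.urandom(32)     # AES-256 key (illustrative)
        auth_key = os.urandom(20)    # HMAC-SHA1 key (illustrative)

        def esp_like_protect(plaintext: bytes) -> bytes:
            # Pad and encrypt the inner packet (confidentiality)...
            iv = os.urandom(16)
            padder = padding.PKCS7(128).padder()
            padded = padder.update(plaintext) + padder.finalize()
            encryptor = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
            ciphertext = iv + encryptor.update(padded) + encryptor.finalize()
            # ...then append an integrity check value over the ciphertext (authenticity).
            icv = hmac.new(auth_key, ciphertext, hashlib.sha1).digest()
            return ciphertext + icv

        protected = esp_like_protect(b"inner IP packet bytes")
        print(len(protected), "bytes on the wire (illustrative)")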

        Security is a critical factor in the development of the internet. IPSec is a suite of protocols that provides source authentication, data integrity and data confidentiality at the network layer, in both IPv4 and IPv6 environments. Linux kernel 2.6 is a powerful platform for developing a security gateway, and we have analyzed the performance of the security gateway under different configurations of ESP tunneling.

        When compression is applied, we can see a drop in IPSec performance. This performance decrease is due to the relation between the encryption algorithm speed and the compression algorithm speed: when we apply compression on top of a high-speed encryption algorithm in IPSec, it causes the throughput to degrade. HMAC-MD5 shows higher performance than HMAC-SHA1, both with and without compression. AES also performs better than the other encryption mechanisms; DES and 3DES have lower throughput because of their time-consuming encryption process.

        The increase or decrease in throughput is based on a combination of elements: the layer at which the mechanism resides, the header size, and the relative speeds of compression, encryption and transfer. AES offers better encryption performance than DES and 3DES, and HMAC-MD5 offers better authentication performance than HMAC-SHA1. We can achieve high network security with low performance degradation by implementing an ESP tunnel with AES encryption and HMAC-MD5 authentication.
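
        For a quick sanity check of the HMAC-MD5 versus HMAC-SHA1 comparison on your own machine, a rough throughput micro-benchmark like the following can be used (Python standard library only; the block size and iteration count are arbitrary choices of mine).

        import hashlib
        import hmac
        import os
        import time

        key = os.urandom(20)
        block = os.urandom(64 * 1024)    # 64 KiB hashed per call
        iterations = 2000

        for name in ("md5", "sha1"):
            start = time.perf_counter()
            for _ in range(iterations):
                hmac.new(key, block, getattr(hashlib, name)).digest()
            elapsed = time.perf_counter() - start
            mib = iterations * len(block) / (1024 * 1024)
            print(f"HMAC-{name.upper()}: {mib / elapsed:.1f} MiB/s")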

        Binary data modulation with coding


        When we design a communication system, we need to consider the transmitter and receiver structures, the probability of error, the bandwidth occupancy of the modulated signal and the bandwidth efficiency. Communication performance is a critical factor in achieving error-free transmission.


        Let’s discuss the main blocks in a communication system and how to organize them to achieve higher performance. The data to be sent is generated in the data source and fed into the channel encoder. The purpose of the channel encoder is to introduce redundant bits to combat the effects of noise and interference on the channel; channel coding is thus a signal transformation designed to improve communication performance. Convolutional coding is one type of channel coding. The important characteristic of convolutional coding is that the coder has memory. K is a parameter called the constraint length of the convolutional coder: the output n-tuple emitted by the coder is a function not only of the current input k-tuple but also of the previous K-1 input k-tuples.
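
        To make the memory property concrete, here is a minimal sketch of a rate-1/2 convolutional encoder in Python. The constraint length K = 3 and the generator polynomials (7 and 5 in octal) are my own illustrative choices, not values from the work described here.

        def conv_encode(bits, generators=(0b111, 0b101), K=3):
            """Rate-1/2 convolutional encoder: two output bits per input bit."""
            state = 0                                   # the previous K-1 input bits
            out = []
            for b in bits:
                reg = (b << (K - 1)) | state            # current bit + shift-register contents
                for g in generators:                    # one output bit per generator polynomial
                    out.append(bin(reg & g).count("1") % 2)   # XOR of the tapped positions
                state = reg >> 1                        # shift in the new bit, drop the oldest
            return out

        print(conv_encode([1, 0, 1, 1]))                # -> [1, 1, 1, 0, 0, 0, 0, 1]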


        The output of the channel encoder is fed into the digital modulator. The primary purpose of the digital modulator is to map the binary information sequence into signals suitable for transmission over the channel. We test coherent phase shift keying (PSK) and coherent frequency shift keying (FSK) modulation techniques in our communication system. A coherent receiver is one that has phase-recovery circuitry: the receiver knows both the frequency and the phase of the carrier signal used in the transmission. In PSK the signal carries the information in its phase, and in FSK the signal carries the information in its frequency.


        The modulated signal is transmitted via the communication channel, the physical medium we use to carry data from the transmitter to the receiver. The essential feature of this physical medium is that the transmitted signals are corrupted in a random manner by various mechanisms. Here we have used an Additive White Gaussian Noise (AWGN) channel for our simulations. The modulated signal is transmitted over the channel and then converted back into a sequence of binary data by the demodulator. The channel decoder then attempts to decode the channel-encoded sequence. As the channel decoder in the receiver we can use maximum-likelihood decoding or Viterbi decoding; we used Viterbi decoding for our simulations. The output of the channel decoder in this scenario is an approximation of the original data.
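
        For readers who want to reproduce the uncoded baseline, here is a small Monte-Carlo sketch of BPSK over an AWGN channel using NumPy. The number of bits and the Eb/N0 values are illustrative choices of mine and are not tied to the simulations described above.

        import numpy as np

        def bpsk_ber(ebn0_db, n_bits=200_000, seed=0):
            rng = np.random.default_rng(seed)
            bits = rng.integers(0, 2, n_bits)
            symbols = 2 * bits - 1                      # map 0 -> -1, 1 -> +1 (unit energy)
            ebn0 = 10 ** (ebn0_db / 10)
            sigma = np.sqrt(1 / (2 * ebn0))             # noise standard deviation for Eb = 1
            received = symbols + sigma * rng.standard_normal(n_bits)
            decisions = (received > 0).astype(int)      # hard decision at threshold 0
            return np.mean(decisions != bits)

        for ebn0_db in (0, 2, 4, 6, 8):
            print(f"Eb/N0 = {ebn0_db} dB  ->  BER = {bpsk_ber(ebn0_db):.5f}")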


        In the design we need to consider the probability of error, the bandwidth occupancy of the modulated signal and the bandwidth efficiency of the communication system. With coding we can achieve a lower probability of error without increasing the signal-to-noise ratio. By introducing channel coding to the modulation, we raised the performance of the communication system. But when we use coding we need more bandwidth for data transmission: bandwidth is the price we pay for the higher performance obtained with channel coding.


        We can achieve a lower bit error probability using BPSK modulation than with BFSK modulation; BPSK showed higher performance than BFSK at the same signal-to-noise ratio. With coding we can increase the performance of the communication system further: when we use channel coding we can achieve the same probability of error that we had with uncoded modulation at a lower signal-to-noise ratio. We saw that code rate 1/2 performed better than code rate 1/3 at the lower probability-of-error levels. But there is a price to pay when we use channel coding in communication systems: we need more bandwidth, because we transmit more bits in the same bit duration, so more coding redundancy (a lower code rate) results in higher spectral occupancy. Normally, BPSK has less spectral occupancy than BFSK.


        Of the two, BPSK is the better modulation technique for binary data transmission. BPSK with channel coding raised the performance level of the communication system, at the cost of higher bandwidth utilization.

        Tuesday, March 29, 2011

        Crypto concepts used in IPSEC

        IPSEC is a very complicated and very extensible system for network security. IPSEC uses symmetric ciphers for encryption and HMACs for data authentication. Internet Key Exchange (IKE) is basically an authenticated Diffie-Hellman exchange. There are several ways of performing the authentication: one uses digital signatures, another involves HMACing a shared secret, and a third uses public-key encryption to authenticate the peer.
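
        To show the basic Diffie-Hellman idea behind IKE, here is a toy sketch in Python. The prime (a Mersenne prime) and generator are purely illustrative and far smaller than what a real IKE group uses; this is not secure code.

        import secrets

        p = 2**127 - 1          # a Mersenne prime, used here only as a toy modulus
        g = 3                   # illustrative generator

        a = secrets.randbelow(p - 2) + 1        # Alice's private value
        b = secrets.randbelow(p - 2) + 1        # Bob's private value

        A = pow(g, a, p)                        # Alice sends A to Bob
        B = pow(g, b, p)                        # Bob sends B to Alice

        shared_alice = pow(B, a, p)             # both sides derive the same shared secret
        shared_bob = pow(A, b, p)
        assert shared_alice == shared_bob
        print(hex(shared_alice))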

        IKE, IPSEC's standard key exchange, has an option for perfect forward secrecy; it adds extra overhead by performing a fresh Diffie-Hellman exchange at each rekey interval. Denial-of-service attacks can force a computer to do unnecessary work while it is trying to achieve security, which can effectively shut the computer down. A DoS attack can be launched against a cryptographic system whenever the attacker can cause the system to do more work in response to the attack than is needed to launch it. Thankfully, IPSEC and IKE are constructed with partial defenses against denial-of-service attacks, which at least increase the cost and complexity of launching them.

        Monday, March 28, 2011

        Killed the presentation

        I recently found that I am good at planning things, and I wanted to try that skill on the course presentation. I wasn't a good presenter before, but with my new plan I have become one. I wrote down a few steps on how to prepare for a presentation, and it really worked for me. After following the steps below while getting ready for the presentation, I was well organized and very confident about it.

        Presentation plan:
        • Prepare the presentation
        • Write down the exact points that you are going to say in a paper
        • Practice the presentation several times
        • Check timing
        • Try to remember the important slides
        • Keep the flow of the presentation in mind
        • Be normal and calm in presenting
        • Keep the point paper when you present

        Thursday, March 24, 2011

        IPTV Customer has to be managed

        When I was in the IPTV industry I worked closely with a customer account management software called Geneva, a product of IBM Cognos version 7. Account management was really important in billing the customer. We enrolled a single customer who used PSTN, broadband and IPTV into one account, so he had different products, namely PSTN, internet and IPTV, in the same account. We define price plans for each IPTV channel package we create in the system. At the customer-creation stage we add the specific package to the customer with its price plan: we add IPTV as the parent product and then add other services like TSTV, VOD and SVOD as child products with the relevant price plans. In this way we have control over the product price plans, which is important from a marketing perspective. Billing is according to the user's base package, VOD subscription and channel subscription. When we create a product in GENEVA, a work order is passed to the workforce management software to initiate the work, so product creation in GENEVA is the starting point of IPTV provisioning. Account information is important in product creation: if it is an existing customer, information like the telephone number and billing address is already in the system; if it is a new customer, all the information has to be entered accurately, and we have to begin with the PSTN connection first, then the broadband connection, and finally IPTV provisioning.

        IPTV is not normal TV

        Most telecommunication companies all over the world are moving towards convergence in their networks with triple play: voice, video and data (broadband internet). IPTV, which stands for Internet Protocol TV, is a technology that multicasts TV channels and VOD over broadband networks. The bandwidth of the network plays a major role in the IPTV business. Another aspect of IPTV is the video encoding and compression mechanism used in video delivery. I have worked closely with the UT Starcom IPTV system, known as UT Starcom's Rolling Stream IPTV system (http://www.utstar.com/). IPTV and satellite TV are two different technologies competing for the same market segment.

        Let me explain the IPTV architecture, starting with how live channels are transmitted over broadband networks. The trick-play function on live channels gives you control over what you are watching: you can pause the live stream and go back in time. Time shifting is the term used in IPTV for going back in time to watch something that has already been telecast. Live channels are transmitted via satellite and fibre from the television stations to the IPTV head end.

        Sunday, March 20, 2011

        My new NIKON D3100

        I bought a new NIKON D3100 last week. This DSLR is known as an entry-level camera and really suits an amateur photographer. The camera is light and has a good grip, so it can be held in one hand. It has 14.2 megapixels, 2 MP more than its predecessor, the D3000, and it also offers 1080p HD movie recording, an improvement over the D5000. The D3100 gives in-camera retouching options, so you can finish a quality picture in the camera itself. I like that NIKON now gives you good-quality pictures and HD video recording too. The D3100 has a help guide for photography beginners, which helped me a lot in getting to know the camera. I took some snaps around and noticed that the colour of the pictures was great. The live view lever is positioned in a great place on the rear of the camera and gives easy access to video recording. I am planning to shoot some videos in the London, ON area with the D3100 and make a short documentary about it; let's see how well it goes. I found that a USB cable does not come with the camera, which was a little disappointing, but I could still get my first photos onto my computer because my laptop has an SD card reader. I am still discovering my camera, and it is quite a beauty.