Abstract
The Transport scrubber converts ambiguous network flows (TCP/IP) into well-behaved
flows that are interpreted identically by all downstream end hosts. The Fingerprint
scrubber restricts an attacker's ability to determine the operating system of a protected host.
1. Introduction:
The world is surging towards a digital revolution in which computer networks mediate every
aspect of modern life. Today, the threat to information on the network has grown greater
than ever, and information is the most vital asset of every organization. Access to the
Internet opens the world to communicating with customers and vendors, and is an
immense source of information. But the same openness can expose your network to
attacks by thieves and vandals. These attackers and hackers try to harm systems and disrupt
information by exploiting vulnerabilities with various techniques, methods, and tools.
Network security is a broad topic and covers a multitude of sins. Our project
aims at making the network secure by detecting and neutralizing network attacks.
It involves outsmarting intelligent, dedicated, and sometimes well-funded adversaries,
thus maintaining the integrity and secrecy of valued information.
2. Brief Idea of Project:
Protocol scrubbers are transparent, active interposition mechanisms for explicitly removing
network scans and attacks at various protocol layers of IPv4. Our software introduces
two types of scrubbers: the Transport scrubber and the Fingerprint scrubber. The transport
scrubber supports downstream passive network-based intrusion detection systems by converting
ambiguous network flows into well-behaved flows that are unequivocally interpreted by all
downstream endpoints. The TCP scrubber is based on a novel simplified state machine that
performs in a fast and scalable manner. The fingerprint scrubber restricts an
attacker's ability to determine the operating system of a protected host. The fingerprint
scrubber is built upon the TCP scrubber and removes additional ambiguities from flows
that can reveal implementation-specific details about a host's operating system.
3. The Problems faced & our Project as a solution:
There are many different implementations of TCP/IP available in the market today. These
implementations vary significantly in many respects, which gives rise to various ambiguities.
Attackers can use these ambiguities to deceive network security systems, posing a real
threat to commerce, banking, and mission-critical applications. The aim of our project is
to explicitly remove network scans and attacks at various protocol layers by
standardizing this ambiguous behavior.
Some of the attacks and their causes (ambiguities in TCP/IP) are listed below:
a. Insertion and evasion attacks: These attacks subvert the NIDS by exploiting
ambiguity between the NIDS and the end host: either the NIDS accepts a retransmitted
packet in place of the original while the end host rejects it (insertion), or the end host
accepts it while the NIDS rejects it (evasion).
b. Weakness in sequence numbers: It has long been recognized that the ability to
know or predict initial sequence numbers (ISNs) can lead to manipulation or spoofing of
TCP connections. Systems relying on random increments to make ISNs harder to guess
are still vulnerable to statistical attack.
c. Fingerprinting attacks: The behavior of TCP for unexpected combinations of
TCP header flags, such as SYN|ACK|FIN, is not the same across operating systems (OSs),
and different OSs return the same TCP options in different orders. Similar
ambiguities arise in IP headers, such as unused combinations of TOS bits. ICMP
responses may contain different amounts of payload data on different operating
systems, and different OSs implement ICMP rate limiting at different rates.
d. Port scanning: One can detect if a particular port is open or closed by sending a SYN
packet on that port and examining the reply.
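The insertion/evasion ambiguity in item (a) can be made concrete with a small simulation. The sketch below is illustrative only (the function and variable names are our own, not part of the project): it shows how two reasonable but different retransmission policies reconstruct two different byte streams from the same packets.

```python
# Hypothetical sketch: differing retransmission policies let an attacker
# show one byte stream to the NIDS and another to the end host.

def reassemble(segments, prefer_original):
    """Rebuild a byte stream from (seq, data) segments.

    prefer_original=True  -> keep the first copy of each byte (one policy),
    prefer_original=False -> let retransmissions overwrite (the other policy).
    """
    stream = {}
    for seq, data in segments:
        for i, byte in enumerate(data):
            if prefer_original and (seq + i) in stream:
                continue  # first copy wins
            stream[seq + i] = byte
    return bytes(stream[k] for k in sorted(stream))

# Attacker sends "GET /safe" first, then retransmits overlapping bytes "evil".
segments = [(0, b"GET /safe"), (5, b"evil")]
nids_view = reassemble(segments, prefer_original=True)   # b"GET /safe"
host_view = reassemble(segments, prefer_original=False)  # b"GET /evil"
```

Because the two views disagree, a signature that the NIDS checks against its view never matches what the end host actually executes; the scrubber's job is to make both views identical.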
Contribution towards the solution:
i. To overcome insertion and evasion attacks, we maintain a reassembly queue
of unacknowledged data. Whenever any unacknowledged data is retransmitted,
the original data is copied over the data in the packet to remove possible ambiguity.
ii. We impose a canonical ordering on known TCP options; unknown options
are placed after all known options. TCP options that were introduced
for performance benefits, such as the selective acknowledgment (SACK)
option, are left untouched.
iii. We store a random number when a new connection is initiated. Each TCP segment
for the connection traveling from the trusted interface to the untrusted interface has its
sequence number incremented by this value, and each segment traveling in the opposite
direction has its ACK number decremented by it.
iv. We force all TCP connections to complete the three-way handshake (3WHS);
otherwise they are blocked. This defeats port scans that do not use the 3WHS.
v. We standardize all fields whose values are operating-system dependent. These include
the initial window size, TCP options such as maximum segment size (MSS),
TCP window scale, and TCP timestamp, the IP ID field, TOS bits, the time-to-live
field, etc. These fields are standardized in the responses to the seven probe packets
sent by Nmap.
vi. Similarly, the ICMP scanning techniques used in fingerprinting are countered by
standardizing the number of data bytes sent back in error messages and by rate limiting
these messages.
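The sequence-number shifting in item (iii) can be sketched as follows. This is a minimal illustration under assumed names (the project's in-kernel code would operate on raw headers, not Python objects); the 32-bit modular arithmetic is standard TCP behavior.

```python
# Sketch of item iii: per-connection ISN randomization at the scrubber.
import random

MOD = 2 ** 32  # TCP sequence numbers are 32-bit and wrap around

class SeqScrubber:
    """Adds a per-connection random offset to outbound sequence numbers
    and removes it from inbound ACK numbers, hiding the host's real ISN."""

    def __init__(self):
        self.offset = random.getrandbits(32)  # stored at connection setup

    def outbound_seq(self, seq):
        # trusted -> untrusted: shift the sequence number
        return (seq + self.offset) % MOD

    def inbound_ack(self, ack):
        # untrusted -> trusted: undo the shift on the peer's ACK
        return (ack - self.offset) % MOD

s = SeqScrubber()
assert s.inbound_ack(s.outbound_seq(1000)) == 1000  # round-trips cleanly
```

Because the offset is chosen fresh for every connection, even a host with a predictable ISN generator presents statistically random ISNs to the untrusted side.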
4. How the project works:
We have implemented the protocol scrubber in the Linux 2.6 kernel on a gateway (or an
IP-forwarding router) of the network we intend to protect.
Transport Scrubber:
We take both a stateful and a stateless approach towards the solution.
In the stateful approach we maintain the state of each connection. We make
sure that every packet to be scrubbed passes through our state diagram; any invalid packets
(e.g., packets with illegal flag combinations set) are discarded or modified, while packets
showing normal behavior are forwarded to the end host. During state maintenance we also keep
any unacknowledged data in a TCP reassembly queue in order to defeat insertion and evasion
attacks. Whenever unacknowledged data is retransmitted, we copy the original data over the
new data, keeping the data in a consistent state. In this way, all end hosts interpret the
data uniformly and the ambiguity is removed.
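The copy-over rule described above can be sketched as a small data structure. The class and method names below are our own illustration, not the project's actual kernel code: the scrubber remembers every unacknowledged byte and rewrites any overlapping retransmission so that only one version of the data ever reaches downstream hosts.

```python
# Minimal sketch (assumed names) of the reassembly-queue copy-over rule.

class ReassemblyQueue:
    def __init__(self):
        self.unacked = {}  # sequence number -> original byte value

    def scrub(self, seq, data):
        """Return the payload the scrubber forwards for this segment."""
        out = bytearray(data)
        for i in range(len(data)):
            if seq + i in self.unacked:
                out[i] = self.unacked[seq + i]  # original wins over retransmit
            else:
                self.unacked[seq + i] = data[i]  # remember first-seen bytes
        return bytes(out)

    def ack(self, ack_seq):
        # Drop bytes the receiver has acknowledged; they can no longer differ.
        self.unacked = {s: b for s, b in self.unacked.items() if s >= ack_seq}

q = ReassemblyQueue()
q.scrub(0, b"GET /safe")
forwarded = q.scrub(5, b"evil")  # overlapping retransmit with different content
# forwarded == b"safe": the original bytes are copied over the new ones
```

With this rule in place, the insertion/evasion disagreement disappears: whatever retransmission policy an end host uses, it only ever sees one version of each byte.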
Fingerprint Scrubber: TCP/IP stack fingerprinting is the process of determining the
identity of a remote host's operating system by analyzing packets from that host. Freely
available tools (such as Nmap and Queso) scan TCP/IP stacks efficiently by quickly
matching query results against a database of known operating systems. To counteract this,
the fingerprint scrubber standardizes the responses (i.e., the OS-dependent fields) sent by
different operating systems, since these are the fields from which an attacker gleans
information about the operating system running on the end host. The fields to be
standardized are chosen in advance based on the feasibility of modifying them and their
effect on performance. Such a step defeats Nmap's fingerprinting attempts in most
cases and, in the remaining ones, fakes the results.
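One of these standardizations, the canonical TCP-option ordering from item (ii) of Section 3, can be sketched as follows. The particular option names and canonical order below are assumptions for illustration; the point is that two stacks emitting the same options in different orders become indistinguishable after scrubbing.

```python
# Illustrative sketch of canonical TCP-option ordering (assumed canon).
KNOWN_ORDER = ["MSS", "WScale", "SACKPermitted", "Timestamp"]

def canonicalize_options(options):
    """Sort known TCP options into a fixed order; unknown options follow
    in their original relative order, so option ordering no longer
    identifies the sender's OS."""
    known = [o for o in options if o[0] in KNOWN_ORDER]
    unknown = [o for o in options if o[0] not in KNOWN_ORDER]
    known.sort(key=lambda o: KNOWN_ORDER.index(o[0]))
    return known + unknown

# Two OSs emitting the same options in different orders now look identical:
linux_like = [("Timestamp", 123), ("MSS", 1460), ("WScale", 7)]
bsd_like   = [("MSS", 1460), ("WScale", 7), ("Timestamp", 123)]
assert canonicalize_options(linux_like) == canonicalize_options(bsd_like)
```

Note that only the ordering is rewritten, never the option values, so performance features such as SACK keep working end to end.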
5. Marketability of the project:
The protocol scrubber can be merged into existing firewall technologies with great ease,
enhancing their performance and providing security measures against the deadliest of
attacks. It removes attacks as an active participant in each flow's behavior, while
functioning as a fail-closed, real-time NID system. It can be placed in front of critical
network infrastructure or eventually built into routers and switches. Accordingly, the
protocol scrubber can be considered a commercially viable entity, able to complement
recent network security components.
6. Mentionable aspects:
The TCP scrubber's in-kernel implementation offers significant performance advantages
over a user-space transport proxy. By maintaining only three states for inbound connections,
the TCP scrubber scales significantly better than an endpoint TCP stack, whose much more
complex state machine must handle timer scheduling, round-trip-time estimation, and
window-size calculations.
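A three-state tracker of the kind described above might look like the sketch below. The state names and transition rules are our assumptions about how such a simplified machine could be organized, not the project's exact states; the sketch only shows why so little state suffices to block handshake-less scans and illegal flag probes.

```python
# Hedged sketch: a minimal three-state tracker for inbound connections.
LISTEN, SYN_SEEN, ESTABLISHED = range(3)

def next_state(state, flags):
    """Advance the per-connection state; return None to drop the packet."""
    if state == LISTEN:
        return SYN_SEEN if flags == {"SYN"} else None    # must start with a bare SYN
    if state == SYN_SEEN:
        return ESTABLISHED if "ACK" in flags else None   # complete the 3WHS
    if state == ESTABLISHED:
        return ESTABLISHED if "SYN" not in flags else None  # no mid-flow SYNs
    return None

# A stray SYN|FIN probe (a classic fingerprinting packet) is dropped at once:
assert next_state(LISTEN, {"SYN", "FIN"}) is None
assert next_state(LISTEN, {"SYN"}) == SYN_SEEN
```

Because no timers, RTT estimates, or window calculations are kept, the per-connection footprint stays tiny, which is the source of the scalability advantage claimed above.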