PROCEEDINGS OF THE CONFERENCE
5.–7. 5. 2009, IDET BRNO, CZECH REPUBLIC
© 2009 by University of Defence
Kounicova 65
662 10 Brno
Czech Republic
First published in the Czech Republic in April, 2009
Published by University of Defence, Czech Republic
Cover designed by Omega design, s.r.o.
Title: Security and Protection of Information 2009
Subtitle: Proceedings of the Conference
Authors: Jaroslav Dočkal, Milan Jirsa (editors)
Conference: Security and Protection of Information, held in Brno, Czech Republic, May 5-7, 2009
ISBN: 978-80-7231-641-0
Contents
Introduction ..............................................................................................................................................2
Flow Based Security Awareness Framework for High-Speed Network ....................................................... 3
Pavel Čeleda, Martin Rehák, Vojtěch Krmíček, Karel Bartoš
Video CAPTCHAs ................................................................................................................................ 14
Carlos Javier Hernandez-Castro, Arturo Ribagorda Garnacho
Possibility to apply a position information as part of a user’s authentication ........................................... 23
David Jaroš, Radek Kuchta, Radimir Vrba
Hash Function Design - Overview of the basic components in SHA-3 competition ............................... 30
Daniel Joščák
Measuring of the time consumption of the WLAN’s security functions ................................................. 38
Jaroslav Kadlec, Radek Kuchta, Radimír Vrba
Experiences with Massive PKI Deployment and Usage .......................................................................... 44
Daniel Kouřil, Michal Procházka
Securing and Protecting the Domain Name System ............................................................................... 53
Anne-Marie Eklund Löwinder
A system to assure authentication and transaction security ..................................................................... 62
Lorenz Müller
Security analysis of the new Microsoft MAC solution ............................................................................ 77
Jan Martin Ondráček, Ondřej Ševeček
Cryptographic Protocols in Wireless Sensor Networks ........................................................................... 87
Petr Švenda
Risk-Based Adaptive Authentication .................................................................................................... 105
Ivan Svoboda
DNSSEC in .cz ................................................................................................................................... 112
Jaromír Talíř
Security for Unified Communications ................................................................................................. 113
Dobromir Todorov
Integrating Competitor Intelligence Capability within the Software Development Lifecycle ................ 115
Theo Tryfonas, Paula Thomas
Validation of the Network-based Dictionary Attack Detection ............................................................ 128
Jan Vykopal, Tomáš Plesník, Pavel Minařík
Introduction
It is once again a great pleasure to present the proceedings of the 5th scientific NATO Conference "Security and Protection of Information", held on May 5-7, 2009, at the Brno Trade Fair and Exhibition Centre. The Conference is organized by the University of Defence in Brno and is held under the auspices of the security director of the Czech Ministry of Defence.
This Conference is part of the accompanying programme of the IDET fair (International Fair of Defence and Security Technology and Special Information Systems). The high quality of this year's 5th Conference is guaranteed by the Programme Committee, which consists of the most respected authorities in the information security field from the Czech Republic and abroad.
The Conference was organized to promote the exchange of information among specialists working in this
field and to increase awareness regarding the importance of safeguarding and protecting secret information
within the Armed Forces of the Czech Republic as well as in the Czech Republic generally. Companies and
businesses that produce or sell security technology and services have provided significant assistance in
preparing this conference.
The issues that the Conference deals with can be divided into three thematic areas: information security in general, computer network security, and cryptography. We have chosen several of the best security specialists as invited speakers.
The Armed Forces of the Czech Republic consider it necessary to hold international conferences and
workshops dealing with security and protection of information regularly in the Czech Republic. The aim
of such conferences is both to inform the wider military public and specialists and to provide a forum for
exchange of information and experience. This means that the purpose of all our security conferences is not
only to offer up-to-date information, but also to bring together experts and other military and civilian
people from many fields who share a common interest in security.
Jaroslav Dočkal, PhD
Chairman of the Programme Committee
Flow Based Security Awareness Framework
for High-Speed Networks
Pavel Čeleda, Martin Rehák, Vojtěch Krmíček, Karel Bartoš
[email protected], [email protected], [email protected], [email protected]
Institute of Computer Science, Masaryk University, Brno, Czech Republic (Pavel Čeleda, Vojtěch Krmíček)
Department of Cybernetics, Czech Technical University in Prague, Czech Republic (Martin Rehák, Karel Bartoš)
Abstract
It is a difficult task for network administrators and security engineers to ensure network security awareness amid the daily barrage of network scans, spamming hosts, zero-day attacks and malicious network users hidden in the huge traffic volumes crossing the Internet. Advanced surveillance techniques are necessary to provide near real-time awareness of threats, external/internal attacks and system misuse.

Our paper describes a security awareness framework targeted at high-speed networks. It addresses the issues of detailed network observation and how to obtain information about who communicates with whom, when, how long, how often, using what protocol and service, and also how much data was transferred. To preserve users' privacy while identifying anomalous behavior, we use NetFlow statistics. Specialized flow monitoring probes, both standard and hardware-accelerated, are used to generate unsampled flow data from the observed networks.

We use several anomaly detection algorithms based on network behavior analysis to classify legitimate and malicious traffic. Compared with signature based methods, network behavior analysis allows us to recognize unknown or zero-day attacks. Advanced agent-based trust modeling techniques estimate the trustfulness of observed flows. The system operates in an unsupervised manner and identifies attacks against hosts or networks from network traffic observation. Using flow statistics allows the system to work even with encrypted traffic.

The incident reporting module aggregates malicious flows into incidents. The Intrusion Detection Message Exchange Format or plain text formatted messages are used to describe an incident and provide human readable system output. Email event notification can be used to send periodical security reports to security incident management tools.

The presented framework is developed as a research project and deployed on university and backbone networks. Our experiments performed on real network traffic suggest that the framework significantly reduces the false positive rate while being computationally efficient, and is able to process network traffic at speeds up to 1 Gbps in online mode.
Keywords: intrusion detection, network behavior analysis, anomaly detection, NetFlow, CAMNEP,
FlowMon, Conficker.
1 Introduction
Ensuring security awareness in high-speed networks is a human-intensive task these days. To perform such surveillance, we need a highly experienced network specialist who has a deep insight into the network behavior and perfectly understands the network states and conditions. The usual work procedure consists of observing traffic statistic graphs, looking for unusual peaks in the volumes of transferred bytes or packets, and consequently examining particular suspect incidents using tools like packet analyzers, flow collectors, firewall and system log viewers, etc. Such in-depth traffic analysis of particular packets and flows is time consuming and requires excellent knowledge of the network behavior.
The presented framework introduces a new concept which dramatically reduces the experience required of the network operator and takes over long-term network surveillance. The system operator does not need to constantly observe the network behavior, but can focus on security incident response and resolution. The system can report either classified incidents or all untrusted traffic. Consequently, the system operator receives reports about security incidents and examines them - checks whether they are false positives and, if needed, performs further necessary actions.
The paper is organized as follows: After a short system overview, we present our FlowMon based traffic monitoring platform, including the generation and collection of NetFlow data. Then we describe the CAMNEP framework and the reduction of false positives using trust modeling techniques. Finally, we conclude with a real life example discussing the detection of the Conficker worm. Due to the limited number of pages, we were not able to discuss all parts in deep detail. A more detailed description of the used methods and algorithms is available in our previous publications [10, 13].
2 System Architecture
The architecture of our system has four layers, which are typically distributed and where each layer
processes the output of the layer underneath, as shown in Figure 1.
At the bottom of the whole architecture, there are one or more standard or hardware-accelerated FlowMon probes [13] generating NetFlow data and exporting them to the collector. The open-source NfSen [4] and NFDUMP [3] tools are used to handle the NetFlow data. These tools also provide storage and retrieval of the traffic information for further forensic analysis. TASD (Traffic Acquisition Server Daemon) is an application managing communication between the acquisition layer and the agent layer via the TASI protocol. TASD reads NetFlow data, performs preprocessing to extract aggregated statistics, estimates entropies and sends these data to the upper layer.
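The paper does not show the preprocessing itself; purely as an illustration of the kind of per-window aggregation and entropy estimation described above, a minimal Python sketch could look as follows (the flow record fields and the chosen feature set are our assumptions, not the actual TASD/TASI interface):

import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (in bits) of the empirical distribution of values."""
    counts = Counter(values)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def window_statistics(flows):
    """Aggregate one observation window (e.g. 5 minutes) of flow records.
    Each flow is assumed to be a dict with NetFlow-like fields:
    src, dst, sport, dport, proto, packets, bytes."""
    return {
        "flows": len(flows),
        "packets": sum(f["packets"] for f in flows),
        "bytes": sum(f["bytes"] for f in flows),
        # entropies of traffic feature distributions, as used by AD methods [6]
        "H_src": shannon_entropy(f["src"] for f in flows),
        "H_dst": shannon_entropy(f["dst"] for f in flows),
        "H_sport": shannon_entropy(f["sport"] for f in flows),
        "H_dport": shannon_entropy(f["dport"] for f in flows),
    }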
The preprocessed data is then used for cooperative threat detection, performed in the third layer, called CAMNEP (Cooperative Adaptive Mechanism for Network Protection). Suspicious flows and network anomalies are either visualized in the graphical user interface or passed to the top level layer, which is responsible for visualization and interaction with the network operator. Email reports and event notifications are sent to a standard mailbox or to a ticket request system, e.g. OTRS (Open source Ticket Request System).
Figure 1: System block structure.
3 Flow Based Network Traffic Awareness
To be able to provide permanent network situational awareness, we need to acquire detailed traffic statistics. Such statistics can be complete packet traces, flow statistics or volume statistics. To efficiently handle high-speed traffic, a trade-off between computational feasibility and the provided level of information must be chosen.
Figure 2: Traffic monitoring system deployment in an operational network.
• Full packet traces, traditionally used by traffic analyzers, provide the most detailed information. On the other hand, the scalability and processing feasibility of permanent traffic observation and storage in high-speed networks are problematic, including high operational costs.
• Flow based statistics provide information from IP headers. They don't include any payload information, but from the IP point of view we still know who communicates with whom, at which time, etc. Such an approach can reduce the amount of data necessary to process and store by a factor of up to 1000.
• Volume statistics are often easy to obtain in the form of SNMP data. They provide a less detailed network view in comparison with flow statistics or full packet traces and don't allow advanced traffic analysis.
In our work we have decided to use NetFlow data for its scalability and its ability to provide a sufficient amount of information. NetFlow, initially available in Cisco routers, is now used in various flow enabled devices (routers, probes). Flow based monitoring allows us to permanently observe everything from small end-user networks up to large NREN (National Research and Education Network) backbone links.
In general, a flow is a set of packets which share a common property. The most important such properties are the flow's endpoints. The simplest type of flow is a 5-tuple, with all its packets having the same source and destination IP addresses, port numbers and protocol. Flows are unidirectional and all their packets travel in the same direction. A flow begins when its first packet is observed. A flow ends when no new traffic for the existing flow is observed (inactive timeout) or the connection terminates (e.g. a TCP connection is closed). An active timeout is the time period after which data about an ongoing flow are exported. Statistics on IP traffic flows provide information about who communicates with whom, when, how long, using what protocol and service, and also how much data was transferred.
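To make the flow abstraction concrete, the following is a minimal sketch of 5-tuple aggregation with active/inactive timeouts; it is an illustration only (field names and timeout values are assumed), not the FlowMon implementation:

ACTIVE_TIMEOUT = 300.0    # export data about an ongoing flow after 5 minutes (assumed value)
INACTIVE_TIMEOUT = 30.0   # expire a flow after 30 s without new traffic (assumed value)

def aggregate_flows(packets):
    """Group packets (dicts with ts, src, dst, sport, dport, proto, bytes)
    into unidirectional flows keyed by the 5-tuple. Packets are assumed
    to arrive ordered by timestamp."""
    active, exported = {}, []
    for p in packets:
        now = p["ts"]
        # expire flows that hit the inactive or active timeout
        expired = [k for k, f in active.items()
                   if now - f["last"] > INACTIVE_TIMEOUT
                   or now - f["start"] > ACTIVE_TIMEOUT]
        for key in expired:
            exported.append(active.pop(key))

        key = (p["src"], p["dst"], p["sport"], p["dport"], p["proto"])
        flow = active.setdefault(key, {"key": key, "start": now, "last": now,
                                       "packets": 0, "bytes": 0})
        flow["last"] = now
        flow["packets"] += 1
        flow["bytes"] += p["bytes"]

    exported.extend(active.values())   # flush whatever is left at the end
    return exported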
To acquire NetFlow statistics, routers or dedicated probes can be used. Currently not all routers support flow generation. Enabling flow generation can consume up to 30 - 40 % of the router's performance, with possible impacts on the network behavior. On the other hand, dedicated flow probes observe the traffic in a passive manner and the network functionality is not affected.
In our system we use FlowMon probes [13]. The FlowMon probe is preferred due to its implemented features, which include support for the NetFlow v5/v9 and IPFIX standards, packet/flow sampling, active/inactive timeouts, flow filtering, data anonymization, etc. The probe firmware and software can be modified to add support for other advanced features. Hardware-accelerated probes support line-rate traffic processing without packet loss. Standard probes are based on commodity hardware with lower performance. The FlowMon probe was developed in the Liberouter project and is now maintained by the INVEA-TECH company.
To provide input for the probes, TAP (Test Access Port) devices or SPAN (Switched Port Analyzer) ports can be used. TAP devices are non-obtrusive and are not detectable on the network. They send a copy (1:1) of all network packets to a probe. In case of failure the TAP has a built-in fail-over mode. The observed line will not be interrupted and will stay operational independent of any potential probe failure. Such an approach enables us to deploy monitoring devices in environments with high reliability requirements. SPAN (port mirroring) functionality must be enabled on the router/switch side to forward network traffic to a monitoring device. It is not necessary to introduce additional hardware into the network infrastructure, but we need to reconfigure the router/switch and take into account some SPAN port limits. A detailed comparison between using TAP devices and SPAN ports is given in [14].
4 Agent-Based Anomaly Detection Layer
The anomaly detection layer uses several network behavior analysis (NBA) [12] algorithms embedded within autonomous, dynamically created and managed agents. Each detection agent includes one anomaly detection method and a trust model, a knowledge structure that aggregates long-term experience with the specific traffic types distinguished by the agent.
The anomaly detection (AD) paradigm is appropriate for NetFlow data processing due to its nature and due to the relatively low effective dimensionality of network traffic characteristics, either in the volume [5] or parameter distribution characteristics [6]. The anomaly detection methods use the history of traffic observations to build a model of selected relevant characteristics of network behavior, predict these characteristics for the future traffic and identify the source of any discrepancy between the predicted and actually observed values as a possible attack. The main advantage of this approach is its ability to detect attacks that are difficult to detect using classic IDS systems based on the identification of known malicious patterns in the content of the packets, such as zero-day exploits, self-modifying malware, attacks in ciphered traffic, or resource misuse or misconfiguration.
The use of the collaborative agent framework helps us to address the biggest problem of anomaly-based intrusion detection systems, their error rate, which consists of two related error types. False Positives (FP) are legitimate flows classified as anomalous, while False Negatives (FN) are malicious flows classified as normal. Most standalone anomaly detection/NBA methods suffer from a very high rate of false positives, which makes them impractical for deployment. The multi-stage collaborative process of the CAMNEP system removes a large part of the false positives, while not increasing the rate of false negatives, and deploys a range of self-optimization and self-monitoring techniques to dynamically measure and optimize the system performance against a wide range of known attack techniques.
The collaborative algorithm, which has been described in [11], is based on the assumed independence of the false positives returned by the individual anomaly detection algorithms. In the first stage of the algorithm, each of the agents executes its anomaly detection algorithm on the flow set observed during the last observation period, typically a 5 minute interval. The AD algorithms update their internal state and return an anomaly value for each of the flows in the observed batch. These anomaly values are exchanged between the agents, so that they can all build the identical input data for the second stage of processing, when they update their trust models.
The trust models of each agent cluster the traffic according to the characteristics used by the anomaly detection model of the agent. This results in a different composition of clusters between the agents, as any two flows that may be similar according to the characteristics used by one agent can be very different according to another agent's model. Once the agents build the characteristic profiles of behavior, they use the anomaly values provided by the anomaly detection methods of all agents to progressively determine the appropriate level of trustfulness of each cluster. This value, accumulated over time, is then used as the individual agent's assessment of each flow.
In the last stage of the algorithm, dedicated aggregation agents aggregate the trustfulness values provided by the detection agents, using either a predetermined aggregation function or a dynamically constructed and selected aggregation function based on the results of the self-monitoring meta-process. The assignment of the final anomaly/trustfulness values positions the flows on the [0,1] interval and allows the user to visualize the status of the network during the evaluated time period.

The flows evaluated as untrusted or suspicious (i.e. falling under dynamically determined thresholds on the trustfulness interval) are analyzed to extract meaningful events that are then classified into several generic and user defined categories of malicious behavior before being reported to the user.
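As a toy illustration of this last stage (our own simplification, not the CAMNEP code), an aggregation agent can be thought of as computing a weighted mean of the per-agent trustfulness values of a flow and flagging the flows that fall below the chosen threshold:

def aggregate_trustfulness(per_agent_trust, weights=None):
    """Combine the [0, 1] trustfulness values assigned to one flow by the
    detection agents; a weighted mean stands in for the (possibly
    dynamically selected) aggregation function."""
    agents = list(per_agent_trust)
    weights = weights or {a: 1.0 for a in agents}
    total = sum(weights[a] for a in agents)
    return sum(weights[a] * per_agent_trust[a] for a in agents) / total

def split_by_threshold(flow_trust, threshold):
    """Separate flows whose aggregated trustfulness falls below the threshold."""
    untrusted = {f: t for f, t in flow_trust.items() if t < threshold}
    trusted = {f: t for f, t in flow_trust.items() if t >= threshold}
    return trusted, untrusted

# Example: three (hypothetical) agents mildly disagree about one flow
score = aggregate_trustfulness({"volume_agent": 0.20,
                                "entropy_agent": 0.35,
                                "port_agent": 0.15})
print(round(score, 3))   # 0.233 - below a threshold of, say, 0.4, so reported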
The deployment of this relatively complex algorithm is straightforward, as it is able to autonomously adapt to the network it is deployed on and to change the algorithm properties to match the dynamic nature of network traffic. On the lower level, the algorithm is based on the assumption of independence [1] between the results provided by the different agents. This reduces (in the several stages described above) the number of false positives roughly by a factor of 20 or more with respect to the average standalone AD method, making the AD mechanism fit for operational deployment.
5 CAMNEP's Contribution To Security Awareness
The CAMNEP system provides a wide spectrum of tools that facilitate network surveillance. It can be run in two basic modes - a server side mode with no graphical interface, providing only reporting capabilities, and a GUI mode with a full graphical interface providing sophisticated tools for interactive network analysis based on the system outputs.
The server side mode has several possibilities for incident reporting. The network administrator can define the format of the report (short text, full text, XML, flow list), the receivers of the report (email addresses or disk folders), the intervals for generating these reports, the level of detail of the reports, etc. The system also provides a basic web interface, where the incident reports can be accessed remotely by a web browser. In this mode the network operator just regularly checks the security incident mailbox or the ticket system connected to CAMNEP and consequently performs further analysis of the reported security incidents using third party tools or CAMNEP run in the GUI mode.
The GUI mode provides a full graphical interface with advanced tools for detailed incident analysis on the flow level. These tools include a trustfulness-based histogram of the current traffic and the ability to select and analyze a subset of traffic in order to display various traffic characteristics such as significant source and destination IP addresses, ports, protocols, entropies, aggregations, etc. The network operator can also perform DNS name resolution, ping and whois requests and other, user defined actions directly from the graphical interface. The interface also allows an experienced user to check the behavior of particular anomaly detection methods, set the weights of the anomaly detection methods in the trust model, display a graphical representation of the trust models, etc.
The integration of several anomaly detection methods into the CAMNEP system makes it possible to detect a wide range of anomalous network traffic and network attacks. This includes preliminary intruder activities during the reconnaissance phase, like fingerprinting, vertical scans of particular machines and horizontal scans of whole networks. In the case of more serious network threats, the system detects SSH brute force attacks and other brute force attacks on passwords, and can reveal botnet nodes, worm/malware spreading, Denial of Service and Distributed Denial of Service attacks and various other classes of network attacks specified by the CAMNEP operator. It is particularly relevant as an extrusion detection tool that allows administrators to detect the misuse of their network assets against third parties, typically as a part of a botnet.
The network administrator can configure the reporting from the system on a graduated level of severity, based on the incident's degree of trust/anomaly and the attack class it is attributed to. False positives can be ignored, irrelevant attacks logged, more severe incidents can generate tickets in the event management system, and the most important events can be pushed directly to the administrator via email. Due to its nature, the system is able to detect new attack variants that may not fall into any of the predefined classes. Such incidents fall into the default incident category and the administrators can investigate them, typically when the affected hosts start to misbehave in the future.
6 Real World Example - Conficker Worm Detection
In this part we will describe a use case demonstrating the framework capabilities from the user (e.g. network administrator or incident analyst) perspective on a real world example. In previous papers we have presented the possibilities of the CAMNEP system to detect horizontal and vertical scans [7, 8]; here we will focus on a description of the Conficker worm [9] and how it was detected by our system.
This experiment was performed on real network data acquired from academic network links where the
system is deployed to observe both incoming and outgoing traffic.
Conficker, also known as Downup, Downadup and Kido, is a computer worm that surfaced in October 2008 and targets the Microsoft Windows operating system. It exploits a known vulnerability in Microsoft Windows local networking, using a specially crafted remote procedure call (RPC) over port 445/TCP, which can cause the execution of an arbitrary code segment without authentication. At the time of this writing (March 2009), it was reported that Conficker had infected almost 9 million PCs [2]. Its authors also proactively modify the worm code and release new variants, in order to protect the botnet from the orchestrated international response against its command and control infrastructure.
Figure 3: Conficker worm spreading activity in monitored network.
Figure 3 illustrates the Conficker worm propagation inside the university network. An initial victim was infected during phase I. The main phase - phase II - started at 9:55:42 with a massive scanning activity against computers both in the local network and the Internet, with the goal of discovering and infecting other vulnerable hosts (destination port 445 - targeting the Microsoft security issue described above). One hour later, a lot of university computers were infected and again tried to scan and propagate (phase III) to further computers, both in the university network and the Internet.
7 Traditional NetFlow Analysis Using NFDUMP Tool
In the following text, we will show how the worm progressed in the campus network and propagated beyond it from the infected machines. To protect user privacy, all IP addresses and domain names have been changed (anonymized). The infected host 172.16.96.48 - victim.faculty.muni.cz - inside the university network started to communicate at 9:41:12 on 11. 2. 2009:
Flow start      Duration Proto Src IP Addr:Port        Dst IP Addr:Port        Flags Packets Bytes Flows
09:41:12.024       0.307 UDP   172.16.96.48:49417  ->  224.0.0.252:5355        .....       2   102     1
09:41:12.537       0.109 UDP   172.16.96.48:60435  ->  224.0.0.252:5355        .....       2   102     1
09:41:14.446      30.150 ICMP  172.16.92.1:0       ->  172.16.96.48:3.10       .....      25  3028     1
09:41:14.446      30.148 UDP   172.16.96.48:137    ->  172.16.96.255:137       .....      25  2238     1
09:41:21.692       3.012 UDP   172.16.96.48:60436  ->  172.16.92.1:53          .....       2   162     1
09:41:21.692       3.012 UDP   172.16.92.1:53      ->  172.16.96.48:60436      .....       2   383     1
09:41:21.763      31.851 UDP   172.16.96.48:5353   ->  224.0.0.251:5353        .....       9   867     1
09:41:24.182      20.344 UDP   172.16.96.48:60438  ->  239.255.255.250:3702    .....       6  6114     1
09:41:24.470       0.049 UDP   172.16.96.48:138    ->  172.16.96.255:138       .....       3   662     1
09:41:26.069      31.846 UDP   172.16.96.48:60443  ->  239.255.255.250:1900    .....      14  2254     1
09:41:39.635       0.103 UDP   172.16.96.48:55938  ->  224.0.0.252:5355        .....       2   104     1
09:41:40.404       0.000 UDP   172.16.96.48:60395  ->  172.16.92.1:53          .....       1    50     1
09:41:40.405       0.000 UDP   172.16.92.1:53      ->  172.16.96.48:60395      .....       1   125     1
09:41:40.407       0.101 UDP   172.16.96.48:52932  ->  224.0.0.252:5355        .....       2   100     1
09:41:42.134       0.108 UDP   172.16.96.48:51504  ->  224.0.0.252:5355        .....       2   104     1
09:41:42.160       0.099 UDP   172.16.96.48:52493  ->  224.0.0.252:5355        .....       2   102     1
09:41:42.461       0.112 UDP   172.16.96.48:55260  ->  224.0.0.252:5355        .....       2   102     1
09:41:43.243       0.000 UDP   172.16.96.48:64291  ->  172.16.92.1:53          .....       1    62     1
09:41:43.244       0.000 UDP   172.16.96.48:50664  ->  172.16.92.1:53          .....       1    62     1
09:41:43.244       0.000 UDP   172.16.92.1:53      ->  172.16.96.48:64291      .....       1   256     1
09:41:43.246       0.000 UDP   172.16.92.1:53      ->  172.16.96.48:50664      .....       1   127     1
09:41:43.246       0.384 TCP   172.16.96.48:49158  ->  207.46.131.206:80       A.RS        4   172     1
09:41:43.437       0.192 TCP   207.46.131.206:80   ->  172.16.96.48:49158      AP.SF       3   510     1
09:41:43.631       0.000 UDP   172.16.96.48:63820  ->  172.16.92.1:53          .....       1    62     1
09:41:43.673       0.000 UDP   172.16.92.1:53      ->  172.16.96.48:63820      .....       1   256     1
09:41:44.374       0.105 UDP   172.16.96.48:51599  ->  224.0.0.252:5355        .....       2   104     1
09:41:45.170      14.645 UDP   172.16.96.48:137    ->  172.16.96.255:137       .....      11   858     1
09:41:45.876       0.109 UDP   172.16.96.48:61423  ->  224.0.0.252:5355        .....       2   102     1
09:41:45.881       0.000 UDP   172.16.96.48:54743  ->  224.0.0.252:5355        .....       1    51     1
09:41:52.792       0.109 UDP   172.16.96.48:52975  ->  224.0.0.252:5355        .....       2   104     1
09:41:54.719       0.000 UDP   172.16.96.48:62459  ->  172.16.92.1:53          .....       1    62     1
Fourteen minutes later, at 9:55:42, it started a massive scanning activity against computers both in the local network and the Internet, with the goal of discovering and infecting other vulnerable hosts (destination port 445).
Flow start      Duration Proto Src IP Addr:Port        Dst IP Addr:Port        Flags Packets Bytes Flows
09:55:42.963       0.000 TCP   172.16.96.48:49225  ->  100.9.240.76:445        ...S.       1    48     1
09:55:42.963       0.000 TCP   172.16.96.48:49226  ->  209.13.138.30:445       ...S.       1    48     1
09:55:42.963       0.000 TCP   172.16.96.48:49224  ->  71.70.105.4:445         ...S.       1    48     1
09:55:42.964       0.000 TCP   172.16.96.48:49230  ->  150.18.37.52:445        ...S.       1    48     1
09:55:42.965       0.000 TCP   172.16.96.48:49238  ->  189.97.157.63:445       ...S.       1    48     1
09:55:42.965       0.000 TCP   172.16.96.48:49235  ->  46.77.154.99:445        ...S.       1    48     1
09:55:42.965       0.000 TCP   172.16.96.48:49237  ->  187.96.185.74:445       ...S.       1    48     1
09:55:42.965       0.000 TCP   172.16.96.48:49234  ->  223.62.32.43:445        ...S.       1    48     1
09:55:42.966       0.000 TCP   172.16.96.48:49236  ->  176.77.174.109:445      ...S.       1    48     1
09:55:42.966       0.000 TCP   172.16.96.48:49239  ->  121.110.84.84:445       ...S.       1    48     1
09:55:42.966       0.000 TCP   172.16.96.48:49243  ->  153.34.211.79:445       ...S.       1    48     1
09:55:42.967       0.000 TCP   172.16.96.48:49244  ->  59.34.59.14:445         ...S.       1    48     1
09:55:42.967       0.000 TCP   172.16.96.48:49245  ->  172.115.82.70:445       ...S.       1    48     1
09:55:42.967       0.000 TCP   172.16.96.48:49246  ->  196.117.5.44:445        ...S.       1    48     1
09:55:42.968       0.000 TCP   172.16.96.48:49258  ->  78.33.209.5:445         ...S.       1    48     1
09:55:42.968       0.000 TCP   172.16.96.48:49248  ->  28.36.5.3:445           ...S.       1    48     1
09:55:42.968       0.000 TCP   172.16.96.48:49259  ->  91.39.4.28:445          ...S.       1    48     1
09:55:42.968       0.000 TCP   172.16.96.48:49254  ->  112.96.125.115:445      ...S.       1    48     1
09:55:42.969       0.000 TCP   172.16.96.48:49262  ->  197.63.38.5:445         ...S.       1    48     1
09:55:42.969       0.000 TCP   172.16.96.48:49268  ->  36.85.125.20:445        ...S.       1    48     1
09:55:42.969       0.000 TCP   172.16.96.48:49261  ->  170.88.178.77:445       ...S.       1    48     1
09:55:42.969       0.000 TCP   172.16.96.48:49260  ->  175.42.90.106:445       ...S.       1    48     1
09:55:42.969       0.000 TCP   172.16.96.48:49263  ->  15.70.58.96:445         ...S.       1    48     1
One hour later, a lot of university computers were infected and again tried to scan and propagate to further computers, both in the university network and the Internet:
Flow start      Duration Proto Src IP Addr:Port        Dst IP Addr:Port        Flags Packets  Bytes Flows
10:48:10.983      29.934 TCP   172.16.96.31:50076  ->  145.107.246.69:445      AP.S.      30   1259     1
10:48:25.106      30.826 UDP   172.16.96.49:63593  ->  38.81.201.101:445       .....       6   1408     1
10:48:25.894      30.189 TCP   172.16.96.47:51875  ->  169.41.101.97:445       AP.S.      29   1298     1
10:48:26.001      32.111 TCP   172.16.96.49:63778  ->  43.28.146.45:445        AP.S.      18    906     1
10:48:26.948      10.745 TCP   172.16.96.50:52225  ->  104.24.33.123:445       AP.S.      10    537     1
10:48:27.466      24.770 TCP   172.16.96.35:55484  ->  109.18.23.97:445        AP.SF     102 146397     1
10:48:28.443      28.866 TCP   172.16.96.37:53098  ->  102.124.181.67:445      AP.S.      15    804     1
10:48:28.473      10.572 TCP   172.16.96.38:60340  ->  222.50.79.96:445        AP.S.      23   4549     1
10:48:28.797      30.748 TCP   172.16.96.37:53174  ->  212.82.132.58:445       AP.S.      19    861     1
10:48:29.267      32.783 TCP   172.16.96.34:64769  ->  34.56.183.93:445        AP.S.      17   1696     1
10:48:29.409       7.773 TCP   172.16.96.34:64756  ->  89.109.215.111:445      AP.S.      17   3037     1
10:48:29.492      34.993 TCP   172.16.96.44:57145  ->  32.113.4.81:445         AP.S.      15   2562     1
10:48:29.749      26.004 TCP   172.16.96.43:52707  ->  138.8.147.38:445        AP.S.      16   1725     1
10:48:30.159      12.609 TCP   172.16.96.49:63902  ->  203.101.75.18:445       AP.S.      22   2316     1
10:48:31.116       3.004 TCP   172.16.96.31:50766  ->  194.125.49.68:445       ...S.       2     96     1
10:48:31.117       3.003 TCP   172.16.96.31:50768  ->  193.114.216.37:445      ...S.       2     96     1
10:48:31.117       3.003 TCP   172.16.96.31:50769  ->  37.107.5.111:445        ...S.       2     96     1
10:48:31.117       3.003 TCP   172.16.96.31:50770  ->  126.96.239.95:445       ...S.       2     96     1
10:48:31.118       3.002 TCP   172.16.96.31:50776  ->  43.87.170.91:445        ...S.       2     96     1
10:48:31.119       3.001 TCP   172.16.96.31:50778  ->  103.13.70.122:445       ...S.       2     96     1
10:48:31.127       2.993 TCP   172.16.96.31:50784  ->  200.68.202.35:445       ...S.       2     96     1
10:48:31.129       2.991 TCP   172.16.96.31:50791  ->  56.39.208.87:445        ...S.       2     96     1
10:48:31.131       2.990 TCP   172.16.96.31:50797  ->  59.104.110.104:445      ...S.       2     96     1
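For illustration only, the scanning pattern visible in the listings above - many short SYN-only flows from one source to port 445 on distinct destinations - can be flagged with a few lines of flow post-processing. This sketch is not part of CAMNEP, and the field names, the flag string format and the threshold of 50 targets are assumptions:

from collections import defaultdict

def find_scanners(flows, min_targets=50):
    """Flag hosts contacting many distinct destinations on 445/TCP with
    short SYN-only flows, as in the Conficker listings above. Each flow
    is a dict with src, dst, dport, proto, flags and packets fields."""
    targets = defaultdict(set)
    for f in flows:
        if (f["proto"] == "TCP" and f["dport"] == 445
                and f["packets"] <= 2 and f["flags"].strip() == "...S."):
            targets[f["src"]].add(f["dst"])
    return {src: len(dsts) for src, dsts in targets.items()
            if len(dsts) >= min_targets}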
8 Worm Detection And Analysis With CAMNEP
The detection layer presented in Section 4 sorts the flows by trustfulness, placing the potentially malicious events close to the left edge of the histogram shown in Figure 4. The red peak (highlighted by the a posteriori GUI-level filtering) can be easily discovered and analyzed in the traffic analysis tool, as displayed in Figures 5, 6 and 7. These figures illustrate several relevant characteristics of the event flows during the initial attack phase. The internal representation of the event in the trust model of one of the agents can be seen in Figure 8.
Figure 4: Aggregated trustfulness of flows during the Conficker worm spreading activity.
We can see that the Conficker traffic (leftmost peak, red) is separated from the rest of the traffic.
Figure 5: Incident analysis window representing Conficker worm traffic distribution by destination ports - the majority of traffic is the typical Conficker worm destination port 445, used for file sharing on Microsoft Windows machines.
Figure 6: Incident analysis window representing Conficker worm traffic distribution by the number of bytes in flows - the majority of traffic is 96 bytes large.
Figure 7: Analyzer window representing Conficker worm traffic distribution by source ports; it shows great variability (only 21 are shown).
Figure 8: 3D representation of the trust model showing the whole traffic. The Conficker worm traffic (marked and red colored) is clearly separated from the legitimate traffic situated on the top of the sphere.
9 Conclusion
In our work, we extend the possibilities of security tools, especially the NetFlow collectors [4] used by CERTs (Computer Emergency Response Teams), to detect network security incidents. We target the problem of the high knowledge demands placed on network security engineers and the limited ability of a human operator to efficiently supervise all network traffic in near real-time.

The presented flow based network intrusion detection system is able to identify significant malicious traffic events often hidden in a normal traffic overview. In our pilot deployment we showed that instead of analyzing thousands of lines of flow data, or observing only the aggregated values for the whole network, the operator can efficiently investigate only the reported events. The real world example shows the Conficker worm detection and describes the analysis using raw NetFlow data compared to the straightforward trustfulness histogram of CAMNEP.
Early in 2009 the CAMNEP tool was deployed for operational use by the Masaryk University CERT. Together we are working on best practices for how CAMNEP should be deployed and used by network security engineers. We are also working on improved long-term stability and on a reduction of the false positive and false negative rates.
Acknowledgement
This material is based upon work supported by the ITC-A of the US Army under Contract No.
W911NF-08-1-0250. Any opinions, findings and conclusions or recommendations expressed in this
material are those of the author(s) and do not necessarily reflect the views of the ITC-A of the US Army.
This work was supported by the Czech Ministry of Defence under Contract No. SMO02008PR980OVMASUN200801 and also supported by Czech Ministry of Education grants 6840770038 (CTU) and
6383917201 (CESNET).
References
[1] Paul Barford, Somesh Jha, and Vinod Yegneswaran. Fusion and filtering in distributed intrusion detection systems. In Proceedings of the 42nd Annual Allerton Conference on Communication, Control and Computing, 2004.
[2] F-Secure. Preemptive Blocklist and More Downadup Numbers, 2009. http://www.f-secure.com/weblog/archives/00001582.html.
[3] Peter Haag. NFDUMP - NetFlow processing tools. http://nfdump.sourceforge.net/, 2007.
[4] Peter Haag. NfSen - NetFlow Sensor. http://nfsen.sourceforge.net/, 2007.
[5] Anukool Lakhina, Mark Crovella, and Christophe Diot. Diagnosing Network-Wide Traffic Anomalies. In ACM SIGCOMM '04, pages 219-230, New York, NY, USA, 2004. ACM Press.
[6] Anukool Lakhina, Mark Crovella, and Christophe Diot. Mining Anomalies using Traffic Feature Distributions. In ACM SIGCOMM, Philadelphia, PA, August 2005, pages 217-228, New York, NY, USA, 2005. ACM Press.
[7] Rehak Martin, Pechoucek Michal, Grill Martin, Bartos Karel, Celeda Pavel, and Krmicek Vojtech. Collaborative approach to network behavior analysis. In Global E-Security, pages 153-160. Springer, 2008.
[8] Rehak Martin, Pechoucek Michal, Celeda Pavel, Krmicek Vojtech, Bartos Karel, and Grill Martin. Multi-Agent Approach to Network Intrusion Detection (Demo Paper). In Proceedings of the 7th International Conference on Autonomous Agents and Multiagent Systems, pages 1695-1696. InescId, 2008.
[9] Phillip Porras, Hassen Saidi, and Vinod Yegneswaran. An analysis of Conficker's logic and rendezvous points. Technical report, 2009. http://mtc.sri.com/Conficker.
[10] Martin Rehak, Michal Pechoucek, Karel Bartos, Martin Grill, Pavel Celeda, and Vojtech Krmicek. CAMNEP: An intrusion detection system for high-speed networks. Progress in Informatics, (5):65-74, March 2008.
[11] Martin Rehak, Michal Pechoucek, Martin Grill, and Karel Bartos. Trust-based classifier combination for network anomaly detection. In Cooperative Information Agents XII, LNAI/LNCS. Springer-Verlag, 2008.
[12] Karen Scarfone and Peter Mell. Guide to intrusion detection and prevention systems (IDPS). Technical Report 800-94, NIST, US Dept. of Commerce, 2007.
[13] Pavel Celeda, Milan Kovacik, Tomas Konir, Vojtech Krmicek, Petr Springl, and Martin Zadnik. FlowMon Probe. Technical Report 31/2006, CESNET, z.s.p.o., 2006. http://www.cesnet.cz/doc/techzpravy/2006/flowmon-probe/.
[14] Jian Zhang and Andrew W. Moore. Traffic trace artifacts due to monitoring via port mirroring. In Proceedings of the Fifth IEEE/IFIP E2EMON, pages 1-8, 2007.
Video CAPTCHAs
Carlos Javier Hernandez-Castro, Arturo Ribagorda Garnacho
{chernand, arturo}@inf.uc3m.es
Security Group, Department of Computer Science
Carlos III University
28911 Leganes, Madrid, Spain
Abstract
In this article we propose some video related manipulations that are hard to analyze by automated (even AI-based) mechanisms, and can thus be used as the base for secure (low fraud and insult rates) human/machine identification systems. We study these proposals in some depth, including attacks. We additionally highlight some ways that allow the use of public on-line video repositories for this purpose. Finally, we address the associated accessibility problems.
Keywords: video analysis, CAPTCHA, HIP.
1 Introduction
Today, the main computer networks and in particular the Internet have - in most economically developed countries - enough bandwidth to transmit video in real time. Also, processing power has grown in a way that permits fast video manipulation, in some cases almost in real-time. This is not only due to improvements in transistor size and speed, but also to the development of instruction sets specially oriented to multimedia tasks (such as MMX and SSE-1/2/3/4 from Intel, AltiVec from Motorola & IBM, and 3DNow! from AMD). The appearance and widespread use of multi-core processing units and graphics processing circuitry that can be tailored to multi-purpose parallel computing has also helped. Video manipulation is a highly parallelizable task well suited for this new processing scenario. The necessary bandwidth and processing power are here, and both bandwidth and multimedia processing power show a path of steady progression.
In this article, we propose some ways of automatically manipulating video so that it cannot be easily told by any automated system - even if the processing power of the attacker is large - whether the original video has been manipulated or not, but it can be - in most cases - easily told by a human, except for some accessibility problems that will be addressed later. Thus, these systems can be used as the base for a new generation of CAPTCHAs 1 / HIPs 2.
Many on-line public video repositories (e.g. YouTube, Veoh, Google Video, etc.) have been created lately, proving that this scenario of high bandwidth and more processing power is already here. We propose taking them as a source of videos to which we could apply the aforementioned transformations, possibly after applying some filtering in order to avoid video specimens that are especially easy to analyze.
The rest of this article is organized as follows: Section 2 briefly introduces the most relevant automatic video analysis techniques known to date. Section 3 describes the proposed transformations that can be used to automatically generate video CAPTCHAs. Then, Section 4 explores some possible attacks against the proposed schemes, together with recommendations to implement them correctly. The issues concerning the usage of public video repositories as a source for our transformations are explored in Section 5. Accessibility problems are considered in Section 6. Lastly, we extract and present some conclusions in Section 7.

1 Completely Automated Public Turing test to tell Computers and Humans Apart
2 Human Interactive Proof
2 Video analysis
The state-of-the-art in automatic video analysis has been explored in depth. The following techniques are especially relevant to our proposal:
• video indexing and retrieval algorithms [7][5][4][16], including those that use audio as the indexing technique [17], or OCR [18], as well as querying by elements or image attributes [20],
• shot/scene detection (also useful for indexing): boundary shot changes, fades, wipes [10][11][14][21], and object tracking [15],
• video relating (for copyright management): studying similarity between videos [6],
• surveillance: detection of human faces [8], moving objects [9], object placing, real-time vehicle tracking [13],
• detection of man-made objects in subsea videos [12],
• detection of people talking [19],
• automatic soccer video analysis and summarization [22].
Most of the methods currently used are based on time-dependent change analysis to detect moving objects against the background, audio-video correlation, image characteristic changes, etc. Some of these techniques (especially the ones devoted to surveillance) are mostly suitable for videos taken from a still camera.

Currently there is no way to automatically relate a video (scene collection, shot collection, etc.) to a high-level description of what is happening in it, and thus to make decisions upon logical continuity, desync, etc. We think that the AI techniques involved in answering that question are well beyond the maturity needed for doing so. We base our video CAPTCHAs on this high-level analysis of video content, and on transformations related to the understanding of the video history and semantics.
3 Constructing Video CAPTCHAs
We now present some automatic methods of manipulating a video source to create a test on the resulting video. This test can be used for constructing CAPTCHAs, given some conditions on the initial video:

3.1 Time inversion

Starting from any video, we filter out its audio track. Then, we randomly select whether or not to invert its video sequence. The tested subject has to tell if the video sequence has been inverted. This is a very difficult test for a machine, whereas a human will usually only need to resort to common sense and some basic logic to pass it. Even if not every individual is able to always pass the test, their success ratio will be much higher than that of a computer. In the attack section we will address some measures to avoid common pitfalls in the implementation of this approach, together with some attacks against this scheme.
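The generation side of this test is simple to implement. Purely as an illustration (the paper gives no implementation), a sketch using OpenCV could look like this; the codec choice and the frame rate handling are assumptions, and audio is dropped simply because only the video frames are rewritten:

import random
import cv2  # OpenCV

def make_time_inversion_challenge(src_path, dst_path):
    """Re-encode src_path without its audio track, randomly reversed in time
    or not. Returns True if the output was reversed (the expected answer)."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back if the container lacks FPS
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()

    reverse = random.choice([True, False])
    if reverse:
        frames.reverse()

    height, width = frames[0].shape[:2]
    writer = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for frame in frames:          # frames only, so the audio track is discarded
        writer.write(frame)
    writer.release()
    return reverse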
3.2 Split and permute

Given any video, we analyze its sequences and split it (by any of the scene-detection methods that exist), thus converting it into a sequence ABCDE. Then, we present the parts either ordered (ABCDE) or unordered (ABEDC, EDCAB, etc.) and expect the tested subject to answer whether they are ordered or not. Or we take out some sequences (ABEFG) and ask if it is a continuous video or not.

A similar way of using this idea to construct an HIP is by permuting one scene (i.e. ABDCE) and then asking the tested individual which one it is.

For this transformation we can use state-of-the-art sequence analysis to split the video into multiple parts, and it is not compulsory to filter out the audio sequence - although it is recommended.

This scheme can be applied - with some restrictions - to, for example, video from soccer games. We have to avoid some clues then (indexing, scores on screen, etc.).
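A small sketch of the permutation step, assuming the scene boundaries have already been found by one of the shot-detection methods mentioned above (the frame-list representation is our simplification):

import random

def split_and_permute(frames, cut_points, permute=True):
    """Split a list of frames at detected scene boundaries and, if requested,
    return the scenes in a shuffled (never the original) order."""
    bounds = [0] + sorted(cut_points) + [len(frames)]
    scenes = [frames[a:b] for a, b in zip(bounds, bounds[1:])]
    assert len(scenes) > 1, "need at least two scenes to permute"

    order = list(range(len(scenes)))
    if permute:
        while order == list(range(len(scenes))):   # reshuffle until the order changes
            random.shuffle(order)
    permuted = [frame for i in order for frame in scenes[i]]
    return permuted, order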
3.3 Non matching sources

This technique consists of taking two different video sources, dividing them into sequences ABCDE and FGHIJ, and then mixing some of them (e.g. ABFGE) so that the tested subject has to decide whether the result comes from a single video or from the mix of two unrelated ones. Later, we will address some possible attacks against this scheme.
3.4 Audio/video desync

This approach consists of taking a subsequence of a video and substituting its audio track with a completely different one or, alternatively, desyncing (advancing or delaying) its own track. The tested subject has to decide whether there is a correspondence between the video and the audio sequence. As with the previous scheme, we address some possible attacks against this scheme later; it is particularly tricky, as the audio sequence is compulsory here, opening the way to further automatic analysis.
3.5 Video inversion

Even for humans it is generally very hard to tell if a video has suffered left-to-right inversion, apart from some occasional hints such as licence plates, signs, subtitles, etc. (which can also be used by an OCR-based technique). However, it is relatively easy for any individual to tell if a video has been switched upside-down. In the attack section we comment on some possible problems with this scheme, which have to do with typical image characteristics and lighting.

Note that the transformation of turning a video upside-down is completely different from rotation, as rotation can be easily analyzed automatically (just by a statistical analysis of the direction of the lines in the picture).
3.6 Tag categorization

This possibility is based on using a video repository that admits tag classification as the source, so we can take a subsequence of a video and ask the prover whether this video is related to a particular tag (taken or not from those corresponding to the video).

Of course, one has to consider the possibility that the particular subsequence chosen has little or nothing to do with the tag, but if the subsequence is long enough to be a representative part of the whole video, or is made from many subsequences of the original video, this chance can be arbitrarily decreased.
4 Possible Attacks and Pitfalls
In this section we address general attacks that can be used against all or some of the aforementioned transformations, as well as some pitfalls that have to be avoided before using a video-based CAPTCHA in production. Of course, other common CAPTCHA implementation guidelines [3] should be followed as well.
4.1 Denial of service

As creating these tests requires more processing power than the average CAPTCHA test, it is easier to launch denial-of-service (DoS) attacks against them. One easy way to avoid these DoS attacks is to place common, classical CAPTCHAs (very lightweight) before accessing the video CAPTCHAs. Another - possibly better - method is to precompute a large amount of these video CAPTCHAs (let's say 1000) and use them as a door-entrance to the real-time CAPTCHAs, changing those pre-computed transformed videos from time to time.
4.2 Insult ratio

Not all these tests are easy even for humans, depending on the video source, the precise parameters used (length of subsequences, audio presence, video quality, original video distortion, etc.) and the transformations applied. All these global parameters and the particular techniques should be measured to ensure that the human success level is adequate. However, all these techniques could be designed to be relatively easy for the average human (except for accessibility problems) and quite hard for any algorithm, so a compromise leading to a very low insult ratio should be possible.
4.3 Attacks and pitfalls of time inversion

There are common pitfalls that have to be avoided in order to prevent easy-to-analyze specimens: videos in which the effect of gravity can be detected (i.e. if there is a ball or another inanimate object moving, or if many objects fall), videos in which the camera is moving forward so that the main trend for the objects is to get bigger with time, videos with fade-outs to the next scene and other scene transition effects, walking people, moving captions, etc.

One plausible and very general way to filter out these specimens is to make a database of inverted and non-inverted videos and train a simple neural network to detect some of them based on the analysis of short sequences of images taken at precise intervals (say, 0.5 secs). That neural network should be able to give three answers: cannot tell, not inverted and inverted. If we train it to maximize the ratio of correct inverted and non-inverted answers, giving less negative weight to the cannot tell ones, the network can be trained to distinguish the easiest videos, so we can use it to discard bad specimens. We can train different networks for different ways of telling (objects getting bigger, captions moving from left to right in an occidental alphabet, etc.).
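A minimal sketch of such a filter, assuming the feature vectors have already been extracted from the sampled image sequences; the scikit-learn classifier, the training scheme and the confidence threshold are our choices, not the authors', and the weighting of the cannot-tell answers is not modeled here:

from sklearn.neural_network import MLPClassifier

# The three answers the network is asked to give.
CANNOT_TELL, NOT_INVERTED, INVERTED = 0, 1, 2

def train_specimen_filter(features, labels):
    """Train a small network on feature vectors extracted from short image
    sequences (sampled e.g. every 0.5 s) of inverted and non-inverted videos."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
    clf.fit(features, labels)
    return clf

def is_easy_specimen(clf, video_features, confidence=0.8):
    """A candidate video is discarded when the network confidently tells its
    orientation, since it would then also be easy for an attacker."""
    proba = clf.predict_proba([video_features])[0]
    best = int(proba.argmax())
    return clf.classes_[best] != CANNOT_TELL and proba[best] >= confidence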
There is a variant of this transform that is harder to attack, even for easy specimens, because most automatic video analysis methods will not be so certain. In this variant, only one sequence or subsequence of the video is - or is not - time inverted, and we show the complete video to our tester. Any automatic approach that relies on statistical analysis of easy specimens (the number of objects getting bigger, as an example) will have problems with this transform. If we do automatic analysis sequence by sequence, there will be a large number of false positives (videos with no reversed sequence that have a sequence that appears to have been reversed), and if we analyze the whole video, this little transform would not change the overall statistics much. Of course, we have to be careful that no start-of-sequence picture corresponds to just one other in the same video stream.
4.4 Attacks against non matching video sources

If we are analyzing a transformed video and all its sequences can be classified into just two different types (according to objects appearing, Fourier analysis, backgrounds, faces, etc.), then there is a chance that the video is a composition of two different sources. If they are all of one kind, then it is quite probable that the video has only one source. To avoid this kind of analysis, we can compose the video from more than two sources, or we can filter out this kind of videos, preventing them from being used as inputs to our transforms.

To filter them, we have to verify that the original video sequence or sequences that are going to be used as input have different subsequences with different Fourier data and different characteristics that could be automatically processed (background, faces, objects), etc., so that there is no easy and automated way of telling if the sequences correlate, even if it is quite easy to tell the sequences apart.
4.5 Attacks against non matching audio/video sources

Mouth-to-vocal correlation can be used by an attacker to show that one audio track is not the one corresponding to a certain video track, so videos with a clear mouth-vocal relation have to be filtered out for the CAPTCHAs based on audio/video correlation (but this is not necessary, surprisingly, for the desync technique). The same happens with background noise and other general audio characteristics related to scene changes, etc. This can be avoided by using smooth scene changes, including some audio noise, and filtering out part of the original background noise.
4.6 Attacks against audio/video desync

One way to automatically attack the audio/video desync technique is to locate mouths and/or scene changes in the video and examine the delay between the vocal sound and the background sound. This only works, however, if the delay is small enough to clearly correlate mouth movement/scene change with the vocal/background noise sound. If the delay is large enough, this study does not give useful results apart from deciding whether there is desync or not - it cannot easily tell whether the audio is ahead or behind. The typical delay time should therefore be chosen based on a previous automatic analysis of the video.
4.7 Attacks against video inversion

In many videos the upper part is more continuous (more correlated, less entropic) and emits/reflects more light (sunlight, lamps, etc.) than the rest. In others, we can detect common objects such as walls, houses, lamps on the ceiling, etc. Videos have to be chosen without these conditions. To filter out easy specimens we propose a method similar to the one devised for easy specimens of reverse-time videos, with the difference that in this case we only analyze one image instead of a short series of images. By analyzing a low percentage of the images of the video, or of each sequence, those kinds of videos can be filtered out by means of a neural network.

Other ways of attacking these CAPTCHAs are based on face recognition and a later analysis of face illumination, for example. This recognition and later analysis can also be applied to any kind of object. Videos, then, should be pre-processed to normalize the brightness of every particular piece (an 8x8 pixel block, for example) of every image and of the entire image, preventing this kind of attack.
5 Using public video repositories
Public video repositories can be used as a source of videos to be automatically transformed into video CAPTCHAs. In this case, we have to avoid some other possible attacks that have to do with video repository indexing and poisoning.

We also have to be careful and, as always, avoid the usage of easy specimens.
5.1 Video indexing

When using public video repositories as video sources, there is the additional possibility for a powerful attacker of indexing every video from the source by using some snapshots as the index key. These could be taken at short intervals (every second, or every 0.1% of the video's length, etc.). This indexation would then serve as a means to correlate the resulting CAPTCHAs with their original source, which is something we want to avoid.

Fortunately, this naive attack can easily be circumvented by distorting every image of the video sequence in a way that prevents matching against the original source. This also applies to the audio track, which has to be distorted in a way that prevents it from being used as an index key. For this, we can trim a video at its beginning and end, and also in some middle parts. It could also be effective to delete one in each - say - 20 to 24 pictures, or to rotate the video imperceptibly, so that this video indexation becomes increasingly difficult and costly.
Of course, more sophisticated video and audio analysis can be done and used as the basis for an index: for example, audio Fourier transforms of some short sequences, or discrete Fourier transforms of still images or sequences. This is more difficult to circumvent, but it is also much less efficient, as one has to make a choice between general and very specific classification; and if many indexing matches are wanted even after transformations, there will be many close matches - and then no clear match - if the original video is transformed in a way that also transforms these measures - like adding low and high noise, modifying the color palette, altering its brightness (variably during the video sequence), etc.
Even then, there are other video indexing mechanisms that have to be prevented, notably indexing based
on object, face or background recognition. To evade those we propose using the same algorithms currently
in use for object and background recognition to filter out those elements and replace them (backgrounds)
or distort them (objects, faces: blurring, enlarging, moving, replacing, etc.) in a way that is more difficult
to correlate with the original video. Finally, we again have to be careful with key elements that can serve
for indexing, such as subtitles or any other written elements [ 18 ]. These have to be treated in much the
same way.
5.2 Video poisoning
Again, if we are using public video repositories as our source, typically by selecting videos more or less at
random once they pass a bad-specimen recognition phase, there is the chance that an attacker will fill
them with videos that are very close to each other and/or very easy to analyze, or that include watermarks
- invariant to the aforementioned transformations - to aid recognition, etc. If this can be the case, we have
to avoid such videos - for example by correlating them, and also by applying more severe transforms - but
in a cautious way. For example, we cannot rely only on voting by peers, as most current on-line voting
schemes can be circumvented [ 1 ], and even if they could not, it would reduce the available choice range
and make it easier for an attacker to index the video source.
If no indexing of the video source is possible, then other methods of analysis have to be applied to
circumvent these CAPTCHAs. We think that, given the present level of AI research, the CAPTCHAs
presented above cannot be consistently beaten by an automated algorithm using any easy method of
analysis. It is clear that some of the former CAPTCHAs have a yes/no answer scheme, so chance alone
gives a 50% probability of passing the test; thus more than one test should be used in each case to raise
the trust level of the complete test.
5.3 Codec information
Some additional care should be taken with codec compression, in order to prevent sync information from
leaking through a study of the codec bit-rate (if it is variable) or of the image quality. One way to prevent
this automatic analysis is to re-encode every video, possibly with a small frame offset, so that key-frame
positions reveal nothing about whether the video has been transformed or not.
5.4 Audio indexing
Special countermeasures have to be taken for the audio/video desync test, the only one in which it is
compulsory to use an audio track (it can be filtered out in all the other tests). We have to take special care
that the video cannot be indexed using its audio sequence. For that we can apply pitch transformations,
add noise, insert short bursts of sound - possibly taken from other audio tracks - etc.
5.5 Video tags
This test is especially useful with public on-line video repositories that are tagged, but only if those videos
cannot easily be indexed using the very same tags. So if tags can be taken from a pool of categories (so
that tag collision is frequent) this can be done; otherwise, the use of this scheme is not recommended.
5.6 Other video sources
Other video sources can be considered for the aforementioned transformations, among them public TV
broadcasts, online video game footage, and synthetic video (computer animations) created on demand.
The latter two have the property of being easily characterizable, so it would be possible to avoid creating
easy-to-tell specimens for most of the transforms mentioned.
6 Accessibility for blind people or people with vision problems
Unfortunately, these tests are not suited for blind people or people with vision problems. There is no direct
and easy way to adapt them for visually impaired people, so another kind of test should be used [ 2 ] to
allow them into the system.
One possibility is adapting these new tests to an audio-only version. Some of the tests presented can be
adapted in this way (split and permute, non-matching sources, tag categorization); others have a non-direct
transformation (like time inversion, which can be converted into a split-and-reorder test), but not all of
them (video inversion, audio/video desync). The risk of using an audio-only version of these tests is that
the quantity of processed information is significantly smaller and easier to index if it comes from a public
source, so the aforementioned measures against audio indexing have to be reinforced. Even in this case, we
think that the audio version of those tests will generally be weaker against the proposed (and, possibly,
new) attacks, and that a higher number of tests should be carried out when using them.
7 Conclusions
The constant improvement in transmission and processing power makes these highly complex video
CAPTCHAs a real solution for many applications. Today, this solution is best suited for environments in
which a low fraud rate and a low insult rate are compulsory and more important than the price of the
processing power needed. In a few years these HIPs could be applicable to building a high-security
CAPTCHA system for nearly all purposes.
Some of the transforms we propose here are based on high-level cognitive procedures that are quite difficult
to analyze automatically, especially once the number of easy specimens has been reduced. If we are using
public video repositories as our video source, we have to take some additional security measures, apart
from filtering out easy specimens. Most of these additional security measures are directed towards reducing
the advantage the attacker can extract from indexing those repositories.
Another strong point of these CAPTCHAs is that they can be made appealing and interesting for the
human to pass, much like a game. The difficulty and low appeal of some CAPTCHAs is an emerging
concern affecting some sites, which we can avoid by using these video CAPTCHA schemes.
We are currently working on the implementation of all the proposed techniques, and of some of the
attacking algorithms, which will lead to real figures in terms of security, accessibility, time and computing
power needed for the deployment of these proposals.
References
[1]
Hernandez, J.C., Sierra, J.M., Hernandez, C.J., Ribagorda, A.: Compulsive voting, in Proceedings
of the 36th Annual 2002 International Carnahan Conference on Security Technology, pp. 124-133. ISBN: 0-7803-7436-3
[2]
W3C Working Draft, Techniques for WCAG 2.0. G144: Ensuring that the Web Page contains
another CAPTCHA serving the same purpose using a different modality.
[3]
Caine. A. and Hengartner, U.: The AI Hardness of CAPTCHAs does not imply Robust Network
Security, in IFIP International Federation for information Processing, Volume 238, Trust
Management, eds. Etalle, S., Marsh, S.. Springer Boston, pp. 367-382.
[4]
Sibel Adali, Kasim S. Candan, Su-Shing Chen, Kutluhan Erol, V.S. Subrahmanian: Advanced
Video Information System: Data Structures and Query Processing, 1996.
[5]
Michal Irani: Video Indexing Based on Mosaic Representations, 1998.
[6]
C.H. Hoi, W. Wang and M.R. Lyu.: A Novel Scheme for Video Similarity Detection, in
Proceedings of International Conference on Image and Video Retrieval (CIVR2003), pp 373-382.
Lecture Notes in Computer Science, vol. 2728, Springer, 2003.
[7]
Kien A. Hua, JungHwan Oh.: Very Efficient Video Segmentation Techniques.
[8]
Tat-Seng Chua, Yunlong Zhao, Mohan S Kankanhalli: Detection of Human Faces in Compressed
Domain for Video Stratification, 2002.
[9]
Rita Cucchiara, Costantino Grana, Massimo Piccardi, Andrea Prati: Detecting Moving Objects,
Ghosts, and Shadows in Video Streams, in Proceedings of the Australia-Japan Advanced Workshop
on Computer Vision 2003.
[ 10 ] A. Miene, A. Dammeyer, Th. Hermes, O. Herzog: Advanced and Adaptive Shot Boundary
Detection, 2001.
[ 11 ] JungHwan Oh, Kien A. Hua, Ning Liang: A Content-based Scene Change Detection and
Classification Technique using Background Tracking in Proceedings of SPIE: Storage and Retrieval
for Media Databases, 2000.
[ 12 ] Adriana Olmos, Emanuele Trucco: Detecting Man-Made Objects in Unconstrained Subsea Videos in
Proceedings of the British Machine Vision Conference, 2002.
[ 13 ] Margrit Betke, Esin Haritaoglu, Larry S. Davis: Multiple Vehicle Detection and Tracking in Hard
Real Time, 1996.
[ 14 ] John S. Boreczky, Lawrence A. Rowe: Comparison of video shot boundary detection techniques, in
Storage and Retrieval for Image and Video Databases (SPIE), 1996.
[ 15 ] Harpreet Sawhney: Compact Representations Of Videos Through Dominant And Multiple Motion
Estimation, in IEEE Transactions on Pattern Analysis and Machine Intelligence, 1996.
[ 16 ] Lawrence A. Rowe, John S. Boreczky, Charles A. Eads: Indexes for User Access to Large Video
Databases, in Storage and Retrieval for Image and Video Databases (SPIE) 1994.
[ 17 ] Nilesh Patel, Ishwar Sethi: Audio Characterization for Video Indexing, in Storage and Retrieval for
Image and Video Databases (SPIE), 1996.
[ 18 ] Toshio et al.: Video OCR: Indexing Digital News Libraries by Recognition of Superimposed Captions,
1999.
[ 19 ] Ross Cutler, Larry Davis: Look Who's Talking: Speaker Detection Using Video And Audio
Correlation, in IEEE International Conference on Multimedia and Expo (III), 2000.
[ 20 ] Myron Flickner et al.: Query by Image and Video Content: The QBIC System. IEEE Computer
Magazine, September 1995 (Vol. 28, No. 9) pp. 23-32.
[ 21 ] Hong Jiang Zhang, Atreyi Kankanballi, Stephen W. Smoliar: Automatic partitioning of full-motion
video, in Readings in Multimedia Computing and Networking (2001). Morgan Kaufmann. ISBN:
1558606513.
[ 22 ] Ekin, A., Tekalp, A.M., Mehrotra, R.: Automatic soccer video analysis and summarization, in IEEE
Transactions on Image Processing, July 2003, 12th volume 12, issue #7.
Possibility to apply a position information as part of a user’s authentication
David Jaroš, Radek Kuchta, Radimir Vrba
[email protected]; [email protected]; [email protected]
Faculty of Electrical Engineering and Communication
Brno University of Technology
Brno, Czech Republic
Abstract
This paper describes basic possibilities of applying position information to user authentication. Available
methods for determining position using the Global Positioning System, Bluetooth and Wi-Fi wireless
networks, and also the non-standard IQRF communication network, are described. A user authentication
device developed for secure authentication is also described.
Keywords: GPS, authentication, position, Bluetooth, Wi-Fi.
1 Introduction
Authentication and authorization are required almost everywhere in today's world. People must be
identified when they download e-mail, read newspapers over the Internet, fill out forms for the
government, access private company information, etc. For user authentication some private credentials
are usually required. In many cases users use their unique identification number or username, and a
password. If one of these values is wrong, a new attempt is required, and if more than a selected number
of attempts fail, the user's account is locked. For many situations and systems this scenario is insufficient
and some extra information is required. This paper describes a new possibility of using position data as an
additional piece of authentication information. When an information system has information about the
authenticated person's position, it can change access rights or show only part of the accessible data. There
is more than one way to obtain the position. In all of them the authenticated user has to have his or her
own identification device that provides position information. The authentication device can use different
methods to determine the current position. One possibility is using GPS (Global Positioning System) or
the planned European satellite navigation system Galileo; in this case the identification device uses satellite
signals to obtain the correct position. Another option is using a company wireless network: the
identification device is wirelessly connected to a selected wireless access point, and it is possible to use a
Wi-Fi or Bluetooth wireless communication network. In all described cases the identification device could
be a GSM cellular phone, so the user does not need any special device, only a software implementation;
and because the cellular phone is connected to the GSM network, this gives yet another way to find the position.
2 Position determination
When the position of a subject in space needs to be determined, three basic conditions have to be fulfilled.
First, the space in which the subject is located has to be described; coordinates are most often used to
describe the space [3]. Secondly, enough anchor points with known positions are needed. And finally, the
distances between the subject and the anchor points have to be found. The number of anchor points needed
depends on the dimension of the space [1]. An example for a two-dimensional system is depicted in Figure 1.
The general equation used for position determination is
(x − m)² + (y − n)² = r²,        (1)
where m and n are the coordinates of the centre of a circle (the position of an anchor point), x and y are
the coordinates of points on the circle (the possible positions of the subject) and r is the radius of the circle
(the distance between the anchor point and the subject). We then obtain an equation system for the
position of our subject [4]:
(x − 6)² + (y − 14)² = 5.83²
(x − 7)² + (y − 7)² = 5.65²        (2)
(x − 17)² + (y − 11)² = 6²
x = 11; y = 11
Solving this equation system gives the coordinates of the subject's position ((11, 11) in our example).
Figure 1: Position determination with three wireless access points.
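A small sketch of solving system (2): subtracting the first circle equation from the other two turns the quadratic system into a linear one, which is then solved by Cramer's rule. The anchor coordinates and distances are the values from the example above.

def trilaterate(anchors):
    """anchors: list of three (m, n, r) tuples - anchor position and measured
    distance. Returns the (x, y) position of the subject."""
    (m1, n1, r1), (m2, n2, r2), (m3, n3, r3) = anchors
    # subtract circle 1 from circles 2 and 3 -> two linear equations a*x + b*y = c
    a1, b1 = 2 * (m2 - m1), 2 * (n2 - n1)
    c1 = r1**2 - r2**2 + m2**2 - m1**2 + n2**2 - n1**2
    a2, b2 = 2 * (m3 - m1), 2 * (n3 - n1)
    c2 = r1**2 - r3**2 + m3**2 - m1**2 + n3**2 - n1**2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# example from equation (2): the result is approximately (11, 11)
print(trilaterate([(6, 14, 5.83), (7, 7, 5.65), (17, 11, 6.0)]))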
3 Global Positioning System
GPS (Global Positioning System) was developed by the United States Department of Defense. It is a
satellite-based navigation system. At first GPS was used only for military purposes; nowadays it is also
used in the public sector and is free of charge.
The system consists of transmitters (satellites) that orbit the Earth and receivers that process the specially
coded signals from the transmitters. In the standard case, 24 satellites orbit the Earth in six equally spaced
orbital planes. To determine its actual position, a receiver needs the signal from at least four satellites (for
three-dimensional space).
The distance between a receiver and a transmitter (satellite) is calculated from the known delay of the
signal [8]. The transmitted signal contains a coded time stamp - the time when the signal was transmitted.
When the signal is received, the time delay is calculated. For this reason the clocks in the transmitter and
the receiver must be very accurate and synchronized. The transmitter contains an atomic clock, and an
accurate time for the receiver is calculated from several received signals. From the known delay and the
known propagation speed of the signal, the distance between the transmitter and the receiver can be
calculated. To calculate the actual receiver position we also need to know the positions of the satellites;
this information, called the ephemeris, is transmitted with the time stamp and coded in the signal. The
whole principle is depicted in Figure 2.
Figure 2: Principle of GPS system.
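A minimal illustration of the distance computation described above: the distance follows directly from the measured signal delay and the propagation speed (here the speed of light in vacuum; receiver clock bias is ignored for simplicity).

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_delay(t_transmit, t_receive):
    """Distance between satellite and receiver from the signal travel time.
    Both times must come from synchronized clocks (in seconds)."""
    return (t_receive - t_transmit) * SPEED_OF_LIGHT

# a signal delayed by ~67 ms corresponds to roughly 20 000 km
print(distance_from_delay(0.0, 0.067))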
3.1 Features of standard OEM GPS module
Nowadays GPS modules [6] offer up to 50 channels, i.e. they can receive signals from 50 satellites at the
same time. They use various methods for a fast fix; the first fix can take under 1 second, and the position
update rate can be 4 Hz. Communication between the GPS module and the target application is provided
by standard serial buses (UART, SPI, IIC, USB) and commonly by the NMEA 0183 protocol [7]. There is
a wide range of OEM GPS module manufacturers on the market (u-blox, sirf, navsync, …).
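As an illustration of the NMEA 0183 output mentioned above, the sketch below parses a GGA sentence into decimal-degree coordinates. The example sentence is a commonly quoted sample rather than output from a particular module, and checksum verification is omitted.

def parse_gga(sentence):
    """Parse latitude/longitude from a $GPGGA sentence (ddmm.mmmm format)."""
    fields = sentence.split(',')
    def to_degrees(value, hemisphere, deg_digits):
        degrees = float(value[:deg_digits]) + float(value[deg_digits:]) / 60.0
        return -degrees if hemisphere in ('S', 'W') else degrees
    lat = to_degrees(fields[2], fields[3], 2)   # ddmm.mmmm
    lon = to_degrees(fields[4], fields[5], 3)   # dddmm.mmmm
    satellites = int(fields[7])
    return lat, lon, satellites

print(parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"))
# -> latitude ~48.117 deg N, longitude ~11.517 deg E, 8 satellites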
4 Bluetooth
Bluetooth is a wireless technology designed for exchanging data or voice signals over short distances.
Bluetooth uses the unlicensed 2.4 GHz ISM (industrial, scientific, medical) band. The range of Bluetooth is
divided into 3 classes depending on the maximum permitted power; the maximum range is 100 m for
class 1 with a maximum permitted power of 100 mW. The data rate for Bluetooth version 2.0 is up to
3 Mbps with EDR (Enhanced Data Rate) [9], [10].
Bluetooth can also be used for rough localization of devices equipped with this technology. In the area
where localization is done there must be a number of devices with known positions (access points). The
accuracy of the determined position depends on the density of the access points and on their classes, i.e.
maximum power; it corresponds to the area in which the localized device can be found. The situation is
described by Figure 3.
In the first case the area of possible positions is the whole circular coverage area of the access point. For
this reason it is better to use APs with a smaller range (class 2, 3), although on the other hand this
increases the probability of not localizing the device at all. In the second case the localized device is in the
range of both access points, so the possible position can be reduced to the area common to both ranges.
Figure 3: Bluetooth localization.
The Bluetooth method of localization can be useful for smaller areas with a high density of short-range
APs, for example a factory area. A variety of further cases can be derived from this simple method [2]. It
is also possible to combine this way of localization with other wireless technologies like Wi-Fi, ZigBee or
IQRF [5]; a simple sketch of such an estimate follows.
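A rough sketch of the access-point based estimate discussed above: the device position is approximated by the centroid of the APs that currently see it, and the uncertainty by the smallest of their ranges. The AP coordinates and ranges in the example are illustrative only.

def estimate_position(visible_aps):
    """visible_aps: list of (x, y, range_m) for all access points that can
    currently see the device. Returns an estimated position and uncertainty."""
    if not visible_aps:
        return None
    x = sum(ap[0] for ap in visible_aps) / len(visible_aps)
    y = sum(ap[1] for ap in visible_aps) / len(visible_aps)
    uncertainty = min(ap[2] for ap in visible_aps)   # device lies within every range
    return (x, y), uncertainty

# device seen by two class-2 APs (10 m range) mounted 8 m apart
print(estimate_position([(0.0, 0.0, 10.0), (8.0, 0.0, 10.0)]))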
5 Authentication device
For the described methods of position determination some user device is needed. In the global model,
identification with position information can be described as in Figure 4. In this sequence diagram, two
main objects are available: the first one is the user's authentication device, the second one is an
authentication server. There are also two main steps. In the first step the user's credentials are sent to the
authentication authority. If the user is known and fulfils all requirements, the authentication authority asks
the authentication device for the position information. If the user is in the selected range, he or she is
authenticated. This is the basic scenario that can be used for user authentication with position information.
The previous paragraph described the basic possibility of using position information; in this part the
simplest version of the authentication device is described. A block diagram of the first generation of the
user authentication device connected to the user terminal is shown in Figure 5. In this scenario the user
uses a user terminal to connect to the authentication server. This terminal is interconnected with the user
authentication device via a USB data bus. The described authentication device consists of a central
processor unit and a secured data repository where the user's credentials are stored. To start the
authentication process the user has to unlock the device; in this case a keyboard is used to enter the
password. The authentication device is connected to the user terminal over USB, and a software tool on
the client terminal provides the interconnection between the user authentication device and the
authentication server.
For cases when the user is using a standard computer without the installed software, or a public computer,
the authentication device also contains an alphanumeric display where authentication instructions are
shown; thus, web access can also be used for the user's authentication. The authentication device
additionally contains a fingerprint reader which can complement the keyboard authentication or increase
the security features.
[Figure 4 content - sequence diagram between User, Authentication device and Authentication server: the
device greets the server ("Hello"), the server identifies itself ("Hello, I'm server XYZ. Who are you?") and
the device answers ("I'm user ABC"). An unknown user is rejected ("I don't know you. Bye."); otherwise
the server asks for the position ("Hello user ABC. Where are you?"). A position outside the allowed range
is rejected ("You're out of allowed range. Bye."); otherwise the server runs the position check and the
internal authentication process, creates the authenticated user and replies "You've been authenticated."]
Figure 4: Communication sequence used for user authentication with position information.
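The server-side logic of the sequence in Figure 4 can be sketched as below. The credential check, the allowed area (a centre point and radius) and the distance metric are illustrative placeholders; a real deployment would use the authentication server's own user database and policy.

import math

# hypothetical policy store: user -> (password, allowed centre (x, y), radius)
USERS = {"ABC": ("secret", (11.0, 11.0), 50.0)}

def authenticate(user, password, position):
    """Return True only if the credentials match and the reported position
    lies inside the range allowed for this user (cf. Figure 4)."""
    record = USERS.get(user)
    if record is None or record[0] != password:
        return False                      # "I don't know you. Bye."
    (cx, cy), radius = record[1], record[2]
    distance = math.hypot(position[0] - cx, position[1] - cy)
    return distance <= radius             # otherwise "You're out of allowed range."

print(authenticate("ABC", "secret", (12.0, 13.0)))   # True
print(authenticate("ABC", "secret", (500.0, 0.0)))   # False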
Figure 5: The user’s authentication device connected over the user’s terminal to the authentication server.
6 Future work
We are at the beginning of our work. Some basic communication tests have been done and the first
version of the authentication device has been developed. This device uses the Global Positioning System to
obtain information about the current position. In the next step we plan to create a software tool for a
standard cellular phone with an integrated GPS chip. The second generation of the authentication device
will contain more sophisticated software and wireless communication hardware. This configuration will
allow connection to the company network over Bluetooth and Wi-Fi access points. In this scenario the
authentication authority will connect to the authentication device over a wireless network with known topology.
7 Conclusions
The basic possibilities of user authentication enhanced with position information were described in this
paper, together with the basic version of the user authentication device. The device uses GPS for position
determination. Other possibilities of using wireless networks for position determination were also
described. The main aim of this paper was to review the potential of integrating user position information
into the authentication process by means of a user authentication device.
8 Acknowledgement
This research and the paper have been supported by the Czech Ministry of Education, Youth and Sports in
the frame of the MSM 0021630503 MIKROSYN New Trends in Microelectronic Systems and Nanotechnologies
Research Project, and partly by the 2C08002 Project - KAAPS Research of Universal and Complex
Autentification and Authorization for Permanent and Mobile Computer Networks, under the National
Program of Research II.
References
[1]
MONAHAN, Kevin, DOUGLASS, Don. GPS Instant Navigation : A Practical Guide from Basics to
Advanced Techniques. 2nd edition.: Fine Edge Productions, 2000. 333 s. ISBN 0938665766.
[2]
BRUCE, Walter, GILSTER, Ron. Wireless LANs End to End.: Wiley, 2002. 384 s.
ISBN 0764548883.
[3]
CUTLER, Thomas J. Dutton's Nautical Navigation , 2003. 664 s. ISBN 155750248X.
[4]
LARSON, Ron. Geometry. : Houghton Mifflin Harcourt , 2006. 1003 s. ISBN 0618595406.
[5]
GISLASON, Drew. Zigbee Wireless Networking., 2008. 288 s. ISBN 0750685972 .
[6]
DYE, Steve, BAYLIN, Frank. The GPS Manual: Principles & Applications. Baylin/Gale Productions,
1997. 248 s. ISBN 0917893298.
[7]
LARIJANI, L. Casey. GPS for Everyone : How the Global Positioning System Can Work for You .
Amer Interface Corp, 1998. 359 s. ISBN 0965966755.
[8]
MICHAEL, Ferguson, RANDY, Kalisek, LEAH, Tucker. GPS Land Navigation : A Complete
Guidebook for Backcountry Users of the NAVSTAR Satellite System . Glassford Publishing, 1997. 255
s. ISBN 0965220257.
[9]
ROSS, John. The Book of Wireless : A Painless Guide to Wi-Fi and Broadband Wireless (Paperback).
No Starch Press , 2008. 352 s. ISBN 1593271697.
[ 10 ] BAKKER, Dee M., GILSER, Diane McMichael. Bluetooth End to End , 2002. 330 s.
ISBN 0764548875.
Hash Function Design
Overview of the basic components in SHA-3 competition
Daniel Joščák
[email protected]
S.ICZ a.s.
Hvězdova 1689/2a, 140 00 Prague 4;
Faculty of Mathematics and Physics,
Charles University, Prague
Abstract
In this article we give an overview of the basic building blocks used in the design of the new hash functions
submitted to the SHA-3 competition. We briefly present the currently widely used hash functions MD5,
SHA-1, SHA-2 and RIPEMD-160. At the end we consider several properties of the candidates and give
examples of candidates in the SHA-3 competition.
Keywords: SHA-3 competition, hash functions.
1 Introduction
In 2004 a group of researchers led by Xiaoyun Wang (Shandong University, China) presented real
collisions in MD5 and other hash functions at the rump session of the Crypto conference, and they explained
the method in [10]. The same group then presented a collision attack on SHA-1 in [9], and since then a
lot of progress in collision-finding algorithms has been made. Although there is no specific reason to
believe that a practical attack on any of the SHA-2 family of hash functions is imminent, a successful
collision attack on an algorithm in the SHA-2 family could have catastrophic effects for digital signatures.
In reaction to this situation the National Institute of Standards and Technology (NIST) created a public
competition for a new hash algorithm standard, SHA-3 [1]. Apart from the obvious requirements on the
hash function (i.e. collision resistance, first and second preimage resistance, …), NIST expects SHA-3 to
have a security strength that is at least as good as that of the hash algorithms in the SHA-2 family, and that
this security strength will be achieved with significantly improved efficiency. NIST also desires the SHA-3
hash functions to be designed so that a possibly successful attack on the SHA-2 hash functions is
unlikely to be applicable to SHA-3.
The submission deadline for new designs was October 31, 2008, and 51 algorithms were submitted to the
competition. A lot of new ideas appeared in the submissions, but the candidates also share several
common properties. We try to summarize the common building blocks that appeared and categorize the
submissions according to them. Information about NIST's organization of the SHA-3 competition,
algorithm speed and the current state of attacks is taken from, and can be found at, the NIST web page [1],
the eBASH project [5] and the SHA-3 Zoo [4]. A very good comparison and categorization of the candidates
can be found in [7].
2 Desired properties
In this section we briefly present definitions of the properties that good hash functions, and thus
candidates for the SHA-3 algorithm, must have.
Collision resistant: a hash function H is collision resistant if it is hard to find two distinct inputs that
hash to the same output (that is, two distinct inputs m1 and m2, such that H(m1) = H(m2)).
Every hash function with more inputs than outputs will necessarily have collisions. Consider a hash
function such as SHA-256 that produces 256 bits of output from an arbitrarily large input. Since it must
generate one of 2^256 outputs for each member of a much larger set of inputs, the pigeonhole principle
guarantees that some inputs will hash to the same output. Collision resistance does not mean that no
collisions exist, simply that they are hard to find.
The birthday paradox sets an upper bound on collision resistance: if a hash function produces N bits of
output, an attacker can find a collision by performing only about 2^(N/2) hash operations until two outputs
happen to match. If there is an easier method than this brute-force attack, it is considered a flaw in the
hash function.
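A toy demonstration of the birthday bound, assuming Python's hashlib: the hash is truncated to 32 bits, so a collision is expected after roughly 2^16 trials. This in no way implies a weakness of the full SHA-256.

import hashlib

def find_truncated_collision(n_bytes=4):
    """Find two distinct inputs whose SHA-256 digests agree on the first
    n_bytes bytes - expected effort about 2^(4*n_bytes) hash operations."""
    seen = {}
    counter = 0
    while True:
        message = counter.to_bytes(8, "big")
        prefix = hashlib.sha256(message).digest()[:n_bytes]
        if prefix in seen and seen[prefix] != message:
            return seen[prefix], message, counter
        seen[prefix] = message
        counter += 1

m1, m2, tries = find_truncated_collision()
print(tries, m1.hex(), m2.hex())   # typically on the order of 2^16 tries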
First preimage resistant: a hash function H is said to be first preimage resistant (sometimes only preimage
resistant) if given h it is hard to find any m such that h = H(m).
Second preimage resistant: a hash function H is said to be second preimage resistant if given an input
m1, it is hard to find another input, m2 (not equal to m1) such that H(m1) = H(m2)
A preimage attack differs from a collision attack in that there is a fixed hash or message that is being
attacked, and in its complexity: optimally, a preimage attack on an n-bit hash function takes on the order
of 2^n operations to succeed.
Resistant to length-extension attacks: given H(m) and the length of m, but not m itself, an attacker is not
able to calculate H(m || m') for a chosen suffix m', where || denotes concatenation.
Efficiency: computation of a hash function must be efficient, i.e. speed matters. Hash functions are widely
deployed in many applications and it is important to have fast implementations on different architectures.
During the first SHA-3 conference organized by NIST, the organizers announced that they would initially
focus on the Intel Architecture 32-bit (IA-32) and Advanced Micro Devices 64-bit (AMD64) platforms, but
that performance on other platforms would not be overlooked. They asked whether the algorithms remain
secure if submitters adjust the tunable parameters of their candidates to run as fast as SHA-256 and
SHA-512 on IA-32 and AMD64; if not, a candidate's chances in the competition are lower.
Memory requirements and code size are very important for implementation on various embedded systems
such as smart cards.
HMAC construction: a hash function must have at least one construction that supports HMAC (or an
alternative MAC construction) as a pseudorandom function (PRF), i.e. it must be hard to distinguish
HMAC_K based on H from a random function.
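The HMAC requirement can be illustrated with the standard construction over an existing hash, here using Python's hmac and hashlib modules with SHA-256 as the underlying function; the key and message are placeholders.

import hmac
import hashlib

key = b"shared secret key"
message = b"message to be authenticated"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print(tag)

# the verifier recomputes the tag and compares it in constant time
def verify(key, message, received_tag):
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)

print(verify(key, message, tag))   # True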
3 Current hash functions
We briefly describe the four best-known and most used hash algorithms to show the evolution of hash
functions. All of the functions use the same message padding (appending the bit "1", then zeroes and the
length of the message, so that the padded message is a multiple of the compression function's block size).
All of the functions use the Merkle-Damgård construction built from a compression function, which is
shown in Figure 1. All but RIPEMD-160 use the Davies-Meyer construction of the compression function
from a block cipher. And
all of the functions use only very simple register instructions: logical operations (OR, AND, XOR) in simple
nonlinear functions, modular addition, shifts and rotations. The functions differ mainly (apart from the
obvious lengths of the registers, message blocks and outputs) in the complexity of the message expansion
function and the step function, which are parts of the compression function. The newer the function, the
more complex the message expansion and step function used.
[Figure 1 diagram: message blocks M1, M2, …, Mn are processed sequentially by the compression function f,
starting from the initialization vector IV and producing the final output.]
Figure 1: Merkle-Damgård construction.
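A toy sketch of the Merkle-Damgård iteration and padding shown in Figure 1. The compression function f is only a stand-in built from SHA-256 so the example runs; real designs define f directly.

import hashlib

BLOCK = 64          # block size in bytes
DIGEST = 32         # size of the chaining value / output in bytes

def f(h, block):
    """Stand-in compression function: maps (chaining value, message block)
    to a new chaining value."""
    return hashlib.sha256(h + block).digest()

def md_hash(message, iv=b"\x00" * DIGEST):
    # Merkle-Damgård padding: bit '1', zeroes, 8-byte message length
    length = len(message).to_bytes(8, "big")
    padded = message + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % BLOCK) + length
    h = iv
    for i in range(0, len(padded), BLOCK):        # chain f over the blocks
        h = f(h, padded[i:i + BLOCK])
    return h

print(md_hash(b"abc").hex())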
3.1 MD5
MD5 was designed by Ron Rivest in 1991. It was a successor of the previous MD4 and its output is
128 bits long. The message expansion is very simple - identity and permutations of the message-block
registers. The step function is shown in Figure 2. The first cryptanalysis appeared in 1993 [6], and real
collisions have been known since 2004 [10]. It is not recommended to use this function for cryptographic
purposes any more.
Figure 2: MD5 step function; F is a simple nonlinear function (taken from Wikipedia).
3.2 SHA-1
The specification was published in 1995 as the Secure Hash Standard, FIPS PUB 180-1, by NIST. The
output of the function is 160 bits long. It was a successor of SHA-0, which was withdrawn by the NSA
shortly after its publication and superseded by the revised version. SHA-1 differs from SHA-0 only by a
single bitwise rotation in the message schedule of its compression function; this was done, according to
the NSA, to correct a flaw in the original algorithm which reduced its cryptographic security. It is the most
commonly used hash function today.
A collision attack on SHA-1 was presented in [9]. No real collisions have been found to date, but the
complexity of the attack is claimed to be roughly 2^61 operations. It is not recommended to use this
function for new applications.
Figure 3: SHA-1 step function; F is a simple nonlinear function (taken from Wikipedia).
3.3 SHA-2
SHA-2 is a family of four hash functions: SHA-224, SHA-256, SHA-384 and SHA-512. The algorithms
were first published in the draft FIPS PUB 180-2 in 2001. The 384- and 512-bit versions use different
constants, 64-bit registers and 1024-bit message blocks in the compression function; otherwise they are the
same. The SHA-2 functions have the same construction properties as SHA-1, but no successful applications
of the previous attacks on SHA-1 or MD5 have been published. This is believed to be due to their more
complex message expansion and step function. Nowadays users are strongly encouraged to move to these
functions.
Figure 4: SHA-2 step function; Ch, Ma, Σ0 and Σ1 are less trivial functions (taken from Wikipedia).
3.4 RIPEMD-160
RIPEMD-160 is a 160-bit cryptographic hash function designed by H. Dobbertin, A. Bosselaers and
B. Preneel. It is intended as a secure replacement for the 128-bit hash functions MD4 and MD5. The
speed of the algorithm is similar to that of SHA-1, but the structure of the algorithm is different, as
shown in Figure 5: it uses a balanced Feistel network known from the theory of block ciphers. There are
no successful attacks known on RIPEMD-160, and the function, together with the SHA-2 family, is
recommended by ETSI 102176-1.
Figure 5: RIPEMD compression function.
4 Building blocks
In this section we provide a list of the common building blocks that appeared in the SHA-3 competition.
The list may not be complete and there may be other common properties of the candidates. For each
building block we tried to summarize its pros and cons and give some examples of that design strategy.
Links to the documentation of the candidates can be found at the NIST web site [1].
4.1 Feedback Shift Register (FSR)
Linear and nonlinear feedback shift registers are often used in stream ciphers. Because of their good
pseudorandom properties, easy implementation in hardware and well-known theory, they are good
candidates for use as a building block in a compression function (a minimal sketch follows the examples below).
Pros: efficiency in HW, known theory from stream ciphers, easy to implement.
Cons: implementation in SW may be slow, possible drawbacks of stream ciphers such as a long initialization.
Examples: MD6, Shabal, Essence, NaSHA.
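A minimal 16-bit Fibonacci LFSR sketch; the tap positions correspond to one commonly used maximal-length polynomial (x^16 + x^14 + x^13 + x^11 + 1) and are given only as an example of the building block, not of any particular candidate.

def lfsr16(state, n_bits):
    """Generate n_bits pseudorandom bits from a 16-bit Fibonacci LFSR."""
    out = []
    for _ in range(n_bits):
        # feedback taken from taps 16, 14, 13 and 11
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        out.append(state & 1)
    return out, state

bits, _ = lfsr16(0xACE1, 16)
print(bits)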
4.2 Feistel Network
A Feistel network is a general method for transforming any function into a permutation. The strategy has
been used in the design of many block ciphers, and because hash functions are often based on a block
cipher it is used there as well. A Feistel network works as follows: take a block of length n bits and divide
it into two parts, called L and R. A round of the cipher can be calculated from the previous round by
setting L_i = R_{i-1} and R_i = L_{i-1} XOR f(R_{i-1}, K_i), where K_i is the subkey used in the i-th round and f is an
arbitrary round function. If L and R are of the same size, the Feistel network is said to be balanced; if they
are not, it is said to be unbalanced (a small sketch follows the examples below).
Pros: theory and proofs from block ciphers.
Cons: cannot be generalized.
Examples: ARIRANG, BLAKE, Chi, CRUNCH, DynamicSHA2, JH, Lesamnta, Sarmal, SIMD, Skein,
TIB3.
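A small sketch of the balanced Feistel network described above, on 32-bit halves with an arbitrary toy round function (not taken from any real cipher); note that decryption simply applies the same rounds with the subkeys in reverse order.

MASK = 0xFFFFFFFF

def round_function(r, k):
    """Arbitrary toy round function f(R, K) - for illustration only."""
    return ((r * 0x9E3779B1) ^ k) & MASK

def feistel_encrypt(left, right, subkeys):
    for k in subkeys:            # L_i = R_{i-1}, R_i = L_{i-1} XOR f(R_{i-1}, K_i)
        left, right = right, left ^ round_function(right, k)
    return left, right

def feistel_decrypt(left, right, subkeys):
    for k in reversed(subkeys):  # undo the rounds in reverse order
        left, right = right ^ round_function(left, k), left
    return left, right

keys = [0x01234567, 0x89ABCDEF, 0xDEADBEEF, 0x0BADF00D]
c = feistel_encrypt(0x11111111, 0x22222222, keys)
print(c, feistel_decrypt(*c, keys))   # decryption restores the plaintext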
4.3
Final Output Transformation
A method used in some of the hash functions to prevent length-extension attacks.
Pros: helps to prove properties and counters the length-extension attack.
Cons: two different transformations (compression function and output transformation).
Examples: Cheetah, Chi, Crunch, ECHO, ECOH, Grostl, Keccak, Lane, Luffa, Lux, Skein, Vortex.
4.4
Message expansion
A method for preparing the message blocks as input for the steps of the compression function, similar to
the key expansion in block ciphers.
Pros: theory from block ciphers, known as key expansion.
Cons: cannot be generalized.
Examples: ARIRANG, BLAKE, Cheetah, Chi, CRUNCH, ECOH, Edon-R, Hamsi, Khichidy, LANE,
Lesamnta, SANDstorm, Shabal, SHAvite-3, SIMD, Skein, TIB3.
4.5
S-box
S-boxes are used for substitution to obscure the relationship between the key (message block) and the
ciphertext (value of the intermediate chaining variable). Because of the extensive analysis and known
properties of AES, the majority of the hash functions submitted to the first round use S-boxes from AES
(a sketch of the look-up structure follows the examples below).
Pros: theory from block ciphers, speed in HW.
Cons: often implemented as look-up tables, which can be viewed as a door to possible side-channel attacks.
Examples: Cheetah, Chi, CRUNCH, ECHO, ECOH, Grostl, Hamsi, JH, Khichidy, LANE, Lesamnta,
Luffa, Lux, SANDstorm, Sarmal, SHAvite-3, SWIFFTX, TIB3 (33 out of 51 candidates use S-boxes).
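A sketch of S-box substitution on 4-bit nibbles. The table below is an arbitrary illustrative permutation (not the AES S-box, which operates on bytes); it only shows the table-lookup structure that makes naive implementations prone to cache-timing side channels.

# illustrative 4-bit S-box: a permutation of the values 0..15
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
INV_SBOX = [SBOX.index(i) for i in range(16)]

def substitute(data):
    """Apply the S-box to both nibbles of every byte (simple look-up table)."""
    return bytes((SBOX[b >> 4] << 4) | SBOX[b & 0x0F] for b in data)

def inverse_substitute(data):
    return bytes((INV_SBOX[b >> 4] << 4) | INV_SBOX[b & 0x0F] for b in data)

state = bytes.fromhex("00112233")
sub = substitute(state)
print(sub.hex(), inverse_substitute(sub).hex())   # round-trips to 00112233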
4.6 Wide Pipes
A countermeasure to prevent multi-collisions and multi-preimages of the Joux type [8]. A wide-pipe design
means that the intermediate chaining variable is kept longer than the hash output, e.g. 512 bits for a
256-bit hash.
Pros: prevents multi-collisions.
Cons: more complex, and it is not as efficient to produce a chaining variable of double length that still has
the good properties of a chaining variable.
Examples: ARIRANG, BMW, Chi, Echo, Edon-R, Grostl, JH, Keccak, Lux, MD6, SIMD.
4.7 MDS Matrices
Good diffusion properties in the theory of block ciphers are often achieved by using Maximum Distance
Separable (MDS) matrices. These matrices can be helpful in the design of hash functions as well.
Pros: mathematical background and proven diffusion properties.
Cons: memory requirements.
Examples: ARIRANG, Cheetah, ECHO, Fugue, Grostl, JH, LANE, Lux, Sarmal, Vortex.
4.8 Tree structure
A tree structure of hashing is an intuitive approach which takes advantage of the parallelism of
independent compression function threads and counters the current attacks on the Merkle-Damgård
construction (a toy sketch follows the example below).
Pros: parallelism, resistance against the current attacks on SHA-1 and MD5.
Cons: memory requirements and the need for "modes" of operation.
Example: MD6.
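A toy sketch of tree hashing: leaves are hashed independently (and could be processed in parallel) and then combined pairwise up to a single root. The domain-separation prefixes, the leaf size and the SHA-256-based stand-in hash are illustrative choices, not the MD6 mode of operation.

import hashlib

def _h(prefix, data):
    return hashlib.sha256(prefix + data).digest()

def tree_hash(message, leaf_size=1024):
    """Hash a message as a binary tree over fixed-size leaves."""
    leaves = [message[i:i + leaf_size]
              for i in range(0, len(message), leaf_size)] or [b""]
    level = [_h(b"\x00", leaf) for leaf in leaves]        # leaf hashes (parallelizable)
    while len(level) > 1:
        if len(level) % 2:                                # duplicate last node if odd
            level.append(level[-1])
        level = [_h(b"\x01", level[i] + level[i + 1])     # combine pairs
                 for i in range(0, len(level), 2)]
    return level[0]

print(tree_hash(b"x" * 5000).hex())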
4.9 Sponge structure
A sponge works by "absorbing" the message and then "squeezing" the state to produce an output (a toy
sketch is given after the examples below). Absorbing works as follows:
• Initialize state
• XOR some of the message to the state
• Apply compression function
• XOR some more of the message into the state
• Apply compression function …
Squeezing works as follows:
• Apply compression function
• Extract some output
• Apply compression function
• Extract some output
• Apply compression function …
Examples: Keccak, Luffa.
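A toy sponge sketch following the absorb/squeeze steps listed above, with the rate, capacity and a SHA-256-based stand-in for the permutation chosen purely for illustration; real sponge candidates such as Keccak define their own fixed permutation.

import hashlib

RATE, CAPACITY = 16, 16                    # state = rate + capacity bytes
STATE_SIZE = RATE + CAPACITY

def permute(state):
    """Stand-in for the fixed permutation of a real sponge function."""
    return hashlib.sha256(state).digest()[:STATE_SIZE]

def sponge(message, out_len=32):
    # pad so the message splits into rate-sized blocks
    message += b"\x80" + b"\x00" * ((-len(message) - 1) % RATE)
    state = bytes(STATE_SIZE)
    for i in range(0, len(message), RATE):                     # absorbing
        block = message[i:i + RATE] + bytes(CAPACITY)
        state = permute(bytes(a ^ b for a, b in zip(state, block)))
    output = b""
    while len(output) < out_len:                               # squeezing
        output += state[:RATE]
        state = permute(state)
    return output[:out_len]

print(sponge(b"abc").hex())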
4.10 Merkle-Damgård-like structure
Structures very similar to the Merkle-Damgård construction of hash functions are still very popular. The
Merkle-Damgård construction is shown in Figure 1; the suggested techniques use various ways of chaining
the intermediate variables or context.
Pros: known structure, speed.
Cons: how to prevent the previous attacks, multi-collisions and extension attacks.
Examples: ARIRANG, CRUNCH, Cheetah, Chi, LANE, Sarmal.
5 Conclusion
We have tried to present an up-to-date overview of hash function design. We showed the traditional
design techniques and presented some of the building blocks of the algorithms submitted to the SHA-3
competition, along with their pros and cons.
References
[1]
National Institute of Standards and Technology: Cryptographic Hash Project
http://csrc.nist.gov/groups/
ST/hash/index.html
[2]
National Institute of Standards and Technology: SHA-3 First Round Candidates
http://csrc.nist.gov/
groups/ST/hash/sha-3/Round1/submissions_rnd1.html
[3]
Souradyuti Paul. First SHA-3 conference organized by NIST
http://csrc.nist.gov/groups/ST/hash/sha3/Round1/Feb2009/documents/Soura_TunableParameters.pdf
[4]
IAIK Graz, SHA-3 ZOO http://ehash.iaik.tugraz.at/index.php?title=The_SHA3_Zoo&oldid=3035
[5]
Daniel J. Bernstein and Tanja Lange (editors). eBACS: ECRYPT Benchmarking of Cryptographic
Systems. http://bench.cr.yp.to, accessed 27 March 2009
[6]
Bert den Boer; Antoon Bosselaers. Collisions for the Compression Function of MD5. pp. 293–304.
ISBN 3-540-57600-2
[7]
Ewan Fleischmann and Christian Forler and Michael Gorski: Classification of the SHA-3
Candidates Cryptology ePrint Archive: Report 511/2008, http://eprint.iacr.org/ version 0.81, 16
February 2009
[8]
A. Joux: Multicollisions in iterated hash functions. Application to cascaded constructions.
Proceedings of Crypto 2004, LNCS 3152, pages 306-316.
[9]
Wang X., Yin Y. L., and Yu H.: Finding collisions in the full SHA-1. In Victor Shoup, editor,
Advances in Cryptology - CRYPTO ’05, volume 3621 of Lecture Notes in Computer Science,
pages 17 – 36. Springer, 2005, 14 - 18 August 2005.
[ 10 ] Wang X. and Yu H.: How to Break MD5 and Other Hash Functions. In Ronald Cramer, editor,
Advances in Cryptology - EUROCRYPT 2005, volume 3494 of Lecture Notes in Computer
Science, pages 19 – 35. Springer, 2005.
Measuring of the time consumption of the WLAN’s security functions
Jaroslav Kadlec, Radek Kuchta, Radimír Vrba
kadlecja | kuchtar | vrbar @feec.vutbr.cz
Dept. of Microelectronics, Faculty of Electrical Engineering and Communication,
Brno University of Technology,
Brno, Czech Republic
Abstract
Securing the communication is a key requirement for all wireless networks. WLANs' vulnerability to
security threats is addressed by several security mechanisms, but all of these mechanisms have a negative
impact on the communication speed and the final network performance. The time consumption of the
different security mechanisms used in wireless networks limits their suitability for time-sensitive
applications. This paper is focused on performance tests of an IEEE 802.11g WLAN. A new testbed for
measuring the basic network parameters with 10 ns resolution was developed and used to measure the
influence of the different security levels that can be applied in IEEE 802.11g. The three main parameters
of network communication in wireless networks are bandwidth, delay and jitter. Knowing the values of
these three parameters allows us to decide whether the WLAN's parameters are acceptable for use in
real-time applications or not; based on the measured network dynamics we can select which real-time
application is suitable for the wireless network. In our measurements we focused on the basic security
mechanisms WEP, WPA and WPA2. We also measured the impact of an additional encrypted
communication tunnel on the WLAN performance, because a VPN tunnel with encryption functions is
one of the widely used security solutions.
Keywords: WLAN, security functions, measurement.
1 Introduction
Wireless digital communication is becoming increasingly prominent in the industrial automation domain.
Wireless LANs based on IEEE 802.11 and other wireless concepts based on 802.15 (Bluetooth and
ZigBee) have been introduced, and more and more producers of wireless systems try to offer complete
wireless solutions for specific network applications which require a high level of security, high-speed
network communication and well-defined QoS parameters. For designing remote mechanisms
(tele-monitoring, tele-service, etc.) using wireless communication an increasing number of communication
technologies is available, but the use of these technologies in real-time applications is limited by strict
quality-of-service and performance requirements.
2 Measurement scenario
The measurement scenario is a mixture of software-based packet sniffers and special network boards for
precise time measurement of the network communication with 10 ns resolution. A measuring platform
with two Siemens EB200 development boards for network testing was created for our measurements.
These boards have two independent Ethernet ports, and each of these ports has a free-running timer for
adding time stamps to the sent packets. The measurement application uses the first network board to
inject packets with time stamps into the tested system (network, device, etc.), and the second one receives
the packets and evaluates all important parameters. Synchronization of the free-running timers is done by a special
hardware solution that allows us to reach the highest possible resolution of 10 ns. The boards are controlled
by special software developed especially for this application: the user can set up all parameters of the testing
scenario in the tool, and the measured results are stored and displayed in real time during the testing
process. Thanks to the Siemens EB200 boards we can control every single bit in an Ethernet packet, from
the lowest network layers up to UDP and TCP frames. Handling the lowest network functions gives us
precise control over all packets in the network and over the timing of sending and receiving these packets
in the EB200 network card buffers. The payload of the sent packets consists of a sequential number and
random bytes with a total length of 100 bytes. The packet size also influences the final value of the
measured delay; a packet length of 146 bytes (100 bytes of payload and 46 bytes of packet header) was
determined to be an optimal value in several test cases with different network components.
Figure 1: Software tool for measuring network parameters.
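A simplified sketch of the test traffic described above: a 100-byte payload carrying a sequence number, a nanosecond time stamp and random filler, plus the latency computation on the receiving side. It only mimics the packet format in software; it cannot reproduce the 10 ns hardware time-stamping of the EB200 boards.

import os
import struct
import time

PAYLOAD_LEN = 100                     # 100 B payload + 46 B headers = 146 B on the wire
HEADER = struct.Struct("!IQ")         # 4-byte sequence number, 8-byte timestamp [ns]

def build_payload(sequence):
    timestamp = time.perf_counter_ns()
    filler = os.urandom(PAYLOAD_LEN - HEADER.size)
    return HEADER.pack(sequence, timestamp) + filler

def latency_ns(payload, receive_time_ns):
    sequence, sent_ns = HEADER.unpack(payload[:HEADER.size])
    return sequence, receive_time_ns - sent_ns

p = build_payload(1)
print(latency_ns(p, time.perf_counter_ns()))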
We made several performance tests of the WLAN with a network topology consisting of a computer with
two EB200 cards, one notebook with a Wi-Fi connection and one D-LINK DWL-G700AP wireless access
point. At the beginning we measured the parameters of the WLAN without any security functions. These
values are used as reference values for all security function tests and remove the influence of the used
network components and the notebook's network bridge from the final values. All measurements were
done with 5000 test packets to ensure a sufficient amount of input data for statistical evaluation.
Figure 2: Used measuring scenario of wireless network performance parameters.
All measurements of the wireless network were done under laboratory conditions to prevent the influence
of external parasitic disturbances. The wireless network had only one active connection and was not loaded
by any other network traffic. The wireless signal strength and quality were optimized to reach the best
wireless network conditions with a minimized level of interference.
3 Results
We measured the four most widely used security functions in Wi-Fi networks. The first measured security
function was 64-bit WEP (Wired Equivalent Privacy). This protocol uses the stream cipher RC4 for
confidentiality and the CRC-32 checksum for integrity. Standard 64-bit WEP uses a 40-bit key (also known
as WEP-40), which is concatenated with a 24-bit initialization vector (IV) to form the RC4 traffic key. The
measured mean time consumption of WEP64 is approximately 143 µs (142 761 ns, see Table 1). The
packet latency histograms have a similar shape for all measured security protocols except WPA2, which
uses block cipher encryption.
Figure 3: Histogram of measured packet latencies of the WEP security protocol.
The second measured security function was the extended 128-bit WEP protocol using a 104-bit key. The
larger key size combined with the same encryption mechanism and integrity check leads to the longer time
required for packet encryption; the measured mean time is approximately 147 µs (146 798 ns).
The third measured security protocol was WPA, which is defined in the IEEE 802.11i standard. The WPA
protocol also uses the RC4 cipher but improves the use and exchange of the shared secret by means of
TKIP (Temporal Key Integrity Protocol). WPA increases the size of the IV to 48 bits and alters the values
acceptable as IVs. This allows WPA to use the same algorithm as WEP, but plugs the hole by controlling
the IV values going into the algorithm, which makes WPA more resistant to security threats. To ensure
integrity WPA incorporates an algorithm known as Michael instead of the simple CRC used by WEP. This
algorithm creates a unique integrity value using the sender's and receiver's MAC addresses. However,
Michael uses a simple encryption scheme that can be cracked using brute-force methods. To compensate
for this issue, if Michael detects more than two invalid packets in under a minute, it halts the network for
one minute and resets all passwords. This reset function was not triggered during our measurements. The
added algorithms and functions such as MD5, SHA-1, HMAC, PMK and PTK decrease the speed of
WPA compared to WEP. The measured mean time consumption for packet preparation is approximately
152 µs (152 359 ns).
The last measured security protocol was WPA2. The main difference between WPA2 and WPA is the use
of the block cipher AES instead of the stream cipher RC4. This different type of encryption results in the
largest time consumption for packet encryption, approximately 178 µs (177 946 ns), and a differently
shaped histogram of packet latencies (see Figure 4).
Figure 4: Histogram of measured packet latencies of the WPA2 security protocol.
Figure 5 shows the latency histograms of all measured security protocols. Most of the packets have a
latency below 5000 ns. For better readability the Y-axis has a logarithmic scale. The peak in the 'Other' bin
is caused by the Wi-Fi technology used and was not included in the security function results; the value of
this peak is very similar across all measured security functions.
Figure 5: Latency histogram of all measured scenarios.
All measured parameters are summarized in the following table.
Security function | Mean [ns] | Deviation [ns] | Encryption delay [ns]
WEP64             | 142 761   | 1 645 948      | 12 799
WEP128            | 146 798   | 1 693 578      | 16 836
WPA               | 152 359   | 1 789 710      | 22 396
WPA2              | 177 946   | 2 049 761      | 47 983
No encryption     | 129 962   | 1 773 390      | -
Table 1: Comparing of measured wireless network security functions.
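The "Encryption delay" column in Table 1 is simply the difference between the mean latency with the given security function and the unsecured baseline, as the short check below illustrates (it matches the table up to rounding of the underlying means):

means_ns = {"WEP64": 142_761, "WEP128": 146_798, "WPA": 152_359, "WPA2": 177_946}
baseline_ns = 129_962                       # mean latency without encryption

for name, mean in means_ns.items():
    # reproduces the "Encryption delay" column of Table 1 (up to 1 ns rounding)
    print(f"{name}: encryption delay = {mean - baseline_ns} ns")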
4 Conclusion
A set of measurements of the performance of WLAN security functions was presented in this paper. For
measurement purposes a sophisticated tool with high-precision resolution and absolute control of the
network traffic was created. Our measured results are unique due to their precision; manufacturers of
wireless network devices do not provide this kind of measurement, which is very important for real-time
applications. We focused on the wireless network device parameters, on the definition of security
technologies for a wireless mobile platform, and on measurements of the available technologies for wireless
network applications. After analyzing our results we can easily decide which WLAN security function can
be applied in a planned application; based on these results it is possible to find the wireless security level
which best fits the target WLAN application. The gap between the security level and the network
performance measured in this paper cannot itself be removed, but the user can choose the optimal settings
for his application. Our results relate only to one Wi-Fi AP from one manufacturer; the time consumption
of the security functions can vary for different Wi-Fi AP types from different manufacturers. Our future
work will focus on these different Wi-Fi AP types and will compare the implementation of the basic
security functions in the final wireless network devices.
5 Acknowledgments
The research has been supported by the Czech Ministry of Education in the frame of MSM 0021630503
MIKROSYN New Trends in Microelectronic Systems and Nanotechnologies Research Project, partly
supported by the Ministry of Industry and Trade of the Czech Republic in a Project - KAAPS Research of
Universal and Complex Autentification and Authorization for Permanent and Mobile Computer
Networks, under the National Program of Research II and by the European Commission in the 6th
Framework Program under the IST-016969 VAN - Virtual Automation Networks project.
6 References
[1]
J. W. Mark and W. Zhuang, Wireless Communications and Networking. Prentice Hall, 2003,
ISBN: 0-13-040905-7
[2]
The Cable Guy, May 2005, Wi-Fi Protected Access 2,
http://www.microsoft.com/technet/community/columns/cableguy/cg0505.mspx#EFD
[3]
VAN – Virtual Automation Network, Real Time for Embedded Automation Systems including
Status and Analysis and closed loop Real time control, Real-time for Embedded Automation
Systems deliverable, 6th Framework Program, 2007, http://www.vaneu.org/sites/van/pages/files/D04.1-1_FinalV1_2_060702.pdf
[4]
VAN – Virtual Automation Network, Specification for wireless in industrial environment and
industrial embedded devices, Wireless in Industries - deliverable, 6th Framework Program, 2007,
http://www.van-eu.eu/sites/van/pages/files/D03.2-1.pdf
[5]
SARKAR, N.I., SOWERBY, K.W. Wi-Fi Performance Measurements in the Crowded Office
Environment: a Case Study. In International Conference on Communication Technology, 2006.
ICCT '06. Guilin: [s.n.], 2006. s. 1-4. ISBN 1-4244-0800-8.
[6]
Alexander Wiesmaier, Marcus Lippert, Vangelis Karatsiolis. The Key Authority – Secure Key
Management in Hierarchical Public Key Infrastructures. Department of Computer Science.
Darmstadt, Germany : Proc. of the International Conference on Security and Management (SAM
2004), 2004. p. 5
[7]
IEEE. Draft 4 Recommended Practice for Multi-Vendor Access Point Interoperability via an InterAccess Point Protocol Across Distribution Systems Supporting IEEE 802.11Operation. s.l. : IEEE,
2002. Draft 802.1f/D4
[8]
IEEE. Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY)
Specifications. 1999. IEEE Standard 802.11
[9]
Molta, Dave. 802.11r: Wireless LAN Fast Roaming. Network Computing. [Online] 4 16, 2007.
[Cited: 5 1, 2007.]
http://www.networkcomputing.com/channels/wireless/showArticle.jhtml?articleID=198900107.
[ 10 ] CELINE, Graham. Creating Wi-Fi Test Metrics [online]. Advantage Business Media, c2009 [cit.
2009-03-12]. Accessible from WWW:
<http://www.wirelessdesignmag.com/ShowPR.aspx?PUBCODE=055&ACCT=0031546&ISSUE=
0505&RELTYPE=PR&ORIGRELTYPE=FE&PRODCODE=0000&PRODLETT=B>.
Experiences with Massive PKI Deployment and Usage
Daniel Kouřil, Michal Procházka
{kouril,michalp}@ics.muni.cz
Masaryk University, Botanická 68a, 602 00 Brno, and
CESNET, Zikova 4, 160 00 Praha 6,
Czech Republic
Abstract
The Public Key Infrastructure (PKI) and X.509 certificates have been known for many years, but their
utilization in real-world deployments still presents many shortcomings. In this paper we present experiences
gained while operating a large-scale distributed infrastructure serving many users. We will start with a
short presentation of the emerging European grid infrastructure and the EU Ithanet project, which both
provide examples of real-world environments based on PKI. Then we will describe problems that arose
during PKI deployment in the grid and the solutions developed for their elimination or mitigation,
including our contributions. Two main views will be presented. First, we will focus on issues that can be
addressed by operators of certification authorities and/or the infrastructure. In this part the problem of the
trustworthiness of CAs will be presented, as well as the accreditation procedures which are employed to
assess individual CAs and to give guidance to relying parties on accepting them. Following that, a short
survey of revocation check mechanisms will be given and discussed.
The second part of the paper will deal with aspects related to users' usage of PKI. We will describe
methods by which a certificate can be obtained, especially focusing on the utilization of identity federations
and their combination with standard PKI. We will also describe approaches to support the Single Sign-On
principle, based on short-lived or proxy X.509 certificates. Attention will also be paid to ways of protecting
private keys. We will describe issues related to the usage of smart card technology and the employment of
online credential repositories.
Keywords:
PKI, grids, authentication, certification authorities, Single Sign-On principle.
1 Introduction
People can usually achieve a particular goal more easily and faster if they collaborate and share their
knowledge with each other. The same approach can be applied in research, too. In order to be able to
produce significant results, contemporary research projects usually concentrate various experts. It is
common for such activities to attract people from multiple different institutions that may even be located
in different countries. Such an arrangement often makes it possible to concentrate the substantial knowledge
and expertise of many people, which is necessary for current research to be performed not on isolated
islands but as a broad collaboration. Regardless of how stimulating such an environment is, it also breeds
brand new problems concerning the organization of the collaborating people. One of the key problems is
the establishment of the virtual group of researchers and their mutual communication. Current research
also often requires access to sophisticated devices and is resource-intensive in terms of computational,
network or storage facilities. Providers of such resources do not want to grant access to just anybody but
want to keep control over which users are allowed access. Also, activities sometimes require limiting access
to their internal communication and documents, since they may contain sensitive data that must not leak
or be tampered with for any reason.
Achieving a required level of security may be easy in a closed environment that connects only several
people who already know each other or even come from the same institution. However, moving one level
higher, fulfilling security requirements gets more complicated in an environment where hundreds or even thousands of users from many different countries and institutions want to collaborate.
One of the main problems to address is strong authentication in such an environment, which would
provide a level of confidence in users’ identities that is acceptable to the majority of the participants. An
authentication system based on the Public Key Infrastructure (PKI) [1] uses a decentralized management of
users’ information and therefore it is suitable as the authentication system for distributed environments.
In the PKI world, a user’s authentication data is represented by a personal public-key certificate providing identification in the digital world. The relationship between the user and her digital certificate is
approved by a Certification Authority (CA). In PKI every entity holds its key pair that is used for
asymmetric cryptography. That means the data encrypted by a public key can be decrypted only with the
corresponding private key and vice versa. A personal digital certificate binds the key pair to its owner and
provides information about the owner identity. Each certificate contains a public key and information
about the person such as her name, institution and location. The certificate along with all necessary
information is signed with the private key of a CA, whose identification is also included in the certificate.
In order to make the CA operation scalable, the model of PKI introduced the concept of Registration
Authorities (RA) that are responsible for proper authentication of applicants who ask for certificates. In this
model the CA signs certificate requests that have been validated by authorized RAs. Issued certificates are used to authenticate users and services to each other; the set of entities that trust a particular CA (or multiple CAs) is called the relying party.
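The binding just described can be seen directly by inspecting a certificate. The following minimal sketch is not part of the paper; it assumes the third-party Python `cryptography` package and a hypothetical file name, and simply prints the fields that tie a public key to its owner and to the issuing CA.

```python
# Minimal sketch (not from the paper): inspecting how an X.509 certificate
# binds a public key to its owner and names the issuing CA.
# Assumes the third-party "cryptography" package and a hypothetical file
# called "user-cert.pem".
from cryptography import x509
from cryptography.hazmat.primitives import hashes

with open("user-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject (certificate owner):", cert.subject.rfc4514_string())
print("Issuer (signing CA):        ", cert.issuer.rfc4514_string())
print("Valid from", cert.not_valid_before, "until", cert.not_valid_after)
print("Public key type:", type(cert.public_key()).__name__)
print("Fingerprint (SHA-256):", cert.fingerprint(hashes.SHA256()).hex())
```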
The principles described in the previous paragraphs apply to PKI based on the ISO X.500 schema and
accommodate certificates following the X.509 standard [2]. There are also other mechanisms to provide
public key infrastructure, with Pretty Good Privacy (PGP) being the most popular among them. In this
paper we will only focus on X.509 certificates and sets of CAs, since this model provides a higher level of assurance
for operations of distributed systems.
While the current PKI technology seems to be suitable and mature enough to cover large distributed user
communities, there are still organizational and technical aspects that can negatively influence the overall
level of security of the system. Over the past years we have participated in several projects focused on the
establishment and routine operation of large infrastructures, where the PKI is used as the primary
authentication mechanism. Although the projects covered completely different user communities, the conclusions regarding users’ view of the PKI are, interestingly enough, almost identical. In this paper we
describe how PKI was implemented in such large environments and summarize precautions taken to make
the PKI more usable for users and system operators.
2 PKI in real-world deployments
The features of PKI fit the requirements for building a robust environment with a reliable authentication mechanism for connected users and services. Because PKI provides a very good level of scalability, it is suitable as an authentication mechanism for large-scale distributed environments with hundreds or thousands of users. But PKI also has some limitations that are encountered when one tries to deploy it at that scale. These limitations are not visible in small-scale solutions, but they play an important role in the security of larger systems. In this section we describe two environments where PKI was selected as the
authentication mechanism.
2.1 PKI in grids
Grid systems are an emerging concept supporting collaboration and resource sharing. A grid system allows
its users to tie together various types of resources that often span multiple institutes or even countries.
Most current systems provide facilities to perform high-performance computations or handle large amounts of data. One of the most famous contemporary examples is the grid infrastructure built to process data generated by the Large Hadron Collider at CERN.
Grid systems not only provide the infrastructure to access the resources but also introduce other basic services as added value compared to ad-hoc systems that just couple the resources. One such additional generic service is security provisioning. Grid users can easily establish secure communication between each other using the security functions embedded in the basic grid middleware. Utilizing these
services makes it easy for users to start their collaboration without having to bother with technical aspects
of security mechanisms.
Unlike other distributed systems (e.g., peer-to-peer architectures), grid systems have always aimed at
providing a high level of security. PKI was a natural choice, since its features fit the requirements for building a robust environment with a reliable authentication mechanism for connected users and services. Various approaches to building a grid system have appeared in the past, but the majority of them based their security on PKI. Several weaknesses have been spotted during the years of grid system development, which led to improvements and new approaches that increased the overall security level of PKI-based systems.
2.2 PKI in the ITHANET project
We also participated in the EU ITHANET project, whose goal was to build an international network for
thalassaemia research comprising Mediterranean and Black-Sea countries.
While preparing the collaboration infrastructure, we had to protect communication to prevent leakage of sensitive data about patients. We designed and set up a solution based on virtual networks that made it possible for the participants to join the community. PKI was chosen as the authentication mechanism, since it is scalable enough to cover the community and can also be easily integrated with the VPN solution.
We established a dedicated CA for users who did not already possess a digital certificate and also provided them with client tools for establishing the virtual network tunnels. First results showed that users were not able to handle digital certificates and private keys properly. Further experience revealed that the PKI technology and the principles behind digital certificates are too complex for users without any computing background. The conclusion was to make the security as transparent as possible for the end-users, since that is the only way to retain a sufficient level of security.
3 Operating a PKI
Experiences from the grid systems suggest that it is possible to build a generalized PKI suitable for a wide
range of applications. Being a provider of very basic middleware services, a grid infrastructure is not tied to any particular application. Since PKI is provided as part of the middleware layer, it is also independent of the applications operated on the grid. Such an arrangement is different from the majority of current
systems, where the infrastructure provider and the application provider are the same entity. For example, if contemporary electronic banking systems allow clients to authenticate using digital certificates, they require them to obtain a certificate issued by the PKI managed by the bank. A similar situation can be seen in other areas, such as access to internal information systems at universities or corporations. Running a separate PKI for every single application is clearly not a wise option, even though there can be several reasons for
a user to possess multiple digital identities. On the other hand, establishing a trusted and properly
operated PKI is a really difficult problem requiring a lot of resources. It is therefore useful if an established
PKI is open to other applications that are willing to accept the rules of the PKI.
In this section we describe several issues concerning the operation of a trusted PKI. We also describe solutions and approaches used to address or mitigate these problems.
3.1 Trustworthiness of CAs
PKI in grids and other general infrastructures assumes that a user is in possession of a single key pair and a
corresponding certificate, which is used to access many different applications. Such an arrangement makes
the life of the user easier and more secure as well since there is only one private key to secure. In order to
provide users with such general certificates, it is necessary to establish a CA that is willing and able to
issue them. The CA is a corner-stone in the generalized PKI, since trust in it determines the overall
trustworthiness of the PKI-based system.
From the users’ viewpoint the most important task of a CA is to ensure a sufficient level of authentication that each applicant must undergo when obtaining a certificate. Currently there are many tools enabling fast and easy establishment of the technical background for a CA. However, establishing a CA is principally an administrative problem, since various operational aspects must be covered to ensure the CA works in a well-defined manner. Each of these aspects plays an important role and must not be neglected in
the design of the CA. The CA operator must also guarantee that it is ready to provide long-term support
on a daily basis, to make sure that every certificate issued will be maintained properly throughout its
lifetime. For example, if a need appears to revoke a particular certificate, the CA must process the request,
revoke the certificate and update its revocation information. These steps must be done immediately
whenever the need arises. Similarly, resource providers and infrastructure operators sometimes ask the CA to provide information about a particular certificate owner, e.g., during resolution of a security incident in which the user is involved, and the CA operator must be ready to hand out the information to them.
A large-scale distributed environment usually comprises many CAs, each operated by a different institution. With the high number of CAs it is hard for the relying parties to decide whether a particular CA is trusted enough and whether it fulfils the requirements of the end-users. A seriously established CA provides documentation, available to the relying parties, describing the procedures governing the operation of the CA. However useful these documents are, they are rarely studied by end-users, who do not have sufficient expertise or time to study them in detail.
A similar situation became evident in the early days of grid systems, since they engaged a lot of CAs, making orientation among them difficult for an ordinary user. To ease the life of relying parties, the
International Grid Trust Federation (IGTF) [3] has been established, which conducts accreditation of CAs
based on policy documents the CAs submit for review. A list of IGTF-accredited CAs is made publicly
available for all relying parties, along with other additional information, such as location of the CA
policies, revocation information contact points, and so on. Having a single repository of trusted CAs is
very convenient for the end-users, since they can easily install all CAs that were accredited without having
to examine the individual CAs. Trust anchors (i.e., the CA root certificates) make it possible for the users to establish mutual communication with each other without having to go through a difficult procedure of establishing a trusted relationship.
The IGTF is composed of representatives of CA managers, identity providers, and large relying parties.
The IGTF maintains several authentication profiles, which specify the requirements on establishment and
operation of a CA. Currently, there are profiles available for classic CAs, CAs issuing short-lived
certificates, and CAs linked to existing systems for user management. The IGTF also defines procedures
for accreditation based on the profiles, which is performed by the IGTF members. Currently there are over 80 IGTF-accredited CAs from all over the world.
3.2 Users’ identification
In the standard PKI world, a certificate owner is identified by the name of the CA that issued the
certificate (i.e. issuer name) and the subject name of the certificate. The IGTF makes the identification
mechanism simpler by introducing a uniform name space for the subject names for all accredited CAs. As
part of the accreditation procedure the applying CA must define one or multiple prefixes that will be used
to build all the subject names of the certificates it issues. During the accreditation phase the IGTF verifies that the prefixes are not allocated to another CA, and the prefixes are reserved once the CA has been accredited. In such an arrangement the subject names issued by different CAs cannot clash.
The specification of prefixes assigned is also available from the IGTF repository as a signing policy
specification. The policy is checked by the relying parties whenever a certificate is being verified. The
checks make it possible for the relying parties to verify that the CA really complies with its obligations.
Using this name space policy it is possible to identify a certificate bearer based on the subject name alone, since the issuing CA is determined by the prefix used. This naming scheme makes life easier for, e.g., administrators maintaining access control policies, since a single line with the subject name is sufficient in configuration files.
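As a pure illustration of the name-space idea (not an implementation of the IGTF signing-policy file format, whose syntax is not parsed here), the following Python sketch checks whether a subject name falls into a prefix reserved for a hypothetical CA; all distinguished names below are made-up examples.

```python
# Illustrative sketch only: checking that a certificate subject name falls
# into the name space (prefix) reserved for its issuing CA, in the spirit of
# the IGTF signing policies described above.  The prefixes are invented
# examples; real deployments read them from the CA's signing policy file.
SIGNING_POLICY = {
    "/DC=org/DC=example-ca": [
        "/DC=org/DC=example-ca/O=users/",
        "/DC=org/DC=example-ca/O=hosts/",
    ],
}

def subject_allowed(issuer_dn: str, subject_dn: str) -> bool:
    """Return True if subject_dn starts with a prefix reserved for issuer_dn."""
    prefixes = SIGNING_POLICY.get(issuer_dn, [])
    return any(subject_dn.startswith(p) for p in prefixes)

print(subject_allowed("/DC=org/DC=example-ca",
                      "/DC=org/DC=example-ca/O=users/CN=Jane Doe"))   # True
print(subject_allowed("/DC=org/DC=example-ca",
                      "/DC=com/DC=other-ca/CN=Attacker"))             # False
```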
3.3 Revocation checks
When verifying a digital certificate it is essential to also consult the CA to check that the certificate has not been revoked. Many environments based on PKI do not pay attention to performing proper checks of revocation information. Several applications do not even support revocation checks, which decreases the security level of the system. For example, the popular Mozilla Firefox browser has a severe bug that prevents automatic CRL updates. Neglecting checks of revocation status may lead to a severe violation of security, since such checks are the only way of detecting that a certificate has been compromised.
There are two basic mechanisms available for revocation checks. The first one utilizes a certificate revocation list (CRL) that is periodically issued by each CA. The second enables on-line checks by directly contacting the issuing CA; for this approach the Online Certificate Status Protocol (OCSP) [4] is widely used.
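For illustration, a hedged sketch of how an OCSP status request might be constructed with the third-party Python `cryptography` package follows; the file names are placeholders, and the HTTP transport to the CA's responder is only indicated in a comment.

```python
# Hedged sketch: constructing an OCSP status request with the Python
# "cryptography" package (an assumption of this example, not a tool named in
# the paper).  File names are hypothetical.
from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.hazmat.primitives import hashes, serialization

with open("user-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("ca-cert.pem", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())

builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
request = builder.build()
der_request = request.public_bytes(serialization.Encoding.DER)

# The DER-encoded request would then be POSTed to the CA's OCSP responder
# (content type "application/ocsp-request") and the response parsed with
# ocsp.load_der_ocsp_response(); transport and error handling are omitted here.
```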
The use of CRLs is simple and supported by many contemporary applications. CRLs are fetched by the relying parties at regular intervals (several times a day), which introduces a delay into the distribution of revocation information. Therefore, when a certificate is being verified, the relying party may not have the most current information available, even though the CA has already published an updated CRL.
If there is a demand for access to the most current information, one has to employ on-line checks using OCSP. However, before selecting the particular mechanism, it is also important to take into consideration the time required by the CA to handle revocation. Usually a CA is expected to process a revocation request within a day. Especially when operating an off-line CA, where the cryptographic material is stored on a disconnected machine, the staff must visit the computer room, perform the revocation, store the CRL on an external device (e.g. a USB stick) and bring it back to their desktop to publish it on the CA web page or another CRL distribution point. These actions are time-intensive and the staff may not perform them immediately. The decision concerning revocation checks should therefore always take this time into account, since if a private key is compromised, the attacker can use it during this whole period. It must be considered whether on-line checks, which only shorten the overall time in which the private key is exposed to an attacker, are really worth doing.
In a large-scale PKI it has turned out that revocation checks carry a non-negligible overhead. Periodic downloading of CRLs can easily amount to a few million requests each day against the server of a single CA, which can also require a large amount of data to be transferred. Large data transfers can cause problems for mobile clients, for whom sufficient network parameters are not available. A similar problem was faced by the operators of the Armenian academic CA, since Armenia had a very weak international connection, which was not sufficient for the repeated large downloads. The solution was to move the CRL
to a server hosted in Europe, where much better connectivity is available.
4 Using a PKI
In the introduction section we described a PKI as a general service provided to the end-users without any
link to existing application providers. In this section we describe approaches that make the PKI more user-friendly. Such adaptations may also lead to a more secure environment, since users do not seek ways to bypass existing security procedures that have been set by the system designers.
4.1 Easy access to certificates
The basic mechanism of receiving a certificate is similar for most CAs. It usually consists of two phases,
with the first one being generation of a key-pair and transferring the public key to the CA. In the second
phase, the CA verifies the applicant, and possibly their possession of the private key. The order of the
phases depends on the particular procedures employed by the CA. The main principle for key
management is sufficient security of the private key, which often means that the private key cannot get out
of the user’s control, not even to the CA. In order to deliver their public key to the CA, the users create a standard certificate request, which is sent to the CA.
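A minimal sketch of this first phase, assuming the third-party Python `cryptography` package (not a tool prescribed by any particular CA), could look as follows; the subject attributes are placeholders.

```python
# Minimal sketch (assumed tooling, not a CA's official client): generating a
# key pair and a standard PKCS#10 certificate request with the Python
# "cryptography" package.  The subject values are placeholders.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COUNTRY_NAME, "CZ"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Institute"),
        x509.NameAttribute(NameOID.COMMON_NAME, "Jane Doe"),
    ]))
    .sign(key, hashes.SHA256())
)

# Only the request leaves the user's machine; the private key stays local.
with open("request.pem", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```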
During the second phase the user must prove their identity as required by the CA policy. Highly trusted
CAs require the applicants to visit a CA contact point in person and present their government ID card. If a CA covers a large area, it usually operates a set of registration authorities located close to the user communities. The RA takes care of the authentication step and communicates the result to the CA, which remains the only entity that can access the signing private key.
Both phases play an important role in the way users perceive the PKI. Convenient tools are necessary to generate the key pair, store the private key safely and create the certificate request. The interaction with the RA has also been identified as crucial, since users often do not understand why they are sent to the RA, which may (along with non-intuitive client tools) lead to a negative attitude of users towards the PKI.
From the point of view of a PKI operator it is therefore more suitable to introduce mechanisms that ease
the process of obtaining certificates. A good choice is an on-line CA, which is available as an ordinary web page and is able to accept certificate requests and issue certificates on demand without any intervention of the CA staff. Compared to a classic CA, an on-line CA is more convenient to run, since the operators do not have to work with a disconnected machine and most operations are performed automatically.
Users also profit from the on-line service, since all the cryptographic operations are done by the browser, based on proper HTML tags in the page. In order to obtain a certificate a user only has to fill out an on-line form, which is acceptable for most users.
On the other hand, on-line CAs must take steps to secure the signing key properly, since the service is
available on-line and therefore exposed to attacks from the network. The IGTF requires that an on-line
CA stores the private key in a hardware device (HSM), which ensures that the key cannot be extracted
even if the CA machine gets hacked. Currently, there are several commercial or open-source applications
suitable for building an on-line CA.
In order to ease the second phase described above, i.e., the identity vetting procedure, it is possible to make use of a mechanism that links local user management systems with the RA agenda. Such an arrangement can
be achieved using the identity federation model.
An identity federation is an infrastructure connecting user management systems from different institutions
to provide standardized access to information about users maintained by their systems. Federations provide
a virtual bus layer to which systems for user management and end applications can connect and share
authentication and authorization data. Every organization participating in a federation manages its users
by a local user management system. An Identity Provider (IdP) service is built on the top of each local user
management system, providing a standardized interface to access authentication information and other
attributes about the users. Any party in the federation can get this information by calling the IdP service
using a standardized protocol. End services (Service Providers, SPs) are able to process the data returned
by the user's home IdP and use them to make access control decisions. Before users are allowed to use a
service, they have to present a set of attributes issued by their home IdP. These attributes are provided to
users or to a service working on their behalf upon proper authentication of the user with the IdP.
An on-line CA can be operated as a standard SP in this model, leveraging the existing authentication methods and additional attributes. CESNET, the Czech NREN operator, is going to provide such a service on top of the Czech academic identity federation eduid.cz [5].
4.2 Single Sign-On
For each large system it is important to provide a single sign-on (SSO) mechanism, which makes the lives of users easier while retaining a sufficient level of security.
The grid environment introduced a special type of public-key certificates - the proxy certificate [6].
A proxy certificate is made by the user herself, with the user's private key acting as a CA signing key. The
proxy certificate model is primarily used for delegation of the user’s identity into the grid world, to support batch job submissions and other operations that the user cannot directly assist with. Grid services use
clients' proxy certificates to be able to contact other services on behalf of the clients. Grid credentials
formed by the proxy certificates and associated private key are usually stored on a filesystem secured by
proper filesystem permissions but without any additional protection by a passphrase. To reduce the
potential damage caused by a stolen proxy credential, they are usually short-lived, with the lifetime set to a couple of hours. Proxy certificates also make it possible to build a Single Sign-On system, where a user creates a proxy certificate only once a day and can then use it to access grid services for the whole day without providing any other authentication data or creating a new proxy certificate.
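The following hedged sketch illustrates the proxy-certificate idea with the Python `cryptography` package (an assumption of this example); it omits the RFC 3820 ProxyCertInfo extension and uses illustrative names and lifetimes only.

```python
# Hedged sketch of the proxy-certificate idea: the user's own key signs a
# short-lived certificate for a freshly generated key pair.  Real RFC 3820
# proxies also carry a ProxyCertInfo extension, which is omitted here.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

def make_proxy(user_cert: x509.Certificate, user_key, hours: int = 12):
    proxy_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    now = datetime.datetime.utcnow()
    # The proxy subject is the user's subject extended by an extra CN component.
    subject = x509.Name(
        list(user_cert.subject) + [x509.NameAttribute(NameOID.COMMON_NAME, "proxy")]
    )
    proxy_cert = (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(user_cert.subject)      # the user acts as the signing "CA"
        .public_key(proxy_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(hours=hours))
        .sign(user_key, hashes.SHA256())     # signed with the user's private key
    )
    return proxy_cert, proxy_key
```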
The second possibility to provide an SSO mechanism in the PKI world is to use standard X.509 certificates with limited lifetime. Such certificates can be used in a similar way to proxy certificates. There are tools that make it possible to integrate the retrieval of short-lived certificates from an on-line CA into the standard desktop logon process. In that way, the use of such certificates can be entirely transparent, which is much more comfortable for ordinary users.
4.3 Private key protection
Deployment of PKI-based methods in large-scale multi-user grid environments reveals drawbacks that are not easily visible in small closed installations. One of the most important factors with a direct influence on the overall security of any PKI-based environment is secure management of private keys. Too many users see their private key as just another file they can freely copy and distribute among machines. The files containing private keys are encrypted with a passphrase, but users often select passphrases too weak to withstand a brute-force or dictionary attack. Also, file system access protection often does not provide a sufficient level of security. Private keys stored in such files can be captured by a malicious
administrator, or sometimes even by ordinary users, and can be further misused to perform an attack or gain unauthorized access to private data. The private key owner may not even notice the key compromise for a very long time. On the other hand, offloading a private key to a disconnected computer makes the private
key unusable. And even keeping it on a personal machine only usually leads to complications during the
authentication process, making life with certificates difficult.
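A small sketch of the passphrase protection mentioned above, using the Python `cryptography` package (an assumption of this example), is shown below; it mitigates, but does not remove, the risks of keeping long-term keys in ordinary files.

```python
# Minimal sketch: storing a private key encrypted with a (hopefully strong)
# passphrase using the Python "cryptography" package.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
passphrase = b"correct horse battery staple"   # placeholder; choose a real one

pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(passphrase),
)
with open("userkey.pem", "wb") as f:
    f.write(pem)
```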
Current efforts to address these private key hygiene issues focus on removal of the long-term private keys
from the user's desktop. One possibility is to use a specialized credential store service – online credential
repositories – which maintains the long-term private keys and only provides access to short-term credentials
(proxy certificates) derived from these long-lived ones. For instance, the MyProxy [7] service is very widely
used for such a purpose. A MyProxy server provides secure storage where users can upload their credentials, assigning them a password that can later be used to download a proxy certificate derived from the credential stored in the repository. MyProxy servers are used in multiple scenarios, ranging from access
to grid portals to support of long-running jobs.
The other option is to use a specialized hardware device which is able to maintain the private key and perform basic operations: a hardware token (or smart card). Such a device ensures that the private key never leaves the token and prevents it from ever being exposed to unauthorized users. Smart card technology makes it possible to implement two-factor authentication, where the user must prove to the end system something she has (i.e. the smart card) and something she knows (i.e. the smart card password that unlocks access to the private keys). These two factors must be presented at the same time. The biggest challenge tied to the use of smart cards lies in user support, mainly if the PKI (and the smart cards) are operated by providers different from the users' home institutes. In this scenario, the users' local support staff have neither the experience nor the mandate to solve problems caused by third-party devices, while the staff at the PKI provider do not know the users' local environment.
5 Acknowledgement
The work has been supported by the research intent “Optical Network of National Research and Its New
Applications” (MSM 6383917201) of the Ministry of Education of the Czech Republic.
6 Conclusions
In this paper we presented several views and experiences gained concerning the design and operation of a large-scale PKI. We especially focused on real-world examples experienced by the grid community in
Europe.
References
[1] Housley, R., Polk, W., Ford, W., and Solo, D. Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile. IETF RFC 3280. April 2002.
[2] ITU-T Recommendation X.509. Information technology - Open Systems Interconnection - The Directory: Public-key and attribute certificate frameworks. http://www.itu.int/rec/T-REC-X.509/e. 2005.
[3] Home page of IGTF. http://www.gridpma.org/
[4] Myers, M., Ankney, R., Malpani, A., Galperin, S., and Adams, C. X.509 Internet Public Key Infrastructure Online Certificate Status Protocol – OCSP. IETF RFC 2560. 1999.
[5] Home page of eduid.cz. http://www.eduid.cz/
[6] Tuecke, S., Welch, V., Engert, D., Pearlman, L., and Thompson, M. Internet X.509 Public Key Infrastructure (PKI) Proxy Certificate Profile. IETF RFC 3820. June 2004.
[7] Basney, J., Humphrey, M., Welch, V. The MyProxy Online Credential Repository. Software: Practice and Experience. 2005.
Securing and Protecting the Domain Name System
Anne-Marie Eklund Löwinder
Quality and Security Manager
[email protected]
The Foundation of Internet Infrastructure (.SE)
PO Box 7399
SE-103 91 Stockholm
SWEDEN
Abstract
I’ve been working with DNS and DNSSEC since 1999, and have been responsible for the deployment of
DNSSEC in the country code top level domain (ccTLD) from the very beginning.
This paper shows that there are various potential and real problems surrounding the DNS. During the
tutorial I will present the results from a threat analysis of the Domain Name System, DNS. I will then go
on to focus on how to design the DNS infrastructure to protect it from different kinds of attacks as well as
explain different techniques available to make the DNS more resilient.
Although it is admirable that the design of the DNS scaled so well with the growth of the Internet, the
original inventors did not take security issues as seriously as is required today.
(Distributed) Denial of Service attacks currently have no effective counter measures except for extreme
over-provisioning and containment. DDoS attacks are not specifically a problem for the DNS, but are
a danger for all core Internet services (as a whole). The counter measures that can be taken are only
effective if they are taken by a group of networks at large; individually there is not much that can be done
against large scale attacks. DDoS mitigation is an active research field and academic solutions have been
developed, however it might take a while until they reach the operational world. The paper recommends
the use of Best Current Practice in DNS as a way forward. With a carefully prepared design of the
infrastructure, a more robust and resilient domain name system will be created. Diversification in every
possible way is also desirable and will prove helpful.
This paper explains that one type of weakness in the DNS, namely data integrity, can be solved by
employing DNSSEC. Transport and availability of the DNS are more difficult to guarantee.
Keywords: Domain Name System, DNS, DNSSEC, .SE, DDoS-attack, name server, data integrity,
infrastructure, ccTLD.
1 The role and importance of the Domain Name System
One of the corner stones of a reliable Internet infrastructure is the Domain Name System (DNS in short).
The DNS is often referred to as “the phone book” of the Internet. It keeps track of the mapping of, for
example, a human readable web site name (like www.iis.se) to the slightly more arcane form of an IP
address that the computer needs to initiate communication (in this case 212.247.7.229). The DNS
consists of a set of servers dedicated to routing users to requested resources on the Internet such as web
sites when browsing the Internet with your favourite web browser, and ensuring that e-mails find their way and arrive at their intended destination.
In short, a stable DNS is vital for most companies to maintain a working and efficient operation. If
a company's external DNS servers become unavailable, no one will be able to reach its site nor will any
e-mail get through, resulting in lost productivity and customer dissatisfaction. Investing in a reliable DNS
infrastructure is therefore a valuable insurance policy against downtime.
The DNS’ main task is to translate names suitable for use by people (such as www.iis.se) into network
addresses suitable for use by computers (e.g. 212.247.7.210). Although the DNS was invented in the early
days of the Internet, more than 25 years ago, the design is such that it manages to be scalable to the size
and the dynamics of the Internet of today.
However, the expansive growth of the Internet was not foreseen and the design did not take into account
the abuse patterns associated with this today.
Name servers exposed to the Internet are subject to a wide variety of attacks:
Figure 1: DNS related threats.
• Attacks against the name server software may allow an intruder to compromise the server and gain control of the host. This often leads to further compromise of the network.
• Distributed denial of service attacks, even one directed at a single DNS server, may affect an entire network by preventing users from translating host names into IP addresses.
• Spoofing attacks that try to induce your name server to cache false resource records could lead unsuspecting users to unwanted sites.
• Information leakage from a seemingly innocent zone transfer could expose internal network topology information that can be used to plan further attacks.
• A name server could even be an unwitting participant in attacks on other sites.
The DNS is still far from secure. There are a lot of known vulnerabilities and there is no doubt that all
users of applications and services on the Internet are very strongly dependent on the domain name system
to work properly. That is the main reason why Sweden has been a leading country and an early adopter of
DNSSEC, a subject on which more will follow.
2 The DNSCheck Tool
DNSCheck is a program developed by .SE. It is designed to help people check, measure and hopefully also
understand the workings of the Domain Name System, DNS. When a domain (also known as a zone) is
submitted to DNSCheck the tool will investigate the domain's general health by traversing the DNS from
the root (.) to the TLD (Top Level Domain, like .SE) to eventually the name server(s) that hold the information about the specified domain (for instance iis.se). Some other sanity checks are also performed, such as measuring host connectivity, checking the validity of IP addresses and verifying DNSSEC signatures.
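For illustration only, the following sketch reproduces a tiny subset of such checks with the third-party `dnspython` package (an assumption of this example, not the implementation used by DNSCheck): it lists the delegated name servers of a zone and asks each of them for the zone's SOA record.

```python
# Illustrative only -- a tiny subset of the kind of sanity checks DNSCheck
# performs, written with the third-party "dnspython" (2.x) package.
import dns.message
import dns.query
import dns.resolver

def check_zone(zone: str):
    ns_names = [str(r.target) for r in dns.resolver.resolve(zone, "NS")]
    for ns in ns_names:
        try:
            addr = str(dns.resolver.resolve(ns, "A")[0])
            query = dns.message.make_query(zone, "SOA")
            answer = dns.query.udp(query, addr, timeout=3)
            serial = answer.answer[0][0].serial if answer.answer else None
            print(f"{ns} ({addr}): SOA serial {serial}")
        except Exception as exc:          # connectivity or lookup failure
            print(f"{ns}: check failed ({exc})")

check_zone("iis.se")
```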
Figure 2: DNSCheck user interface.
3 Health Check on SE
In 2007, .SE started a project to run health checks on a number of domains considered critical for
society. We analyzed the quality and availability of the domain name system in the .se zone as well as a
number of other key functions. During both 2007 and 2008 we primarily investigated DNS quality.
To perform the investigation we used the same software as for the DNSCheck tool mentioned earlier.
Using the software .SE performed automated runs against a predetermined number of .SE domains. In
addition, we carried out certain supplementary investigations into such services as e-mail and web servers,
which are not part of DNSCheck. We took the opportunity to study some important key parameters
concerning e-mail and web services. Additional investigations covered checking which are the
most commonly used pieces of name server software, and whether there are any dominating market players
in Sweden.
Following the slightly discouraging 2007 results on .SE’s health status, we decided to repeat the study in 2008 to ascertain whether it was possible to track the effects of some of the guidelines and recommendations that
were presented to the public in 2007 and whether these could be seen to have led to any measures being
implemented in the investigated operations.
The study confirms that, in general, knowledge of what is required to maintain a high level of quality in
the domain name system is deficient, although the interpretation of what constitutes “high quality” may of
course be discussed. In addition, there is also reason to believe that these knowledge deficiencies also apply
to competencies in operation and operational responsibility.
.SE decided to define the quality level according to recommendations from international practice and from
our own experiences.
In 2008, a total of 671 different domains and 912 unique name servers (or 1,870 name servers if you
count per domain) were investigated.
Of the 671 domains tested, 621 were classified as critically important to society. Of this figure, 28 percent
had serious defects of a kind that should be corrected immediately and 57 percent had defects of a nature
that generated a warning.
Our observations indicate that a series of considerable improvements have been made in several areas
between 2007 and 2008, although much work remains to be done.
Generally, without much effort it should be possible to decrease the share of defective domains below 15 percent just by following best practice. With some investment in education and equipment it should be possible to reach 10 percent. Getting below 10 percent would require efforts that are probably difficult to justify.
4 A robust and resilient design of DNS
Whether a company manages its own DNS infrastructure or is outsourcing the management to another
organization, there are Best Current Practices available to ensure that DNS is designed properly and can
provide robust, uninterrupted service.
While it is important for each network administrator to secure any host connected to the Internet, they
must give name servers even more consideration due to the important role they play. Over the years,
.SE’s engineers and operations teams, based on their experience of managing the .se zone within the
ccTLD domain name registry, have defined the following checklist to ensure maximum reliability, security
and performance of the DNS systems:
1. Establishing multiple name servers to serve zones ensures that the failure of one of your name
servers does not cut zones off from the Internet. In other words, eliminate every single point of
failure. Denial of service attacks targeted at a name server, or an accidental network outage
isolating the name server from the rest of the network can have far reaching consequences. If the
name servers are unreachable, users will be unable to reach other hosts since they will be unable to
translate host names into IP addresses. This affects both internal and public systems depending on
the network architecture. To combat denial of service attacks and prevent accidental service
outages, it is necessary to eliminate single points of failure in the DNS infrastructure. Some
common pitfalls to avoid are placing all of your name servers:
• on a single subnet;
• behind a single router;
• behind a single leased line;
• within a single autonomous system.
2. Distribute name servers geographically. The distribution serves two purposes:
a) A network failure due to an act of nature or other incident affecting that particular region will
not take out other name servers on your network.
b) Locating name servers closer to the communities of users who need them helps those users resolve resource names, such as web site names, more quickly. For instance, a substantial user base in Asia may justify at least one name server that will take care of queries in Asia.
3. In addition to mitigating the risks described above, you should make sure to have multiple
physical paths to your network. This is an important step to verify, even if you have multiple
service providers, since they may lease their physical plant from the same carriers. Without
physical path diversity, your network is more susceptible to physical disruption (commonly
known as “backhoe fade”). Connect name servers to multiple ISP networks, which ensures that the failure of one ISP does not interrupt the DNS service.
4. Provide name servers with fast, high-bandwidth connections to the Internet, which, in
combination with geographic diversity and the use of multiple ISPs, makes the DNS
infrastructure more resilient and resistant to a distributed denial of service attack.
5. Invest in regular and updated training for system administrators, network operations
administrators, engineers and other personnel involved in the management of DNS to minimize
configuration mistakes on complex name server software, such as BIND. Maintain 24x7 technical
support staff, which ensures that DNS expertise is available for unexpected events that could
jeopardize continuity of operations.
6. Provide proper security for name servers, including the latest updates, security patches and bug
fixes for name server software (where BIND for the time being is the most widely used name
server software). As with any piece of software, name server software evolves with each release.
Make sure to keep track of related mailing lists. Virtually all older name servers have widely
known vulnerabilities that can be exploited. Vulnerabilities that appear in one version are usually
fixed in subsequent releases.
While running the latest version of your name server software doesn’t guarantee your server’s
security, it minimizes the possibility of exploitation. When upgrading, be sure to read the release
notes to understand what has been fixed and what has been changed in the software. Upgrading
the name server to a new version may require that changes are made to the configuration in order
to provide the expected performance or to take advantage of new features.
7. A number of different steps can be taken to secure a name server that only has to provide DNS to
a single audience since it can be optimized for a particular function. Therefore, it can be useful to
have separate name servers, each configured to play a specific role. Frequently, different security
policies can be applied to these servers such that even if one server is compromised, the other
server will continue to function normally. Consider creating at least two separate kinds of name servers, each optimized for a particular function (a simple way to probe this separation is sketched after this list):
a) Authoritative name server
An authoritative name server would commonly be used as an external name server that is
authoritative for your DNS zones. It would advertise the DNS information about your zones
to the Internet. Since it should not be queried for information about zones for which it is not authoritative, it should be configured as a non-recursive server. Thus, the server would only
provide resolution for the zones for which it has authoritative information.
b) Resolving name server
A resolving name server would commonly be used to provide name resolution services to
internal clients. It may or may not be configured as being authoritative for the internal zones.
Since it must find the DNS information requested by internal hosts (regardless of whether it
is authoritative for that information or not), it should be configured as a recursive server.
However, it should only answer queries from trusted sources (internal hosts), not from the
entire Internet.
By adopting this strategy, the external authoritative name server is configured to provide little
service other than answering queries for which it is authoritative. The internal resolving name
server must provide recursion, and is therefore somewhat more susceptible to attacks since it
must accept data from other DNS servers. Additional protection of the resolving name server
can be provided through other means such as packet filtering and restricting the server to
respond only to known hosts.
In this manner, if the resolving server were to be compromised or its cache poisoned, the
advertising server’s authoritative zone information would be unaffected, thus limiting the
potential damage. Similarly, if the resolving name servers are also configured to be
authoritative for internal zones, a compromise of the advertising name server(s) would not
affect the normal operation of the internal clients of the resolving name server(s).
8. Institute a change management process which allows you to ensure that new name server
configurations and zone data are tested before they are put into production.
9. Filter traffic to your name server. Many organizations run their name servers on dedicated hosts. If
the hosts that run the name servers don’t provide any other services, there is no need for them to
respond to non-DNS traffic. As such, all unnecessary traffic should be filtered out, thus reducing
the possibility of the name server being compromised by vulnerabilities in some other piece of
software. (In addition to the filtering of unused services, it is good security practice to disable or
remove any unnecessary software from the name server.)
10. Monitor name servers as tightly as hardware, operating systems or applications, including monitoring availability and responsiveness. Create business continuity and disaster recovery plans for DNS, including augmenting the DNS infrastructure with additional DNS servers in additional locations. Put your plans into practice in order to train your staff.
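As referenced in item 7 above, the following hedged sketch (assuming the third-party `dnspython` package; the server address and zone names are placeholders) probes whether a given server behaves as an authoritative-only server or as an open recursive resolver.

```python
# Hedged sketch (assumes "dnspython"): probing whether a name server behaves
# as an authoritative-only server or as an open recursive resolver, in line
# with the separation recommended in item 7.  Address and names are placeholders.
import dns.flags
import dns.message
import dns.query

def probe(server_ip: str, own_zone: str, foreign_name: str = "www.example.com"):
    # An authoritative-only server should answer its own zone with the AA flag set...
    own = dns.query.udp(dns.message.make_query(own_zone, "SOA"), server_ip, timeout=3)
    print("authoritative for own zone:", bool(own.flags & dns.flags.AA))
    # ...and should NOT offer recursion for unrelated names (RA flag clear).
    other = dns.query.udp(dns.message.make_query(foreign_name, "A"), server_ip, timeout=3)
    print("recursion offered:", bool(other.flags & dns.flags.RA))

probe("192.0.2.53", "example.org")
```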
Finally, I would like to stress the importance of having well-maintained contacts with peer networks, local and global communities and incident handling organizations. Make sure to have business continuity and disaster recovery plans that also cover the DNS.
5 New threats to the Domain Name System
One of the biggest threats was discovered during 2008 by the researcher Dan Kaminsky. He discovered
how already known flaws in the Internet’s Domain Name System, DNS, can be exploited for an attack more easily than before. Through what became known as the Kaminsky bug, an attacker can, by simple
means, trick Internet users by temporarily taking over a domain name and redirecting queries to another
server. In practice, the method can simplify so-called ”phishing attacks”, where users believe that they are
communicating with, for instance, their online Internet bank, but are actually being tricked into sending
sensitive information such as account numbers and passwords to the attacker's server. To the user it still
looks like the bank’s site.
6 DNS Security Extensions - DNSSEC
In the beginning of the 1990s, when the web started to appear, it was discovered that DNS suffered from
several vulnerabilities. At that time, work was begun to develop a more secure version of DNS, but it was
not until 1999 that the protocol standard was in such a condition that it was possible to implement and
use in tests. The result is DNS Security Extensions, or in short DNSSEC.
In short, DNSSEC is an internationally standardized extension to the technique for name resolving, DNS,
to make it more secure. DNSSEC mainly supports more secure resolving, minimizing the risks of manipulation of information and cache poisoning. The basic mechanisms of DNSSEC are the use of public key cryptography and digital signatures.
You may say that DNSSEC is the Internet’s answer to DNS Identity Theft. It protects users from and
makes systems detect DNS attacks. Almost everything in DNSSEC is digitally signed, which allows
authentication of the origin of the DNS data and ensures the integrity of the DNS data. Digitally signed
means that DNSSEC uses Public Key Cryptography, with a secret private key and an open public key used together. DNS records are signed using the private key – the public key is needed to verify the signature, so you will know who sent the data. If the data is modified, mangled, or otherwise compromised en route, the signature is no longer valid and you will be able to discover it.
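As an illustration of this validation step, the following sketch (assuming the third-party `dnspython` package; the resolver address is a placeholder, and the chain of trust up to the root is not followed) verifies the signature over a zone's DNSKEY RRset.

```python
# Illustrative sketch (assumes "dnspython"): cryptographically validating the
# RRSIG over a zone's DNSKEY RRset, the elementary building block of DNSSEC
# validation.  Error handling and the full chain of trust are omitted.
import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdatatype

zone = dns.name.from_text("iis.se")
request = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
# TCP is used because DNSKEY answers with signatures are often too large for plain UDP;
# the resolver address below is only a placeholder for any DNSSEC-aware resolver.
response = dns.query.tcp(request, "8.8.8.8", timeout=5)

dnskey = rrsig = None
for rrset in response.answer:
    if rrset.rdtype == dns.rdatatype.DNSKEY:
        dnskey = rrset
    elif rrset.rdtype == dns.rdatatype.RRSIG and rrset.covers == dns.rdatatype.DNSKEY:
        rrsig = rrset

# validate() raises dns.dnssec.ValidationFailure if the signature does not verify.
dns.dnssec.validate(dnskey, rrsig, {zone: dnskey})
print("DNSKEY RRset of", zone, "carries a valid RRSIG")
```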
DNSSEC protects from different types of tampering with DNS queries and answers during the
communication between servers within DNS. The main function is lookups from domain names to IP addresses with signed DNS answers. An extended use of DNSSEC could also act as an enabler of other security measures, for instance using DNS as a container to distribute other security-related keys or parameters.
As the world’s very first Top Level Domain (TLD), .SE (Sweden) offered DNSSEC as a service. We started by running a project in 1999 that led to the signing of the .se zone in September 2005.
Figure 3: The .SE DNSSEC symbol.
In February 2007 DNSSEC was launched as an additional service to domain holders through services
from some of .SE’s Registrars. The aim is that .SE’s DNS service should be not only highly robust and available, but also trustworthy. .SE’s vision for 2011 is that DNSSEC shall be a natural part of DNS, used
by all important .SE domains and supported by several applications.
To provide DNSSEC as a service, several issues have to be considered. Many of them are the same regardless of whether it is a TLD that provides the service or a small DNS Name Service Provider just hosting DNS
for a few domains.
Systems, policies and routines for key management and signing of the DNS data have to be developed.
When .SE developed its service, the main goal was to keep the high availability of its ordinary DNS
services and at the same time get a highly secure new DNSSEC service. Since no suitable software was
available for key management and zone signing, .SE was forced to develop its own system.
Another challenge for .SE, as a pioneer, has been to get the market for DNSSEC started. Back in 2006 .SE
made a market survey among the .SE Registrants and found a very positive attitude towards the use of
DNSSEC. This attitude has been confirmed in the ongoing contacts and discussions with Registrants.
This means we had a TLD providing DNSSEC and customers requesting it; however, that still isn’t enough. Each Registrant also needs a DNS Name Service Provider. This is because DNS as well as DNSSEC is administered in a distributed way. The task of a TLD is to provide the addresses of the Registrants’ DNS Name Servers. It is not the TLD’s responsibility to handle the Registrants’ DNS data.
The DNS Name Service Provider is the party who actually handles the Registrants’ DNS and DNSSEC data. Today most Registrants do not run their own DNS name server; these are often run by an external DNS Name Service Provider, which could be a Registrar, a web hosting company, an outsourcing partner, etc. Only a few of them actually offer DNSSEC services today, which is an obstacle to the deployment of DNSSEC.
DNSSEC is probably not the solution to any of the top-priority security issues on the Internet, like malicious
code and malware such as trojans and worms, distributed through phishing and spam. However, it is an
interesting new layer of infrastructure which provides a solution in the long run. DNSSEC also increases
the possibility of supporting different defence methods, and, like all new infrastructure, its value increases with the number of users.
The real value of DNSSEC is obtained when the Internet users actually validate the answers from the
DNS look-ups to ensure that they originate from the right source. This can be handled in different ways.
One common wish is that the validation should be performed by the end users’ application and that the
end user should be informed of the result, similar to the small padlock icon that is shown in the web browser when a secure SSL session is established. There are applications that perform
DNS look-ups together with DNSSEC validation, but DNSSEC is not yet supported by our most
common applications.
The validation can also be performed by the user’s local DNS resolver. For the ordinary broadband customer this resolver is typically provided by the user’s ISP. For the Swedish DNSSEC project it has been really encouraging that the major Swedish ISPs have turned on DNSSEC validation in their resolvers and are actually doing the validation for their customers. This is a good start, while waiting for DNSSEC support
in the users’ applications.
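A small hedged example of how a client can observe such resolver-side validation (again assuming the third-party `dnspython` package; the resolver address is a placeholder) is to check the AD ("authenticated data") flag in the response.

```python
# Hedged example (assumes "dnspython"): asking a resolver for a DNSSEC-validated
# answer and checking the AD flag, which is how validation performed by the
# ISP's resolver becomes visible to a client.  The resolver IP is a placeholder.
import dns.flags
import dns.message
import dns.query

query = dns.message.make_query("www.iis.se", "A", want_dnssec=True)
response = dns.query.udp(query, "127.0.0.1", timeout=3)   # a validating resolver

if response.flags & dns.flags.AD:
    print("answer was DNSSEC-validated by the resolver")
else:
    print("no validation performed (or not possible for this name)")
```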
.SE will continue the work to get DNSSEC to become a natural part of DNS used by all important .SE
domains and supported by useful applications. Thus, we will continue to develop the market, systems and
applications. The market development will comprise activities to stimulate our registrars to offer
DNSSEC services and to promote the use of DNSSEC tools among DNS Name Service Providers.
Together with other TLDs, we will also continue our system development, to make the key management
and zone signing easier and more effective. Finally, we shall also promote applications that have great
benefit from using DNSSEC.
Due to the complexity of DNSSEC, the DNS Name Service Providers require easy to use and reliable
administrative tools. For the deployment of DNSSEC, a good supply of commercial and open source tools
is crucial. There are already some available and .SE has good experience with a number of them, but more
scalable and better tools are still needed. In order to get more tools available we run the project
OpenDNSSEC.
A major obstacle to the widespread adoption of DNSSEC is the complexity of implementing it. There is
no package that one can merely install on a system, click the "start" button, and have DNSSEC running.
On the contrary, there are a variety of tools, none of which on their own is a complete solution, and
different aspects of DNSSEC management such as key management and use of hardware assistance have
not been adequately addressed. The aim of the OpenDNSSEC project is to produce software that will
provide a comprehensive package. Installed on a system with a minimum amount of operator input, it will
be able to perform the following:
• Handle the signing of DNS records, including the regular re-signing needed during normal operations.
• Handle the generation and management of keys, including key rollover.
• Seamlessly integrate with external cryptographic hardware, HSM (including USB tokens and smart cards).
• Seamlessly integrate into an existing deployment scenario, without the necessity to overhaul the entire existing infrastructure.
As one of the very early adopters of DNSSEC, .SE is still concerned about the slow progress of the
DNSSEC deployment efforts. We believe that the successful deployment of DNSSEC is crucial for the
continued stability and security of the Internet. As this is contingent upon a signed DNS root zone, we
also have urged ICANN to speed up and improve its efforts to rapidly migrate to a signed root zone.
Bearing in mind the vulnerability in DNS detected by Kaminsky, we believe that the Internet now has
reached a point where DNSSEC offers a solution in the long run. The absence of a signed root zone is no longer merely unfortunate. Rather, the absence of a signed root zone directly contributes to the
development of inferior alternatives, thereby confusing the community and jeopardising the long term
success of DNSSEC deployment.
It is not possible to tell when all the name servers on the Internet will be prepared for DNSSEC, but due
to .SE’s choice to be an early adopter with the strategy of being at the forefront of future development, .SE will make it easier for everyone to have DNSSEC-secured zones.
References
[1] IETF RFC 1101, DNS encoding of network names and other types. P.V. Mockapetris. April 1989.
[2] DNS Threat Analysis, Mark Santcroos, Olaf M. Kolkman, NLnet Labs, 2006-SE-01 version 1.0, May 2007. http://www.nlnetlabs.nl/downloads/publications/se-consult.pdf
[3] Reachability on the Internet - Health Status of .SE 2008. http://www.iis.se/docs/reachability_on_the_internet_-_2008.pdf
[4] Tool: http://dnscheck.iis.se/
[5] Information site: http://thekaminskybug.se/
[6] http://www.iis.se/domains/sednssec
[7] IETF RFC 4033, DNS Security Introduction and Requirements, R. Arends, Telematica Instituut, R. Austein, ISC, M. Larson, Verisign, D. Massey, Colorado State University, S. Rose, NIST, March 2005.
[8] Why deploy DNSSEC? Several authors, 2008. http://www.dnssec.net/why-deploy-dnssec
A system to assure authentication and transaction security
Lorenz Müller
[email protected]
AXSionics AG
Neumarktstrasse 27
Biel, Switzerland
Abstract
The security of transactions over the Internet is jeopardized by new kinds of sophisticated criminal attacks mainly focusing on the local computer system of the end user (man-in-the-middle, man-in-the-browser, malicious software, fraudulent spam, etc.). Today’s authentication and transaction assurance methods are no longer sufficient to protect against identity theft and manipulated communication protocols. Attackers are able to modify the content of messages before they are protected by the underlying Internet security mechanisms like SSL/TLS. Today a user can never be sure that the information he or she sees on his or her own computer screen is indeed the information sent to or received from the business partner. To achieve a secure communication channel between two business partners, mutual authentication of the principals, authenticity, secrecy, integrity and freshness of the exchanged messages and, to some extent, provability of the content are necessary. At least for crucial transactions, E-commerce platforms should provide such
secure communication opportunities. A secure channel can only be established using fully trusted
infrastructure at both endpoints. The AXSionics-Authentication System (AXS-AS) is a realization of such a
trusted platform based on infrastructure that allows partners to exchange secure confirmation messages
over open networks and unprotected local computers. In contrary to the so called Trusted Computing
approach the AXS-AS does not try to secure the whole local computer but establishes a secure channel
through this computer into a trusted dedicated device, the AXSionics Internet Passport. The Internet
Passport is a personal token with the form factor of a thick credit card and acts like a digital notary
between the E-business operator and the client. It allows the secure authentication of both partners and the
confirmation of transaction data for both sides.
Keywords: Transaction security, authentication, trusted infrastructure, identity protection.
1 Introduction
Despite the recent surge in online criminality, almost all industries have developed business models which rely on so-called E-business processes that use the communication facilities of the Internet. In addition, many new applications and business models have appeared that exclusively use online interactions for the communication between users and operators over the Internet. Services and platforms like Google, Facebook, Wikipedia and many others have a big impact on the social and economic life of a growing part of the world population. The growth rates of companies in this new economic sector are impressive. The corresponding indicators show that online retail sales in 2006 (B2C) reached a value of $148 billion in North America and $102 billion in Europe, with an expected CAGR (Compound Annual Growth Rate) of 20 % or more [1, 2]. The already much larger ICT-based business-to-business (B2B) market (projected $2.3 trillion in the US and €1.6 trillion in Europe in 2006) still continues to rise substantially above the total market growth (CAGR of 16.9 % in the US [1] and 19 % in Europe). New IT technology rapidly spreads around the world and the share of the world population with high-speed Internet access is expected to reach 10 % by 2010 [1]. More and more people are willing to pay for digital content like music, video, gaming or other information. The distribution infrastructures for voice, TV and data are merging into a common digital platform for all kinds of information and communication access that is available at fixed and mobile data terminals. The E-society organises itself in interest circles and communities, and users generate new content that represents a multi-billion-dollar market. All these trends show that cyberspace already hosts a substantial fraction of the total worldwide commercial exchange and has a growing impact on everybody's social and economic life.
This evolution not only attracts more consumers, users and providers but also the rapidly growing community of cyber criminals. In the meantime, security concerns have become the most important inhibitors of the usage of the Internet for business activities. The fraud rate in Internet transactions is more than an order of magnitude higher than in conventional business transactions [3]. The preferred target for criminals is the market of online finance products and services. But this sector in particular is sensitive to a loss of trust in the security of online transactions. Therefore the growth rates in the E-finance market clearly underperform relative to less security-sensitive sectors [4]. Gartner estimates that security concerns have kept over 30 % of the potential customers away from E-banking and higher-value E-commerce transactions in the US. The same fears will be an important obstacle to the further development of new services like E-health care and E-government if the security breaches for online transactions cannot be closed.
2 Threats to the digital society
The threats to the digital society divide roughly into three categories:
• Threats originating from criminals who intend to harm the welfare of others to gain financial, competitive or political advantages.
• Threats originating from reckless promoters of their online offerings using unwanted marketing means.
• Threats originating from software or product pirates that offer their fraudulent copies or illegal material.
All these threats target the digital economy and, at the same time, use the technological means of the digital economy. Typical methods to execute an attack within one of the above threat categories include the application of malware that is introduced to the user's Internet access device, the operation of malicious Web sites or the infection of innocent Web sites or servers with malicious code, the pushing of unwanted information content to the user (spam, adware etc.) and the distribution of illegal copies of digital content or values [5].
2.1 What is new in online criminality
In fact cyberspace is not so different from the real world. Behind the entire complex infrastructure are human beings with more or less the same ethical behaviour as in the physical world. They interact with each other, do business and form social networks, or even build new common living spaces. In the end we find similar threats in the cyberworld and in the physical world. Criminals still intend to rob or cheat others with the goal of enriching themselves. Virtual violence can often end in real physical violence or sexual abuse. So the basic threats are not new in the cyberworld and, as in all social communities, a certain percentage of the population adopts criminal behaviour, around 3-4 % on average [6]. There are, however, important differences that change the balance between threats and risks in cyberspace [7].
2.1.1 Automation
Automation enabled by digital technology overcomes the single-unit production limits of traditional attacks. A classical bank robbery needs a very high success rate to be profitable for the bank robber. He has neither the time nor the resources to rob thousands of banks with the same method. This is different for a cyberspace attacker. He can simultaneously attack thousands or millions of victims and needs only a tiny success rate to make the attack profitable. Typical phishing attacks have at best a few percent of responding victim candidates, but the absolute number of victims may be in the thousands or even more [8].
2.1.2 Remote action
A second difference comes from the very nature of cyberspace. A personal, and therefore risky, presence at the site of the crime is no longer necessary. A cyberspace criminal may easily rob a bank in Norway from his home computer somewhere in Korea or elsewhere. The risk of getting caught on the spot is quite high for a classical bank robber. For a cyberspace criminal, however, the risk of being identified and caught is at the per mille level [9].
2.1.3 Access to crime-supporting technology
A further difference comes from the openness of the Internet. It leads to a rapid spread of new knowledge about security breaches and to a wide distribution of instruments to take advantage of such security holes. This is a further advantage for the cyber criminal over the classical criminal. Digital 'arms' are easy to procure and there is no real dissemination control. The attacker community has undergone a fundamental change in recent years. Highly skilled professionals produce the attacking tools and offer them on the Internet for moderate fees [10]. Organized crime, but also so-called script kiddies, can use such tools to perform efficient attacks on persons and institutions. Such technology can also support the cheating of thousands of individual customers, which may jeopardize even big companies or whole industries. The cracking of pay-TV decoders or the distribution of illegal software copies are typical examples of such low-level fraud at large scale.
2.2 The criminal scaling threat
The most worrying fact, however, comes from the rapid growth of access to and usage of the Internet in combination with the still inefficient detection and repression of digital crimes, which incites a large fraction of potential crooks to try their chance. Combining the above-cited typical rate [6] of a few percent of individuals inclined to criminality with the steeply growing access to the Internet (Fig. 1) shows that the Internet will become more and more a dangerous place for doing business. Each additional attacker is a threat to the whole Internet community and the probability of becoming a victim approaches unity [11]. New technologies and concepts like mobile, ubiquitous and pervasive computing will additionally open wide fields for new kinds of threats and attacks [12].
Figure 1: Growth of the number of Internet hosts is still unbroken.
The growing number of highly skilled persons who offer their services and knowledge for setting up online fraud and creating attack tools will not be matched by the defending industry. Technical security measures become more and more complex, cumbersome and annoying for users if they are applied ubiquitously to the whole communication infrastructure. Users will not accept such impediments to Internet usage and will circumvent the security measures or abstain from secured systems [13]. The arms race with increasingly sophisticated technologies trying to turn the Internet into a safe place will probably not end in favour of the security industry. New strategies against fraud in cyberspace are necessary.
3 A new approach to transaction security
One error of estimation is the assumption that we need a secure Internet. In reality, for most of our typical activities in cyberspace we do not need security and thus do not care about it. When browsing news sites or chatting within an anonymous community we are well aware that things may not be as they are presented, and we do not bother about that. A secure communication between the transaction partners is only necessary and expected when personal or material values are involved. It therefore makes sense to concentrate security measures on those situations where the principals in a transaction process are keen to participate in more demanding assured communication protocols. For this we need a clear view of the critical steps in a transaction situation between a customer and a provider of a value service.
We may represent such an E-business transaction schematically by a triangular relationship. On one side we have the physical person who intends to take advantage of the provider's offer. To get in contact with the provider, the person needs a certain digital identity, which allows the operator to define and maintain the business relation with the person. The operator administers the access rights to his value services and allocates these rights through the allocated identity to the authorized user or customer. A person intending to use the operator's service must present some identity credentials to show that she really has the claimed identity with the corresponding rights (e.g. possession of a personal credit card). In the return step the operator then grants access or allowance for the value service to the person after having checked the presented identity credentials and the communicated transaction instructions.
3.1 Structure of an E-commerce transaction
The mandatory processing steps to conclude an E-commerce transaction as just described are:
• Authentication, which binds a physical person to the allocated identity in the identity management system of the operator,
• Authorization, which defines the transaction content based on the rights allocated to the digital identity and the transaction instructions from the user, and
• Allowance, which includes the delivery of and access to the rights that have been negotiated in the previous steps.
These steps may differ in their order but in some form they are always present in an online transaction.
Their relation can be represented in the triangular diagram of Fig. 2.
[Figure 2 shows a triangular diagram connecting Person, Identity and Access Rights through the steps Authentication, Authorisation and Allowance, with the associated threats of identity theft, transaction manipulation, and DoS, hijacking and repudiation.]
Figure 2: Authentication, Authorization, Allowance – the principal steps in an E-commerce transaction and related threats.
Each of the three processing steps is endangered by specific threats: identity theft compromises the authentication step, transaction manipulation jeopardizes the authorization step, and any kind of threat to the use or delivery of the rights endangers the exercise of the rights by the authorized person. To make E-transactions more secure it is necessary to focus the security measures on these interaction processes. The AXS-Authentication System (AXS-AS) is an example of such a focused security concept. The AXS-AS delivers dedicated solutions for the mutual authentication of business partners and methods to verify the authenticity of a transaction request even in a hostile environment. The AXS-AS does not try to make the Internet a secure system, but it delivers dedicated means to secure the authentication and the authorization steps of a transaction.
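To make the three-step model concrete, the following minimal Python sketch expresses the ordered steps Authentication, Authorization and Allowance together with the threat classes that Fig. 2 attaches to each step. The class and function names are illustrative stand-ins for this summary, not part of the AXS-AS.

# Minimal illustration of the Authentication / Authorization / Allowance model
# and the threat classes attached to each step; names are illustrative only.
from enum import Enum, auto

class Step(Enum):
    AUTHENTICATION = auto()   # bind the physical person to the digital identity
    AUTHORIZATION  = auto()   # define the transaction content from rights + instructions
    ALLOWANCE      = auto()   # deliver / grant access to the negotiated rights

THREATS = {
    Step.AUTHENTICATION: ["identity theft"],
    Step.AUTHORIZATION:  ["transaction manipulation"],
    Step.ALLOWANCE:      ["denial of service", "hijacking", "repudiation"],
}

def run_transaction(user, operator, instructions):
    """A transaction only succeeds if every step succeeds, in order."""
    if not operator.authenticate(user):                 # Step 1
        raise PermissionError("authentication failed")
    grant = operator.authorize(user, instructions)      # Step 2
    if grant is None:
        raise PermissionError("authorization failed")
    return operator.allow(user, grant)                  # Step 3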
3.2 Attacks on E-commerce transactions
For a better understanding of the threats to online business transactions we need a closer look at the different weak spots of the end-to-end communication channel between a user and an operator and the ways the transaction can be attacked. With 'user' and 'operator' we use the typical notation for an E-commerce transaction, but the same discussion holds for other forms of E-business relations, e.g. for partners in internal business processes or in enterprise communication. The schematic model of Fig. 3 shows some of the weak spots in the communication channel. The scheme is strongly simplified. In reality the communication channel is a complex technical system that bears many additional known and still unknown exploits and points of attack. The attack surface is huge and the rapid technological evolution generates new weak spots all the time.
Figure 3: Potential attack vectors on a user-operator communication channel
in an E-business transaction scenario.
3.2.1 Attacks against the operator
A typical communication channel has at the operator side a secure zone where the value services, the access rights and the user identities are administered. Such secured zones have in general a high level of logical and physical protection against intruders or malware. Their security is often assessed and defined through some sort of certification procedure. The entry threshold for an attacker into the heart of such an IT infrastructure is rather high, but an intrusion can never be completely excluded.
The Web, mail and DNS servers are directly accessible from the outside and thus often a target for attackers, e.g. with the aim of installing malware that infects visiting browsers. The main threats to the Web infrastructure come from attacks against the availability of the service. In so-called DoS (Denial of Service) or even DDoS (Distributed Denial of Service) attacks the attacker sends a huge number of phoney service requests to the Web portal of an organisation. The resulting overload of the Web infrastructure may lead to a breakdown of the service.
3.2.2 Attacks against the connecting channel
Another potential weak point is the infrastructure of the Internet itself. Any gateway, router or data bridge can be used to eavesdrop on or to manipulate the passing data traffic. If the communication between the web server and the Internet access device of the user is secured by an SSL/TLS connection, this kind of attack is of marginal importance. But there is the possibility that an attacker can manipulate the configuration of the local infrastructure to accept a self-created SSL certificate and to connect to his web server whenever a specific IP address is called. In this case the attacker becomes an impostor who redirects the whole traffic to run through his machine and acts as a so-called Man-in-the-Middle.
3.2.3 Attacks against the user Internet access device
The most vulnerable zone in the channel is the user's Internet access device and the local infrastructure of the user. Personal computers are often insufficiently protected against new types of malware. In addition, local users are subject to social engineering attacks. The attacker tricks the user into allowing or even supporting the installation of malware at different levels of the local operating system, the Internet applications or even in hardware device drivers. Identity theft, MITM (Man-in-the-Middle, Man-in-the-Browser) and session hijacking attacks most often happen at this level. There is a last but very important weak point in the end-to-end communication, which is the authentication of the user itself. A simple UserID/password authentication is highly insecure. New 2- or 3-factor authentication should be in place to protect from impostor attacks.
3.2.4 Insider attack
An insidious threat is the so-called insider attack. A legitimate user running an E-commerce process may later deny that he did so. This kind of cheating undermines the trust in E-business relations and is very harmful to their proper functioning. It only works if the user can create reasonable doubt that he was not the author of a certain transaction. If the transactions are secured through provable and auditable mechanisms, such attacks become ineffective.
3.3 Securing E-commerce transactions
Security for an E-commerce transaction means that all the threats have to be disabled down to a certain level of remaining and acceptable risk and that a set of security requirements has to be fulfilled [14]. The comprehensive list of such security requirements is:
• mutual trustworthy authentication of the user and operator,
• protection of the private data of both involved parties,
• conservation of anonymity as far as possible, depending on the character of the transaction,
• integrity and freshness of the transaction data,
• confidentiality of the transaction data,
• non-repudiation.
An IT infrastructure that fulfils these requirements follows certain common design principles like minimal attack surface, defence in depth, compartmental separation of independent processes and communication channels and, last but not least, trusted and verified infrastructure anchored in tamper-resistant hardware. To these security requirements we have to add usability requirements like ubiquitous availability, easy and ergonomic handling for the user and low operational cost.
3.3.1 The trusted computing platform approach
Any really secure digital system has to build its protection scheme on a tamper-resistant hardware platform. A realization of such a secure hardware anchor is the so-called Trusted Computing (TC) technology which is propagated by the Trusted Computing Group [15]. High-end personal computers today have a so-called TPM (Trusted Platform Module) chip installed that can be used as a secure hardware anchor. In a typical setup the secured system checks its status during the bootstrap and ramp-up process using verification strings stored in the TPM. Thus the trusted computing concept allows a control of the hardware and the software running in a system. Although this technology seems to be efficient against most malware attacks, it has some fundamental problems that are directly linked with the still uncontrolled attack surface of a typical personal computer. An attacker could install his own software and firmware excluding the TPM chip from the boot process and just communicating to the user that the system is still trusted. To overcome such an attack, TC systems must be controlled by an external trusted server, which then leads to a level of supervision that is strongly disputed by privacy-defending organizations. Another drawback of this technology is its limited flexibility for configuration changes that are approved by the authorized user. For real-world applications it turns out that a PC system is often too complex to be controlled efficiently over time with such an approach. TC is therefore not (yet) adapted to the home computing domain.
3.3.2 The AXS-Authentication System approach
The AXS Authentication System™ (AXS-AS) uses a different approach that combines the advantages of the TC technology with the requirements for easily accessible and privacy-respecting computing for everybody. In fact, secure transactions are needed only in a small fraction of our Internet activities. Only for these cases do we need a secure end-to-end channel between the user and the service operator with fully trusted infrastructure. The idea is to bridge the most insecure sections of the communication channel and install a direct uninterrupted link from the secure zone of the operator directly to a dedicated secure data terminal of the user. In the AXS-AS such a channel is realized by a dedicated simple device which is linked to, but not integrated in, the insecure local computing system and which has practically no attack surface. This personal trusted device, called the Internet Passport™, connects to any computer through an optical interface. No local installation is necessary to establish a secure communication. The Internet Passport acts like a notary between user and operator and assures secure authentication of both parties and transaction authorization even over completely corrupted systems and tampered Internet connections. A remote attacker has no possibility to enter the strongly protected private end-to-end channel. The fact that the user needs an additional device for the security operation is even beneficial. It underlines the special situation of a secure and trusted transaction and makes the user more alert and vigilant.
4 Protocol and architecture of the AXS-AS
The AXS-AS is designed to present a minimal attack surface to external attackers. Communication channels are established on the fly and new session keys are used for every transmitted message. The security of the mutual authentication and of the transaction verification relies on different cryptographic secrets, so the compartmentalization and defence-in-depth principles are realized. The communication protocols are defined in such a way that all the above security requirements are fulfilled.
Figure 4: The AXS-AS establishes a secure communication channel between the operator and the user. The Internet Passport acts like a notary, authenticating the user to the operator and, vice versa, displaying the authentic trade name of the operator to the user.
4.1 Basic Application Cycle – the AXS-AS security protocol
The legitimate owner of an Internet Passport can use his card for access to all services of Providers that accept the AXS-AS authentication and transaction hedging and that are attached to an AXS Security Manager server (AXS-SM). At the first usage of an Internet Passport at a Provider's service site, an initialisation step runs in the background. The called AXS-SM server allocates and initialises a dedicated secure channel for the connection between the Internet Passport and the business application (BA) of the concerned Provider.
After this initial registration step, a user authentication or a transaction hedging process runs with the following simple protocol, called the Basic Application Cycle (BAC) protocol (a schematic sketch of the message flow follows the list):
• The BA decides in a business relation process to run an authentication or a transaction hedging and asks the user to provide his Internet Passport number (PPN). The user enters the PPN.
• The BA calls the AXS-SM server with the corresponding request for the specific PPN through a Web Service call (SOAP message).
• The AXS-SM server generates a secured message for the channel within the Internet Passport that was allocated to the Provider and returns the bit-string to its BA.
• The Flickering Generator in the Web server of the BA transforms the message into a flickering code and displays it in the browser of the user.
• The user reads the flicker code with his Internet Passport, holding the optical sensors of the card directly on the screen of his local Internet access device.
• The Internet Passport decrypts and interprets the message and asks the user to authenticate.
• The user authenticates himself towards the Internet Passport with a 2-factor or 3-factor authentication protocol including fingerprint recognition.
• Upon successful authentication the Internet Passport shows the user the hedge message with a one-time codebook code.
• The user selects the appropriate answer and returns the corresponding response code to the BA of the Provider, which forwards it by a subsequent WS call to the AXS-SM server.
• The AXS-SM server verifies the response and returns a response ticket to the BA of the Provider, which completes the BAC protocol.
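The sketch below traces one BAC run in Python to make the message flow easier to follow. All class and method names (BusinessApplication, SecurityManager, InternetPassport and the like) are illustrative stand-ins chosen for this summary, not the actual AXS-AS API.

# Illustrative trace of one Basic Application Cycle (BAC); every name here is a
# hypothetical stand-in, not the real AXS-AS interface.
class BACDemo:
    def __init__(self, ba, axs_sm, passport, user):
        self.ba, self.axs_sm, self.passport, self.user = ba, axs_sm, passport, user

    def run(self, transaction):
        ppn = self.user.enter_ppn()                          # 1. user supplies the Passport number
        request = self.ba.build_request(ppn, transaction)    # 2. BA -> AXS-SM (Web Service / SOAP)
        secured_msg = self.axs_sm.encrypt_for_channel(ppn, request)
        flicker = self.ba.flickering_generator(secured_msg)  # 3.-4. bit-string rendered as flicker code
        self.user.show_in_browser(flicker)
        msg = self.passport.read_optical(flicker)            # 5.-6. card reads and decrypts the code
        if not self.passport.verify_owner(self.user):        # 7. 2F/3F check incl. fingerprint
            raise PermissionError("owner verification failed")
        response_code = self.user.choose_answer(             # 8. hedge message + one-time codes
            self.passport.display_hedge_message(msg))
        ticket = self.axs_sm.verify_response(ppn, response_code)  # 9.-10. BA forwards, AXS-SM verifies
        return self.ba.complete(ticket)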
Figure 5: The sequence diagram of the BAC protocol.
4.2 Architecture of the AXS-AS
The Provider of a value service that wants to use the AXS-AS registers his Business Application (BA) with an Operator that runs an AXSionics Security Manager server (AXS-SM) and installs the Flickering Code Generator (FCG) in his Web server. He adapts the workflow of his BA to include secure authentication and transaction hedging in his security concept and inserts a Web Service client for calling the corresponding AXS-AS application. Part of the Provider registration process is the submission of a verified and cryptographically secured branding seal that visually represents the Provider organisation. After a completed registration the Provider is ready to accept Internet Passports as credentials from previously known or identified users to access the value service.
When an AXS-SM server receives from a Provider's BA the request to register a new Internet Passport, it needs the corresponding keys to access and initialise a new free channel in the concerned Internet Passport. For this the AXS-SM requests the necessary credential record from the AXSionics Credential Management (AXS-CM) server. (The credential records in the AXS-Card and the AXS-SM contain the parameters needed to establish a secure communication channel to run the BAC protocol; a credential record has the function of a security association (SA) as defined in RFC 2408 or IPsec.) The AXS-CM delivers the corresponding security association if the Internet Passport is known in its domain. Otherwise the AXS-CM server first has to ask for such a credential record at the AXS-CM of the issuing domain of the said Internet Passport. (The domain of an AXS-CM is defined as the tree of all attached AXS-SM servers, all the BAs of the Providers registered at these AXS-SM servers and all Internet Passports that have been initialised for and issued over this tree.) The AXSionics Domain Routers (AXS-DR) of the AXS-AS provide this localisation information for all AXS-CM servers. The Domain Routers have a similar function for the AXS-AS as DNS servers have for URL resolution in the Internet. They provide the path to the AXS-CM from which to get a credential record for an Internet Passport that was issued over another domain. Whenever a new Internet Passport is delivered to a user, the corresponding security association data are deposited in the AXS-CM server of the issuing broker.
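The paragraph above describes a DNS-like resolution of credential records across domains. The sketch below illustrates that lookup logic under stated assumptions; the class names, and the idea that the issuing domain can be derived from the Passport number, are placeholders for this illustration, not the AXSionics implementation.

# Illustrative sketch of the DNS-like credential-record resolution described above.
# All names and the PPN format are hypothetical placeholders.
class CredentialManager:                       # AXS-CM: stores security associations (SAs)
    def __init__(self, domain, records, domain_router):
        self.domain = domain
        self.records = records                 # {passport_number: security_association}
        self.router = domain_router            # AXS-DR used to find other domains

    def get_credential_record(self, ppn):
        if ppn in self.records:                # Passport issued in our own domain
            return self.records[ppn]
        # Otherwise ask the AXS-CM of the issuing domain, located via the AXS-DR.
        issuing_cm = self.router.locate_issuing_cm(ppn)
        return issuing_cm.get_credential_record(ppn)

class DomainRouter:                            # AXS-DR: like DNS for credential managers
    def __init__(self, registry):
        self.registry = registry               # {domain_prefix: CredentialManager}

    def locate_issuing_cm(self, ppn):
        prefix = ppn.split("-")[0]             # assumption: issuing domain encoded in the PPN
        return self.registry[prefix]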
[Figure 6 depicts the organisational actors Broker, Operator, Provider and User and the corresponding IT entities: AXS-CM credential management servers and AXS-DR domain routers on the broker side, AXS-SM servers on the operator side, business portals on the provider side and the Internet Passport on the user side.]
Figure 6: The architecture of the AXS-AS with the operative AXS-SM servers and the credential managing AXS-CM servers.
5 Innovative concepts of the AXS-AS
The AXS-AS introduces several innovative concepts and technologies for communication, identity management and user authentication.
5.1 Optical interface
The first and often surprising novelty is the way the token receives messages. The BAC message generated by the AXS-SM server appears on the screen of the local computer as a window with flickering fields. The user holds the activated Internet Passport directly on the display, with the flickering fields in the form of a stripe, a trapeze or a diamond. The card reads the flickering message with a linear array of optical sensors. Only the authentic Internet Passport that was addressed by the AXS-SM can read and decrypt the displayed flickering code. This one-way communication channel is sufficient to transmit a one-time password, a transaction receipt, a voting list or any other kind of short document that is needed to hedge a transaction. The Internet Passport shows the user the message content on the internal display together with some one-time codes for the possible responses. This communication scheme needs no electrical or radio connection between the Internet Passport and the local computer, and no software or hardware has to be installed on the local Internet access device. This makes the AXS-AS mobile, flexible and simple to roll out. Any computer that is connected to the Internet can be used to establish a secure channel between a service provider and a user.
Figure 7: The optical interface of the AXS-AS consists of a window that opens in the browser of the local computer. It shows flickering fields in the form of a trapeze. The user holds the AXS-Card directly on the flickering pattern on the computer display and receives the encrypted message from the operator. The local computer only serves as a transmission device and has no access to the encrypted message.
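As an illustration of the one-way optical channel just described, the sketch below serialises an already-encrypted message into a sequence of flicker frames and decodes it again. The frame size, the framing and the helper names are assumptions made for this sketch; the real AXS-AS encoding, error correction and cryptography are not described in this paper.

# Sketch of a one-way "flicker" channel: an encrypted byte string is serialised
# into per-frame bit patterns for a small number of on-screen fields, then read
# back by the card's linear sensor array. Frame layout is an assumption.
FIELDS_PER_FRAME = 8          # assumed number of flickering fields (1 byte per frame)

def encode_flicker(ciphertext: bytes):
    """Turn each byte into one frame of FIELDS_PER_FRAME bright/dark fields."""
    frames = []
    for byte in ciphertext:
        frame = [(byte >> bit) & 1 for bit in range(FIELDS_PER_FRAME)]
        frames.append(frame)  # 1 = bright field, 0 = dark field on the screen
    return frames

def decode_flicker(frames):
    """What the card's optical sensor array would reconstruct from the frames."""
    data = bytearray()
    for frame in frames:
        byte = 0
        for bit, value in enumerate(frame):
            byte |= (value & 1) << bit
        data.append(byte)
    return bytes(data)

# Round-trip check: the local computer only relays the frames and never sees the
# plaintext, because `ciphertext` is already encrypted for the addressed card.
ciphertext = b"\x8f\x12\xa5"          # stands in for an AXS-SM encrypted message
assert decode_flicker(encode_flicker(ciphertext)) == ciphertext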
5.2 User side identity federation
The second innovative approach is the identity credential management within the AXS-AS. Each Internet Passport contains a large number of pre-initialized secure channels that can be activated whenever necessary. The user decides which provider gets access to the identity credentials in his card. For this he submits a unique identifier (AXSionics Passport Number) to the operator. The AXS-SM to which the Provider's BA is attached then automatically allocates one of the not yet used pre-initialized channels to link the BA with the card. The AXS-SM changes the keys of the allocated channel in the Internet Passport at the first message exchange. After this first registration step the allocated channel is only accessible by the BA of the provider. It is the user who decides who gets access to the pre-initialized channels in his Internet Passport. This shared control of the communication channels allows a very flexible realization of identity federation. The trust is established and shared only between user and provider. No additional agreements between different operators or sharing of identity information are necessary.
The advantage of such a personal identity management assistant is evident. Today a typical user of computer systems and Internet services has to memorize and manage over 50 UserIDs, passwords and PIN codes. It is a well-known fact that users do not handle such identity credentials as valuable secrets. Users choose either simple passwords or simple rules to memorize passwords. Dictionary attacks can break most such password secrets within seconds [16]. To augment authentication security, value service providers are now moving to 2-factor authentication and distribute passive or active tokens (cards, OTP lists, time-dependent pass code generators, digital certificates etc.). The handling of all these physical and virtual identity credentials does not make life easier for their owner. Only user side identity management can handle this proliferation of identity credentials. The Internet Passport handles the administration of the multiple identity relations of the typical E-business user with a user side federation.
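A minimal sketch of the user-side federation described above: each card holds a pool of pre-initialized channels, and the first contact with a provider consumes one channel and re-keys it so that only that provider's BA can use it afterwards. The names and the key handling shown are illustrative assumptions, not the AXSionics implementation.

# Sketch of user-side identity federation: one pre-initialized channel per
# provider relationship, re-keyed on first use. Names are illustrative only.
import secrets

class InternetPassportCard:
    def __init__(self, ppn, channel_count=50):
        self.ppn = ppn
        # Pool of pre-initialized channels, each with a provisional key.
        self.free_channels = [secrets.token_bytes(16) for _ in range(channel_count)]
        self.bound_channels = {}            # provider_id -> channel key

    def bind_provider(self, provider_id):
        """Called (via the AXS-SM) the first time the user contacts a provider."""
        if provider_id in self.bound_channels:
            return self.bound_channels[provider_id]
        if not self.free_channels:
            raise RuntimeError("no free pre-initialized channels left")
        self.free_channels.pop()            # consume one pre-initialized channel
        new_key = secrets.token_bytes(16)   # AXS-SM re-keys it at the first message exchange
        self.bound_channels[provider_id] = new_key
        return new_key                      # from now on only this provider's BA can use it

card = InternetPassportCard(ppn="PPN-12345")
key_bank = card.bind_provider("bank.example")
key_shop = card.bind_provider("shop.example")
assert key_bank != key_shop                 # channels, and trust, stay pairwise separate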
Figure 8: User side identity federation through the Internet Passport.
5.3 Encapsulated biometrics
A further innovation of the AXS-AS is the way biometrics is implemented. The storage of the biometric reference template, the measurement and the comparison process are completely integrated in the Internet Passport. The card keeps the biometric data encapsulated under the full control of the owner. All the fears about irreversible corruption of biometric data become obsolete. The privacy of these data in the card is fully protected, which differentiates the AXS-AS solution from most other biometric identification or verification systems.
The biometric data are controlled by the user and only the biometric processing is controlled by the operator that deploys the processing device. Any outside instance gets only digital identity credentials with no information about the biometrics of the user. This concept allows mutual trust in the biometric identity verification and guarantees the necessary privacy and protection for the non-revocable biometric data of the user. This type of shared-control implementation of biometrics is recommended by the privacy protection community [17].
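The following sketch illustrates the match-on-card principle described above: the reference template never leaves the device, and the outside world only ever sees a digital credential. The matching function and threshold are simplified placeholders; real fingerprint matching is far more involved.

# Match-on-card sketch: the reference template stays inside the token; callers
# only receive a one-time credential, never biometric data. Illustrative only.
import hmac, hashlib, os

class BiometricToken:
    def __init__(self, reference_template: bytes):
        self.__template = reference_template        # never exported from the card
        self.__channel_key = os.urandom(16)         # per-provider channel secret

    def _match(self, live_sample: bytes) -> bool:
        # Placeholder for a real fingerprint matcher running inside the card.
        score = sum(a == b for a, b in zip(live_sample, self.__template))
        return score >= 0.9 * len(self.__template)

    def authenticate(self, live_sample: bytes, challenge: bytes):
        """Return a one-time response to the operator's challenge, or None."""
        if not self._match(live_sample):
            return None
        # Only a keyed digest leaves the card – no biometric information at all.
        return hmac.new(self.__channel_key, challenge, hashlib.sha256).digest()

token = BiometricToken(reference_template=b"A" * 32)
response = token.authenticate(live_sample=b"A" * 32, challenge=os.urandom(8))
assert response is not None and len(response) == 32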
Figure 9: All biometric processing and data are enclosed in the tamper resistant Internet Passport that
remains in the possession of the user. The biometric data are encapsulated
and protected in the secure processor memory of the device.
6 Conclusion and outlook
The E-business community is forced to find a solution to the growing importance of Internet-related crime. The need to secure the communication between operators and users of a value service has been recognised on a broad scale [18]. However, there is not yet a consensus about the way to face this challenge. More or less everyone agrees that the necessary secure channel needs some kind of additional infrastructure. There are communities which see the mobile phone and the SIM card as a carrier of this infrastructure [19]; others see a solution in the Trusted Computing approach or in an extensive use of smart cards as identity credentials [20]. On a longer time scale the community expects a convergence of the personal mobile communicator with an identity assistant. But independent of all the possible strategic technologies that the industry may develop and roll out on a large scale in the future, immediately effective solutions are requested now.
The presented AXS-Authentication System has the potential to defeat most of the current MITM and identity theft attacks on financial services. It could become the requested secure platform that runs already on the present IT infrastructure. In addition it presents unmatched advantages in user convenience, privacy protection and cost per channel. First adopters are now rolling out Internet Passports in their user communities. As soon as a critical density of users and acceptors is reached, the full power of the user-centric architecture will become evident.
References
[1] The Digital Economy Fact Book, 9th ed., The Progress & Freedom Foundation, 2007.
[2] Europe's eCommerce Forecast: 2006 to 2011, Jaap Favier, Forrester Research, 2006.
[3] "Identity theft: A new frontier for hackers and cybercrime", Claudio Cilli, Information Systems Control Journal, 6, 2005; "Online fraud costs $2.6 billion this year", B. Sullivan, 2007, MSNBC.com, http://www.msnbc.msn.com
[4] Gartner, "Nearly $2 Billion Lost in E-Commerce Sales in 2006 Due to Security Concerns", November 27, 2006 (http://www.gartner.com/it/page.jsp?id=498974); Gartner, "7.5 Percent of U.S. Adults Lost Money as a Result of Some Sort of Financial Fraud in 2008", March 4, 2009 (http://www.gartner.com/it/page.jsp?id=906312).
[5] MELANI report, Informationssicherung, Lage in der Schweiz und international, 2007/1; ISB, Schweizerische Eidgenossenschaft.
[6] Seventh United Nations Survey of Crime Trends and Operations of Criminal Justice Systems; http://www.nationmaster.com/graph/cri_tot_cri_percap-crime-total-crimes-per-capita
[7] Secrets & Lies, B. Schneier, Wiley Computer Publishing, John Wiley & Sons, Inc., ISBN 0-471-25311-1.
[8] Identity Fraud Trends and Patterns, G. Gordon et al., Center for Identity Management and Information Protection (CIMIP), Utica College; US Dept. of Homeland Security.
[9] ID Theft: Fraudster Techniques for Personal Data Collection, the Related Digital Evidence and Investigation Issues, Th. Tryfonas et al., online journal, ISACA, 2006.
[10] Web server exploit MPack: http://reviews.cnet.com/4520-3513_7-6745285-.html
[11] An unprotected computer attached to the Internet is infected within minutes; see "Nach einer Minute schon infiziert", SonntagsZeitung, Switzerland, 21.11.2008.
[12] Top Threats to Mobile Networks – And What to Do about Them, D. Ramirez, November 17, 2008, SearchMobileComputing (http://smartphone.searchmobilecomputing.com/kw;Mobile+Threats/Mobile+Threats/smartphone/phone-technology.htm)
[13] Gartner, "Consumers Are Unwilling to Sacrifice Convenience for Security, Despite Widespread Online Fraud", February 24, 2009 (http://www.gartner.com/it/page.jsp?id=895012).
[14] L. Müller, Authentication and transaction security in E-business, Proc. 3rd IFIP Int. Summer School 2007, ISBN-13: 978-0387790251.
[15] Trusted Computing Group; http://www.trustedcomputinggroup.org/home
[16] MySpace Passwords Aren't So Dumb, Bruce Schneier, Wired, 14.12.2006, http://www.wired.com/politics/security/commentary/securitymatters/2006/12/72300
[17] Biometrics in identity management, FIDIS EU NoE FP6, D3.10 (to be published), http://www.fidis.net
[18] Information Security Is Falling Short, It Is Time to Change the Game, A. Coviello, keynote speech at the RSA Conference Europe 2007, London.
[19] The Smart and Secure World in 2020, J. Seneca, Eurosmart Conference, 2007.
[20] Establishing a Uniform Identity Credential on a National Scale, BearingPoint white paper; and Protecting Future Large Polymorphic Networked Infrastructure, D. Purdy, US solution for a national governmental electronic ID card, presented at the World e-ID conference in Sophia Antipolis, September 2007.
Security analysis of the new Microsoft MAC solution
Martin Ondráček
SODATSW spol. s r.o., product director
[email protected]
Ondřej Ševeček
GOPAS, a.s., product manager
[email protected]
Abstract
Because the general principle behind Mandatory Access Control (MAC) is well established and has also been implemented in various security technologies, Microsoft decided to add this security technology to the Microsoft Windows family of operating system products, available from Windows Vista onwards. Microsoft Windows are the most widely deployed enterprise operating systems, so the technology will have an impact on the vast majority of the world's confidential data and expertise.
Until Windows Vista was released, the operating system enforced and guaranteed only user identity based Discretionary Access Control (DAC), which had, as it later proved, certain limitations. These include, but are definitely not limited to, the inability to separate operating system and user code and data or to enforce strict administrative access separation upon various user roles. Integrity Levels, as Microsoft calls their implementation of MAC, bring a powerful new level of access control. Application data can now be protected even amongst code segments running under a common user or system identity. This provides for better security for services susceptible to remote attacks. It also enables application developers to prevent network authorities from accessing user data.
In this paper we do not only describe the basic principles of general MAC and details of its current Microsoft implementation. The most important part is a discussion of new possibilities within the data security field and its administrative process model. The limitation of the technology is that it is focused on external threats and only a few internal ones. Because employees are the biggest threat to a company's data, we would like to explain how it, in its current state of functionality, still cannot prevent the most severe attacks originating from within organizations themselves.
Keywords: Mandatory Access Control, Windows Vista, Integrity Levels.
1 Mandatory access control - introduction
Mandatory access control in computer security means a type of access control by which the operating system restricts the ability of a subject to access or perform some operation on an object. In practice a subject is typically a process; an object means a file, directory, shared memory etc. Whenever a subject attempts to access an object, the operating system kernel runs an authorization rule which decides whether the access is allowed. That is the reason why each subject and object has a set of security attributes. MAC assigns a security level to all information and assigns a security clearance to each user. The authorization rules are collectively known as a policy. Any operation is tested against the policy to determine if the operation is allowed. The goal is to ensure that users have access only to that data for which they have clearance. The security policy is controlled by a security policy administrator. Users do not have permission to change the policy, for example to grant access to files that would otherwise be restricted.
By contrast, Discretionary Access Control (DAC) defines basic access control policies for objects which are set at the discretion of the owner of the objects, for example user and group ownership or file and directory permissions. DAC allows users (owners of an object) to make policy decisions and assign security attributes - to grant or deny access permissions to other users. MAC-enabled systems, in contrast, allow policy administrators to implement organization-wide security policies: only security administrators can define a central policy that is guaranteed to be enforced for all users.
Mandatory Access Controls are considerably safer than discretionary controls, but they are harder to implement and many applications function incorrectly under them. That is why users are used to DAC, and MAC has usually been used only in extremely secure systems, including secure military applications or mission-critical data applications.
Mandatory access control models exhibit the following attributes (a minimal sketch of such a policy check follows the list):
• Only administrators, not data owners, make changes to a resource's security label.
• All data is assigned a security level that reflects its relative sensitivity, confidentiality, and protection value.
• All users can read from a classification lower than the one they are granted (a "secret" user can read an unclassified document).
• All users are given read/write access to objects of the same classification (a "secret" user can read/write to a secret document).
• Access to objects is authorized or restricted based on the time of day, depending on the labelling of the resource and the user's credentials (driven by policy).
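As a minimal illustration of the rules listed above (read down, read/write at the same level), the sketch below implements a toy label check in Python. The level ordering and the two helper functions are assumptions chosen for this example; they are not the Windows implementation, which is discussed in the following sections.

# Toy mandatory access control check mirroring the attribute list above:
# a subject may read objects at or below its clearance, and read/write objects
# at exactly its own classification. Illustrative only.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_clearance: str, object_label: str) -> bool:
    return LEVELS[subject_clearance] >= LEVELS[object_label]      # read down

def can_write(subject_clearance: str, object_label: str) -> bool:
    return LEVELS[subject_clearance] == LEVELS[object_label]      # write at the same level

assert can_read("secret", "unclassified")        # a "secret" user reads an unclassified document
assert can_read("secret", "secret") and can_write("secret", "secret")
assert not can_write("secret", "unclassified")   # no writing outside the own classification
assert not can_read("confidential", "secret")    # no reading up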
1.1 Some implementations in operating systems
There are only a few robust operating systems with MAC implemented. Actually none of these is certified by TCSEC (Trusted Computer System Evaluation Criteria) as robust enough to separate Top Secret from Unclassified. However, some less robust products exist.
• SELinux (an NSA project) added a MAC architecture to the Linux 2.6 kernel - it uses the LSM (Linux Security Modules) kernel interface. Red Hat Enterprise Linux version 4 and later comes with an SELinux-enabled kernel.
• Ubuntu (from version 7.10 onwards) and SUSE Linux have a MAC implementation called AppArmor - this also uses the LSM kernel interface. AppArmor is incapable of restricting all programs and is not included in the kernel.org kernel source tree. In most Linux distributions MAC is not installed.
• GrSecurity - this is a patch for the Linux kernel providing a MAC implementation; it is not implemented in any Linux distribution by default. GrSecurity disables the kernel LSM API, because it provides hooks that could be used by rootkits.
• TrustedBSD - from version 5.0 onwards this is incorporated into releases of FreeBSD. MAC on FreeBSD comes with pre-built structures for implementing MAC models such as Biba and Multi-Level Security. It can also be used in Apple's Mac OS X through the TrustedBSD MAC framework.
• Trusted Solaris - Sun uses a mandatory access control mechanism where clearances and labels are used to enforce a security policy. However, the capability to manage labels does not imply the kernel strength to operate in Multi-Level Security mode.
• Microsoft Windows Vista and Server 2008 - finally, the latest version of the Microsoft operating system implements Mandatory Integrity Control, adding Integrity Levels to processes running in a login session. Because Microsoft Windows are the most widely deployed enterprise operating systems, we will describe this implementation throughout the remainder of this article.
2 MAC in Windows
This article describes MAC (Mandatory Access Control) technology in Microsoft Windows Vista and newer versions of Microsoft operating systems built upon the Windows NT platform. Unless mentioned otherwise, the term Windows or Vista refers to the current implementation of Microsoft Windows Vista or Microsoft Windows Server 2008 (which both utilise the same kernel version). Other terms used are in accordance with Microsoft's own public documentation resources unless explicitly stated otherwise.
2.1 Identities
Windows represents the identities of users, groups and other objects by their respective SIDs (Security IDs). Each user or group is assigned a unique (mostly random, but unique) SID. A user can be a member of several groups. This results in the user having several identities (several SIDs): their own and all the group SIDs.
Starting with Vista, each user is also assigned a special system SID (one per user) defining their Integrity Level. The Integrity Level SID is used to enforce MAC (Mandatory Access Control), which is our topic here.
For example, a user could have the following SIDs:
                  SID                                               Note
user's SID        S-1-5-21-1292003277-1427277803-2757683523-1000   random, but system wide unique
group1 SID        S-1-5-21-1292003277-1427277803-2757683523-1001   random, but system wide unique
group2 SID        S-1-5-21-1292003277-1427277803-2757683523-1002   random, but system wide unique
group3 SID        S-1-5-32-544                                      built-in group, system wide unique, the same on all Windows systems
integrity level   S-1-16-12288                                      system wide unique, the same on all Windows systems

Table 1: Example of user's Access Token contents.
Security and Protection of Information 2009
79
The Integrity Levels defined in Vista are:
Integrity Level (Mandatory Level)   SID            Mandatory access rights level
Untrusted                           S-1-16-12285   Lowest
Low                                 S-1-16-12286
Medium                              S-1-16-12287
High                                S-1-16-12288
System                              S-1-16-12289
Trusted Installer                   S-1-16-12290   Highest

Table 2: Integrity Levels on Vista.
Note: The “Mandatory access rights level” column in table 2 is included only to facilitate understanding
the principle, which will be discussed later. For now, we will say only that the higher the Integrity Level,
the higher the user rights within the system.
As stated, the user’s identity is represented by the list of SIDs. The list is usually stored in a system defined
data structure called an Access Token. An Access Token is used for access checks against securable objects.
2.2 Windows Access Control Architecture
The enforcement of access control in Windows is the sole responsibility of each separate component providing services for processes running under a specific user account. Such kernel components include, for example, the NTFS file system (kernel mode driver), the Registry (kernel mode component of the executive) and the Window manager (kernel mode driver). User mode Windows services that enforce access control restrictions are, for example, the Service Controller, the Server service (file sharing), Active Directory, Terminal Services, Winlogon (logon UI), DNS Server and Cluster Services.
To enable access control checks, the operating system provides developers with two similar API functions, available both in kernel mode (SeAccessCheck) and in user mode (AccessCheck). The two functions provide stateless access checking (they are part of call-based DLLs). It is the responsibility of an application enforcing access control to provide the function with a user's Access Token (user identity proof) and a Security Descriptor (list of access control rules) of the securable object.
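To illustrate the calling pattern just described (the caller supplies an Access Token and a Security Descriptor, the function answers allow or deny), here is a deliberately simplified conceptual model in Python. It is not the Win32 AccessCheck API; the structures and the evaluation order are assumptions made for the illustration.

# Conceptual model of a stateless access check: the caller supplies the token
# (list of SIDs) and the security descriptor (list of ACEs); the function keeps
# no state of its own. Simplified illustration, not the Win32 AccessCheck API.
from dataclasses import dataclass, field

@dataclass
class Ace:
    sid: str            # identity the rule applies to (user, group or level SID)
    access: set         # e.g. {"read", "write", "execute"}
    allow: bool = True  # allow or deny rule

@dataclass
class SecurityDescriptor:
    aces: list = field(default_factory=list)

@dataclass
class AccessToken:
    sids: list          # user SID + group SIDs (+ one Integrity Level SID on Vista)

def access_check(token: AccessToken, sd: SecurityDescriptor, desired: set) -> bool:
    """Grant access only if matching allow ACEs cover every desired right and no
    matching deny ACE covers any of them (simplified ordering)."""
    granted = set()
    for ace in sd.aces:
        if ace.sid not in token.sids:
            continue
        if not ace.allow and ace.access & desired:
            return False                      # an explicit deny wins
        if ace.allow:
            granted |= ace.access
    return desired <= granted

token = AccessToken(sids=["S-1-5-21-...-1000", "S-1-5-32-544", "S-1-16-12288"])
sd = SecurityDescriptor(aces=[Ace(sid="S-1-5-32-544", access={"read", "write"})])
print(access_check(token, sd, {"read"}))      # True: granted through the group SID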
As previously mentioned, an Access Token is a data structure which contains a list of a user's identities in the form of SIDs. This list must contain the user's own SID and can also contain a list of SIDs for groups of which the user is a member. Starting with Vista, the list also contains one system SID entry asserting the user's Integrity Level.
An Access Token is built by a user mode process called the Local Security Authority (LSA, lsass.exe). LSA is a trusted operating system process which starts automatically at system boot. Its binaries are digitally signed by Microsoft and the signatures are checked by the kernel mode process loader. This ensures the validity of the LSA process identity. LSA builds Access Tokens only in response to user requests (usually coming from the Winlogon logon process, but these can even be issued by arbitrary user mode code) which must contain proof of user identity (such as user login name and password). The resulting Access Token is then returned to the user process which requested it. The token contents are unsigned; this is only an implementation detail, and signing is not necessary to ensure its validity (read further).
An Access Token can be attached to a Process (or Thread), the kernel mode structure which describes each process (thread) running on the operating system. Only trusted operating system processes (such as Winlogon or LSA) can attach an arbitrary token to a Process (Thread). This fact offers the same level of security as if the token were signed by LSA, as previously mentioned.
When an application is checking access it can either use the Process's (Thread's) attached token, or it can build its own token from supplied user credentials. Kernel mode components always use only the system guaranteed Process or Thread attached tokens.
A Security Descriptor (SD) is a data structure comprised mainly of a list of Access Control Entries. Access Control Entries (ACEs) represent access control rules applied to a securable object. Such objects are, for example, files, folders, registry keys, LDAP containers, LDAP objects, shared folders etc. An ACE is basically an access rule defining "who can do what" based on the SID of the user (identity). For example, Martin's SID can read/write, or the Sales group's SID can read only. Vista also defines several system SIDs for various Integrity Levels, which means that an ACE can also be created for something like "a certain Integrity Level can read and execute".
Security Descriptors are stored by each application in their own defined storage, such as the $Secure file on NTFS volumes, in registry files or in NTDS.DIT (the Active Directory database file). The application itself guarantees SD validity and it is its own responsibility to prevent tampering with it. Before Vista, there was no built-in mechanism to prevent unauthorized modifications of Security Descriptors when the attacker was able to obtain physical access to the storage. Starting with Vista, the storage can be physically secured by BitLocker Drive Encryption, which uses the TPM (Trusted Platform Module) for encryption key storage, hardware configuration validation and signature checks of operating system binaries (more on this later).
2.3 Trust
As described previously, an Access Token (the user's identity) is built by LSA, the system's trusted security authority. In principle an Access Token can even be built manually by application processes because it is unsigned. However, a system (LSA) created Access Token cannot be spoofed. The data structure created by LSA is stored in kernel mode and user applications receive only a handle to it.
If an application accepts only the tokens attached to processes/threads, it can be sure they have not been tampered with. This attachment could only have been made by the trusted authority itself. From that point onwards the token lives in kernel mode. The processor architecture (64-bit only) provides operating system code with the ability to ensure it cannot be tampered with within kernel mode memory, even by administrators.
There is also an API which enables applications to ask LSA to securely modify existing tokens issued by itself. A user can ask to flag some of the SIDs with certain values or to duplicate the token with a different Integrity Level.
2.4 Process startup
This duplication is requested, for example, when a new process is created from an existing one. The kernel mode code doing so asks LSA to duplicate the token and attach it to the newly created kernel mode structure. This duplication is made by default. There is no modification applied by the default process, but a modification is possible on behalf of the user process requesting the process creation operation.
Processes running with specific levels of permission can also create children with completely different tokens. However, the tokens must be ones originally created in kernel mode by LSA, which ensures trust. In this manner every process has an Access Token of the respective user attached. This ensures that access control enforcement can be carried out later by whichever service requires it.
2.5 Access Token and Integrity Levels
When LSA builds the user's Access Token it always includes the user's own SID and all the SIDs of the groups of which that user is a member. On Vista, LSA also includes the Integrity Level SID, depending on some hard-coded logic.
The Integrity Level included is determined by the circumstances of the user account in question. The system will always include only one Integrity Level SID, according to table 2 and table 3:
Logon type                                                            Integrity Level     Access rights level
User has been logged on anonymously (without providing credentials)  Low                 Lowest
User is a normal user logged on with credentials                     Medium
User is a member of local Administrators                             High
User is logging on as a Service                                      System
User is the Trusted Installer system account                         Trusted Installer   Highest

Table 3: Integrity Levels according to logon type.
Note 1: The resulting Integrity Level will usually be the highest possible value of those available as long as
the logon type meets several conditions. The except is the anonymously logged on user whose Integrity
Level is always Low regardless of other conditions.
Note 2: Trusted Installer system account is a special system account which only LSA itself can log onto.
LSA creates such a token only when requested by a special kernel mode LPC (Local Procedure Code) call
from another trusted system process called Service Controller. The account is used by the operating system
itself for self-code modifications such as the application of updates, upgrades, patches or re/installation of
system components. The code running under such an account works only with Microsoft signed operating
system binaries.
Note 3: Regardless of logon method, a user cannot receive a higher level of access than System.
Note 4: A user can ask for a token that has a lower access level than what would be the default. A User
cannot receive an access level higher than that which applies to his account from the logic above. A User
can also ask LSA to duplicate an existing token with a lower Integrity Level applied.
Note 5: Medium Integrity Level is usually issued to normal users.
Note 6: The logic mentioned is currently hard coded. There is no way to modify a user account in order to
configure it with a certain preset Integrity Level. Although the list of levels is limited to the 7 levels
mentioned by table 1, the levels are only a local setting (not affecting other network systems). This means
that future versions of the operating system could offer a larger list of levels or different logic for their
assignment. It is probable however, considering its essence that the “level” principle will be adhered to.
The newly created token can be attached to a newly created process. This is usually the first user process
created for the user who has just logged on. Any future processes started from the actual process will
receive a copy of the token by the normal method of its duplication as mentioned above. If requested, the
newly created processes can have the tokens modified to a lower Integrity Level.
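As an illustration of the selection logic summarized in Table 3 and Note 1, the following sketch picks the single Integrity Level SID to include in a token. This is illustrative Python only, not LSA or any Windows API; the flags and function name are made up.

```python
# Illustrative sketch of the Integrity Level selection logic implied by
# Table 3 and Note 1 (not actual Windows/LSA code; all names are made up).

LEVEL_ORDER = ["Untrusted", "Low", "Medium", "High", "System", "TrustedInstaller"]

def integrity_level_for_logon(anonymous, is_admin, is_service, is_trusted_installer):
    """Return the single Integrity Level label included in the new token."""
    if anonymous:                 # anonymous logon is always Low (Note 1)
        return "Low"
    candidates = ["Medium"]       # any credentialed user gets at least Medium
    if is_admin:
        candidates.append("High")
    if is_service:
        candidates.append("System")
    if is_trusted_installer:
        candidates.append("TrustedInstaller")
    # the highest applicable level wins
    return max(candidates, key=LEVEL_ORDER.index)

assert integrity_level_for_logon(False, True, False, False) == "High"
assert integrity_level_for_logon(True, True, False, False) == "Low"
```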
2.6
Normal user logon process
When the operating system powers up it first starts the two trusted system processes, LSA and Winlogon.
Winlogon then displays logon UI to enable users to provide their credentials. Once the credentials have
been collected from a user, Winlogon asks LSA to create an Access Token for that user. LSA creates the
token in kernel mode memory, obtains a handle to it and returns the user mode handle to Winlogon.
Winlogon then creates a new (the first) user mode process for the user and attaches the created token to it.
Next the user’s process receives control and can start to obtain user control requests and create further
processes on behalf of the user. The original token, or its duplicates, propagate throughout the processes
and each service requested can apply access control mechanisms to the user’s token.
We can understand this behaviour by imagining the process is running under the user’s identity. In our
particular case we will refer to it as running under the user’s identity with a certain Integrity Level.
2.7
Security Descriptors
Files, registry keys, folders, services, windows etc. can be assigned a single Security Descriptor. The Security Descriptor can contain several ACEs, each for a specific SID and an appropriate access type. Some ACEs are for user or group SIDs, others for Integrity Level SIDs. The latter are used further below and are called Integrity Level ACEs.
With regard to Integrity Levels, we are concerned only with the Read, Write and Execute access types. It is possible to map each of the more granular access types to these three generic types. This mapping is hard coded into the access check API routines.
A single ACE can be created, for example, so that it specifies Medium Integrity Level for Read access.
Another could be created in the same Security Descriptor with High Integrity Level for Write access. The
ACEs can also have standard system flags such as object-inherit (OI), container-inherit (CI), inherit-only
(IO) and/or no-propagate-inherit (NP) as well. As there are only three access types there may be, at most,
three Integrity Level ACEs in a single Security Descriptor.
If there is no such Integrity Level ACE for a certain access type inside a Security Descriptor, the system
then assumes the object has been stamped with Medium Integrity Level for that particular access type
(Read, Write or Execute respectively).
2.8
Access checks against Integrity Level ACEs
As opposed to the previous access checking behaviour, Vista introduced modified logic with regard to the Integrity Level ACEs. When an Access Token is checked against such an ACE, there can be, basically, only two results: the Access Token either meets the ACE's requirements or it does not.
We could define two simple terms:
• ACE Level means the Integrity Level in the specific Integrity Level ACE.
• Token Level means the Integrity Level in a user's token.
Note: There can be up to three ACE Levels in a Security Descriptor (one for each access type of Read,
Write and Execute) and only a single Token Level in a token.
Then the following logic applies: a token meets the ACE rule if the Token Level is the same as or higher (table 2) than the ACE Level.
For example:
Token Level | Possible ACE Levels meeting the requirement
High        | High, Medium, Low, Untrusted
System      | System, High, Medium, Low, Untrusted
Table 4: Example of Token Levels versus ACE Levels in a Security Descriptor.
Examples from the other point of view:
ACEs on a file             | Token Level | Granted access
Low Read, Medium Write     | Medium      | Read, Write, Delete, Change ACE
Low Read, Medium Write     | Low         | Read
High Read, High Write      | Medium      | -
System Read, System Write  | Medium      | -
Table 5: Examples of file access ACEs in relation to user Token Levels.
Note: Delete and Change ACE access types are mapped by the API system as the generic Write access type.
Note: The Integrity Level ACEs do not grant the appropriate access by themselves. There still have to be appropriate normal ACEs that allow the user access. Integrity Level ACEs can be viewed as a second layer of security applied only after the regular ACEs have been evaluated; they only restrict access further.
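The following minimal sketch combines the regular ACE check with the Integrity Level check described above, including the default Medium stamp for access types that carry no Integrity Level ACE. It is illustrative Python only, not the Windows access-check API, and assumes granular rights were already mapped to the three generic types.

```python
# Minimal sketch of the two-layer check described above (illustrative only,
# not the Windows access-check API).

LEVELS = ["Untrusted", "Low", "Medium", "High", "System", "TrustedInstaller"]

def meets(token_level, ace_level):
    # A token meets an Integrity Level ACE if its level is the same or higher.
    return LEVELS.index(token_level) >= LEVELS.index(ace_level)

def access_check(requested, granted_by_regular_aces, il_aces, token_level):
    """requested: set of generic accesses, e.g. {"Read", "Write"}
    granted_by_regular_aces: accesses allowed by the normal ACEs (first layer)
    il_aces: dict access type -> required Integrity Level (missing -> Medium)
    """
    for access in requested:
        if access not in granted_by_regular_aces:        # first layer
            return False
        required = il_aces.get(access, "Medium")          # default stamp
        if not meets(token_level, required):              # second layer
            return False
    return True

# Third row of Table 5: High Read / High Write ACEs, Medium token -> no access.
print(access_check({"Read"}, {"Read", "Write"},
                   {"Read": "High", "Write": "High"}, "Medium"))  # False
```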
2.9
Usage examples
Normal resources (files, registry, etc.) are not stamped with any Integrity Level ACE. This results in them
having Medium ACE Level for all three types of access. A normal user is logged on with a Medium Token
Level. This means that normal users can generally access whatever resources they are granted access to by
other ACEs.
If the system wants certain resources to be accessible only to administrators, services and itself (Trusted
Installer), it can stamp them with a High ACE Level. There can also be resources accessible only to system
services (System ACE level). This provides a layer of isolation from administrators. Although
administrators cannot access the files directly by themselves, they could ask a system service called Service
Controller to install their arbitrary code as a system service. This code would in turn run with System
Token Level and they would be able to gain access to some files stamped with System ACE Level.
However, there is currently one ACE Level which could prevent even Administrators from accessing some resources. The Trusted Installer ACE Level requires processes to run under the Trusted Installer Token Level. Only trusted operating system processes can run under such a Token Level. Unless the trusted
processes enable Administrators to access or modify the data, they would have no access at all. Currently,
there are no such resources within Vista that would prevent Administrators from gaining access to them,
but this could change in future versions of the operating system.
Another use can be demonstrated on vulnerable or insecure data/processes. For example Internet Explorer, which is one of the most common venues for malicious software attacks, intentionally runs at Low Token Level. There is only one disk folder (the temporary storage) which is stamped with a Low Integrity Level ACE for Write access. This means that the IE process, even if compromised, cannot write/modify/harm anything other than the temporary files stored in this location.
3 Impacts for data security
Our analysis of the Windows MAC implementation shows how the operating system works with identities, files, processes etc. This technical description can uncover possible advantages and disadvantages. The most important point is that a standard user is logged on with Medium Token Level. Standard files, directories and the registry also have Medium ACE Level for all access types (because they are not stamped with any Integrity Level ACE). By default, processes started by a user obtain Medium Integrity Level and elevated processes have High Integrity Level. A process must be explicitly configured to run with Low Integrity Level (a low-integrity process).
The limitation of the technology is that it is focused only upon external threats. The technology protects the system core, programs and files especially against internet threats. Because Internet Explorer (and in future, hopefully, other internet clients) is the only low-integrity process, it cannot interact with other processes with a higher Integrity Level. This means that it cannot interact with other user or system processes, data or the registry. If Internet Explorer runs harmful code which tries to call API functions such as CreateRemoteThread (inject a DLL), WriteProcessMemory (send data to a different process), SetThreadContext and related APIs, it will be blocked. This allows potentially vulnerable applications to be isolated with limited access to the system (system level objects cannot be read or modified by them).
However, our daily experience exposes a different problem. The biggest threats to a company's data are its employees – standard users or administrators. They are able to access data (files), make modifications, make copies, use USB disks etc. They also use notebooks and remote access. Windows MAC without third-party software does not change anything in data security. There is still room for offline data attacks (e.g., a lost notebook/hard disk/USB disk) or online attacks (e.g., an unlocked computer, e-mails, an administrator attack). Note that unless the meta/data storage is physically secured, the ACEs can be modified by an offline attack regardless of their ACE Level.
If we view our users as potential attackers, MAC does not help us at all. The user faces no new limitation when accessing data, copying data or using unauthorized software (of course, they cannot install it, but there are so many portable apps). We still have to use sophisticated software for monitoring users, file restrictions, encryption or alerting. Of course, security is a combination of various measures and we must view it as such. MAC can help us block unwanted code from the internet more easily, but other routes to our data are still available.
In addition, there are practically only two applicable access rights levels – Low and Medium. Higher levels are used by the local administrator, the system, or Microsoft (Trusted Installer). Low Integrity Level is currently used only by Internet Explorer and is intended to separate internet code from the rest of the computer, so it is hard to imagine using it for standard applications which work with user files. In general practice this means that almost all standard applications will use the same level – Medium Integrity Level. From this point of view, all applications will have the same access to files containing company know-how. Every application running under the user's token is a potential security leak.
There is also one new unknown: the Trusted Installer Integrity Level, which is higher than System, is able to do everything and nothing is capable of stopping it. Trusted Installer is intended for Microsoft (patching Windows), but it could potentially be used for hiding files, processes or anything else. We have to trust Microsoft that its binaries are safe and without mistakes.
Because there is no possibility to add one's own access level or to modify the logic of Integrity Levels, the result of the whole technology is only operating system protection. Microsoft designed it to increase system stability and lower the number of administrator interventions. Data classification and separation are still a long way off.
Windows Vista is still not the enterprise standard. This is the reason why many software producers do not yet prepare software to utilise the future potential of Integrity Levels. There will be quite a large gap between the introduction of the technology and its usage; we predict that widespread usage is probable in the next 3-5 years. On the other hand, once most of the common threats have been blocked, the producers of viruses/trojans/spyware will have to change their target. Because social engineering is becoming more and more popular, this could be a suitable field for such groups. Preventing or controlling this will be a much harder goal for software producers and is going to be a challenge in the future.
4 Conclusion
Starting with Windows Vista, Microsoft has added another layer of access control security exhibiting similarities to Mandatory Access Control. Currently this implementation is restricted to a small set of useful levels which are applied only to prevent malicious software attacks and to increase system stability by preventing system code/configuration modifications.
Integrity Levels technology is not a silver bullet. It currently has only limited use for application developers, but future development could bring greater application possibilities due to the fact that the technology does not require network-wide compatibility and is only a matter of local system access checks.
There has also been a significant step into the realm of "self-secure" operating systems which prevent even local administrators from doing anything untoward. The presumed goal is probably to enforce the DRM (Digital Rights Management) or RMS (Rights Management Service) features of the operating system and to prevent local administrators from tampering with such technologies.
There is no overall approach to data security against offline attacks and theft by authorised users. There is still a need to encrypt the company's data and control user activities.
On the other hand, this not only secures the operating system code itself, but also provides Microsoft (the
owner of the code and the higher Integrity Levels) with the future ability to hide code/data or enforce
policies that would not be under the local administrator’s control.
References
[1] http://msdn.microsoft.com/en-us/library/bb625964.aspx
[2] http://msdn.microsoft.com/en-us/library/bb250462.aspx
[3] http://www.minasi.com/apps/
[4] http://www.securityfocus.com/infocus/1887/2
[5] http://www.symantec.com/avcenter/reference/Windows_Vista_Security_Model_Analysis.pdf
[6] http://blogs.technet.com/steriley/archive/2006/07/21/442870.aspx
[7] http://blogs.technet.com/markrussinovich/archive/2007/02/12/638372.aspx
[8] http://en.wikipedia.org/wiki/Mandatory_access_control
Cryptographic Protocols in Wireless Sensor Networks
Petr Švenda 1
[email protected]
Faculty of Informatics
Masaryk University
Brno, Czech Republic
Abstract
This paper introduces the technology of wireless sensor networks with a special focus on its security issues. This relatively young technology started to evolve together with advances in the miniaturization of electronic devices, decreasing costs and the general spread of wireless communication. Data sensed by the miniature devices in a target area (e.g., temperature, pressure, movement) are locally processed and then transmitted to the end user, who gains the possibility to continuously monitor the target environment. Applications of the technology range from medical monitoring of patients, through agricultural and industrial monitoring or early-warning emergency systems, to military purposes – which is where the technology originally started. We cover in more detail the design of key distribution and establishment protocols secure against partial network compromise. Special focus is given to the possibility of automated generation of such protocols for a particular scenario based on evolutionary algorithms. The opposite direction is covered as well – automated search for attacker strategies, with applications to secure routing and key capture attacks.
Keywords: key exchange protocols, evolutionary algorithms, wireless sensor networks.
1 Introduction
Advances in the miniaturization of electronics open the opportunity to build devices that are small in scale, can run autonomously on battery and can communicate over short distances via wireless radio. These devices can be used to form a new class of applications, Wireless Sensor Networks (WSNs). WSNs consist of a mesh of several powerful devices (denoted as base stations, sinks or cluster controllers) and a high number (10^2 - 10^6) of low-cost devices (denoted as nodes or motes), which are constrained in processing power, memory and energy. These nodes are typically equipped with an environment sensor (e.g., heat, pressure, light, movement). Events recorded by the sensor nodes are locally collected and then forwarded using multi-hop paths to a base station (BS) for further processing.
Wireless networks are widely used today and they will become even more widespread with the increasing
number of personal digital devices that people are going to be using in the near future. Sensor networks
form just a small fraction of future applications, but they abstract some of the new concepts in distributed
computing.
WSNs are considered for and deployed in a multitude of different scenarios such as emergency response information, energy management, medical and wildlife monitoring or battlefield management. Resource-constrained nodes present new challenges for suitable routing, key distribution, and communication protocols. Still, the notion of sensor networks is used in several different contexts. There are projects targeting the development of very small and cheap sensors (e.g., [2]) as well as research in middleware architectures [3] and routing protocols (AODV [4], DSR [5], TORA, etc.) for self-organising networks – to name a few.
1 The findings described in this work are the result of cooperation with Dan Cvrček, Jiří Kůr, Václav Matyáš, Lukáš Sekanina and others.
No common hardware architecture for WSNs is postulated; it depends on the target usage scenario. Currently available hardware platforms for sensor nodes range from the Crossbow Imote2 [6] with a powerful ARM XScale processor clocked at up to 416 MHz, over the Mica Mote2 [7] or TMote Sky equipped with an 8/16-bit processor with less than 10 MHz clock frequency, down to Smart Dust motes [2] with a total size around 1 mm^3 and extremely limited computational power. No tamper resistance of the node hardware is usually assumed, as tamper resistance significantly increases the cost of the device.
Security is usually an important factor in WSN deployment, yet the applicability of some security approaches is often limited. Terminal sensor nodes may have little or no physical protection and should therefore be considered untrusted. Also, knowledge of the network topology is limited or not available in advance. Due to limited battery power, communication traffic should be kept as low as possible and most operations should be done locally, without involving the more powerful and (possibly) trusted base station.
2 Target of interest
Secure link communication is the building block for a large part of the security functions maintained by the network. Data encryption is vital for preventing the attacker from learning the actual values of sensed data and can also help to protect the privacy of the sensed environment. The aggregation of data from separate sensors needs to be authenticated; otherwise an attacker can inject his own bogus information. Routing algorithms need to utilize the authentication of packets and neighbours to detect and drop malicious messages, and thus prevent network energy depletion and route messages only over trustworthy nodes. On top of these common goals, secure and authenticated communication between neighbours can be used to build more complex protocols designed to maintain reliable sensing information even in a partially compromised network. The generally restricted environment of WSNs is a challenge for the design of such protocols.
2.1
Target scenarios
The early stage of wireless sensor network design, with sparse real-world implementations, naturally results in a wide range of assumptions in theoretical works about the network architecture (static, dynamic, possibility of redeployments, degree of possible disconnection), topology (completely decentralized, with clusters, various tree-like structures), size (from tens of nodes up to hundreds of thousands of nodes) and expected lifetime (short-term and task-specific networks vs. long-term ubiquitous networks embedded in the infrastructure). Various assumptions are made about the properties of sensor nodes, namely their mobility (fixed, slowly moving, frequent and fast change), available memory, computational power (symmetric cryptography only, regular use of asymmetric cryptography), energy restrictions (power grid, replaceable/non-replaceable batteries) or availability of special secure hardware (no protection, secure storage, trusted execution environment). Different assumptions are also made about the network operation mode (continuous, seasonal, event-driven), how queries are issued and propagated (all nodes report in certain intervals, on-demand data reading) and how data from sensor nodes are transported back to base stations (direct communication, data aggregation). Finally, the attacker model ranges from a fully fledged adversary with the capability to monitor all links and selectively compromise nodes to a more restricted attacker with limited resources, presence and prior knowledge about the attacked network.
Here, we introduce node and network properties with the relevant assumptions, targeting networks that need to be maintained in a decentralized, distributed, energy-efficient way with an expectation of partial network compromise. The following list also provides an introduction to the wide range of parameters and tradeoffs that need to be taken into account when designing a suitable cryptographic protocol. The defined assumptions will be used in the subsequent text.
• Network architecture – We assume a mostly static network with a large number of nodes (10^2 - 10^6) deployed over a large area without the possibility of continuous physical surveillance of the deployed nodes. Redeployments (additional nodes are physically deployed into the same target area and cryptographically interconnected with existing nodes) are possible for the group-supported protocol described in Section 3.1 and for secrecy amplification protocols applied with probabilistic pre-distribution (Section 3.2), but not for the Key Infection approach [14]. We studied networks with densities (number of neighbours in reach of radio transmission) from very sparse (e.g., two nodes on average) up to very dense (forty nodes on average).
• Network topology – We assume a network topology that requires link keys between direct neighbours (e.g., for link encryption, data aggregation...), leaving more advanced topology issues related to routing and data transmission unspecified. Although it is possible in principle to use the protocols described in Section 3.1 to establish also peer-to-peer (p2p) keys with distant nodes, we focus on link key establishment and do not explicitly discuss p2p keys, as additional assumptions about the network topology and routing algorithms would then be needed.
• Network lifetime – We focus on networks with a long expected lifetime where most decisions should be made locally and message transmissions should be kept as low as possible. A higher communication load is expected for the initial phase after the deployment or during redeployments, when new link keys are established. Section 3.2 specifically targets the message overhead of secrecy amplification protocols to improve overall network lifetime.
• Base stations – Base stations and their placement significantly influence the network structure. The usual assumption is a device significantly more powerful than an ordinary node, with faster processing speed, larger memory storage, better connectivity and possible tamper resistance. Yet the network might consist only of ordinary nodes, with a selected node connected to a user device (laptop) serving as a base station to issue queries and retrieve measured data from the network. Base stations are not addressed much in our work as we target link keys between direct neighbours without the involvement of the base station itself. We assume that only a few base stations are present in the network and that the vast majority of ordinary sensor nodes cannot take advantage of a trusted and powerful base station positioned nearby.
• Degree of centralism – We focus on networks where security-related decisions during key establishment should be made without direct involvement of a base station and where the nodes are not assumed to communicate with base station(s) later on a regular basis (e.g., for later verification of the completed key establishment process). A base station may eventually take part in the process (e.g., during redeployment), but it is not assumed to communicate directly with each separate node or to act as a mediator between nodes.
• Node mobility – We target scenarios in which it makes sense to establish link keys. These links do not necessarily remain static during the whole network lifetime. The group protocol described in Section 3.1 assumes the existence of link keys with direct neighbours, therefore high mobility of nodes is not desirable here, or link keys must be re-established when necessary. On the other hand, node mobility provides a better opportunity for selecting network parameters than the case of fixed immobile nodes. A relatively static set of neighbours should remain present at least during the execution of secrecy amplification protocols.
• Communication medium – We target nodes with wireless communication, assuming a non-directional antenna with controllable transmission power, which is necessary for the Key Infection approach (see Section 3.2). Ideal signal propagation is assumed in the simulations we made. The communication medium and antenna properties are not relevant for the part of the work focused on probabilistic pre-distribution. A real system must provide resiliency against errors and collisions when wireless transmission is used for communication. Short-range radio allows the same transmission frequency to be reused by geographically distant groups of nodes.
• Computation power – As already discussed, computational power may differ by two orders of magnitude depending on the hardware actually used. The most powerful nodes like the Imote2 have processing power comparable to a common PDA and therefore fit well even into the category of wireless ad-hoc networks. The performance of cryptographic operations varies significantly with computational power, and hardware accelerators may be present to speed up common cryptographic operations (typically block ciphers, not asymmetric cryptography). We target nodes capable of transparently using symmetric key cryptography, but not asymmetric cryptography. Even when encryption/verification with asymmetric cryptography might be possible on a given hardware platform in principle (e.g., verification of a signature in two seconds on Mica2 motes [8]), the protocols proposed in this work generally require processing of a high number of small messages. Usage of asymmetric cryptography would then significantly increase the total time required to finalize link key establishment. Moreover, the code of the cryptographic libraries necessary to implement promising algorithms for restricted devices, such as ECC or pairings, itself occupies a significant part (currently around 40 kB [8]) of the otherwise limited memory. Still, secrecy amplification protocols (see Section 3.2) can be used atop a basic link key exchange facilitated by asymmetric cryptography if it is used.
• Memory limitations – Our work targets devices with limited memory. Typical persistent memory in available devices ranges from tens of kilobytes (Mica2) up to tens of megabytes (Imote2), and from less than ten kilobytes up to hundreds of kilobytes for RAM. The protocol described in Section 3.1 requires persistent memory on the order of kilobytes; the secrecy amplification protocols from Section 3.2 require storage for values exchanged only between direct neighbours, in total less than a kilobyte for a reasonably dense network.
• Energy source – We assume that node energy is limited and therefore nodes should not communicate too often and most decisions should be made locally. The energy limitation is an additional reason why symmetric cryptography is preferred over asymmetric cryptography in our work, as detection of a corrupted message with integrity protection based on symmetric cryptography is much more efficient.
• Tamper resistant hardware – We do not assume the existence of any secure hardware on the sensor node side, as such protection increases the cost significantly, especially for the large-scale networks we are interested in. Moreover, producing a tamper resistant device of small size is a difficult task in itself, as the long history of the development of cryptographic smart cards and attacks against them shows. In our work, all secrets from captured nodes are assumed to be compromised. The automatic search for selective node capture (see Section 3.2) particularly exploits the possibility of key extraction.
• Pre-distributed secrets – We work with pre-distributed secrets in the form of probabilistic pre-distribution in Section 3.1 and in part of the secrecy amplification protocols. No pre-distributed secrets are assumed when the Key Infection approach is used. Secrecy amplification protocols alone do not require any pre-distributed secrets; only the availability of a suitable (pseudo-)random generator is required.
• Routing algorithm – We assume the existence of a suitable routing algorithm to propagate messages in a closed geographical area, usually inside a group of direct neighbours, and we abstract from the algorithm's specifics in our work. If multiple paths are used to deliver parts of a fresh key during secrecy amplification, a mechanism for packet confirmation should exist. Otherwise the composed key will be corrupted if one or more parts are lost in transmission.
2.2
Node-compromise attacker model
The common attacker model in the network security area is an extension of the classic Needham-Schroeder model [9] called the node-compromise model [10, 11, 12, 13]. The original model assumes that an intruder can interpose a computer on all communication paths, and thus can alter or copy parts of messages, replay messages, or emit false material. The extended model is described by the following additional assumptions:
• The key pre-distribution site is trusted. Before deployment, nodes can be pre-loaded with secrets in a secure environment. Part of our work aims to omit this phase completely, as it is cheaper to produce identical nodes (even at the memory level).
• The attacker is able to capture a fraction of deployed nodes. No physical control over deployed nodes is assumed. The attacker is able to physically gather nodes either randomly or selectively, based on additional information about the nodes' roles and/or carried secrets.
• The attacker is able to extract all keys from a captured node. No tamper resistance of nodes is assumed. This lowers the production cost and enables the production of a high number of nodes, but calls for novel approaches in security protocols.
The attacker model is in some cases (Key Infection [14]) weakened by the following assumption:
• For a short interval the attacker is able to monitor only a fraction of the links. This assumption is valid only for a certain period of time after deployment; afterwards we have to consider a stronger attacker with the ability to eavesdrop on all communication. An attacker with a limited number of eavesdropping devices can eavesdrop on only a fraction of the links, and the rational reasons behind this assumption are based on the specifics of WSNs:
a) Locality of eavesdropping – the low communication range of nodes allows for frequent channel reuse within the network and for detection of extremely strong signals, so it is not possible for an attacker to cover the whole network with only one eavesdropping device with a highly sensitive and strong antenna.
b) Low attacker presence during deployment – a low threat in most scenarios during the first few seconds before the attacker realizes which target area is in use. If attacker nodes are already present in a given amount at the target location, we can deploy a network with such a density and node range that the ratio between legal nodes and the attacker's eavesdropping devices allows a secure network to be formed.
Note that the attacker model for WSNs is stronger than the original Needham-Schroeder one, because
nodes are not assumed to be tamper resistant and the attacker is able to capture them and extract all carried
sensitive information.
2.3
Cryptographic issues in network bootstrapping and basic results
Security protocols for WSNs in our scenarios deal with very large networks of very simple nodes. Such
networks are presumed to be deployed in large batches followed by a self-organizing phase. The latter is
automatically and autonomously executed after a physical deployment of sensor nodes.
The deployment of a sensor network can be split into several phases. The following list is adjusted to discern the important processes of key pre-distribution, distribution and key exchange protocols. Not all steps need to be executed for every key establishment approach. The main phases are as follows (see also Figure 1):
1. Pre-deployment initialization – performed in a trusted environment. Keys can be pre-distributed during this phase.
2. Physical node deployment – random spreading (performed manually, from a plane ...) of sensors over a target area in one throw or in several smaller batches.
3. Neighbour discovery – nodes are trying to find their direct neighbours (nodes that can be
directly reached with radio) and to establish communication channels.
4. Neighbour authentication – authentication of neighbours with pre-shared secrets, via trusted
base station, etc.
5. Key setup – key discovery or key exchange between direct neighbours.
6. Key update – periodic update of initial keys based on events like secrecy amplification, join to
cluster, node revocation or new nodes redeployment.
7. Establishment of point-to-point keys – the final goal is to transmit data securely from sensors to
one of a few base stations. Point-to-point keys are pairwise keys between sensors and base stations
(or distant sensors).
8. Message exchange – the production phase of the network.
Figure 1: Network lifetime with highlighted phases relevant for the key establishment.
Periods of attacker eavesdropping capability are depicted for the Key Infection model.
The basic approaches and issues in the area of link key establishment for WSNs can be summarized as follows:
• Master key scheme – One of the simplest solutions for key establishment is to use one network-wide shared key. This approach has minimal storage requirements; unfortunately the compromise of even a single node in the network enables the decryption of all traffic. The master key scheme has no resilience against node capture. As recognised in [11], this approach is suitable only for a static network with tamper resistant nodes. Additional works extended the master key scheme with an assumption of secure erasure after a certain time period [15] and limited the impact of master key compromise [16].
• Full pairwise key scheme – In contrast to the master key scheme, a unique pairwise key exists between any two nodes. As shown in [12], this scheme has perfect resilience against node capture (no keys other than those from the captured nodes are compromised). However, this approach is not scalable, as each node needs to maintain n-1 secrets, where n is the total number of nodes in the network. Note that perfect resilience against node capture does not mean that an attacker cannot obtain a significant advantage by combining keys from the captured nodes (e.g., the collusion attack [17]).
• Asymmetric cryptography – The usage of PKI for WSNs is often considered unacceptably expensive in terms of special hardware, energy consumption and processing power [18, 10, 14]. The usage of asymmetric cryptography can lead to energy exhaustion attacks by forcing a node to frequently perform expensive signature verification or creation [19]. An energy-efficient architecture based on elliptic curves is proposed in [20, 8], with signature verification on the order of a few seconds. The maintenance and verification of a fresh revocation list and attacks like the collusion attack [17] remain a concern.
• Base station as a trusted third party – Centralised approaches like the SPINS architecture [18] use the BS as a trusted mediator when establishing a pairwise key between two nodes. Each node initially shares a unique key with the base station. This approach is scalable and memory efficient, but it has a high communication overhead, as each key agreement needs to contact the BS, causing non-uniform energy consumption inside the network [21].
• Probabilistic pre-distribution – Most common key pre-distribution schemes expect that any two nodes can always establish a link key if they appear as physical neighbours within their mutual transmission range. This property can be weakened in such a way that two physical neighbours can establish the link key only with a certain probability, which is still sufficient to keep the whole network connected by secured links. A trade-off between the graph connectivity of link keys and the memory required to store keys in a node is introduced. If the network detects that it is disconnected, a range extension through higher radio power can be performed to increase the number of physical neighbours (at the price of higher energy consumption). Various variants of random pre-distribution were proposed to ensure that neighbours will share a common key only with a certain probability, but still high enough to keep the whole network connected [10, 22, 11]. During the pre-deployment initialisation, a large key pool of random keys is generated. For every node, randomly chosen keys from this pool are assigned to its (limited) key ring, yet these assigned keys are not removed from the initial pool. Hence the next node can also get some of the previously assigned keys. Due to the birthday paradox, the probability of sharing at least one common key between two neighbours is surprisingly high even for a small key ring (e.g., 100 keys); see the sketch after this list. That makes this scheme suitable for memory-constrained sensor nodes. After deployment, nodes search in their key rings for shared key(s) and use it/them as a link key, if such key(s) exist. Variants based on threshold secret sharing provide better resilience against node capture [12, 23].
• Deployment knowledge pre-distribution – The efficiency of probabilistic pre-distribution can be improved if certain knowledge about the final node positions or likely neighbours is available in advance. Key ring selection processes based on the physical distribution of nodes are proposed in [13, 24, 25]. Nodes that have a higher probability of being neighbours have keys assigned in such a way that they have a higher probability of sharing a common key.
• Key Infection approach – An unconventional approach that requires no pre-distribution is proposed in [14]. A weakened attacker with a limited ability to eavesdrop is assumed for a short period after deployment. The initial key exchange between neighbours is performed in plaintext and the number of compromised keys is then further decreased by secrecy amplification techniques [14, 26, 27].
• Impact of Sybil and collusion attacks – Powerful attacks against known key pre-distribution protocols are presented in [28, 17]. In the Sybil attack, the attacker is able either to insert many new bogus nodes or to create one node that exhibits multiple identities equipped with secrets extracted from captured nodes. In the collusion attack, compromised nodes share their secrets to greatly increase the probability of establishing a link key with an uncompromised node. This attack shows that the global usage of secrets in a distributed environment with no or little physical control poses a serious threat, which is hard to protect against.
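For the probabilistic pre-distribution scheme above, the probability that two neighbours share at least one key can be computed directly. The sketch below uses the standard formula p = 1 - C(P-m, m)/C(P, m) for a pool of P keys and key rings of m keys; the concrete parameter values are illustrative only.

```python
# Sketch: probability that two nodes share at least one key under random
# key pre-distribution (pool of size P, key ring of m keys per node).
from math import comb

def share_probability(pool_size, ring_size):
    # p = 1 - C(P - m, m) / C(P, m)
    return 1 - comb(pool_size - ring_size, ring_size) / comb(pool_size, ring_size)

# Even a small key ring gives a usable connection probability:
print(round(share_probability(10000, 100), 3))   # roughly 0.63
print(round(share_probability(10000, 200), 3))   # roughly 0.98
```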
3 Example results
This section provides an introduction to example results in the area of security protocols for WSNs. The link key establishment method based on probabilistic pre-distribution, probabilistic authentication, secrecy amplification protocols and the automatic search for attack scenarios will be described.
3.1
Group cooperation and probabilistic authentication
We aim to achieve a secure authenticated key exchange between two nodes, where the first node is integrated into the existing network, in particular knows the IDs of its neighbours and has established secure link keys with them. The main idea comes from the behaviour of social groups. When Alice from such a group wants to communicate with a newcomer Bob, she asks other members whether anybody knows him. The more members know him, the higher the confidence in the profile of the newcomer. A reputation system also functions within the group – a member that gives references too often can become suspicious, and those who give good references for malicious persons become less trusted in the future (if the maliciousness is detected, of course).
We adapt this concept to the environment of a wireless sensor network. The social group is represented by neighbours within a given radio transmission range. The knowledge of an unknown person is represented by pre-shared keys. There should be a very low probability that an attacker obtains a key value exchanged between two non-compromised nodes and thus compromises further communication sent over this link. Key exchange authentication is implicit in the sense that an attacker should not be able (with a high probability) to invoke a key exchange between a malicious node and a non-compromised node. Only authorised nodes should be able to do so. The whole concept builds on the probabilistic pre-distribution schemes introduced and extended in [10, 22, 11, 12]. The group of neighbours creates a large virtual key ring and thus boosts resiliency compared to the scenario where each node has only its own (limited) key ring available.
In short, node A asks its neighbours in the group around it to provide "onion" keys generated from pre-distributed keys that can also be generated by the newcomer B. The onion keys are generated from random nonces Rij, RB and keys pre-shared between A's neighbours and B. All of these onion keys are used together to secure the transport of the new key KAB between nodes A and B. The legitimate node B will be able to generate all the onion keys and then to recover KAB. Details of the protocol can be found in [29].
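A heavily simplified sketch of the layered ("onion") protection idea follows. This is not the exact protocol of [29]: the derivation of the onion keys and the XOR-based wrapping below are our own illustrative assumptions, chosen only to show why a party that can derive every onion key (the legitimate B) recovers KAB while an attacker must know all of the involved pre-shared keys.

```python
# Hedged sketch of the onion-key idea (illustrative only; see [29] for the
# actual protocol). Onion keys are derived from nonces and keys pre-shared
# between A's neighbours and the newcomer B; KAB is recoverable only by a
# party that can derive all of them.
import hashlib, os, functools

def h(*parts):
    return hashlib.sha256(b"|".join(parts)).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# keys pre-shared between three of A's neighbours and B (known to both sides)
shared_with_B = {"N1": os.urandom(32), "N2": os.urandom(32), "N3": os.urandom(32)}
nonces = {nid: os.urandom(16) for nid in shared_with_B}      # R_ij values
R_B = os.urandom(16)

# each neighbour i contributes an onion key O_i = H(R_ij, R_B, K_iB)
onions = [h(nonces[nid], R_B, shared_with_B[nid]) for nid in shared_with_B]

K_AB = os.urandom(32)                                        # fresh link key
wrapped = functools.reduce(xor, onions, K_AB)                # transported A -> B

# B recomputes the onion keys from the nonces and its pre-shared keys
recovered = functools.reduce(xor, [h(nonces[nid], R_B, shared_with_B[nid])
                                   for nid in shared_with_B], wrapped)
assert recovered == K_AB
```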
Approximately up to 10,000 captured nodes can be tolerated in dense networks with 40 neighbours and a key ring memory able to store up to 200 keys, when multi-space polynomial pre-distribution is used as the underlying scheme. Analytical and simulation results for this work can be found in [29]. A combination of the protocol with the multiple key space pre-distribution of [12] can be found in [30].
3.1.1
Probabilistic authentication
The described group-supported protocol can also be used as a building block for probabilistic entity authentication. Probabilistic authentication means that a valid node can convince others with a high probability that it knows all keys related to its ID. A malicious node with a forged ID will fail (with a high probability) to provide such a proof.
We propose to use the term probabilistic authentication for authentication with the following properties:
• Each entity is uniquely identifiable by its ID.
• Each entity is linked to keys whose identification is publicly enumerable from its entity ID (only the key identification can be obtained from the entity ID, not the key value itself).
• An honest entity is able to convince another entity that she knows the keys related to her ID with some (high) probability. For practical purposes, this probability should be as high as possible to enable authentication between as many other nodes as needed.
• Colluding malicious entities can convince other entities that they know the keys related to the ID of a non-captured node only with a low probability. For practical purposes, this probability should be as low as possible to limit masquerading.
This approach to authentication enables a tradeoff between security and computation/storage requirements
for each entity. As the potential number of neighbours in WSNs is high, the memory of each node is
severely limited and an attacker can capture nodes, this modification allows us to obtain reasonable security
in the WSN context.
However, special attention must be paid to the actual meaning of the authentication in the case of a group-supported protocol. Firstly, as we assume that a part of the group can be compromised, verification of B's claims can be based on the testimony of compromised neighbours. Secondly, node A has a special role in the protocol execution and can be compromised as well. The neighbours should not rely on an announcement from node A that node B was correctly authenticated to it. The special role of A can be eliminated if the protocol is re-run against all members of the group with a different central node each time. Yet this would introduce an additional communication overhead compared to the approach where a single member of the group announces the result of the authentication to the others.
3.2
Secrecy amplification protocols
The uncertainty about the identity of direct neighbours prior to the actual deployment naturally results in such a property of key distribution schemes that most of the nodes in the network should be able to establish a shared key (either directly or with support from other nodes). This alone is a serious challenge for memory- and battery-limited nodes. At the same time, this property implies one of the main problems for maintaining a secure network in the presence of an adversary with the ability to compromise link keys (e.g., via node capture or eavesdropping). If the extracted secrets can be commonly used in any part of the network, various Sybil and replication attacks can be mounted (e.g., to join an existing cluster with a captured node clone). Moreover, even when the compromised node is detected and its keys are revoked, the revocation list must be maintained for the whole network (e.g., if the revocation list is maintained in a completely decentralized way then ultimately every node must store a copy of the list). A common way and good practice to introduce localized secrets is not to use pre-distributed keys for ordinary communication, but only to secure the exchange of fresh keys, which are locally generated only by the nodes involved in the exchange. If the usage of the pre-distributed keys is allowed only for a certain time (or treated with more scrutiny later), such a practice can limit the impact of node capture, as the localized keys have no validity outside the area of their generation. An attacker is forced to mount his attack only during a limited time interval and it is thus reasonable to expect that the number of compromised nodes will be lower. When such localized secrets are established with the support of other (potentially compromised) nodes, the secrecy of the resulting key can be violated. To tackle this threat, secrecy amplification protocols were proposed.
Secrecy amplification protocols (also known as privacy amplification protocols) were introduced in [14] for a weaker attacker model, together with plaintext key exchange as a lightweight key establishment method (so-called Key Infection) for WSNs. This approach does not require any pre-distributed keys. Nodes are simply distributed over the target deployment plane and perform discovery of neighbours by radio. Every node then generates a separate random key for each of its neighbours and sends it unencrypted (as no keys are pre-shared) over a radio link. This key is then used as a basic link key to encrypt subsequent communication. Moreover, some links can be secured even when the original link was compromised. This scheme has minimal memory requirements, requires no pre-distribution operations in a trusted environment and every node can essentially establish a key with any other node. Perfect node capture resilience is achieved as any two nodes share different keys. If no attacker with eavesdropping capability is present during such a plaintext key exchange then all keys will be secure. On the other hand, if an attacker is able to eavesdrop on all plaintext exchanges then the scheme will be completely insecure as all keys will be compromised. A modified attacker model based on the assumption that the attacker has limited material/financial resources is used instead. The basic assumption is that not all links are eavesdropped.
What is commonly unknown to the network nodes is the identity of the links that are actually compromised. Still, we can execute the amplification protocol as a second layer of defence, even when the link between A and B is secure against the attacker (but we do not know that). If we create a new link key as KAB' = H(KAB, K), where KAB is the original link key, K is a fresh key exchanged during the amplification protocol and H is a cryptographically strong one-way function, we will obtain a secure link if either the original link is already secure or K can be securely transported to both A and B over some existing path. Eventually, more iterations of the amplification protocol can be performed. The security of link keys can be further improved as links newly secured in the previous iteration can help to secure a new link in the next iteration.
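A minimal sketch of this key-update step follows, assuming the fresh value K has already been delivered to both A and B over some other path (for example via a common neighbour); path security and message formats are not modelled.

```python
# Minimal sketch of the secrecy amplification update KAB' = H(KAB, K).
# The fresh value K is assumed to have reached both A and B over another
# path; the sketch only shows that both ends derive the same new link key.
import hashlib, os

def amplify(old_link_key, fresh_value):
    # The new link key is secure if either the old key or the fresh value
    # was transported securely.
    return hashlib.sha256(old_link_key + fresh_value).digest()

K_AB = os.urandom(16)      # possibly compromised original link key
K = os.urandom(16)         # fresh key generated by A, relayed to B

new_at_A = amplify(K_AB, K)
new_at_B = amplify(K_AB, K)   # B computes the same value from the same inputs
assert new_at_A == new_at_B
```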
Such a process poses a significant communication overhead as the number of such paths is significant, but it may also significantly improve overall network security, as demonstrated in Figure 2.
Figure 2: An increase in the number of secured links after execution of different secrecy amplification
protocols in the Random compromise pattern. Human designed secrecy amplification protocols PUSH
and PULL are compared with automatically found protocols.
Secrecy amplification protocols can be categorized based on:
• Number of distinct paths used to send parts of the final key – if more than one path is used then the protocol performs so-called multi-path amplification. An attacker must eavesdrop on all paths to compromise the new key value. If two nodes A and B exchange a new key directly in one piece, then only one path is used. Note that multiple virtual paths can be constructed over one physical path [31].
• Number of involved intermediate nodes per single path – basic key exchange between A and B requires no intermediate node. If at least one intermediate node is used then the protocol performs so-called multi-hop amplification. The path is compromised if an attacker is able to eavesdrop on at least one link on the path.
3.2.1
Automated design of secrecy amplification protocols
Different key distribution approaches result in different compromise patterns when an attacker captures some nodes and extracts their secrets or eavesdrops on communication. The performance (number of secured links) of a particular amplification protocol may vary between such patterns. Human design of an efficient protocol without unnecessary steps for a particular pattern is time consuming. We proposed a framework for the automatic generation of personalized amplification protocols containing only effective steps. Evolutionary algorithms can be used to generate candidate protocols and our network simulator then provides a metric of success in terms of secured links.
Evolutionary algorithms (EAs) are stochastic search algorithms inspired by Darwin's theory of evolution. Instead of working with one solution at a time (as random search, hill climbing and other search techniques do), these algorithms operate with a population of candidate solutions (candidate protocols in our case). Every new population is formed by genetically inspired operators such as crossover (part of a protocol's instructions is taken from one parent, the rest from another) and mutation (change of an instruction type or one of its parameters) and through selection pressure, which guides the evolution towards better areas of the search space. The evolutionary algorithm receives this guidance by evaluating every candidate solution to define its fitness value. The fitness value (the success of the candidate strategy, e.g., the fraction of compromised links), calculated by the fitness function (a simulated or real environment), indicates how well the solution fulfils the problem objective (e.g., the fraction of secure links). In addition to classical optimization, EAs have been utilized to create engineering designs in the recent decade. For example, computer programs, electronic circuits, antennas or optical systems are designed by genetic programming [44]. In contrast to conventional design, the evolutionary method is based on the generate&test approach that modifies properties of the target design in order to obtain the required behavior. The most promising outcome of this approach is that artificial evolution can produce intrinsic designs that lie outside the scope of conventional methods.
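A generate-and-test skeleton of such an evolutionary search might look as follows. The toy instruction set and the fitness function are placeholders of our own; the real framework evaluates candidate protocols in a network simulator and uses a richer instruction set.

```python
# Skeleton of the evolutionary search described above (illustrative only).
import random

INSTRUCTIONS = ["NOP", "SEND_KEY_PART", "COMBINE", "GENERATE_NONCE"]  # toy set

def random_protocol(length=8):
    return [random.choice(INSTRUCTIONS) for _ in range(length)]

def crossover(p1, p2):
    # part of the instructions comes from one parent, the rest from the other
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(protocol, rate=0.1):
    # change instruction types with a small probability
    return [random.choice(INSTRUCTIONS) if random.random() < rate else i
            for i in protocol]

def fitness(protocol):
    # Placeholder: the real metric is the fraction of secured links reported
    # by the network simulator after executing the candidate protocol.
    return protocol.count("SEND_KEY_PART") + 0.5 * protocol.count("COMBINE")

population = [random_protocol() for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # selection pressure
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(20)]
best = max(population, key=fitness)
```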
The approach was verified on two compromise patterns that arise from the Key Infection approach and from probabilistic key pre-distribution. For these patterns, all published protocols we were aware of at the beginning of our work were rediscovered and a new protocol that outperforms them was found. More than 90% of secure links can be obtained after a single run of a secrecy amplification protocol, even in a network with half of the links compromised. We also demonstrated that secrecy amplification protocols are not limited to the relatively specific plaintext Key Infection distribution model [14]. According to our simulations, secrecy amplification actually works even better with the much more researched probabilistic pre-distribution schemes [10, 12, 23], providing a higher increase in the number of secured links (see Figure 2).
The practical disadvantage of the established design of secrecy amplification protocols (so-called node-oriented) is a significant communication overhead, especially for dense networks. In [32], we propose a group-oriented design, where possibly all direct neighbours can be included in a single protocol run. Only a very small fraction of the messages is necessary to obtain a comparable number of secured links with respect to the node-oriented design. Moreover, the number of necessary messages grows only linearly instead of exponentially with increasing network density. This makes our approach practically usable for networks where energy-expensive transmissions should be avoided as far as possible. The automatic generation framework was used to generate a well performing protocol in the message-restricted scenario. More details on the automatic generation of secrecy amplification protocols can be found in articles [32] and [33].
3.3
Search for attacker strategies
We explore the possibility of automatic generation of attacks in this section. The idea is based on the fundamental asymmetry between the attacker and the defender: the attacker needs to find only one attack path, whereas the defender must secure all of them. A brute-force search over the space of possible attack paths is then a more suitable approach for the defender; an attacker can make an informed search for a possible attack without inspecting all possibilities. The general concept uses a generator of candidate attacks built from elementary actions and an execution environment to evaluate the impact of each candidate attack. This concept cannot be used for every type of attack – the existence of some metric to evaluate attack success in a reasonable time is necessary. In our case, we used evolutionary algorithms as the candidate attack generator and the network simulator to evaluate them. We focused on two applications relevant in the context of this work to provide a proof of applicability of the proposed concept.
It was an earlier work on the evolutionary design of secrecy amplification protocols with a suspiciously high fraction of secured links (typically 100%) that led us to a deeper inspection of the protocols with such high performance. Here we discovered either a mistake in our program or an incomplete specification of the evaluation function that was exploited by the evolution. Repetition of this behaviour then led us further to the idea of using evolutionary algorithms not only to search for defences (like the secrecy amplification protocol), but also as a tool for discovering new attacks (mistakes in code or incomplete specifications).
3.3.1
Automatic search for attacker strategies
We have developed a general concept for the automatic design of attacks. It consists of the following sequence of actions:
1. Execution of the Xth round of the candidate attack strategy generator → attack strategy in a metalanguage.
2. Translation from the metalanguage into a domain language.
3. Strategy execution (either by a simulation or in a real system).
4. Evaluation of the fitness function (obtaining the attack success value).
5. Proceed to the (X+1)th round.
Details are as follows: prior to the actual generation, we have to inspect the system and define the basic methods by which an attacker may influence it (create/modify/discard messages, capture nodes, predict bits, etc.) and what constitutes a successful attack. Subsequently, we decompose each basic method into a set of elementary rules and identify its parameters (e.g., modification of the xth byte in a message, delaying a message by some time x, capturing a particular node, ...). These elementary rules serve as the basic building blocks of new attack strategies.
Having these blocks, we can start generating the strategies:
1. The candidate strategies are generated from elementary rules using mechanisms such as:
• Educated guess – a field expert selects combinations of elementary rules that might work.
• Exhaustive search – all possible combinations of elementary rules are subsequently examined and evaluated. This becomes very inefficient for large search spaces.
• Random search – a combination of elementary rules is selected at random. No information about the quality of the previous selection is used during the following one.
• Guided search – the actual combination of rules is improved according to some rating function able to compare the quality of the previous and the newly generated candidate strategy. We will use Evolutionary Algorithms for this task.
2. The candidate strategy is translated from the metalanguage used for generation into the domain language, which can be interpreted by the simulator or the real system.
3. The candidate strategy is executed inside the simulated or real environment.
4. The impact of the attack is measured by a fitness function.
5. The whole process is repeated until a sufficiently good strategy is found or the search is stopped (a toy sketch of this loop is given below).
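The following toy sketch illustrates the five steps above with a simple evolutionary (guided) search; the elementary rules, the translation step and the fitness function are placeholders chosen for this example and do not reproduce the actual framework of [34].

```python
# Toy sketch of the generate -> translate -> execute -> evaluate -> repeat loop.
# Rules, translation and fitness are illustrative placeholders only.
import random

ELEMENTARY_RULES = ["drop_msg", "modify_byte", "delay_msg", "capture_node", "forge_beacon"]

def random_strategy(length, rnd):
    # Step 1: candidate strategy in a "metalanguage" -- a list of rule names.
    return [rnd.choice(ELEMENTARY_RULES) for _ in range(length)]

def translate(strategy):
    # Step 2: translate the metalanguage into the domain language understood
    # by the simulator (here: the identity mapping).
    return list(strategy)

def execute_and_evaluate(domain_strategy, rnd):
    # Steps 3 + 4: execute in a simulator and measure attack success.
    # Placeholder fitness: reward strategies mixing dropping and forging.
    score = domain_strategy.count("drop_msg") + 2 * domain_strategy.count("forge_beacon")
    return score + rnd.random()          # simulated evaluation noise

def evolve(generations=50, pop_size=20, length=8, seed=7):
    rnd = random.Random(seed)
    population = [random_strategy(length, rnd) for _ in range(pop_size)]
    for _ in range(generations):                       # Step 5: repeat
        scored = sorted(population,
                        key=lambda s: execute_and_evaluate(translate(s), rnd),
                        reverse=True)
        parents = scored[:pop_size // 2]
        children = []
        for p in parents:                              # mutate the survivors
            child = list(p)
            child[rnd.randrange(length)] = rnd.choice(ELEMENTARY_RULES)
            children.append(child)
        population = parents + children
    return max(population, key=lambda s: execute_and_evaluate(translate(s), rnd))

if __name__ == "__main__":
    print("best candidate attack strategy:", evolve())
```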
The usability of the proposed concept was verified on several attack vectors for WSNs, but it is not limited to this area. Details and results of the application of the described framework to the area of WSNs can be found in [34].
Firstly, a well-performing pattern for the deployment of eavesdropping nodes was developed as an attack against the Key Infection plaintext key distribution [14], achieving roughly twice as many compromised links compared to random deployment. Secondly, several variations of attacks based on selective node capture were examined. Approximately a 50-70% increase in the number of compromised links was obtained with respect to random node capture (for the whole network), or a 25-30% decrease in the number of nodes to capture, lowering the cost of the attack. These two groups of attacks are examples of automatic optimization of known attacks.
The third examined group of attacks demonstrates the ability of our concept to search for novel attack strategies. Two insecure routing algorithms (minimal cost forwarding, implicit geographic routing) were the targets of our attacks. The Evolutionary Algorithms were able to find all known basic attacks. Furthermore, they confirmed their usability for the optimization of known attacks by finding several patterns for dropping messages.
4 Conclusions and research opportunities
The area of WSNs is currently a very active research field with a wide range of research opportunities. Commonly used solutions from other types of networks sometimes cannot be directly applied due to the limited resources of deployed nodes (especially limited energy), uncertainties about the network structure before deployment (unknown neighbours, large network size and network connectivity properties) and different assumptions about attacker capabilities (the node-capture attacker model). Here, we will describe some possible research directions.
The problem we encountered with the group-supported probabilistic protocol is the dependence of the key establishment schemes on some node replication detection scheme. Such a scheme must work efficiently in highly decentralized networks, yet with low communication overhead. The approaches described in [28] or the probabilistic detection described in [35] provide a partial solution, but with a communication overhead that quickly increases with the network size. A combination of replication detection with a limitation of the area where duplicated nodes are of some use from the attacker's point of view may provide a solution. Such a limitation can be introduced based on a geographic restriction of the valid node occurrence area (a node will be rejected if it tries to connect in a part of the network other than its assigned geographic area) or on the position in the network hierarchy (a node is rejected outside its assigned virtual cluster). A key distribution technique with only locally usable keys, like the Key Infection approach [14], can be used as well, but the possibility of redeployments and interconnection with existing nodes is limited. We encourage studying efficient solutions based on probabilistic rather than deterministic detection.
Node authentication in the context of symmetric cryptography is usually perceived as viable only if the key used for authentication is known to no other nodes besides the authenticating and verifying nodes. Such an assumption is common even in works that explicitly deal with partially compromised networks and probabilistic protocols [14, 36]. The notion of probabilistic authentication in the context of wireless sensor networks was used in [45] for message authentication and in [28, 29] for node authentication. We expect that the difference between a compromised and an incorrectly authenticated node is negligible for many scenarios. And if the network is able to function properly when partially compromised, it should also be able to function when a limited number of malicious nodes are incorrectly authenticated. Is it really necessary to authenticate separate nodes, especially when prevention of the Sybil attack is such a hard task? We propose to study schemes where the probability of successful authentication decreases with the increasing number of cloned nodes that use (part of) particular authentication credentials. One of the possible directions might be the usage of anonymous money [37], where multiple spending of the same “coin” leads to a loss of anonymity. Multiple exposures of the same authentication credentials (by Sybil or cloned nodes) would then reveal secret information usable to blacklist the cloned node, as sketched below. The scheme should be adapted to the decentralized nature of sensor networks and should be based on probabilistic rather than deterministic detection, with relaxed memory storage requirements for detecting multiple credential exposures at the same time.
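As an illustration only, the following sketch shows one way in which multiple exposures of a credential could reveal a blacklisting secret, loosely analogous to double-spending detection in offline e-cash [37]; the linear challenge-response construction and all parameters are assumptions made for this example, not a scheme proposed here.

```python
# Illustrative sketch: a credential whose second exposure under a distinct
# challenge reveals a blacklisting secret (double-exposure detection).
# The construction and parameters are assumptions for the example only.
P = 2**127 - 1                      # a public prime modulus

def make_credential(blacklist_secret, rnd_key):
    # The node holds (s, k); s is the value that, once recovered, lets the
    # network blacklist every clone using this credential.
    return (blacklist_secret % P, rnd_key % P)

def respond(credential, challenge):
    s, k = credential
    # A single response reveals nothing about s (k acts as a one-time pad).
    return (s + challenge * k) % P

def recover_secret(c1, r1, c2, r2):
    # Two exposures under distinct challenges reveal s.
    k = ((r1 - r2) * pow((c1 - c2) % P, -1, P)) % P
    return (r1 - c1 * k) % P

cred = make_credential(blacklist_secret=123456789, rnd_key=987654321)
c1, c2 = 11, 42                      # verifier-chosen distinct challenges
r1, r2 = respond(cred, c1), respond(cred, c2)
assert recover_secret(c1, r1, c2, r2) == 123456789
print("blacklisting secret recovered after two exposures")
```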
We expect to see a wider application of secrecy amplification protocols as the next layer of defence. Secrecy amplification protocols proved to increase overall network security substantially, at least for both inspected compromise patterns. A highly insecure network with half of its links compromised can be turned into a reasonably secure network with more than 90% of links secure – node capture resilience for probabilistic schemes is then significantly improved. Future analyses of new key distribution schemes should discuss not only the direct impact on network security when a certain number of nodes is captured, but also how many links remain compromised after the application of a secrecy amplification protocol. For example, more advanced threshold-based probabilistic schemes like [12] gain less from secrecy amplification than simpler schemes like [10], because networks with a threshold scheme have very good resilience against node capture on their own until a critical number of nodes is captured. Beyond this threshold, the network quickly becomes completely insecure with almost all links compromised, a situation in which secrecy amplification protocols cannot operate successfully.
We see the potential of highly decentralized operations coordinated within a group of neighbours as a basic principle for governing distributed networks that lack centralized control, especially when an economics-based approach to network security is applied and part of the network is assumed to be compromised. The way separate nodes in the network should cooperate (protocols) to accomplish certain tasks can be very simple or very complicated, depending on various aspects like the degree of decentralization, the available information about the operating environment and especially the problems being solved. At one extreme, we can see very simple rules of behaviour, with interaction only between direct neighbours as in cellular automata [38], still resulting in very complex global behaviour. If properly designed, such systems may provide significant resilience against intentional or random failures and contribute to the overall robustness of the network. However, the communication overhead to accomplish a certain task is usually substantial compared to centralized or hierarchical systems. This becomes an issue when applied to a scenario with energy-limited nodes. Large-scale wireless sensor networks based on devices like smart dust [2] are a step in this direction – nodes are energy-limited but should work at least partially in a decentralized fashion. Such nodes are still significantly more powerful than the simple cells of cellular automata. Yet an efficient combination of possible node actions can be hard for a human designer. The automatic design and optimization of rules may provide a flexible way of obtaining near-optimal parameters of network behaviour for a particular task, like the combination of evolutionary algorithms with cellular automata [39].
A wide range of future research directions is possible for an attacker strategy generator based on evolutionary algorithms. We developed a working framework and tested a few initial scenarios that confirmed its usability. In general, applying the approach to optimization problems usually provides usable results, but the real appeal is in the automated search for novel attacks rather than the improvement of existing ones. We preliminarily probed this option for attacks against geographic routing protocols. Two types of results were obtained. Several already known attack principles were rediscovered, including base station impersonation, beacon forging, selective message forwarding and dropping, selective collisions on the MAC layer and overloading of neighbours' message buffers. Attacks based on economic tradeoffs, where only a very small fraction of malicious nodes was able to affect the majority of communication, were especially successful.
A combination of the attack generator (either random or based on an informed search like evolutionary algorithms) with a real system instead of the simulator is possible. The only requirement on the real system is the existence of an accurate and relatively stable fitness function with a reasonable speed of evaluation. We see potential in areas such as attacks against routing in real network architectures, bypassing intrusion detection systems (already researched [40, 41, 42]) or the artificial manipulation of a person's characteristics (e.g., reputation status) in massive social networks. The fundamental advantage of working with a real system is that no abstraction of the system in a simulator is necessary, and attacks can be worked out at both the design and the implementation level.
Another appealing idea is the continuous evolution of an attacker strategy as a response to the variability of the environment and the applied defences in real systems. Such an approach is commonly used for real-time evolution in embedded devices, e.g., software filters for the recognition of speed limit signs continuously adapted to actual weather conditions [43]. In fact, this is the behaviour seen, at a coarse granularity, in the never-ending confrontation between attackers and defenders, such as virus creators and antivirus companies. Instead of having a fixed scenario and a well-performing attacker strategy for it, an attacker can run a continuous search for new strategies and use the one that performs best at a given time. Such a fine-grained approach can keep pace with even subtle changes in network topologies, fluctuations in the network load or improvements of defence strategies.
References
[1] Crossbow Technology, Inc. http://www.xbow.com/ [Last Access: 2009-03-28].
[2] Smart dust project website. http://robotics.eecs.berkeley.edu/~pister/SmartDust/ [Last Access: 2009-03-27].
[3] Eiko Yoneki and Jean Bacon. A survey of wireless sensor network technologies: research trends and middleware's role. Technical Report, UCAM 646, Cambridge, 2005.
[4] Charles E. Perkins and Elizabeth M. Royer. Ad hoc on-demand distance vector routing. In Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications, pages 90-100, 1999.
[5] David B. Johnson, David A. Maltz, and Josh Broch. DSR: The dynamic source routing protocol for multi-hop wireless ad hoc networks. In Charles E. Perkins, editor, Ad Hoc Networking, pages 139-172. Addison-Wesley, 2001.
[6] Crossbow Technology, Inc., Imote2, http://www.xbow.com/Products/productdetails.aspx?sid=253 [Last Access: 2009-03-28].
[7] Crossbow Technology, Inc., Mica2, http://www.xbow.com/Products/productdetails.aspx?sid=174 [Last Access: 2009-03-28].
[8] Piotr Szczechowiak, Leonardo B. Oliveira, Michael Scott, Martin Collier, and Ricardo Dahab. NanoECC: Testing the limits of elliptic curve cryptography in sensor networks. In LNCS 4913, pages 305-320, 2008.
[9] Roger M. Needham and Michael D. Schroeder. Using encryption for authentication in large networks of computers. Communications of the ACM, vol. 21, issue 12, pages 993-999, 1978.
[ 10 ] Laurent Eschenauer and Virgil D. Gligor. A key-management scheme for distributed sensor
networks. In Proceedings of the 9th ACM Conference on Computer and Communications Security
(CCS'02), Washington, DC, USA, pages 41-47, 2002.
[ 11 ] Haowen Chan, Adrian Perrig, and Dawn Song. Random key predistribution schemes for sensor
networks. In Proceedings of the 2003 IEEE Symposium on Security and Privacy (SP'03),
Washington, DC, USA, pages 197-214. IEEE Computer Society, 2003.
[ 12 ] Wenliang Du, Jing Deng, Yunghsiang S. Han, and Pramod K. Varshney. A pairwise key pre-distribution scheme for wireless sensor networks. In Proceedings of the 10th ACM Conference on
Computer and Communications Security (CCS'03), Washington, DC, USA, pages 42-51, 2003.
[ 13 ] Wenliang Du, Jing Deng, Yunghsiang S. Han, and Pramod K. Varshney. A key management
scheme for wireless sensor networks using deployment knowledge. IEEE INFOCOM 2004, Hong
Kong, 2004.
[ 14 ] Ross Anderson, Haowen Chan, and Adrian Perrig. Key infection: Smart trust for smart dust. In
Proceedings of the Network Protocols (ICNP'04), 12th IEEE International Conference,
Washington, DC, USA, 2004.
[ 15 ] Sencun Zhu, Sanjeev Setia, and Sushil Jajodia. Leap: efficient security mechanisms for large-scale
distributed sensor networks. In CCS '03: Proceedings of the 10th ACM conference on Computer
and communications security, pages 62-72, New York, NY, USA, 2003. ACM. ISBN 1-58113-738-9.
[ 16 ] Jing Deng, Carl Hartung, Richard Han, and Shivakant Mishra. A practical study of transitory
master key establishment for wireless sensor networks. In SECURECOMM '05: Proceedings of the
First International Conference on Security and Privacy for Emerging Areas in Communications
Networks, pages 289-302, Washington, DC, USA, 2005. IEEE Computer Society. ISBN 0-7695-2369-2.
[ 17 ] Tyler Moore. A collusion attack on pairwise key predistribution schemes for distributed sensor
networks. In Proceedings of the Fourth Annual IEEE International Conference on Pervasive
Computing and Communications Workshops (PERCOMW'06), Washington, DC, USA, 2005.
[ 18 ] Adrian Perrig, Robert Szewczyk, J.D. Tygar, Victor Wen, and David E. Culler. SPINS: Security protocols for sensor networks. Wireless Networks 8/2002, Kluwer Academic Publishers, pages 521-534, 2002.
[ 19 ] Tuomas Aura, Pekka Nikander, and Jussipekka Leiwo. Denial of service in sensor networks. IEEE
Computer, Issue 10, pages 54-62, 2002.
[ 20 ] Ronald Watro, Derrick Kong, Sue-fen Cuti, Charles Gardiner, Charles Lynn, and Peter Kruus.
TinyPK: Securing sensor networks with public key technology. In Proceedings of the 2nd ACM
workshop on Security of ad hoc and sensor networks (SASN'04), Washington, DC, USA, pages 59-64, 2004.
[ 21 ] Stephan Olariu and Ivan Stojmenović. Design guidelines for maximizing lifetime and avoiding
energy holes in sensor networks with uniform distribution and uniform reporting. In Proceedings of
the 25th IEEE Conference on Computer Communications (INFOCOM'06), 2006.
[ 22 ] Haowen Chan, Adrian Perrig, and Dawn Song. Key distribution techniques for sensor networks.
Kluwer Academic Publishers, Norwell, MA, USA, 2004. ISBN 1-4020-7883-8.
[ 23 ] Donggang Liu and Peng Ning. Establishing pairwise keys in distributed sensor networks. In CCS
'03: Proceedings of the 10th ACM conference on Computer and communications security, pages
52-61, New York, NY, USA, 2003. ACM Press. ISBN 1-58113-738-9.
[ 24 ] Shaobin Cai, Xiaozong Yang, and Jing Zhao. Mission-guided key management for ad hoc sensor
network. PWC 2004, LNCS 3260, pages 230-237, 2004.
[ 25 ] Donggang Liu and Peng Ning. Location-based pairwise key establishments for static sensor
networks. 1st ACM Workshop Security of Ad Hoc and Sensor Networks Fairfax, Virginia, pages
72-82, 2003.
[ 26 ] Yong Ho Kim, Mu Hyun Kim, Dong Hoon Lee, and Changwook Kim. A key management scheme
for commodity sensor networks. ADHOC-NOW 2005, LNCS 3738, pages 113-126, 2005.
[ 27 ] Dan Cvrček and Petr Švenda. Smart dust security - key infection revisited. Security and Trust
Management 2005, Italy, ENTCS vol. 157, pages 10-23, 2005.
[ 28 ] James Newsome, Elaine Shi, Dawn Song, and Adrian Perrig. The Sybil attack in sensor networks:
Analysis & defenses. In Proceedings of the third international symposium on Information
processing in sensor networks (IPSN'04), Berkeley, California, USA, pages 259-268, 2004.
[ 29 ] Petr Švenda and Václav Matyáš. Authenticated key exchange with group support for wireless sensor
networks. The 3rd Wireless and Sensor Network Security Workshop, IEEE Computer Society
Press. Los Alamitos, CA, pages 21-26, 2007. ISBN 1-4244-1455-5.
[ 30 ] Petr Švenda and Václav Matyáš. From Problem to Solution: Wireless Sensor Networks Security
(chapter in book). Nova Science Publishers, New York, USA, 2008. ISBN 978-1-60456-458-0.
[ 31 ] Harald Vogt. Exploring message authentication in sensor networks. ESAS 2004, LNCS 3313, pages
19-30, 2005.
[ 32 ] Petr Švenda, Lukáš Sekanina, Václav Matyáš. Evolutionary Design of Secrecy Amplification
Protocols for Wireless Sensor Networks. Second ACM Conference on Wireless Network Security
(WiSec'09), 2009.
[ 33 ] Lukáš Sekanina et al. Evoluční hardware, Academia, Praha, Czech Republic, in final preparation, to
be published in 2009.
[ 34 ] Jiří Kůr, Václav Matyáš, Petr Švenda. Evolutionary design of attack strategies, 17th Security
Protocols Workshop, Cambridge, to appear in LNCS, 2009.
[ 35 ] Bryan Parno, Adrian Perrig, and Virgil Gligor. Distributed detection of node replication attacks in
sensor networks. In Proceedings of the 2005 IEEE Symposium on Security and Privacy (SP'05),
Washington, DC, USA, pages 49-63, 2005. ISBN 0-7695-2339-0.
[ 36 ] Tyler Moore. Cooperative attack and defense in distributed networks. In University of Cambridge,
Technical report UCAM-CL-TR-718. University of Cambridge, 2008.
[ 37 ] David Chaum, Amos Fiat, and Moni Naor. Untraceable electronic cash. In CRYPTO '88:
Proceedings on Advances in cryptology, pages 319-327, New York, NY, USA, 1990. Springer-Verlag New York, Inc. ISBN 0-387-97196-3.
[ 38 ] Joel L. Schiff. Cellular Automata: A Discrete View of the World. Wiley & Sons, Ltd, 2008. ISBN 0-470-16879-X.
[ 39 ] James P. Crutchfield, Melanie Mitchell, and Rajarshi Das. The evolutionary design of collective
computation in cellular automata. In Evolutionary Dynamics, Exploring the Interplay of Selection,
Neutrality, Accident, and Function. New York: Oxford University Press, 2002.
[ 40 ] Shai Rubin, Somesh Jha, and Barton P. Miller. Automatic generation and analysis of NIDS attacks.
In ACSAC '04: Proceedings of the 20th Annual Computer Security Applications Conference, pages
28-38, Washington, DC, USA, 2004. IEEE Computer Society. ISBN 0-7695-2252-1.
[ 41 ] Prahlad Fogla and Wenke Lee. Evading network anomaly detection systems: formal reasoning and
practical techniques. In CCS '06: Proceedings of the 13th ACM conference on Computer and
communications security, pages 59-68, New York, NY, USA, 2006. ACM. ISBN 1-59593-518-5.
[ 42 ] Jong-Keun Lee, Min-Woo Lee, Jang-Se Lee, Sung-Do Chi, and Syng-Yup Ohn. Automated cyber-attack scenario generation using the symbolic simulation. In AIS 2004, pages 380-389, 2004.
[ 43 ] Jim Torresen, W. Jorgen Bakke, Lukáš Sekanina. Recognizing speed limit sign numbers by
evolvable hardware. Lecture Notes in Computer Science, 2004(3242):682-691, 2004.
[ 44 ] J. R. Koza, F. H. Bennett III., D. Andre, and M. A. Keane. Genetic Programming III: Darwinian
Invention and Problem Solving. Morgan Kaufmann Publishers, San Francisco, CA, 1999.
[ 45 ] Ashish Gehani and Surendar Chandra. Past: Probabilistic authentication of sensor timestamps. In
ACSAC '06: Proceedings of the 22nd Annual Computer Security Applications Conference on
Annual Computer Security Applications Conference, pages 439-448, Washington, DC, USA,
2006. IEEE Computer Society. ISBN 0-7695-2716-7.
Risk-Based Adaptive Authentication
Ivan Svoboda
[email protected]
RSA, security division of EMC
V parku 20, Praha 4 - Chodov, Czech Republic
Abstract
This paper describes:
• the current status of the identity fraud “industry”,
• the modern approach to secure authentication: the so-called “Risk-Based Adaptive Authentication”.
Governmental and other public sector agencies are increasingly offering portals for employees, citizens and
NGOs to access sensitive information at government sites and to advance the access and exchange of this
information across agencies.
Government IT security budgets are growing, but the threats from hackers, malware, bots, phishing, and other attacks are growing even faster and becoming more sophisticated.
Achieving the right balance of authentication security without compromising the user experience or
straining the budget is a challenge for government and public sector agencies.
The Adaptive Authentication approach represents a cost-effective yet highly secure authentication and risk management platform for protecting an entire user base. Adaptive authentication can be implemented with any combination of authentication methods, including:
• Authentication based on device identification and profiling.
• Out-of-band authentication using a phone call, SMS or e-mail.
• Challenge questions using question- or knowledge-based authentication.
Should the organization require even stronger authentication, this can be easily solved by integrating almost any current authentication and authorization tool, such as one-time password tokens, certificates, scratch cards, numeric grids, etc.
The RISK ENGINE, which is the core of the Adaptive Authentication approach, decides (based on the risk and value of the information/transaction) which level/method of authentication is appropriate. Hence the risk engine makes it possible to dramatically decrease costs and allows more flexible operations for the end user.
Keywords: Identity Fraud, Adaptive Authentication, Web Access.
1 Introduction
Governmental agencies are increasingly offering portals for employees, citizens and NGOs to access
sensitive information at government sites and to advance the access and exchange of this information
across agencies. The migration of services to the Internet offers many benefits including increased
efficiency and reduced costs; citizens and NGOs are provided with more convenient self-service options
while government officials are provided with immediate access to relevant information held by other
agencies. At the same time, fraud and cyber attacks continue to increase and grow more sophisticated,
leaving the government to grapple with the challenges of safeguarding data, particularly Personally
Identifiable Information (PII).
Achieving the right balance of authentication security without compromising the user experience or straining the budget is a challenge for government agencies. And as government is one of the top sectors targeted for data breaches that could lead to identity theft, it is critical that the personally identifiable information of the employees, citizens and NGOs to whom access is provided is strongly protected from external compromise.
2 Evolution of Online Fraud
Online fraud is evolving. Phishing and pharming represent one of the most sophisticated, organized and
innovative technological crime waves faced by online businesses. Fraudsters have new tools at their disposal and are able to adapt more rapidly than ever.
The volume of phishing attacks detected by RSA during 2008 grew an astonishing sixty-six percent over those detected throughout 2007 [1]. In 2008, RSA detected 135,426 phishing attacks, compared to
just over 90,000 phishing attacks detected in 2007. The first six months of 2008 demonstrated a dramatic
increase in the volume of phishing attacks detected by RSA, peaking in April with 15,002 attacks. Attacks
initiated by the Rock Phish Gang and those initiated via other fast-flux attacks accounted for over half of
the phishing attacks detected by RSA during the first half of the year.
Figure 1: Volumes of Phishing Attacks.
2.1 Phishing and Trojans
The widespread implementation of strong authentication has made it extremely difficult for fraudsters to
get into bank accounts to steal money. In addition, the increase in consumer education about phishing and
identity theft has made it harder to dupe customers using traditional attack vectors. This has forced
fraudsters to evolve by increasing their level of sophistication in many areas, but mostly in the technology
and tactics they use to gain access to online accounts.
A recent example is the use of malware by the Rock Phish gang, reported by RSA in April 2008. “Rock
Phish” refers to a series of ongoing phishing attacks that have been targeting financial institutions
worldwide since as early as 2004. The Rock Phish group is believed by many experts to be responsible for
more than half of all phishing attacks worldwide. Traditionally, they have used a network of compromised
computers (often called a “botnet”) to send spam and phishing e-mails. The concept behind the attacks
and the architecture used by the group make it extremely difficult to shut them down.
However, the Rock Phish group elevated their level of sophistication in recent attacks by using their
phishing sites as malware infection points. More specifically, if an online user clicks on a link and is redirected to one of these phishing sites, then in addition to having their personal data stolen, they are also infected with the Zeus Trojan so that additional information can be stolen in the future. The Zeus Trojan is designed to
perform advanced key logging when infected users access specific web pages, including pages which are
protected by SSL protocols.
Another recent example is the use of social engineering techniques to dupe online users into downloading
malware onto their computers. Fraudsters are sending e-mails posing as legitimate businesses to online
users, providing a tracking number for an undeliverable package or details of recently purchased airline
tickets. A zip file is attached within the e-mail that claims to have information on the package shipment or
particulars of a flight itinerary, but it actually contains harmful malware that is downloaded with the intent
of gathering personal details, passwords and other information from the user’s computer.
2.2 The Sophisticated Online Fraud Supply Chain and the Evolution of Fraud-as-a-Service
The economic lifecycle of the underground fraud community functions very similarly to the world of
legitimate business. Online fraudsters have supply chains, third-party outsourcers, vendors, and online
forums where people with skills and people with opportunities can find each other – and everyone makes
money along the way. It is not a perfect system because criminals conduct business with other criminals,
sometimes forming cracks within the system. Even so, the underground fraud supply chain is becoming
more technically and operationally sophisticated, and as a result, increased awareness and preventative
measures taken by financial institutions have become even more important.
Within the world of online fraud, there are parallel tracks composed of a technical infrastructure and an operational infrastructure, and they are not linear. Together they comprise a set of people, processes and technology that creates a fraud supply chain fueling the ongoing lifecycle of online identity theft.
The technical infrastructure is built by Engineering Fraudsters who create tools and delivery services for various forms of crimeware such as phishing kits and Trojans. Some Trojans, such as the Limbo Trojan, are available for only $350. Utilizing the infrastructure built by Engineering Fraudsters – for pay – individuals known as Harvesting Fraudsters are able to steal credentials culled by various forms of crimeware. These
credentials include credit card and debit card numbers, Card Verification Value codes (CVV), PINs,
usernames and passwords.
Engineering Fraudsters are now getting smarter. They are providing a more efficient approach leveraging
the Software-as-a-Service model, coined by RSA as “Fraud-as-a-Service” or “FaaS”. This is where the
technical infrastructure becomes a hosted service that provides an easy-to-use, less expensive and more
effective means of deploying crimeware to conduct online fraud. FaaS removes the need for any common
fraudster to build, maintain, patch and upgrade crimeware in order to keep it both operational and
invisible. It also means that they do not have to install it into a complex system and find a bulletproof
hosting service to deploy spam engines or infection kits. The FaaS model is not only attractive because of
these operational efficiencies, but subscriptions can run as low as $299 a month. The delivery of the attack
can be a spam engine for sending thousands of phishing emails, sometimes embedded with an infection kit
to breach PCs with an invisible Trojan. Once up and running via the FaaS model, a Harvesting Fraudster
can log into their hosted service and gain a view into thousands of computers under their control, and
access and harvest stolen credentials that the malware effectively collects for them.
But how do fraudsters trust one another? How does a specific piece of malware become successful and
profitable? And how does FaaS become a successful enterprise? This happens through trust, and trust
comes in the form of building a good reputation. When FaaS offerings, phishing kits, and malware
infection services prove themselves to be effective, they become recommended by fraudsters to other
fraudsters. Fraudsters rate these systems within their forums and IRC chat rooms noting positive features,
and perhaps a negative feature such as “dashboard is in Russian”. This is not unlike how a legitimate
product reviewer will compare and contrast technologies such as handheld devices or printers within
a popular magazine.
Within the lifecycle of executing online fraud, there are many roles and no single individual takes on all of them, but everyone gets paid for their efforts. The Harvesting Fraudster who culls stolen credentials from
the technical infrastructure relies on those within the operational infrastructure to make money. Within
the operational infrastructure are Cash Out Fraudsters who, in turn, rely upon Mules and Drop Specialists
for monetization. Cash Out Fraudsters and Harvesting Fraudsters communicate with each other through
fraud forums and chat rooms to establish relationships and conduct transactions. Harvesting Fraudsters sell
thousands of accumulated stolen credentials to Cash Out Fraudsters, who then employ their operational
infrastructures composed of Mules and Drop Specialists in order to monetize goods.
3 Is it a governmental/public sector problem as well? (YES, it is!)
From the statistics, it is quite clear that the banking sector is the main target for the majority of online fraud attacks. However, attacks against companies and institutions in other sectors are growing as well, such as job servers, health insurance portals, and governmental or citizen-facing portals.
Several facts:
• Attacks are increasing at alarming rates: 18,050 US government cybersecurity events in 2008 vs. 5,144 in 2006*.
• 5,488 installations of malware and unauthorized access to government computers in 2008*.
(* According to the US Department of Homeland Security Computer Emergency Readiness Team (US-CERT).)
• Attacks are targeting both internal government networks and citizen facing portals.
Sinowal example:
• Over 500,000 Compromised Credentials.
• Infected over 750 government and military personnel globally.
What are the attackers' motivations?
1. Steal Sensitive Personally Identifiable Information (PII)
• Citizen facing portals
• Employee data
• Used to commit identity theft and fraud
2. E-spionage: 21st Century Spying
• Credential Stealing
• Stolen sensitive government data
3. Disruption and Cyber-warfare
• Inflicting damage on critical infrastructure and information systems.
4 Best practices for mitigating advanced threats
Is there a silver bullet for stopping fraud? And what approach can institutions take to minimize the effect
on their customers, their brand and assets, and the integrity of their online channel?
There are several individual solutions available on the market today to help institutions combat the threat
of new innovative attack methods and the spread of malware. However, security experts agree that
a layered security approach that combines external threat protection, login authentication and risk-based
transaction monitoring is the ideal solution for providing the most comprehensive protection to online
users. Applying the following best practices can help institutions mitigate the risk posed by advanced
threats.
4.1 Understand the threats that are targeting your institution
The first step is to understand the nature of the threats that are targeting your business. By proactively
identifying the threats that exist, institutions can mitigate the damage that is caused by an attack or even
prevent it from occurring at all. By gathering and sharing intelligence and developing a broad knowledge
of potential threats, institutions can better evaluate their own vulnerabilities and implement security
solutions to address them.
4.2 Use multi-factor authentication to protect login
A username and password are not enough to protect sensitive data given the advanced nature of today's threat landscape. Moreover, many countries have imposed regulations requiring financial institutions to protect their customers with a second form of strong authentication. There are a number of ways that institutions can provide authentication at login to their online users, including:
• SMS (Out-of-Band)
• OTP (One-time passwords)
• Digital Certificates
4.3 Monitor transactions and activities that occur post-login
While putting a lock on the front door is suitable in most cases, fraudsters have developed technology to
bypass login authentication – whether launching a phishing attack in an attempt to secure answers to
challenge questions or developing advanced man-in-the-middle Trojans to bypass one-time password
systems. So in addition to authentication solutions that challenge users to prove their identity at login,
institutions should consider implementing a transaction protection solution that monitors and challenges
high-risk transactions after login has occurred.
5 Adaptive Authentication as a Cost-Effective Approach
While a layered approach to security is the best defence in the face of a constantly evolving fraud environment, budgets are not unlimited and applying every possible defence is simply impractical. When
it comes to maximizing security investments, practitioners must take a risk-based approach balancing three
key variables: security, user acceptance and cost.
Adaptive Authentication represents a comprehensive authentication and risk management platform
providing cost-effective protection for an entire user base. Adaptive Authentication monitors and
authenticates user activities based on risk levels, institutional policies, and user segmentation and can be
implemented with most existing authentication methods including:
• Invisible authentication. Device identification and profiling.
• Site-to-user authentication. Assures users they are transacting with a legitimate website by displaying a personal security image and caption pre-selected by the user at login.
• Out-of-band authentication. Phone call, SMS, or e-mail.
• Challenge questions. Challenge questions or knowledge-based authentication (KBA).
• One-time passwords. Hardware tokens, software tokens and toolbars, display card tokens, transaction signing tokens or CAP/EMV.
Figure 2: Risk-based Adaptive Authentication Approach.
By having the ability to intelligently support most existing authentication technologies, organizations that use Adaptive Authentication can be flexible in:
• How strongly they authenticate end users.
• How they distinguish between new and existing end users.
• What areas of the business to protect with strong authentication.
• How to comply with changing regulations.
• What they are willing to accept in terms of risk levels.
• How to comply with the various requirements of the regions and countries where they operate.
6 Conclusions
Man-in-the-middle attacks, Trojans and malware, once just considered theoretical musings of information
security experts, have come to fruition. The threat is here – and it is real. When institutions erect
a roadblock, fraudsters are always innovating ways to drive around it.
This is apparent from the advanced technology and tactics being used to target institutions and the low
cost and ease of execution for fraudsters to attack. Yet, these are only a few examples of the types of threats
that exist.
It is critical for institutions to establish defenses at every corner and to never assume they are not
vulnerable to these threats. As long as there is fraud, institutions should always be working to discover that
delicate balance between security and risk. The deployment of some variant of strong authentication is
necessary, at least for the critical, high-risk transactions.
By using the Adaptive Authentication approach, it is possible to apply strong authentication selectively and in a cost-effective way. The RISK ENGINE, which is the core of the Adaptive Authentication approach, decides (based on the risk and value of the information/transaction) which level/method of authentication is appropriate. Hence the risk engine makes it possible to dramatically decrease costs and allows more flexible operations for the end user.
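As an illustration of this decision logic, the following sketch shows how a risk engine of this kind might map contextual risk factors to an authentication requirement; the factors, weights and thresholds are invented for the example and do not describe RSA's actual product.

```python
# Minimal sketch of a risk-based authentication decision.  The risk factors,
# weights and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool        # device identification / profiling
    geo_velocity_alert: bool  # impossible travel between sessions
    transaction_value: float  # value of the requested operation
    on_blacklist: bool        # device or IP seen in previous fraud

def risk_score(ctx: LoginContext) -> int:
    score = 0
    if not ctx.known_device:
        score += 40
    if ctx.geo_velocity_alert:
        score += 30
    if ctx.on_blacklist:
        score += 100
    score += min(int(ctx.transaction_value / 1000), 30)
    return score

def required_authentication(ctx: LoginContext) -> str:
    score = risk_score(ctx)
    if score < 40:
        return "invisible (device profiling only)"
    if score < 80:
        return "challenge questions"
    if score < 120:
        return "out-of-band (SMS / phone call)"
    return "deny or manual review"

ctx = LoginContext(known_device=False, geo_velocity_alert=False,
                   transaction_value=2500.0, on_blacklist=False)
print(required_authentication(ctx))   # -> "challenge questions"
```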
References
[1] RSA AFCC: Online Fraud Report, December 2008.
DNSSEC in .cz
Jaromir Talir
[email protected]
CZ.NIC, z.s.p.o.
Americka 23
120 00 Prague 2, Czech Republic
Annotation
CZ.NIC is an association of legal entities responsible for the management of the DNS system in the .cz domain. DNSSEC technology was deployed in the .cz domain in October 2008. This presentation will briefly describe the chosen solution and some specifics we introduced to ease DNSSEC management. Statistics about DNSSEC will show which registrars currently offer this new technology, and we will then mention the required steps to secure your domain. At the end, we will show how to check whether your Internet infrastructure supports DNSSEC and what to do if this is not the case.
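As an illustration, one way such a check could be performed programmatically is sketched below using the dnspython library (assumed to be installed, version 2.x); the queried name nic.cz is used only as an example of a DNSSEC-signed zone, and a validating resolver is expected to set the AD flag in its responses.

```python
# One possible check of whether the local resolver validates DNSSEC,
# using the dnspython library (assumed installed, version 2.x).
import dns.flags
import dns.resolver

def resolver_validates_dnssec(name="nic.cz"):
    resolver = dns.resolver.Resolver()
    # Request DNSSEC processing by setting the EDNS DO bit.
    resolver.use_edns(0, dns.flags.DO, 1232)
    answer = resolver.resolve(name, "A")
    # A validating resolver sets the AD (Authenticated Data) flag.
    return bool(answer.response.flags & dns.flags.AD)

if __name__ == "__main__":
    if resolver_validates_dnssec():
        print("Your resolver validates DNSSEC signatures.")
    else:
        print("DNSSEC validation is NOT performed by your resolver.")
```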
Security for Unified Communications
Dobromir Todorov
UC Architect, BT Global Services
[email protected]
81 Newgate Street, London EC1A 7AJ, United Kingdom
1 Executive Summary
Unified Communications is the new paradigm of IT promising to go much beyond just e-mail, voice,
instant messaging and presence and allow users, organisations and applications to communicate seamlessly.
On the flipside, security needs to balance out and protect personal and organisational assets. The session
expands on how well positioned we are to do that today and what more we need tomorrow.
2 Unified Communications for the Business
Unified Communications is making its way into enterprises and reshaping the existing infrastructure and applications. As with every new technology, designers often concentrate on the functionality (“It must work”), and only then retrofit security (“Let's make it secure now”). History shows that security has to be an essential design requirement.
3 Unified Security for Unified Communications
UC is a new concept, but it does not contradict well-established security design principles.
3.1 User Identity and Authentication
In a business process, it is essential to know the identity of the involved parties, and this identity has to be authenticated. Unlike telephony, which uses E.164 numbers as a primitive means of user identification and authentication, more powerful mechanisms exist in the data world to provide such protection. The problem is that not all of these methods provide uniform identity information. Some of the identification and authentication challenges include communication with external parties (such as partners and suppliers), as well as with public IM providers.
3.2 Signalling, IM and Presence
Historically, signalling is often left unprotected, mainly due to the closed nature of traditional TDM
telephony, and the historical expectations of the users. In the UC world though, signalling provides for
session establishment between communicating parties, for presence information – including the potentially
sensitive user location tracking information, and often carries actual user payload, such as instant messages.
There are simple but effective mechanisms for protecting the signalling channel but in implementing these
it is often important to consider the protection of the media channel as well. Also, presence as metadata
may provide useful information for internal users, and may lead to information leakage, if shared with
malicious third parties: a balance is required here.
3.3 Audio and Video communications
Protection of media streams is increasingly becoming one of the requirements for business communications. Audio and video streams are encrypted, and their integrity is authenticated by default. Now the requirements of compliance logging and voice recording need to be considered, and there are a number of ways in which this problem can be solved. Also, interaction with traditional firewalls and NAT creates new functional and security challenges.
3.4 Communications-enabled business processes
Unified Communications is the stepping stone to the next paradigm – communications-enabled business processes, which will result in even better process automation and reduced latency. Protecting the
CEBP infrastructure will be a critical requirement, and a number of SOA-related technologies are poised
to provide solutions.
3.5 The Social Web
Web 2.0 and social Web sites have become a natural way of communication, especially for the younger
generation, which is currently graduating from universities and converging with mainstream business.
Businesses are starting to reap the benefits of this type of communication in the form of attracting young talent and effective contact and knowledge management. The Social Web, though, has its challenges in terms of identity, integrity, and privacy. This session covers some of the considerations for Social Web security.
4 Summary
Unified Communications is not a transient concept; it is here to stay. A thorough understanding of the way it operates and of the associated protection mechanisms is required.
References
[1] Dobromir Todorov. Mechanics of User Identification and Authentication: Fundamentals of Identity Management. Auerbach Publications, 2007. ISBN 1420052195.
[2] Rui Maximo et al. Microsoft® Office Communications Server 2007 R2 Resource Kit. Microsoft Press, 2009. ISBN 0735626359.
Integrating Competitor Intelligence Capability within the Software
Development Lifecycle
Theo Tryfonas
Paula Thomas
[email protected]
[email protected]
Faculty of Engineering
University of Bristol
Bristol, UK
Faculty of Advanced Technology
University of Glamorgan
Pontypridd, UK
Abstract
The objective of this paper is to propose a framework to integrate appropriate Competitive Intelligence
(CI) tools into the software development process. The paper first examines the status quo and the
development of the software industry, and then looks into the software development lifecycle. It reviews
the field of CI and existing tools that can be used for CI purposes. We then identify the intelligence
requirements in both the software development process and the software marketplace. We finally propose a model framework to aid software developers in choosing appropriate CI tools to apply their CI strategy.
Keywords: intelligence, competitive advantage, software development.
1 Introduction
Several scholars indicate that the software industry and IT departments are facing extreme pressures to
provide new applications that add value in today's competitive environment [1]. In the struggle to remain
competitive, many companies have turned to new technologies to improve their business activities [2].
This makes Competitive Intelligence (CI) increasingly attractive.
CI is the process of monitoring the competitive environment. It is a systematic and ethical program for
gathering, analyzing, and managing information that can affect a company's plans, decisions, and
operations [3]. Despite the vagueness in defining the content and boundary of CI in the market, research
shows that companies with well-established CI programs enjoy greater earnings per share than companies
in the same industry without CI programs [4]. In the complex and competitive software industry, applying
a successful CI program could and should bring benefits.
There exist many general and specific tools that can facilitate the CI process or some of its phases.
However, there is not much guidance about how the software industry can adopt CI tools and apply
a successful CI program. In this paper, building on previous empirical fieldwork [5] and identification of
other relevant issues, such as ethics [6], we try to identify the CI needs in the software development process
and the software market. Based on that, we propose a framework to embed CI tools into the software
development process, in order to meet these requirements.
The paper is then structured as follows. Section 2 introduces the context of this study, discussing the
characteristics of the software industry, as well as aspects of the concept of intelligence and how it can give
competitive advantage. The discussion also extends to detailing tools and application software that can be
used for competitor intelligence purposes. In section 3 we identify the intelligence requirements of the
software development lifecycle (SDLC) and propose a model framework for the delivery of CI capability within it. Section 4 concludes the paper and discusses ideas for further research.
2 Software Development and Competitive Intelligence
2.1 The software industry and its processes lifecycle
In the 1970s and early 1980s, software moved beyond government and became increasingly important for commercial use [7]. On the one hand, programming languages became more sophisticated, software development processes more mature and the applications produced more complex [33]; on the other hand, the cheaper and more accessible personal computer generated overwhelming new demand for software. The increasing demand for software brought many new software firms into being. Historically, for example, according to AL-Zayani [7], there were a total of 3,919 companies in the US software industry from 1975 to 1981. However, in 1982, there was a significant increase of approximately 400 new companies, totalling 4,340 companies. With the growth of the software industry, the cost to develop and market products became high and the competition within the industry became more and more fierce. This fierce competition finally led to the current paradoxical landscape of software monopolies in sectors of particular interest, for example operating systems, databases, etc. [5].
The benefits of this fast-growing, international business are not equally distributed all over the world, with US software firms dominating the international software market. A few large software companies hold large numbers of software patents, which makes small companies dependent on them. In 1997, Microsoft already owned 400 software patents [8]. In 2000, IBM touted 2,886 patents, of which a third shipped in the form of products. According to company spokesman Tim Blair, IBM raked in $1.6 billion in intellectual property license fees in that year [9]. Software patents make it very difficult for small software companies struggling for their market share.
As a result of the software monopoly, Open Source Software / Free Software (OSS/FS) has risen to great
prominence. Briefly, OSS/FS programs are programs whose licenses give users the freedom to run the
program for any purpose, to study and modify the program, and to redistribute copies of either the
original or modified program (without having to pay royalties to previous developers [21]). In such
a competitive environment, Competitive Intelligence (CI) is essential for keeping pace with the market, but software companies are not clear on how to apply the concept of CI in their field [5].
The software development process is characterised by its lifecycle, the simplest form of which is the
waterfall model. The waterfall model was put forward by Barry Boehm in the mid-1970s. It profoundly
affected software engineering and some would say that it was and is the basis of the field ([10], p.83). The
waterfall model describes a software development method that is linear and sequential. In the simplest
version of this model, each phase has distinct goals and once a phase of development is completed, the
development proceeds to the next phase and there is no turning back.
The advantage of waterfall development is that it allows for departmentalisation and managerial control.
A schedule can be set with deadlines for each stage of development and a product can proceed through the
development process like a car in a carwash, and theoretically, be delivered on time. The disadvantage of
waterfall development is that it does not allow for much reflection or revision. Once an application is in
the testing stage, it is very difficult to go back and change something that was not well thought out in the
concept stage [11].
Although it has come under attack in recent years for being too rigid and unrealistic when it comes to quickly meeting customers' needs, the Waterfall Model is still widely used [11]. Moreover, it is credited with providing the theoretical basis for other process models, because it most closely resembles a “generic” model for software development. As a result, we choose the waterfall model to illustrate the informational requirements in the software development process; each phase will therefore be discussed in detail in the following part. Usually the waterfall model contains the following phases: feasibility study, requirements specification, design, coding and testing. Although different sources may use different names for the phases of the waterfall model, they have similar functions.
2.2 Forms of intelligence and competitive advantage
Over two thousand years ago, Sun-Tzu wrote in his work, ‘The Art of War’, that “If you know the enemy
and know yourself, you need not fear the result of 100 battles. If you know yourself but not the enemy, for
every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will
succumb in every battle”. Today’s business competition is often metaphorically described as another kind
of ‘war’ because business moves fast; product cycles are measured in months, not years; partners become
rivals quicker [12]. Thus, if you want to win, you must “know the enemy and know yourself”.
According to a dated but indicative survey released in 1997 by researchers at The Futures Group, 82% of
companies with annual revenues over $10 billion had an organized system for collecting information on
rivals, while 60% of all surveyed U.S. companies had an organized intelligence system. For a company operating in an industry where many of the players are ‘corporately intelligent’, this is perhaps indicative of how important a CI program is [13].
In the Oxford dictionary, “intelligence” is defined, among other things, as the gathering of information of military or political value. From a commercial perspective, the concept of “intelligence” conjures up many
images, from government spies and military covert operations to Competitive Intelligence and marketing,
through to research & development, and even specialist software and programmes. It seems that
“intelligence” is a catholicon that can be used to mean a lot of things. Intelligence is anchored in past and
present data to anticipate the future, in order to drive and guide decisions in enterprises [14]. This
indicates that intelligence can be used to describe a process or an activity and can also be used to describe
the final output or product.
As a process, intelligence can be viewed as a continuous circulation of information consisting of five phases: Planning and Direction, Collection, Processing, Analysis and Production, and Dissemination [15]. In order to maintain this circulation, manpower, material resources, finance and information technology are all required to construct an infrastructure. The start of this circulation is the user's needs or requirements; the input is the raw data collected by people or information system tools; and the output can also be called intelligence, which helps a manager to respond with the right market tactic or long-term decision ([16], p. 23).
Many people may use intelligence products to make decisions and still be unaware of their existence. Intelligence can usually be divided into categories such as (though not limited to): National Intelligence, Military Intelligence, Criminal Intelligence, Corporate Intelligence, Business Intelligence, Competitive Intelligence, etc. We will focus on the last three types, although there are no clear boundaries between them and in fact many people often use the terms interchangeably.
From another point of view, according to the intelligence consumer's position, intelligence can be divided into three categories: strategic, tactical and operational. Strategic intelligence is future-oriented and allows an
organization to make informed decisions concerning future conditions in the marketplace and/or industry
[17]. It also helps decision makers discern the future direction of the organization. Ultimately, over time,
strategic intelligence facilitates significant organizational learning. Tactical intelligence is present-oriented.
This level of intelligence provides organizational decision makers with the information necessary to
monitor changes in the company’s current environment and proactively helps them search for new
opportunities. Tactical intelligence is real time in nature and provides analysis of immediate competitive
conditions. Operational Intelligence is intelligence that is required for planning and conducting campaigns
and major operations to accomplish strategic objectives within theatres or operational areas. This type of
intelligence has the potential to reduce the time between the discovery of problems or opportunities and
taking action on them (Information Builder, on-line).
Business Intelligence should be viewed as a logical extension to an organization’s key systems of record
(i.e., ERP, Order Entry, etc.). Leveraging data into aggregated, focused information allows organizations
to capitalize on the greater depth and breadth of information available in current systems [18].
On the other hand, Competitive Intelligence is a systematic and ethical program for gathering, analysing, and managing external information that can affect a company’s plans, decisions, and operations [4]. Corporate intelligence, in turn, is a business environment sensing and understanding process that embraces the entire operational environment of a company. It includes competitive, technical, people, and market intelligence, which cover a company’s exterior world, as well as knowledge management, which looks at a company’s interior environment [13].
Those definitions indicate that business intelligence, competitive intelligence and corporate intelligence focus on different aspects and have different scopes. Business intelligence focuses on data inside the organization and aims to use the information available in current systems more efficiently. Competitive Intelligence focuses on the utilization of information outside the company. Corporate intelligence seems to be much more comprehensive than the former two, as it emphasizes awareness of the environments both inside and outside a company. Despite the differences between them, the three share similar processes such as planning, collection, analysis, dissemination etc.
According to Bouthillier and Shearer [2], CI is both an information system, in that it adds value to information through a variety of activities, and an intelligence system, in that it transforms information into intelligence. They identify similarities in the models proposed in the existing literature. For example, each model they reviewed recognizes the importance of identifying the type of intelligence/information that is needed at the beginning of the CI cycle. Also, each model contains a collection or acquisition stage as a second step. Based on the minimal differences identified between the different CI models, Bouthillier and Shearer [2] introduced an integrative CI model which combines both intelligence processes and information processes (Figure 1). We adopt their model as a typical manifestation of the CI cycle, because it can help us identify the information system requirements for the prescribed CI activities.
Identification of CI needs. To identify the main CI needs, several key activities should be conducted:
identifying the main CI client communities, identifying intelligence needs, identifying CI analysis
techniques, translating intelligence needs into information needs and so on [2]. The most difficult task in this stage is arguably the translation of intelligence needs into information needs.
Acquisition of Competitive Intelligence information. The main activity of this stage is information
collection. Today there are many free information sources and it is not difficult to acquire information.
However, not all information is of the same quality and not all information sources have the same
reliability. When collecting information, CI practitioners should confirm that an information source has some or all of the following characteristics: Reliability, Validity (accuracy), and Relevance.
[Figure 1 (diagram): the CI cycle stages Identification of CI needs, Acquisition of competitive information, Analysis of information, Development of CI products and Distribution of CI products, underpinned by Organization, Storage, and Retrieval.]
Figure 1: Information Process Model of the CI Cycle [2].
Analysis of information. Analysis is the key process that transforms information into intelligence [19].
The output of this stage is a set of hypothetical results (in terms of gains, sales, advantages, and
disadvantages) based on a number of possible strategic actions that could be taken by an enterprise [2].
During the analysis process, many analytical techniques could be used, such as Benchmarking, SWOT,
Personality Profiling, Patent Analysis, War Gaming and similar techniques. Different techniques provide
the company with a different snapshot of its competitive environment but no single one can provide a full
picture of the competitive environment. A good CI process should employ several of those.
Development of Intelligence Products. This is not just a simple word processing phase. CI professionals
decide which format will best convey the analysis done, reveal the critical assessment of information, and
will indicate the limitations of information sources [2].
Distribution of Intelligence Products. This is the final process in the CI cycle. The key to this stage is
getting the right information to the right person at the right time [20]. And of course, as CI is viewed as
a cycle that never ends, the feedback from the CI products will and should bring a new CI cycle into
action.
2.3 Analysis of Competitive Intelligence tools
With the increasing popularity of CI, more and more companies claim to produce resources or tools to support the CI process. Thus a number of CI consulting companies, information services, software packages, and CI training courses have emerged. The Fuld & Co. 2000 report identifies 170 potential CI software packages, and the “CI Resource Index”, developed and maintained by CISeek.com, lists more than 289 potential CI software applications. Many of these are not specifically designed or tailored to CI needs, and many even have tenuous connections to the CI process as a whole [2].
In this section, we examine software applications that can assist the CI process. Not all companies can afford expensive expert CI software, and not all companies need it; general-purpose, low-priced software may suit them better. We discuss some key software applications identified according to their function and placement within the CI lifecycle as adopted and defined earlier in this text.
Identification of CI needs. Software applied in this stage is expected to be helpful for one or more of the following CI activities: the identification of main CI needs, the identification of CI topics, the translation of CI topics into specific information requirements, the identification of CI analytical techniques, or the capability to change CI topics and techniques. In the CI tools literature, some software packages are advertised as supporting specific methods for identifying, storing and disseminating strategic information needs, for instance tools that visualize the variables and causal relations relevant for specifying the information needs. Examples are system dynamics software (e.g. iThink, Vensim or Powersim) and software supporting the identification or visualization of key intelligence topics (e.g. Mindmap) [22].
Acquisition of competitive information. Compared to the previous stage, there are tools in abundance
that can be used in this stage. To achieve the main tasks identified in figure 2, many techniques and
technologies can be used, such as Profiling/Push technology, Filtering/Intelligent Agent technology, Web
Searching technology, Information Services/vendors and so on [2]. In general, this kind of software can be
divided into two groups: one is for data searching and the other is for information source monitoring.
Here, we will only introduce some representative ones. Among data-searching software, Google, Lexis-Nexis, Fast, Lycos, Factiva, Copernic and KartOO are very popular. Some of these are free and are linked to large databases, helping companies obtain a vast amount of information. On top of that, the use of commercial search engines can provide access to more relevant or specialised information. A tool which can deal with information overlap and filter large amounts of data will save much time in this time-consuming stage, so the trade-off lies in price vs. time to produce quality
information. Popular customized search engines are Deep Query Manager/Lexibot by Bright Planet, BullsEye by Intelliseek, Copernic Agent Basic/Personal by Copernic, Digout4U by Arisem, etc. Information source monitoring software includes Copernic Enterprise Search and Copernic Agent Pro by Copernic, Right Web Monitor by RightSoft, Inc., Site Snitch by Site Snitch, etc. Copernic Enterprise Search and Copernic Agent are said to rank a document whose main theme corresponds to the search keywords higher than a document that only contains the search keywords once or twice. The results ranking can be fine-tuned by altering the weight of different ranking factors. The software also does automatic indexing of new and updated documents in real time, something that, according to Bouchard, Copernic’s president and CEO, “most competitive products don’t do” [23].
Organization, storage, and retrieval. As discussed in the previous section, this CI stage can be viewed as
an Intelligence Information System which can support the whole of the CI process. The main function of
this stage is storage and retrieval. Therefore, software applications that can be used in this stage range from
simple word processors, such as Microsoft Office Word and Adobe Reader, to database software, such as
Microsoft Office Access, and to complex multipurpose portal software (examples of which are
OpenPortal4U, Portal-in-a-box etc.). MS Office is one of the most popular software packages used in
many environments to aid the CI process.
Analysis of information. In the CI literature, the jury is still out as to whether the analysis function of CI
can be conducted or facilitated to any great degree by technology [2]. However, purposeful human activity
can be assisted by information technology. In this stage, applications that support war-gaming or scenario analysis could be helpful, and other software, such as groupware, could also facilitate the analysis process. An example of scenario analysis software is the Scenario Analysis Tool [34]; such software can, for example, visualize the analysis process and thereby assist the analytical work.
Development of the CI product. During this stage, software such as Microsoft Office Word, Microsoft Office Excel, Microsoft Office PowerPoint, Adobe Reader and other reporting tools can be used; for more specific applications, published lists of top reporting tools with detailed descriptions are available.
Distribution of the CI product. There are many ways to distribute the CI products, for example via
presentations, email, in printed materials and on-line documentation, formal reports etc.
Since 1998, Fuld & Company has continuously evaluated software packages with potential CI applications. In its 2004/05 intelligence software report, CI-ready tools from four companies were examined: Cipher, Strategy, Traction and Wincite. The Aurora WDC 2004 Enterprise Competitive Intelligence Software Portals Review introduces six CI software packages: Knowledge.Works™ (Cipher Systems, LLC), Knowledge XChanger™ 2.2 (Comintell AB), Viva Intelligence Portal™ 3.2 (Novintel), STRATEGY!™ Enterprise 3.5 (Strategy Software, Inc.), TeamPage™ 3.0 (Traction Software, Inc.) and Wincite™ 8.2 (Wincite Systems, LLC).
So far we have referred to both general tools and specific CI packages. We did not focus extensively on these as
we only aimed to create an understanding of what technologies exist and what kind of software may be
helpful in the course of CI operations. Hopefully this can be a pathway for practitioners to identify the
relevant technologies and apply them as appropriate within their case. Table 1 provides details of particular
software and vendors, whilst categorizing them according to their functionality.
Finally, to conclude the discussion on tools, many CI experts have pointed out the importance of the Internet as a CI tool ([24], [25], [26], [27], [28] etc.). The majority of them see the Internet as a major evolving CI component in its own right and outline how it can be used in the direction, collection, analysis, and distribution of CI. Due to its significant impact, many propose a separate intelligence sub-process for the Internet in particular.
Method or Tool | Functionality provided | Product examples and software vendors
1. System Dynamics | Provides "what-if" scenarios in a risk-free environment and gives insight into the consequences of decision-making. | Vensim (Ventana Systems); Powersim (Powersim)
2. Mind Mapping | Helpful for mind-mapping, creative thinking and brainstorming; affordable price. | ConceptDraw MindMap (ConceptDraw)
3. Profiling/Push technology | Automatically provides data or text from multiple sources at regular intervals or in real time, based on interest profiles or predetermined queries; can look for changes and alert the user. | Back Web e-Accelerator (Back Web); Verity Profiler Kit (Verity)
4. Filtering/Intelligent Agent technology | Monitors Web sites, documents, and e-mail messages to filter information according to particular preferences; can learn user preferences; can highlight the most important parts automatically; can provide text summaries; can prioritize, delete, or forward information automatically. | Copernic Enterprise Search, Copernic Agent Pro (Copernic)
5. Web Searching | Collects online information from various databases; general search, meta-search and directory search. | Free search engines: Google, Lycos, etc.; customized search engines: Deep Query Manager (Bright Planet), BullsEye (Intelliseek), etc.
6. Information Services/vendors | Provides access to information sources based on subscription; usually offers push technology; can provide features for presenting and distributing reports. | Dialog, Factiva, Lexis-Nexis, etc.
7. Document management | Searches and retrieves information with advanced search technology, including full-text search, search-term highlighting, metadata, document summarization, and result clustering. | PowerDocs/CyberDocs (Hummingbird)
8. Multipurpose Portals | Contains multiple functional features such as automated content aggregation and management, intelligent navigation and presentation, hierarchical categories, automatic summarization, retrieval and so on. | Corporate Portal (Plumtree); OpenPortal4U (Arisem); Portal-in-a-box (Autonomy)
9. Text analyzing and structuring | Automatic clustering and/or categorization into defined categories; summarization based on representative text; full-text search engine with linguistic analysis and so on. | Intelligent Miner for Text (IBM)
10. Groupware | Encompasses several capabilities in one software application, including messaging, calendaring, e-mail, workflow, and a centralized database. | Lotus Domino (IBM)
11. Data mining and data warehousing | Data storage, indexing and retrieval. | Microsoft Access; MS SQL Server
12. Text summarizing | Pinpoints key concepts and extracts relevant sentences, resulting in a summary of a larger document. | Copernic Summarizer (Copernic)
13. Analyzing and Reporting data | Extracts data, searches patterns, slices, dices, drills down to find meaning, and allows various reporting options. | PowerPlay/Impromptu/DecisionStream (Cognos)
14. Internet/Intranet | Information collection, communication, etc. | Lotus Notes (IBM); SunForum (Sun Microsystems)
Table 1: Software and Technologies that Facilitate CI.
So far we have examined definitions of CI and outlined the main activities in the CI life cycle. Based on this, it is possible to map out the intelligence requirements of the software development process and to select appropriate tools to meet these needs.
3 Meeting the Intelligence Requirement in the Software Industry
The Chinese military strategist Sun-Tzu emphasized the importance of the intelligence function for military organisations [29]. Sun-Tzu divided the intelligence needed to win a war into two kinds, “yourself” and the “enemy”, and emphasized that only when information about both your own organisation and the enemy is obtained “you need not fear the result of 100 battles”. We will next discuss the CI needs of a software company from both inside and outside perspectives. The inside aspects refer mainly to the information requirements of the software development stages; the outside aspects deal with competitors, industry regulation, customer relationships and so on.
3.1 Information Requirements in the Software Development Process
Based on the software development lifecycle, in order to produce software that meets user requirements in
a cost effective manner, there is a lot of information that needs to be considered. In order to identify these
information requirements, we will go through the software development process stage by stage, from the
feasibility study to the software release or delivery.
Feasibility study. The importance of an objective feasibility study has been discussed earlier; however, it is
not a trivial task. A good feasibility study should consider both internal and external factors. From an
internal point of view, information about emerging technologies, development tools, human resources,
financial status etc. should all be considered. From the market angle, information on competitors’ similar
products, consumer trend analysis, etc. should be taken into account. When developing commercial
software, information about similar products in the market is extremely important. For example, software that has high first-copy costs and low replication and distribution costs creates substantial economies of scale (unit cost decreasing with volume), although some recurring costs such as distribution and customer support do increase with unit sales [30].
Requirements Analysis. As pointed out in the previous section, this stage is user-centric. Requirements engineering should focus on the user’s or customer’s needs. In order to identify user requirements, information about the user’s workflow and business operations is necessary. Other information, such as habitual user practices, personal preferences etc., could also be helpful.
Design. Information required in this stage is mostly technical, found within the software development
literature. Technical advances and theory should be taken into account and inform state-of-the-art systems design.
Coding. Coding is again a technical process. The success of this stage mostly depends on the efforts of
software development professionals and the collaboration of the development team.
Testing. Testing is the last phase of the development process before the software is delivered to the end-user. Choosing an effective testing model is important, and in order to choose appropriate models and testing parameters, information about the software’s architectural features and the users’ requirements are both needed.
Software release or delivery. After the testing phase, the software, usually in executable form, reaches the customer and the wider market. Thus, the intelligence requirement moves ultimately from inside to outside.
3.2 Intelligence Requirements and the Marketplace
For commercial software sold in the general marketplace, many software-planning and design decisions are
based not only on meeting user needs but also on various marketplace issues. When talking about the
uncertainties and strategic issues that suppliers encounter in selling or licensing software in the
marketplace, Messerschmitt and Szyperski [30] consider that strategic decisions in planning and designing
commercial software often arise from marketplace issues related to the ROI measures of revenue, cost, and
risk.
Indeed, according to Messerschmitt and Szyperski [30], a company’s revenue rests on the product of
market share and market size. When the market size is fixed, the market share becomes the decisive factor for company revenue. To increase market share, taking customers away from competitors and capturing a higher proportion of new adopters are two direct means.
For the former a comprehensive understanding of the competitor should be obtained. For conducting
a competitor analysis, Porter’s five forces model [31] is one of the most popular techniques. The five forces
include: existing competitors, threats of new entrants, threats of substitute products, bargaining power of
suppliers, and bargaining power of buyers. Porter also defined other components that should be diagnosed
when conducting competitive analysis: future goals, current strategy, assumptions, and capabilities of
competitors and competitors’ response profiles.
Depending on the nature of the competitive environment, there are many other techniques that can be
used besides Porter’s five forces model. For example, benchmarking, which involves comparing the
attributes of your competitor with those of your own company to help identify where you can improve;
Competitor Profiling, which is the most common type of analysis used in CI [32], refers to a number of more specific types of profiling such as personality profiling, financial statements, manufacturing processes etc. [30]. Each technique will provide the company with a different snapshot of its competitors, and each technique will have its own set of information requirements. Therefore an analysis of competitors should combine several techniques so that an objective and comprehensive understanding can be obtained.
To acquire a higher proportion of new adopters, a factor thought to be critical in rapidly growing markets, both increasing value and reducing the customer’s total cost should be considered [30]. To achieve this, a great deal of information about the customers is needed.
Because software has high first-copy costs and low replication and distribution costs, suppliers should avoid creating direct substitutes for other vendors’ products, and software design should therefore focus heavily on useful, customer-valued innovation [30]. It can be argued that the only way to know how to increase customer value and decrease customer cost is to fully understand the customer, the users’ work and their value chain.
Reducing cost is another way of increasing the return on investment. However, software is different from general goods, and companies cannot decrease its development cost by saving, for example, on materials. There are, however, ways to reduce software development costs. Component-based development and software reusability, for example, can reduce software costs and complexity while increasing reliability and modifiability. Another way to reduce software development cost is outsourcing: the use of contracted staff with specific skills working on large and small projects [5]. Often, the employed software development staff do not possess the skills required and it is too expensive, or time-consuming, for the existing staff to gain such skills. In such a situation, the use of contracted staff can reduce the development cost and save time. However, contracted staff can introduce other issues to the project which increase the risks to the company.
3.3 Delivering CI capability in the SDLC
Based on the previous discussion on the software industry, the Competitive Intelligence literature and CI
tools, as well as the information requirements of the software development industry, we propose the
following integrative approach for the application of CI to the software development and distribution
process.
In Table 2 we have identified information technologies that can be used through the software lifecycle,
listed against the CI phase they can facilitate. At each software development stage, an appropriate CI task
can inform the process with the required intelligence and provide the information required. For example,
in the Requirements Engineering stage System Dynamics and Mind mapping techniques are particularly
helpful as the developers must understand users’ needs and translate those to architectural blueprints.
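As an illustration of how the mapping in Table 2 could be used in practice, the short Python sketch below encodes a fragment of it as a lookup structure. The stage and phase labels follow Table 2, while the data layout and the helper function are our own illustrative assumptions rather than anything prescribed by the framework.

# Minimal sketch (not part of the original framework): a fragment of Table 2 as a
# lookup from SDLC stage and CI phase to suggested tool categories from Table 1.
CI_FRAMEWORK = {
    "Requirements": {
        "Identification of CI requirements": ["System Dynamics", "Mind mapping"],
        "Acquisition of competitive information": [
            "Profiling/Push technology", "Filtering/Intelligent Agent technology",
            "Web Searching", "Information Services/vendors",
        ],
        "Analysis of information": [
            "Text summarizing", "Text analyzing and structuring",
            "Analyzing and Reporting data",
        ],
    },
    "Design": {
        "Identification of CI requirements": [],          # n/a in Table 2
        "Acquisition of competitive information": ["Web Searching"],
        "Analysis of information": ["Text summarizing"],
    },
}

def suggest_tools(stage, phase):
    """Return the tool categories Table 2 suggests for a given SDLC stage and CI phase."""
    return CI_FRAMEWORK.get(stage, {}).get(phase, [])

if __name__ == "__main__":
    print(suggest_tools("Requirements", "Acquisition of competitive information"))

A development team could extend such a structure with its own tool choices per stage, which is exactly the kind of tailoring the framework is meant to support.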
4 Conclusions and Further Work
The theory and practice of Competitive Intelligence have been applied in various fields for years with a variety of effects, and CI is now growing as a distinct discipline and body of knowledge. However, in the ever-changing and ultra-competitive software industry, all but a few companies have yet to capitalise on the benefits of using CI tools, not to mention adopting a formal CI process. The lack of a model framework to integrate CI tools into the software development process could be a major cause of this. The development of such a model framework was the key focus of this paper.
To this end, we first examined the software development process and issues of this particular industry.
Then, we considered intelligence requirements of users, competitors and software development
technologies. We reviewed tools that can implement/facilitate the CI function along the software
development process. We outlined the relevant information technologies applicable to the CI activities
relevant to each stage. Hence our final product is produced by crossing the phases of the CI cycle with the
stages of a typical SDLC model and then looking into what CI tools would be applicable at each stage.
SDLC stage vs. CI phase (the numbers refer to the tools in Table 1):

Feasibility Study
  Identification of CI requirements: 1. System Dynamics; 2. Mind mapping; etc.
  Acquisition of competitive information: 3. Profiling/Push technology; 4. Filtering/Intelligent Agent technology; 5. Web Searching; 6. Information Services/vendors; etc.
  Organization, Storage, and Retrieval: 7. Document management; 8. Multipurpose Portals; 9. Text analyzing and structuring; 10. Groupware; 11. Data mining and data warehousing; etc.
  Analysis of information: 12. Text summarizing; 9. Text analyzing and structuring; 13. Analyzing and Reporting data; etc.
  Development of CI products: 12. Text summarizing; 13. Analyzing and Reporting data; etc.
  Distribution of CI products: 10. Groupware; 8. Multipurpose portals; 14. Intranet; 6. Information Services/vendors; etc.

Requirements
  As for the Feasibility Study row in all six CI phases.

Design
  Identification of CI requirements: n/a
  Acquisition of competitive information: 5. Web Searching; etc.
  Organization, Storage, and Retrieval: 7. Document management; 8. Multipurpose Portals; 9. Text analyzing and structuring; 10. Groupware; 11. Data mining and data warehousing; etc.
  Analysis of information: 12. Text summarizing; etc.
  Development of CI products: 12. Text summarizing; 13. Analyzing and Reporting data; etc.
  Distribution of CI products: 10. Groupware; 14. Intranet; etc.

Coding
  As for the Design row in all six CI phases.

Testing
  As for the Design row in all six CI phases.

Release or Delivery
  As for the Feasibility Study row in all six CI phases.

Table 2: A Model Framework for Integrating CI into SDLC.
Due to the nature of the software industry, the authors believe that there is much to be gained by the introduction of a formal CI process into the development stages. All but a handful of software developer organisations fail to embrace such technologies, despite their awareness of business intelligence and their exposure to the software product. The 'siloed' nature of the software market is largely forming what are, in essence, monopolies along specific product lines (e.g. browsers, operating systems or databases); companies often hold as much as 70 to 90% of a market (e.g. Internet Explorer is the preferred browser of as much as 70% of the market). This may detract from the awareness of competition and of the differentiating features of competitive products that, in healthy competition, is a factor for continuous improvement and added customer value.
At the time of concluding this paper, the authors are looking to evaluate the theoretical rigour, practical relevance and potential impact of the model, firstly by receiving expert feedback from industry on the model's features. We secondly intend to update the model to reflect the feedback received and then put it into practice by engaging actively in the development process of a typical software house.
Through this action case study, we will be able to collect first-hand empirical data and observe in a
measurable manner the added value of CI in the development process.
Acknowledgments. The authors would like to thank Mingna Xie, a graduate of Glamorgan’s Information
Security and Competitive Intelligence MSc programme, for her assistance with the numerous CI tools
review.
References
[1] David, J.S. & McCarthy, W. (2003). Agility - the key to survival of the fittest in the software market. Communications of the ACM, 46(5), 65-69.
[2] Bouthillier, F. & Shearer, K. (2003). Assessing Competitive Intelligence Software. Medford, NJ: Information Today.
[3] Kahaner, L. (1998). Competitive Intelligence: How to Gather, Analyze, and Use Information to Move Your Business to the Top. New York: Touchstone.
[4] Society of Competitive Intelligence Professionals. (n.d.). Frequently Asked Questions. Retrieved January 28, 2008, from http://www.scip.org/2_faq.php
[5] Thomas, P. & Tryfonas, T. (2005). An Interpretive Field Study of Competitive Intelligence in Software Development. Journal of Competitive Intelligence and Management, 3(3), 40-56.
[6] Tryfonas, T. & Thomas, P. (2006). Intelligence on Competitors and Ethical Challenges of Business Information Operations. In D. Remenyi (Ed.) ECIW 2006, Proceedings of the Fifth European Conference on Information Warfare and Security. (pp. 237-244). National Defence College, Helsinki, Finland.
[7] AL-Zayani, A.S. (2001). Software: A Historic View of its Development as a Product and an Industry. Retrieved from http://www.computinghistorymuseum.org/teaching/papers/research/software_historic_view_of_its_development_Alzayani.pdf
[8] European Patent Organisation. (n.d.) Microsoft and Patents. Retrieved January 28, 2008, from http://eupat.ffii.org/players/microsoft/
[9] European Patent Organisation. (n.d.) IBM and Software Patents. Retrieved January 28, 2008, from http://eupat.ffii.org/gasnu/ibm/index.en.html
[10] Hamlet, D., & Maybee, J. (2001). The engineering of software. Addison Wesley.
[11] Green, D. and DiCaterino, A. (1998). A Survey of System Development Process Models. Retrieved from http://www.ctg.albany.edu/publications/reports/survey_of_sysdev/survey_of_sysdev.pdf
[12] Imperato, G. (1998). Competitive Intelligence - Get Smart! FASTCOMPANY, 14, p. 269. Also retrievable from http://www.fastcompany.com/magazine/14/intelligence.html
[13] Persidis, A. (2003). Corporate Intelligence in a 'Corporately Intelligent' World. Journal of Competitive Intelligence and Management, 1(2), 87-99.
[14] du Toit, A. (2003). Competitive Intelligence in the Knowledge Economy. International Journal of Information Management, 23(2), 111-120.
[15] Central Intelligence Agency. (n.d.). The Intelligence Cycle. Retrieved January 28, 2008, from https://www.cia.gov/kids-page/6-12th-grade/who-we-are-what-we-do/the-intelligence-cycle.html
[16] Fuld, L.M. (1995). The new competitor intelligence. John Wiley & Sons.
[17] Carlin, S. and Womack, A. (1999). Strategic and Tactical Competitive Intelligence for Sales and Marketing. American Productivity & Quality Center (APQC) research report, issued May 1999 (can be bought on-line at http://www.researchandmarkets.com/reportinfo.asp?report_id=42812).
[18] CGI. (n.d.). Business Intelligence Enabling Transparency across the Enterprise. Retrieved January 28, 2008, from http://www.cgi.com/web/en/library/white_papers/p1.htm
[19] Herring, J. (1998). What is intelligence analysis? Competitive Intelligence Magazine, 1(2), 13-16.
[20] Choo, C.W. (2002). Information Management for the Intelligent Organization: The Art of Scanning the Environment. 3rd ed. Medford, NJ: Information Today.
[21] Wheeler, D. A. (n.d.). Why Open Source Software / Free Software (OSS/FS, FLOSS, or FOSS)? Look at the Numbers!. Retrieved January 28, 2008, from http://www.dwheeler.com/oss_fs_why.html
[22] Vriens, D. (2004). The Role of Information and Communication Technology in Competitive Intelligence. In D. Vriens (Ed.) Information and Communication Technology for Competitive Intelligence. (pp. 1-33). IRM Press.
[23] Hane, P.J. (2003). Copernic Launches Enterprise Search Product for the SME Market. Newsbreaks. Retrieved from http://www.infotoday.com/newsbreaks/nb031006-2.shtml
[24] Chen, H., Chau, M., & Zeng, D. (2002). CI-spider: A tool for competitive intelligence on the Web. Decision Support Systems, 34, 1-17.
[25] Teo, T.S.H. & Choo, W.Y. (2001). Assessing the impact of using the Internet for competitive intelligence. Information & Management, 39, 67-83.
[26] Cook, M. & Cook, C. (2000). Competitive Intelligence. London: Kogan Page.
[27] Graef, J.L. (1997). Using the Internet for competitive intelligence: a survey report. Competitive Intelligence Review, 8(4), 41-47.
[28] Cronin, B., Overfelt, K., Fouchereaux, K., Manzvanzvike, T., Cha, M., & Sona, E. (1994). The Internet and competitive intelligence: a survey of current practice. International Journal of Information Management, 14(3), 204-222.
[29] Rustmann Jr., F.W. (1997). The Craft of Business Intelligence: An American View. International Executive, 39(4), 459-464.
[30] Messerschmitt, D. & Szyperski, C. (2004). Marketplace Issues in Software Planning and Design. IEEE Software, special issue on "Software Return on Investment", May/June 2004.
[31] Porter, M.E. (1998). Competitive advantage: creating and sustaining superior performance. Free Press.
[32] Sandman, M.A. (2000). Analytical Models and Techniques. In J.P. Miller (Ed.) Millennium Intelligence. (pp. 69-95). Medford, NJ: Information Today.
[33] Davis, A.M. (2004). Great Software Debates. Wiley.
[34] Oracle Crystal Ball. (n.d.). Scenario Analysis User Manual. Retrieved January 28, 2008, from http://crystalball.com/support/cbtools/scenariotool.pdf
Validation of the Network-based Dictionary Attack Detection
Jan Vykopal
Tomáš Plesník
Pavel Minařík
[email protected]
[email protected]
[email protected]
Institute of Computer Science
Masaryk University
Brno, Czech Republic
Abstract
This paper presents a study of successful dictionary attacks against an SSH server and their network-based detection. On the basis of our experience in protecting a university network, we developed a detection algorithm based on a generic SSH authentication pattern. Thanks to the network-based approach, the detection algorithm is host independent and highly scalable. We deployed a high-interaction honeypot based on VMware to validate the SSH dictionary attack pattern that is able to recognize a successful attack. The honeypot provides several user accounts secured by both weak and strong passwords. All the communication between the honeypot and other hosts was logged at both the host and the network layer (the relevant NetFlow data were stored too). After a successful or unsuccessful break-in attempt, we could reliably determine the detection accuracy (the false positive and negative rate). The pattern was implemented
using a dynamic decision tree technique, so we can propose some modifications of its parameters based on
the results. In addition, we could validate the pattern because the detection relies only on the NetFlow
data.
This study also discusses the performance details of the detection method and reveals the methods and behaviour of present-day successful attackers. These findings are then compared to the conclusions of the previous study. In our future work, we will focus on extending the detection method to network services and protocols other than SSH. Further, the method should also provide some reasons for the decision that an attack occurred (e.g., a distributed dictionary attack).
Keywords:
dictionary attack, SSH, NetFlow, attack pattern, validation, honeypot.
1 Introduction
In our previous paper [1], we proposed a SSH dictionary attack pattern that is able to recognize
a successful attack. We inspected logs on attacked hosts and then identified appropriate traffic in NetFlow
data collected at the border of a university network or its subnets. As a result, we derived the following
dictionary attack pattern at NetFlow level:
• TCP port of the victim is 22, TCP port of the attacker is generally random and greater than 1024,
• many flows (tens or hundreds) from the attacker to the victim in a short time window
(5 minutes),
• the flows from the attacker are small: from 10 to 30 packets and from 1 400 to 5 000 bytes,
• victims' responses are small too (typically the same number of packets and bytes),
• flow duration is up to 5 seconds,
• the last flow is different in case of successful attempt.
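To make the pattern concrete, the following minimal Python sketch (not taken from the authors' implementation) checks a single bidirectional flow record against the static bounds listed above; the field names of the flow record are our own assumptions.

# Minimal sketch (assumed field names, not the authors' code): testing whether a
# bidirectional SSH flow record matches the static dictionary-attack pattern above.
def matches_ssh_attack_pattern(flow):
    """flow is assumed to carry ports, packets, bytes and duration for the
    attacker-to-victim direction plus the victim's reply."""
    return (
        flow["dst_port"] == 22 and flow["src_port"] > 1024       # SSH victim, ephemeral attacker port
        and 10 <= flow["packets"] <= 30                           # small request flow
        and 1400 <= flow["bytes"] <= 5000
        and 10 <= flow["reply_packets"] <= 30                     # reply is similarly small
        and 1400 <= flow["reply_bytes"] <= 5000
        and flow["duration"] <= 5.0                               # seconds
    )

# The "many flows in a 5-minute window" condition is then evaluated per
# (attacker, victim) pair over the stream of matching flows.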
Next, we analysed network traffic of various applications that utilize the SSH protocol (ssh, scp, Putty,
WinSCP, sftp and rsync) to eliminate false positives. In short, we did not find any traffic that fully
corresponds to the attack pattern derived from real dictionary attacks. What is more, the attack pattern
matches all flows during our simulated attacks and, of course, the attacks captured in log files at the
beginning of our research. The main idea of the detection algorithm is to monitor and tune the key indicators (flow duration, number of packets and number of bytes transferred in the victim's reply to the attacker) of the proceeding attack between an attacker and its victim and to observe significant changes of these indicators. After a pre-specified number of attempts, an unsuccessful attack is reported. A sudden and significant change of the flow characteristics followed by the end of the attack may indicate a successful attack. Since attacks may vary among different attackers and victims, we need a truly adaptable approach to detect a successful attack. According to these requirements, we chose the decision tree method to implement the algorithm. Using a dynamic decision tree, we are able to store network traffic statistics, attack indicators and relevant detection parameters persistently and, most importantly, to build the tree according to the attacks in progress.
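A minimal sketch of what such a dynamically growing decision tree could look like is given below; the node interface (a predicate, an optional operation and children created per attacker-victim key) is an illustrative assumption, not the authors' data structure.

# Minimal sketch of a dynamically growing decision tree (illustrative only):
# each node holds a predicate and an optional operation; children for new
# (attacker, victim) pairs are created on demand as flows are classified.
class TreeNode:
    def __init__(self, predicate, operation=None):
        self.predicate = predicate          # flow -> bool
        self.operation = operation          # side effect, e.g. update statistics
        self.children = {}                  # key -> TreeNode, grown at run time

    def classify(self, flow, key_fn):
        """Route a flow through this node; create a child for unseen keys."""
        if not self.predicate(flow):
            return None
        if self.operation:
            self.operation(flow)
        key = key_fn(flow)
        if key not in self.children:        # grow the tree for a new attacker/victim pair
            self.children[key] = TreeNode(lambda f: True)
        return self.children[key]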
In this paper, we evaluate the detection accuracy (the false positive and negative rate). We deployed a high-interaction honeypot and examined the behaviour of attackers. Incoming network traffic to a honeypot is assumed to be malicious by definition; common network traffic should not reach the honeypot. When an attacker conducts a successful dictionary attack, we are notified via e-mail. We can immediately validate whether the detection algorithm is accurate, i.e. whether it detected the dictionary attack itself and whether it correctly determined that the attack was successful or not. In case of inaccurate detection, we can tune the parameters of the detection pattern, run the improved pattern and validate the results again.
Details of the test bed are described in the following section. Section 3 contains experimental results. Related work is summarized and compared to our results in Section 4. Section 5 concludes the paper.
2 Test bed
To validate the results of the described detection method, we deployed several high-interaction honeypots and NetFlow probes in the Masaryk University network. Figure 1 depicts the whole test bed.
Figure 1: Test bed.
2.1 Honeypots
We decided to deploy high-interaction honeypots because they are real systems, not only software simulating real systems. In order to lower costs and minimize maintenance effort, we employed VMware Server as a virtualization platform (host) and installed five virtual honeypots (guests) upon it. The guests were running Ubuntu 8.10 Server Edition with a patched OpenSSH 5.1p1 server. The SSH server was modified in the following way:
• the standard log file (auth.log) contains the user names and even the passwords entered in the authentication process,
• a copy of the standard log file (auth.log) is stored in an uncommon path in the file system,
• an e-mail alert is sent after a successful authentication via password.
Each guest machine provides ten user accounts and one superuser account for maintenance purposes. These accounts were reachable from the whole Internet. We chose common user names and passwords on the basis of our previous research [1] and other studies [3, 4]. The superuser account 'root' was disabled.
The SSH daemon was listening on TCP port 22. All other services and daemons were disabled. The guest machines only reply to ICMP echo requests, reset TCP connections to ports other than TCP port 22, and send ICMP port unreachable messages in case of a connection attempt to UDP ports. All outbound traffic from the guests was shaped to 32 kbps by Traffic Control in the Linux kernel [5].
After we had observed tens of scans of TCP port 80, we deployed the lighttpd web server on all honeypots to attract other attackers.
2.2 NetFlow Probes and Collector
All incoming traffic passes the edge router and is monitored by a FlowMon [6] probe connected via a SPAN (mirror) port. In addition, another FlowMon probe monitors all traffic between the guests and other hosts in the university network or the Internet. As a result, all network traffic of the honeypots (regardless of its source or destination) is captured in NetFlow data exported by both FlowMon probes. The NetFlow collector nfdump [7] stores the NetFlow records and serves NetFlow data to the detection module.
A flow is defined as a unidirectional sequence of packets with some common properties that pass through a network device [8]. The flow is commonly identified by the following 5-tuple: (srcIP, dstIP, proto, srcPort, dstPort). In particular, there is no information about the payload. NetFlow provides traffic aggregation (sum of packets, bytes, flow duration etc.) and is thus suitable for multigigabit networks.
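For illustration, a minimal Python sketch of 5-tuple aggregation over exported flow records follows; the comma-separated input format is an assumption about how collector output might be exported, not the actual nfdump output format.

# Minimal sketch (column layout is an assumption): grouping exported flow
# records by the NetFlow 5-tuple and aggregating packets and bytes.
from collections import defaultdict

def aggregate_flows(lines):
    """lines: iterable of 'srcIP,dstIP,proto,srcPort,dstPort,packets,bytes,duration'."""
    totals = defaultdict(lambda: {"packets": 0, "bytes": 0, "flows": 0})
    for line in lines:
        src, dst, proto, sport, dport, pkts, byts, dur = line.strip().split(",")
        key = (src, dst, proto, int(sport), int(dport))   # the 5-tuple flow key
        totals[key]["packets"] += int(pkts)
        totals[key]["bytes"] += int(byts)
        totals[key]["flows"] += 1
    return totals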
2.3 Detection Module
The dictionary attack detection module is generally based on the decision tree technique. In contrast to traditional decision trees, which are static sets of conditions organized in a structure, the tree presented here grows dynamically according to the classified data. Each tree node allows a set of conditions and operations to be connected. The satisfaction of the given conditions controls the data flow through the decision tree and the execution of the operations.
The detection module works with NetFlow data (it processes individual flows). The main idea of the detection algorithm is to focus on changes in the key attack indicators (flow duration, number of packets and number of bytes transferred in the victim's reply to the attacker) of the proceeding attack. A sudden or significant change of the attack indicators followed by the end of the attack may indicate a successful attack. The detection module starts with a set of input flows sorted by flow_start and processes the flows one by one in sequence. Detection also starts with generic bounds for the attack indicators that fit most attacks. For each (attacker, victim) pair, arrays of attack indicators are built (a duration array, a packets array and a bytes array) until the attack attempts threshold is reached. New bounds for the given attacker and victim are then calculated using toleration parameters. From that moment on, each flow within the bounds is considered an unsuccessful attack attempt and each flow out of bounds is considered a successful attack. A more detailed and formal explanation of the detection algorithm is available in [1].
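The following minimal Python sketch illustrates the adaptive-bounds idea described above, using the parameter names from Section 3.2; the per-flow field names and the exact bound computation are our own simplifying assumptions rather than the formal algorithm of [1].

# Minimal sketch of the adaptive bounds idea (not the authors' implementation);
# parameter names follow Section 3.2, per-flow field names are assumptions.
class PairState:
    """Detection state for one (attacker, victim) pair."""
    def __init__(self, params):
        self.params = params
        self.durations, self.packets, self.bytes = [], [], []
        self.bounds = None                      # learned after enough attempts

    def _learn_bounds(self):
        p = self.params
        self.bounds = {
            "duration": (min(self.durations) / p["deltaDurationCoefficient"],
                         max(self.durations) * p["deltaDurationCoefficient"]),
            "packets":  (min(self.packets) / p["deltaPacketsCoefficient"],
                         max(self.packets) * p["deltaPacketsCoefficient"]),
            "bytes":    (min(self.bytes) / p["deltaBytesCoefficient"],
                         max(self.bytes) * p["deltaBytesCoefficient"]),
        }

    def observe(self, reply_flow):
        """Return 'successful', 'unsuccessful' or None (still learning) for one reply flow."""
        if self.bounds is None:
            self.durations.append(reply_flow["duration"])
            self.packets.append(reply_flow["packets"])
            self.bytes.append(reply_flow["bytes"])
            if len(self.durations) >= self.params["manyAttackAttempts"]:
                self._learn_bounds()            # an attack in progress is now reported
            return None
        in_bounds = all(
            self.bounds[k][0] <= reply_flow[k] <= self.bounds[k][1]
            for k in ("duration", "packets", "bytes")
        )
        return "unsuccessful" if in_bounds else "successful"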
From a performance point of view, the detection module based on decision trees is able to handle thousands of events per second on COTS (commercial off-the-shelf) hardware. Particular results will depend on the actual tree structure. An analysis of performance demonstrates that the approach presented is able to handle the detection of dictionary attacks in real time in a large high-speed network (10 000 computers, 10 Gigabit Ethernet interface, approximately 220 million flows per day in total, approximately 1.5 million SSH flows per day). Particular performance results are presented in Section 3.3.
3 Results
First of all, we define some important notions. An unsuccessful attack is at least 20 repeated attempts to log in to a guest machine in a short time, originating from a single IP address. Two contiguous attempts must occur within 30 minutes, otherwise they are considered to belong to two different attacks. An attack is successful if and only if the password provided by the attacker is correct and he or she successfully logs in.
A TCP SYN or TCP SYN/RST scan (of port X) is a reconnaissance probe to a server (to TCP port X) where no TCP connection is established (the connection handshake is exploited); it originates from a single IP address. Similarly, a UDP scan is a UDP probe to a server originating from a single IP address, regardless of the server's reply.
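A minimal sketch of the grouping rule defined above, assuming the login-attempt flows of one attacker-victim pair are available as a sorted list of start times in seconds:

# Minimal sketch of the grouping rule: attempts from one attacker to one victim
# are split into separate attacks when two contiguous attempts are more than
# 30 minutes apart; an attack needs at least 20 attempts to be counted.
def split_into_attacks(timestamps, gap_seconds=1800, min_attempts=20):
    """timestamps: sorted start times (seconds) of login-attempt flows."""
    attacks, current = [], []
    for ts in timestamps:
        if current and ts - current[-1] > gap_seconds:
            if len(current) >= min_attempts:
                attacks.append(current)
            current = []
        current.append(ts)
    if len(current) >= min_attempts:
        attacks.append(current)
    return attacks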
3.1 Behaviour of Attackers
We observed a total of 65 SSH dictionary attacks during a 23-day period. Despite the fact that the fifty user accounts on the five machines were secured by weak passwords, only 3 attacks were successful (4.61 %). We also observed fewer than 20 repeated log-in attempts on 16 occasions.
No traffic (including TCP and UDP scans) originating in the defended network and destined to the honeypots was observed. On the contrary, a total of 938 TCP and 501 UDP scans originating outside the network were logged. Table 1 shows the numbers and types of scans destined to particular honeypots.
Honeypot | Total number of TCP scans | Number of TCP SYN scans | Number of TCP SYN/RST scans | Total number of UDP scans
H1 | 164 | 157 | 7 | 81
H2 | 203 | 191 | 12 | 108
H3 | 195 | 183 | 12 | 107
H4 | 202 | 190 | 12 | 109
H5 | 174 | 154 | 20 | 96
Table 1: Numbers and types of scans destined to particular honeypots.
The most popular TCP port was 1433 (MS SQL) with 197 scans, followed by 80 (HTTP) with 79 scans and 4899 (radmin) with 67 scans. Considering UDP ports, the majority of scans were aimed at ports 1026 and 1027 (106 and 83 scans, respectively).
Considering scans of the standard SSH port (TCP port 22), we observed that 21 of 34 scans were followed by SSH dictionary attacks originating from the same IP address. The time between the scan and the attack varied from 6 minutes to 2 hours. In the case of the successful attacks, only one of the three attacks was preceded by a scan as defined above. The other two successful attacks were preceded by the establishment of a TCP connection to port 22 (further referred to as an SSH scan) about 1 and 9 hours before the attack. According to the log file, the attackers did not try any password; the log files say that sshd did not receive an identification string from the attacker's IP address.
Next, we identified the following attack scenarios and groups (AGx for short) comprising both successful and unsuccessful attacks. We chose all attacks conducted by intruders that were successful at least once.
The first successful attack (in AG1) was preceded by an SSH scan of all five honeypots. Finally, the attacker logged in on only one honeypot as "guest" with the password "guest123". He or she was successful in 1 minute and 4 seconds, after 44 log-in attempts. The vast majority of attempts used a password identical to the username. After the successful break-in, the attacker continued with the dictionary attack until the honeypot was shut down. The total number of log-in attempts was 2 191. No other attacker activity was observed (e.g., modification of the filesystem or file downloading).
The second successful attack (in AG2) was preceded by a TCP SYN/RST scan of all five honeypots. Similarly to AG1, although the attacker conducted dictionary attacks against all honeypots, he or she was successful on only one honeypot. After 56 attempts tried in 3 minutes and 6 seconds, the attacker was logged in as "guest" with the password "12345". Again, no other attacker activity was observed. After 4.5 hours, the attacker performed another TCP SYN/RST scan of the same honeypot and then, after 38 minutes, tried to log in as users other than "guest" nine times.
The third successful attack (in AG3) was preceded by an SSH scan of four honeypots. Similarly to AG1 and AG2, the attacker conducted attacks against four honeypots, but succeeded on only one host after 401 attempts in 21 minutes and 48 seconds. He or she was logged in as "test" with the password "qwerty". Again, the majority of log-in attempts used a password identical to the username and no other attacker activity was observed. Further, the attacker continued with attacks against the other three honeypots.
3.2 Parameters of Detection and Results
The detection was performed with the default parameters set as follows:
• bppLimitReplies = 250; maximum bytes per packet in a SSH login reply flow, other flows are ignored.
• bppLimitRequests = 150; maximum bytes per packet in a SSH login request flow, other flows are ignored.
• deltaBytesCoefficient = 1.2; tolerance in the difference of bytes in a flow to report a successful attack.
• deltaDurationCoefficient = 1.2; tolerance in the difference of flow duration to report a successful attack.
• deltaPacketsCoefficient = 1.2; tolerance in the difference of packets in a flow to report a successful attack.
• failedAttackResponseBytes = "1400,5000"; initial bounds for bytes in a flow representing an attack.
• failedAttackResponseDuration = "0.400,5.000"; initial bounds for the duration of a flow representing an attack.
• failedAttackResponsePackets = "10,30"; initial bounds for packets in a flow representing an attack.
• manyAttackAttempts = 20; minimal count of attempts to report an attack.
• tsDeltaThreshold = 1800; minimal slack in seconds between two flows to distinguish between two different attacks from a single attacker to a single victim.
The parameters are described in detail in [1].
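For illustration only, the defaults above can be collected into a single configuration object; the dictionary below simply mirrors the listed parameter names and values, while the detector that would consume it remains hypothetical (cf. the sketch in Section 2.3).

# Illustrative only: the default parameters above, collected as one configuration.
# Bounds are stored as tuples here for convenience (the original values are strings).
DEFAULT_PARAMS = {
    "bppLimitReplies": 250,
    "bppLimitRequests": 150,
    "deltaBytesCoefficient": 1.2,
    "deltaDurationCoefficient": 1.2,
    "deltaPacketsCoefficient": 1.2,
    "failedAttackResponseBytes": (1400, 5000),
    "failedAttackResponseDuration": (0.400, 5.000),
    "failedAttackResponsePackets": (10, 30),
    "manyAttackAttempts": 20,
    "tsDeltaThreshold": 1800,
}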
Next, we validated the detection of the three selected attack groups AG1, AG2 and AG3. All 14 SSH dictionary attacks (5 in AG1, 5 in AG2 and 4 in AG3) were detected. What is more, no other attacks were detected and all attacks were correctly labeled as "successful" or "unsuccessful" except one. The one misclassified attack, a successful attack labeled as unsuccessful, is caused by the fact that the attacker continued with the attack after the successful intrusion, contrary to our expectations. The detection was run within a time window of each attack on NetFlow data collected on the honeypot interface eth1 (see Figure 1).
3.3 Performance Analysis
To obtain the results of the performance tests, we also used NetFlow data collected on the SPAN port of the edge router as input. The processed traffic was a few times greater than on the honeypot interface. Unfortunately, the detection accuracy was not satisfactory in this case. This was caused by biased primary data, i.e. the network packets provided by the SPAN port; for instance, the edge router mirrors the passing packets in a nondeterministic way.
Table 2 shows the number of processed flows, the duration in seconds of the key operations of the whole detection, and the performance measured in flows per second:
• data_delivery comprises loading data from nfdump and receiving them at the detection module server,
• stored_in_memory stands for storing the data in memory,
• pairs_created is the operation of flow pairing according to [9],
• port_filtered is the filtering of TCP traffic (port 22),
• break_detected is the SSH dictionary attack detection itself, including storing the results in a relation table.
This measurement was done for all three attack groups.
Operations | AG1 | AG2 | AG3
data_delivery | 11 | 1 | 5
stored_in_memory | 114 | 9 | 46
pairs_created | 204 | 10 | 37
port_filtered | 41 | 6 | 17
break_detected | 161 | 53 | 218
Number of processed flows | 429 607 | 36 781 | 180 669
Performance of the detection module | 2 668 | 693 | 828
Total time | 531 | 79 | 323
Overall performance in flows per second | 809 | 466 | 559
Table 2: Duration of key operations and performance of the detection module for selected attacks.
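As a quick arithmetic cross-check of Table 2, the overall performance column follows from dividing the number of processed flows by the total time; a short computation with the values taken from the table:

# Cross-check of the overall performance column in Table 2 (flows per second).
flows = {"AG1": 429_607, "AG2": 36_781, "AG3": 180_669}
total_time = {"AG1": 531, "AG2": 79, "AG3": 323}
for ag in flows:
    print(ag, round(flows[ag] / total_time[ag]))   # ~809, ~466, ~559 flows/s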
4 Related Work
Our results concerning the behaviour of attackers can be compared to the results of [3] and [4]. Both studies utilize honeypots to create an attacker profile. In contrast to these studies, we did not observe any activities such as downloading, installing or running malicious software, or password changes, in the case of the three successful attacks. But we can confirm the very low percentage of successful attacks. Generally, the attempted username and password patterns are very similar to those in [3] and [4]. Next, we observed that the majority of attacks were preceded by TCP scans, which differs from the findings in [4]. On the other hand, we confirm other findings in [4] that the attacks follow very simple and repetitive patterns; for instance, attacks continued although the attacker had already guessed the correct password.
5 Conclusions
The achieved results show that network-based attack detection has a large potential to substitute for traditional host-based methods. During the detection pattern evaluation, we identified only one false negative, when a successful attack was identified as an unsuccessful one. Another important result is the validation of primary data quality with respect to probe wiring: when a SPAN port connection is used, the quality of the primary data drops rapidly, which influences the results of the attack detection. From the performance point of view, the presented method is capable of processing the whole university SSH traffic in real time.
Concerning the behaviour of successful attackers, we observed no malicious activities on the host or in the network traffic in the hours after the intrusions. Surprisingly, one attacker continued with the dictionary attack after a successful log-in. This behaviour could point to a low-skilled attacker.
In our future work, we will focus on the deployment of the on-line detection method, which has already started. We would also like to validate the SSH attack pattern on other authenticated services such as FTP or web logins.
Acknowledgement
This work was supported by the Czech Ministry of Defence under Contract No. SMO02008PR980OVMASUN200801.
References
[1]
Vykopal, J., Plesnik, T., and Minarik, P.: Network-based Dictionary Attack Detection, in Proc. Of
ICFN 2009, Bangkok, pp. 23-27, 2009. ISBN 978-1-4244-3579-1.
[2]
VMware, Inc. web site. http://www.vmware.com/download/server/.
[3]
Ramsbrock, D., Berthier, R., and Cukier, M.: Profiling Attacker Behavior Following SSH
Compromises, in Proc. 37th Annual IEEE/IFIP International Conference on Dependable Systems
and Networks, pp.119-124, 2007.
[4]
Alata, E., Nicomette, V., Kaâniche, M., Dacier, M., and Herrb, M.: Lessons learned from the
deployment of a high-interaction honeypot, in Proc. 6th European Dependable Computing
Conference (EDCC-6), Coimbra, pp. 39-44, 2006. http://arxiv.org/pdf/0704.0858.
[5]
Linux Advanced Routing & Traffic Control. http://lartc.org/.
[6]
FlowMon probe web site. http://www.invea-tech.com/products/flowmon.
[7]
Nfdump web site. http://nfdump.sourceforge.net/.
[8] Claise, B.: Cisco Systems NetFlow Services Export Version 9. RFC 3954 (Informational), 2004. http://www.ietf.org/rfc/rfc3954.txt
[9] Trammell, B. and Boschi, E.: Bidirectional Flow Export Using IP Flow Information Export (IPFIX). RFC 5103, 2008. http://www.ietf.org/rfc/rfc5103.txt
Cryptographic Devices for the Protection of EU, NATO and National
Classified Information
Ing. Jiří Douša
[email protected]
ATS-TELCOM PRAHA a.s.
Milíčova 14, 130 00 Praha 3
1 Does every category of information need its own cryptographic device?
Electronic processing and transmission of classified information often requires the use of cryptographic
devices (hereinafter "CD"). Their use for the protection of classified information is governed by national
legislation 1. Since the Czech Republic is a member of international organisations (NATO and the EU), the
security policies of these organisations must be respected as well 2. The security policies impose restrictive
requirements on the use of CDs and on their integration into information and communication systems.
Particular attention is paid in the security policies to the management and distribution of cryptographic keys.
Especially for the protection of classified information at the levels Secret, NATO SECRET, SECRET UE or
higher, consistent application of these regulations creates the need to separate the cryptographic protection of
the information of the individual international organisations from that of national information, which
ultimately led to several different CDs performing the same function in a national information or
communication system. This approach is financially very demanding. These problems became fully apparent
after new member states joined NATO and the EU.
1.1 How the requirements on a cryptographic device and its management are changing
National and supranational security authorities are working on the standardisation of CD parameters that will
allow a single CD to be approved (certified) for use in both NATO and EU organisations and also for national
use.
Work is also under way on the unification of CD parameters and of their cryptographic key management, so
that CDs from different manufacturers are mutually compatible, including key management. The SCIP project
within NATO can serve as an example.
Where national classified information (internal information that cannot be provided to other states) has to be
protected, a CD that is fully under the control of the national authority is required. A national CD must be
incompatible with the CDs of other states or organisations, but it may use a common, approved (certified)
basis.
With courier transport being limited and the personnel for the management of cryptographic material being
reduced, the requirement for electronic key management, known as EKMS, has come to the fore. Such a
system not only allows planning, generation, electronic distribution and management of cryptographic keys
for CDs, but also keeps complete records of all cryptographic material (for example CDs), allows the recorded
material to be checked and accounted for, and prints reports in the required form. For cryptographic keys that
have to be loaded into a CD on a carrier, it allows the keys to be filled into carriers at a remote point (e.g. by
the national authority) without the need to physically transport them from the place of generation.
1 Act No. 412/2005 Coll., on the Protection of Classified Information and on Security Eligibility, as amended.
2 NATO security policy CM(2002)49, EU Council Decision (2001/264/EC).
1.2 A dual cryptographic device containing two algorithms
Migrating to any standard, or making any change to a hardware CD, means a substantial intervention in its
security parameters followed by complete re-certification, unless the CD was designed for such a change. The
requirement for flexibility and easy implementation of changes to the security parameters of a CD (above all
changes to the cryptographic algorithm and the cryptographic key system) unfortunately conflicts with the
requirements on the construction and design of CDs for higher classification levels (Confidential and above).
This requirement can be addressed, for example, by a "dual-mode CD". Such a CD has an architecture that
allows two cryptographic algorithms, cryptographic key stores and their control and supervisory units to be
inserted while keeping them securely separated (two independent modes). A dual CD can be initialised in one
of the two modes: one may be, for example, a NATO algorithm (the Einride chip), the other may be a national
algorithm or another standard. A concrete solution is the Cryptel-IP system produced by THALES Norway AS
(TCE 621B-Dual, TCE 621C-Dual) 3; these CDs can be initialised as TCE 621B / TCE 621C 4 or as
TCE 621B-AES / TCE 621C-AES 5.
1.3 A dual cryptographic device containing one algorithm
By inserting only one (national) algorithm 6, the national CDs TCE 621B/CZ 7 and TCE 621C/CZ 8 were
created in the Czech Republic. These CDs are always accompanied by the KGC key generation centre and by
the TCE 671 (or TCE 671/CZ) centre for key management and distribution.
The unification of NATO and EU requirements on CDs has resulted in a family of algorithms accepted by
both organisations. A concrete product is SECTRA Tiger XS by SECTRA Communication AB, which can be
used to protect both EU 9 and NATO 10 classified information. Its current certified form is dual (meeting the
parameters for either the EU or NATO), which allows it to connect to a NATO or EU key management
centre. The SECTRA Tiger XS CD is accompanied by the KGC key generation centre and the SMC centre
for key management and distribution.
2 Cryptographic devices and key-management equipment on offer
ATS-TELCOM PRAHA a.s. is the exclusive importer of the Cryptel-IP cryptographic system of THALES
Norway AS and of the Tiger XS CD, including Tiger XS Office, of SECTRA Communications AB. It is also
the final manufacturer of CDs derived from the Cryptel-IP system. It supplies these technologies to state
organisational units and the state administration of the Czech Republic. These are mainly the following
security products:
• TCE 621/B, TCE 621/C, TCE 671 and TCE 114 (KGC),
• TCE 621B/CZ, TCE 621C/CZ, TCE 671/CZ and TCE 114/CZ (KGC/CZ),
• SECTRA Tiger XS, SECTRA Tiger XS Office, SECTRA KGC (key generation centre)
  and SECTRA SMC (key management and distribution centre),
• eCustodian.
3 evaluation by NSM Norway for the classification level Secret (No. V13/2008-NBÚ/ÚR2)
4 NATO evaluation (No. MCM-0033-2008 for Cosmic Top Secret), NBÚ evaluation (No. K20079 for Secret and NATO Secret)
5 type B algorithm according to NATO regulation No. D0047 Rev.2
6 type A algorithm according to NATO regulation No. D0047 Rev.2
7 NBÚ evaluation No. K20104 for Confidential, NATO Confidential, Confidentiel UE
8 NBÚ evaluation No. K20105 for Confidential, NATO Confidential, Confidentiel UE
9 EU evaluation No. 6499/07 for Secret UE
10 NATO MCM evaluation of 8 October 2008 for NATO Secret
2.1 TCE 621/B, TCE 621/C, TCE 671 and KGC
TCE 621/B and TCE 621/C are fully hardware CDs designed to protect data transmitted at the IP protocol
level at speeds of up to 1 Gbit/s. They support IPv4 and IPv6, redundancy, multicast and NAT. They can
operate in manual mode (with manual distribution and management of cryptographic keys generated in the
independent KGC key generation centre) and in automatic mode with remote key management. In automatic
mode, the TCE 671 key management and distribution centre is used, which provides on-line distribution of
cryptographic keys, network supervision and distribution of access rights. They are fully compatible with the
first-generation CD TCE 621 IP. TCE 621/B, TCE 621/C, TCE 671 and KGC are approved by the NATO
MC for the protection of classified information up to NATO Cosmic Top Secret and are certified in the
Czech Republic by the NBÚ for the protection of classified information up to NATO Secret and Secret 4.
2.2 TCE 621B/CZ, TCE 621C/CZ, TCE 671/CZ and KGC/CZ
These are national CDs based on TCE 621/B-Dual and TCE 621/C-Dual into which only an expansion
cryptographic board with the national algorithm is inserted; the algorithm is under the full control of the
national authority, the NBÚ. The technical parameters are similar to those of the TCE 621/B and TCE 621/C
devices described in section 2.1; the properties and parameters of TCE 671/CZ and KGC/CZ are likewise
similar. The TCE 621B/CZ, TCE 621C/CZ, TCE 671/CZ and TCE 114/CZ units are shown in Figure 1.
TCE 621B/CZ, TCE 621C/CZ, TCE 671/CZ and KGC/CZ are certified in the Czech Republic by the NBÚ
for the protection of classified information up to the level Confidential, NATO CONFIDENTIAL,
CONFIDENTIEL UE 7.
Figure 1: National cryptographic devices based on the Cryptel-IP system.
2.3 SECTRA Tiger XS, SECTRA Tiger XS Office, SMC and KGC
SECTRA Tiger XS is a personal cryptographic device for the protection of classified information transmitted
over GSM, PSTN, ISDN and satellite networks. Tiger XS can protect voice, SMS or a data file. On the black
side, Tiger XS connects to a communication terminal (to a mobile or satellite phone via Bluetooth, or to a
modem via an RS 232 interface). On the red side, used for the input and output of classified data, Tiger XS is
connected by a USB cable. A hands-free set is used for voice communication. The Tiger XS and Tiger XS
Office units are shown in Figure 2.
SECTRA Tiger XS Office is an expansion station into which the Tiger XS is inserted. It allows the Tiger XS
to be used in an office environment. It contains a built-in modem that connects to an analogue subscriber line
of a public or private branch exchange, through which the inserted Tiger XS communicates over the fixed
telephone network. For more comfortable voice communication, the Tiger XS Office expansion station is
equipped with a telephone handset, which replaces the hands-free set of the inserted Tiger XS. Tiger XS Office
also allows an analogue or digital fax machine and a PC to be connected to the red side of the inserted
Tiger XS. Tiger XS Office connects to an ISDN subscriber line via an external modem.
Tiger XS stays activated for at most 30 hours; in the active state it can transmit protected data and contains
classified information. In the inactive state Tiger XS contains no classified information and is categorised as
CCI 11; a CIK is used for activation. Tiger XS can operate in customer mode with manual distribution of
group cryptographic keys and in automatic mode with remote key management using the SMC. Cryptographic
keys are generated by the independent KGC key generation centre. The SMC centre generates and
automatically distributes pairwise keys. Tiger XS and Tiger XS Office are certified in the Czech Republic by
the NBÚ for the protection of classified information up to the levels Secret and Secret UE 12.
Figure 2: The Tiger XS and Tiger XS Office units.
11 NBÚ evaluation, supplement No. 1 to K20110
12 NBÚ evaluation No. K20110 for Secret and Secret UE
2.4 eCustodian
The eCustodian product 13 was developed for electronic key management (EKMS) of the Cryptel-IP system
and for the management of cryptographic material; it is fully compatible with the NATO DEKMS system.
The eCustodian electronic key management system (EKMS) is a product comprising hardware and software
for planning, ordering, generating, re-encrypting, storing and distributing cryptographic keys and, where
needed, for outputting them to carrier media. Cryptographic keys are stored and distributed in unclassified
(encrypted) form and are audited and centrally accounted for throughout their existence. eCustodian also
provides electronic accounting, management and movement control of physical cryptographic material
(e.g. CDs). Figure 3 shows the components and functions of eCustodian, including the connection to a
DEKMS terminal.
Figure 3: The software and hardware components of eCustodian and their functions and relations.
3 Provided services and their expected development
ATS-TELCOM PRAHA a.s. provides and arranges servicing of the supplied CDs and runs a hotline for them.
For all supplied products and systems, ATS-TELCOM PRAHA a.s. provides training for administrators and
users, including training documentation in Czech. It also prepares technical and operational documentation or
translates it into Czech.
ATS-TELCOM PRAHA a.s. is the final manufacturer of the TCE 621/CZ CDs. For these CDs it provides full
servicing in the Czech Republic, including user support for their operation.
ATS-TELCOM PRAHA a.s. holds NBÚ certificate No. 000923 for accessing, providing and originating
classified information up to the level Secret and has cryptographic protection personnel at its disposal. The
company provides user support and assistance with the design, deployment and operation of the supplied
systems and CDs.
ATS-TELCOM PRAHA a.s. intends to further extend the use of dual-mode CDs so that they can be used for
NATO, EU and national classified information. For the Cryptel-IP system we expect to start certification in
the Czech Republic of the TCE 621/B-Dual and TCE 621/C-Dual CDs in the CZ version for the levels
NATO Secret, national Secret and Confidentiel UE. For the SECTRA Tiger XS CD we expect to start
extending its certification to the level NATO Secret.
13 NATO evaluation (No. MCM-173-01 for the distribution of cryptographic material of all classification levels)
Research into Universal and Complex Authentication and Authorisation
for Fixed and Mobile Computer Networks
Viktor Otčenášek; RNDr. Pavel Sekanina, MSc.
[email protected]; [email protected]
AutoCont CZ a.s.
Kounicova 67a, Brno, Czech Republic
1 Introduction
Nothing is one hundred percent secure, but how much safer is logging in with a smart card than with a
username and password? Is logging in using biometrics safer than logging in with a smart card? If so, by how
much? And why, exactly?
We can partly draw inspiration from experience with the physical security of premises or from the field of
protection of classified information. There, certain standards, rules and security categories exist which say: if
you want your system secured at this level, you must meet this, this and also this.
In the field of authentication no uniformly recognised scale exists yet. There are standards in individual areas
of security, and there is also a very successful project in the area of threat assessment – CVSS (the Common
Vulnerability Scoring System) [see 7].
We are convinced that creating such a rating system is not an end in itself. For example, the scale should
enable IT administrators, IT directors and financial directors to judge whether the achieved level of security
corresponds to the costs incurred.
1.1 The KAAPS project
We answer some of these questions in the KAAPS project – "Výzkum univerzální a Komplexní Autentizace
a Autorizace pro pevné a mobilní Počítačové Sítě" (Research into universal and complex authentication and
authorisation for fixed and mobile computer networks). This research project is co-funded by the National
Research Programme II of the Ministry of Education, Youth and Sports. The project started in 2008; the
co-investigators are the Brno University of Technology, Faculty of Electrical Engineering and Communication,
and AutoCont CZ a.s. The goal is to explore the possibilities of building modular, universal authentication
systems. We also work on creating new authentication methods and on the possibilities of integrating them
with other technologies.
2 Authentication methods
2.1 History of authentication methods
Authentication – a method of proving one's identity in an information system. Authorisation – granting
permissions to a person the system has already authenticated.
Probably the most widespread form of authentication is the combination of a username and a password. Many
experts consider this method insufficient (the debate about its problematic security has been going on for more
than 40 years), but the simplicity of implementation and of use keeps it in first place, and it will certainly be
used for many years to come.
As new technologies spread and users' general awareness grows, a shift towards "more secure" methods can be
expected. These certainly include the use of a hardware device, which may be:
• a smart card / USB token,
• a (pseudo)random number generator – a "calculator",
• an RFID chip (contactless authentication),
• a software token, e.g. in a PDA.
A special place in this category is gradually being taken by the mobile phone, whether as a carrier of a software
token or, for example, as a receiver of one-time passwords. According to a prediction by the analyst company
Gartner, by the end of 2010 the mobile phone will become the most widespread physical means of
authentication [see 2].
Falling prices will also help to spread authentication based on biometrics. A fingerprint reader is becoming
standard equipment on notebooks; a webcam and a microphone are no exception either. Already today it is
therefore possible to use:
• a fingerprint,
• a retina or iris scan,
• voice recognition.
Logging in with a fingerprint is starting to catch on among notebook owners as well. However, the solution
offered by the notebook manufacturer together with the operating system is not always true "multi-factor" or
"strong" authentication. The fingerprint plays the role of a PIN that unlocks a password store – and we are
back in the first category, username/password.
Nevertheless, wider use of multi-factor authentication combining username/password, biometric elements and
smart cards can soon be expected, and not only in highly classified military systems.
2.2 Classification of authentication methods
Authentication methods can be compared by what they work with. This may be, for example, letters, pictures,
sounds, solid biological material (fingerprints, retina, body dimensions, …), non-solid biological material
(blood, saliva, sweat, scent, …) and so on.
We can also compare authentication methods by the way they work with the given information: how they
compute the identity, how they compare it, whether the computation takes place on the client side or on the
server side, etc.
2.3 Examples of authentication methods
The following list is certainly far from complete; its purpose is to demonstrate the range of possible
authentication methods.
• Keystroke dynamics: the method works with the theory that different people can be distinguished by
  monitoring characteristic parameters of their typing.
• Stating a username and password: the method verifies identity by proving knowledge of a secret that the
  person requesting verification discloses to the verifying authority.
• OTP from a paper table: a method working with a pre-agreed table of passwords.
• OTP token: a method using a device that generates a one-time password.
• Picture-based passwords: a method working with passwords based on a position within an image or on the
  order of several images.
• Face recognition.
• Fingerprint.
• Hand geometry, palm vein pattern.
• Iris or retina recognition.
• Handwritten signature.
• Voice recognition.
• Thermographic face recognition.
• Odour recognition, olfactometry.
• DNA recognition: a method based on determining and comparing DNA.
• Gait recognition: recognising the characteristics of human gait and related movements.
• Ear canal shape recognition.
Some of the above methods are common practice, some were offered in commercial products but did not
catch on, and for some the response time with current technologies is still too long for the needs of
authentication in ICT (e.g. DNA verification).
2.4 Metrics of authentication methods
The literature describes a number of ways in which individual aspects of an authentication method can be
measured; some are based more on exact factors, others are more subjective and express the evaluator's attitude
to the method (a small computational example follows the list):
• Universality: every person in the set of possible users should have the given characteristic.
• Uniqueness, distinctiveness: the given characteristic should unambiguously distinguish us from any other
  person in the given set of potential users.
• Permanence: the given characteristic should not change over time.
• Ease of measurement: how easily the characteristic can be acquired.
• Performance: the computing power needed to acquire the characteristic and then verify the identity.
• Acceptability of the method – whether acceptability in terms of the culture of the given society or in terms
  of end-user comfort.
• Circumventability: the ease with which the system can be bypassed.
• Cost:
  • the cost of acquiring the authentication system,
  • the cost of operating the authentication system,
  • the cost of a successful attack (the expenses needed to break the method).
• ERR – error rate: many variants of this term exist in the literature.
• FAR – False Acceptance Rate: falsely accepted, the rate of persons who passed and should not have.
• FRR – False Rejection Rate: falsely rejected, the rate of persons who did not pass and should have.
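To make the last two metrics concrete, a minimal Python sketch (the counts below are invented for the example, not taken from the project) computes both rates from the outcomes of an evaluation run:

def far_frr(impostor_accepted, impostor_attempts, genuine_rejected, genuine_attempts):
    """Compute the False Acceptance Rate and the False Rejection Rate."""
    far = impostor_accepted / impostor_attempts   # passed and should not have
    frr = genuine_rejected / genuine_attempts     # should have passed but did not
    return far, frr

# Example: 3 of 1000 impostor attempts accepted, 25 of 500 genuine attempts rejected.
far, frr = far_frr(3, 1000, 25, 500)
print(f"FAR = {far:.3%}, FRR = {frr:.3%}")        # FAR = 0.300%, FRR = 5.000%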
3 Trends and possible developments in authentication
3.1 Future directions in the development of authentication mechanisms
Multi-factor – a combination of several methods/factors into one complex authentication method:
• something you know (username and password),
• something you have (a hardware token: a smart card),
• something you are / are not (biometrics: a retina image / you are not drunk),
• somewhere you are / are not (GPS coordinates, WiFi triangulation: you are at your desk / you are not
  outside the country).
Variant – genuinely several different methods for a single authentication. It lets the user choose the
authentication method that is acceptable to them and, at the same time, "sufficiently secure" from the system's
point of view.
Modular – the designers of an information system will be able to assemble an authentication mechanism to
suit their needs from pre-built, mutually communicating and interchangeable modules.
Dynamic – for some operations three factors are enough; for other operations an additional factor will be
required at the moment the command is issued.
Fuzzy logic – the authentication mechanism will use fuzzy set theory and will work not only with the logical
statements "yes, it is definitely him" and "no, it is not him", but also with statements such as "it might be him".
Degree of probability – a variation of the previous approach, where authentication mechanisms will express
the degree of certainty that it is the given person as a probability: "with probability P = 0.75 it is the given
person."
Spatio-temporal authentication techniques – authentication systems will also work with information about
the user's proven location [see 4].
3.2 Convergence of authentication and authorisation
Some usage scenarios of the authentication techniques mentioned in the previous section lead to a closer
coupling of the two "A"s – authentication and authorisation. The information system will require
authentication methods depending on the level of authorisation needed to carry out the requested operation.
One can therefore imagine a situation where, in the middle of a session, the system requests additional
verification of the user's identity (for example, before confirming a bank order or when sensitive information
is accessed).
If the different authentication methods are assigned scores, the user may be given the choice of which methods
to use in order to accumulate the number of authentication points required by the authentication system, as
sketched below.
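A minimal Python sketch of this point-based idea; the scores and thresholds are invented for the illustration and are not taken from the KAAPS project:

# Hypothetical scores per authentication method and per-operation thresholds.
METHOD_SCORES = {"password": 1, "sms_otp": 2, "smart_card": 3, "fingerprint": 3}
REQUIRED_POINTS = {"read_mail": 2, "confirm_bank_order": 5}

def authorised(operation, methods_used):
    """True when the combined score of the used methods reaches the operation's threshold."""
    score = sum(METHOD_SCORES[m] for m in methods_used)
    return score >= REQUIRED_POINTS[operation]

print(authorised("read_mail", ["password"]))                                     # False (1 < 2)
print(authorised("confirm_bank_order", ["password", "sms_otp", "smart_card"]))   # True (6 >= 5)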
4 Conclusion
Within the activities of the KAAPS project we would like to start a discussion on the definitions and
standardisation of terms in the field of authentication and authorisation – a discussion that has already taken
place, or is at least under way, in other areas of ICT.
We are convinced that the concept of rating authentication methods is useful, and we invite you to cooperate
with us on this rating concept.
5 References
[1] N. Chinchor and G. Dungca, "Four Scores and Seven Years Ago: The Scoring Method for MUC-6," Proc. MUC-6 Conference, Columbia, MD, pp. 33-38 and pp. 293-316, Nov. 1995.
[2] Ant Allan: Information Security Scenario, Symposium/ITxpo 2008, November 3-7, 2008, Cannes, France.
[3] Alfred J. Menezes, Paul C. van Oorschot and Scott A. Vanstone: Handbook of Applied Cryptography, CRC Press, ISBN: 0-8493-8523-7, October 1996, 816 pages. http://www.cacr.math.uwaterloo.ca/hac/
[4] David Jaroš, Radek Kuchta, Radimir Vrba: Possibility to apply a position information as part of a user's authentication, Proceedings of the conference Security and Protection of Information 2009.
[5] The Use of Technology to Combat Identity Theft – Report on the Study Conducted Pursuant to Section 157 of the Fair and Accurate Credit Transactions Act of 2003.
[6] Kresimir Delac, Mislav Grgic: A Survey Of Biometric Recognition Methods, 46th International Symposium Electronics in Marine, ELMAR-2004, 16-18 June 2004, Zadar, Croatia, pp. 184-193.
[7] Mike Schiffman, Gerhard Eschelbeck, David Ahmad, Andrew Wright, Sasha Romanosky, "CVSS: A Common Vulnerability Scoring System", National Infrastructure Advisory Council (NIAC), 2004.
[8] United States Computer Emergency Readiness Team (US-CERT). US-CERT Vulnerability Note Field Descriptions. 2006 [cited 16 March 2007]. Available from URL: http://www.kb.cert.org/vuls/html/fieldhelp.
[9] SANS Institute. SANS Critical Vulnerability Analysis Archive. Undated [cited 16 March 2007]. Available from URL: http://www.sans.org/newsletters/cva/.
[10] National Institute of Standards and Technology. www.nist.gov, http://csrc.nist.gov/about/index.html.
Authentication of Users Accessing Local Area Networks with the 802.1x Protocol
Ivo Němeček
[email protected]
Cisco Systems
V Celnici 10, Praha
1 Motivation
In environments where we cannot control physical access to the network, it is advisable to authenticate users
or connecting devices at the network elements. Authentication is usually necessary for access over a public
network, for example from the Internet over an IPSec or SSL connection, or for connections over the public
telephone network. Authentication of users or stations is unavoidable in wireless networks, where attempts by
unauthorised persons to connect cannot be ruled out. Very often we do not have full control even over
switched local area networks. In this article we focus mainly on these networks.
The reasons for authenticating users in LANs can vary, for example:
• We want to ensure that unwanted users cannot connect to the network through publicly accessible ports of
  LAN switches, for example from meeting rooms.
• We want to control and restrict network access for external workers who have access to LAN switch ports
  in the internal network.
• We need to distinguish different groups of users and define different rules for their access to the network.
• Mobile users require the same environment regardless of where they are physically located.
• We want to distinguish users working on shared computers.
• We need an overview of which switch ports individual users are connected to, for example for handling
  security incidents or for service accounting.
The IEEE 802.1x protocol is intended for authenticating users in both switched and wireless networks. This
protocol controls network access at the port level and establishes a framework for exchanging authentication
information. However, it defines neither the authentication methods nor the way the authentication server is
implemented, and it does not address users' network access rights (authorisation). These are the subject of
other standards or of proprietary functions of network devices.
2 Methods for authenticating users in a local area network
The following protocols and methods are usually used to authenticate users in a switched network:
• the 802.1x protocol,
• the Extensible Authentication Protocol (EAP),
• authentication methods of various types, for example EAP-TLS, EAP-MSCHAPv2, EAP-TTLS,
• the RADIUS protocol,
• methods of controlling traffic for authenticated users (traffic restriction by filters, dynamic VLAN
  assignment, application of QoS methods, etc.).
2.1 The IEEE 802.1x protocol
802.1x is a client-server protocol. It is designed for authentication and access control to a LAN through
publicly accessible ports. The 802.1x protocol defines the roles of devices in the network, the control of port
states on network devices, and the way authentication data are formatted and transported over the EAP
protocol in a LAN environment (EAPOL). The EAP protocol itself is not part of the 802.1x specification.
The 802.1x protocol defines the following device roles in the network:
• Supplicant – sometimes called the client. In practice it is represented by a software module on stations,
  IP phones and the like. The supplicant collects the credentials from the station or from the user; these may
  be a username and password or certificates.
• Authenticator – typically represented by a network device: the LAN switch to which the station is
  connected, a wireless access point (AP) or a router.
• Authentication Server – the server that evaluates the user's credentials. It is usually an AAA server using
  the RADIUS protocol. The AAA server may communicate with external databases (MS Active Directory,
  Novell NDS, LDAP, ODBC). The server may, however, be implemented on the same device as the
  authenticator.
802.1x defines two port states on network devices: the controlled and the uncontrolled port. The controlled
port opens only when the connected device has been authenticated by the 802.1x protocol. The uncontrolled
port passes only frames carrying authentication information across the LAN.
The basic principle of the protocol is shown in Figure 1.
Figure 1: The basic principle of user authentication in a LAN.
The connecting station (client) is asked by the network device (LAN switch, AP) to present its credentials (1).
The client sends the information to the network device (2), which forwards it to the authentication server (3).
The server evaluates the information and tells the network device whether the user is authorised to access the
network (4). The network device opens the controlled port for access (5). The server may also send the switch
or AP instructions defining the user's network access rules (6). The authentication server may, for example, be
a RADIUS AAA server.
The 802.1x protocol defines the term supplicant; in practice the word client is often used. The supplicant
usually refers to the software module in the operating system that handles the 802.1x communication.
The client may send its credentials without being prompted by the switch. Note also that the client may itself
be a LAN switch or an AP; such devices can play the role of client and authenticator at the same time. Step (6)
goes beyond the scope of the 802.1x protocol, but it is used very often in practice.
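The flow of Figure 1 can be condensed into a minimal Python sketch of the authenticator's decision; the class and attribute names (accepted, rules) are our own simplification, not terms defined by the standard:

from types import SimpleNamespace

class Authenticator:
    """Simplified 802.1x authenticator: the controlled port stays closed
    until the authentication server accepts the supplicant's credentials."""
    def __init__(self, auth_server):
        self.auth_server = auth_server
        self.controlled_port_open = False            # only EAPOL passes while closed

    def on_credentials(self, credentials):
        verdict = self.auth_server.authenticate(credentials)   # steps (2)-(4)
        if verdict.accepted:
            self.controlled_port_open = True                   # step (5)
            self.apply_access_rules(verdict.rules)             # step (6): VLAN, ACL, QoS
        return self.controlled_port_open

    def apply_access_rules(self, rules):
        pass                                          # vendor specific, outside 802.1x itself

class FakeAuthServer:
    def authenticate(self, credentials):
        ok = credentials == ("alice", "correct-password")
        return SimpleNamespace(accepted=ok, rules={"vlan": 100} if ok else None)

auth = Authenticator(FakeAuthServer())
print(auth.on_credentials(("alice", "correct-password")))   # True -> controlled port opened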
2.2 Extensible Authentication Protocol
The Extensible Authentication Protocol (EAP) is used to transport the authentication information. EAP is
defined in IETF RFC 3748.
EAP is encapsulated in another protocol, such as 802.1x or RADIUS. Figure 2 shows, in simplified form, the
encapsulation of messages transported between the client and the network device.
Figure 2: Encapsulation of credentials in transit.
The encapsulation of EAP in 802.1x in a LAN environment (EAP over LAN, EAPOL) is shown in Figure 3:
Figure 3: EAPOL frame format.
The destination MAC address depends on the environment; in switched networks it is a special address (the
Port Access Entity group address, 01-80-C2-00-00-03). From the EtherType value (0x888E) the switch
recognises that this is the 802.1x protocol carrying EAP. The Version field indicates the version of the EAPOL
protocol (currently 0x2). The Type field distinguishes 5 types of EAPOL packets (a short parsing sketch
follows the list):
• EAPOL-Start (EAPOL start frame),
• EAPOL-Logoff (EAPOL logoff request frame),
• EAP-Packet (an EAP packet),
• EAPOL-Key,
• EAPOL-Encapsulated-ASF-Alert.
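A minimal Python sketch, not part of the original article, that parses the EAPOL header (version, type, body length) following the Ethernet header; the type numbering follows IEEE 802.1X:

import struct

ETHERTYPE_EAPOL = 0x888E
EAPOL_TYPES = {0: "EAP-Packet", 1: "EAPOL-Start", 2: "EAPOL-Logoff",
               3: "EAPOL-Key", 4: "EAPOL-Encapsulated-ASF-Alert"}

def parse_eapol(frame: bytes):
    """Parse the EAPOL header of an Ethernet frame (dst 6 B, src 6 B, ethertype 2 B)."""
    dst, ethertype = frame[0:6], struct.unpack("!H", frame[12:14])[0]
    if ethertype != ETHERTYPE_EAPOL:
        raise ValueError("not an 802.1x (EAPOL) frame")
    version, pkt_type, body_len = struct.unpack("!BBH", frame[14:18])
    return {"dst": dst.hex(":"), "version": version,
            "type": EAPOL_TYPES.get(pkt_type, "unknown"),
            "body": frame[18:18 + body_len]}

# Example: an EAPOL-Start frame sent to the PAE group address 01-80-C2-00-00-03.
frame = bytes.fromhex("0180c2000003") + b"\xaa" * 6 + struct.pack("!HBBH", 0x888E, 2, 1, 0)
print(parse_eapol(frame))   # type: EAPOL-Start, version 2, empty body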
EAP is very flexible and can carry practically any kind of authentication information. The following methods
are typically used for authentication:
• Methods working on the challenge-response principle. These include, for example, the protocols
  • EAP-MD5,
  • LEAP (Lightweight Extensible Authentication Protocol).
  These protocols use usernames and passwords for authentication and are simple to implement, but they
  contain an inherent weakness: if an attacker captures the challenge and the response, they can find the
  password by brute force (or with the help of a dictionary) and gain access to the network. The attack is
  entirely realistic in wireless networks; to limit the risk, strong passwords are essential there.
• Cryptographic authentication methods using a public key infrastructure (PKI). These protocols use
  certificates on both the client and the authentication server side. This group includes EAP-TLS (Transport
  Layer Security, RFC 2716). EAP-TLS is a very secure method, but it is more demanding to implement
  because it requires issuing and managing certificates for the clients.
• Tunnelling methods. These methods create a secure tunnel through which the authentication information
  is exchanged. The secure tunnel can be established using certificates on the authentication server side or a
  shared secret between the client and the authenticator. This group includes, for example, the protocols
  • PEAP (Protected Extensible Authentication Protocol),
  • EAP-TTLS (Tunneled Transport Layer Security),
  • EAP-FAST (a Cisco protocol for authentication in wireless networks).
  These methods represent a compromise between the level of security and the implementation effort. For
  example, PEAP needs certificates only on the authentication servers; the clients use usernames and
  passwords.
• Other methods, such as EAP-OTP for authentication with one-time passwords.
The protocols LEAP, EAP-FAST and EAP-TTLS are typically used in wireless networks, EAP-MD5 in
switched networks, and PEAP and EAP-TLS in both environments.
2.3 The authentication server and the RADIUS protocol
The role of the authentication server is usually performed by a RADIUS server. The RADIUS server evaluates
the credentials and returns the result to the authenticator. In addition, the server may pass the authenticator
instructions for applying access rules to clients, in the form of AV pairs (see Figure 4).
Figure 4: Transfer of information over the RADIUS protocol.
The access rules may be, for example, assigning the client to a VLAN, applying a packet filter to the client's
port, activating a QoS profile, and so on. The RADIUS server can use its own user database or external
databases such as Active Directory, NDS, LDAP or ODBC, and thus smoothly integrate 802.1x
authentication into an existing environment. A sketch of VLAN assignment from RADIUS attributes follows
below.
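Dynamic VLAN assignment is commonly conveyed in the RADIUS Access-Accept using the tunnel attributes standardised in RFC 2868 / RFC 3580. A minimal Python sketch (our own illustration, not vendor code) of how an authenticator could map them to a VLAN ID:

def vlan_from_radius(av_pairs: dict):
    """Return the VLAN ID carried in the RADIUS AV pairs, or None if not present.
    Per RFC 3580: Tunnel-Type = VLAN (13), Tunnel-Medium-Type = IEEE-802 (6),
    Tunnel-Private-Group-ID = the VLAN identifier as a string."""
    if av_pairs.get("Tunnel-Type") == 13 and av_pairs.get("Tunnel-Medium-Type") == 6:
        return int(av_pairs["Tunnel-Private-Group-ID"])
    return None

access_accept = {"Tunnel-Type": 13, "Tunnel-Medium-Type": 6, "Tunnel-Private-Group-ID": "100"}
print(vlan_from_radius(access_accept))   # 100 -> assign the port to VLAN 100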
3 Extending authentication with further functions
In practice, further questions often have to be addressed. For example, how do we handle stations that do not
support the 802.1x protocol? In what ways can rights be assigned to users authenticated by 802.1x? What
happens when there are several stations behind a single 802.1x port? Is it possible to wake up, over the
network, stations that sit behind ports controlled by 802.1x? The 802.1x protocol does not address these
questions in detail, and different vendors handle such situations differently. As an illustration of a practical
implementation, consider the Cisco Catalyst 6500 switches (CatOS 8.4). These switches make it possible to:
• Enable or disable 802.1x on individual switch ports and control the 802.1x port states manually.
• Authenticate users with the 802.1x protocol using passwords, one-time passwords or certificates.
• Assign users dynamically to a VLAN according to instructions from the RADIUS server.
• Restrict a user's network access dynamically with packet filters (port Access List, PACL).
• Apply a QoS profile to the user.
• Combine 802.1x with Port Security functions (protection based on MAC addresses).
• Place several clients behind an 802.1x port.
• Control the assignment of IP addresses by DHCP according to attributes obtained from the RADIUS
  server.
• Protect the ARP protocol.
• Authenticate a client connected through an IP phone placed in a separate VLAN.
• Place a client that does not support 802.1x into a special VLAN (Guest VLAN).
• Place a client that failed authentication into a dedicated VLAN.
• Balance the assignment of users to VLANs within a group of VLANs defined for users.
• Support Wake On LAN (WoL) on ports with 802.1x.
• Authenticate users against backup RADIUS servers.
• Provide access accounting.
4 Deploying 802.1x authentication in a real environment
Implementing the 802.1x protocol in LANs is not a simple matter. A number of questions need to be
considered, for example:
1. Which 802.1x supplicant will be suitable for our environment – the one supplied with the OS, or another
   one?
2. Which authentication method will we choose? We must take into account which methods are supported
   by both the clients and the authentication servers. In a heterogeneous environment the choice may not be
   easy.
3. Against which database will users be authenticated?
4. If the authentication server is unavailable, users generally cannot access the network. How will we handle
   this situation? How will we ensure high availability of the server?
5. Will we manage to achieve a single sign-on to both the network and the operating systems?
6. Will operating systems or applications have problems waiting until the user logs in to the network over
   802.1x? Will dynamic assignment of the user to a VLAN cause trouble?
Let us look briefly at point 6 in Windows operating systems with the Microsoft 802.1x supplicant (XP, 2000).
When the user switches on the PC, the sequence of processes shown in Figure 5 takes place.
Figure 5: Processes during user logon to Windows.
If the user is authenticated with the 802.1x protocol (5a), a network connection is available to the user only
after 802.1x authentication completes (e.g. the user enters a username and password). Until then, the switch
port is closed for data transfer. If DHCP is used to assign the IP address, it matters how much time elapses
between the first DHCP request and the opening of the port. If this time exceeds the standard Windows
timeout (roughly 1 minute), the DHCP process gives up and this situation has to be handled separately.
Another difficulty can arise if Group Policy Objects (GPO) are applied to the user during domain logon. With
standard user authentication, no network connection is available at the time the GPOs are applied, so the
process does not complete. Both this problem and the DHCP problem can be solved by machine
authentication (802.1x machine authentication). This authentication opens the network connection in time,
the station obtains an address from the DHCP server, and the GPOs can be transferred to the station
(Figure 5b).
It is also possible to combine machine authentication with user authentication (Figure 5c). Machine
authentication opens access to the network, and the user is then authenticated again. Based on the result of the
authentication, network access rules can be assigned to the user. With dynamic assignment of the port to a
VLAN, however, the DHCP problem can appear again. After the user is authenticated, the switch changes the
VLAN (and thus the IP subnet) on the port. The DHCP process on the station is not directly informed of
this, so it does not know it should request an IP address again. The described problems are partly removed in
newer updates of Windows XP and 2000: after completing 802.1x authentication, the station pings the default
gateway and, if it gets no reply, sends a new DHCP request and thus obtains an IP address in the new VLAN.
A sketch of this recovery logic follows below.
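A minimal sketch of the described recovery logic; ping_default_gateway and renew_dhcp_lease are hypothetical helpers passed in for the illustration, not Windows APIs:

def after_8021x_authentication(ping_default_gateway, renew_dhcp_lease):
    """After 802.1x authentication, detect a possible VLAN (subnet) change:
    if the old default gateway no longer answers, request a new DHCP lease."""
    if ping_default_gateway():
        return "existing address still valid"
    return renew_dhcp_lease()            # new DHCP request -> address in the new VLAN

# Simulated run: the gateway does not answer, so a new lease is requested.
print(after_8021x_authentication(lambda: False, lambda: "new lease in the new VLAN"))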
Fine-tuning the whole process may not be easy in a real environment. More detailed information on the
mentioned difficulties and ways of solving them is given in [6].
5 Conclusion
The 802.1x protocol can noticeably raise the level of network security. It provides protection against
unauthorised access to the network. Based on the authentication result, network devices can control the access
of users or stations to the network and account for services. Through authentication, the network
administrator gains a better overview of the devices and users connected to the network. Introducing the
802.1x protocol, however, may not be easy and requires careful analysis, preparation, correct design and
precise execution.
References
[1] IEEE P802.1X-REV/D11, July 22, 2004. Standard for Local and Metropolitan Area Networks – Port-Based Network Access Control.
[2] Extensible Authentication Protocol (EAP), IETF RFC 3748.
[3] IETF RFC 2716, PPP EAP TLS Authentication Protocol.
[4] IETF RFC 2865, Remote Authentication Dial In User Service (RADIUS).
[5] Network Infrastructure Identity-Based Network Access Control and Policy Enforcement, Cisco Systems. http://www.cisco.com/application/pdf/en/us/guest/netsol/ns178/c649/ccmigration_09186a0080160229.pdf
[6] Understanding 802.1x, IBNS, and Network Identity Services, Cisco Networkers 2004. http://www.cisco.com/networkers/nw04/presos/docs/SEC-2005.pdf
McAfee Enterprise Firewall (Sidewinder) Under the Microscope
Jaroslav Mareček 1
email: [email protected]
COMGUARD a.s.
Vídeňská 119b, 619 00 Brno
1 Characteristics of the solution and its key properties
McAfee Enterprise Firewall (Sidewinder), with its unique architecture of application proxy gateways, secures
the world's most sensitive networks and has for more than fifteen years remained the only solution of its kind
on the market that has never been breached. It is the only one with EAL 4+ certification for application
controls, which is why its typical users include banks (including US Bank, the Federal Reserve Bank and
Národná banka Slovenska), the government sector (in the USA including the CIA, NSA, FBI, USAF,
US Army and US Navy, and municipalities in the Czech Republic) and commercial organisations in more
than 140 countries. The manufacturer is McAfee (through the acquisition of Secure Computing), the world's
largest producer of security solutions.
Figure 1: McAfee Enterprise Firewall (Sidewinder), Model 2150E.
As a typical UTM device, McAfee Enterprise Firewall (Sidewinder) integrates all the essential functions for
securing Internet traffic. It uses a set of unique technologies working on the application proxy principle, where
no direct connection exists between the internal network and the Internet. It consolidates into a single system
the functions for protecting networks and applications against known and unknown threats, combined attacks
and malicious code hidden in both encrypted and unencrypted protocols. An integral part is the TrustedSource
global intelligence system, which complements local traffic analysis with knowledge of the reputation of the
entities accessing the network.
Seven different models of the McAfee Enterprise Firewall are currently available on the market. Their
properties provide sufficient performance scalability and allow a whole spectrum of configurations, from the
classic active-passive mode up to one-to-many installations, i.e. a cluster arrangement that behaves as a single
device. The range starts with the 210F model, intended for small and medium organisations, and ends with
the most powerful model, the 4150, a four-processor device for large organisations with a throughput of over
3 Gb/s with application-level inspection.
1 Based on the article by Miroslav Štolpa, "Sidewinder pod drobnohledem", DSM 1/2008.
1.1 SecureOS
The heart of the McAfee Enterprise Firewall is the SecureOS operating system, built on a modified BSD
platform in which the patented security technology called Type Enforcement is consistently applied. It is a
strict application of mandatory access control: applications running in the system have strictly defined access
to specific system resources regardless of ordinary Unix permissions and regardless of whether the owner is the
superuser (root) or not. According to their purpose, individual applications are assigned to so-called domains,
and so-called types are further defined for file-system items. In practice this means that an application can
establish a connection over a given port and read files (e.g. configuration files) only if it belongs to the same
domain. In a running operating system the Type Enforcement database cannot be switched off or modified in
any way.
The Type Enforcement technology is implemented directly in the kernel and eliminates all firewall attacks
known to date: attacks via buffer overflow, abuse of system services, taking over control of the system (root
access), changing the security policy, importing dangerous software into the firewall system and launching a
permanent attack, deceiving a proxy or a server on the firewall, and so on.
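The domain/type idea can be illustrated by a minimal Python sketch; this is our simplification for the reader, not the actual SecureOS implementation, and the domain and type names are invented:

# Illustrative mandatory access control table: (domain, resource type) -> allowed operations.
TE_POLICY = {
    ("http_proxy_domain", "http_config_type"): {"read"},
    ("http_proxy_domain", "external_stack_type"): {"send", "receive"},
}

def te_allowed(domain: str, resource_type: str, operation: str) -> bool:
    """Deny by default; ordinary Unix permissions or root ownership play no role here."""
    return operation in TE_POLICY.get((domain, resource_type), set())

print(te_allowed("http_proxy_domain", "http_config_type", "read"))    # True
print(te_allowed("http_proxy_domain", "http_config_type", "write"))   # False (not listed)
print(te_allowed("smtp_domain", "http_config_type", "read"))          # False (other domain)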
1.2 Separate IP stacks
The Type Enforcement technology uses a system of separate network stacks. This eliminates the attack in
which the firewall mistakes an attacker from the external network for a user on the internal side. A network
stack works with different levels of software responsibility, which take care of different aspects of the
communication. For example, one of the levels checks routing information, i.e. that the data are being sent to
the correct network. Ordinary computer systems and firewalls that do not have secured operating systems have
only one network stack.
SecureOS contains a number of tools providing strong separation of communication between the internal and
external interfaces. All data passing through the individual stacks are inspected across the whole spectrum of
the OSI (Open System Interconnection) model. Integrity checking is thus performed not only at the network
and transport layers but also at the application layer. The logical separation of the stacks by the Type
Enforcement technology allows SecureOS to inspect passing data several times over. Information passing
through the firewall thus travels through several stacks and can be blocked by the security policy at any
moment.
Before a process may move data from one network stack to another, it must be granted permission by the
Type Enforcement security policy to access the network stack of the given interface. The interval at which
packets are examined is determined by the agent responsible for processing the packets. The firewall follows a
set of rules that determine the permitted and forbidden traffic. Each rule requires a certain service, and the
service links transport-layer traffic to a specific agent responsible for carrying it out. Three types of such agents
are used: proxy, packet filter and server. The transport-layer information contains items such as protocol, port,
connection or session duration.
1.3 A UTM solution
McAfee Enterprise Firewall is based on a hybrid filtering technology. It needs no special software modules for
protection; the relevant functions are built directly into the operating system. Detecting abnormal activity and
traffic in the network is only one part of what it can evaluate. An equally important tool is the inspection of
e-mail and web traffic, because a large number of attacks are hidden precisely in web pages and e-mail
communication. Antivirus, antispam, URL filtering and web-filtering technologies provide effective defence
against these types of attacks and also protect against phishing, malware and spyware.
Figure 2: Comprehensive traffic inspection as performed by McAfee Enterprise Firewall (Sidewinder).
The device supports all the main protocols needed for managing the internal network, including SNMP,
Syslog, LDAP, Active Directory, RADIUS, OSPF, RIP, NTP and others. IPv6 is also supported. The device
offers a number of readiness features such as "Power-It-On", remote management of backup and restore,
centralised event logging and complete audit monitoring. Thanks to these properties it can also be installed in
locations with restricted access.
1.4 Installation and administration
Installation takes only a few minutes, depending on the chosen security strategy. A configuration wizard can be
used, which asks a few basic questions about the given network and the security rules applied in it. Various
tools are available to support the creation and management of security policies, such as the Admin Console
(a graphical interface protected by SSL), Secure SSH, Telnet or CommandCenter (central management).
Central management allows administrator rights to be delegated and the policy to be returned to a previous
state (policy rollback). Rules can also be exported to CSV format.
1.5 Protection based on application proxy gateways
The firewall has a bidirectional application gateway separating networks with different levels of trust (typically
an intranet and the Internet). It contains 47 application proxies inspecting the communication of a range of
applications and the protocols they use. Traffic is inspected from layer 3 (network) to layer 7 (application) of
the OSI model. No direct connection is created between the internal and external networks. The proxy acts as
an intermediary between the client and the target computer or server: it translates client requests and acts
towards the target computer as a client itself. On the internal network side it acts as a server that receives the
request (and checks it at the application level). Then, now in the role of a client, it initiates communication
with the real server. The application proxy not only protects the IP stacks of the protected devices but also
allows finer control of data traffic, thanks to the possibility of specifying application-specific rules. In addition,
server services such as SMTP (sendmail) and DNS, including split mode 2, can be run directly on the firewall,
providing maximum protection for servers in the internal network or the DMZ.
Figure 3: The application proxy gateway model of the McAfee Enterprise Firewall.
The application proxy mainly enforces compliance with the RFCs and performs anomaly analysis, packet
rewriting, inspection of packet headers, enforcement of well-formed URLs (length, commands, etc.), blocking
of unwanted content, and scanning of code for viruses, worms, spam and keywords. It also protects the
internal network against disclosure of its structure and against fingerprinting, i.e. determining operating system
versions from a number of partial clues (it reveals only a minimum of information about itself).
The package includes the SecurityReporter software, which provides report creation and processing in real
time. With this software, security event records can be collected from more than one device and then sorted
and processed. SecurityReporter can generate nearly eight hundred variants of output for data presentation.
2 Firewall load test
The following text describes the results of load tests carried out by Ing. Miroslav Štolpa at the University of
Defence in Brno.
The manufacturer-stated throughput of the tested 2150D model was around 2.8 Gb/s for packet filtering
(TCP) and 1.8 Gb/s with application inspection. This throughput can be achieved with the basic rule settings.
The tests showed that, depending on the number of rules applied, both the throughput and the number of
completed transactions decrease slightly. Several factors contribute to this, for example the type and number of
rules, packet sizes, link throughput, the level of logging to the firewall's disk, and so on.
Two test devices were used to verify the properties of the firewall – Avalanche and Reflector from Spirent;
252 client stations were simulated on the first and a web server on the second (see Figure 4). The firewall
through which the clients and the server communicated ran operating system version 7.0.0.05. The goal was to
test the maximum number of HTTP requests with a minimum number of TCP connections, depending on
the protection policy applied by the firewall. The criteria were defined for the HTTP/1.1 protocol. Each client
was configured to make at most two TCP connections, sending ten HTTP GET requests in each and then
closing the connection.
Figure 4: Experimental setup.
2.1 Test of HTTP packets without antivirus inspection
In the first test the size of the HTTP message was set to 4 kB excluding header bytes. The load was generated
according to a step-shaped schedule with a maximum of 3,000 transactions per second, where each step
represents an increase of 300 transactions and lasts 11 seconds. The test took 2 minutes and 10 seconds (the
schedule is sketched below).
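A minimal Python sketch of the stated step schedule (steps of 300 transactions per second, 11 s per step, capped at 3,000 tps over the 130 s test); the assumption that the load holds at the cap for the remaining seconds is ours:

def step_load(step_increase=300, step_seconds=11, cap=3000, duration_s=130):
    """Target transactions per second for each second of the test."""
    return [min(cap, step_increase * (t // step_seconds + 1)) for t in range(duration_s)]

profile = step_load()
print(profile[0], profile[11], profile[103], profile[-1])   # 300 600 3000 3000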
The generated requests were handled successfully up to the 104th second of the test. Only then did the load on
the firewall start to show, and all transactions above 2,500 exhibited a time delay. As the number of
transactions decreased, this delay decreased again and the number of successfully completed requests increased.
2.2 Test of HTTP packets with antivirus protection enabled
In the second test the maximum message size was set to 20 kB plus the bytes of the HTTP header. The load
was again generated according to a step-shaped schedule, with the maximum number of transactions per
second reduced to 600. Each step thus represented an increase of 60 transactions and lasted 11 seconds. The
test took 2 minutes and 20 seconds. The firewall was configured to perform the same inspection of HTTP
messages, with blocking of disallowed elements and elimination of disallowed characters in the headers, as in
the first test. In addition, however, all traffic was subjected to antivirus inspection on the firewall.
The results of the second test did not differ fundamentally from the first. Depending on the packet sizes and
the configured rules, the number of transactions dropped. Here too the load on the firewall began to show as
an increase in response time. At the end of the test the delay reached 1.6 seconds, but the number of
unsuccessful (i.e. uncompleted) transactions was zero.
3 Conclusion
Sidewinder is built solely on patented technologies. To date, no one has found, or published, a need for a fix
or a security patch for either the Type Enforcement technology or SecureOS. The firewall reacts to security
incidents in real time and in an appropriate way (by reporting the error). It monitors 28 different types of
events, such as violations of access control list restrictions, attack attempts, failed authentication on the Telnet
or FTP proxy, and so on.
IS EU Extranet ČR
S.ICZ a.s.
Hvězdova 1689/20, 140 00 Praha 4
tel.: +420 244 100 111, fax: +420 244 100 222
1 Introduction
S.ICZ a.s. has developed the EU Extranet ČR information system, which enables electronic distribution of
official documents of the General Secretariat of the Council of the EU (GSC) up to the classification level
RESTRICTED (RESTREINT UE). The documents are disseminated via the superordinate EU Extranet-R
system to the EU member states and, within the Czech Republic, not only to the Ministry of Foreign Affairs
of the Czech Republic (MZV) but also to other ministries and state administration bodies. The EU Extranet
ČR information system plays an important role in the Czech presidency of the Council of the EU. At this
moment, 21 generic nodes (ministries and other Czech state administration bodies) are connected to the
system in the category up to the classification level "RESTRICTED" and 34 generic nodes in the category
"LIMITE".
2 EU Extranet-R
The EU Extranet-R information system serves for the electronic distribution of official documents between
the central EU bodies (the GSC and the European Commission) and the EU member states via the ministry of
foreign affairs and the permanent representation. The documents distributed by IS EU Extranet-R are
intended not only for the staff of the Ministry of Foreign Affairs of the Czech Republic but also for the
individual state administration bodies and organisations.
A basic prerequisite for the Czech Republic to function fully within the EU structures is precisely the timely
sorting and delivery of EU documents to the appropriate workplaces of the individual state administration
bodies and organisations. IS EU Extranet-R distributes documents from the EU in one direction, but in the
opposite direction the Czech Republic must be able to discuss its positions in time and promote them through
the existing EU structures.
3 EU Extranet ČR
3.1 Starting assumptions for the Czech Republic
EU Extranet ČR is an information system that handles classified information up to the level RESTREINT
UE, i.e. under Czech legislation up to the level RESTRICTED (VYHRAZENÉ). According to EU
requirements and Czech legislation, the system must be certified by the National Security Authority of the
Czech Republic (NBÚ). Among other things, this means that the system must not be interconnected in any
way with any uncertified information system, or rather that it must be separated from every such system in an
approved and certified manner.
3.2 Requirements on the solution
The design concept of IS EU Extranet ČR is based on the requirements for a classified information system
allowing the distribution of a large number of documents in electronic form and with different classification
levels (RESTRICTED, LIMITE). Distribution of the EU documents to the individual Czech state
administration bodies and organisations must be ensured securely and in time, so that they can work with
them effectively and prepare their positions within the required deadlines (sometimes within hours).
3.3 Concept of the solution
The basic concept of the proposed solution is a distributed information system that allows classified
documents to be processed in separate classified information systems and unclassified documents in the
internal, unclassified information systems of the individual state administration bodies. The solution must
provide appropriate cryptographic protection of the classified documents, their subsequent secure hand-over
into the unclassified inter-ministerial communication infrastructure, the actual transfer of these documents,
and their delivery into the individual classified and unclassified information systems of the state administration
bodies, with the subsequent possibility of processing and distributing the related follow-up documents or
positions. All of this while ensuring the appropriate security, functionality, continuity and timing requirements
for the distribution of the European Union documents and the subsequent processing of the individual
positions.
3.3.1
Zpracování neutajovaných dokumentů v neklasifikovaných IS
Prvním základním stavebním kamenem koncepce řešení je umožnění zpracování neutajovaných dokumentů EU distribuovaných v klasifikovaném IS EU Extranet ČR v neklasifikovaných informačních
systémech jednotlivých orgánů a organizací státní správy ČR. Pro úspěšnou realizaci toho stavebního
kamene bylo nutné z hlediska bezpečnostních rizik, bezpečně implementovat a certifikovat následující
funkčnost systému:
•
bezpečné třídění neutajovaných od utajovaných dokumentů v rámci klasifikovaného informačního systému IS EU Extranet ČR v centrále na MZV ČR,
•
bezpečné a automatizované předávání neutajovaných dokumentů z klasifikovaného systému IS
EU Extranet ČR v centrále na MZV ČR do neklasifikovaného prostředí (neklasifikované komunikační infrastruktury).
3.3.2 Distribution of classified documents over the unclassified communication infrastructure
The second basic building block of the solution concept is to allow classified EU documents distributed in the classified IS EU Extranet ČR to be carried over the unclassified communication infrastructure that interconnects the individual bodies and organisations of the Czech state administration. To realise this building block successfully, the following system functionality had to be implemented securely and certified with respect to the security risks:
• protection of classified documents by a certified cryptographic device within the classified information system IS EU Extranet ČR at the MZV ČR central node (as a result, classified documents protected in this way become unclassified information),
• secure and automated handover of the unclassified information (protected classified documents) from the classified IS EU Extranet ČR at the MZV ČR central node into the unclassified environment (the unclassified communication infrastructure).
3.3.3 Processing of classified documents in the separate classified IS of the individual ministries
The third basic building block of the solution concept is to allow classified EU documents, distributed as unclassified information (protected classified documents) over the unclassified communication infrastructure, to be processed in the separate classified IS of the individual Czech state administration bodies. To realise this building block successfully, the following system functionality had to be implemented securely and subsequently certified with respect to the security risks:
• secure manual or automated handover of the unclassified information (protected classified documents) from the unclassified environment (the unclassified communication infrastructure) into the separate classified IS of the individual bodies and organisations of the Czech state administration,
• restoration (decryption) of the unclassified information back into classified documents by means of a certified cryptographic device within the separate classified IS of the individual bodies and organisations of the Czech state administration.
3.4 Design of the IS EU Extranet ČR solution
The EU Extranet ČR information subsystem was built as an extension of the functionality of the certified IS MZV-V, which is operated by MZV. The reception of documents from EU Extranet-R and their central processing and protection is handled by MZV through the "Central node of IS EU Extranet ČR". Documents are sorted there and passed on through a secure interface into the inter-ministerial communication infrastructure (MKI), from which they are collected by the corresponding IS EU Extranet ČR nodes at the individual ministries. The documents themselves are processed in the respective ministerial nodes of IS EU Extranet ČR according to the classification of the distributed documents.
3.4.1 Central node of IS EU Extranet ČR
The central node consists of the following basic components (see Fig. 1):
• Distribution Agent – registration, sorting, distribution and archiving of EU documents and positions,
• Crypto Gateway – encryption and decryption of documents and their handover to, and collection from, the Security Separation Block,
• Security Separation Block (MZV-V-BOB) – protection of the node and secure transfer of documents in both directions.
The Distribution Agent provides the interface to IS EU Extranet-R (it receives documents). It sorts and distributes the received documents according to the national distribution rules and passes them on for dispatch to the remote nodes, provided that the classification of a document matches the classification of the channel into which it is to be sent. In the opposite direction, the Distribution Agent receives documents delivered from the remote ministerial nodes of IS EU Extranet ČR.
The Crypto Gateway takes documents to be dispatched from the Distribution Agent. It checks that the classification of the document does not exceed the classification of the channel into which it is to be sent, encrypts the document with the cryptographic function corresponding to the classification level of the document and the channel (for VYHRAZENÉ or LIMITE) and stores it in the Security Separation Block. In the opposite direction, the Crypto Gateway takes documents from the Security Separation Block, decrypts them with the appropriate cryptographic function (for VYHRAZENÉ or LIMITE) and stores them in the Distribution Agent.
The Security Separation Block (BOB) is a device designed for the secure separation and automated exchange of information between IS containing data of different classification levels.
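The following sketch illustrates the Crypto Gateway logic just described: the classification check against the target channel, encryption per classification level, and the handover to the Security Separation Block. The Fernet cipher from the Python cryptography package merely stands in for the certified cryptographic device (CSP II MicroCzech), and names such as bob_outbox are hypothetical.

from cryptography.fernet import Fernet

RANK = {"LIMITE": 0, "VYHRAZENE": 1}

class CryptoGateway:
    def __init__(self, keys):
        # one Fernet key per classification level; in reality one certified
        # cryptographic function per level
        self._ciphers = {level: Fernet(k) for level, k in keys.items()}

    def send(self, document, doc_level, channel_level, bob_outbox):
        # the gateway refuses documents classified above the target channel
        if RANK[doc_level] > RANK[channel_level]:
            raise PermissionError("document classification exceeds channel classification")
        protected = self._ciphers[doc_level].encrypt(document)
        bob_outbox.append((doc_level, protected))  # handover to the separation block

    def receive(self, doc_level, protected):
        # reverse direction: decrypt what was collected from the separation block
        return self._ciphers[doc_level].decrypt(protected)

if __name__ == "__main__":
    keys = {level: Fernet.generate_key() for level in RANK}
    gateway = CryptoGateway(keys)
    outbox = []
    gateway.send(b"draft position paper", "VYHRAZENE", "VYHRAZENE", outbox)
    print(gateway.receive(*outbox[0]))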
3.4.2 Ministerial nodes of IS EU Extranet ČR
IS RESORT-N is an unclassified ministerial node of IS EU Extranet ČR intended for processing LIMITE documents. It is connected to the MKI through the IS EU Extranet ČR components intended for automated data transfer. The existing unclassified IS of the ministries serve as IS RESORT-N. The node is connected to the MKI via a Communication Gateway with two network interface cards, which also incorporates a Crypto Gateway. The Crypto Gateway collects documents from the Security Separation Block (MZV-V-BOB) over the MKI, decrypts them (LIMITE only) and stores them in the Distribution Agent. In the opposite direction, the Crypto Gateway takes consignments to be dispatched from the Distribution Agent, encrypts the documents (LIMITE only) and stores them via the MKI into MZV-V-BOB. The Distribution Agent directly provides the application interface for accessing the documents and allows the ministerial IS to be connected. (See Fig. 1.)
IS RESORT-V is a classified ministerial node of IS EU Extranet ČR intended for processing classified information up to the classification level VYHRAZENÉ. In general, documents are transferred as follows: the Crypto Gateway collects documents from the MKI of IS EU Extranet ČR (medium or BOB), decrypts them with the appropriate cryptographic function (for VYHRAZENÉ or LIMITE) and stores them in the Distribution Agent. In the opposite direction, the Crypto Gateway takes consignments to be dispatched from the Distribution Agent, encrypts the documents with the cryptographic function and passes them into the MKI of IS EU Extranet ČR (medium or BOB). The Distribution Agent directly provides the application interface for accessing the documents and allows add-on applications intended for ministerial document processing to be connected. There are two variants of this node:
• IS RESORT-V OFFLINE is a classified ministerial node of IS EU Extranet ČR that allows documents to be processed on the node's workstations, with the exchange of documents with the MKI of IS EU Extranet ČR performed manually. This solution is aimed at ministries processing a small number of classified documents. The node is not connected to any other classified or unclassified IS (not even to the MKI). Communication with IS EU Extranet ČR is provided by a communication station connected only to the MKI. Documents are transferred in both directions by an authorised IS RESORT-V user using removable media. The generic IS RESORT-V OFFLINE node requires accreditation by MZV. (See Fig. 1.)
• IS RESORT-V ONLINE is a classified ministerial node of IS EU Extranet ČR that allows documents to be processed in the ministry's local certified network and is connected to the MKI through the IS EU Extranet ČR components intended for automated data transfer. If a ministry already had a suitable certified IS at its disposal, it was merely extended with the necessary IS EU Extranet ČR components providing document transfer, protection and distribution. The IS RESORT-V ONLINE node is connected to the MKI of IS EU Extranet ČR through a Security Separation Block. This node requires certification by the NBÚ and accreditation by MZV for its connection to IS EU Extranet ČR. (See Fig. 1.)
3.4.3 Inter-ministerial communication infrastructure (MKI)
An unclassified MKI has been created for communication with the ministries. An MKI connection point is available at every ministerial workplace where documents from IS EU Extranet ČR are processed. The MKI is built as a closed private network on top of the KIVS/KKI network and serves solely the communication needs of IS EU Extranet ČR. Documents transferred over the MKI are protected at several levels:
• All data transferred within IS EU Extranet ČR over the MKI are encrypted. Every IS EU Extranet ČR node is equipped with the corresponding crypto gateway, which encrypts the data before dispatch and decrypts them on receipt.
• The national cryptographic device CSP II MicroCzech is used to protect communication with certified IS EU Extranet ČR nodes; the non-certified cryptographic provider Microsoft Strong CSP is used to protect communication with non-certified IS EU Extranet ČR nodes.
• Before dispatch, the MZV-CG crypto gateway verifies that the classification level of the document does not exceed the maximum permitted classification level of the remote node (the target channel).
• The individual IS EU Extranet ČR nodes are separated from the MKI. Certified IS EU Extranet ČR nodes are separated by a Security Separation Block, or the data are carried on removable media from a communication station located in the non-certified MKI. Non-certified IS EU Extranet ČR nodes are separated from the MKI by a communication gateway.
Figure 1: Overall scheme of IS EU Extranet ČR.
4 Conclusion
The unique security technology, the so-called Security Separation Block, combined with the certified cryptographic device CSP-II MicroCzech, made it possible to build in the Czech Republic a real certified information system which, at the time of our Presidency, is the only information system of its kind.
The EU Extranet ČR system satisfies all security rules for the distribution of classified documents, and the National Security Authority of the Czech Republic has issued a permit for its operation. The Czech Republic was also among the first EU countries to satisfy all conditions of the EU security regulations, and on 30 September 2008 the GSC-SAA (General Secretariat of the Council of the E.U. - Security Accreditation Authority) issued a certificate authorising the connection of the national information system EU Extranet ČR to the superior information system EU Extranet-R.
McAfee Total Protection
Reduce the complexity of security management
Vladimír Brož, Territory Manager ČR & SR
[email protected]
McAfee, Inc.
1 Introduction
Computer security has changed dramatically since the first computer virus appeared 25 years ago. It is now far more complex and time-consuming. Viruses have been joined by an endless stream of worms, Trojan horses, bots, hackers, exploits of security holes, identity thieves and other attacks that threaten your entire network. As networks have expanded to include remote and mobile users, the gaps in security have widened as well.
Merely adding another standalone product to fight new threats increases the complexity and inefficiency of every deployment.
Fortunately, McAfee® has a cure. We took proven McAfee technology and used it to build the first integrated security solution that is effective, comprehensive, practical and proven: McAfee Total Protection.
2 McAfee Total Protection – comprehensive protection that stops sophisticated threats and reduces risk
McAfee Total Protection includes anti-virus, anti-spyware, anti-spam, a desktop firewall, intrusion prevention and network access control, all integrated and managed from a single console. We designed and built the McAfee Total Protection solution to be effective, comprehensive and flexible.
The main features are:
• A single management console, a single set of security updates and a single point of contact for technical support.
• Comprehensive, integrated protection against a wide range of threats, both known and unknown, blocking attacks and preventing disruption of operations.
• Flexibility and scalability, with a foundation for the future based on a unified strategy.
3 Comprehensive protection, uncompromising integration
Our McAfee technology has earned the respect of leading analysts and won countless awards. Our global network of hundreds of security experts is on duty 24 hours a day, monitoring threats, assessing risks and taking whatever action is necessary to keep you informed and protected. Thanks to these dedicated specialists, McAfee AVERT Labs is one of the highest-ranked security research organisations in the world.
McAfee Total Protection gives you the comfort and confidence that come from proven technology, round-the-clock technical support and a network of trusted partners offering value-added services, helping you tailor the security solution precisely to your needs.
4 For today and for tomorrow
Investing in the McAfee Total Protection solution means that when new, previously unknown threats appear, you do not have to start from scratch. The solution is built on a platform that can evolve as threats evolve, so not only your systems but also your security investment will be protected.
5 Total protection for any business
McAfee recognises that every business is unique, from the smallest firms to the largest corporations, which is why it offers several different versions of McAfee Total Protection. Each is an integrated, comprehensive solution with features tailored to specific security and business requirements.
Consolidating software while modernising protection can certainly save time, money and resources. These benefits mean little, however, if you cannot trust the technology behind them.
6 McAfee Total Protection for Endpoint
A single integrated solution, a single console, proven comprehensive protection.
Determining the right level of protection can be difficult. You weigh risks and consequences against costs, available resources and time. And the right level of protection keeps changing. By adopting a flexible, integrated approach to enterprise protection, with a single comprehensive solution that adapts as threats evolve, you can manage your dynamic IT environment more easily.
• A single console for deployment, management and reporting across desktops, notebooks, servers and remote systems.
• Comprehensive protection against bots, viruses, worms, Trojan horses, spyware, adware, keystroke loggers and targeted hacker attacks.
• Host intrusion protection based on signatures and behavioural analysis secures desktops against zero-day attacks as well as known threats.
• An extensible architecture protects and maximises the value of investments in existing infrastructure.
• Advanced e-mail protection stops spam and viruses.
• Network access control enforces corporate security policy on systems and guards corporate assets against unsecured systems (Advanced version only).
7 McAfee Total Protection Services
A single solution to protect your business, always on and always up to date. Keeping all your systems continuously protected is a challenge today, because the nature of threats keeps changing.
Are you sure your company has the latest protection? That is why we developed McAfee Total Protection for Services. It offers comprehensive security that is always running and always current, and with it the assurance that you are fully protected. It is also available on the web, so it can be managed easily and installed quickly and conveniently.
• Provide every user with automatic protection and security updates.
• Get comprehensive protection against viruses, spyware, spam and phishing, plus a PC firewall, in a single integrated solution.
• Use centralised management in McAfee systems for an overall view of security status, per-user details and reporting.
• Install easily and quickly by sending a link to a user, wherever they may be.
• Get the latest technology without having to buy physical appliances or hire IT security specialists.
8 Discover McAfee Total Protection
The right solution for your business.
For more information about the McAfee Total Protection solution that best suits your needs, visit www.mcafee.com, 24 hours a day, seven days a week, or call 724 090 814.
McAfee Total Protection solutions are part of the McAfee family of security products and services for businesses.
Security solutions on the Microsoft platform
Petr Šetka, MVP, IT Solution Architect
Mainstream Technologies, s.r.o.
1 Microsoft Forefront Server Security
Forefront covers several areas and solutions, of which Forefront Server Security is currently the best known among administrators, since it has been part of the Forefront family the longest. It comprises the products Microsoft Forefront Security for Exchange Server and Microsoft Forefront Security for SharePoint.
Microsoft has not been on the anti-malware market as long as other companies, which have already managed to carve out sizeable shares of it; its entry was all the more forceful, bringing very interesting solutions with it. Microsoft brings to this area what most administrators have been missing so far: the assurance of maximum data protection combined with minimal demands on the administrator.
1.1 A single anti-virus engine is not enough
Many administrators choose an anti-malware solution according to the quality of the anti-virus engine. Based on various sources they sooner or later decide on one and deploy it; in the same spirit, however, they must sooner or later accept that such a solution will one day let some piece of malware into the infrastructure. That is what statistical observation shows, and that is simply how it is. Nothing is ever one hundred percent.
With the Forefront for Servers products, Microsoft offers a solution consisting of multiple anti-virus engines from renowned vendors, while the cost of managing it remains at the level of a single-engine solution. What exactly does that mean?
You will surely agree that if the network is compromised despite a working anti-virus solution from one vendor, you start thinking about supplementing the infrastructure with another solution (a product from vendor XXX on the servers, a product from vendor YYY on the clients, and so on). The cost of managing such an environment is then very high, and more of the human factor enters into it, which is not optimal in security either. On the other hand, it increases the assurance that a malware breach will not recur any time soon.
1.2 Forefront Security for Exchange Server
This solution is intended for Exchange Server 2007. Besides the anti-virus solution, it also extends the anti-spam capabilities of the Exchange server itself. As standard it provides nine anti-virus engines from renowned vendors such as CA, AhnLab, Authentium, Kaspersky, Norman, VirusBuster and Sophos (since SP1 the number of engines is 8, as the two CA engines were merged); for a given message-scanning task you can choose any five engines and, with a further setting, decide how many of them will take part in scanning individual messages.
Moreover, anti-virus protection on Exchange 2007 is layered: you can select one set of engines for scanning incoming messages on the SMTP server (Edge Transport), another for scanning messages on internal SMTP servers (Hub Transport) and yet another for scanning messages in the databases (Mailbox server). In addition, if a message has already been scanned, for example, on the inbound SMTP server (Edge Transport), its header records that the scan took place, and it will not be scanned again on further SMTP servers or in the databases. The Forefront solution is thus extremely productive while placing minimal load on the servers.
Figure 1: Selecting anti-virus engines for a given task and setting the Bias item.
1.3 Scanning with multiple engines
Out of the available anti-virus engines, at most 5 can be selected, and the BIAS setting determines how many of them will be used to scan an object (an e-mail message, a file on a SharePoint portal). There are 5 predefined values for the BIAS setting, from favouring maximum performance (i.e. using a single AV engine) to maximum certainty (Max Certainty), i.e. using all engines, which of course means a higher server load.
When an e-mail message with an EXE attachment arrives, the engines selected for scanning are those with the best results for scanning EXE files (historically, in the given installation). If this algorithm cannot decide, engines are chosen at random.
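The engine-selection idea can be sketched as follows. This is not Microsoft's actual algorithm, only an illustration in which the BIAS-to-engine-count mapping and the history scores are assumptions.

import random

# assumed mapping of the five BIAS values to the number of engines used
BIAS_TO_ENGINE_COUNT = {
    "Max Performance": 1,
    "Favor Performance": 2,
    "Neutral": 3,
    "Favor Certainty": 4,
    "Max Certainty": 5,
}

def pick_engines(selected_engines, history, file_type, bias):
    """Choose which of the (at most five) selected engines scan an object.

    `history` maps (engine, file_type) to a detection score observed in this
    installation; engines with the best history for the file type are preferred,
    and random engines fill the remaining slots when history is missing.
    """
    n = min(BIAS_TO_ENGINE_COUNT[bias], len(selected_engines))
    scored = [e for e in selected_engines if (e, file_type) in history]
    scored.sort(key=lambda e: history[(e, file_type)], reverse=True)
    chosen = scored[:n]
    if len(chosen) < n:
        remaining = [e for e in selected_engines if e not in chosen]
        chosen += random.sample(remaining, n - len(chosen))
    return chosen

if __name__ == "__main__":
    engines = ["Kaspersky", "Norman", "Sophos", "VirusBuster", "Authentium"]
    history = {("Sophos", "exe"): 0.97, ("Kaspersky", "exe"): 0.99}
    print(pick_engines(engines, history, "exe", "Neutral"))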
1.4 Engine updates
All engines are updated at configured intervals from a single location, which is a Microsoft web server (http://forefrontdl.microsoft.com/server/scanengineupdate). The administrator can define a secondary location (for example a shared folder on the local network) to be used if the primary location is unavailable; the primary location can also be changed.
In practice the procedure is as follows: the vendor releases an update and sends it to Microsoft. Microsoft verifies it in a test environment and, if everything is in order, digitally signs it and makes it available to customers by publishing it at the distribution location.
1.5 Forefront Security for SharePoint
This product is intended for Office SharePoint 2007 and Windows SharePoint Services 3.0 servers and essentially mirrors the capabilities of the Exchange solution. It keeps the multiple anti-virus engines as well as the single management tool. And a single location for downloading updates.
1.6 Reporting
The management tool, Forefront Security Administrator, naturally also contains information about the operation of the Forefront solution, in particular an overview of intercepted viruses and statistics.
Figure 2: Overview of incidents and e-mail traffic statistics.
1.7 Managing Forefront in multi-server environments
In environments with multiple Exchange and SharePoint servers (including mixed ones) it makes no sense to configure individual servers locally. Even the administration of such an environment should remain simple; if, for instance, the administrator decides to set the Bias item to Max Certainty on all servers, this should be configurable in a single place.
For this purpose there is an add-on (and separately licensed) tool called Forefront Server Security Management Console. It can be used to manage not only multiple Exchange 2007, SharePoint Services 3.0 and SharePoint Server 2007 servers, but also the Microsoft solutions for previous versions of Exchange 2000/2003 and SharePoint (called Microsoft Antigen).
1.8 Examples of security solutions on the Microsoft platform at our customers
The Ministry of Transport of the Czech Republic uses the Microsoft Forefront platform as its anti-malware system for servers and client workstations. For more details about the solution contact: Tomáš Král, Account Technical Specialist Microsoft, [email protected]
The Security Information Service uses Internet Security Accelerator (ISA) Server to protect information at the point of entry from the external environment. For more details contact Tomáš Mirošník, Account Technical Specialist Microsoft, [email protected]
Low-level approaches to information security
Ing. Jitka POLATOVÁ
[email protected]
PositronLabs, s.r.o.
Vinohradská 25, Praha 2, 120 00
1 Introduction
Information security is currently one of the most complex issues. In this article we do not want to return to the many aspects of the security of information and communication systems that are frequently republished and discussed today. The aim of this article is to point out new possibilities and directions in security and in the development of information technology in general.
If we look around today's professional publications, journals and the internet, we dare say that the vast majority of the information concerns current security threats, security gaps and the damage caused, interleaved with articles about new integrated technologies for securing systems. The exponential growth of threats is an undeniable fact, and so is the attackers' steady head start over the defences against them. Let us, however, try to think about the reasons for this state of affairs.
It would be pointless to discuss here questions of psychology, the motives of attackers, or IT security itself, whether from the viewpoint of personnel or physical security. We want to focus here purely on the technical side of the matter. The answer to current and anticipated future threats is an ever more complex body of software and hardware. We gradually build layer upon layer of protection over systems and data, and we weave threads of agents into networks and the entire infrastructure. We constantly need new investment.
It is almost surprising, though, that we do not ask ourselves the question: "Why is almost every step we take ultimately ineffective in the long run?" Or perhaps we do ask, but it is rather doubtful whether we can answer it correctly.
The answer to this question may in fact lie very deep in the history of IT and deep inside our software and hardware. The cores of our systems are essentially very old; in the case of hardware architecture, decades old. For example, at the deepest level of the philosophy of the processor and its operation we have changed almost nothing. Almost everything - protected modes, memory virtualisation, virtualisation support, speculative execution of instructions, and so on - is essentially just another philosophical and technical layer. As far as networks and other technologies are concerned, we are in essentially the same situation. When the cores of our technologies were being developed, cybercrime was an entirely fictional notion, and the development reflected that. For many years we then kept "wrapping" further capabilities around these layers, and now we try to defend ourselves in our high layers.
Somewhere here, perhaps, lie the answers to our questions. Perhaps we are defending ourselves where every defence is sooner or later ineffective. Perhaps we need to go back a little to the beginning and lay the foundations of our defence right at the start of the layers, to retrofit the decades-old foundations of our systems with a security dimension.
PositronLabs is engaged precisely in the research and development of these basic layers in the field of system security. This article describes two simple principles of returning to the foundations that have already been implemented in practice: one from the field of cryptography and one from the field of PC application isolation, both based on fundamental technologies that go back to the foundations. These technologies are, however, only a small sample of the other technologies being developed by PositronLabs.
To each of them we can reply, "That is solved differently nowadays." But only until we realise that all those other solutions, the ones that do not start at the very beginning, are simply full of gaps and as yet unrecognised threats.
2 Non-local random number generator (NGNZ)
NGNZ is an example of developing the foundations in the field of cryptology. At the beginning of the NGNZ development stood a simple assignment that was part of a larger project:
"We want to conduct secret communication over a public environment."
If we took the "classical route", we would pick one of the existing cryptosystems and use it. The question, however, was: "Can we guarantee that it is unbreakable?" It is easy to find out that provable "unbreakability" in fact exists only for the Vernam cipher. We often trust the other systems only because nobody has so far proved the contrary.
It was therefore decided to use the Vernam system. It is well known, however, that its greatest weakness is the difficult distribution of keys, because each key must be as long as the encrypted message and no key may ever be used twice. Moreover, the key must consist solely of completely random, mutually independent characters, and so on. (The Vernam cipher has been described many times in the literature, so we will not describe it here.)
The aim of the NGNZ project was therefore to design a way of distributing random, mutually independent characters secretly, over a public channel between the sender and the receiver, so that they are available in sufficient quantity. This would create the prerequisite for deploying the Vernam cipher with keys distributed in this way. From this point of view it obviously makes no sense to encrypt random characters with another encryption mechanism (such as RSA, AES, ...). A different approach had to be devised.
In practice this would mean resolving the dilemma of whether the sender and the receiver should use an identical software key generator, which is however only pseudo-random (the generated characters will not be random and mutually independent, and Vernam is then no longer Vernam), or, conversely, a truly random hardware generator, which is however hard to share over a public channel (this leaves aside the third option, distributing a limited, finite number of keys on physical media).
The aim of NGNZ was therefore to design a way of synchronising at least two separate random key generators, one of them at the sender and the other at the receiver of the information. This is, of course, a technical absurdity: if they are random, they cannot be synchronous. The solution and the result is NGNZ.
2.1 The essence of NGNZ
The essence of NGNZ is that the sender and the receiver each have their own set of random keys {K}, where both sets are identical at all times but change continuously and synchronously in such a way that they remain identical even after the changes.
One key generator is, for example, at the sender and the other at the receiver of the information. The source of random characters may be located at the sender, at the receiver, or at some other, third location.
The procedure described below is intended mainly for key distribution and for encrypting information between a sender and a receiver. In general, however, it can be used with advantage whenever synchronised random actions need to be performed secretly at several separate locations. It is, in essence, a covert way of synchronising key generators from a single source of random characters: by merely eavesdropping on the public channel, an unauthorised party cannot determine, without additional information, the actual state of the synchronous random key generators, or determining it is extremely demanding in terms of time or computing capacity.
For example, in an NGNZ implementation with {K} of size 1 MB, up to 2^8,000,000 computations are required to break the NGNZ system. By its very principle, the NGNZ system can use extremely long keys without any impact on the system's speed. From this point of view it is clear that NGNZ is a key-distribution system whose resistance is incomparable with any other solution.
2.2 Description of the generic NGNZ algorithm
Definitions:
• GEN – random character generator
• K – set of random characters K1 to Kn shared between the sender and the receiver
• N – new random character
• C – key C obtained from the set K by the operation oC (the first resulting character)
• V – the second resulting character V
• oC(K) – the first mathematical operation, used to obtain the key C from the set K
• oV – the second mathematical operation, used to obtain the second resulting character V, performed between the characters N and C
• oVi – the mathematical operation inverse to the second operation oV, i.e. the operation used to obtain the character N, performed between the characters V and C
• oK(K) – the third mathematical operation over the set K, which yields a new, changed set K
2.3 Input conditions
The input conditions are as follows:
• a completely random character generator GEN exists,
• the sender and the receiver have agreed on a first mathematical operation oC for selecting the key C (the first resulting character) from the key set K,
• the sender and the receiver have agreed on a second mathematical operation oV, performed over two characters, the new random character N and the first resulting character C, together with its inverse oVi,
• the sender and the receiver have agreed on a third mathematical operation oK performed over the key set K.
2.4 Preparation
At the beginning, the random character generator GEN is used to generate a random key set K = {K1, K2, ..., Kn}, consisting of the characters K1 to Kn. Both parties (the sender and the receiver) receive the key set K by a trusted route, so that it cannot be disclosed.
2.5 The sender's side
Step 1 - the sender uses the random generator GEN to generate a new random character N.
Step 2 - the sender performs the first mathematical operation oC, agreed with the receiver, over the key set, the result of which is the key C (the first resulting character).
Step 3 - the sender performs the second mathematical operation oV, agreed with the receiver, between the new random character N and the key C (the first resulting character), the result of which is the second resulting character V.
Step 4 - the sender sends the second resulting character V to the receiver over the public channel.
Step 5 - the sender performs the third mathematical operation oK, agreed with the receiver, over the key set K, which changes the key set K.
The whole procedure at the sender then repeats from the generation of a new random character.
2.6 The receiver's side
Step 1 - the receiver performs the first mathematical operation oC, agreed with the sender, over the key set K, the result of which is the key C (the first resulting character).
Step 2 - the receiver receives the second resulting character V over the public channel.
Step 3 - the receiver performs the inverse mathematical operation oVi, agreed with the sender, between the second resulting character V and the first resulting character C, the result of which is the new random character N.
Step 4 - the receiver performs the third mathematical operation oK, agreed with the sender, over the key set, which changes the key set K.
The whole process at the receiver then repeats from the first agreed mathematical operation.
2.7 Symbolic description of the generic NGNZ algorithm

Step | Random-character source / Sender                 | Step | Receiver
 --  | Creation and sharing of the generated random key character set (both sides)
 1   | N = GEN                                           | --   | --
 2   | C = oC(K)                                         | 1    | C = oC(K)
 3   | V = N oV C                                        | --   | --
 4   | Sending of character V over the public channel    | 2    | Reception of character V over the public channel
 --  | --                                                | 3    | N = V oVi C
 5   | K = oK(K)                                         | 4    | K = oK(K)
     | Repeat from step 1                                |      | Repeat from step 1

The sender and the receiver each have their own set of random keys; the two sets are mutually identical and are continuously and synchronously changed by the random character in such a way that they remain identical even after the changes have been made. This approach yields a covert channel for passing random keys.
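A toy instantiation of the generic scheme may make the synchronisation easier to follow. XOR is used here as the second operation oV (so oVi is the same operation), and the concrete choices of oC and oK are assumptions made only for the sake of a runnable example; the paper merely requires that both parties agree on these operations.

import os

class NgnzParty:
    def __init__(self, shared_k: bytearray):
        self.K = bytearray(shared_k)        # identical set K on both sides

    def _oC(self):
        """First operation: derive the key byte C (and its index) from the set K."""
        idx = sum(self.K) % len(self.K)
        return idx, self.K[idx]

    def _oK(self, n: int, idx: int) -> None:
        """Third operation: synchronously mutate K (here: fold N back in and rotate)."""
        self.K[idx] ^= n
        self.K = self.K[1:] + self.K[:1]

    def send(self, n: int) -> int:
        idx, c = self._oC()
        v = n ^ c                           # oV = XOR
        self._oK(n, idx)
        return v                            # sent over the public channel

    def receive(self, v: int) -> int:
        idx, c = self._oC()
        n = v ^ c                           # oVi = XOR (its own inverse)
        self._oK(n, idx)
        return n

if __name__ == "__main__":
    seed = bytearray(os.urandom(32))        # the set K, delivered once over a trusted path
    alice, bob = NgnzParty(seed), NgnzParty(seed)
    stream = [os.urandom(1)[0] for _ in range(8)]      # output of GEN at the sender
    received = [bob.receive(alice.send(n)) for n in stream]
    assert received == stream and alice.K == bob.K     # sets stay identical
    print("shared random characters:", received)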
2.8 Applications built on NGNZ
This method of synchronising random key generators can be used, for example, to encrypt information with a symmetric cipher: the synchronous changes of the random key generators run continuously, and at a pre-agreed moment the current, transient state of the generators is used by the communicating parties as a key for a symmetric cipher such as DES, IDEA and the like.
Another use is, for example, to use the transient state of the generators to create a random sequence of characters that can be used for encryption by the Vernam method. In this case frequency analysis cannot be used for its cryptanalysis, because the complexity of the mutual dependence of the keys is given by the bit length of the set K used.
A further way of using NGNZ is to embed the information that we want to transfer secretly directly into the synchronisation changes of the generators. The complexity of reverse cryptanalysis is again given by the bit length of the key set K used.
2.9 Conclusion on NGNZ
NGNZ is a system that makes it possible to use characters from a random generator at several locations over a public channel. The NGNZ system does not use any standard cryptographic mechanism; in this sense it actually uses no mechanism at all other than random operations over random characters. In this sense it is a system that is in fact the lowest layer of encryption. With a correct implementation, the possibility of various attacks on the system can be completely eliminated, and it can be fully shown that the only possible attack on the system is brute force. Given the ability of NGNZ to use extremely long keys (of essentially unlimited length) without any impact on the speed and performance of the system, this possibility approaches zero in the limit. Thanks to this, the keys from NGNZ can demonstrably be used as input for the Vernam cipher.
3 Multi Level Security
The next part of the paper deals with Multi Level Security (MLS). It shows how the foundations of an MLS architecture can be obtained very easily if we also return to the foundations of the systems.
The first part presents the principle of the MLA (Multi Level Architecture™) Switch device, a device for the total isolation of applications. The second presents the MLA DD (Data Diode) device for the controlled passage of data between applications.
3.1 MLA Switch
The MLA Switch technology represents a revolutionary answer to current security threats. For the first time in the history of modern IT it is possible to browse the internet and at the same moment process highly sensitive data. MLA Switch guarantees complete security: the user of this technology faces no risk of on-line infection of the protected part of the PC by viruses, spyware, malware or any other external threat. This capability of the MLA Switch technology is completely reliable and has no weak points; for products based on MLA Switch there is no risk of the security of this technology being breached.
The technology is based on the simple observation that, for the user, a PC is essentially a monitor and peripherals. What lies behind them is of no interest as long as the monitor shows "a standard display with standard behaviour" and it can be controlled in the standard way through the peripherals, typically the keyboard and the mouse. Thanks to this observation, MLA Switch works by merging two PC systems into a single one.
The core of the unique MLA Switch technology is the fully hardware-based MLA Bridge device, which merges video signals from different systems. Its task is to combine the images of application windows provided by several systems, normally displayed on several monitors, into a single image, to display this image on a single monitor, and conversely to automatically redirect the input from a single keyboard and mouse to exactly that system whose application window the user has selected on the screen.
The result is that the applications appear to exist on a single monitor, and thus in a "single PC", and are controlled completely transparently, while in reality they exist in total isolation.
When deployed in practice, a standard PC is therefore supplemented with an MLA Bridge, which allows several operating systems to run side by side completely transparently and securely. The operating systems do not need to be modified in any way; standard distributions are used. At any given moment it is thus possible to use MS Windows, Unix/Linux or SUN Solaris systems, which can all run without any restrictions.
3.2 The uniqueness of the MLA Bridge device lies above all in:
• the absence of any software that could serve as a dangerous environment for malicious code in the user's operating system,
• the total separation and impermeability of the source systems,
• the universality of its use, the only prerequisite being the availability of an ordinary DVI signal from the source systems,
• the intelligence of the device, which can switch the user's communication with the source systems purely on the basis of its knowledge of the video signal.
3.3 MLDD
The MLA Data Diode is a device that complements the MLA Switch at the lowest layer, but it can of course also be used on its own.
The data diode is a simple and yet completely secure solution for one-way data transfer between two separate PCs or information systems.
The MLDD data diode behaves as a standard USB device that is connected to two separate computers or information systems. The MLDD allows data to be transferred in one direction only (the forward direction); in the reverse direction it passes nothing. The one-way flow of data is enforced by an optical coupler, so the two sides are also physically separated at the electrical (metallic) level. On the USB side the MLDD is emulated as a COMx interface and is compatible with any software that communicates through this interface; it is therefore independent of the operating system used and of any particular implementation. Neither the transmitting nor the receiving part has any state information about its counterpart (there is no physical interconnection of their states).
3.4 Example of use:
Data from a system that processes classified information up to and including the classification level "Unclassified" can be sent to a system with a higher classification level without any feedback, i.e. without data being received back in the system with the lower classification level.
The device was developed as a cheap and readily available way of passing data from a public network into a protected network so that technical means (entirely independent of software) guarantee only one possible direction of data flow (the forward direction). In the reverse direction the data are automatically "dropped", i.e. their reception by the other side is completely impossible.
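Because the diode is presented to the host as a COMx interface, any software that can write to a serial port can use it. The sketch below shows one assumed way of framing data on the sending side and verifying it on the receiving side; the port names, baud rate and framing are illustrative, and since there is no return path the sender receives no acknowledgement and must rely on redundancy.

import binascii
import struct
from typing import Optional

import serial  # pyserial

def send_frame(port: str, payload: bytes) -> None:
    """Low side: write one framed payload; nothing can ever be read back."""
    header = struct.pack(">II", len(payload), binascii.crc32(payload))
    with serial.Serial(port, 115200) as tx:
        tx.write(header + payload)

def recv_frame(port: str) -> Optional[bytes]:
    """High side: read one frame and verify its CRC32 checksum."""
    with serial.Serial(port, 115200, timeout=5) as rx:
        header = rx.read(8)
        if len(header) < 8:
            return None
        length, crc = struct.unpack(">II", header)
        payload = rx.read(length)
        return payload if binascii.crc32(payload) == crc else None

# Example (port names are placeholders for the diode's two COM interfaces):
#   send_frame("COM3", b"exported document")   # on the sending computer
#   data = recv_frame("COM4")                   # on the receiving computer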
4 Conclusion
The aim of this paper was to show how elegantly the underlying problems of certain security threats, or of cryptography, can be solved if we are able to return to the most basic level of thinking.
The devices presented in this paper are based on patented technologies owned by PositronLabs and its staff and are currently in regular production.
USB and other interfaces under control
Ing. Zdeněk Sauer
[email protected]
SODATSW spol. s r.o.
Horní 32, 639 00 Brno
1 Introduction
According to experts on interpersonal relations, trust is one of the things that can change the performance and results of an organisation. "Trust, but verify" is a saying that helps many build an organisational environment full of trust between people. "Trust, but verify" is also the saying on which the "USB under control" solution is based: trust in the benefits of using modern USB devices, but at the same time verify the purpose of their use.
USB under control makes it possible to gain control over the use of USB devices in an organisation. At the same time it increases and verifies trust in the organisation's own staff and thus prevents sensitive documents, information and data from being carried out of the organisation.
2 Why USB under control
Using the USB interface, today the most standard interface and one supported by all manufacturers, is a necessity in organisations. Without it, the organisation's computers would run at quarter throttle. Besides the benefits, however, there is also the other side of the coin. Sensitive information can leak out of the organisation through the USB interface, or, conversely, unwanted data can find their way in. The affordability of external USB drives allows a user to carry out internal company documents, company know-how, information from files, strategic plans, staff lists and other sensitive documents, information and data within moments. On top of that, the pervasive anonymity of the operations performed creates fertile ground for such actions.
And yet only three steps are needed to build a trustworthy environment for USB devices:
1. Monitor the user's work.
2. Prevent the use of unwanted USB devices.
3. Make the content unusable outside the organisation.
2.1 Monitoring the user's work and active protection
Monitoring the user's work gives you an accurate picture of their activities on the computer. Monitoring can identify the USB devices used, other peripheral devices connected via PCMCIA, FireWire or Bluetooth, and the files read, written and copied to external storage media. Besides these operations related to USB devices, you also get an overview of the applications used, of web browsing and of the time spent actively working on the computer.
In the first phase you get an overview of how USB devices are used in your organisation. In the second phase you keep records of the operations performed and retrospectively evaluate potentially dangerous activities that undermine trust. In the third phase you can prove unambiguously that a USB device was used for prohibited operations or even a possible attack. The fourth phase can immediately inform the responsible people that an attack may have started and how it is progressing.
2.2 Preventing the use of unwanted devices
By preventing the use of unwanted USB devices you prohibit the use of the types of USB devices the organisation does not need. Based on the initial overview of the organisation, a precise list of permitted USB devices can be created; devices can be identified by type (e.g. keyboards, mice, printers, etc.), by a particular family (e.g. cameras) or by the unique serial number of the device (a simple illustration follows).
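The three ways of identifying a permitted device can be captured in a simple rule check, sketched below with made-up class names, vendor IDs and serial numbers; the actual product's rule engine is, of course, richer.

from dataclasses import dataclass

@dataclass(frozen=True)
class UsbDevice:
    device_class: str   # e.g. "keyboard", "mouse", "mass-storage", "camera"
    vendor_id: str
    product_id: str
    serial: str

ALLOWED_CLASSES = {"keyboard", "mouse", "printer"}   # allowed by device type
ALLOWED_FAMILIES = {("04a9", "*")}                   # e.g. all devices of one vendor
ALLOWED_SERIALS = {"0401AA7F31337"}                  # individually approved devices

def is_allowed(dev: UsbDevice) -> bool:
    """Allow a device by its type, by a vendor/product family, or by serial number."""
    if dev.device_class in ALLOWED_CLASSES:
        return True
    if (dev.vendor_id, "*") in ALLOWED_FAMILIES or (dev.vendor_id, dev.product_id) in ALLOWED_FAMILIES:
        return True
    return dev.serial in ALLOWED_SERIALS

if __name__ == "__main__":
    stick = UsbDevice("mass-storage", "0781", "5583", "0401AA7F31337")
    print(is_allowed(stick))   # True: this particular stick is approved by serial number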
2.3 Making the content unusable outside the organisation
Making the content of USB devices and other external storage media (CD/DVD, USB flash drives, external USB hard disks, etc.) unusable outside the organisation still allows documents, information and files to be transferred inside the organisation. At the same time it is possible to prevent unwanted documents, information and data from being brought from such media into the organisation's internal computer network.
3 The use of USB ports and devices from the NBÚ's point of view
In its methodological guideline "Use of FireWire and USB ports and security aspects of flash memories", the NBÚ defines the policy for using these technologies in systems intended for handling classified information that are subject to certification under Act No. 412/2005 Coll., on the protection of classified information and on security clearance, as amended.
It follows from the cited "Policy for the secure use of USB ports" that if an information system processing classified information uses USB ports in both fixed and detachable connections, approved and recommended third-party tools must be used to enforce selective access and to perform auditing.
Extending the operating system with third-party tools supplements its missing functionality and settings, in particular with:
• finer-grained access rights at the level of users and devices,
• different types of access (read, write, no access),
• access modes: permanent, temporary, scheduled, online/offline,
• support for a larger number of device types, including FireWire support,
• definition and distinction of different device types even within a single category, and handling of them,
• extended auditing capabilities.
4 The use of USB ports and devices from the point of view of ČSN ISO/IEC 27001:2006
In line with the ongoing processes of introducing an information security management system (ISMS) according to ČSN ISO/IEC 27001:2006, the described solution supports the fulfilment of the objectives and controls of its implementation, for example in the areas of managing removable computer media, their security during transport, auditing their use, monitoring system use, and so on. From this point of view the solution is suitable for deployment in information systems processing sensitive information, including personal data.
5 For whom is gaining control over the use of USB devices in the organisation a benefit, and for whom is it not?
Managers
• they support compliance with legislation on information protection,
• they have taken steps to reduce corruption stemming from the removal, insertion or alteration of information in the organisation's documents,
• any incidents that occur stem from the failure of an individual, not of the system.
IT administration and security staff
• they can provide accurate information about the state of USB device usage that is not based merely on their assumptions,
• they can present precise evidence of incidents,
• they receive immediate information about a possible incident.
Loyal employees
• it means no change for them,
• they can continue to use permitted USB devices for their work,
• it acts as a safeguard in case of their erroneous, unintentional behaviour (for example losing a USB device).
Employee saboteurs
• rapid detection of their improper behaviour,
• unambiguous evidence proving their intentions,
• the end of unrestricted work with USB devices.
6 How USB under control works
6.1 How USB under control works from the user's point of view
The basic change for users in the organisation is that all their activity with USB devices will be monitored and everything will be known. From that moment the computer environment stops being anonymous for the user. That is a fundamental change which, in the vast majority of cases, by itself raises users' awareness and reduces the threats arising from the use of USB devices.
Otherwise nothing changes for the ordinary user. If they try to use a prohibited device, they are informed of the fact and cannot use the device. If they save files to an external storage medium with the intention of working with them outside the organisation's internal network, those files will be unreadable to them. The same applies when they copy files from an external storage medium outside the organisation's internal network.
For an ordinary, loyal user the introduction of USB under control increases the oversight of their behaviour, which acts as a safeguard in case of erroneous or unintentional actions.
6.2 How USB under control works from the administrator's point of view
Every computer in the organisation's network must have a client installed that can perform any of the three steps leading to the creation of a trustworthy environment. A natural prerequisite for introducing USB under control is central administration, which allows automatic installation of the USB under control clients on the organisation's computers, management of settings, updates, downloading of log records, evaluation, and so on.
The configuration of USB under control itself can be linked to Active Directory and targeted at a specific computer or user. When targeting users, it is still possible to apply settings to the security groups and organisational units the user belongs to. By creating and editing simple templates, the settings for a user or computer can be modified at any time and distributed to that user or computer.
Monitoring users' work produces log records of user activity. These log records are regularly transferred to a designated server, where they are collected. From the central administration, immediate queries on user activities can be run over these log records; a user's activity is available to the administrator in the log record practically immediately. For overall evaluation of activities, the logs are imported into an SQL database, where they are prepared for generating a range of reports for different job roles.
Part of the user-activity monitoring is an alert system, which can be configured to report potentially dangerous actions: unusual user behaviour that could pose a threat to the organisation's documents, information or data. In such a case an e-mail is sent automatically, or another defined action is triggered, so that the responsible people are informed.
Lists of permitted or prohibited USB devices (black/white lists) can be maintained by the administrator through the central administration. A device is detected automatically when it is inserted into any computer in the organisation. Adding the device to the list is automatically propagated to all other computers or users.
7 Encryption - the salvation for the security of documents located outside the organisation
The only way to prevent the content of an external storage medium from being used outside the organisation's internal network is to use encryption. The days when encryption was the privilege of secret and intelligence services are gone; today encryption surrounds us on all sides. The central administration manages the encryption policy, which above all covers the management of encryption keys and their distribution to users. All files saved to external storage media are encrypted on-line, without the user's knowledge and without any possibility of user intervention. Such an external storage medium is readable only inside the organisation's computer network.
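The principle can be sketched as follows: a key distributed only to managed stations inside the organisation is used to encrypt every file written to removable media, so the medium is unreadable elsewhere. The key handling shown (a single Fernet key read from a file path) is a simplification for illustration; the real solution manages and distributes keys centrally.

from pathlib import Path
from cryptography.fernet import Fernet

def load_org_key(key_path: str) -> Fernet:
    # the key is present only on managed stations inside the organisation's network
    return Fernet(Path(key_path).read_bytes())

def write_to_removable(src: Path, dst_on_media: Path, cipher: Fernet) -> None:
    """Called transparently whenever a file is copied to removable media."""
    dst_on_media.write_bytes(cipher.encrypt(src.read_bytes()))

def read_from_removable(src_on_media: Path, cipher: Fernet) -> bytes:
    """Succeeds only on stations that hold the organisation key."""
    return cipher.decrypt(src_on_media.read_bytes())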
To meet all of the organisation's expectations, USB under control provides a portal interface for generating a wide range of reports and for detailed evaluation of user activity whenever a potential attack needs to be documented. By defining different roles it is possible to access and generate the required types of reports over different groups of users.
8 Information on the modular deployment of the solution
The entire USB under control solution is based on the products Desktop Management System OptimAccess and Desktop Security System AreaGuard.
Desktop Management System OptimAccess version 10.5 is a standalone application for the Microsoft Windows 2000/XP/Vista operating systems, which it extends with Desktop Management functions. The system itself has a modular design and consists of eight mutually independent modules:
• OptimAccess Standard - protects the structure and settings of the operating system itself and can roll back the user's changes when the workstation is restarted.
• OptimAccess Extension - allows detailed restrictions on changes to applications, restrictions on internet access and downloads, and configuration of system policies and of access to the system parts of the OS settings.
• OptimAccess WorkSpy - a tool for recording and monitoring the user's work activities, with the option of sending alerts.
• OptimAccess Computer Audit - used to inventory and compare the installed software and the hardware present on user workstations.
• OptimAccess NetHand - intended for remote administration of workstations via screen takeover and for batch software distribution.
• OptimAccess Remote Control - a means of remote as well as bulk administration of OptimAccess. OptimAccess settings can be exported to a file and distributed in bulk to remote workstations with OptimAccess Remote Control.
• OptimAccess HelpDesk - used for convenient and transparent tracking of technical support requests and their resolution.
• OptimAccess Report Center - provides advanced options for reporting and presenting the data collected by the other modules of OptimAccess Solution.
Desktop Security System AreaGuard version 4.1 is a standalone application for the Microsoft Windows 2000/XP/Vista operating systems. The system consists of four modules:
• AreaGuard Gina - offers hardware-based protection of login credentials (name, password and the like) for the operating system and for user applications.
• AreaGuard Notes - extends the functionality and security mechanisms of the operating system of computers and workstations with transparent on-line file encryption, with the option of using secure hardware.
• AreaGuard Server - extends the capabilities of the AreaGuard Notes on-line encryption module for use on server systems such as terminal servers.
• AreaGuard AdminKit - a tool for the administration, configuration and central management of the AreaGuard security system.
The user interface of the modules is uniform, compact and consistent across all supported operating systems. The individual modules can be deployed in the organisation gradually, providing the organisation with a solution tailored precisely to its needs.
9 Conclusion
According to published survey results, 70% of unauthorised access occurs inside the organisation and 95% of attacks causing direct financial loss to the organisation originate with its own employees. These findings should not leave any responsible manager unmoved; on the contrary, they should get them "out of their chair" and make them address the issue of attacks by their own employees rigorously. These risks are all the more pronounced in a period of increased movement on the labour market in connection with the global economic situation. Solutions exist, and they are no more complicated than an anti-virus or a firewall. Gaining control over the movement of information and data through external storage media, mobile devices and notebooks definitely reduces the risk of information being lost or misused by the organisation's own employees.
Security Solutions from T-Systems
Mgr. Vlastislav Havránek
[email protected]
T-Systems Czech Republic a.s.
Na Pankráci 1685/19, 140 21 Praha 4
1 Introduction
In today's world, where everything is interconnected by the internet, you must be absolutely sure of your security and protection at every moment. The comprehensive security portfolio of T-Systems offers you seamless support in all security aspects of contemporary ICT. Our products and solutions provide complete security for your networks, systems, applications and business processes.
New security issues can arise, and do arise, at every moment of today's rapid development of ICT infrastructure and information systems, but solving them is often far from easy or simple to implement. To ensure that your company's business processes run smoothly and correctly, the information system must be demonstrably secure. T-Systems plans, develops, integrates and operates systems that allow your office-based or mobile staff to work creatively with the most modern technologies, while you keep the security of your data and processes fully under control.
Areas covered by ICT security solutions (Security Solutions from T-Systems):
Security Consulting
• Identification and assessment of information risks.
• Definition and implementation of appropriate countermeasures.
• Design and implementation of an information security management system.
• Design and implementation of technical security.
• Penetration tests and audits of information systems.
• Design of security processes and preparation of documentation.
• Preparation for security audits and inspections.
Crypto Services
• Administration and management of user identities and rights.
• Authentication, authorisation, digital signatures and encryption.
• Key and certificate management systems.
• Smart-card-based applications.
Managed Firewall
• Protection of the customer's network against external attacks.
• Filtering and monitoring of the customer's network communication.
• Supplementary security services.
• Enabling secure connection to the Internet.
• Creation of a defined network according to specified security rules.
SecureMail
• Complete security of e-mail communication.
• Protection against unsolicited mail.
• Protection against viruses, worms and malicious code.
• User quarantines for spam and other types of infected mail.
• Special proactive security filters.
• Detailed reporting and "online" security monitoring of e-mail communication.
• Protection of incoming as well as outgoing mail.
2 Security Consulting
2.1
Risk Management, Audits and Penetration
Komplexnost ICT infrastruktury vzrůstá jak s velikostí organizace tak v souvislosti s novými bezpečnostními trendy. Tím se zvyšuje celková zranitelnost informačního systému. Pomocnou ruku nabízí systém
bezpečnostních analýz, autorizovaných penetračních testů, strukturovaných průzkumů (audity) ve společnosti, které v kombinaci s dalšími opatřeními můžeme přinést výrazně lepší ochranu. Získané poznatky
jsou předány ve formě dokumentace obsahující podrobné analýzy rizik s vhodnými protiopatřeními
zodpovědným osobám ve společnosti. Dokumentace obsahuje především organizační, administrativní,
fyzickou, infrastrukturní a personální oblast bezpečnosti, které jsou uzpůsobeny vnitřním požadavkům
dané společnosti a příslušné legislativy.
2.2
Process Design and Implementation
ICT security is not only a matter of the hardware or software used, or of the latest technological innovations. The design of security-related processes is at least as decisive. Even the smartest and most sophisticated IT security solution can be compromised without adequate planning, coordination, implementation and control. T-Systems analyses, designs, optimizes and implements the security of IT processes and the corresponding organizational structures, with the aim of creating a highly effective and highly transparent environment for security management. Our work is based on national and international standards, including the principles of an ISMS (Information Security Management System).
2.3 Security ICT Architecture Design and Implementation
An extensive ICT security infrastructure constantly creates new tasks and challenges. The organization needs to be able to respond to new situations so that the specified standard of sustainable protection is achieved. The best way to build an information infrastructure is to draw on the experience of demonstrably secure ICT models and architectures already at the planning stage. Our IT architects and designers apply their extensive experience to help you make responsible decisions about the future shape of your information systems strategy. We offer full cost transparency with respect to security from the earliest development phase onwards. This ensures that you will be able to run your internal and business processes, in all their complexity, effectively and efficiently, whatever tomorrow brings.
2.4 Security and Vulnerability Management Solutions
From firewalls through antivirus mechanisms to intrusion detection: this is what real and effective ICT security for enterprises looks like today. But protection of this kind brings problems of its own, for example in security monitoring and management and in maintaining configuration settings. Given the growing flood of data from a wide variety of logs, identifying and responding to critical events can be a serious challenge. With our help you will be able to identify your real security problems and separate them from the less significant ones. Our services include modules for drawing up security policies and organizational manuals. We help with the identification of corporate assets and with carrying out risk analysis and assessment, we can outline appropriate protective measures, and we develop descriptions of processes and operating models, including training.
3 Crypto Services
3.1 Safe Access Control
Safe Access Control is a system for complete and comprehensive network access control. Regardless of how endpoints connect to the network, the Network Access Control service detects and evaluates their compliance status, grants the appropriate level of network access, remediates where necessary, and continuously monitors changes in the compliance status of the endpoints.
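The following Python sketch illustrates the general idea behind such a compliance check; it is an illustration only, not the T-Systems implementation, and the Endpoint attributes, policy threshold and decision labels are hypothetical.

# Minimal sketch of a Network Access Control compliance decision.
# All policy values and endpoint attributes are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Endpoint:
    hostname: str
    antivirus_up_to_date: bool
    patch_age_days: int      # days since the last OS update
    disk_encrypted: bool

def compliance_decision(ep: Endpoint, max_patch_age_days: int = 30) -> str:
    """Return 'admit', 'remediate' or 'quarantine' for a connecting endpoint."""
    if not ep.antivirus_up_to_date or ep.patch_age_days > max_patch_age_days:
        # Fixable problems: move the endpoint to a remediation network segment.
        return "remediate"
    if not ep.disk_encrypted:
        # Policy violation that cannot be fixed automatically.
        return "quarantine"
    return "admit"

laptop = Endpoint("sales-laptop-01", antivirus_up_to_date=True,
                  patch_age_days=12, disk_encrypted=True)
print(compliance_decision(laptop))   # -> admit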
3.2 Safe Desktop and Laptop
Security concerns do not disappear when we disconnect from the Internet or switch the computer off. The rising risk of theft of data, or of the entire device, is alarming. Our security services protect your systems against such threats and risks: through encryption of disks or of the entire operating system, through the use of smart cards for identification, and through container encryption, in which data are automatically stored in a secure virtual environment.
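As an illustration of the container-encryption principle mentioned above (data written to a container is always stored encrypted and only decrypted on read), the following minimal Python sketch uses the third-party cryptography package; the file name and key handling are hypothetical and deliberately simplified, and this is not the T-Systems product itself.

# Minimal sketch of container-style encryption.
# Requires the third-party 'cryptography' package; paths are hypothetical.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice the key would live on a smart card
container = Fernet(key)

def store(path: str, data: bytes) -> None:
    """Write data to the container file in encrypted form."""
    with open(path, "wb") as f:
        f.write(container.encrypt(data))

def load(path: str) -> bytes:
    """Read and decrypt data from the container file."""
    with open(path, "rb") as f:
        return container.decrypt(f.read())

store("secure_container.bin", b"confidential report")
print(load("secure_container.bin"))  # -> b'confidential report'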
3.3 Safe Storage
Safe Storage is a full-lifecycle security solution for your heterogeneous data storage or tape backup devices that does not disrupt existing applications, infrastructure, clients, servers or user workflows. Storing data on storage systems with fast network access can be a sensitive and vulnerable point of the entire infrastructure. Our solution combines secure access control, authentication, encryption and secure login systems to protect your data.
3.4 Safe Communication
The T-Systems solution electronically signs or encrypts e-mails and documents in any electronic form, with a single mouse click or fully automatically. You can verify the origin of data, or encrypt and decrypt important documents, communication and other assets within your company. Strong cryptographic principles of authentication, authorization, communication protocols and encryption provide a resilient and secure system of information protection.
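The general principle behind verifying the origin of data, signing with a private key and verifying with the corresponding public key, can be sketched as follows; the example uses the third-party cryptography package and illustrates the technique in general, not the T-Systems solution.

# Minimal sketch of signing a document and verifying its origin.
# Uses the third-party 'cryptography' package; an illustration only.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"Purchase order, electronic form"

# Sender signs the document with the private key.
signature = private_key.sign(
    document,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Recipient verifies the origin with the sender's public key.
try:
    public_key.verify(
        signature,
        document,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature valid: document origin confirmed")
except InvalidSignature:
    print("signature invalid: document was altered or is not from the claimed sender")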
4 Managed Firewall
T-Systems helps you face these challenges: with secure access, firewalls, antivirus, antispam and other filters, detection and prevention tools, security management and monitoring, and further components for mobile and secure communication. In addition, we provide expert consulting on the design and implementation of a secure ICT environment and of the information and communication technology infrastructure. In the form of outsourcing, we can take over full responsibility for operating your security solution.
The basic building block of the ICT security infrastructure is the Managed Firewall product (Figure 1).
Figure 1: Basic diagram of the Managed Firewall product.
Services offered within the Managed Firewall product (an illustrative sketch of rule evaluation follows the list):
• Firewall,
• IPS/IDS,
• Antispam,
• Antivirus,
• Web Filtering,
• Security management,
• Security monitoring and reporting.
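As a purely illustrative sketch of what "a defined network according to specified security rules" can mean, the following Python fragment evaluates traffic against an ordered rule list with a default-deny policy; the rules, address prefixes and ports are hypothetical examples, not T-Systems configuration.

# Minimal sketch of ordered firewall rule evaluation with a default-deny policy.
# Rule values are hypothetical examples.

RULES = [
    # (action,  source network prefix, destination port)
    ("allow", "10.0.", 443),   # internal clients to HTTPS
    ("allow", "10.0.", 25),    # internal mail relay to SMTP
    ("deny",  "",      23),    # telnet blocked from anywhere
]

def decide(src_ip: str, dst_port: int) -> str:
    """Return the action of the first matching rule, or 'deny' by default."""
    for action, src_prefix, port in RULES:
        if src_ip.startswith(src_prefix) and dst_port == port:
            return action
    return "deny"

print(decide("10.0.3.7", 443))     # -> allow
print(decide("192.168.1.5", 443))  # -> deny (default policy)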
5 SecureMail
One of the most important types of communication today is e-mail, which has become an integral part of both personal life and work. Sending and receiving e-mail messages is now as routine as making a phone call or sending a fax. The dark side of e-mail technology is the sharp and continuous growth of unsolicited mail and the associated risks of virus infection, forged web pages, spyware and phishing. The most effective way to face these threats is to choose a suitable security solution capable of countering all of these dangers.
Figure 2: Basic diagram of the SecureMail product.
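The quarantine mechanism listed among the SecureMail services can be sketched, very roughly, as score-based filtering: messages exceeding a threshold are diverted to a per-user quarantine instead of the inbox. The keywords, scores and threshold below are hypothetical, and the sketch does not represent the SecureMail implementation.

# Minimal sketch of score-based spam filtering with a per-user quarantine.
# Keywords, scores and the threshold are hypothetical examples.

from collections import defaultdict

SPAM_HINTS = {"lottery": 3, "free offer": 2, "click here": 2}
THRESHOLD = 4

quarantine = defaultdict(list)   # recipient -> quarantined message subjects
inbox = defaultdict(list)        # recipient -> delivered message subjects

def deliver(recipient: str, subject: str, body: str) -> None:
    """Score the message and route it to the inbox or the user's quarantine."""
    text = (subject + " " + body).lower()
    score = sum(points for hint, points in SPAM_HINTS.items() if hint in text)
    if score >= THRESHOLD:
        quarantine[recipient].append(subject)   # user can review or release later
    else:
        inbox[recipient].append(subject)

deliver("alice@example.com", "Free offer - click here", "You won the lottery!")
deliver("alice@example.com", "Project meeting", "Agenda attached.")
print(dict(quarantine))  # -> {'alice@example.com': ['Free offer - click here']}
print(dict(inbox))       # -> {'alice@example.com': ['Project meeting']}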
6 Conclusion
T-Systems Czech Republic is aware that security aspects are becoming an integral part of every ICT project, and therefore offers a comprehensive range of security products and solutions that allow you to leave your security worries to a strong and trustworthy partner.
For more information, please visit our website www.t-systems.cz.