FTP'd from cert.org:
File pub/virus-l/docs/net.hormones

Date: Thu, 16 Mar 89 20:56:18 +0100
From: David Stodolsky <stodol@diku.dk>

Net Hormones: Part 1 -
Infection Control assuming Cooperation among Computers

Copyright (c) 1989 David S. Stodolsky, PhD. All rights reserved.

1. Abstract

A new type of infection control mechanism based upon contact tracing is
introduced. Detection of an infectious agent triggers an alerting
response that propagates through an affected network. A result of the
alert is containment of the infectious agent, as all hosts at risk
respond automatically to restrict further transmission of the agent.
Individually specified diagnostic and treatment methods are then
activated to identify and destroy the infective agent. The title "Net
Hormones" was chosen to indicate the systemic nature of this programmed
response to infection.

2. Introduction

A new type of infection control mechanism that is based upon network-
wide communication and that depends upon cooperation among computer
systems is presented. Neither diagnosis nor treatment is necessary for
the operation of the mechanism. The mechanism can automatically trigger
responses leading to effective containment of an infection. The
identification and destruction of the infectious agent is determined by
individual actions or programs. This permits a highly desirable
heterogeneity in diagnostic and treatment methods.

Definition: "Hormone . . . 1: a product of living cells that circulates
in body fluids or sap and produces a specific effect on the activity of
cells remote from its point of origin; especially one exerting a
stimulatory effect on a cellular activity. 2: a synthetic substance
that acts like a hormone (Webster's new collegiate dictionary, 1976)."
The analogy here is between each network node or computer system and
the cell. In biological systems, hormones attach to specialized
receptors on the cell surface, resulting in cell activation. In the
system described here, a match between a code in a system archive and a
code delivered as part of an alerting message results in activation.
Alerting messages circulated electronically serve the role of hormones.

Epidemiology has traditionally had three major approaches to the
control of infectious agents:

:1 - Treatment of the sick (e. g., penicillin)

:2 - Contact tracing (e. g., social-work notification programs, laws
forcing the reporting of certain diseases and of contacts of infected
persons)

:3 - Prevention (e. g., vaccination, public information campaigns)

In computer system terms:

:1 - Treatment of infections (e. g., various programs and manually
installed patches and fixes)

:2 - Contact tracing (e. g., software "recall" and other manual
operations)

:3 - Prevention (e. g., various programs for blocking virus
replication, alerting users, and logging suspicious events)

Contact tracing has been neglected with computer systems, although it
could be argued that it is much easier with computer systems than with
biological systems. Currently such tracing depends upon people reading
reports and determining whether their system is subject to infection,
performing diagnostic tests, determining a treatment method, obtaining
software, and so on. This is chancy and time consuming, and most often
requires people with the highest level of expertise. As computers and
networks speed up, an infectious agent could spread through a network
in hours or minutes. "Once a virus has infected a large number of
computers on a network, the number of infected removable media elements
will begin to skyrocket. Eventually, if the virus continues to go
undetected, a stage is reached in which the probability of identifying
and recovering all of the infected media is virtually zero (McAfee,
1989)." An automated contact tracing system thus seems essential if
infectious agents are to be controlled in the future.

3. Threats

"The modification of an existing virus to incorporate a long term delay
(such as 6 months or even a year) coupled with a totally destructive
manipulation task (such as a FAT, Boot sector scribble followed by a
complete format) is a fairly simple task. Such an action would convert
even a crude virus strain such as the Lehigh 1 virus into a
devistating (sic) strain. (Eg the comment by Ken that the modified
version of the Lehigh virus is now far more dangerous due to
modification of the delay in activation of its manipulation task)
(Ferbrache, 1989)."

Scott (1989) requested comments on:

"A little future speculation here... currently we seem to be fighting a
losing battle against virus detection and as viruses improve it's
unlikely that that will change. If we want the capability to download
shareware, etc, from bulletin boards, etc, then we must assume that we
cannot check the software for a virus with 100% success before running
it. In general, you can't know the output of a program given the
input without running it, except in special cases.

We can check for *known* viruses; but how long before shape-changing
and mutating viruses hit the scene that defeat all practical
recognition techniques?"

An inapparent infection could spread rapidly, with damage noted only
much later. Consider a worm that is constructed to carry a virus. The
worm infects a system, installs the virus, and then infects other
nearby systems on the net. Finally, it terminates, erasing evidence of
its existence on the first system. The virus is also inapparent: it
waits for the right moment, writes some bits, and then terminates,
destroying evidence of its existence. Later the worm retraces its path,
reads some bits, then writes some bits and exits. The point is that an
inapparent infection could spread quite widely before it was noticed.
It might also be so hard to determine whether a system was infected
that the determination would not be made until damage was either
imminent or apparent. This analysis suggests that the response to
network-wide problems would best be on a network level.

4. Theory of operation

Computers generate (in the simplest case) random numbers which are used
to label transactions. A transaction is defined as an interaction
capable of transmitting an infectious agent. After each transaction,
both systems therefore have a unique label or code for that
transaction. In the event that a system is identified as infected, the
transaction codes which could represent transactions during which the
agent was transmitted are broadcast to all other computers. If a
receiving computer has a matching code, then that system is alerted to
the possibility of the agent's presence, and can broadcast transaction
codes accumulated after the suspect contact. This iterates the process,
thus identifying all carriers eventually. The effect is to model the
epidemiological process, thereby identifying all carriers through
forward and backward transaction tracking (Stodolsky, 1979a; 1979b;
1979c; 1983; 1986).

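For concreteness, this scheme can be sketched in a few lines of code.
The sketch below (in Python) is illustrative only: the Host class, the
propagate function, and all other names are assumptions made for the
example, not part of the proposal.

    # A minimal sketch: each transaction is labeled with a shared random
    # code, and alerts iterate until every host that could carry the
    # agent has been identified.

    import secrets
    import time

    class Host:
        def __init__(self, name):
            self.name = name
            self.archive = {}             # transaction code -> time stamp

        def transact(self, other):
            """Label one transaction; both parties store the same code."""
            code = secrets.token_hex(16)  # random: identifies neither system
            stamp = time.time()
            self.archive[code] = stamp
            other.archive[code] = stamp

        def suspect_codes(self, earliest, latest):
            """Codes for contacts within the delimiting time window."""
            return {c for c, t in self.archive.items()
                    if earliest <= t <= latest}

    def propagate(hosts, origin, earliest, latest):
        """Broadcast suspect codes; matching hosts issue secondary alerts."""
        alerted = {origin}
        queue = [(origin, earliest)]
        while queue:
            host, since = queue.pop()
            for other in hosts:
                if other in alerted:
                    continue
                matches = (host.suspect_codes(since, latest)
                           & set(other.archive))
                if matches:
                    alerted.add(other)    # possible carrier: alert it in turn
                    queue.append((other,
                                  min(other.archive[c] for c in matches)))
        return alerted

Running propagate over a set of hosts returns every system reachable
through suspect contacts, modeling the iterated identification of
carriers described above.
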
5. The process of infection control

The process can be broken down into routine and alerting operations.
During routine operations, each file transfer is labeled in a way that
does not identify the systems involved. These labels are time stamped
(or have time stamps encoded in them). They are written into archives
on each system, ideally on write-once/read-many devices or some other
type of storage that cannot easily be altered.

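Such routine labeling might look as follows, assuming (as one of the
two options above) that the time stamp is encoded in the label itself.
The file name and label format are illustrative assumptions.

    import secrets
    import time

    def new_transfer_label():
        """A random code with an embedded time stamp; names no host."""
        return "%d-%s" % (int(time.time()), secrets.token_hex(12))

    def record_label(label, archive_path="transfer.archive"):
        """Append-only log, approximating a write-once/read-many device."""
        with open(archive_path, "a") as f:
            f.write(label + "\n")
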
Alerting procedures are invoked when an infectious agent is noted or
when a suspect transaction code is received that matches one in the
system's archive. The earliest time the agent could have arrived at the
system, and the latest time it could have been transmitted from the
system (usually the moment the agent is noted or a received suspect
transaction code is matched), are used to delimit the suspect
transaction codes. These codes are broadcast to alert other systems to
the potential presence of the agent.

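Continuing the sketch above, the suspect codes are simply those whose
embedded time stamps fall between the two delimiting times:

    def suspect_labels(earliest, latest, archive_path="transfer.archive"):
        """Collect the labels inside the suspect window for broadcast."""
        with open(archive_path) as f:
            labels = [line.strip() for line in f]
        return [c for c in labels
                if earliest <= int(c.split("-", 1)[0]) <= latest]
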
In the simplest and most common case, if a system gets an alert that
indicates, "You could have been infected at time one," then the system
automatically packages the transaction codes between time one and the
present time to generate a new alert indicating the same thing to other
systems with which it has had contact.

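In terms of the sketch, this automatic response could be written as
below, where broadcast is a hypothetical stand-in for whatever alert
distribution mechanism is in use:

    import time

    def handle_alert(received_codes, broadcast,
                     archive_path="transfer.archive"):
        """On a match, package every label since time one and re-alert."""
        with open(archive_path) as f:
            mine = {line.strip() for line in f}
        matches = mine & set(received_codes)
        if matches:
            time_one = min(int(c.split("-", 1)[0]) for c in matches)
            broadcast(suspect_labels(time_one, int(time.time()),
                                     archive_path))
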
Another automatic response could be to immediately cut off
communications in progress, thus reducing the risk of infection. A
further benefit of such a reaction would be the possibility of
disrupting the transfer of an infectious agent: a disrupted agent
would be harmless and easily identified and evaluated. Communication
could be reestablished immediately, with new procedures in force that
would warn new users that an alert was in progress and limit the types
of transfers that could take place.

5.1. Practical considerations

Direct identification, as opposed to identification through forward
tracing notification, does not effectively delimit the earliest time
that an agent could have been present on a system. Thus an alert from
an originating system could include all transaction codes written prior
to the identification (or some default value). This could generate
excessive reaction on the network. The reaction could be controlled if
another system, in a later alert, indicated that it had originated the
infection on the system originating the alert. Thus, protection of
identity, which reduces any inhibition about reporting an infection, is
important. The type of reaction discussed here might be called a panic
reaction, because an excessive number of systems might be notified of
potential infection in the first instance.

A more restricted response could be generated if persons at the alert-
originating system analyzed the causative agent, in the hope of
establishing the earliest time the agent could have been present on
that system. In this case, the suspect transactions could be delimited
effectively and all systems that could have been infected would be
notified, as would the system that had transmitted the agent to the
system originating the alert (assuming one exists). Ideally, each
notified system would be able to determine whether it had received or
originated the infection and respond accordingly.

5.2. Forward tracing assumption

Assume, however, that rapid response is desired. Each notified system
would then react as if it had been notified of an infection transmitted
to it. It would package the transaction codes that had been written
later than the suspect transaction code it had received and issue a
secondary alert. This forward tracing assumption would lead to quite
effective control because of the exponential growth in the number of
infected hosts in epidemics (and the corresponding exponential growth
of alerts resulting from forward tracing). That is, a system can infect
many others as a result of a single infective agent transmitted to it.
Forward tracing would alert all systems that the alerting system could
have infected. These newly alerted systems would also issue forward
trace alerts, and this would continue until containment was reached
under the forward tracing assumption.

5.3. Backward tracing of suspect contacts and diagnosis

As a result of this rapid forward tracing response, it is likely that
more active infections would be identified. The resulting new
information could be used to characterize the life cycle of the agent
more effectively, thereby permitting effectively delimited backward
tracing. Also, as a result of accumulated information, positive tests
for the agent would become available. Once this stage had been reached,
the focus of action could shift from control of suspect transactions to
control of transactions known to facilitate the transmission of the
agent.

6. Feasibility and Efficiency

Both technical and social factors play a key role in the operation of
the control mechanism. Contact tracing is probably most effective for
sparsely interacting hosts. The rate of transfer of the infectious
agent as compared to the rate of transfer of the suspect transaction
codes is also a critical factor. Recording of transactions can be
comprehensive on computer networks; however, unregistered transactions
will be a factor in most cases. Once the infectious agent has been
identified, the types of transactions capable of transmitting the agent
can be delimited. This could increase efficiency.

6.1. Social organization of alerts

Another major efficiency factor is errors in the origination of alerts.
Since protected messages would trigger network-wide alerts, it is
important that false alarms are controlled effectively. On the other
hand, failure to report an infection could permit an infectious agent
to spread in an uncontrolled manner and could increase the number of
systems unnecessarily alerted. Successful operation of the mechanism
described above assumes voluntary cooperation among affected systems.
This assumption could be relaxed by application of an enforcement
mechanism, which would require substantially greater complexity and
greater centralization of coordination. In other words, if cooperation
was not forthcoming "voluntarily", users would likely be treated to a
complicated, restrictive, and resource intensive mechanism developed to
enforce it. "Estimates of the damages inflicted by November's Internet
infection alone ranged upward of $100 million . . . (McAfee, 1989)."
Costs of this magnitude make it very likely that even expensive
enforcement mechanisms will be developed if they are made necessary.

The simplest organizational strategy would assume that protection of
identity was not needed, but this would also be likely to inhibit
alerting. True anonymity, however, permits irresponsible behavior to go
unchecked. A reputation-preserving anonymity (pseudonymity) would be
desirable to ensure both protection and accountability and thereby
promote cooperation. Pseudonyms would best be the property of persons
(in association with a computer system).

Even sincere cooperation, however, would not eliminate inefficiencies
resulting from false alarms or failures to alert. Both inadequate
training and poor judgement are likely sources of these errors. If
users realize that there are reputational costs associated with these
failures, then they are likely to be motivated to minimize them. False
alarms are already a major problem because of user inexperience and the
high level of defects in widely used software. A reputational mechanism
would motivate increased user education and more careful software
selection, with a corresponding pressure on software publishers to
produce well-behaved and carefully documented products.

6.2. Enforcing cooperation

Crypto-protocols could be used to ensure that a non-cooperator could
not communicate freely with others using the infection control
mechanism. This type of communication limiting could be used routinely
to ensure that a system requesting connection was not infected. In
effect, systems would exchange health certificates before file
exchanges, to ensure that they would not be infected. A system that
could not show a health certificate could be rejected as a conversation
partner due to risk of infection. This would no doubt enforce
cooperation. The mechanism (Stodolsky, 1986) is beyond the scope of
this note.

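Although the full protocol is beyond the scope of this note, the flavor
of such a health certificate exchange can be suggested with a toy
sketch. The shared key, certificate format, and maximum age below are
all illustrative assumptions, not the actual mechanism of Stodolsky
(1986).

    import hashlib
    import hmac
    import time

    SECRET = b"certifier-key"   # hypothetical key shared with a certifier

    def issue_certificate(pseudonym):
        """A signed claim that the pseudonymous system is clean."""
        body = "%s:%d:clean" % (pseudonym, int(time.time()))
        sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
        return body + ":" + sig

    def accept_partner(cert, max_age=86400):
        """Reject a partner whose certificate is missing, forged, or stale."""
        try:
            pseudonym, stamp, status, sig = cert.split(":")
            age = time.time() - int(stamp)
        except (AttributeError, ValueError):
            return False
        body = "%s:%s:%s" % (pseudonym, stamp, status)
        good = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
        return (hmac.compare_digest(sig, good)
                and status == "clean"
                and age < max_age)
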
6.3. Non-network transfers

While the discussion above has focused on transfers through networks,
the same principles could be applied to disk or tape transfers. The
originating system would write a transaction code on the medium with
each file. Protection of identity would possibly be reduced under this
type of transfer. Since there is no question about the directionality
of transmission of an infectious agent in off-line transfers, non-
network transmission is likely to be easier to control. Several other
factors, such as the rate of spread of the agent, are likely to make
such infections less troublesome.

7. Summary and Benefits

The idea behind Net Hormones is to make imminent danger apparent. More
precisely, Net Hormones permit the visualization of infection risk.

7.1. Control of unidentified infectious agents

Net Hormones work by permitting isolation of infectious hosts from
those at risk. Identification of the infectious agent is not required
for action. Therefore, new and as yet unidentified agents can be
effectively controlled.

7.2. Rapid response

Hosts could automatically respond to alerts by determining whether they
had been involved in suspect contacts, and by generating new alerts
that would propagate along the potential route of infection.

7.3. Protection of identity

The mechanism could function without releasing the identity of an
infected host. This could be crucial in the case of an institution that
did not wish it to be publicly known that its security system had been
compromised, or in the case of use of unregistered software. More
precisely, software obtained by untraceable and anonymous file
transfers could be protected by this mechanism without release of
users' identities.

7.4. Distributed operation

Operation is not dependent upon a centralized register or enforcement
mechanism. Some standardization would be helpful, however, and a way to
broadcast alerts to all potential hosts would be valuable.

8. References

Ferbrache, David J. (1989, February 10). Wide area network worms.
VIRUS-L Digest, 2(44). [<davidf@CS.HW.AC.UK> <Fri, 10 Feb 89 11:45:37
GMT>]

McAfee, J. D. (1989, February 13). In depth: Managing the virus threat.
Computerworld, 89-91; 94-96.

Scott, Peter. (1989, February 10). Virus detection. VIRUS-L Digest,
2(44). [PJS%naif.JPL.NASA.GOV@Hamlet.Bitnet <pjs@grouch.jpl.nasa.gov>
<Fri, 10 Feb 89 10:46:21 PST>]

Stodolsky, D. (1979a, April 9). Personal computers for supporting
health behaviors. Stanford, CA: Department of Psychology, Stanford
University. (Preliminary proposal)

Stodolsky, D. (1979b, May 21). Social facilitation supporting health
behaviors. Stanford, CA: Department of Psychology, Stanford University.
(Preliminary proposal)

Stodolsky, D. (1979c, October). Systems approach to the epidemiology
and control of sexually transmitted diseases. Louisville, KY: System
Science Institute, University of Louisville. (Preliminary project
proposal)

Stodolsky, D. (1983, June 15). Health promotion with an advanced
information system. Presented at the Lake Tahoe Life Extension
Conference. (Summary)

Stodolsky, D. (1986, June). Data security and the control of infectious
agents. (Abstracts of the cross-disciplinary symposium at the
University of Linkoeping, Sweden: Department of Communication Studies.)

Webster's new collegiate dictionary. (1976). Springfield, MA: G. & C.
Merriam.

-------------------------------------------------------------

David Stodolsky                    diku.dk!stodol@uunet.UU.NET
Department of Psychology           Voice + 45 1 58 48 86
Copenhagen Univ., Njalsg. 88       Fax. + 45 1 54 32 11
DK-2300 Copenhagen S, Denmark      stodol@DIKU.DK