NCSC-TG-019
Library No. S-232,458


FOREWORD

The Trusted Product Evaluation Questionnaire is the latest in a series of
technical documents that are being published by the National Computer
Security Center under the Technical Guidelines Program. It is the goal of
the Technical Guidelines Program to assure that each process in the
Trusted Product Evaluation Program and each feature of the Department of
Defense Trusted Computer System Evaluation Criteria is discussed in detail
and that proper interpretations with specific guidance are provided.
These publications are designed to provide insight into the Department of
Defense Trusted Computer System Evaluation Criteria requirements for the
computer security vendor and developer, as well as the technical
evaluator.

The specific questions in the Trusted Product Evaluation Questionnaire
provide a set of good practices related to necessary system security and
system security documentation. This questionnaire has been written to
help the vendor understand what technical information is required
concerning the system for a product evaluation. From the vendor's
responses, the evaluator may obtain an understanding of the security of
the system applying for evaluation.

As the Director, National Computer Security Center, I invite your
recommendations for revision to this technical guideline. We plan to
review this document when the need arises.


________________
Patrick R. Gallagher, Jr.
16 October 1989
Director
National Computer Security Center

ACKNOWLEDGMENTS

The National Computer Security Center extends special recognition to
Santosh Chokhani, Ph.D. and Harriet Goldman as the primary authors of this
document, and to MAJ James P. Gordon (US Army) and LT Patricia R. Toth (US
Navy) for the development and publication of this guideline.

We wish to thank the many members of the computer security community who
enthusiastically gave of their time and technical expertise in reviewing
this questionnaire and providing valuable comments and suggestions.

CONTENTS

FOREWORD

ACKNOWLEDGMENTS

INTRODUCTION

1. PURPOSE

2. SCOPE

QUESTIONNAIRE

1. SUBJECTS
2. OBJECTS
3. HARDWARE ARCHITECTURE
4. SOFTWARE
5. IDENTIFICATION & AUTHENTICATION (I&A)
6. OBJECT REUSE
7. DISCRETIONARY ACCESS CONTROL (DAC) POLICY
8. LABELS
9. MANDATORY ACCESS CONTROL (MAC)
10. INTEGRITY
11. AUDIT
12. MODELING AND ANALYSIS
13. TESTING
14. OTHER ASSURANCES
15. OTHER DOCUMENTATION

GLOSSARY

REFERENCES

INTRODUCTION

The principal goal of the National Computer Security Center (NCSC) is to
encourage the widespread availability of trusted computer systems. In
support of this goal a metric was created, the Department of Defense
Trusted Computer System Evaluation Criteria (TCSEC), against which
computer systems could be evaluated. The TCSEC was originally published
on 15 August 1983 as CSC-STD-001-83. In December 1985 the DoD adopted it,
with a few changes, as a DoD Standard, DoD 5200.28-STD. DoD Directive
5200.28, "Security Requirements for Automatic Information Systems (AISs),"
requires, among other things, that the Department of Defense Trusted
Computer System Evaluation Criteria be used throughout the DoD. The TCSEC
is the standard used for evaluating the effectiveness of security controls
built into ADP systems. The TCSEC is divided into four divisions: D, C,
B, and A, ordered in a hierarchical manner, with the highest division (A)
being reserved for systems providing the best available level of
assurance. Within divisions C, B, and A there are subdivisions known as
classes, which are also ordered in a hierarchical manner to represent
different levels of security.

The NCSC has established an aggressive program to study and implement
computer security technology and to encourage the widespread availability
of trusted computer products for use by any organization desiring better
protection of its important data. The Trusted Product Evaluation Program
and the open and cooperative business relationship being forged with the
computer and telecommunications industries will result in the fulfillment
of our country's computer security requirement. We are resolved to meet
the challenge of identifying trusted computer products suitable for use in
processing all types and classifications of information.

1. PURPOSE

The NCSC is responsible for evaluating commercial products through an
independent evaluation, based on TCSEC requirements, by a qualified team
of experts, and for maintaining a list of those products on the Evaluated
Products List (EPL). To accomplish this mission, the NCSC Trusted Product
Evaluation Program has been established to assist vendors in developing,
testing, and evaluating trusted products for the EPL.

During the evaluation process, the TCSEC for classes C1 through A1
requires a determination that the security features of a system are
implemented as designed and that they are adequate for the specified level
of trust. In addition, the TCSEC requires documentation to support a
system's security. During the various phases of the evaluation process,
the vendor supplies to an evaluation team certain information on system
security and documentation. The purpose of the Trusted Product Evaluation
Questionnaire (product questionnaire) is to assist system developers and
vendors as a data-gathering tool, formalizing the gathering of data for
the various phases of the Trusted Product Evaluation process.

Examples in this document are not to be construed as the only
implementations that may answer the questionnaire. The examples are
suggestions of appropriate implementations. The recommendations in this
document are likewise not to be construed as supplementary requirements to
the questionnaire.

2. SCOPE

The questionnaire addresses TCSEC classes C1 through A1. In an effort to
gain a better understanding of system security, some questions in the
questionnaire address information beyond that required by the Department
of Defense Trusted Computer System Evaluation Criteria. This document is
organized by Criteria class subject area. The information provided in the
questionnaire by the vendor is to assist the evaluator in obtaining an
initial understanding of the system applying for evaluation and its
security features for the respective Criteria class. The product
questionnaire is not a statement of requirements, just an information
gathering tool. This questionnaire should give the vendor an idea of the
information required by the evaluator during the evaluation process and
prepare the vendor for the additional information needed by the evaluation
team later in the evaluation process.

The questionnaire will initially be sent to the vendor prior to the
Preliminary Technical Review (PTR). The vendor can point to appropriate
documents for the answers. The vendor need not answer questions that are
not pertinent. Some of the questions may be applicable at the later
stages of the evaluation process and thus may be deferred until the
appropriate time. The vendor will send a completed questionnaire to NCSC
prior to the PTR. The PTR team will evaluate the vendor contribution and
determine which information needs further elaboration. The PTR team will
use the questionnaire during the PTR to seek additional information used
later in the evaluation process. When an evaluation team has reached the
Design Analysis and IPAR preparation phase, it will use the questionnaire
to seek specific references in vendor documentation for further details on
the answers to these questions.

The document is intended to provide the evaluator an understanding of the
various hardware and software configurations, architecture and design,
testing, and documentation; system security features and their
applicability to security and accountability policy; Trusted Computing
Base (TCB) isolation and noncircumventability; and covert channel analysis
methods. This questionnaire may also request information on penetration
testing and specification-to-code correspondence.

This questionnaire is designed for operating systems only. It does not
address networks, subsystems, or data base management systems.

For definition and clarification of the terms used in this document,
please see the Glossary section of the Department of Defense Trusted
Computer System Evaluation Criteria (DoD 5200.28-STD) and the Glossary of
Computer Security Terms (NCSC-TG-004).

Review of this document will occur periodically or when the need arises.
Address all proposals for revision through appropriate channels to:

     National Computer Security Center
     9800 Savage Road
     Fort George G. Meade, MD 20755-6000

     Attention: Chief, Criteria and Technical Guidelines Division

QUESTIONNAIRE

1. SUBJECTS

A subject is an active entity in the system, generally in the form of a
person, process, or device, that causes information to flow among objects
or changes the system state. A subject can be viewed as a process/domain
pair whose access controls are checked prior to granting access to
objects.
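
As an illustration only (consistent with the note in the Purpose section
that examples are not requirements), a subject might be represented inside
a TCB as a record binding security attributes to the process/domain pair.
The C sketch below is hypothetical; the field names and types are
assumptions, not an implementation any system is expected to match.

    /* Illustrative sketch only: one possible in-TCB record binding
     * security attributes to a process/domain pair.  All names are
     * hypothetical, not a required implementation. */
    #include <stdio.h>

    struct subject {
        int      process_id;        /* the active process             */
        int      domain;            /* e.g., kernel or user domain    */
        char     user_name[32];     /* on whose behalf it acts        */
        int      group_id;          /* group membership               */
        int      sensitivity_level; /* MAC label: hierarchical part   */
        unsigned privileges;        /* bit mask of special privileges */
    };

    int main(void) {
        struct subject s = { 1042, 1, "jones", 20, 2, 0 };
        printf("subject %d/%d runs for %s at level %d\n",
               s.process_id, s.domain, s.user_name, s.sensitivity_level);
        return 0;
    }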

1. What are the subjects in your system?

2. When and how are the subjects created? (For example, they can be
created or activated when a user logs on or when a process is spawned.)

3. When and how are the subjects destroyed? (For example, they can be
destroyed or deactivated when a process terminates or when the user logs
off.)

4. What are the security attributes of a subject? (Examples of security
attributes are user name, group id, sensitivity level, etc.) For each
type of subject in your system (i.e., user, process, device), what
mechanisms are available to define and modify these attributes? Who can
invoke these mechanisms?

5. What other privileges can a subject have? (Examples of privileges
are: super user, system operator, system administrator, etc. Your
operating system may assign numerous other privileges to the subjects,
such as the ability to use certain devices.) For each type of subject in
your system, what mechanisms are available to define and modify these
privileges? Who can invoke these mechanisms? Provide a list of subjects
within the TCB boundary and the list of privileges for each of them.

6. When a subject is created, where do its security attributes and
privileges originate, i.e., how are the security attributes and privileges
inherited? (Questions about security attributes and privileges will be
asked later. For example, a subject may inherit a subset of the
attributes and privileges of the invoking process or the user.)

2. OBJECTS

Examples of objects in a system are directories, files, segments,
processes, devices, etc.

7. List the objects in your system that are protected by the
Discretionary Access Control (DAC) mechanisms.

8. List the objects in your system that are protected by the Mandatory
Access Control (MAC) mechanisms.

9. List the objects that are not protected by either the DAC or the MAC
mechanism. Why are they not protected by the DAC or the MAC? Describe
other mechanisms used to isolate and protect these objects.

10. List other shared resources which are not protected by the DAC or the
MAC mechanism. (Examples include print queues, interprocess
communications, etc.) Why are they not protected by the DAC or the MAC?
Describe the mechanisms that are used to isolate and protect these
resources.

11. How are the various types of objects created (e.g., directories,
files, devices)?

12. How are the various types of objects destroyed?

13. Provide a list of objects within the TCB (e.g., authentication
database, print queues).

3. HARDWARE ARCHITECTURE

If this evaluation is for a family of hardware, the following questions
(14-24) should be answered for each member of the hardware family. You
may choose to answer each question for each member of the family, or
answer each question for a baseline family member and point out the
differences for each of the remaining family members.

14. Provide a high-level block diagram of the system. The diagram should
depict the various Central Processor Units (CPUs), memory controllers,
memory, I/O processors, I/O controllers, I/O devices (e.g., printers,
displays, disks, tapes, communications lines) and the relationships (both
control flow and data flow) among them.

15. Provide a high-level block diagram of a CPU. The diagram should
explain the relationships among the following elements: Instruction
Processor, Microsequencer, Microengine, Memory, Cache, Memory Mapping or
Address Translation Unit, I/O devices, and interfaces.

16. Provide a list of privileged instructions for your hardware. Provide
a brief description of each privileged instruction.

17. For each privileged instruction, provide the privileges required to
execute the instruction. (Examples of privileges include the machine
state, the executing ring/segment, the physical memory location of the
instruction, etc.)

18. How does process address translation (logical/virtual to physical)
work in your system?
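
By way of example only, one common answer to the question above is a
page-table walk: the high bits of the virtual address select a page-table
entry holding a physical frame number, and the low bits pass through as
the offset. The C sketch below assumes a single-level table and 4 KiB
pages; both are illustrative assumptions, not a description of any
particular hardware.

    /* Hedged sketch of a single-level page-table walk; the 4 KiB page
     * size and table layout are assumptions, not any particular CPU. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_BITS 12u          /* 4 KiB pages (assumption) */
    #define NPAGES    16u          /* toy address-space size   */

    static uint32_t page_table[NPAGES];  /* indexed by virtual page
                                            number, holds frame number */

    static uint32_t translate(uint32_t vaddr) {
        uint32_t vpn    = vaddr >> PAGE_BITS;               /* high bits */
        uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1u); /* low bits  */
        return (page_table[vpn % NPAGES] << PAGE_BITS) | offset;
    }

    int main(void) {
        page_table[3] = 0x2A;   /* map virtual page 3 to frame 0x2A */
        printf("vaddr 0x%05x -> paddr 0x%05x\n",
               (unsigned)0x3123u, (unsigned)translate(0x3123u));
        return 0;
    }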

19. How does I/O processing address translation work for the Direct
Memory Access (DMA) controllers/devices? Identify whether the address
translation is done through the memory address translation unit or whether
the logic is part of the controller. How are the address translation maps
and/or tables initialized?

20. Describe the hardware protection mechanisms provided by the system.

21. Describe the isolation mechanisms for process memory. Two possible
techniques are rings and segments.

22. Provide a description of the process address space. When and how is
it formed?

23. What are the machine/processor states supported by the system? How
are the states changed? What data structures are saved as part of the
processor state?

24. List all the interrupts and traps (hardware and software). How are
they serviced by the system?

4. SOFTWARE

The TCB software consists of the elements that are involved in enforcing
the system security policy. Examples of the TCB elements include: kernel,
interrupt handlers, process manager, I/O handlers, I/O manager,
user/process interface, hardware diagnostics, hardware exercisers, and
command languages/interfaces (for system generation, operator,
administrator, users, etc.). The security kernel consists of the
hardware, firmware, and software elements of the TCB that are involved in
implementing the reference monitor concept, i.e., the ones that mediate
all access to objects by subjects.
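
As a hedged sketch of the reference monitor concept (not of any particular
TCB), the essential structure is a single mediation routine through which
every subject-to-object access request must pass. The rules shown below
are placeholders; a real TCB would consult its full DAC and MAC policy at
this choke point, and all names are hypothetical.

    /* Illustrative sketch of the reference monitor concept only: all
     * access requests funnel through one mediation routine that
     * consults the access checks in turn. */
    #include <stdbool.h>
    #include <stdio.h>

    struct subject { int uid; int level; };
    struct object  { int owner; int level; };

    static bool dac_allows(const struct subject *s, const struct object *o) {
        return s->uid == o->owner;      /* placeholder DAC rule  */
    }
    static bool mac_allows(const struct subject *s, const struct object *o) {
        return s->level >= o->level;    /* placeholder read rule */
    }

    /* the one choke point: no access path may bypass this routine */
    static bool mediate(const struct subject *s, const struct object *o) {
        return dac_allows(s, o) && mac_allows(s, o);
    }

    int main(void) {
        struct subject s = { 100, 2 };
        struct object  o = { 100, 1 };
        printf("access %s\n", mediate(&s, &o) ? "granted" : "denied");
        return 0;
    }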

25. Provide a description and architecture of the Trusted Computing Base
(TCB) at the element level.

26. Describe the interfaces (control and data flow) among the TCB
elements.

27. Describe the interface between the kernel and the rest of the TCB
elements.

28. Describe the interface between the TCB and user processes that are
outside the TCB.

29. Describe the hardware ring/memory segment/physical location where
each TCB element resides.

30. Describe the hardware ring/memory segment/physical location where the
user processes reside.

31. List the software mechanisms that are used to isolate and protect the
TCB and the user processes. Provide a brief description of each
mechanism.

32. List all the privileges a process can have. Include the privileges
based on the process or user profile, process or user name, or process or
user identification.

33. How is a process created? How is a process destroyed?

34. How are a process's privileges determined?

35. Describe the various elements of the process address space and their
location in terms of ring/segment/physical memory.

36. Describe the process states. (Examples of process states are active,
ready for execution, suspended, swapped out, etc.)

37. Describe how these states are manipulated by the TCB.

38. Describe the data structures for a process context. Describe both
the hardware and software mechanisms used to manipulate/switch the process
context.

39. Describe process scheduling.

40. Describe all interprocess communication mechanisms.

41. Describe the file management system. This should include the
directory hierarchy, if any, and directory and file attributes. Also
identify all system directories and files, and their access attributes.

42. Describe how the devices and their queues are managed. Examples of
devices include tape drives, non-file-system disks, printers, etc.

43. How are batch jobs and their queues managed?

44. What software engineering tools and techniques were used for the TCB
design and implementation?

45. How is a process sensitivity level determined?

46. How was the modularity requirement achieved and implemented?

47. For each TCB element, identify the protection-critical portions of
the code. Describe the protection-critical functions performed by the
code.

48. For each TCB element, identify the non-protection-critical portions
of the code. Explain why the code is part of the TCB.

49. How was the data abstraction and information hiding requirement
achieved and implemented?

50. Is the TCB layered? If yes, how many layers are in the TCB? Provide
a brief description of the modules and functions in each layer. How are
the lower layers protected from the higher layers?

5. IDENTIFICATION & AUTHENTICATION (I&A)

51. Does the system require the users to provide identification at login?
If yes, what information is requested by the system?

52. Is there any additional device or physical security required for user
I&A (e.g., terminal ID, pass key, smart card, etc.)?

53. Is each user uniquely identified?

54. Does the system authenticate this identity at the time of login? If
yes, what information is requested by the system? How does the system use
this information to authenticate the identity?

55. Describe the algorithms used in user authentication. Where in the
system are the algorithms and data for authentication (e.g., user/password
data base) stored?
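
One widely used answer, shown here only as an illustrative sketch, is to
store a one-way transform of each password rather than the password
itself, and to compare transforms at login. The toy hash below stands in
for a vetted one-way function such as crypt(3); every name in the sketch
is hypothetical.

    /* Sketch only: authentication that stores a one-way transform of
     * the password rather than the password itself.  The toy hash is
     * for illustration; a real TCB would use a vetted one-way function
     * and protect the stored data base. */
    #include <stdio.h>
    #include <string.h>

    static unsigned long toy_hash(const char *s) {
        unsigned long h = 5381;
        while (*s) h = h * 33 + (unsigned char)*s++;
        return h;
    }

    struct account { char user[32]; unsigned long pw_hash; };

    static int authenticate(const struct account *a,
                            const char *user, const char *password) {
        return strcmp(a->user, user) == 0 &&
               a->pw_hash == toy_hash(password);
    }

    int main(void) {
        struct account a = { "jones", 0 };
        a.pw_hash = toy_hash("correct horse");   /* set at enrollment */
        printf("%s\n", authenticate(&a, "jones", "correct horse")
                       ? "authenticated" : "rejected");
        return 0;
    }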

56. How are the authentication algorithms and data protected?

57. Does the I&A process associate privileges with the user? If so, what
and how?

58. Does the I&A process associate a sensitivity level with the user? If
so, how?

6. OBJECT REUSE

59. How are the storage resources cleared? Examples include writing
predefined patterns, writing random patterns, preventing reading before
writing, etc. When are the storage resources cleared: prior to allocation
or after deallocation and/or release? Describe the TCB hardware,
software, and procedures used in clearing these resources. Please answer
this question for each type of storage resource. (Examples of storage
resources include memory pages, cache, disk sectors, magnetic tapes,
removable disk media, terminals, etc.)
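
For illustration, one of the approaches named above (clearing prior to
allocation) might look like the following C sketch, which scrubs a storage
frame before handing it out. The frame pool and sizes are assumptions for
the example only.

    /* Hedged sketch of one object-reuse approach: clear a storage
     * frame at allocation time so no prior contents are readable. */
    #include <stdio.h>
    #include <string.h>

    #define FRAME_SIZE 4096

    static unsigned char frames[4][FRAME_SIZE];   /* toy frame pool */

    static void *allocate_frame(int i) {
        memset(frames[i], 0, FRAME_SIZE);  /* scrub before handing out */
        return frames[i];
    }

    int main(void) {
        memcpy(frames[2], "old secret", 10);  /* residue from prior use */
        unsigned char *p = allocate_frame(2);
        printf("first byte after allocation: %d\n", p[0]);  /* prints 0 */
        return 0;
    }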

60. Is it possible to read data that have been deleted? For example,
what happens when a process attempts to read past the end-of-file (EOF)
mark? In this case, is it possible to read old data by going past the
EOF?

7. DISCRETIONARY ACCESS CONTROL (DAC) POLICY

61. What mechanisms are used to provide discretionary access controls?
(Examples of mechanisms are: access control lists, protection bits,
capabilities, etc.)
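
As an example of the first mechanism named above, an access control list
can carry explicit grant and deny entries for individual users and for
groups, which also bears on questions 62-65 below. The C sketch assumes a
first-match rule and a default of no access; both are illustrative
choices, not requirements.

    /* Illustration of one DAC mechanism: an access control list with
     * explicit grant and deny entries for users and groups.  The
     * entry layout and first-match rule are assumptions. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct acl_entry {
        const char *who;
        bool is_group;
        bool deny;
        unsigned modes;
    };

    #define MODE_READ  1u
    #define MODE_WRITE 2u

    static bool acl_check(const struct acl_entry *acl, int n,
                          const char *user, const char *group,
                          unsigned mode) {
        for (int i = 0; i < n; i++) {    /* first matching entry wins */
            const char *name = acl[i].is_group ? group : user;
            if (strcmp(acl[i].who, name) == 0 && (acl[i].modes & mode))
                return !acl[i].deny;
        }
        return false;                     /* default: no access */
    }

    int main(void) {
        struct acl_entry acl[] = {
            { "smith", false, true,  MODE_READ | MODE_WRITE }, /* deny  */
            { "staff", true,  false, MODE_READ },              /* grant */
        };
        printf("smith read: %s\n",
               acl_check(acl, 2, "smith", "staff", MODE_READ)
               ? "yes" : "no");
        return 0;
    }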

62. Can access be granted to users on an individual user basis? If so,
how?

63. Can access be denied to users on an individual user basis, i.e., can
individual users be excluded? If so, how?

64. Can access be granted to groups of individuals? If so, how?

65. Can access be denied to groups of individuals? If so, how?

66. How is a group defined? Who has the ability to create or delete
groups? Who has the ability to add or delete users from a group?

67. What are the initial access permissions when an object is created?
Can the initial access permissions be changed? If so, by whom and how?
(User/owner, system administrator, others.)

68. Can different initial access permissions be specified for different
users, or is this a system-wide setting? If the former, by whom and how?

69. Who can grant the access permissions to an object after the object is
created? (Examples include the creator, current owner, system
administrator, etc.) How is the permission granted?

70. Can the ability to grant permissions be passed to another user? If
so, by whom and how? How can the previous owner of the privilege still
retain it?

71. How can access be revoked on an individual user basis?

72. How can access be revoked on a group basis?

73. Are any objects that can be accessed by other users excluded from the
DAC policy (e.g., IPC files, process signaling/synchronization flags)?

74. For each TCB object identified in question 13, describe the DAC
mechanism.

75. List the access modes supported by the system. Examples of access
modes are: read, write, delete, owner, execute, append, etc. Briefly
describe the meaning of each access mode for each class of object.

76. For questions 62-72, how can the access modes be explicitly defined?

8. LABELS

77. How many hierarchical sensitivity classifications (such as
unclassified, confidential, secret, top secret) does your system provide
for? What mechanisms are available to define the internal/storage and
external/print formats? What mechanisms are available to change them?
Who can invoke these mechanisms?

78. How many nonhierarchical sensitivity categories (such as FOUO) does
your system provide for? What mechanisms are available to define the
internal/storage and external/print formats? What mechanisms are
available to change them? Who can invoke these mechanisms?

79. What is the internal TCB storage format of the sensitivity label?
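
Purely as an illustration of the kind of answer sought, one common
internal format packs a hierarchical level together with a bit mask of
nonhierarchical categories into a fixed-size record. The layout below is
an assumption for the example.

    /* One possible internal storage format, shown only as an example:
     * a hierarchical classification plus a bit mask of nonhierarchical
     * categories packed into a fixed-size record. */
    #include <stdint.h>
    #include <stdio.h>

    struct sensitivity_label {
        uint8_t  level;       /* 0=U, 1=C, 2=S, 3=TS (hypothetical)   */
        uint32_t categories;  /* one bit per nonhierarchical category */
    };

    int main(void) {
        struct sensitivity_label l = { 2, (1u << 0) | (1u << 3) };
        printf("level %u, categories 0x%08x, %u bytes\n",
               (unsigned)l.level, (unsigned)l.categories,
               (unsigned)sizeof l);
        return 0;
    }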

80. For each type of subject, where is the subject sensitivity label
stored?

81. For each type of object, where is the object sensitivity label
stored?

82. List the subjects and objects that are labeled and not labeled. Why
are they labeled or not labeled? How are these subjects and objects
controlled?

83. How is imported (brought-in) data labeled? Is a human being involved
in the labeling? If so, what is the role of the person involved? Does
this labeling require special privileges? What are those privileges?

84. Who can change the labels on a subject? How?

85. Who can change the labels on an object? How?

86. How are the labels associated with objects communicated outside the
TCB?

87. How does the TCB acknowledge a change in the sensitivity level
associated with an interactive user? Is the user notification posted on
the user terminal? How immediate is this change?

88. How does a user query the system TCB for his or her current
sensitivity label? What part of the sensitivity label is output? Where
is this output posted?

89. How does the system designate each device as single-level or
multilevel? List the ways this designation can be changed. List the
users who can invoke these mechanisms/ways.

90. How does the TCB designate the sensitivity level of a single-level
device? List the ways this designation can be changed. List the users
who can invoke these mechanisms.

91. How does the TCB designate the minimum and maximum sensitivity levels
of a device? List the ways these designations can be changed. List the
users who can invoke these mechanisms.

92. How does the TCB export the sensitivity label associated with an
object being exported over a multilevel device? What is the format for
the exported label? How does the TCB ensure that the sensitivity label is
properly associated with the object?

93. What mechanisms are available to specify the human-readable print
label associated with a sensitivity label? Who can invoke these
mechanisms?

94. Are the beginning and end of each hardcopy output marked with the
human-readable print label representing the sensitivity level of the
output? In other words, does each hardcopy output have banner pages?
What happens if a banner page output is longer and/or wider than a
physical page?

95. Are the top and bottom of each hardcopy output page marked with the
human-readable print label representing the sensitivity level of the
output? What happens if the print label is wider and/or longer than the
space available at the top and/or the bottom?

96. How does the TCB mark the top and bottom of each page of nontextual
output such as graphics, maps, and images?

97. How can the markings listed in questions 94-96 be overridden? Who
can override the markings?

98. How can an operator distinguish the TCB-generated banner pages from
user output?

99. What is the sensitivity label for each TCB object listed in question
13?

100. Can a minimum sensitivity level be specified for each physical
device? If so, how?

101. Can a maximum sensitivity level be specified for each physical
device? If so, how?

102. List the circumstances under which the TCB allows input or output of
data that fall outside a device's sensitivity range (i.e., minimum,
maximum).

9. MANDATORY ACCESS CONTROL (MAC)

103. Describe the MAC policy for the possible access modes such as read,
write, append, and delete.
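
As one example of such a policy (not a required one), the Bell-LaPadula
rules defined in the Glossary can be stated directly in terms of the
"dominates" relation: read requires the subject's label to dominate the
object's (simple security property), and write requires the reverse
(*-property). The C sketch below encodes levels as integers and
categories as a bit mask; both encodings are assumptions.

    /* Hedged sketch of one MAC policy of the kind asked about: the
     * Bell-LaPadula rules expressed with the "dominates" relation
     * (level comparison plus category subset). */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct label { int level; uint32_t cats; };

    static bool dominates(struct label a, struct label b) {
        return a.level >= b.level && (b.cats & ~a.cats) == 0;
    }

    /* simple security property: read only if subject dominates object */
    static bool may_read(struct label subj, struct label obj) {
        return dominates(subj, obj);
    }
    /* *-property: write only if object dominates subject */
    static bool may_write(struct label subj, struct label obj) {
        return dominates(obj, subj);
    }

    int main(void) {
        struct label secret = { 2, 1u };  /* secret, category A       */
        struct label conf   = { 1, 1u };  /* confidential, category A */
        printf("read down: %d, write down: %d\n",
               may_read(secret, conf), may_write(secret, conf));
        return 0;
    }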

104. Does the system use sensitivity labels to enforce the MAC? If not,
what information is used to make the MAC decisions?

105. List the subjects, objects, and circumstances under which the MAC
policy is not enforced. Why?

106. In what sequence does the system check the access mechanisms, such
as: a. privileges that bypass DAC and MAC, b. DAC, c. MAC, d. other access
mechanisms in lieu of DAC and/or MAC?

107. Does the TCB support system-low and system-high sensitivity levels?
If yes, how can they be designated and changed? Who can invoke the
functions to designate and change them? How are these levels used by the
system in various labeling functions and MAC decisions?

10. INTEGRITY

108. How many hierarchical integrity categories does your system provide
for? What mechanisms are available to define the internal/storage and
external/print formats? Who can invoke these mechanisms?

109. How many nonhierarchical integrity compartments does your system
provide for? What mechanisms are available to define the internal/storage
and external/print formats? Who can invoke these mechanisms?

110. What is the internal TCB storage format of the integrity label?

111. For each type of subject, where is the subject integrity label
stored?

112. For each type of object, where is the object integrity label stored?

113. List the subjects and objects that do not have integrity labels.
Why are they not labeled?

114. How are the data that are imported (brought in) labeled with an
integrity label? Is a human being involved in the labeling? If so, who
is it? Does the user labeling require special privileges? What are those
privileges?

115. Who can change the integrity labels on a subject? How?

116. Who can change the integrity labels on an object? How?

117. Describe the integrity policy for various access modes such as read
and write. Provide a brief description of the formal policy model.
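
For illustration, one well-known formal integrity policy is Biba's strict
integrity: no read down and no write up, the dual of the Bell-LaPadula
confidentiality rules. The sketch below shows only the rule structure; it
is not a claim about any particular system's policy.

    /* Example of one well-known integrity policy (Biba strict
     * integrity), offered only as an illustration of the kind of
     * rules a formal policy model might state. */
    #include <stdbool.h>
    #include <stdio.h>

    /* no read down: read only objects at or above own integrity */
    static bool may_read(int subj_int, int obj_int) {
        return obj_int >= subj_int;
    }
    /* no write up: write only objects at or below own integrity */
    static bool may_write(int subj_int, int obj_int) {
        return subj_int >= obj_int;
    }

    int main(void) {
        printf("high-integrity subject reads low-integrity object: %d\n",
               may_read(2, 1));   /* 0: forbidden, could be corrupted */
        printf("high-integrity subject writes low-integrity object: %d\n",
               may_write(2, 1));  /* 1: permitted */
        return 0;
    }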

118. Does the system use the integrity labels to enforce the integrity
policy? If not, what information is used to enforce the integrity policy?

119. List the subjects, objects, and circumstances under which the
integrity policy is not enforced. Why?

11. AUDIT

120. Provide a brief description (preferably in block diagram form) of
the audit data flow in terms of how the data are created, transmitted,
stored, and viewed for analysis. How are the audit logs protected?

121. How can the audit log be read? Who can invoke these mechanisms?

122. How can the audit log be written or appended? Who can invoke these
mechanisms?

123. How can the audit log be deleted? Who can invoke these mechanisms?

124. Provide a list of auditable events. Are the following events
auditable: attempted logins, logouts, creation of subjects, deletion of
subjects, assignment of privileges to subjects, change of subject
privileges, use of privileges by subjects, creation of objects, deletion
of objects, initial access to objects (in other words, introduction of the
object into the user address space, e.g., file open), accesses that
exploit covert storage channels, change in the device designation of
single-level or multilevel, change in device level, change in device
minimum or maximum level, override of banner page or page top and bottom
markings, and assumption of the role of security administrator?

125. Which actions by the privileged users are auditable? Which are not?
Examples of trusted users are system operator, account administrator,
system security officer/administrator, auditor, system programmer, etc.

126. What data are recorded for each audit event? Are the following data
recorded for each event: date, time, user, user sensitivity level, object,
object sensitivity level, object DAC information (e.g., ACL), type of
event, invoked or not invoked, why not invoked, success or failure in
execution, terminal identification, etc.?
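
To illustrate the kind of record implied by the list above, the C sketch
below gathers those fields into one structure. The field types and sizes
are assumptions for the example; an actual audit record format would be
system specific.

    /* Sketch of an audit record carrying the fields listed above; the
     * layout and types are illustrative assumptions only. */
    #include <stdio.h>
    #include <time.h>

    struct audit_record {
        time_t when;            /* date and time                   */
        char   user[32];        /* requesting user                 */
        int    user_level;      /* user sensitivity level          */
        char   object[64];      /* object name                     */
        int    object_level;    /* object sensitivity level        */
        int    event_type;      /* e.g., open, delete, login       */
        int    success;         /* success or failure in execution */
        char   terminal[16];    /* terminal identification         */
    };

    int main(void) {
        struct audit_record r =
            { 0, "jones", 2, "/etc/motd", 0, 1, 1, "tty03" };
        r.when = time(NULL);
        printf("%s open %s: %s\n", r.user, r.object,
               r.success ? "success" : "failure");
        return 0;
    }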

127. Under what circumstances can a password become part of the audit
record?

128. What mechanisms are available to designate and change the activities
being audited? Who can invoke these mechanisms?

129. What mechanisms are available for selective auditing (i.e.,
selection of events, subjects, objects, etc., to be audited)? What
parameters or combinations of parameters can be specified for the
selective auditing? Examples of parameters are: an individual subject or
group of subjects, individual objects, subjects within a sensitivity
range, objects within a sensitivity range, event type, etc. Who can
invoke these mechanisms?

130. When do changes to the audit parameters take effect (e.g.,
immediately for all processes, or only for new processes)?

131. What tools are available to output raw or processed (i.e., analyzed
and reduced) audit information? Who can invoke these tools? What do the
tools do in terms of audit data reduction? What are the internal formats
of audit records? What are the formats of the reports/outputs generated
by these tools?

132. Are the audit reduction tools part of the TCB? If not, is there a
trusted mechanism to view/output the audit log?

133. Does the system produce multiple audit logs? If yes, what tools,
techniques, and methodologies are available to correlate these logs?

134. Who (e.g., operator, system administrator, or other trusted user) is
notified when the audit log gets full? What options are available to
handle the situation?

135. What other action does the TCB take when the audit log becomes full?
Examples of the TCB options are: halt the system, do not perform auditable
events, overwrite the oldest audit log data, etc. In the worst case, how
much audit data can be lost? Describe the worst-case scenario. When does
it occur?

136. What happens to the audit data in the memory buffers when the system
goes down? Are the data recovered as part of the system recovery? In the
worst case, how much data can be lost? Describe the worst-case scenario.
When does it occur?

137. How does the TCB designate and change the occurrence or accumulation
of events that require real-time notification? Who can invoke these
mechanisms? Who gets the real-time notification? What actions/options
are available to the individual being notified? What does the TCB do
about the event and the process that caused the alert?

12. MODELING AND ANALYSIS

138. Provide a copy of the Verification Plan, a brief description of its
contents, or an annotated outline. Provide a schedule for completion of
the Verification Plan.

139. What tools, techniques, and methodologies are used to represent the
formal model of the system security policy? What policies are represented
in the formal model? Examples include: MAC, DAC, privileges, other
protection mechanisms, object reuse, etc.

140. What tools, techniques, and methodologies are used to verify the
model against its axioms through formal means?

141. What tools, techniques, and methodologies are used to represent the
Descriptive Top Level Specification (DTLS)? What portions of the TCB are
represented by the DTLS?

142. What tools, techniques, and methodologies are used to identify,
analyze, calculate, and reduce the bandwidths of data flows that violate
the system security policy? How are the occurrences of these flow
violations audited?

143. What tools, techniques, and methodologies are used to show that the
DTLS is consistent with the formal security policy model?

144. What tools, techniques, and methodologies are used to represent the
Formal Top Level Specification (FTLS)? What portions of the TCB are
represented by the FTLS?

145. What tools, techniques, and methodologies are used to verify or show
that the FTLS is consistent with the formal security policy model?

146. What tools, techniques, and methodologies are used to identify the
implemented code modules that correspond to the FTLS?

147. What tools, techniques, and methodologies are used to show that the
code is correctly implemented vis-a-vis the FTLS?

13. TESTING

148. What routines are available to test the correct operation of the
system hardware and firmware? What elements of the system hardware are
tested through these routines? What elements of the system firmware are
tested through these routines? What elements of the system hardware and
firmware are not tested through these routines?

149. How are the system hardware and firmware tested? Does the testing
include boundary and anomalous conditions? Is the emphasis on diagnosing
and pinpointing faults, or is it on ensuring the correct operation of the
system hardware and firmware? The latter may require more of the same
testing or different kinds of testing.

150. How are these routines invoked? Who can invoke these routines? Do
they run under the control of the operating system, or do they run in
stand-alone mode?

151. When can these routines be run? When should these routines be run?
If they run automatically, when do they run? Examples include powerup,
booting, rebooting, etc.

152. Describe the software development testing methodology. In the
methodology, include the various testing steps such as unit, module,
integration, subsystem, and system testing. For each step, provide a
description of the test coverage criteria and the test case development
methodology.

153. Provide a copy of the security test plan, a brief description of its
contents, or an annotated outline. Does the test plan include the
following information: system configuration for testing, procedures to
generate the TCB, procedures to bring up the system, testing schedule,
test procedures, test cases, and expected test results? Provide a
schedule for development of the security test plan.

154. How thorough is the security testing? Do the test cases include
nominal, boundary, and anomalous values for each input? What about
combinations of inputs? Describe the test coverage criteria.

155. How are the test cases developed? Are they based on the concept of
functional testing, structural testing, or a combination of the two?

156. What tools and techniques (automated, manual, or a combination of
the two) will be used to do the functional and/or structural analysis in
order to develop a thorough set of test cases? Indicate how you plan to
use the FTLS and DTLS in this analysis. If you do not plan to use them,
how do you plan to show consistency among the FTLS, the DTLS, and the
code?

157. How do you plan to develop the scenarios for penetration testing?

158. How do you plan to test the bandwidths of flow violation channels?

159. How do you plan to ascertain that only a few errors remain?

14. OTHER ASSURANCES

160. Describe the Configuration Management (CM) system in place in terms
of organizational responsibilities, procedures, and tools and techniques
(automated, manual, or a combination of the two). Describe the version
control or other philosophy used to ensure that the elements represent a
consistent system, i.e., that the object code represents the source code,
which in turn represents the FTLS and DTLS, etc. If the CM system is
different for some of the elements listed under question 25, answer this
question for each of the elements.

161. When was CM instituted? Provide the approximate date as well as the
system life-cycle phase (e.g., design, development, testing).

162. List the TCB elements that are under Configuration Management. List
the TCB elements that are not under CM. Examples of TCB elements are:
hardware elements, firmware elements, formal security policy model, FTLS,
DTLS, design data and documentation, source code, object code, software
engineering environment, test plans, Security Features User's Guide,
Trusted Facility Manual, etc.

163. Describe the protection mechanisms in place to safeguard the CM
elements.

164. When (e.g., before user authentication) and how (e.g., by typing a
specific control character sequence) can the trusted path be invoked by
the user? What TCB elements are involved in establishing the trusted
path?

165. List separately the functions that can be performed by each of the
trusted users, such as the operator, security administrator, accounts
administrator, auditor, systems programmer, etc. For each of these
persons/roles, list the system data bases that can be accessed and their
access modes. For each of these roles, also list the privileges provided
to them.

166. How does the TCB recognize that a user has assumed one of the
above-mentioned trusted roles? Which of the above-mentioned functions can
be performed without the TCB recognizing this role?

167. When and how does the TCB invoke the trusted path? What TCB
elements are involved in establishing the trusted path?

168. How does the TCB ensure that the trusted path is unspoofable?

169. How does the system recovery work? What system resources (e.g.,
memory, disk blocks, files) are protected prior to and during the system
recovery? How are they protected? What resources are not protected?

170. Does the system have a degraded mode of operation? What can cause
this to occur? How long can the system keep running in this mode? How
does an operator get the system back to full operation? What
security-related services are provided in the degraded mode? What
security-related services are not provided?

171. Describe the tools, techniques, and procedures used to ensure the
integrity of the TCB elements (hardware, firmware, software, documents,
etc.) supplied to the customers. Examples include trusted courier,
electronic seals, physical seals, etc.

15. OTHER DOCUMENTATION

172. Describe the methodology used in the design of the system. Provide
a list of documents that capture the system design. For each document,
provide a copy, a brief description of its contents, or an annotated
outline. Provide a schedule for development of the design documents.

173. Provide a copy of the Security Features User's Guide (SFUG), a brief
description of its contents, or an annotated outline. Does the document
describe the protection mechanisms available to the users? Does it
include examples of how to use the protection mechanisms in conjunction
with one another to meet the user security objectives? Provide a schedule
for development of the SFUG.

174. Provide a copy of the Trusted Facility Manual (TFM), a brief
description of its contents, or an annotated outline. Provide a schedule
for development of the TFM.

175. Does the TFM contain procedures to configure the secure system?
Does it list the devices and hardware elements that are part of the
evaluated configuration? Does it contain procedures for configuring each
of the devices, for connecting them, and for configuring the entire
system? Does it list the devices that are not part of the evaluated
configuration? Does it list the procedures for securely configuring them
out and for disconnecting them?

176. Does the TFM contain procedures to generate the TCB from source
code? For each system parameter or input, does the TFM list the valid
values for a secure TCB generation?

177. Does the TFM list the functions, privileges, and data bases that are
to be controlled? Does it list how these are controlled? Does it list,
as warnings, the consequences of granting access to them?

178. Does the TFM provide procedures for maintaining the audit log? Does
it describe how the audit log can be turned on, turned off, combined, and
backed up? Does it describe how to detect that the audit log is getting
full, or is full, and what actions to take in order to minimize the loss
of audit data?

179. Does the TFM contain the structure of the audit log file and the
format of the audit records? Does it describe how the audit records can
be viewed? Does it describe the capabilities of the audit reduction tool,
how to invoke these capabilities, and the format of the tool output?

180. Does the TFM contain the procedures and warnings for secure
operation of the computing facility? Does it address the physical,
personnel, and administrative aspects of security in order to ensure the
protection of computing hardware, firmware, software, and privileged
devices such as the operator terminals? Does it address the protection of
hard-copy outputs?

181. Does the TFM provide a list of trusted users and processes? For
each trusted user or process, does it list the functions, privileges, and
data bases to be accessed? Examples of trusted users are system operator,
security administrator, accounts administrator, auditor, etc. Examples of
trusted processes are device queue manipulation, user profile editor, etc.
Examples of functions are creating and deleting users, changing user
security profiles, setting up defaults for discretionary and mandatory
access controls, selecting auditing events in terms of functions,
subjects, objects, sensitivity levels, and/or a combination of them, etc.
Examples of data bases are user security profiles, the authentication data
base, etc.

182. Does the TFM include a list of the TCB modules that make up the
security kernel?

183. Does the TFM contain the procedures for securely
starting/booting/initializing the system?

184. Does the TFM contain the procedures for securely restarting/resuming
the system after a lapse in system operation?

185. Does the TFM contain the procedures for securely recovering the
system after a system failure?
GLOSSARY
|
|||
|
|
|||
|
Access
|
|||
|
A specific type of interaction between a subject and an
|
|||
|
object that results in the flow of information from one to the other.
|
|||
|
|
|||
|
Access List
|
|||
|
A list of users, programs, and/or processes and the
|
|||
|
specifications of access categories to which each is assigned.
|
|||
|
|
|||
|
Administrative User
|
|||
|
A user assigned to supervise all or a portion of an ADP
|
|||
|
system.
|
|||
|
|
|||
|
Audit
|
|||
|
To conduct the independent review and examination of
|
|||
|
system records and activities.
|
|||
|
|
|||
|
Audit Trail
|
|||
|
A chronological record of system activities that is
|
|||
|
sufficient to enable the reconstruction, reviewing, and examination of the
|
|||
|
sequence of environments and activities surrounding or leading to an
|
|||
|
operation, a procedure, or an event in a transaction from its inception to
|
|||
|
final results.
|
|||
|
|
|||
|
Auditor
|
|||
|
An authorized individual, or role, with administrative
|
|||
|
duties, which include selecting the events to be audited on the system,
|
|||
|
setting up the audit flags that enable the recording of those events, and
|
|||
|
analyzing the trail of audit events.
|
|||
|
|
|||
|
Authenticate
|
|||
|
(1) To verify the identity of a user, device, or other
|
|||
|
entity in a computer system, often as a prerequisite to allowing access to
|
|||
|
resources in a system.
|
|||
|
|
|||
|
(2) To verify the integrity of data that have been stored,
|
|||
|
transmitted, or otherwise exposed to possible unauthorized modification.
|
|||
|
|
|||
|
Authenticated User
|
|||
|
A user who has accessed an ADP system with a valid
|
|||
|
identifier and authentication combination.
|
|||
|
|
|||
|
Authorization
|
|||
|
The granting of access rights to a user, program, or
|
|||
|
process.
|
|||
|
|
|||
|
Bandwidth
|
|||
|
A characteristic of a communication channel that is the
|
|||
|
amount of information that can be passed through it in a given amount of
|
|||
|
time, usually expressed in bits per second.
|
|||
|
|
|||
|
Bell-LaPadula Model
|
|||
|
A formal state transition model of computer security
|
|||
|
policy that describes a set of access control rules. In this formal
|
|||
|
model, the entities in a computer system are divided into abstract sets of
|
|||
|
subjects and objects. The notion of a secure state is defined, and it is
|
|||
|
proven that each state transition preserves security by moving from secure
|
|||
|
state to secure state, thereby inductively proving that the system is
|
|||
|
secure. A system state is defined to be "secure" if the only permitted
|
|||
|
access modes of subjects to objects are in accordance with a specific
|
|||
|
security policy. In order to determine whether or not a specific access
|
|||
|
mode is allowed, the clearance of a subject is compared to the
|
|||
|
classification of the object, and a determination is made as to whether
|
|||
|
the subject is authorized for the specific access mode. The
|
|||
|
clearance/classification scheme is expressed in terms of a lattice. See
|
|||
|
Star Property (*-property) and Simple Security Property.
|
|||
|
|
|||
|
Channel
     An information transfer path within a system. May also refer to
the mechanism by which the path is effected.

Covert Channel
     A communication channel that allows two cooperating processes to
transfer information in a manner that violates the system's security
policy.

Covert Storage Channel
     A covert channel that involves the direct or indirect writing of
a storage location by one process and the direct or indirect reading
of the storage location by another process. Covert storage channels
typically involve a finite resource (e.g., sectors on a disk) that is
shared by two subjects at different security levels.

Covert Timing Channel
     A covert channel in which one process signals information to
another by modulating its own use of system resources (e.g., CPU time)
in such a way that this manipulation affects the real response time
observed by the second process.

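The following toy Python sketch, offered only as an illustration,
shows how one bit per observation interval might be signaled through
response time. The delays, the decision threshold, and the use of
sleep() in place of genuine CPU consumption are simplifying
assumptions of the sketch.

     # Toy sketch only; real covert timing channels modulate genuine
     # resource usage (e.g., CPU time) rather than calling sleep().
     import time

     def sender(bit):
         # Hold the shared resource longer to signal a 1 than a 0.
         time.sleep(0.2 if bit else 0.05)

     def receiver(wait_for_resource):
         start = time.monotonic()
         wait_for_resource()
         return 1 if time.monotonic() - start > 0.1 else 0

     message = [1, 0, 1, 1]
     received = [receiver(lambda b=b: sender(b)) for b in message]
     assert received == message   # one bit leaked per interval
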
Coverage Analysis
     Qualitative or quantitative assessment of the extent to which the
test conditions and data show compliance with required properties
(e.g., security model and TCB primitive properties). See: Test
Condition, Test Data, Security Policy Model.

Data
     Information with a specific physical representation.

Data Integrity
     The property that data meet an a priori expectation of quality.

Degauss
     To reduce magnetic flux density to zero by applying a reverse
magnetizing field.

Descriptive Top-Level Specification (DTLS)
     A top-level specification that is written in a natural language
(e.g., English), an informal program design notation, or a combination
of the two.

Discretionary Access Control (DAC)
     A means of restricting access to objects based on the identity
and need-to-know of the user, process, and/or groups to which they
belong. The controls are discretionary in the sense that a subject
with a certain access permission is capable of passing that permission
(perhaps indirectly) on to any other subject.

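A toy Python sketch, included only for illustration, shows the sense
in which such controls are discretionary: any subject holding a
permission may pass it on. The object name, the subjects, and the
access control list structure are assumptions of the sketch.

     # Toy sketch only; a simple access control list (ACL) per object.
     acl = {"report.txt": {"alice": {"read", "write"}}}

     def grant(granter, grantee, obj, permission):
         # Discretionary: any subject holding a permission may pass it on.
         if permission in acl.get(obj, {}).get(granter, set()):
             acl[obj].setdefault(grantee, set()).add(permission)
             return True
         return False

     assert grant("alice", "bob", "report.txt", "read")       # passed on
     assert grant("bob", "carol", "report.txt", "read")       # and further
     assert not grant("bob", "carol", "report.txt", "write")  # never held
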
Dominate
     Security level S1 is said to dominate security level S2 if the
hierarchical classification of S1 is greater than or equal to that of
S2 and the nonhierarchical categories of S1 include all those of S2 as
a subset.

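For illustration only, the dominance relation can be written directly
from this definition; the representation of a security level below is
an assumption of the sketch.

     # Minimal sketch only; the representation of a level is assumed.
     from dataclasses import dataclass

     @dataclass(frozen=True)
     class SecurityLevel:
         classification: int                 # hierarchical classification
         categories: frozenset = frozenset() # nonhierarchical categories

     def dominates(s1: SecurityLevel, s2: SecurityLevel) -> bool:
         # S1's classification is >= S2's, and S1's categories
         # include all of S2's categories as a subset.
         return (s1.classification >= s2.classification
                 and s2.categories <= s1.categories)

     secret_nato = SecurityLevel(2, frozenset({"NATO"}))
     conf_nato = SecurityLevel(1, frozenset({"NATO"}))
     assert dominates(secret_nato, conf_nato)
     assert not dominates(conf_nato, secret_nato)
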
Exploitable Channel
     Any channel that is usable or detectable by subjects external to
the Trusted Computing Base whose purpose is to violate the security
policy of the system.

Flaw
     An error of commission, omission, or oversight in a system that
allows protection mechanisms to be bypassed.

Flaw Hypothesis Methodology
     A system analysis and penetration technique in which
specifications and documentation for the system are analyzed and then
flaws in the system are hypothesized. The list of hypothesized flaws
is prioritized on the basis of the estimated probability that a flaw
actually exists and, assuming a flaw does exist, on the ease of
exploiting it and on the extent of control or compromise it would
provide. The prioritized list is used to direct a penetration attack
against the system.

Formal Proof
     A complete and convincing mathematical argument, presenting the
full logical justification for each proof step, for the truth of a
theorem or set of theorems.

Formal Security Policy Model
     A mathematically precise statement of a security policy. To be
adequately precise, such a model must represent the initial state of a
system, the way in which the system progresses from one state to
another, and a definition of a "secure" state of the system. To be
acceptable as a basis for a TCB, the model must be supported by a
formal proof that if the initial state of the system satisfies the
definition of a "secure" state and if all assumptions required by the
model hold, then all future states of the system will be secure. Some
formal modeling techniques include: state transition models, temporal
logic models, denotational semantics models, and algebraic
specification models.

Formal Top-Level Specification (FTLS)
     A top-level specification that is written in a formal
mathematical language to allow theorems showing the correspondence of
the system specification to its formal requirements to be hypothesized
and formally proven.

Formal Verification
     The process of using formal proofs to demonstrate the consistency
between a formal specification of a system and a formal security
policy model (design verification) or between the formal specification
and its program implementation (implementation verification).

Functional Testing
     The segment of security testing in which the advertised
mechanisms of a system are tested, under operational conditions, for
correct operation.

Identification
     The process that enables recognition of an entity by a system,
generally by the use of unique machine-readable user names.

Integrity
     Sound, unimpaired, or perfect condition.

Internal Security Controls
     Hardware, firmware, and software features within a system that
restrict access to resources (hardware, software, and data) to
authorized subjects only (persons, programs, or devices).

Isolation
     The containment of subjects and objects in a system in such a way
that they are separated from one another, as well as from the
protection controls of the operating system.

Lattice
     A partially ordered set for which every pair of elements has a
greatest lower bound and a least upper bound.

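For illustration only, and assuming a security level is represented as
a pair of a classification and a category set, security levels ordered
by dominance form a lattice; the least upper bound and greatest lower
bound of two levels can be computed as follows.

     # Minimal sketch only; the pair representation is an assumption.
     def lub(a, b):
         # Lowest level dominating both: max classification, union.
         return (max(a[0], b[0]), a[1] | b[1])

     def glb(a, b):
         # Highest level dominated by both: min classification, intersection.
         return (min(a[0], b[0]), a[1] & b[1])

     a = (2, frozenset({"NATO"}))
     b = (1, frozenset({"NATO", "CRYPTO"}))
     assert lub(a, b) == (2, frozenset({"NATO", "CRYPTO"}))
     assert glb(a, b) == (1, frozenset({"NATO"}))
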
Least Privilege
     This principle requires that each subject in a system be granted
the most restrictive set of privileges (or lowest clearance) needed
for the performance of authorized tasks. The application of this
principle limits the damage that can result from accident, error, or
unauthorized use.

Mandatory Access Control (MAC)
     A means of restricting access to objects based on the sensitivity
(as represented by a label) of the information contained in the
objects and the formal authorization (i.e., clearance) of subjects to
access information of such sensitivity.

Multilevel Device
     A device that is used in a manner that permits it to
simultaneously process data of two or more security levels without
risk of compromise. To accomplish this, sensitivity labels are
normally stored on the same physical medium and in the same form
(i.e., machine-readable or human-readable) as the data being
processed.

Object
     A passive entity that contains or receives information. Access to
an object potentially implies access to the information it contains.
Examples of objects are: records, blocks, pages, segments, files,
directories, directory trees, and programs, as well as bits, bytes,
words, fields, processors, video displays, keyboards, clocks,
printers, and network nodes.

Object Reuse
     The reassignment and reuse of a storage medium (e.g., page frame,
disk sector, magnetic tape) that once contained one or more objects.
To be securely reused and assigned to a new subject, storage media
must contain no residual data (magnetic remanence) from the object(s)
previously contained in the media.

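A toy Python sketch, included only for illustration, shows an
allocator that purges residual data before a storage block is
reassigned; the pool, block size, and allocator interface are
assumptions of the sketch.

     # Toy sketch only; blocks are modeled as bytearrays in a pool.
     pool = {0: bytearray(b"OLD SECRET DATA!"), 1: bytearray(16)}
     free_list = [0, 1]

     def allocate():
         # Purge residual data before handing the block to a new subject.
         block_id = free_list.pop(0)
         pool[block_id][:] = bytes(len(pool[block_id]))
         return block_id

     block_id = allocate()
     assert bytes(pool[block_id]) == b"\x00" * 16  # no remanence visible
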
Penetration
     The successful act of bypassing the security mechanisms of a
system.

Process
     A program in execution.

Protection-Critical Portions of the TCB
     Those portions of the TCB whose normal function is to deal with
the control of access between subjects and objects. Their correct
operation is essential to the protection of data on the system.

Read
     A fundamental operation that results only in the flow of
information from an object to a subject.

Read Access (Privilege)
     Permission to read information.

Reference Monitor Concept
     An access-control concept that refers to an abstract machine that
mediates all accesses to objects by subjects.

Security Level
     The combination of a hierarchical classification and a set of
nonhierarchical categories that represents the sensitivity of
information.

Security Policy
     The set of laws, rules, and practices that regulate how an
organization manages, protects, and distributes sensitive information.

Security Policy Model
     A formal presentation of the security policy enforced by the
system. It must identify the set of rules and practices that regulate
how a system manages, protects, and distributes sensitive information.
See Bell-LaPadula Model and Formal Security Policy Model.

Security-Relevant Event
     Any event that attempts to change the security state of the
system (e.g., change discretionary access controls, change the
security level of the subject, change user password). Also, any event
that attempts to violate the security policy of the system (e.g., too
many attempts to log in, attempts to violate the mandatory access
control limits of a device, attempts to downgrade a file).

Security Testing
     A process used to determine that the security features of a
system are implemented as designed. This includes hands-on functional
testing, penetration testing, and verification.

Simple Security Property
     A Bell-LaPadula security model rule allowing a subject read
access to an object only if the security level of the subject
dominates the security level of the object. Also called simple
security condition.

Single-Level Device
     An automated information systems device that is used to process
data of a single security level at any one time.

Spoofing
     An attempt to gain access to a system by posing as an authorized
user. Synonymous with impersonating, masquerading, or mimicking.

Star Property
     A Bell-LaPadula security model rule allowing a subject write
access to an object only if the security level of the object dominates
the security level of the subject. Also called confinement property or
*-property.

Subject
     An active entity, generally in the form of a person, process, or
device, that causes information to flow among objects or changes the
system state. Technically, a process/domain pair.

Subject Security Level
     A subject's security level is equal to the security level of the
objects to which it has both read and write access. A subject's
security level must always be dominated by the clearance of the user
with which the subject is associated.

Terminal Identification
     The means used to provide unique identification of a terminal to
a system.

Test Condition
     A statement defining a constraint that must be satisfied by the
program under test.

Test Data
     The set of specific objects and variables that must be used to
demonstrate that a program produces a set of given outcomes.

Test Plan
     A document or a section of a document which describes the test
conditions, data, and coverage of a particular test or group of tests.
See also: Test Condition, Test Data, Coverage Analysis.

Test Procedure (Script)
     A set of steps necessary to carry out one or a group of tests.
These include steps for test environment initialization, test
execution, and result analysis. The test procedures are carried out by
test operators.

Test Program
     A program which implements the test conditions when initialized
with the test data and which collects the results produced by the
program being tested.

Top-Level Specification (TLS)
     A nonprocedural description of system behavior at the most
abstract level, typically a functional specification that omits all
implementation details.

Trusted Computer System
     A system that employs sufficient hardware and software integrity
measures to allow its use for processing simultaneously a range of
sensitive or classified information.

Trusted Computing Base (TCB)
     The totality of protection mechanisms within a computer system,
including hardware, firmware, and software, the combination of which
is responsible for enforcing a security policy. It creates a basic
protection environment and provides additional user services required
for a trusted computer system. The ability of a trusted computing base
to correctly enforce a security policy depends solely on the
mechanisms within the TCB and on the correct input by system
administrative personnel of parameters (e.g., a user's clearance)
related to the security policy.

Trusted Path
     A mechanism by which a person at a terminal can communicate
directly with the Trusted Computing Base. This mechanism can only be
activated by the person or the Trusted Computing Base and cannot be
imitated by untrusted software.

User
     Person or process accessing an AIS either by direct connections
(i.e., via terminals) or indirect connections (i.e., prepare input
data or receive output that is not reviewed for content or
classification by a responsible individual).

Verification
     The process of comparing two levels of system specification for
proper correspondence (e.g., security policy model with top-level
specification, TLS with source code, or source code with object code).
This process may or may not be automated.

Write
     A fundamental operation that results only in the flow of
information from a subject to an object.

Write Access (Privilege)
     Permission to write an object.

REFERENCES

1. Department of Defense, Trusted Computer System Evaluation
Criteria, DoD 5200.28-STD, December 1985.

2. Department of Defense, ADP Security Manual - Techniques and
Procedures for Implementing, Deactivating, Testing, and Evaluating
Secure Resource Sharing ADP Systems, DoD 5200.28-M, revised June 1979.

3. Aerospace Report No. TOR-0086(6777-25)-1, "Trusted Computer System
Evaluation Management Plan," 1 October 1985.

4. National Computer Security Center, NCSC-TG-002 Version-1, Trusted
Product Evaluations - A Guide For Vendors, 1 March 1988 (DRAFT).

5. National Computer Security Center, NCSC-TG-004 Version 1, Glossary
of Computer Security Terms, 21 October 1988.

6. National Computer Security Center, NCSC-TG-013 Version 1, Rating
Maintenance Phase - Program Document, 23 June 1989.