The globe icon is used before materials available from other sites.
Items provided by the ACM Digital Library require a subscription or ACM membership in order to access them.
For more information on an author, see the list of Multicians.
For explanations of Multics terms, see the Multics Glossary.
For links to other sites of interest, see the Multics Links Page.
This report summarizes the effects of reducing the current Multics hardcore supervisor (Ring 0), even as the entire Multics system is undergoing continuous development and enhancement. An evolutionary engineering discipline, rather than a structured, formal approach, has been used to either modify or recommend changes to the system. Many of the proposed major changes have been demonstrated to be sound and useful. These system changes are documented in this report. For the purposes of this report, the security kernel is that part of the system which implements a reference monitor that enforces a specified protection policy. That is, a security kernel is a subset of the current Multics supervisor. This report will show that the engineering approach of undertaking trial designs and implementation is indeed a major contribution to the eventual analytical development and certification of a Multics supervisor which can then be viewed as the Multics security kernel.
This report provides the status of certain engineering efforts to support the development of a secure general purpose computer.
Details of a planning study for USAF computer security requirements are presented. An Advanced Development and Engineering program to obtain an open-use, multilevel secure computing capability is described. Plans are also presented for the related developments of communications security products and the interim solution to present secure computing problems. Finally, an Exploratory Development plan complementary to the recommended Advanced and Engineering Development plans is also included.
This note is prompted by a number of observations.
- After nearly twelve years of serious work on computer security, all that can be shown is two one-shot 'brassboard' systems and one commercially supported product that integrates the DoD security policy into the operating system.
- The first round of research results on computer security was useful, and by 1975 the principles of secure computers were well enough understood that the first demonstration models of security kernels had been completed. [SCHI 73]
- In spite of hopes to the contrary, it has been amply demonstrated that the civil sector of government and virtually all of the private sector can satisfy their information protection needs with simple physical and procedural methods, coupled with using systems with "improved integrity".
- In spite of the tiresomeness of its repetition, the fact is that the need for secure systems for important national defense applications has not been diminished in the slightest by any work that has gone on over the past twelve years.
Summary of a session of an unidentified conference, apparently a panel discussion in which Joe Ossanna described the status of Multics. But the pages that would tell us which conference it was (I didn't see it in the table of contents of the December 1968 FJCC) were not captured. There can't be very many conferences that had talks by Joe, Dave Farber, Gio Wiederhold, and Irwin Greenwald in the same session, but I haven't found it in any of the standard CS bibliographies, probably because it was a discussion-only session without a printed paper.
The legacy C-17 Support Equipment Data Acquisition and Control System (SEDACS) was initially designed as a test requirement document (TRD) and test program set (TPS) development system. Its applications have expanded to include word processing for a majority of the C-17 support equipment (SE) deliverable documentation, project management functions, and line-replaceable-unit (LRU) and shop-replaceable-unit (SRU) tracking. While the SEDACS system enabled MDA to support C-17 test and early operation, this legacy SEDACS had some drawbacks. Recently, the SEDACS was upgraded from a host-based Honeywell/Multics mainframe to a new client/server system. The TPS document management system (DMS) was designed to provide the environment to create and edit documents as well as to control their configurations, and it is the first step toward becoming an electronic document management system. The system has increased efficiency and productivity, improved and safeguarded file sharing, and provides better management of document revisions. This TPS DMS was developed using an integrated application software package that runs on IBM PCs. This paper describes how the integrated application software was developed and how the deliverable documents were transferred from the existing mainframe system to the client/server system. The software products identified in this paper were chosen to meet our particular applications requirements and are provided only as examples.
Building and prototyping an agricultural electronic marketing system involved experimenting with distributed synchronization, atomic activity, and commit protocols and recovery algorithms.
The User Requirements Analyzer (URA) is one of two major components of computer-aided requirements analysis. URA is used in conjunction with the User Requirements Language (URL) to generate, maintain, and analyze URL data bases. URL is described in ESD-TR-75-88, Vol. II. This document describes URA version 2.1 in the H6180 Multics computing environment. URA version 2.0 is implemented in the IBM 370/158 TSO computing environment and described in ESD-TR-75-88, Vol. III.
For the past several years ESD has been involved in various projects relating to secure computer systems design and operation. One of the continuing efforts, started in 1972 at MITRE, has been secure computer system modeling. The effort initially produced a mathematical framework and a model [1, 2] and subsequently developed refinements and extensions to the model [3] which reflected a computer system architecture similar to that of Multics [4]. Recently a large effort has been proceeding to produce a design for a secure Multics based on the mathematical model given in [1, 2, 3]. Same as ESD-TR-75-306, DTIC AD-A023588
a view by Jean Bellec (FEB), from the other side of the Atlantic
AIMER (Automatic Integration of Multiple Element Radars) is an emulated model of a loosely coupled distributed radar tracking processor. The design goal of the model is to provide a reliable processing system whose computational bandwidth can be dynamically altered in response to a changing ground scenario and the availability of hardware. A large number of minicomputers connected with multiple packet networks was chosen as the framework for the design. This paper describes the current status of AIMER.
Commun. ACM 15, 5, pp 308-318, May 1972. As experience with use of on-line operating systems has grown, the need to share information among system users has become increasingly apparent. Many contemporary systems permit some degree of sharing. Usually, sharing is accomplished by allowing several users to share data via input and output of information stored in files kept in secondary storage. Through the use of segmentation, however, Multics provides direct hardware addressing by user and system programs of all information, independent of its physical storage location. Information is stored in segments each of which is potentially sharable and carries its own independent attributes of size and access privilege. Here, the design and implementation considerations of segmentation and sharing in Multics are first discussed under the assumption that all information resides in a large, segmented main memory. Since the size of main memory on contemporary systems is rather limited, it is then shown how the Multics software achieves the effect of a large segmented main memory through the use of the Honeywell 645 segmentation and paging hardware.
In the late 1980s, when André Bensoussan was about to retire from Honeywell (or perhaps he was retiring as a Bull employee working at Honeywell) I was also working at Honeywell and arranged to send two boxes of what he considered the historically most valuable Multics material with which he was willing to part to the Charles Babbage Institute (CBI) at the University of Minnesota.
This report contains the user's manuals and software documentation for the Remote Data Entry System, which is the front-end to the MULTICS Pattern Recognition Facility, and the Cluster Analysis package, which was added to MULTICS OLPARS. The Remote Data Entry System was designed to allow users of the MULTICS Pattern Recognition Facility to input their data over the ARPANET from a Tektronix remote storage device. Once the data is input into the MULTICS System, routines are provided so that the user can easily restructure or cluster his database to perform different classification experiments.
The Protection Analysis project was initiated at ISI by ARPA IPTO to further understand operating system security vulnerabilities and, where possible, identify automatable techniques for detecting such vulnerabilities in existing system software. The primary goal of the project was to make protection evaluation both more effective and more economical by decomposing it into more manageable and methodical subtasks so as to drastically reduce the requirement for protection expertise and make it as independent as possible of the skills and motivation of the actual individuals involved. The project focused on near-term solutions to the problem of improving the security of existing and future operating systems in an attempt to have some impact on the security of the systems which would be in use over the next ten years. A general strategy was identified, referred to as "pattern-directed protection evaluation" and tailored to the problem of evaluating existing systems. The approach provided a basis for categorizing protection errors according to their security-relevant properties; it was successfully applied for one such category to the MULTICS operating system, resulting in the detection of previously unknown security vulnerabilities.
180 pages.
A lot has been forgotten about security over the years, so it is necessary for somebody who's been around for a while to speak up.
64 pages.
The Honeywell 6180 is a new, large-scale computer for the MULTICS time-sharing system. This report describes the 6180, and examines the feasibility of emulating it with each of three microprogrammable processors: the Burroughs D-Machine, the Nanodata QM-1, and the Burroughs B1700. Benchmark emulations are presented for each of these machines.
The operation of a computer system in a secure fashion requires the control of access to all parts of the system. One part of the system which is often neglected when access and security controls are developed is the input/output (I/O) subsystem. This paper develops a general Concept of Operations for I/O in a secure computer system. This concept is then applied to the proposed two-level, Secret-Top Secret, MULTICS System at the Air Force Data Services Center (AFDSC). The most unusual operational feature recommended for the AFDSC MULTICS is the use of autonomous processes to perform all I/O, preventing any user from directly accessing any I/O device. Procedures are described to provide the necessary controls for operation in the Data Services Center environment.
This article spells Wes Burner's name wrong throughout
The C-17 Support Equipment Data Acquisition and Control System (SEDACS) Test Program Set/Test Requirements Document (TPS/TRD) development system was upgraded from a host-based Honeywell/Multics mainframe system to a new client/server system with Internet connectivity. Reliability, flexibility, and supportability were the requirements for the new system. The combination of the client/server model and commercial software met these requirements by exploiting fast and inexpensive hardware and commercial off-the-shelf (COTS) software such as word processing and project and circuit analysis software. Greater efficiencies were realized by reducing the required time needed to train users, develop TPSs, and prepare supporting documentation. Quality was improved by incorporating configuration management tools and integrated spell checking into the applications suite and by designing around a centralized database. This paper briefly describes how we developed our new system and how we migrated from our existing mainframe (or legacy) system to a client/server system.
This paper discusses user considerations and how they affected the design of Tactical Control Directives (TCD). TCDs were a system extension to the Enhanced Naval Warfare Gaming System (ENWGS). They were a forward-chaining rule-based language and runtime environment that allowed users to construct and execute simulations of complex naval doctrine. They differed significantly from other rule-based environments of the time in that rules could be triggered by a combination of data conditions and real-time events.
82 pages
Some aspects of the Multics operating system are critically examined. In particular, the properties of the command language are noted as allowing considerable general-purpose programming power. The strengths and weaknesses are discussed, and a quantitative evaluation of speed is attempted based on a comparison of programming the "Towers of Hanoi" and Ackermann's function in both the Multics command language and PL/I. The programs also serve to exemplify the use of the command language.
The development of interactive graphics computer systems for use in detection, identification, and transformation of patterns contained in high-dimensional data has been a continuing program at the Rome Air Development Center since 1968. This long-standing effort has resulted in the implementation of OLPARS (the On-Line Pattern Analysis and Recognition System), IFES (the Image Feature Extraction System), and WPS (the Waveform Processing System). This report contains detailed design and user-oriented information related to MOOS (the MULTICS OLPARS Operating System), an advanced version of OLPARS currently resident upon the Honeywell 6180 MULTICS computer system. The currently operational system represents an implemented version of the operations described in a previous report (RADC-TR-73-241); appropriate selections from that report are retained within this document. This report contains brief descriptions of the MOOS system and the mathematics underlying the system algorithms. A major portion of this document is reserved for a user's manual (providing detailed information relating to the operation of all system options) and for MOOS program documentation.
The report describes the implementation of a series of applications programs and graphic techniques on the On-Line Pattern Analysis and Recognition System (OLPARS) previously developed at RADC. The report is an addendum to RADC-TR-71-177 (AD-732 235) and is therefore a continuation, in format and information content, of that document. Sections 1 and 2 give an overview of the additions and modifications to OLPARS in the areas of structure analysis, logic design, and measurement reduction. The remainder of the report contains changes, additions, and deletions to the user manuals, programmer's manual, and flow charts previously published as RADC-TR-71-177.
A key strategic question that a paging algorithm must answer whenever a new page is needed is: "Which page should be removed from core memory?" (1.9MB PDF)
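For readers who want the question made concrete, here is a minimal sketch (mine, not the paper's) of one classic answer, least-recently-used replacement; the experiment reported in the paper measures Multics's own policies, which differ in detail:

    from collections import OrderedDict

    class LRUCore:
        """Toy core memory with least-recently-used replacement."""
        def __init__(self, nframes):
            self.nframes = nframes
            self.frames = OrderedDict()          # page id -> contents

        def touch(self, page):
            """Reference a page; return the evicted page, if any."""
            if page in self.frames:
                self.frames.move_to_end(page)    # now most recently used
                return None                      # hit: nothing removed
            victim = None
            if len(self.frames) >= self.nframes:
                victim, _ = self.frames.popitem(last=False)  # remove the LRU page
            self.frames[page] = object()         # bring the new page in
            return victim

    core = LRUCore(3)
    for p in [1, 2, 3, 1, 4]:
        out = core.touch(p)
        if out is not None:
            print(f"page {out} removed to make room for {p}")  # page 2 removed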
(Also published in Tutorial: Software Management, Reifer, Donald J. (ed), IEEE Computer Society Press, 1979; Second Edition 1981; Third Edition, 1986.) A reasonable question of a software manager might be "What possible insight can I gain from the agonies of someone else's project?"
One of the principal hurdles in developing multiplexed computer systems is acquiring sufficient insight into the apparently complex problems encountered. This paper isolates two system objectives by distinguishing between problems related to multiplexing and those arising from sharing of information. In both cases, latent problems of noninteractive systems are shown to be aggravated by interacting people. Viewpoints such as reversibility of binding, and mechanisms such as segmentation, are suggested as approaches to acquiring insight. It is argued that only such analysis and functional understanding can lead to simplifications needed to allow design of more sophisticated systems.
Multics (Multiplexed Information and Computing Service) is a comprehensive, general-purpose programming system which is being developed as a research project. The initial Multics system will be implemented on the GE 645 computer. One of the overall design goals is to create a computing system which is capable of meeting almost all of the present and near-future requirements of a large computer utility.
First we review the goals, history and current status of the Multics project. This review is followed by a brief description of the appearance of the Multics system to its various classes of users. Finally several topics are given which represent some of the research insights which have come out of the development activities.
It is the purpose of this paper to discuss briefly the need for time-sharing, some of the implementation problems, an experimental time-sharing system which has been developed for the contemporary IBM 7090, and finally a scheduling algorithm of one of us (FJC) that illustrates some of the techniques which may be employed to enhance and be analyzed for the performance limits of such a time-sharing system.
Talk presented at a symposium on Advances in Software Technology held in February, 1968, at the opening of the Honeywell EDP Technology Center, Waltham, Massachusetts.
What I am really trying to address is the class of systems that, for want of a better phrase, I will call "ambitious systems." It almost goes without saying that ambitious systems never quite work as expected. Things usually go wrong and sometimes in dramatic ways. And this leads me to my main thesis, namely, that the question to ask when designing such systems is not whether something will go wrong, but when it will.
The "candy stripe" manual describing early versions of CTSS.
Letter to Multicians on the occasion of the shutdown of the last Multics.
A case study of a Classroom Assembly Program. Textbook for MIT course 6.251, System Programming, in 1962-63. FAP for the IBM 7090.
Partitioning, paging, and segmentation techniques are employed with virtual memory to provide more secure and efficient storage and transfer of information. The virtual memory is divided into a plurality of partitions with real memory storage provided by paging the plurality of partitions. User programs are segmented into logical units and stored in assigned partitions thereby isolating user programs and data. Unsegmented programs may be run by storage in a partition with direct addressing. Segment descriptors including partition, base, and bound are utilized in accessing memory. User domains are expandable by temporarily passing descriptor parameters from one routine to another with access flags limiting access thereto. By shrinking passed descriptors the receiving routine can be restricted to only a portion of the information defined by the descriptor.
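The descriptor mechanism the patent describes can be illustrated with a short sketch; the field and method names here are hypothetical, chosen only to mirror the abstract's base/bound/access-flag vocabulary, and the real hardware encoding differs:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Descriptor:
        partition: int
        base: int          # word offset within the partition
        bound: int         # number of addressable words
        read: bool = True
        write: bool = True

        def shrink(self, offset, length, write=False):
            """Pass a callee a descriptor covering only part of the segment,
            possibly with fewer access rights (never more)."""
            assert 0 <= offset and offset + length <= self.bound
            return Descriptor(self.partition, self.base + offset, length,
                              read=self.read, write=self.write and write)

        def check(self, addr, want_write=False):
            """Validate an access against the bound and the access flags."""
            if not (0 <= addr < self.bound):
                raise MemoryError("bound violation")
            if want_write and not self.write:
                raise PermissionError("write not permitted")
            return self.base + addr   # partition-relative word address

    seg = Descriptor(partition=1, base=0, bound=1024)
    view = seg.shrink(offset=512, length=128)   # callee sees 128 read-only words
    view.check(5)                               # ok: word 517 of the partition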
Looseleaf. 454 pages.
The value of a computer system to its users is greatly enhanced if a user can, in a simple and general way, build his work upon procedures developed by others. The attainment of this essential generality requires that a computer system possess the features of equipment-independent addressing, an effectively infinite virtual memory, and provision for the dynamic linking of shared procedure and data objects. The paper explains how these features are realized in the Multics system.
The need for a versatile on-line secondary storage complex in a multiprogramming environment is immense.
The introduction of an interactive electronic meeting facility, called Forum, within Honeywell's Large Information Systems Division (LISD), a large multi-national organization, has had profound effects. The environment set up by Forum closely mimics that of a face-to-face meeting. The user interface, based on a TTY-style terminal, allows the users to concentrate on the content of the meeting instead of on the interface or the computer. Forum is briefly described, and LISD's experiences, both good and bad, are discussed.
Do the hardware and software security features of the Air Force Data Services Center (AFDSC) Multics system comply with the Department of Defense security requirements? To answer this question, AFDSC commissioned MITRE to undertake a study to compare intrinsic features of the AFDSC Multics system with the applicable requirements set forth in DoD Directive 5200.28 and expanded upon in DoD Manual 5200.28-M. (also available as DTIC AD-A034985)
This paper is concerned with the features and concepts of system software for a parallel associative array processor---STARAN. Definitions of parallel processors have appeared often. Essentially they are machines with a large number of processing elements. They have the capability to operate on multiple data streams with a single instruction stream. STARAN is a line of parallel processors with a variable number of processing elements.
A particularly troublesome phenomenon, thrashing, may seriously interfere with the performance of paged memory systems, reducing computing giants (Multics, IBM System 360, and others not necessarily excepted) to computing dwarfs. The term thrashing denotes excessive overhead and severe performance degradation or collapse caused by too much paging. Thrashing inevitably turns a shortage of memory space into a surplus of processor time.
The linear approximation relating mean time between page transfers between levels of memory, as reported by Saltzer for Multics, is examined. It is tentatively concluded that this approximation is untenable for main memory, especially under working set policies; and that the linearity of the data for the drum reflects the behavior of the Multics scheduler for background jobs, not the behavior of programs.
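For reference, the approximation at issue can be written, in symbols of my own choosing (the paper's notation may differ), as

    \bar{h}(m) \;\approx\; a \, m

where \bar{h}(m) is the mean virtual time between page transfers when a program is allotted m words at a given level of the memory hierarchy, and a is an empirically fitted constant.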
There are a number of different philosophies concerning the problems of pricing the resources of a multi-access computer utility. Although some have been proposed only academically, others have actually been implemented by the various fledgling systems that have come into existence during the past few years.
The MADAM system was developed to provide the framework for conducting information system research, design, implementation, measurement and evaluation experiments within the context of the Multics operating system. This paper overviews some of the more important aspects of the design philosophy of MADAM.
This report describes the clustering algorithms added to the MULTICS OLPARS Operating System under this effort.
This report discusses the procedure used to run a series of machine-dedicated performance evaluation tests without any machine operator intervention, either before or after the tests, and with minimum disruption to normal time-sharing service. The procedure involves, among other things, setting up a control program to execute at some optimum time in the future, whereupon MULTICS is automatically induced to remove itself from its normal user support status, log in a predetermined set of artificial users for the duration of the test, and following this, restore itself to its normal user (time-sharing) status. 43 pages.
This report describes work done during the second year of a research and development program aimed ultimately at a Rugged Programming Environment for JOVIAL. The RPE/1 verification system designed and built during the first year has been greatly extended and improved in several ways. The basic method of verification remains the same--that of inductive assertions. The input processor has been modified to handle virtually all of JOCIT instead of the small subset covered by the RPE/1 system. The overall speed of verification has been increased by a factor of over 25. Ease of user interaction with the system has been greatly enhanced by adding facilities for carrying out and saving partial proofs of programs, for extending the assertion language, and for enabling top-down and bottom-up proofs for well-structured programs. Moreover, the entire system has been translated into MACLISP, the system files have been transferred to the RADC-MULTICS Honeywell 6180 computer, and a sample verification (shown in the report) has been carried out entirely on the RADC computer.
also in Information, A Scientific American Book, W. H. Freeman & Co., pp. 76-95, 1966
An I/O system has been implemented in the Multics system that facilitates dynamic switching of I/O devices. This switching is accomplished by providing a general interface for all I/O devices that allows all equivalent operations on different devices to be expressed in the same way. Also, particular devices are referenced by symbolic names, and the binding of names to devices can be dynamically modified. Available I/O operations range from a set of basic I/O calls that require almost no knowledge ...
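A toy rendering of the symbolic-name idea, in Python rather than Multics PL/I; the names here (attach, user_output) merely echo the Multics I/O system's vocabulary and do not reproduce its actual interface:

    class Device:
        """Common interface: every device honors the same operations."""
        def write(self, data): raise NotImplementedError

    class TTY(Device):
        def write(self, data): print(data, end="")

    class FileDevice(Device):
        def __init__(self, path): self.f = open(path, "a")
        def write(self, data): self.f.write(data)

    streams = {}                                  # symbolic name -> device

    def attach(name, device): streams[name] = device
    def write_stream(name, data): streams[name].write(data)

    attach("user_output", TTY())
    write_stream("user_output", "hello\n")            # goes to the terminal
    attach("user_output", FileDevice("log.txt"))      # name rebound on the fly
    write_stream("user_output", "now in a file\n")    # same call, new device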
The TEACH system was developed at MIT to ease the cost and improve the results of elementary instruction in programming. To the student, TEACH offers loosely guided experience with a conversational language which was designed with teaching in mind. Faculty involvement is minimal. A term of experience with TEACH is discussed. Pedagogically, the system appears to be successful; straightforward reimplementation will make it economically successful as well. Similar programs of profound tutorial skill will appear only as the results of extended research. The outlines of this research are beginning to become clear.
A debugging study was conducted which surveyed current research being performed in the area of software debugging during integration level testing. Particular emphasis was placed on assessing debugging tools and techniques which were applicable to embedded software developments. The purpose of the debugging study was to define a software debugging methodology applicable to diverse environments to be utilized during integration testing of system software. The results of the study are contained in three volumes. This volume presents the application of the debugging methodology to three specific environments. 122 pages.
This is a pair of memos I wrote in 1974 when I was a graduate student working on the Multics project. (precursors of MIT CSR-RFC-123)
A document that provides the prospective Multics FORTRAN user with sufficient information to enable him to create and execute FORTRAN programs on Multics. It contains a complete definition of the Multics FORTRAN language as well as a description of the FORTRAN command and error messages. It also describes how to communicate with non-FORTRAN programs and discusses some of the fundamental characteristics of Multics that affect the FORTRAN user. 68 pages. -- Organick
Description of the Multics version 1 PL/I compiler implementation.
This paper introduces the notion of usage counts, shows how usage counts can be developed by algorithms that eliminate redundant computations, and describes how usage counts can provide the basis for register allocation. The paper compares register allocation based on usage counts to other commonly used register allocation techniques, and presents evidence which shows that the usage count technique is significantly better than these other techniques.
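A compressed sketch of the weighting idea (my own simplification; the paper derives its counts during redundant-computation elimination rather than from a flat reference list):

    from collections import Counter

    def allocate_registers(uses, nregs):
        """uses: list of (variable, loop_depth) pairs, one per reference.
        Weight each use by 10**depth so inner-loop variables win, then
        keep the nregs most heavily used variables in registers."""
        counts = Counter()
        for var, depth in uses:
            counts[var] += 10 ** depth
        in_regs = [v for v, _ in counts.most_common(nregs)]
        return {v: f"r{i}" for i, v in enumerate(in_regs)}

    refs = [("i", 2), ("i", 2), ("sum", 2), ("n", 0), ("tmp", 1)]
    print(allocate_registers(refs, 2))   # {'i': 'r0', 'sum': 'r1'}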
(also available as DTIC AD-A034986)
A minor hardware extension to the Honeywell 6180 processor is demonstrated to allow the primary memory requirements of a process in Multics to be approximated. The additional hardware required for this estimate to be computed consists of a program-accessible register containing the miss rate of the associative memory used for page table words. This primary memory requirement estimate was employed in an experimental version of Multics to control the level of multiprogramming in the system and to bill for memory usage. The resulting system's tuning parameters display configuration insensitivity, and it is conjectured that the system would also track shifts in the referencing characteristics of its workload and keep the system in tune.
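The control loop can be caricatured in a few lines; this is a crude feedback rule with invented thresholds, not the thesis's actual estimator, which computes a memory-requirement figure from the miss rate itself:

    def adjust_eligibility(miss_rate, level, lo=0.02, hi=0.08):
        """A high associative-memory miss rate suggests the eligible
        processes' working sets exceed primary memory, so shed load;
        a low rate suggests there is room for one more process."""
        if miss_rate > hi and level > 1:
            return level - 1
        if miss_rate < lo:
            return level + 1
        return level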
This report covers the procedures required to protect critical phases of the design, development, and certification of a secure Multics. Involved is protection of the security kernel software from unauthorized alteration or sabotage. The facilities of the Government Information Security Program are applied. The program includes protection of a security kernel for Multics and a security kernel for the Secure Communications Processor.
The problem of maintaining information privacy in a multi-user, remote-access system is quite complex. Hopefully, without going into detail, some idea can be given of the mechanisms that have been used in the Multics operating system at MIT.
In the late spring and early summer of 1964 it became obvious that greater facility in the computing system was required if time-sharing techniques were to move from the state of an interesting pilot experiment into that of a useful prototype for remote access computer systems. Investigation proved computers that were immediately available could not be adapted readily to meet the difficult set of requirements time-sharing places on any machine. However, there was one system that appeared to be extendible into what was desired. This machine was the General Electric 635.
The National Computer Security Center (NCSC) uses DOCKMASTER, a Honeywell DPS-8/70 mainframe running the B2-evaluated Multics operating system. DOCKMASTER provides a central electronic facility for technical interchange between NCSC personnel, computer vendors, and the US computer security community on unclassified topics related to computer security. To support this role, DOCKMASTER is used to store a considerable amount of vendor proprietary data. Up until January 1989, this information was protected using only a discretionary security policy enforced by the Multics Access Control List (ACL) mechanisms. In January 1989, the NCSC began utilizing the Multics Access Isolation Mechanism (AIM) to provide Mandatory Access Controls (MAC) to protect vendor-proprietary information stored on DOCKMASTER. Modifications to standard AIM were necessary to increase the number of compartments in order to adequately separate vendor data (i.e., each vendor has a single compartment). This paper discusses the modifications made to Multics to increase the number of compartments used in the enforcement of its Mandatory Access Control policy. These modifications included revisions to the Trusted Computing Base (TCB). This paper will describe the reason for the changes, the extent of work required to make the changes, the adjustments made by users to utilize AIM, and the impact of the changes on user productivity.
In this paper we will define and discuss a solution to some of the problems concerned with protection and security in an information processing utility. This paper is not intended to be an exhaustive study of all aspects of protection in such a system. Instead, we concentrate our attention on the problems of protecting both user and system information (procedures and data) during the execution of a process. We will give special attention to this problem when shared procedures and data are permitted.
edited transcript of a talk given at NSA November 20, 1969.
This paper presents a mechanism for containing the spread of computer viruses by detecting at run-time whether or not an executable has been modified since its installation. The detection strategy uses encryption and is held to be better for virus containment than conventional computer security mechanisms which are based on the incorrect assumption that preventing modification of executables by unauthorized users is sufficient. Although this detection mechanism is most effective when all executables in a system are encrypted, a scheme is presented that shows the usefulness of the encryption approach when this is not the case. The detection approach is also better suited for use in untrusted computer systems. The protection of this mechanism in untrusted computing environments is addressed.
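A minimal sketch of the detection strategy, substituting a keyed digest where the paper uses encryption (either way, modification since installation is detected because an attacker without the key cannot produce a valid sealed image):

    import hmac, hashlib

    KEY = b"per-system secret withheld from users"   # stands in for the cipher key

    def seal(path, db):
        """At installation time: record a keyed digest of the executable."""
        data = open(path, "rb").read()
        db[path] = hmac.new(KEY, data, hashlib.sha256).digest()

    def check_before_exec(path, db):
        """At run time: refuse to execute a modified or unsealed image."""
        data = open(path, "rb").read()
        tag = hmac.new(KEY, data, hashlib.sha256).digest()
        if path not in db or not hmac.compare_digest(tag, db[path]):
            raise PermissionError(f"{path}: modified since installation")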
This thesis describes the implementation of a code generator for the Seal language on the Multiplexed Information and Computing Service. The implementation developed extensive error handling techniques for both the code generator itself, and the Seal programs it compiles.
This paper addresses the choice of Lisp as the implementation language, and its consequences, including some of the implementation issues. The detailed history of Multics Emacs, its system-level design considerations, and its impact on Multics and its user community are discussed in [Greenberg]. One of the immediate and profound consequences of this choice has been to assert Lisp's adequacy, indeed, superiority, as a full-fledged systems and applications programming language. Multics Emacs ...
Multics Emacs is a video-oriented text preparation and editing system running on Honeywell's Multics system, being distributed as an experimental offering in Multics Release 7.0. From the viewpoint of Multics, it represents the first video-management software to be implemented, the first time character-at-a-time interaction has been used, and a radical and complete departure from other editing and text preparation tools and techniques prevalent on Multics.
If you are not already familiar with LISP, in some detail, including the traditional implementations and value/object issues, you probably should not be reading this.
This paper describes the Multics multilevel paging system, the Page Multilevel algorithm or PML for short, with particular emphasis on the algorithms used to move pages from one level of the storage hierarchy to another. The paper also discusses some of the history and background of the development in particular where it relates to changes in the algorithms. Although Multics has been in working existence for many years, many of its features are still novel and implemented on few if any other operating systems. For this reason, a discussion of some of the terminology as it relates to Multics is also included as background for the reader. Finally, a discussion is presented which predicts probable future developments both on Multics and other systems with respect to hierarchically organized memories (storage hierarchies) in light of what we have learned from Multics.
In the past two decades, thousands of computers have been applied successfully in various industries. How much more widespread will their use become? Martin Greenberger, who is associate professor at the School of Industrial Management of M.I.T., has been working with computers for fourteen years.
PL/I source for moo is available online.
The Graphic Display Monitoring System (GDM) is an experimental monitoring facility for Multics, a general purpose time-sharing system implemented at Project MAC cooperatively with General Electric and the Bell Telephone Laboratories. GDM allows design, systems programming, and operating staff to graphically view the dynamically changing properties of the time-sharing system. It was designed and implemented by the author to provide a medium for experimentation with the real-time observation of time-sharing system behavior. GDM has proven to be very useful both as a measuring instrument and a debugging tool and as such finds very general use.
Results are reported showing the changing pattern of command use by introductory business data processing students. Using the ability of the University of Calgary's Honeywell Multics Operating System to tailor a command and response environment, a subset of commands and responses (called GENIE) was set up in a user-friendly environment to facilitate novice students programming at CRT terminals. Frequency and time of usage of all commands were metered, and changing patterns of usage emerged as the semester progressed. For example, "help" usage -- which was originally quite extensive and broad -- limited itself over time to questions only about specific topics. Reluctance to use an "audit" facility to capture an interactive session disappeared as the commands for such usage were likened to a movie camera taking pictures over a student's shoulder. It is further noted that specific emphasis was placed on simplifying commands and reducing options. The whole idea of a restricted command environment is compared to the "abstract machine" concept of Hopper, Kugler, and Unger, who are building a universal command and response language (NICOLA, a NIce Standard COmmand LAnguage). GENIE is seen as an example of what such an abstract machine could be if the Multics operating system were viewed as a basic or "parent" abstract machine. Interactive environments such as Multics provides are viewed as essential to providing a satisfactory time-sharing system for the various, but frequently intermittent, uses in the social sciences.
The most effective approach to evaluating the security of complex systems is to deliberately construct the systems using security patterns specifically designed to make them evaluable. Just such an integrated set of security patterns was created decades ago based on the Reference Monitor abstraction. An associated systematic security engineering and evaluation methodology was codified as an engineering standard in the Trusted Computer System Evaluation Criteria (TCSEC). This paper explains how the TCSEC and its Trusted Network Interpretation (TNI) constitute a set of security patterns for large, complex and distributed systems and how those patterns have been repeatedly and successfully used to create and evaluate some of the most secure government and commercial systems ever developed.
(also available as DTIC AD-A034221)
This report describes the design of a Secure Data Management System (DMS) that is to operate on a Secure MULTICS Operating System Kernel. The DMS achieves its security by mapping its data base into the security structure provided by the operating system, with the result that the DMS need contain no security enforcement code. The logical view chosen for the DMS is the relational view of data.
The goal of Project Guardian is to design, develop and certify a secure Multics to provide a certified secure multilevel computer utility. This report covers preliminary work in development of a specification describing the characteristics of the secure system.
As part of an effort to engineer a security kernel for Multics, the dynamic linker has been removed from the domain of the security kernel. The resulting implementation of the dynamic linking function requires minimal security kernel support and is consistent with the principle of least privilege. In the course of the project, the dynamic linker was found to implement not only a linking function, but also an environment initialization function for executing procedures. This report presents an analysis of dynamic linking and environment initialization in a multi-domain process, isolating three sets of functions requiring different sets of access privileges. A design based on this decomposition of the dynamic linking and environment initialization functions is presented.
multiple references to Multics use at MIT
An on-line simulation system allows both the user and the computer to cooperate and share the task of performing the simulation. It does this by providing facilities for the user to interact with the computer so that they may both play active roles in the simulation process as it is occurring. Thus, the user may perform some of the simulation functions himself and the computer performs the remaining ones. Alternately, the user may act only as a monitor and observe, verify and record data or modify and redirect the simulation when it strays erroneously from the desired path. A second feature of an on-line simulation system is that it may allow the actual phenomenon being simulated to become a part of the simulation.
Later published as Honeywell GA01
In this paper, we describe an on-line and interactive programming system, TICS [1] (for Teacher-Interactive Computer System), which is aimed at facilitating the authoring of interactive computer programs. The system includes particular features for creating instructional software, and in that application it is intended for direct use by teachers or other persons whose expertise lies in the subject matter being addressed, but not necessarily in computer programming. To that purpose, the system provides a greater degree of computer assistance for the authoring process itself than has been afforded in earlier languages and programming systems of similar orientation [2-5]. TICS is implemented within the M.I.T. Multics time-sharing system [6] in two components: an author system and a delivery system. The former provides the tools for writing, investigating, editing, and trying out programs. The latter provides a special environment for student use of the programs.
This paper defines the lattice security model and shows it to be useful in private sector applications of decentralized computer networks. It examines discretionary security models and shows them to be inadequate to protect against 'Trojan Horse' attacks. It examines the management of large security lattices and proposes solutions to the proliferation of categories problem.
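The lattice's dominance relation, on which this kind of analysis rests, fits in a few lines; the level numbering and the example category are illustrative, not the paper's:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Label:
        level: int              # e.g. 0=Unclassified ... 3=Top Secret
        categories: frozenset   # compartments / categories

        def dominates(self, other):
            """A subject may read an object only if the subject's label
            dominates the object's: higher-or-equal level AND a superset
            of its categories."""
            return (self.level >= other.level and
                    self.categories >= other.categories)

    secret_nato = Label(2, frozenset({"NATO"}))
    secret      = Label(2, frozenset())
    print(secret_nato.dominates(secret))   # True: may read plain Secret data
    print(secret.dominates(secret_nato))   # False: lacks the NATO category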
A security evaluation of Multics for potential use as a two-level (Secret/Top Secret) system in the Air Force Data Services Center (AFDSC) is presented. An overview is provided of the present implementation of the Multics security controls. The report then details the results of a penetration exercise of Multics on the HIS 645 computer. In addition, preliminary results of a penetration exercise of Multics on the new HIS 6180 computer are presented. The report concludes that Multics as implemented today is not certifiably secure and cannot be used in an open use multi-level system. However, the Multics security design principles are significantly better than those of other contemporary systems. Thus, Multics as implemented today can be used in a benign Secret/Top Secret environment. In addition, Multics forms a base from which a certifiably secure open use multi-level system can be developed.
Almost thirty years ago a vulnerability assessment of Multics identified significant vulnerabilities, despite the fact that Multics was more secure than other contemporary (and current) computer systems. Considerably more important than any of the individual design and implementation flaws was the demonstration of subversion of the protection mechanism using malicious software (e.g., trap doors and Trojan horses). A series of enhancements was suggested that enabled Multics to serve in a relatively benign environment. These included the addition of "Mandatory Access Controls", and these enhancements were greatly enabled by the fact that Multics was designed from the start for security. However, the bottom-line conclusion was that "restructuring is essential" around a verifiable "security kernel" before using Multics (or any other system) in an open environment (as in today's Internet) with well-motivated professional attacks employing subversion. The lessons learned from the vulnerability assessment are highly applicable today as governments and industry strive (unsuccessfully) to "secure" today's weaker operating systems through add-ons, "hardening", and intrusion detection schemes.
Building a high-assurance, secure operating system for memory constrained systems, such as smart cards, introduces many challenges. The increasing power of smart cards has made their use feasible in applications such as electronic passports, military and public sector identification cards, and cell-phone based financial and entertainment applications. Such applications require a secure environment, which can only be provided with sufficient hardware and a secure operating system. We argue that smart cards pose additional security challenges when compared to traditional computer platforms. We discuss our design for a secure smart card operating system, named Caernarvon, and show that it addresses these challenges, which include secure application download, protection of cryptographic functions from malicious applications, resolution of covert channels, and assurance of both security and data integrity in the face of arbitrary power losses. The paper is of interest to Multicians, because the Caernarvon operating system uses a clone of the Multics quota mechanism to control usage of the very limited amount of persistent memory on the smart card.
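Since the quota mechanism is the hook for Multicians, here is a sketch of the hierarchical quota-cell idea as I understand it (names and structure are mine, not Caernarvon's code): each directory either holds a quota cell of its own or charges the nearest ancestor that does, and quota can be carved out of a parent cell for a child.

    class Dir:
        def __init__(self, parent=None, quota=None):
            self.parent, self.quota, self.used = parent, quota, 0

        def account(self):
            """Find the quota cell this directory draws against."""
            d = self
            while d.quota is None:
                d = d.parent
            return d

        def charge(self, pages):
            acct = self.account()
            if acct.used + pages > acct.quota:
                raise MemoryError("record quota overflow")
            acct.used += pages

        def move_quota(self, child, pages):
            """Carve a child quota cell out of this cell's unused allocation."""
            assert child.parent is self and self.quota is not None
            if self.quota - self.used < pages:
                raise MemoryError("insufficient quota to move")
            self.quota -= pages
            child.quota = (child.quota or 0) + pages

    root = Dir(quota=100)
    sub = Dir(parent=root)      # no cell of its own: charges root
    sub.charge(10)              # root.used == 10
    root.move_quota(sub, 20)    # sub becomes its own 20-page quota cell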
This dissertation examines two major limitations of capability systems: an inability to support security policies that enforce confinement and a reputation for relatively poor performance when compared with non-capability systems.
An organized record of actual flaws can be useful to computer system designers, programmers, analysts, administrators, and users. This survey provides a taxonomy for computer program security flaws, with an Appendix that documents 50 actual security flaws. These flaws have all been described previously in the open literature, but in widely separated places. For those new to the field of computer security, they provide a good introduction to the characteristics of security flaws and how they ...
The computer department of the General Electric Corporation began with the winning of a single contract to provide a special-purpose computer system to the Bank of America, and expanded to the development of a line of upward-compatible machines, in advance of the IBM System/360, whose descendants still exist in 1995, to a highly successful time-sharing service, and to a process control business. Over the objections of the executive officers of the Company, the computer department strove to become number two in the industry, but after fifteen years, to the surprise of many in the industry, GE sold the operation and got out of the competition to concentrate on other products that had a faster turnaround on investment and a well-established first or second place in their industry. This paper looks at the history of the GE computer department and attempts to draw some conclusions regarding the reasons why this fifteen-year venture was not more successful, while recognizing that there were successful aspects of the operation that could have balanced the books and provided necessary capital for a continued business.
This article is a follow-up and extension of the first author's 1995 Annals article entitled, "The Rise and Fall of the General Electric Corporation Computer Department." It is divided into three parts: a study of the financial implications of rental versus sales in the larger GE environment, a collection of differing views with respect to the GE management paradigm and its effect on the Computer Department, and a set of corrections to the original article.
The currently developed user language interfaces of information systems are generally intended for serious users. These interfaces commonly ignore potentially the largest user group, i.e., casual users. This project discusses the concepts and implementation of a natural query language system that satisfies the nature and information needs of casual users by allowing them to communicate with the system in the form of their native (natural) language. In addition, a framework for the development of such an interface is also introduced for the MADAM (Multics Approach to Data Access and Management) system at the University of Southwestern Louisiana.
52 pages
For a secure computer system in the B2, B3 and A1 classes (as defined by the DoD Trusted Computer System Evaluation Criteria), the problem of confining a process such that it may not transmit information in violation of the *-property is an analyzable and solvable problem. This paper examines the problem of covert channels and attempts to analyze and resolve them relative to satisfying the B2 security requirements. A novel solution developed for the Multics computer system for a class of covert channels is presented.
In a previous article, I introduced the idea of a mechanism (the covert channel limiter) that would watch for the potential uses of covert channels and affect the responsible process (or process group) only when such potential uses exceeded the allowable bandwidth for covert channels. Recent work involving the design of the Opus operating system (target class B3) has refined and extended this idea. This paper extends the informal basis for the covert channel limiter and extends its possible utility.
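A token-bucket rendering of the limiter idea; the parameters and structure here are illustrative, not Opus's design:

    import time

    class CovertChannelLimiter:
        """Meter events that could signal over a covert channel and slow
        the offending process once its estimated signaling rate exceeds
        the allowed bandwidth."""
        def __init__(self, allowed_bps=1.0, burst_bits=10.0):
            self.allowed, self.burst = allowed_bps, burst_bits
            self.tokens, self.last = burst_bits, time.monotonic()

        def event(self, bits=1.0):
            now = time.monotonic()
            # replenish at the allowed covert bandwidth, up to the burst cap
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.allowed)
            self.last = now
            self.tokens -= bits
            if self.tokens < 0:
                time.sleep(-self.tokens / self.allowed)   # throttle the caller
                self.tokens = 0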
textbookland.com lists the price for this report as 10 trillion dollars.
A distinctive concern in the U.S. military for computer security dates from the emergence of time-sharing systems in the 1960s. This paper traces the subsequent development of the idea of a "security kernel" and of the mathematical modeling of security, focusing in particular on the paradigmatic Bell-La Padula model. The paper examines the connections between computer security and formal, deductive verification of the properties of computer systems. It goes on to discuss differences between the cultures of communications security and computer security, the bureaucratic turf war over security, and the emergence and impact of the Department of Defense's Trusted Computer System Evaluation Criteria (the so-called Orange Book), which effectively took its final form in 1983. The paper ends by outlining the fragmentation of computer security since the Orange Book was written.
Most aspects of our private and social lives--our safety, the integrity of the financial system, the functioning of utilities and other services, and national security--now depend on computing. But how can we know that this computing is trustworthy? In Mechanizing Proof, Donald MacKenzie addresses this key issue by investigating the interrelations of computing, risk, and mathematical proof over the last half century from the perspectives of history and sociology. His discussion draws on the technical literature of computer science and artificial intelligence and on extensive interviews with participants.
The book is a slice through the history of those mainframe machines as experienced by GE and Honeywell old timer Russ McGee, manager in Phoenix and creator of the VMM virtual machine monitor. Many interesting insights into the politics at LISD.
The objective of the research described in this report was the development and software implementation of a Long Waveform Analysis System (WAVES) on the Honeywell 6180 Computer System running under the MULTICS operating System. The currently operational WAVES System is an open-ended and flexible system for primary use in feature definition and extraction and, as such, serves as a front-end to the MULTICS version of OLPARS (On-Line Pattern Analysis and Recognition System). The development of computer-based interactive feature definition and pattern classification systems has been a continuing program at Rome Air Development Center since 1968. This long standing effort has resulted in the implementation of OLPARS, IFES (the Image Feature Extraction System), IDRS (the Interactive Digital Receiver Simulator System), and WPS (the Waveform Processing System). WAVES represents a furtherance of this continuing effort and a logical expansion and improvement of currently available waveform analysis and feature definition systems.
The simulation of continuous systems, simulation languages in general, and CSSL-IV in particular are discussed briefly, followed by a description of the attempts made to create a more user-friendly environment for the CSSL-IV implementation on the Honeywell Multics system at the University of Calgary.
The PL/I language's facilities for handling exceptional conditions are analyzed. The description is based on the new PL/I standard. Special attention is given to fine points which are not well known. The analysis is generally critical. It emphasizes problems in regards to implementation and structured programming. A few suggestions for future language design are offered.
Article in the "New Applications" column about Industrial Nucleonics.
The 1967 Spring Joint Computer Conference session organized by Willis Ware and the 1970 Ware Report are widely held by computer security practitioners and historians to have defined the field's origin. This article documents, describes, and assesses new evidence about two early multilevel access, time-sharing systems, SDC's Q-32 and NSA's RYE, and outlines its security-related consequences for both the 1967 SJCC session and 1970 Ware Report. Documentation comes from newly conducted Charles Babbage Institute oral histories, technical literature, archival documents, and recently declassified sources on National Security Agency computing. This evidence shows that early computer security emerged from the intersection of material, cultural, political, and social events and forces.
There are many good arguments for implementing information systems as distributed systems. These arguments depend on the extent to which interactions between machines in the distributed implementation can be minimized. Sharing among users of a computer utility is a type of interaction that may be difficult to provide in a distributed system. This paper defines a number of parameters that can be used to characterize such sharing. This paper reports measurements that were made on the M.I.T. Multics system in order to obtain estimates of the values of these parameters for that system. These estimates are upper bounds on the amount of sharing and show that although Multics was designed to provide active sharing among its users, very little sharing actually takes place. Most of the sharing that does take place is sharing of system programs, such as the compilers and editors.
From Herbert Stoyan Collection on LISP Programming, Lot Number X5687.2010
(A book review of Organick's book.) "The miracle is that it works and provides a level of service sufficient for customers of Honeywell to buy it and M.I.T. users to use it. Nevertheless, there must be a better way to achieve an information utility than such a complex system as Multics."
Parallel modification of software modules by different programming teams is an inherent problem of large-scale system software efforts. In the Multics Project, experiment and analysis have led to the development of an interactive program, merge_ascii, which competently merges related texts.
The trusted computer system evaluation criteria defined in this document classify systems into four broad hierarchical divisions of enhanced security protection. They provide a basis for the evaluation of effectiveness of security controls built into automatic data processing system products. The criteria were developed with three objectives in mind: (a) to provide users with a yardstick with which to assess the degree of trust that can be placed in computer systems for the secure processing of classified or other sensitive information; (b) to provide guidance to manufacturers as to what to build into their new, widely-available trusted commercial products in order to satisfy trust requirements for sensitive applications; and (c) to provide a basis for specifying security requirements in acquisition specifications. Two types of requirements are delineated for secure processing: (a) specific security feature requirements and (b) assurance requirements. Some of the latter requirements enable evaluation personnel to determine if the required features are present and functioning as intended. The scope of these criteria is to be applied to the set of components comprising a trusted system, and is not necessarily to be applied to each system component individually. Hence, some components of a system may be completely untrusted, while others may be individually evaluated to a lower or higher evaluation class than the trusted product considered as a whole system. In trusted products at the high end of the range, the strength of the reference monitor is such that most of the components can be completely untrusted. Though the criteria are intended to be application-independent, the specific security feature requirements may have to be interpreted when applying the criteria to specific systems with their own functional requirements, applications or special environments (e.g., communications processors, process control computers, and embedded systems in general). The underlying assurance requirements can be applied across the entire spectrum of ADP system or application processing environments without special interpretation.
The security protection provided by the Honeywell Multics MR 11.0 operating system, with the B2-specific changes applied, configured in the most secure manner described in the Trusted Facility Manual, and running on the Honeywell Level 68/DPS or Honeywell DPS 8/70M multiprocessor has been evaluated by the National Computer Security Center (NCSC). The security features of Multics were evaluated against the requirements specified by the DoD Trusted Computer System Evaluation Criteria (the Criteria) dated 15 August 1983. (6MB PDF)
Numerous papers and conference talks have recently been devoted to the affirmation or reaffirmation of various common-sense principles of computer program design and implementation, particularly with respect to operating systems and to large subsystems such as language translators. These principles are nevertheless little observed in practice, often to the detriment of the resulting systems. This paper attempts to summarize the most significant principles, to evaluate their applicability in the real world of large multi-access systems, and to assess how they can be used more effectively.
This paper summarizes current research at SRI aimed at developing secure operating systems and verifying certain critical properties of these systems. It is seen that proofs of design properties can be relatively straightforward when the design is specified in a suitable formal specification language. These proofs demonstrate the correspondence between the desired properties and a specification of the system design. Various on-line tools aid considerably in this process. In addition, correctness proofs for implementations of such systems are now feasible, because of both various theoretical advances and the use of supporting tools.
(755KB PDF)
This paper deals with some of the problems encountered at The University of Calgary during the tuning and optimization of system performance. It presents some of the characteristics to be found in both the scheduling system and the virtual memory environment of Multics, and attempts to put forward a heuristic model of system action to permit a tuner to improve performance.
In the middle 1960s IBM responded to pressure from its most prestigious customers to hasten the development and availability of computer time-sharing systems. When MIT and Bell Laboratories chose General Electric computers for their new time-sharing system, IBM management feared that the 'prestige luster' of these customers would lead other customers to demand the same capabilities and that there would be a 'snow-balling' effect as more customers rejected IBM computers. IBM worked on a time-sharing product and brought it to market by the end of the decade despite greater-than-expected costs. Meanwhile MIT, Bell Laboratories, and GE worked together on a new time-sharing system known as Multics. By examining IBM's role in and response to the development of time-sharing, this article illustrates the nontechnological criteria that even high-technology companies use to decide what products to develop and market.
Multics as it was in the 60s. Reprint available from M.I.T. Press.
This volume provides an overview of the Multics system developed at M.I.T.--a time-shared, general-purpose, utility-like system with third-generation software. The advantage that this new system has over its predecessors lies in its expanded capacity to manipulate and file information on several levels and to police and control access to data in its various files. On the invitation of M.I.T.'s Project MAC, Elliott Organick developed over a period of years an explanation of the workings, concepts, and mechanisms of the Multics system. This book is a result of that effort, and is approved by the Computer Systems Research Group of Project MAC.
In keeping with his reputation as a writer able to explain technical ideas in the computer field clearly and precisely, the author develops an exceptionally lucid description of the Multics system, particularly in the area of "how it works." His stated purpose is to serve the expected needs of designers, and to help them "to gain confidence that they are really able to exploit the system fully, as they design increasingly larger programs and subsystems."
The chapter sequence was planned to build an understanding of increasingly larger entities. From segments and the addressing of segments, the discussion extends to ways in which procedure segments may link dynamically to one another and to data segments. Subsequent chapters are devoted to how Multics provides for the solution of problems, the file system organization and services, and the segment management functions of the Multics file system and how the user may employ these facilities to advantage. Ultimately, the author builds a picture of the life of a process in coexistence with other processes, and suggests ways to model or construct subsystems that are far more complex than could be implemented using predecessor computer facilities.
This volume is intended for the moderately well informed computer user accustomed to predecessor systems and familiar with some of the Multics overview literature. While not intended as a definitive work on this living, ever-changing system, the book nevertheless reflects Multics as it has been first implemented, and should reveal its flavor, structure and power for some time to come.
This paper discusses the general communications and input/output switching problems in a large-scale multiplexed computing system.
Magic 6 was a paged, segmented, dynamically linked operating system, inspired by Multics, for the Interdata series of minicomputers.
36 pages
This is Report 2 of a series entitled Implementation and Evaluation of Interval Arithmetic Software. Interval arithmetic can be used to determine the precision of the arithmetic required to guarantee a given precision in the results of an algorithm. In general, whether using interval or regular arithmetic, the greater the precision the longer the run time required for a given algorithm. A 56 decimal digit version of the original MULTICS interval package was implemented on the MULTICS system. It is concluded that the use of single precision and 56 decimal digit extended precision interval arithmetic can, at times, be extremely useful. The testing showed that, when using the 56 decimal digit data type, much better bounds were obtained for the results than when using the single precision interval data type.
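To make the flavor of this concrete, here is a minimal Python sketch of interval addition and multiplication (my illustration, not the MULTICS PL/I package, and using ordinary machine floats rather than 56-digit extended precision; a real package would also round each lower bound down and each upper bound up):

    # Toy interval arithmetic: each value is a pair (lo, hi) bounding the true result.
    def interval_add(a, b):
        # [a_lo + b_lo, a_hi + b_hi]
        return (a[0] + b[0], a[1] + b[1])

    def interval_mul(a, b):
        # The four corner products bound the true product.
        p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
        return (min(p), max(p))

    x = (2, 3)    # a quantity known only to lie between 2 and 3
    y = (-1, 4)
    print(interval_add(x, y))   # (1, 7)
    print(interval_mul(x, y))   # (-3, 12)

The width of the result interval is exactly the precision guarantee the report is after: if the interval comes out too wide, the computation needs more digits.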
The UK South West Universities Computer Network (SWUCN) was implemented on a homogeneous set of computers, before the emergence of accepted standard protocols for networking. The paper outlines problems of evolving from this network to a heterogeneous one, in which standard protocols are used. A particular application of the strategy involved is described that includes the implementation of a network connection using the X.25 Recommendation on the Honeywell Multics system.
The report describes the design and evaluation of seismic classifiers for distinguishing among humans, heavy trucks, armored personnel carriers, helicopters, and C-131 aircraft. The data used to develop these classifiers consisted of many digitized seismometer responses to each of the intrusion targets and was collected by the Sensor Development Section of the Surveillance and Control Division at the West Lee Test Site. The Interactive Processing Section of the Information Sciences Division analyzed this waveform data and extracted an initial set of 48 features. The on-line pattern analysis and recognition system (OLPARS) was then used to develop several seismic classifier designs which are based on different subsets of the initial 48 features.
Source material for a written history of PL/I has been preserved and is available in dozens of cartons, each packed with memos, evaluations, language control logs, etc. A remembered history of PL/I is retrievable by listening to many people, each of whom was deeply involved in one aspect of its progress. This paper is an attempt to gather together and evaluate what I and some associates could read and recall in a few months. There is enough material left for several dissertations. The exercise is important, I think, not only because of the importance of PL/I, but because of the breadth of its subject matter. Since PL/I took as its scope of applicability virtually all of programming, the dialogues about its various parts encompass a minor history of computer science in the middle sixties. There are debates among numerical analysts about arithmetic, among language experts about syntax, name scope, block structure, etc., among systems programmers about multi-tasking, exception handling, I/O, and more.
114 pages
87 pages
This paper presents a brief history of the early development of the Unix operating system. It concentrates on the evolution of the file system, the process-control mechanism, and the idea of pipelined commands. Some attention is paid to social conditions during the development of the system.
The C programming language was devised in the early 1970s as a system implementation language for the nascent Unix operating system. Derived from the typeless language BCPL, it evolved a type structure; created on a tiny machine as a tool to improve a meager programming environment, it has become one of the dominant languages of today. This paper studies its evolution.
This paper discusses the overall architecture of Tactical Control Directives (TCD). TCDs were a system extension to the Enhanced Naval Warfare Gaming System (ENWGS). They were a forward chaining rule-based language and runtime environment that allowed users to construct and execute simulations of complex naval doctrine. They differed significantly from other rule-based environments of the time in that rules could be triggered by a combination of data conditions and real-time events.
Advances in finite element methods have led to the development of general-purpose packages. FLUX, developed by the Laboratoire d'Electrotechnique de l'Institut National Polytechnique de Grenoble, is an interactive system in which graphic facilities are combined with a convenient command language to allow a high level of conversational use. FLUX is made of three independent programs: ENTREE, a pre-processor for geometrical, physical, and finite element descriptions of the model; RESOL, the computation processor in which the equations arising from the finite elements are solved; and finally EXPLOI, the post-processor for flux plots, field visualisation, forces, and torques. FLUX is implemented under the conversational system MULTICS on the HB68 computer of the Centre Inter-universitaire de Calcul de Grenoble. It is available in France through TRANSRAC, the French computer network, and in all of Western Europe through EURONET.
Predicting the performance of a proposed automatically managed multilevel memory system requires a model of the patterns by which programs refer to the information stored in the memory. Some recent experimental measurements on the Multics virtual memory suggest that, for rough approximations, a remarkably simple program reference model will suffice. The simple model combines the effect of the information reference pattern with the effect of the automatic management algorithm to produce a ...
This paper describes system design and human engineering considerations pertinent to the processing of the character stream between a remote terminal and a general-purpose, interactive computer system. The Multics system is used to provide examples of: terminal escape conventions which permit input of a full character set from a limited terminal, single character editing for minor typing mistakes, and reformatting of input text to produce a canonical stored form. A formal description of the Multics canonical form for stored character strings appears in an appendix.
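As a rough illustration of the single-character editing described above, the following Python sketch applies an erase character and a kill character to a raw input line; the '#' and '@' conventions are assumed here for illustration, and the paper itself defines the actual Multics conventions and canonical form:

    # Toy erase/kill processing for one typed line.
    def edit_line(raw, erase='#', kill='@'):
        out = []
        for ch in raw:
            if ch == erase:      # drop the most recently typed character, if any
                if out:
                    out.pop()
            elif ch == kill:     # discard everything typed so far on this line
                out = []
            else:
                out.append(ch)
        return ''.join(out)

    print(edit_line('prin#ntt#'))   # prints "print"

Canonicalization goes further than this sketch: the stored form is made independent of how the user happened to type the line, so that two visually identical lines compare equal.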
An array of measuring tools devised to aid in the implementation of a prototype computer utility is discussed. These tools include special hardware clocks and data channels, general purpose programmed probing and recording tools, and specialized measurement facilities. Some particular measurements of interest in a system which combines demand paging with multiprogramming are described in detail. Where appropriate, insight into effectiveness (or lack thereof) of individual tools is provided.
The design of mechanisms to control the sharing of information in the Multics system is described. Five design principles help provide insight into the tradeoffs among different possible designs. The key mechanisms described include access control lists, hierarchical control of access specifications, identification and authentication of users, and primary memory protection. The paper ends with a discussion of several known weaknesses in the current protection mechanism design.
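A minimal sketch of the access-control-list check in Python (the two-component principal names and mode letters here are illustrative stand-ins, not the actual Multics interface):

    # Each segment carries an ACL: an ordered list of (principal-pattern, modes).
    # The first term matching the requesting principal decides the outcome;
    # "*" matches any name component.
    def acl_allows(acl, principal, mode):
        user = principal.split('.')
        for pattern, modes in acl:
            parts = pattern.split('.')
            if all(p == '*' or p == u for p, u in zip(parts, user)):
                return mode in modes
        return False   # no matching term: access denied

    acl = [('Jones.Multics', 'rw'), ('*.Multics', 'r')]
    print(acl_allows(acl, 'Jones.Multics', 'w'))   # True
    print(acl_allows(acl, 'Smith.Multics', 'w'))   # False: read-only term matches
    print(acl_allows(acl, 'Smith.Guest', 'r'))     # False: no term matches

Ordering matters in a scheme like this: putting the wildcard term first would mask the more specific term that follows it.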
This seminal paper collected and established many of the fundamental principles and terms used in computer security over the last three decades. In addition to the eight "Saltzer/Schroeder Design Principles" and other basic principles of information protection in section 1, it provides an overview of descriptor-based protection systems in section 2, and surveys the state of the art in section 3. Although the paper dates from 1974, most of it is still highly relevant to systems being designed today.
This tutorial paper explores the mechanics of protecting computer-stored information from unauthorized use or modification. It concentrates on those architectural structures--whether hardware or software--that are necessary to support information protection. The paper develops in three main sections. Section I describes desired functions, design principles, and examples of elementary protection and authentication mechanisms. Any reader familiar with computers should find the first section to be reasonably accessible. Section II requires some familiarity with descriptor-based computer architecture. It examines in depth the principles of modern protection architectures and the relation between capability systems and access control list systems, and ends with a brief analysis of protected subsystems and protected objects. The reader who is dismayed by either the prerequisites or the level of detail in the second section may wish to skip to Section III, which reviews the state of the art and current research projects and provides suggestions for further reading.
This paper provides an introspective retrospective on the history and development of the United States Department of Defense Trusted Computer System Evaluation Criteria (TCSEC). Known to many as the Orange Book, the TCSEC contained a distillation of what many researchers considered to be the soundest proven principles and practices for achieving graded degrees of sensitive information protection on multiuser computing systems. While its seven stated evaluation classes were explicitly directed to standalone computer systems, many of its authors contended that its principles would stand as adequate guidance for the design, implementation, assurance, evaluation and certification of other classes of computing applications including database management systems and networks. The account is a personal reminiscence of the author, and concludes with a subjective assessment of the TCSEC's validity in the face of its successor evaluation criteria.
This paper describes a drum space allocation and accessing strategy called "folding", whereby effective drum storage capacity can be traded off for reduced drum page fetch time. A model for the "folded drum" is developed and an expression is derived for the mean page fetch time of the drum as a function of the degree of folding. In a hypothetical three-level memory system of primary (directly addressable), drum, and tertiary (usually disk) memories, the tradeoffs among drum storage capacity, drum page fetch time, and page fetch traffic to tertiary memory are explored. An expression is derived for the mean page fetch time of the combined drum-tertiary memory system as a function of the degree of folding. Measurements of the MULTICS three-level memory system are presented as examples of improving multi-level memory performance through drum folding. A methodology is suggested for choosing the degree of folding most appropriate to a particular memory configuration.
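A crude first-order model (my simplification for illustration, not the expression derived in the paper) shows the shape of the tradeoff. If the drum revolves in time $R$ and folding degree $n$ gives each page $n$ uniformly spaced candidate positions, the expected rotational latency to the nearest position falls as $n$ grows while effective capacity shrinks:

    \[
      T_{\text{fetch}}(n) \approx \frac{R}{2n} + t_{\text{xfer}},
      \qquad
      C_{\text{eff}}(n) = \frac{C}{n}
    \]

where $t_{\text{xfer}}$ is the page transfer time and $C$ the unfolded capacity. Raising $n$ cuts mean drum fetch time but pushes more page traffic to tertiary memory as fewer pages fit on the drum, which is exactly the system-wide balance the paper's methodology addresses.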
The military has a heavy responsibility for protection of information in its shared computer systems. The military must insure the security of its computer systems before they are put into operational use. That is, the security must be "certified", since once military information is lost it is irretrievable and there are no legal remedies for redress. Most contemporary shared computer systems are not secure because security was not a mandatory requirement of the initial hardware and software design. The military has reasonably effective physical, communication, and personnel security, so that the nub of our computer security problem is the information access controls in the operating system and supporting hardware. We primarily need an effective means for enforcing very simple protection relationships, (e.g., user clearance level must be greater than or equal to the classification level of accessed information); however, we do not require solutions to some of the more complex protection problems such as mutually suspicious processes. Based on the work of people like Butler Lampson we have espoused three design principles as a basis for adequate security controls:
These three principles are central to the understanding of the deficiencies of present systems and provide a basis for critical examination of protection mechanisms and a method for insuring a system is secure. It is our firm belief that by applying these principles we can have secure shared systems in the next few years.
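The "very simple protection relationship" cited above amounts to a comparison of security levels. A minimal Python sketch of such a mandatory check (the levels and category sets are chosen purely for illustration):

    # A user may read an object only if the user's clearance dominates the
    # object's classification: a higher-or-equal level AND a superset of the
    # object's category set. Values are illustrative.
    LEVELS = {'UNCLASSIFIED': 0, 'CONFIDENTIAL': 1, 'SECRET': 2, 'TOP SECRET': 3}

    def may_read(clearance, classification):
        c_level, c_cats = clearance
        o_level, o_cats = classification
        return LEVELS[c_level] >= LEVELS[o_level] and c_cats >= o_cats

    user = ('SECRET', {'NATO'})
    print(may_read(user, ('CONFIDENTIAL', {'NATO'})))      # True
    print(may_read(user, ('TOP SECRET', set())))           # False: level too low
    print(may_read(user, ('SECRET', {'NATO', 'CRYPTO'})))  # False: missing category

The appeal of the kernel approach is that a check this small can be isolated, invoked on every access, and verified--something that cannot be established after the fact for a full-sized operating system.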
The Department of Defense has recently published Trusted Computer System Evaluation Criteria that provide the basis for evaluating the effectiveness of security controls built into computer systems. This paper summarizes basic security requirements and the technical criteria that are used to classify systems into eight hierarchical classes of enhanced security protection. These criteria are used in specifying security requirements during acquisition, guiding the design and development of trusted systems and evaluating systems used to process sensitive information.
The state of the science of information security is astonishingly rich with solutions and tools to incrementally and selectively solve hard problems. In contrast, the state of the actual application of science, and the general knowledge and understanding of existing science, is lamentably poor. Still we face a dramatically growing dependence on information technology, e.g., the Internet, that attracts a steadily emerging threat of well-planned, coordinated hostile attacks. A series of hard-won scientific advances gives us the ability to field systems having verifiable protection, and an understanding of how to powerfully leverage verifiable protection to meet pressing system security needs. Yet, we as a community lack the discipline, tenacity and will to do the hard work to effectively deploy such systems. Instead, we pursue pseudoscience and flying pigs. In summary, the state of science in computer and network security is strong, but it suffers unconscionable neglect.
In the early days of computers, security was easily provided by physical isolation of machines dedicated to security domains. Today's systems need high-assurance controlled sharing of resources, code, and data across domains in order to build practical systems. Current approaches to cyber security are more focused on saving money or developing elegant technical solutions than on working and protecting lives and property. They largely lack the scientific or engineering rigor needed for a trustworthy system to defend the security of networked computers in three dimensions at the same time: mandatory access control (MAC) policy, protection against subversion, and verifiability--what I call a defense triad. Fifty years ago the U.S. military recognized subversion as the most serious threat to security. Solutions such as cleared developers and technical development processes were neither scalable nor sustainable for advancing computer technology and growing threats. In a 1972 workshop, I proposed "a compact security 'kernel' of the operating system and supporting hardware--such that an antagonist could provide the remainder of the system without compromising the protection provided." I concluded: "We are confident that from the standpoint of technology there is a good chance for secure shared systems in the next few years. However, from a practical standpoint the security problem will remain as long as manufacturers remain committed to current system architectures, produced without a firm requirement for security. As long as there is support for ad hoc fixes and security packages for these inadequate designs, and as long as the illusory results of penetration teams are accepted as a demonstration of computer system security, proper security will not be a reality."
Protection of computations and information is an important aspect of a computer utility. In a system which uses segmentation as a memory addressing scheme, protection can be achieved in part by associating concentric rings of decreasing access privilege with a computation. This paper describes hardware processor mechanisms for implementing these rings of protection. The mechanisms allow cross-ring calls and subsequent returns to occur without trapping to the supervisor. Automatic hardware ...
This paper describes a research project to engineer a security kernel for Multics, a general-purpose, remotely accessed, multiuser computer system. The goals are to identify the minimum mechanism that must be correct to guarantee computer enforcement of desired constraints on information access, to simplify the structure of that minimum mechanism to make verification of correctness by auditing possible, and to demonstrate by test implementation that the security kernel so developed is capable of supporting the functionality of Multics completely and efficiently. The paper presents the overall viewpoint and plan for the project and discusses initial strategies being employed to define and structure the security kernel.
The Multiplexed Information and Computing Service (Multics) of Project MAC at M.I.T. runs on a General Electric 645 computer system. The processors of this hardware system contain logic for both paging and segmentation of addressable memory. They directly accept two-part addresses of the form (segment number, word number) which they translate into absolute memory addresses through a series of indexed table lookups. To speed this address translation each processor contains a small, fast associative memory which remembers the most recently used address translation table entries. This paper reports the results of performance measurements on this associative memory. The measurements were made by attaching an electronic counter directly to a processor while Multics was in operation, and were taken for several associative memory sizes. The measurements show that for the observed load 16 associative registers are enough.
We describe a plan to create an auditable version of Multics. The engineering experiments of that plan are now complete. Type extension as a design discipline has been demonstrated feasible, even for the internal workings of an operating system, where many subtle intermodule dependencies were discovered and controlled. Insight was gained into several tradeoffs between kernel complexity and user semantics. The performance and size effects of this work are encouraging. We conclude that ...
Describes MIDAS (Multics Intrusion Detection and Alerting System).
A model of paging behavior of programs under multiprogramming and a model of dual processor multi-memory processing system with virtual memory are developed. Combining these two models, it is possible to evaluate the throughput of multiprogrammed virtual-memory computer systems realistically. Numerical results obtained by these models are then compared with the measurement data of the Multics system of M.I.T. Finally, the effect of multiprogramming and sharing upon a system's throughput is numerically evaluated.
The World Wide Military Command and Control System (WWMCCS) is a composite of military command facilities, communications, warning systems, and computers located throughout the world to support military command and control activities. A followup review was conducted to determine whether the multilevel computer security requirements of WWMCCS were being properly provided for by the Department of Defense (DOD) and if Air Force efforts to solve this problem had been properly considered by DOD. At the time of the review, WWMCCS officials had not endorsed or supported Air Force efforts on multilevel computer security even though the Air Force had demonstrated a potential for resolving the shortcomings of WWMCCS software. However, the Air Force terminated its efforts to develop multilevel computer security because of insufficient financing. The Departments of the Army and Navy also have a need for multilevel security in their computerized systems and had been waiting for the developed capability by the Air Force. The apparent need for a multilevel security system and the lack of a concentrated effort to meet it, as well as cancellation of the Air Force program which showed promise of meeting this need, resulted from a lack of centralized responsibility and authority for development of a multilevel system. An office within the Office of the Secretary of Defense should be given budget authority and responsibility for: control of all computer security research and development in DOD; review and approval of computer security requirements for all three services; review and approval of all computer security specifications, methodologies, and procurements; and review and approval of all long-range plans for WWMCCS and the services.
It seems fitting to try to answer the question Boebert posed in his talk: What can we learn from the past? Why did Multics fail?
A fast software block encryption algorithm with a 72-bit key was written by (then) Major Roger R. Schell (United States Air Force) in April 1973 and released as part of the source code for the Multics operating system. The design of the Multics encipher_ algorithm includes features such as variable data-dependent rotations that were not published until the 1990s--20 years after the Multics cipher. This article describes the history and details of the Multics encipher_ algorithm and how it was used for key generation, file encryption, and password hashing. A cryptographic analysis of the algorithm has not been performed, although similarities are noted with algorithms such as XTEA, SEAL, and RC5.
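The hallmark feature is easy to illustrate. The toy Python round below shows what a variable, data-dependent rotation looks like, in the general style of the ciphers the article compares it with; it is emphatically not the encipher_ algorithm itself, whose actual structure and key schedule are given in the article:

    # Toy round with a data-dependent rotation on 32-bit words.
    # Illustrative only -- NOT the Multics encipher_ algorithm.
    MASK = 0xFFFFFFFF

    def rotl32(x, n):
        n &= 31                                    # rotation count taken mod 32
        return ((x << n) | (x >> (32 - n))) & MASK

    def toy_round(a, b, subkey):
        # The rotation amount depends on the data word b, not on a constant,
        # so the rotation pattern differs from block to block.
        a = (rotl32(a ^ b, b) + subkey) & MASK
        return a, b

    a, b = toy_round(0x12345678, 0x9ABCDEF0, 0x0F0F0F0F)
    print(hex(a), hex(b))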
Unix has a reputation as an operating system that is difficult to secure. This reputation is largely unfounded. Instead, the blame lies partially with the traditional use of Unix and partially with the poor security consciousness of its users. Unix's reputation as a nonsecure operating system comes not from design flaws but from practice. For its first 15 years, Unix was used primarily in academic and computer-industry environments --- two places where computer security has not been a priority until recently. Users in these environments often configured their systems with lax security, and even developed philosophies that viewed security as something to avoid. Because they cater to this community (and hire from it), many Unix vendors have been slow to incorporate stringent security mechanisms into their systems. This paper describes how the history and development of Unix can be viewed as the source of many serious problems. Some suggestions are made of approaches to help increase the security of your system, and of the Unix community.
Essential to any multi-process computer system is some mechanism to enable coexisting processes to communicate with one another. The basic inter-process communication (IPC) mechanism is the exchange of messages among independent processes in a commonly accessible data base and in accordance with some pre-arranged convention. By introducing several system-wide conventions for initiating communication, and by utilizing the Traffic Controller, it is possible to expand the basic IPC mechanism into a general-purpose IPC facility. The Multics IPC facility is an extension of the central supervisor which assumes the burden of managing the shared data base and of respecting the IPC conventions, thus providing the programmer with a simple, easy-to-use interface.
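In outline, such a facility is a set of per-process message queues in a shared data base plus an agreed convention for posting and collecting messages. A minimal single-address-space Python sketch (the names are mine, not the Multics IPC interface, and real use would require locking and a wakeup path through the Traffic Controller):

    from collections import deque

    # The commonly accessible data base: one message queue per process name.
    mailboxes = {}

    def send(receiver, message):
        # Post a message according to the pre-arranged convention.
        mailboxes.setdefault(receiver, deque()).append(message)

    def receive(receiver):
        # Collect the oldest pending message, or None if nothing is pending.
        queue = mailboxes.get(receiver)
        return queue.popleft() if queue else None

    send('process_b', 'wakeup: output complete')
    print(receive('process_b'))   # 'wakeup: output complete'
    print(receive('process_b'))   # None -- queue is empty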
Long considered an afterthought, software maintenance is easiest and most effective when built into a system from the ground up.
This paper describes the Janus data management and analysis system which has been designed at the Cambridge Project. A prototype of Janus is currently running on the Multics time-sharing system at M.I.T. The data model for the design of Janus is very general and should be usable as a model for data handling in general, as well as for Janus in particular. The Janus command language is an English-like language based on procedural functions - such as define, display, and delete - which act on logical objects from the data model, such as datasets, attributes and entities. For example, delete-attribute, define-attribute and define-dataset are all commands. The implementation of Janus is interesting for a number of reasons: it runs on the Multics system which has segmented and paged memory; it is based almost entirely on datasets (tables), which describe each other as well as themselves; and it is organized in a functionally modular way that is often talked about, but less often done.
The underlying objective of the Rome Air Development Center Associative Processor (RADCAP) Project is to investigate solutions to data processing problems which strain conventional approaches due to high data rates and heavy processing requirements. One group of data processing functions, those inherent in the USAF Airborne Warning and Control System (AWACS, now called the E-3A), have been chosen as being representative of this class of problems. This report describes the results of a five-year project which involved the implementation of the AWACS functions on the RADCAP testbed system which consists of a STARAN S-1000P associative processor interfaced to a Honeywell Information Systems 645-MULTICS computer (later upgraded to a HIS 6180). Based on these results, the key characteristics of an associative processor to handle this type of problem are identified and some general conclusions as to the applicability of associative/parallel processing to real-world, real-time processing problems are drawn. The report also makes some general statements concerning the future of associative/parallel processing.
Air Force Systems Command terminated the effort which this document describes before the effort reached its logical conclusion. This report is incomplete but was published in the interest of capturing and disseminating the computer security technology that was available at the time of the termination.
In this paper, the functional capabilities and economic features of the Relational Data Management System (RDMS) are discussed. RDMS is a generalized on-line data management system written in PL/I for the Multics operating system. The basic concepts of RDMS are introduced and the similarities between the conventional file concept and the relation concept are discussed. A data-base is shown to be a set of relations. By generalizing the concept of field to be a property of the data-base, and by labeling relations with the names of their columns (fields), relations of a data-base may be implicitly linked by virtue of having a common column or field name (the dataclass name). On-line commands for operations on two such relations which yield a third result relation are illustrated. Other facilities of RDMS, such as computational, report-generation, and query-report packages are discussed. In RDMS, the relation concept is implemented as a matrix of reference numbers which refer to character string datums which are stored elsewhere in distinct dataclass files. In addition to significant storage savings, this allows a single representation-independent logical interface to the storage and access of character string data. RDMS was developed from graduate work done at M.I.T. by L. A. Kraning and A. I. Fillat in 1970 and is now being used by the administrative departments at M.I.T.
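The representation described in the last sentences is essentially what is now called dictionary encoding. A small Python sketch under that reading (class and variable names invented for illustration):

    # Each dataclass file stores every distinct character string exactly once;
    # relations are matrices of small integer reference numbers into it.
    class Dataclass:
        def __init__(self):
            self.strings = []     # distinct strings, in order of first use
            self.index = {}       # string -> reference number

        def ref(self, s):
            if s not in self.index:
                self.index[s] = len(self.strings)
                self.strings.append(s)
            return self.index[s]

    city = Dataclass()
    # A one-column relation stored as reference numbers rather than strings:
    relation = [[city.ref(s)] for s in ('Boston', 'Grenoble', 'Boston')]
    print(relation)                        # [[0], [1], [0]]
    print(city.strings[relation[2][0]])    # 'Boston' -- stored only once

Because two relations that share a dataclass share the same reference numbers, matching values across relations reduces to comparing small integers, which is consistent with the storage and access savings the paper describes.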
This report is part of a series that deals with a Computer-Aided Design and Specification Analysis Tool (CADSAT). The purpose of the tool is to describe the requirements for information processing systems and to record such descriptions in machine-processable form. The major components of CADSAT are the User Requirements Language (URL) and the User Requirements Analyzer (URA) which can operate in an interactive computer environment. This report describes how the formal URL may be used to define systems. It explains the language statements available, their use and application on a Honeywell 6180 Multics Computer.
This report is part of a series that deals with a Computer-Aided Design and Specification Analysis Tool (CADSAT). Its purpose is to describe the requirements for information processing systems and to record such descriptions in machine-processable form. The major components of CADSAT are the User Requirements Language (URL) and the User Requirement Analyzer (URA) which can operate in an interactive computer environment. In parts I and II, this report describes how the formal URL may be used to define systems. It explains the language statements available, their use and application on a Honeywell 6180 Multics Computer. This manual describes the User Requirements Language (URL) to be used with Version 3.2 of the User Requirements Analyzer (URA). Part I gives a detailed description of the URL statements available and their use. Part II is a reference manual which gives the proper syntax for each statement.
This Directive establishes the DoD Computer Security Evaluation Center (CSEC), provides policy, and assigns responsibilities for the technical evaluation of computer system and network security, and related technical research.
Reprinted in IEEE Tutorial on Software Maintenance, 1981. Features of the Multics system programming process lead to high programmer productivity with a small programming staff and a finished system with high software reliability. Other workers' predictions of increasing difficulty of system maintenance with time have not been observed; reasons for this are suggested.
My colleague Noel Morris and I implemented both an electronic mail command and a text messaging facility for the Massachusetts Institute of Technology's Compatible Time-Sharing System (CTSS) in 1965.
Rome Air Development Center currently operates two R and D computer facilities: an HIS GCOS system and an HIS Multics system. Another Air Force site also operates both a GCOS and a Multics installation. In both cases, the GCOS system has preceded the Multics system by several years. There is thus a large body of GCOS user applications and data files. Many of these users desire to transfer these programs, applications, and data files from the GCOS environment to the Multics environment in order to take advantage of the unique design features of the Multics system. To facilitate this transfer, and to make the process as simple and easy to use as possible, Rome Air Development Center contracted with Honeywell Information Systems to specify, design, and implement procedures and software to provide an integrated capability for the transfer of information, programs, and procedures from the GCOS to the Multics environment. This technical report describes the activities conducted in the performance of this contract.
The effort described in this report consisted of enhancements to the GCOS/Multics File Transfer Facility which was developed under contract. The facility provides for the transfer of data files from the GCOS environment to the Multics environment. In particular, data base and file backup facilities, performance monitoring instrumentation, and Inner Ring Program/Data Protection have been added.
This report describes the H6180 Virtual Machine Monitor Performance Analysis. Included as part of this report is a description of the Virtual Machine Monitor. This report also includes an approach for enhancing the baseline VMM functionality by use of a service machine to control peripheral sharing. The actual experimentation performed in this effort identifies the feasibility of a VMM in a Programming Environment and the performance tradeoffs required for its optimized utilization.
This paper is a preliminary report on a system which has not yet been implemented. Of necessity, it therefore reports on status and objectives rather than on performance.
The design of ICSSM, a non-real-time computer-aided simulation and analysis tool for communications systems, is presented. ICSSM is capable of supporting modeling, simulation, and analysis of any system representable in terms of a network of multiport functional blocks. Its applicability is limited only by the modeler's ingenuity in decomposing the system into functional blocks and representing these functional blocks algorithmically. ICSSM has been constructed modularly, consisting of five subsystems to facilitate the tasks of formulating the model, exercising the model, evaluating and showing the simulation results, and storing and maintaining a library of modeling elements, analysis, and utility subroutines. It is written exclusively in the ANSI Standard Fortran IV language, and is now operational on a Honeywell DPS 7/80 M computer under the MULTICS Operating System. A description of a recent simulation using ICSSM and some generic modules of general interest developed as a result of the modeling work are also presented.
The IEEE Computer Society History Committee prepared a document in June 2011 in honor of the 50th anniversary of CTSS, edited by Dave Walden and Tom Van Vleck. It contains an extensive bibliography and interviews with Corby, Marge Daggett, Bob Daley, Peter Denning, David Alan Grier, Dick Mills, Roger Roach, Allan Scherr, and Tom Van Vleck.
The history of time-sharing and networks and ARPA's part in supporting the activities. It has one or two chapters which focus on CTSS and Multics. It also includes the saga of PARC.
Certifying an entire operating system to be reliable is too large a task to be practicable. Instead, we are designing a Security Kernel which will provide information security. The kernel's job is to monitor information flow in order to prevent compromise of security. Sound design is encouraged by using a technique called Structured Specification, in which successively more detailed models of the Security Kernel are developed. The initial model, M0, is an abstract description which formalizes governmental security applied to computer systems. Subsequent levels of modeling provide increasingly more detail, and gradually the models begin to resemble a particular system (Multics in this case). The second model, M1, defines a tree-structured file system and an interagent communication system, while M2 adds details concerning segmentation in a dynamic environment. It is intended that the final level of modeling will specify the primitive commands for the kernel of a Multics-like system and will enumerate precisely those assertions which must be proved about the implementation in order to establish correctness.
Information Systems, MIT's campus-wide computing service organization, recently reorganized and strengthened its resources. Out of this recent effort came the decision to explore several ways of reporting on the expanded range of systems and services we offer. One service that central computing facilities must provide is timely notice of changes to the supported systems. This paper presents the design and implementation of Information Systems' "On-Line News System", which keeps users updated about changes in the wide variety of services offered by Information Systems.
The results of a 1973 security study of the Multics Computer System are presented detailing requirements for a new access control mechanism that would allow two levels of classified data to be used simultaneously on a single Multics system. The access control policy was derived from the Department of Defense Information Security Program. The design decisions presented were the basis for subsequent security enhancements to the Multics system.
One of the popular misconceptions concerning PL/I is that programs written in PL/I are necessarily inefficient and hard to debug. Several years' experience with the Multics PL/I compiler running on the Honeywell 645 has shown that in spite of the apparent complexity of the PL/I language, PL/I programs are easily debugged in the Multics environment, even by novice users who are newcomers to PL/I and are unfamiliar with the Honeywell 645. In most cases the user can debug his program symbolically without having to refer to a listing of the generated instructions or add debugging output statements to the program. This is due to a number of factors: the run-time environment provided by the system, the implementation of PL/I, and the availability of a variety of powerful debugging facilities.
One of the main goals of the Cambridge Project is a Consistent System of programs, data, and models for use in the behavioral sciences. A framework for the System has been constructed on the Multics time-sharing system at M.I.T., and a collection of programs has begun to accumulate within it. This session will be devoted to that framework and to three examples of subsystems that are being fitted into it. They will be described briefly, and the reasons why they are expected to be more useful when surrounded by the rest of the Consistent System will be discussed.
The Cambridge Project is a cooperative effort by a number of scientists at M.I.T. and Harvard; its purpose is to make the digital computer more useful and usable by scientists in the basic and applied behavioral sciences, and in other sciences that have similar computing problems. The most notable single achievement of the half year covered in this report was the transfer of the entire Consistent System from the old Multics computer, which was a Honeywell 645, to a new Multics computer, a Honeywell 6180, and the subsequent transfer to another 6180 operated by the Air Force Data Services Center.
Software; Operating systems (Computers); Cornell university; MIT; Whirlwind computer; Bell Telephone Laboratories; Telecommunications; timesharing; Computer science; UNIX
Software; Operating systems (Computers); Multics; timesharing; UNIX; Bell Telephone Laboratories; Word processing; Computer science; Plan 9; Dartmouth College
Dan Bricklin and Bob Frankston discuss the creation of VisiCalc, the pioneering spreadsheet application. Bricklin and Frankston begin by discussing their educational backgrounds and experiences in computing, especially with MIT's Multics system. Bricklin then worked for DEC on typesetting and word-processing computers and, after a short time with a small start-up company, went to Harvard Business School. After MIT Frankston worked for White Weld and Interactive Data. The interview examines many of the technical, design, and programming choices in creating VisiCalc as well as interactions with Dan Fylstra and several business advisors. Bricklin comments on entries from his dated notebooks about these interactions. The interview reviews the incorporation of Software Arts in 1979, then describes early marketing of VisiCalc and the value of product evangelizing.
All systems will fail. The question is not whether some mishap will happen, but rather what to do when it does occur. In this Turing Award address, Corbató examines the problems associated with the development of ambitious or complex systems and identifies why they always fail. Sources of complexity that contribute to this failure include the number of personnel required, the levels of management, the lack of willingness to report bad news, and the inability of any one person to understand the complete system. He offers solutions to each of these problems, including simplicity in design, use of metaphors, constrained languages for design, anticipation of errors, design for modification, cross education of team members, and learning from past mistakes. Frenkel's interview, conducted after Corbató's Turing Award lecture, complements it. The questions and answers provide a comprehensive overview of the development of the time-sharing systems CTSS and Multics, and a good overview of some of the individuals involved in these efforts. One of the most interesting parts of this interview is the support (or lack of interest) of some of the major computer manufacturers in the 1960s, including GE, IBM, and DEC. The support of Bell Labs for Multics and its eventual disengagement are examined. The relationship between UNIX and Multics is discussed in some detail, as are the problems in the development of these systems. The discussion concludes with an examination of the transition from mainframes to workstations and PCs. (Thomas C. Richards)
Bob Freiburghouse discusses his extensive career in computer science in this oral history. His interview begins with a description of his early life in Wisconsin and his initial interest in history and political science. He discusses his first step into computer science through the US Air Force as a punch card machine operator, displaying real aptitude in programming. Freiburghouse follows with the story of his career, starting with the Air Force Security Service and moving on to Honeywell and General Electric. He then started his own company, Translation Systems, before transitioning to Stratus Computer. He also describes teaching AP Computer Science in the Caribbean, and his ideas of using a communications mechanism to streamline the healthcare system and make healthcare delivery more efficient.
John William "Bill" Poduska was interested in electronics from an early age. He received his bachelor's, master's, and doctorate in Electrical Engineering and Computer Science from MIT in seven years. He taught at MIT and served in the Army Signal Corps. He joined Project MAC and later Honeywell Research, where he became director of its Cambridge Research Center. He left the Center to co-found Prime Computer in 1972. In 1979 he left Prime to found Apollo Computer, an early workstation manufacturer. In 1985 Poduska founded Stellar Computer Inc., a graphic supercomputer company, which in 1989 merged with Ardent Computer Corp to become Stardent Computer Inc.
See Greenblatt's description of the AI Lab view of Multics about page 39.
On the day following the Celebration of the 25th anniversary of Project MAC held in Cambridge on October 16 and 17, 1988, two small groups of participants in the developments of CTSS and Project MAC met to exchange recollections about their activities. These interviews are separated into two parts, concentrating on each of the two developmental stages of time-sharing, although it was impossible to strictly maintain the separation since the discussions naturally overlapped the time periods. By choice, the interviewers guided the discussion to concentrate on the more personal and background aspects of this history, since the technological history has been well documented in the open literature.
Interviews about the development of CTSS, electronic mail, and Multics with Van Vleck, Corbató, Fano, and Saltzer.
Corbató discusses computer science research, especially time-sharing, at the Massachusetts Institute of Technology (MIT). Topics in the first session include: Phil Morse and the establishment of the Computation Center, Corbató's management of the Computation Center, the development of the WHIRLWIND computer, John McCarthy and research on time-sharing, cooperation between International Business Machines (IBM) and MIT, and J. C. R. Licklider and the development of Project MAC. Topics in the second session include: time-sharing, the development of MULTICS by the General Electric (GE) Computer Division, IBM's reaction to MIT working with GE, the development of CTSS, the development of UNIX in cooperation with Bell Labs, interaction with the Information Processing Techniques Office of the Defense Advanced Research Projects Agency, interaction with Honeywell after they purchased GE's Computer Division, and the transformation of Project MAC into the Laboratory for Computer Science.
Fano discusses his move to computer science from information theory and his interaction with the Advanced Research Projects Agency (ARPA). Topics include: computing research at the Massachusetts Institute of Technology (MIT); the work of J. C. R. Licklider at the Information Processing Techniques Office of ARPA; time-sharing and computer networking research; Project MAC; computer science education; CTSS development; System Development Corporation (SDC); the development of ARPANET; and a comparison of ARPA, National Science Foundation, and Office of Naval Research computer science funding.
Dennis describes his educational background and work in time-sharing computer systems at the Massachusetts Institute of Technology (MIT). The interview focuses on time-sharing. Dennis discusses the TX0 computer at MIT, the work of John McCarthy on time-sharing, and the influence of the Information Processing Techniques Office of the Advanced Research Projects Agency (later the Defense Advanced Research Projects Agency) on the development of time-sharing. Dennis also recalls the competition between various firms, including Digital Equipment Corporation, General Electric, Burroughs, and International Business Machines, to manufacture time-sharing systems. He describes the development of MULTICS at General Electric.
Louis Pouzin's contributions to the development of computer communications have been significant both as a research scientist and as a leader in the areas of network architecture and international protocol development. His achievements often placed him at odds with government and business entities vested in the technology of the day, but some of his key ideas have survived to become founding principles of today's Internet.
Pouzin attended École Polytechnique and worked for the French computer company Bull, managing a team of software engineers, before traveling to the US to work at MIT on its first large-scale time-sharing system (CTSS). For this system, he wrote a program for simplifying commands (RUNCOM) that he termed a 'shell' program, which became a forerunner of an entire class of command language programs. After returning to France, Pouzin was chosen to lead a government-sponsored networking project to help promote the country's computer industry. Before starting the project, Pouzin traveled back to the US to meet with key developers of Arpanet, who shared with him lessons they had learned building their newly operational network. Pouzin returned to France with ideas for improvements. Based on theoretical simulation studies by Donald Davies at NPL, and on his own predilection for simplicity, Pouzin designed the CYCLADES network, as it became known, without the need for IMP hardware over a subnet. He used a new idea in packet switching, a packet he dubbed a 'Datagram', which could be sent over PTT-provided telephone circuits.
While the French government stopped funding the CYCLADES program in the late '70s, and the network went offline in 1981, the concepts Pouzin had implemented were heavily influential in the development of future Internet architecture, especially the TCP/IP protocols. While their decision to abandon CYCLADES left the French on the sidelines in the future development of the Internet, Pouzin's vision of a simplified method for connecting diversified networks together contributed greatly to its future design.
I was able to meet with Pouzin at a busy restaurant in Ft. Lauderdale, FL. A gracious and polite man, Pouzin talked freely about his work and past. I enjoyed our brief time together and I hope I have asked questions that help clarify his considerable contributions.
MIT; ARPANET; Ethernet; Metcalfe, Robert; Xerox PARC; token ring; Farber, David; UC Irvine; Proteon
ARPANET; Internet Working Group; ICCB; TCP/IP; DSP; Proteon, Inc.
Roger R. Schell is an authority on high-assurance computing who spent more than 20 years in the U.S. Air Force before working in private industry. As one of the lead authors of the U.S. Department of Defense Trusted Computer System Evaluation Criteria (TCSEC), known as the Orange Book, Schell has first-hand knowledge of the standards required for classified computer systems. Published in 1983 by the National Computer Security Center, where he served as deputy director, the TCSEC was replaced in 2005 by an international standard, the Common Criteria for Information Technology Security Evaluation. The co-founder and vice president of Gemini Computers Inc., Schell led the development of the Gemini Multi-processing Secure Operating System, known as GEMSOS. In 2001, he founded Aesec Corp., which acquired Gemini Computers and its security kernel in 2003. He also served as the corporate security architect at Novell. Marcus Ranum spoke with Schell, now a professor of engineering at the University of Southern California Viterbi School of Engineering, about the security practices of the U.S. government, the National Security Agency's A1-class systems--Gemini was one--and the development of a secure operating system. Is it even feasible at this point?
An interview with Corby by Dave Walden, IEEE History Committee
Fernando Corbató reviews his early educational and naval experiences in the Eddy program during World War II. Corbató attended Cal Tech and MIT, where he received his PhD under the tutelage of Professor Phil Morse and worked with Whirlwind. A detailed exploration of Corbató's time-sharing systems projects including the Compatible Time-Sharing System (CTSS), Project MAC, and Multics completes the oral history.
Dr. Roger R. Schell, a retired U.S. Air Force Colonel and current president of AEsec Corporation, is one of the foremost contributors to and authorities on "high assurance" computer security. In this oral history he discusses his formulation of the secure kernel and reference monitor concepts (in the early 1970s), his work that led to security enhancements to Honeywell-Multics (mid-1970s), his role as deputy director of the National Computer Security Center (including leadership on TCSEC or "The Orange Book" in the early to mid-1980s), and commercial (high assurance) computer security enterprises he's led since retiring from the Air Force.
David Elliott Bell is a mathematician and computer security pioneer who co-developed the highly influential Bell-LaPadula security model. This interview discusses the context of his pivotal computer security work at MITRE Corporation, and his later contributions at the National Security Agency and Trusted Information Systems (including his leadership on TIS's Trusted Xenix B2-rated system).
Thomas Van Vleck is a time-sharing and computer security pioneer. He worked with MIT's Compatible Time-Sharing System (CTSS) and MULTICS as an MIT student before helping to design enhancements (including security enhancements) to the MULTICS system, first as a technical staff member at MIT and later as a technical staff member and manager on Honeywell-MULTICS at Honeywell. The interview discusses the security issues/risks on CTSS that resulted in modest changes (password protection) to CTSS and influenced the far more extensive security design elements of MULTICS. His long association with MULTICS in both the MIT and Honeywell settings provides unique perspective on the evolution of MULTICS security over the long term. He also briefly discusses his post-Honeywell career working on computer security as a manager at several other firms.
Steven B. Lipner is a computer security pioneer with more than 40 years of experience as a researcher, development manager, and general manager in IT security. He helped form and served on the Anderson Panel for the Air Force in the early 1970s (as MITRE's representative), oversaw pathbreaking high-assurance computer security mathematical modeling work at MITRE later that decade, was a leader in Digital Equipment Corporation's (DEC) effort to build an A1 (TCSEC certification) system in the 1980s, and led the creation of Microsoft's Security Development Lifecycle in the 2000s. This interview focuses primarily on Lipner's involvement on the Anderson Panel, his work at MITRE, and his work at DEC.
In this interview, computer security pioneer Peter G. Neumann relates his education at Harvard University (A.B. in Math, S.M. and Ph.D. in Applied Math), including an influential two-hour meeting as an undergraduate with Albert Einstein (discussing "complexity" and other topics) that shaped his perspective and career. The vast majority of the interview addresses the many facets of his highly influential career in computer security research, including his work at Bell Labs and extensive involvement with MULTICS security, and his subsequent four-decade (and continuing) career as a research scientist at SRI International. He tells of his work and leadership on the Provably Secure Operating System (PSOS); his research and writing on risks (including moderating the ACM Risks Forum), insider misuse, and intrusion-detection systems (IDES, NIDES, EMERALD); and his current work on two DARPA-funded projects that build on key lessons of the past to design and develop secure, trustworthy computer systems. He also describes the computer security research infrastructure and how it evolved, and comments on a number of other topics, such as the major computer security conferences and the range of perspectives in the computer security research community.
In this oral history, computer security pioneer Daniel Edwards discusses his long-term career as a computer security researcher at the National Security Agency (NSA). He discusses Trojan Horse attacks, a term he introduced in the computer security field for hidden malicious code within a seemingly harmless program. He provides perspective on the evolving relationship of communications security (COMSEC) and computer security (COMPUSEC) at the NSA. Edwards became part of the NSA's National Computer Security Center, was principally involved with the development of the NCSC's/DOD's Trusted Computer System Evaluation Criteria (TCSEC), and elaborates on the processes and considerations in developing and refining this influential set of computer security standards.
This interview focuses on Peter Denning's pioneering early contributions to computer security. This includes discussion of his perspective on CTSS and Multics as a graduate student at MIT, his pioneering (with his student Scott Graham) of the critical computer security concept of a reference monitor for each information object as a young faculty member at Princeton University, and his continuing contributions to the computer security field in his first years as a faculty member at Purdue University. Because an extensive, career-spanning oral history was done with Denning as part of the ACM Oral History series (which includes his contributions as President of ACM, research on operating systems, and principles of computer science), this interview is primarily limited to Denning's early career, when computer security was one of his fundamental research areas.
This interview with computer security pioneer Marvin Schaefer discusses his roles and perspectives on computer security work at the System Development Corporation over many years (an organization he began working at in the summer of 1965), as well as his work at the National Computer Security Center in helping to create the Trusted Computer System Evaluation Criteria (TCSEC). With the latter he relates the challenges of writing the criteria, the debates over the structure and levels, and the involvement of criteria lawyers. He also summarizes his work at the company Trusted Information Systems. In addition to detailing his pivotal work in computer security, he offers insightful commentary on issues in the field such as the Bell-LaPadula Model, John McLean's System Z, and other topics.
Computer security pioneer Earl Boebert discusses his education at Stanford University before the bulk of the interview focuses on his work within the Air Force and at Honeywell. Among the topics he discusses are the Air Force Undergraduate Navigator Training System, efforts to save and market Multics (and the inherent challenges given GE's existing systems and the economics of the mainframe business), PSOS, Sidewinder, and the formation of Secure Computing Corporation. Also discussed is his role in the broader computer security research community, including service on many National Research Council committees, among them the one that produced the influential 1991 report Computers at Risk.
I also found the attached "Look Ahead" column, probably from a 1969 Datamation, but with no bibliographic info whatever.
Notwithstanding the headline, the article is actually about the dedication of a new Honeywell Multics center in Phoenix.
Opens with stories about Multics security and project ZARF.
It is not easy to make a computer system secure, but neither is it impossible. The greatest error is to ignore the problem.
Introduction to an issue describing the beginnings of time-sharing at MIT.
Yes, Multics was a market failure but not because the market had changed. It was because Honeywell (which bought out the GE computer division) worked hard not to sell it. ...Did the world pass Multics by? As noted above, Honeywell wounded it and then eventually killed it. But Unix, though weak as an implementation of Multics, has achieved great success in the marketplace.
October 2009 marked an important milestone in the history of computing. It was exactly 40 years since the first Multics computer system was used for information management at the Massachusetts Institute of Technology.
In the early 1960s, Fernando Corbató helped deploy the first known computer password.
How Multics programs access data files.
Aired on WGBH-TV Boston. The initial sequence shows the CTSS account of M1416 786 (Bob Daley) running a square root program.
Short film from 1964 taken in Prof. Fano's Project MAC office. He uses CTSS from a Model 35 Teletype.
ARPA film about the ARPANet. Starts with Corby and Lick.
Introduction by Jerry Saltzer, Co-Head of the Computer Systems research division of Project MAC. Demonstration by David Clark and Sze-Ping Kuo, graduate students of the Computer Systems research division of Project MAC.
Demonstration by David Clark and Sze-Ping Kuo, graduate students of the Computer Systems research division of Project MAC.
Multiple segments: this is the first. Includes Corbató, Fano, Morse, Teager, Fredkin, and McCarthy.
Lunchtime presentation at Stratus by Steve about Multics.
John Gintell gave a lengthy talk about the history of Multics to a combined Greater Boston ACM chapter / IEEE computer society meeting in 1989. A number of Multicians were there and there is a long comment section at the end.
Gary's experience with Multics development, training, and support.
LCS 50th Anniversary
Peter Neumann introduces Corby.
Abstract: The computing climate and facilities at MIT in the early 1950s and 1960s will be briefly described. This will be followed by a sketch of the events that led to the formation of Project MAC and the decision to embark on the Multics project.
Bio: Fernando J. Corbató, Professor Emeritus in the Department of Electrical Engineering and Computer Science at M.I.T., has achieved wide recognition for his pioneering work on the design and development of multiple-access computer systems. He was associated with the M.I.T. Computation Center from its organization in 1956 until 1966. In 1963 he was a founding member of Project MAC, the antecedent of CSAIL. An early version of the Compatible Time-Sharing System (CTSS) was first demonstrated in November 1961, at the M.I.T. Computation Center. In the fall of 1963, after further development, the system began daily operation at Project MAC.
Abstract: At a time when computers are increasingly involved in all aspects of our lives, our computer systems are too easily broken or subverted. The current state of affairs is, no doubt, unsurprising to Multicians, who are painfully aware of the design and security compromises that went into the base design of today's mainstream systems. The past 30 years have also brought vast changes in the availability and costs of computer hardware, as well as significant advances in formal methods. How do we exploit these advances to make computer systems worthy of the trust we are now placing in them? We specifically take a clean-slate approach to computer architectures and system designs based on modern costs and threats. We spend now-cheap hardware to reduce or eliminate traditional security-performance tradeoffs and to provide stronger hardware safety and security interlocks that prevent gross security and safety violations even when there are bugs in the code. We embrace well-known security principles of least and separate privileges and complete mediation of operations. Our system revisits many pioneering Multics concepts, including gates between software components with different privileges, small and verified system components, and formal information flow properties and guarantees.
Project paper: http://www.crash-safe.org
Bio: Andre DeHon received S.B., S.M., and Ph.D. degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 1990, 1993, and 1996 respectively. From 1996 to 1999, Andre co-ran the BRASS group in the Computer Science Department at the University of California at Berkeley. From 1999 to 2006, he was an Assistant Professor of Computer Science at the California Institute of Technology. In 2006 he joined the Electrical and Systems Engineering Department at the University of Pennsylvania, where he is now a Full Professor. He is broadly interested in how we physically implement computations from substrates, including VLSI and molecular electronics, up through architecture, CAD, and programming models. He places special emphasis on spatial programmable architectures (e.g. FPGAs) and interconnect design and optimization.
Multics BIO: Andre DeHon is a bastard child of the tail end of LISP Machine and Multics eras, having been a research assistant for Knight and a teaching assistant for Saltzer. As a member of MIT's Student Information Processing Board (SIPB), he was part of the group that pushed Multics access to MIT students and was logged in during the decommissioning of MIT-Multics. So, while he never contributed to Multics, he was around in time to learn that there were computer systems that predated Unix and Windows and that did have a principled way to address safety and security. He hopes the world is now ready for many of the Multics and LISPM ideas that were ahead of their time and have mostly been forgotten during the dark ages of mainstream Internet growth.
Abstract: November 2015 marks the fiftieth anniversary of the 1965 Fall Joint Computer Conference at which the description of Multics was presented to the computer community. To commemorate the event I have set myself three initiatives. They are:
1. Write a follow-on book to Organick's text which will describe the second generation of Multics hardware and software;
2. Lobby to get a technical society to sponsor a Multics fiftieth-anniversary conference centered on the pioneering role Multics has vis-a-vis today's commercial operating systems;
3. Champion the completion of a Multics VM based on the current efforts to emulate the 6180/DPS-8/M. I will describe my approaches to, and the current status of, these three initiatives.
Bio: After working on the data management and B2 certification projects at CISL, Michael moved to Stratus Computer where he designed and prototyped several PCBs in addition to writing firmware for other boards. At Banyan Systems Michael maintained their Unix System V kernel and implemented Intel APIC mediated multiprocessing on PC platforms. Most recently at EMC Michael supported tape library robotics and designed Linux drivers. Currently Michael has taken early retirement from EMC and is engaged in processor design utilizing FPGAs.
LCS 50th Anniversary
Interviewed by Marc Pachter, Director Emeritus, National Portrait Gallery, Smithsonian Institution.
Interviewed by Steven H. Webber.
I gave an hour-long talk about my history with the DPS8/M emulator at the Vintage Computer Festival Pacific Northwest 2018; VCF taped the talk and put it online. Links to slides and notes at simulator.html.
Booting MR12.6e on a simulated 6180 under Linux. The system boots up to idle in just under 2 minutes, which, from what I understand, is blazingly fast.
Stan demonstrates Multics in 26-100
Source: LCS document handed out at the Project MAC 25th reunion, updated by Jerry Saltzer 5/8/98. The Library 2000 project at MIT scanned many old MAC TRs and the images were available on a server provided by the MIT libraries.
See also the LCS on-line list of publications.
This thesis examines the various mechanisms for naming the information objects stored in a general-purpose computing utility, and isolates a basic set of naming facilities that must be protected to assure complete control over user interaction and that allow desired interactions among users to occur in a natural way. Minimizing the protected naming facilities consistent with the functional objective of controlled, but natural, user interaction contributes to defining a security kernel for a general-purpose computing utility. The security kernel is that complex of programs that must be correct if control on user interaction is to be assured. The Multics system is used as a test case, and its segment naming mechanisms are redesigned to reduce the part that must be protected as part of the supervisor. To show that this smaller protected naming facility can still support the complete functionality of Multics, a test implementation of the design is performed. The new design is shown to have a significant impact on the size and complexity of the Multics supervisor.
This report describes the Classroom Information and Computing Service (Clics), a pedagogical computer-based information system that is used as a case study in the subject "Information Systems" in the Department of Electrical Engineering at M.I.T. Clics is an abstraction of the Multiplexed Information and Computing Service (Multics) that is being implemented by Project MAC at M.I.T. As such, it is an example of a computer utility. Clics is derived from Multics by a combination of simplifying the mechanisms of Multics and removing some of its more exotic features; and embodies research into ways to simplify the mechanisms of Multics without sacrificing service objectives. This report is a specification of the hardware, control programs, and system implementation language of the Clics system, as developed to date. The system is specified in sufficient detail for students to develop a structural as well as a functional understanding of its operation and mechanisms. As the primary case study for an undergraduate subject, Clics provides specific examples of the complexities in a general purpose information system, and methods of coping with them.
In many large systems today, input/output is not performed directly by the user, but is done interpretively by the system for him, which causes additional overhead and also restricts the user to whatever algorithms the system has implemented. Many causes contribute to this involvement of the system in user input/output, including the need to enforce protection requirements, the inability to provide adequate response to control signals from devices, and the difficulty of running devices in a virtual environment, especially a virtual memory. The goal of this thesis was the creation of an input/output system which allows the user the freedom of direct access to the device, and which allows the user to build input/output control programs in a simple and understandable manner. This thesis presents a design for an input/output subsystem architecture which, in the context of a segmented, paged, time-shared computer system, allows the user direct access to input/output devices. This thesis proposes a particular architecture, to be used as an example of a class of suitable designs, with the intention that this example serve as a tool in understanding the large number of possible designs and selecting a preferable form.
contents:
See individual entries for the RFCs.
It is now clear that it is possible to create a general-purpose time-shared multiple access system on most contemporary computers. However, it is equally clear that none of the existent computers are well designed for multiple access systems. At present, good service to a few dozen simultaneous users is considered state-of-the-art. Discussions of clocks, memory protection and supervisor mode, program relocation, and common subroutines expose the reader to the difficulties encountered with contemporary machines when multiple-user, multiple-processor systems are considered.
A model for the auxiliary memory function of a segmented, multiprocessor, time-shared computer system is set up. A drum system in particular is discussed, although no loss of generality is implied by limiting the discussion to drums. Particular attention is given to the queue of requests waiting for drum use. It is shown that a shortest-access-time-first queue discipline is the most efficient, with the access time defined as the time required for the drum to be positioned, measured from the finish of service of the last request to the beginning of the data transfer for the present request. A detailed study of the shortest-access-time queue is made, giving the minimum access time probability distribution, equations for the number in the queue, and equations for the wait in the queue. Simulations were used to verify these equations; the results are discussed. Finally, a general Markov model for queues is discussed in an appendix.
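The scheduling rule in this abstract is easy to make concrete. Below is a minimal Python sketch, not taken from the thesis and with all names invented, of the shortest-access-time-first discipline: the drum surface is modeled as a unit circle, and the next request served is always the one whose start the heads will reach soonest.

    # Toy illustration of shortest-access-time-first (SATF) drum scheduling.
    # Positions and transfer lengths are fractions of one drum revolution.

    def access_time(head_pos, start):
        """Rotational delay from the current head position to a record's start."""
        return (start - head_pos) % 1.0

    def satf_schedule(head_pos, queue):
        """Serve every queued (start, length) request, always choosing the
        request with the smallest access time from the head's current position."""
        order, pending = [], list(queue)
        while pending:
            nxt = min(pending, key=lambda r: access_time(head_pos, r[0]))
            pending.remove(nxt)
            order.append(nxt)
            head_pos = (nxt[0] + nxt[1]) % 1.0  # head ends just past the record
        return order

    if __name__ == "__main__":
        requests = [(0.9, 0.05), (0.1, 0.10), (0.5, 0.05), (0.2, 0.05)]
        # With the head at 0.0 these are served in rotational order:
        print(satf_schedule(0.0, requests))  # starts 0.1, 0.2, 0.5, 0.9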
part 5 of LCS-TM-87
part 6 of LCS-TM-87
This thesis reports the design, conducting, and results of an experiment intended to measure the paging rate of a virtual memory computer system as a function of paging memory size. This experiment, conducted on the Multics computer system at MIT, a large interactive computer utility serving an academic community, sought to predict paging rates for paging memory sizes larger than the memory existing at the time. A trace of all secondary memory references for two days was accumulated, and simulation techniques applicable to "stack" type page algorithms (of which the least-recently-used discipline used by Multics is one) were applied to it. A technique for interfacing such an experiment to an operative computer utility in such a way that adequate data can be gathered reliably and without degrading system performance is described. Issues of dynamic page deletion and creation are dealt with, apparently for the first reported time. The successful performance of this experiment establishes the viability of performing this type of measurement on this type of system. The results of the experiment are given, which suggest models of demand paging behavior.
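The "stack" property mentioned here is what makes such an experiment tractable: for a stack algorithm like LRU, a single pass over the reference trace yields the miss count for every memory size simultaneously. The Python sketch below is an illustrative rendering of that idea (toy trace, invented names), not the thesis's instrumentation.

    # One pass over a reference trace computes LRU stack distances; from the
    # distance histogram, the misses for every memory size follow at once.

    from collections import Counter

    def lru_stack_distances(trace):
        """Yield each reference's LRU stack distance (None on first touch)."""
        stack = []                         # most recently used page first
        for page in trace:
            if page in stack:
                depth = stack.index(page)  # 0-based depth from the top
                stack.pop(depth)
                yield depth
            else:
                yield None                 # cold miss: first reference
            stack.insert(0, page)

    def misses_by_memory_size(trace, max_size):
        """Misses a memory of each size 1..max_size would incur under LRU."""
        counts = Counter(lru_stack_distances(trace))
        cold = counts.pop(None, 0)
        return {size: cold + sum(n for d, n in counts.items() if d >= size)
                for size in range(1, max_size + 1)}

    if __name__ == "__main__":
        trace = ["A", "B", "C", "A", "B", "D", "A", "C"]
        print(misses_by_memory_size(trace, 4))  # {1: 8, 2: 8, 3: 5, 4: 4}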
The problem of dynamic observation of the state of a time-shared computer system is investigated. The Graphical Display Monitoring System was developed as a medium for this experimental work. It is an integrated system for creating graphic displays, dynamically retrieving data from Multics Time-Sharing System supervisor data bases, and on-line viewing of this data via the graphic displays. On-line and simulated experiments were performed with various members of the Multics staff at Project MAC in an effort to determine what data is most relevant for dynamic monitoring, what display formats are most meaningful, and what sampling rates are most desirable. The particular relevance of using a graphic display as an output medium for the monitoring system is noted. As a guide to other designers, a generalized description of the principles involved in the design of this on-line, dynamic monitoring device includes special mention of those areas of particular hardware or software system dependence. Several as yet unsolved problems relating to time-sharing system monitoring, including those of security and data base protection, are discussed.
This thesis presents a design for a paging system that may be used to implement a virtual memory on a large scale, demand paged computer utility. A model for such a computer system with a multi-level, hierarchical memory system is presented. The functional requirements of a paging system for such a model are discussed, with emphasis on the parallelism inherent in the algorithms used to implement the memory management functions. A complete, multi-process design is presented for the model system. The design incorporates two system processes, each of which manages one level of the multi-level memory, being responsible for the paging system functions for that memory. These processes may execute in parallel with each other and with user processes. The multi-process design is shown to have significant advantages over conventional designs in terms of simplicity, modularity, system security, and system growth and adaptability. An actual test implementation on the Multics system was carried out to validate the proposed design.
A problem currently confronting computer scientists is to develop a method for the production of large software systems that are easy to understand and certify. The most promising methods involve decomposing a system into small modules in such a way that there are few intermodule dependencies. In contrast to previous research, this thesis focuses on the nature of the intermodule dependencies, with the goal of identifying and eliminating those that are found to be unnecessary. Using a virtual memory subsystem as a case study, the thesis describes a structure in which apparent dependencies can be eliminated. Owing to the nature of virtual memory subsystems, many higher level functions can be performed by lower level modules that exhibit minimal interaction. The structuring methods used in this thesis, inspired by the structure of the LISP world of atomic objects, depend on the observation that a subsystem can maintain a copy of the name of an object without being dependent upon the object manager. Since the case study virtual memory subsystem is similar to that of the Multics system, the results reported here should aid in the design of similar sophisticated virtual memory subsystems in the future.
(Also available as NTIS AD-A040 808/8)
This thesis develops a complete set of protocols, which utilize a block cipher, e.g., the NBS data encryption standard, for protecting interactive user-computer communication over physically unsecured channels. The use of the block cipher protects against disclosure of message contents to an intruder, and the protocols provide for the detection of message stream modification and denial of message service by an intruder. The protocols include facilities for key distribution, two-way login authentication, resynchronization following channel disruption, and expedition of high priority messages. The thesis presents designs for modules to implement the protocols, both in the terminal and in a host computer system, and discusses the results of a test implementation of the modules on Multics.
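The kind of message-stream protection described here can be sketched without reproducing the thesis's actual protocols. In the illustrative Python below (all names invented; a toy 16-byte Feistel construction stands in for a real block cipher such as the DES mentioned in the abstract), a sequence number enciphered inside each block lets the receiver detect modification, replay, reordering, or deletion.

    # Toy sketch: seal each message with its sequence number inside the
    # enciphered block; any tampering garbles the deciphered block, so the
    # sequence-number check fails and the modification is detected.

    import hashlib

    def _round(key, half, i):
        return hashlib.sha256(key + bytes([i]) + half).digest()[:8]

    def encrypt_block(key, block):          # 16-byte toy Feistel cipher
        l, r = block[:8], block[8:]
        for i in range(4):
            l, r = r, bytes(a ^ b for a, b in zip(l, _round(key, r, i)))
        return l + r

    def decrypt_block(key, block):
        l, r = block[:8], block[8:]
        for i in reversed(range(4)):
            l, r = bytes(a ^ b for a, b in zip(r, _round(key, l, i))), l
        return l + r

    def seal(key, seq, data):
        assert len(data) <= 12
        return encrypt_block(key, seq.to_bytes(4, "big") + data.ljust(12, b"\0"))

    def open_sealed(key, expected_seq, block):
        plain = decrypt_block(key, block)
        if int.from_bytes(plain[:4], "big") != expected_seq:
            raise ValueError("message stream modification detected")
        return plain[4:].rstrip(b"\0")      # strip toy padding

    if __name__ == "__main__":
        key = b"shared session key"
        c0, c1 = seal(key, 0, b"login bob"), seal(key, 1, b"print file")
        print(open_sealed(key, 0, c0), open_sealed(key, 1, c1))
        open_sealed(key, 1, c0)             # replayed block: raises ValueError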
part 7 of LCS-TM-87
This thesis demonstrates that the amount of protected, privileged code related to process initiation in a computer utility can be greatly reduced by making process creation unprivileged. The creation of processes can be controlled by the standard mechanism for controlling entry to a domain, which forces a new process to begin execution at a controlled location. Login of users can thus be accomplished by an unprivileged creation of a process in the potential user's domain, followed by authentication of the user by an unprivileged initial procedure in that domain. The thesis divides the security constraints provided by a computer utility into three classes: access control, prevention of unauthorized denial of service, and confinement. We develop a model that divides process initiation into process creation, resource control, authentication, and environment initialization. We show which classes of security constraints depend on each of these functions and show how to implement the functions such that these are the only dependencies present. The thesis discusses an implementation of process initiation for the Multics computer utility based on the model. The major problems encountered in this implementation are presented and discussed. We show that this implementation is substantially simpler and more flexible than that used in the current Multics system.
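The central idea, unprivileged process creation constrained by ordinary domain entry, fits in a few lines. The Python below is a hypothetical rendering with invented names, not the thesis's design: anyone may create a process, but it can only begin at the target domain's controlled entry point, so login needs no privileged code.

    # Hypothetical sketch: domain entry, not privilege, controls process creation.

    class Domain:
        """A protection domain; the only way in is its controlled entry point."""
        def __init__(self, name, initial_procedure):
            self.name = name
            self.initial_procedure = initial_procedure

    def create_process(domain, *args):
        """Unprivileged: any process may be created, but it must begin
        execution at the target domain's controlled initial procedure."""
        return domain.initial_procedure(*args)

    def user_initial(password_attempt):
        # Runs inside the user's own domain, so authentication needs no
        # privilege: failure leaves a process with no authority to abuse.
        if password_attempt != "opensesame":
            return "authentication failed; process logs out"
        return "command interpreter started in user domain"

    if __name__ == "__main__":
        bob = Domain("bob", user_initial)
        print(create_process(bob, "opensesame"))  # login succeeds
        print(create_process(bob, "guess"))       # harmless failure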
MACLISP is a dialect of Lisp developed at M.I.T.'s Project MAC (now the MIT Laboratory for Computer Science) and the MIT Artificial Intelligence Laboratory for use in artificial intelligence research and related fields. Maclisp is descended from Lisp 1.5, and many recent important dialects (for example Lisp Machine Lisp and NIL) have evolved from Maclisp. David Moon's original document on Maclisp, The Maclisp Reference Manual (alias the Moonual), provided in-depth coverage of a number of areas of the Maclisp world. Some parts of that document, however, were never completed (most notably a description of Maclisp's I/O system); other parts are no longer accurate due to changes that have occurred in the language over time. This manual includes some introductory information about Lisp, but is not intended as a tutorial. It is intended primarily as a reference manual; in particular, it comes in response to users' pleas for more up-to-date documentation. Much text has been borrowed directly from the Moonual, but there has been a shift in emphasis. While the Moonual went into greater depth on some issues, this manual attempts to offer more in the way of examples and style notes. Also, since Moon had worked on the Multics implementation, the Moonual offered more detail about compatibility between ITS and Multics Maclisp. While it is hoped that Multics users will still find the information contained herein to be useful, this manual focuses more on the ITS and TOPS-20 implementations, since those were the implementations most familiar to the author.
In any computer system primitive functions are needed to control the actions of processes in the system. This thesis discusses a set of six such process control primitives which are sufficient to solve many of the problems involved in parallel processing as well as in the efficient multiplexing of system resources among the many processes in a system. In particular, the thesis documents the work performed in implementing these primitives in a computer system, the Multics system, which is being developed at Project MAC of M.I.T. During the course of the work that went into the implementation of these primitives, design problems were encountered which caused the overall design of the programs involved to go through two iterations before the performance of these programs was deemed acceptable. The thesis discusses the way the design of these programs evolved over the course of the work.
This thesis presents a simply structured design for the implementation of processes in a kernel-structured operating system. The design provides a minimal mechanism for the support of two distinct classes of processes found in the computer system: those which are part of the kernel operating system itself, and those used to execute user-specified computations. The design is broken down into two levels, the lower of which implements a fixed number of virtual processors; these are used to run kernel processes and are multiplexed to provide processes for user computation. Eventcount primitives are provided as a simple, unified interprocess control communication mechanism. The design is intended to be used in the creation of a secure kernel for the Multics operating system.
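Eventcounts admit a very small illustration. The sketch below is an illustrative Python rendering (not from the thesis) of the three classic eventcount operations, advance, read, and await, built on a condition variable; await is spelled await_ because await is a reserved word in modern Python.

    # Illustrative eventcount: a monotonically increasing counter that
    # processes can advance, read, and wait on.

    import threading

    class Eventcount:
        def __init__(self):
            self._value = 0
            self._cond = threading.Condition()

        def read(self):
            with self._cond:
                return self._value

        def advance(self):
            """Signal one more occurrence of the event."""
            with self._cond:
                self._value += 1
                self._cond.notify_all()

        def await_(self, value):
            """Block until the count reaches `value`."""
            with self._cond:
                self._cond.wait_for(lambda: self._value >= value)

    if __name__ == "__main__":
        ec = Eventcount()
        t = threading.Thread(target=lambda: (ec.await_(1), print("released")))
        t.start()
        ec.advance()   # wakes the waiter
        t.join()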
BCPL is a language which is readable and easy to learn, as well as admitting of an efficient compiler capable of generating efficient code. It is made self consistent and easy to define accurately by an underlying structure based on a simple idealized object machine. The treatment of data types is unusual and it allows the power and convenience of a language with dynamically varying types and yet the efficiency of FORTRAN. BCPL has been used successfully to implement a number of languages and has proved to be a useful tool for compiler writing. The BCPL compiler itself is written in BCPL and has been designed to be easy to transfer to other machines; it has already been transferred to more than ten different systems.
The Multics project was begun in 1964 by the Computer Systems Research group of M.I.T. Project MAC. The goal was to create a prototype of a computer utility. This technical report represents the Introduction to the users manual for the Multics System. It is published in this form as a convenient method of communications with researchers and students of computer system design. It is divided into three major parts: 1) Introduction to Multics, 2) Reference Guide to Multics and 3) Subsystems Writers' Guide to Multics.
part 1 of LCS-TM-87
part 3 of LCS-TM-87
part 4 of LCS-TM-87
This thesis presents an orderly design approach for dynamically changing the configuration of constituent physical units in a modular computer system. Dynamic reconfiguration contributes to high system availability by allowing preventive maintenance, development of new operating systems, and changes in system capacity on a non-interference basis. The design presented includes the operating system primitives and hardware architecture for adding and removing any (primary or secondary) storage module and associated processing modules while the system is running. Reconfiguration is externally initiated by a simple request from a human operator and is accomplished automatically without disruption to users of the system. This design allows the modules in an installation to be partitioned into separate non-interfering systems. The viability of the design approach has been demonstrated by employing it for a practical implementation of processor and primary memory dynamic reconfiguration in the Multics system at M.I.T.
This thesis presents a comprehensive set of hierarchically organized modular analytical models developed for the performance evaluation of multiprogrammed virtual-memory time-shared computer systems using demand paging. The hierarchy of models contains a user behavior model, a secondary memory model, a program behavior model, a processor model, and a total system model. This thesis is particularly concerned with the last three models. The program behavior model developed in this thesis allows us to estimate the frequency of paging expected on a given processing system. The processor model allows us to evaluate the throughput of a given multi-processor multi-memory processing system under multiprogramming. Finally, the total system model allows us to derive the response time distribution of an entire computer system under study. Since all major factors (such as various system overhead times and idle times) which may decrease a system's computational capacity available for users' useful work are explicitly considered in the analyses using the above models, the performance predicted by these analyses is very realistic. A comparison of the performance of an actual system, the Multics system of M.I.T., and the corresponding performance predicted by these analyses confirms the accuracy of performance prediction by these models. Then, these analyses are applied to the optimization of computer systems and to the selection of the best performing system for a given budget. The framework of a performance evaluation using these hierarchically organized analytical models guides human intuition in understanding the actual performance problems and provides us with reliable answers to most of the basic quantitative performance questions concerning throughput and response time of actual modern large-scale time-shared computer systems.
This thesis describes a design for an automatic backup mechanism to be incorporated in a computer utility for the protection of on-line information against accidental or malicious destruction. This protection is achieved by preserving on magnetic tape recent copies of all items of information known to the on-line file system. In the event of a system failure, file system damage is automatically assessed and missing information is recovered from backup storage. For isolated mishaps, users may directly request the retrieval of selected items of information. The design of the backup mechanism presented in this thesis is based upon the existing backup mechanism contained in the Multics system. As compared to the present Multics backup system, the new design lessens overhead, drastically reduces recovery time from system failures, eliminates the need to interrupt system operation for backup purposes, and scales up significantly better with on-line storage growth.
part 2 of LCS-TM-87
Published by GE/Honeywell.
Al Kossow at bitsavers.org has scanned many GE and Honeywell Multics manuals and placed them online.
Internal design documents used by the development team in the 1960s. Three series, M, G, and B, for MIT, GE, and Bell Labs. This table is derived from TOC memos M0116, M0117, M0118, and M0119.
The Multics Design Document series, specifically produced by Honeywell for the B2 evaluation effort, includes some documents written for the project. Others were existing manuals that were found to be adequate for the evaluation but were to eventually be re-written for consistency. [info from Ed Ranzenbach]
Documents produced by Project MAC and BTL people at the beginning of Multics design.
A multi-section management document describing Multics production milestones and tasks in 1967-69.
Documents given to machine operators at the MIT and GE/Honeywell development sites. Later incorporated into Honeywell manuals.
Here is a list, thanks to Bruce Sanderson, of documents produced by Warren Johnson and Jim Homan, describing operational lore useful to site analysts and operators.
Memos local to particular sites.
Designed for Chemical and Petroleum Engineering students.
Introductory document for Virginia Tech students
Documentation for 34 commands, 7 active functions, and 25 subroutines contributed by the MIT user community, including XPL, TECO and BCPL. (5.6M pdf)
Documentation for 5 commands and 2 subroutines installed locally at the MIT site, including Multics versions of BMD and SSP. (1.8M pdf)
Documentation of the Multics Simulator.