Time Sharing Option

Article snapshot taken from Wikipedia, available under the Creative Commons Attribution-ShareAlike license.

Time Sharing Option (TSO) is an interactive time-sharing environment for IBM mainframe operating systems, including OS/360 MVT, OS/VS2 (SVS), MVS, OS/390, and z/OS.

In computing, time-sharing is a design technique that allows many people to use a computer system concurrently and independently, without interfering with each other. Each TSO user is isolated; it appears to each one that they are the only user of the system. TSO is most commonly used by mainframe system administrators and programmers. It provides a range of interactive services. TSO interacts with users in either a line-by-line mode or in

A multi-processor machine, with the goal of speeding up computations—parallel computing is impossible on a (one-core) single processor, as only one computation can occur at any instant (during any single clock cycle). By contrast, concurrent computing consists of process lifetimes overlapping, but execution does not happen at the same instant. The goal here is to model processes that happen concurrently, like multiple clients accessing

A full screen, menu-driven mode. In the line-by-line mode, the user enters commands by typing them in at the keyboard; in turn, the system interprets the commands, and then displays responses on the terminal screen. But most mainframe interaction is actually via ISPF, which allows for customized menu-driven interaction. This combination is called TSO/ISPF. TSO can also provide a Unix-style environment on OS/390 and z/OS via

A given set of wires (improving efficiency), such as via time-division multiplexing (1870s). The academic study of concurrent algorithms started in the 1960s, with Dijkstra's 1965 paper credited as the first in this field, identifying and solving the mutual exclusion problem. Concurrency is pervasive in computing, occurring from low-level hardware on a single chip to worldwide networks. Examples follow. At

A major technological shift in the history of computing. By allowing many users to interact concurrently with a single computer, time-sharing dramatically lowered the cost of providing computing capability, made it possible for individuals and organizations to use a computer without owning one, and promoted the interactive use of computers and the development of new interactive applications. The earliest computers were extremely expensive devices, and very slow in comparison to later models. Machines were typically dedicated to

A network. The exact timing of when tasks in a concurrent system are executed depends on the scheduling, and tasks need not always be executed concurrently. For example, given two tasks, T1 and T2, T1 may finish before T2 starts, the two may alternate on a single processor, or they may run at the same instant on separate processors. The word "sequential" is used as an antonym for both "concurrent" and "parallel"; when these are explicitly distinguished, concurrent/sequential and parallel/serial are used as opposing pairs. A schedule in which tasks execute one at

A particular set of tasks and operated by control panels, the operator manually entering small programs via switches in order to load and run a series of programs. These programs might take hours to run. As computers grew in speed, run times dropped, and soon the time taken to start up the next program became a concern. Newer batch processing software and methodologies, including batch operating systems such as IBSYS (1960), decreased these "dead periods" by queuing up programs ready to run. Comparatively inexpensive card punch or paper tape writers were used by programmers to write their programs "offline". Programs were submitted to

A second deployment of CTSS was installed on an IBM 7094 that MIT had purchased using ARPA money. This was used to support Multics development at Project MAC. JOSS began time-sharing service in January 1964. Dartmouth Time-Sharing System (DTSS) began service in March 1964. Throughout the late 1960s and the 1970s, computer terminals were multiplexed onto large institutional mainframe computers (centralized computing systems), which in many implementations sequentially polled

A server at the same time. Structuring software systems as composed of multiple concurrent, communicating parts can be useful for tackling complexity, regardless of whether the parts can be executed in parallel. For example, concurrent processes can be executed on one core by interleaving the execution steps of each process via time-sharing slices: only one process runs at a time, and if it does not complete during its time slice, it

A time (serially, no parallelism), without interleaving (sequentially, no concurrency: no task begins until the prior task ends) is called a serial schedule. A set of tasks that can be scheduled serially is serializable, which simplifies concurrency control. The main challenge in designing concurrent programs is concurrency control: ensuring the correct sequencing of the interactions or communications between different computational executions, and coordinating access to resources that are shared among executions. Potential problems include race conditions, deadlocks, and resource starvation. For example, consider

Is paused, another process begins or resumes, and then later the original process is resumed. In this way, multiple processes are part-way through execution at a single instant, but only one process is being executed at that instant. Concurrent computations may be executed in parallel, for example, by assigning each process to a separate processor or processor core, or distributing a computation across

Is a set of extensions to the original TSO. TSO/E is a base element of z/OS. Before z/OS, TSO Extensions (TSO/E) was an element of OS/390 and was a licensed program for the MVS and MVS/ESA System Products. Since all z/OS installations usually have both TSO and TSO/E functions installed, it is normal to refer to both TSO and TSO/E as "TSO". When first released, TSO module names outside of SVCs always had

Is greater than for a procedure call. These differences are often overwhelmed by other performance factors. Concurrent computing developed out of earlier work on railroads and telegraphy, from the 19th and early 20th century, and some terms date to this period, such as semaphores. These arose to address the question of how to handle multiple trains on the same railroad system (avoiding collisions and maximizing efficiency) and how to handle multiple transmissions over

Is probably the most widely used in industry at present. Many concurrent programming languages have been developed more as research languages (e.g. Pict) rather than as languages for production use. However, languages such as Erlang, Limbo, and occam have seen industrial use at various times in the last 20 years. Numerous languages use or provide concurrent programming facilities as first-class constructs; many other languages provide support for concurrency in

Is the concurrent sharing of a computing resource among many tasks or users by giving each task or user a small slice of processing time. This quick switch between tasks or users gives the illusion of simultaneous execution. It enables multi-tasking by a single user or enables multiple-user sessions. Developed during the 1960s, its emergence as the prominent model of computing in the 1970s represented
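The slice-and-switch mechanism described above can be illustrated with a small sketch. This is a hypothetical cooperative scheduler in Python (the names `user_task` and `round_robin` are invented for illustration): each "user" is a generator, and the scheduler hands each one a short slice before moving on, so the slices of different users alternate even though only one runs at a time.

```python
from collections import deque

def user_task(name, steps):
    """A task that performs `steps` units of work, yielding after each one."""
    for i in range(steps):
        yield f"{name} step {i}"

def round_robin(tasks):
    """Give each task one slice per turn until every task has finished."""
    queue = deque(tasks)
    trace = []
    while queue:
        task = queue.popleft()
        try:
            trace.append(next(task))   # run one time slice of this task
            queue.append(task)         # not finished: back of the queue
        except StopIteration:
            pass                       # finished: drop the task
    return trace

trace = round_robin([user_task("A", 2), user_task("B", 2)])
# The trace alternates A, B, A, B: each user sees steady progress,
# which at real time-sharing speeds looks like simultaneous execution.
```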

Is time-sharing". For DEC, for a while the second largest computer company (after IBM), this was also true: their PDP-10 and IBM's 360/67 were widely used by commercial timesharing services such as CompuServe, On-Line Systems, Inc. (OLS), Rapidata and Time Sharing Ltd. The advent of the personal computer marked the beginning of the decline of time-sharing. The economics were such that computer time went from being an expensive resource that had to be shared to being so cheap that computers could be left to sit idle for long periods in order to be available as needed. Although many time-sharing services simply closed, Rapidata held on, and became part of National Data Corporation. It

The IBM 2741) with two different seven-bit codes. They would connect to the central computer by dial-up Bell 103A modem or acoustically coupled modems operating at 10–15 characters per second. Later terminals and modems supported 30–120 characters per second. The time-sharing system would provide a complete operating environment, including a variety of programming language processors, various software packages, file storage, bulk printing, and off-line storage. Users were charged rent for

The UK. By 1968, there were 32 such service bureaus serving the US National Institutes of Health (NIH) alone. The Auerbach Guide to Timesharing (1973) lists 125 different timesharing services using equipment from Burroughs, CDC, DEC, HP, Honeywell, IBM, RCA, Univac, and XDS. In 1975, acting president of Prime Computer Ben F. Robelen told stockholders that "The biggest end-user market currently

The UNIX System Services command shell, with or without ISPF. TSO commands can be embedded in REXX execs or CLISTs, which can run interactively or in batch. TSO eliminated the need to punch cards on a keypunch machine, and send card decks to the computer room to be read by a card reading machine. Prior to TSO, IBM had introduced limited function time sharing applications such as Remote Access Computing System (RAX), Conversational Programming System (CPS), Conversational Remote Batch Entry (CRBE) and Conversational Remote Job Entry (CRJE) for S/360. These either ran user programs only in an interpreter or had no ability to run user programs at all, only to edit, retrieve and submit batch jobs. In addition, universities had written time sharing systems both for

The "prefix" IKJ, in some cases followed by the second and third letters of an associated pre-TSO functional group (IEA = original functional group of "supervisor", hence a TSO module name of IKJEAxxx; IEB = original functional group of "dataset utilities", hence a TSO module name of IKJEBxxx; etc.). It is common to run TSO in batch (as opposed to interactively): all the usual TSO line-mode interactive commands can also be executed via Job Control Language (JCL) by running any of

The 1970s, Ted Nelson's original "Xanadu" hypertext repository was envisioned as such a service. Time-sharing was the first time that multiple processes, owned by different users, were running on a single machine, and these processes could interfere with one another. For example, one process might alter shared resources which another process relied on, such as a variable stored in memory. When only one user

The 360/67, e.g., Michigan Terminal System (MTS), and for systems prior to S/360, e.g. Compatible Time-Sharing System (CTSS). When it was introduced in 1971, IBM considered time-sharing an "optional feature", as compared to standard batch processing, and hence offered TSO as an option for OS/360 MVT. With the introduction of MVS in 1974, IBM made it a standard component of their top-end mainframe operating system. TSO/E ("Time Sharing Option/Extensions")

The University of Illinois in early 1961. Bitzer has long said that the PLATO project would have gotten the patent on time-sharing if only the University of Illinois had not lost the patent for two years. The first interactive, general-purpose time-sharing system usable for software development, Compatible Time-Sharing System, was initiated by John McCarthy at MIT writing a memo in 1959. Fernando J. Corbató led

The behavior of concurrent systems. Software transactional memory borrows from database theory the concept of atomic transactions and applies them to memory accesses. Concurrent programming languages and multiprocessor programs must have a consistency model (also known as a memory model). The consistency model defines rules for how operations on computer memory occur and how results are produced. One of

The change in the meaning of the term time-sharing a source of confusion and not what he meant when he wrote his paper in 1959. There are also examples of systems which provide multiple user consoles, but only for specific applications; they are not general-purpose systems. These include SAGE (1958), SABRE (1960) and PLATO II (1961), created by Donald Bitzer at a public demonstration at Robert Allerton Park near

The computer much the same way that the average household buys power and water from utility companies." Christopher Strachey, who became Oxford University's first professor of computation, filed a patent application in the United Kingdom for "time-sharing" in February 1959. He gave a paper "Time Sharing in Large Fast Computers" at the first UNESCO Information Processing Conference in Paris in June that year, where he passed

The computer's resources, such as when a large JOSS application caused paging for all users. The JOSS Newsletter often asked users to reduce storage usage. Time-sharing was nonetheless an efficient way to share a large computer. As of 1972 DTSS supported more than 100 simultaneous users. Although more than 1,000 of the 19,503 jobs the system completed on "a particularly busy day" required ten seconds or more of computer time, DTSS

The concept on to J. C. R. Licklider. This paper was credited by the MIT Computation Center in 1963 as "the first paper on time-shared computers". The meaning of the term time-sharing has shifted from its original usage. From 1949 to 1960, time-sharing was used to refer to multiprogramming without multiple user sessions. Later, it came to mean sharing a computer interactively among multiple users. In 1984 Christopher Strachey wrote he considered

The concept, but did not use the term, in the 1954 summer session at MIT. Bob Bemer used the term time-sharing in his 1957 article "How to consider a computer" in Automatic Control Magazine, and it was reported the same year that he used the term time-sharing in a presentation. In a paper published in December 1958, W. F. Bauer wrote that "The computers would handle a number of problems concurrently. Organizations would have input-output equipment installed on their own premises and would buy time on

The development of the system, a prototype of which had been produced and tested by November 1961. Philip M. Morse arranged for IBM to provide a series of their mainframe computers, starting with the IBM 704 and then the IBM 709 product line, the IBM 7090 and IBM 7094. IBM loaned those mainframes at no cost to MIT, along with the staff to operate them, and also provided hardware modifications, mostly in

The earliest days of personal computers, many were in fact used as particularly smart terminals for time-sharing systems. DTSS's creators wrote in 1968 that "any response time which averages more than 10 seconds destroys the illusion of having one's own computer". Conversely, timesharing users thought that their terminal was the computer, and unless they received a bill for using the service, rarely thought about how others shared

The field of concurrent computing include Edsger Dijkstra, Per Brinch Hansen, and C.A.R. Hoare. The concept of concurrent computing is frequently confused with the related but distinct concept of parallel computing, although both can be described as "multiple processes executing during the same period of time". In parallel computing, execution occurs at the same physical instant: for example, on separate processors of

The first consistency models was Leslie Lamport's sequential consistency model. Sequential consistency is the property of a program that its execution produces the same results as a sequential program. Specifically, a program is sequentially consistent if "the results of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in

The following algorithm to make withdrawals from a checking account represented by the shared resource balance: Suppose balance = 500, and two concurrent threads make the calls withdraw(300) and withdraw(350). If line 3 in both operations executes before line 5, both operations will find that balance >= withdrawal evaluates to true, and execution will proceed to subtracting
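The code listing for this withdraw algorithm did not survive the snapshot; the following is a reconstruction sketch in Python (an assumed rendering, with the line numbers referenced in the text marked in comments), followed by a deterministic replay of the bad interleaving the text describes.

```python
balance = 500  # the shared resource

def withdraw(withdrawal):
    """Withdraw if funds suffice; the check and the update are NOT atomic."""
    global balance
    if balance >= withdrawal:    # "line 3": the check
        # Another thread may run its own check in this window,
        # before either thread has updated balance -- the race window.
        balance -= withdrawal    # "line 5": the update
        return True
    return False

# Deterministic replay of the interleaving described in the text:
# both threads execute "line 3" before either executes "line 5".
ok1 = balance >= 300             # thread 1, line 3: True
ok2 = balance >= 350             # thread 2, line 3: also True
if ok1:
    balance -= 300               # thread 1, line 5
if ok2:
    balance -= 350               # thread 2, line 5
# balance is now -150: more was withdrawn than the account ever held.
```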

1855-493: The form of RPQs as prior customers had already commissioned the modifications. There were certain stipulations that governed MIT's use of the loaned IBM hardware. MIT could not charge for use of CTSS. MIT could only use the IBM computers for eight hours a day; another eight hours were available for other colleges and universities; IBM could use their computers for the remaining eight hours, although there were some exceptions. In 1963

1908-538: The ideas of dataflow theory. Beginning in the late 1970s, process calculi such as Calculus of Communicating Systems (CCS) and Communicating Sequential Processes (CSP) were developed to permit algebraic reasoning about systems composed of interacting components. The π-calculus added the capability for reasoning about dynamic topologies. Input/output automata were introduced in 1987. Logics such as Lamport's TLA+ , and mathematical models such as traces and Actor event diagrams , have also been developed to describe

1961-405: The most commonly used programming languages that have specific constructs for concurrency are Java and C# . Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by monitors (although message-passing models can and have been implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, Erlang

2014-480: The next starts. This is a property of a system—whether a program , computer , or a network —where there is a separate execution point or "thread of control" for each process. A concurrent system is one where a computation can advance without waiting for all other computations to complete. Concurrent computing is a form of modular programming . In its paradigm an overall computation is factored into subcomputations that may be executed concurrently. Pioneers in

2067-409: The operations team, which scheduled them to be run. Output (generally printed) was returned to the programmer. The complete process might take days, during which time the programmer might never see the computer. Stanford students made a short film humorously critiquing this situation. The alternative of allowing the user to operate the computer directly was generally far too expensive to consider. This

2120-410: The order specified by its program". A number of different methods can be used to implement concurrent programs, such as implementing each computational execution as an operating system process , or implementing the computational processes as a set of threads within a single operating system process. In some concurrent computing systems, communication between the concurrent components is hidden from

2173-404: The programmer (e.g., by using futures ), while in others it must be handled explicitly. Explicit communication can be divided into two classes: Shared memory and message passing concurrency have different performance characteristics. Typically (although not always), the per-process memory overhead and task switching overhead is lower in a message passing system, but the overhead of message passing

2226-615: The programming language level: At the operating system level: At the network level, networked systems are generally concurrent by their nature, as they consist of separate devices. Concurrent programming languages are programming languages that use language constructs for concurrency . These constructs may involve multi-threading , support for distributed computing , message passing , shared resources (including shared memory ) or futures and promises . Such languages are sometimes described as concurrency-oriented languages or concurrency-oriented programming languages (COPL). Today,

2279-457: The programs IKJEFT01 , IKJEFT1A , and IKJEFT1B and supplying the line commands in a file pointed to by the SYSTSIN DD . The primary difference between the three programs is their handling of return codes from the executed commands. Batch execution of TSO is one way to allow an IBM mainframe application to access DB2 resources. Time-sharing In computing , time-sharing

2332-537: The rise of microcomputing in the early 1980s, time-sharing became less significant, because individual microprocessors were sufficiently inexpensive that a single person could have all the CPU time dedicated solely to their needs, even when idle. However, the Internet brought the general concept of time-sharing back into popularity. Expensive corporate server farms costing millions can host thousands of customers all sharing

2385-652: The same common resources. As with the early serial terminals, web sites operate primarily in bursts of activity followed by periods of idle time. This bursting nature permits the service to be used by many customers at once, usually with no perceptible communication delays, unless the servers start to get very busy. Genesis In the 1960s, several companies started providing time-sharing services as service bureaus . Early systems used Teletype Model 33 KSR or ASR or Teletype Model 35 KSR or ASR machines in ASCII environments, and IBM Selectric typewriter -based terminals (especially

2438-735: The terminal, a charge for hours of connect time, a charge for seconds of CPU time, and a charge for kilobyte-months of disk storage. Common systems used for time-sharing included the SDS 940 , the PDP-10 , the IBM 360 , and the GE-600 series . Companies providing this service included GE 's GEISCO , the IBM subsidiary The Service Bureau Corporation , Tymshare (founded in 1966), National CSS (founded in 1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by Tymshare in 1968), AL/COM , Bolt, Beranek, and Newman (BBN) and Time Sharing Ltd. in

2491-473: The terminals to see whether any additional data was available or action was requested by the computer user. Later technology in interconnections were interrupt driven, and some of these used parallel data transfer technologies such as the IEEE 488 standard. Generally, computer terminals were utilized on college properties in much the same places as desktop computers or personal computers are found today. In

2544-535: The withdrawal amount. However, since both processes perform their withdrawals, the total amount withdrawn will end up being more than the original balance. These sorts of problems with shared resources benefit from the use of concurrency control, or non-blocking algorithms . The advantages of concurrent computing include: Introduced in 1962, Petri nets were an early attempt to codify the rules of concurrent execution. Dataflow theory later built upon these, and Dataflow architectures were created to physically implement

Was able to handle the jobs because 78% of jobs needed one second or less of computer time. About 75% of 3,197 users used their terminal for 30 minutes or less, during which they used less than four seconds of computer time. A football simulation, among early mainframe games written for DTSS, used less than two seconds of computer time during the 15 minutes of real time for playing the game. With

Was because users might have long periods of entering code while the computer remained idle. This situation limited interactive development to those organizations that could afford to waste computing cycles: large universities for the most part. The concept is claimed to have been first described by Robert Dodds in a letter he wrote in 1949, although he did not use the term time-sharing. Later John Backus also described

Was primarily driven by the time-sharing industry and its customers. Time-sharing in the form of shell accounts has been considered a risk. Significant early timesharing systems:

Concurrent computing

Concurrent computing is a form of computing in which several computations are executed concurrently—during overlapping time periods—instead of sequentially, with one completing before

Was still of sufficient interest in 1982 to be the focus of "A User's Guide to Statistics Programs: The Rapidata Timesharing System". Even as revenue fell by 66% and National Data subsequently developed its own problems, attempts were made to keep this timesharing business going. Beginning in 1964, the Multics operating system was designed as a computing utility, modeled on the electrical or telephone utilities. In

Was using the system, this would result in possibly incorrect output; but with multiple users, it might mean that other users got to see information they were not meant to see. To prevent this from happening, an operating system needed to enforce a set of policies that determined which privileges each process had. For example, the operating system might deny access to a certain variable by a certain process. The first international conference on computer security, in London in 1971,
