
Creative Computing Benchmark

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

The Creative Computing Benchmark, also called Ahl's Simple Benchmark, is a computer benchmark that was used to compare the performance of the BASIC programming language on various machines. It was first introduced in the November 1983 issue of Creative Computing magazine with results from a number of 8-bit computers that were popular at the time. Over a period of a few months, the list was greatly expanded to include practically every contemporary machine, topped by the Cray-1 supercomputer, which ran it in 0.01 seconds.

The Creative Computing Benchmark was one of three common benchmarks of the era. Its primary competition in the early 1980s in the United States was the Byte Sieve of September 1981, while the earlier Rugg/Feldman benchmarks of June 1977 were not as well known in the United States but were widely used in the United Kingdom.

The benchmark first appeared in the November 1983 issue of Creative Computing under the title "Benchmark Comparison Test". In the article, author David H. Ahl was careful to state that it tested only a few aspects of the BASIC language, mostly its looping performance. He stated:

... the benchmark program presented here is not representative of the way computers are actually used; it measures only a few aspects of performance, and no one should buy a computer based solely on the results of these measures. Yet, the results provide some interesting comparative data.

The initial results were provided for common machines of the era, including the Apple II, Commodore 64 and the recently released IBM Personal Computer. Most of these machines ran some variation of the stock Microsoft BASIC and thus provided similar times on the order of two minutes, while the 16-bit PC was near the top of the list at only 24 seconds. The fastest machine in this initial suite was the Olivetti M20 at 13 seconds, and the slowest was Atari BASIC on the Atari 8-bit computers at 6 minutes 58 seconds.

In the months following its publication, the magazine was inundated with results for other platforms. It became a regular feature for a time, placed prominently near the front of the magazine with an ever-growing list of results. By March 1984 the fastest machine on the list was the Cray-1 at 0.01 seconds, and the slowest was the TI SR-50 programmable calculator at 12.7 days.

The benchmark had several problems that made it less useful for general purposes. For instance, the system did not test any string manipulation, whose performance varied widely across platforms. It also did not take advantage of any "speedups" available on different platforms, like the possible use of integer variables for loop indexes or turning off video access on machines with shared main memory. These limitations were widely debated at the time. The November 1983 article stipulated using an "accurate stopwatch" to time the program execution on machines lacking a real-time clock; when applied to the faster machines, this would yield test results highly dependent on the reaction time of the individual operating the stopwatch.

Its last appearance was in the May 1984 issue, which included values for 183 machines. That issue included a note that the many criticisms of the system had been taken to heart and that a new benchmark program was under design. However, such a program never appeared in the magazine. In the September 1985 issue, David Ahl responded to a Letter to the Editor about the new benchmark program with "Several analysts spent many hours working out three new benchmark tests ... none gave different or better results". The benchmark continued to be used as a general-purpose tool after this date, but as the importance of BASIC dwindled it became less common.

The original version of the program was printed in the November 1983 edition; later versions of the benchmark code reduced the number of compound statements on a line.
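Neither listing is reproduced in this snapshot. As a rough illustration of what the benchmark exercises, the sketch below renders in C the loop structure commonly attributed to Ahl's program: one hundred outer passes that each take a square root ten times, square the result ten times, and accumulate random numbers, then print an accuracy figure and a random-number figure. Treat it as a hedged reconstruction under those assumptions, not the published BASIC listing; the constants, variable names and use of C's rand() are choices made here to keep the example self-contained.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    double a, s = 0.0, r = 0.0;

    /* 100 passes: root A ten times, square it ten times, and keep a running
       sum of random numbers along the way (BASIC's R = R + RND(1)). */
    for (int n = 1; n <= 100; n++) {
        a = n;
        for (int i = 0; i < 10; i++) {
            a = sqrt(a);
            r += (double)rand() / RAND_MAX;
        }
        for (int i = 0; i < 10; i++) {
            a = a * a;                 /* BASIC versions use A = A ^ 2 */
            r += (double)rand() / RAND_MAX;
        }
        s += a;
    }

    /* The two printed figures gauge arithmetic accuracy and the random-number
       generator; the run itself was timed externally with a clock or stopwatch. */
    printf("accuracy: %f\n", fabs(1010.0 - s / 5.0));
    printf("random:   %f\n", fabs(1000.0 - r));
    return 0;
}

Because each pass ideally returns A to its starting value, the sum S should approach 5050 and S/5 should approach 1010; the deviation printed is a rough measure of the machine's floating-point accuracy, while the random figure reflects how evenly the random-number generator sums to its expected value of 1000.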

Byte Sieve

The Byte Sieve is a computer-based implementation of the Sieve of Eratosthenes published by Byte as a programming language performance benchmark. It first appeared in the September 1981 edition of the magazine and was revisited on occasion. Although intended to compare the performance of different languages on the same computers, it quickly became a widely used machine benchmark. The Sieve was one of the more popular benchmarks of the home computer era, others being the Creative Computing Benchmark of 1983 and the Rugg/Feldman benchmarks, the latter mostly seen in the UK in this era. Byte later published the more thorough NBench in 1995 to replace it.

Jim Gilbreath of the Naval Ocean System Center had been considering the concept of writing a small language benchmarking program for some time, desiring one that would be portable across languages, small enough that the program code would fit on a single printed page, and that did not rely on specific features like hardware multiplication or division. The solution was inspired by a meeting with Chuck Forsberg at the January 1980 USENIX meeting in Boulder, CO, where Forsberg mentioned an implementation of the sieve written by Donald Knuth.

Gilbreath felt the sieve would be an ideal benchmark as it avoided indirect tests of arithmetic performance, which varied widely between systems. The algorithm mostly stresses array lookup performance and basic logic and branching capabilities. Nor does it require any advanced language features like recursion or advanced collection types. The only modification from Knuth's original version was to replace a multiplication by two with an addition; with the multiplication in place, machines with hardware multipliers would run so much faster that the rest of the performance would be hidden.

After six months of effort porting it to as many platforms as he had access to, the first results were introduced in the September 1981 edition of Byte in an article entitled "A High-Level Language Benchmark". Gilbreath was quick to point out that:

I should emphasize that this benchmark is not the only criterion by which to judge a language or compiler.

The article provided reference implementations in ten languages, including more popular selections like BASIC, C, Pascal, COBOL, and FORTRAN, and some less well-known examples like Forth, ZSPL, Ratfor, PL/1 and PLMX. Example runs were provided for a variety of machines, mostly Zilog Z80 or MOS 6502-based. The best time was initially 16.5 seconds, turned in by Ratfor on a 4 MHz Z80 machine, but Gary Kildall personally provided a version in Digital Research's prototype version of PL/1 that ran in 14 seconds and set the mark for this first collection. The slowest was Microsoft COBOL on the same machine, which took a whopping 5115 seconds (almost one and a half hours), longer even than interpreted languages like BASIC. A notable feature of this first run was that C, Pascal and PL/1 all turned in roughly similar performance that easily beat the various interpreters.

A second set of tests was carried out on more powerful machines, with Motorola 68000 assembly language turning in the fastest time at 1.12 seconds, slightly besting C on a PDP-11/70 and almost twice as fast as 8086 assembler. Most PDP-11 and HP-3000 times were much slower, on the order of 10 to 50 seconds. Tests on these machines using only high-level languages were led by NBS Pascal on the PDP-11, at 2.6 seconds. UCSD Pascal provided another interesting set of results, as the same program could be run on multiple machines. Running on the dedicated Ithaca InterSystems Pascal-100 machine, a Pascal MicroEngine based computer, it ran in 54 seconds, while on the Z80 it took 239 seconds, and 516 seconds on the Apple II.

Gilbreath, this time along with his brother Gary, revisited the code in the January 1983 edition of Byte. This version removed most of the less popular languages, leaving Pascal, C, FORTRAN IV, and COBOL, while adding Ada and Modula-2. Thanks to readers providing additional samples, the number of machines, operating systems and languages compared in the resulting tables was greatly expanded. Motorola 68000 (68k) assembly remained the fastest, almost three times the speed of the Intel 8086 running at the same 8 MHz clock. Using high-level languages the two were closer in performance, with the 8086 generally better than half the speed of the 68k and often much closer. A wider variety of minicomputers and mainframes was also included, with times that the 68k generally beat except for the very fastest machines like the IBM 3033 and high-end models of the VAX. Older machines like the Data General Nova, PDP-11 and HP-1000 were nowhere near as fast as the 68k.

Gilbreath's second article appeared as the benchmark was becoming quite common as a way to compare the performance of various machines, not just languages. In spite of his original warning not to do so, it soon began appearing in magazine advertisements as a way to compare performance against the competition, and as a general benchmark. Byte once again revisited the sieve in August 1983 as part of a whole-magazine series of articles on the C language. In this case the use was more in keeping with the original intent, using a single source code and running it on a single machine to compare the performance of C compilers on the CP/M-86 operating system, on CP/M-80, and for the IBM PC. In spite of Gilbreath's concern in the original article, by this time the code had become almost universal for testing, and one of the articles remarked that "The Sieve of Eratosthenes is a mandatory benchmark". It was included in the Byte UNIX Benchmark Suite introduced in August 1984. New versions of the code continue to appear for new languages; for example, Rosetta Code and GitHub have many versions available. It is often used as an example of functional programming, in spite of the common version not actually using the sieve algorithm.

The provided implementation calculated odd primes only, so the 8191-element array actually represented primes less than 16385. As shown in a sidebar table, the 0th element represented 3, the 1st element 5, the 2nd element 7, and so on. The original BASIC version of the code was presented in 1981. The dialect is not specified, but a number of details mean it does not run under early versions of Microsoft BASIC (4.x and earlier), among these the use of long variable names like SIZE and FLAGS. The lack of line numbers may suggest a minicomputer variety that reads source from a text file, but may also have been a printing error. The code was also shown in C, with some whitespace adjustments from the original.
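Neither the BASIC nor the C listing survives in this snapshot. The C sketch below follows the description given above: a flag array of 8191 entries covering odd candidates only, so that entry i stands for the value i + i + 3 (the addition that replaced Knuth's multiplication by two) and the largest candidate falls just under 16385. The ten-pass outer loop and the identifier names are assumptions made to keep the example self-contained and runnable, not the verbatim magazine code.

#include <stdio.h>

#define SIZE 8190

static char flags[SIZE + 1];    /* 8191 flags, one per odd candidate */

int main(void) {
    int count = 0;

    /* Repeat the sieve several times so slow machines give a measurable time. */
    for (int iter = 1; iter <= 10; iter++) {
        count = 0;
        for (int i = 0; i <= SIZE; i++)
            flags[i] = 1;
        for (int i = 0; i <= SIZE; i++) {
            if (flags[i]) {
                int prime = i + i + 3;                  /* entry i represents 2i + 3 */
                for (int k = i + prime; k <= SIZE; k += prime)
                    flags[k] = 0;                       /* strike out its odd multiples */
                count++;
            }
        }
    }
    printf("%d primes\n", count);
    return 0;
}

Striking every prime-th entry starting at i + prime skips the even multiples automatically, since the array indexes only odd numbers; the count printed therefore excludes 2, in line with the sidebar mapping described above (element 0 is 3, element 1 is 5, and so on).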
