An "interface" is the connection between a machine and a human being. For example, in a car, the steering wheel, dashboard, gear shift, and foot pedals are the interface. An interface allows you to control a machine. In computers, it is often called the user interface, or UI.
Before the PC, computers were most often used by experts who could dedicate a good deal of time to training, learning how to use complex ways of controlling a computer. These methods were usually closer to "machine language" than they were to human language.
In the 1950's, my father was a college student at MIT. He used their computer, called "Whirlwind." It was a first-generation computer, which used vacuum tubes, and its user interface was punched paper tape. My father would tell the computer what to do by punching holes into a piece of paper tape. He would then take the program to the computer operator. The operator put the tape into the computer, which read the holes in the tape. The computer then executed the program and returned the results on another piece of tape. The operator gave the tape back to my father, who could read the little holes to find his answers.
Later, this system was improved: the 150-year-old idea of punch cards was revived:
Even the Altair 8800, the first affordable computer (only for hobbyists) in 1975, used switches and the classic blinking lights:
Computers for the Rest of Us
These old UIs were workable for experts, but they were not really practical for most people. When the "personal computer" arrived in the 1970's, it was intended for anyone to use, not just experts. So the companies making computers went to work trying to make the UI better.
That is the goal for the user interface: to make the computer easier to use. A well-designed user interface should be "invisible"; the user should be able to understand it naturally, without training or effort.
The CLI: A Computer a Young Person Could Use
Early personal computers used the CLI, or Command Line Interface. The CLI did not use a mouse or other point-and-click controls. You had to learn a hybrid human-computer language in order to operate the computer. If you typed the wrong command or made a spelling mistake, the computer would answer with “SYNTAX ERROR” and would not do anything.
The CLI made computers more usable than before, but they were still difficult to operate because you had to learn the CLI's special command language. Non-experts could use this UI, but most found it too difficult or confusing.
Before creating the "Windows" OS, Microsoft sold a CLI called "MS-DOS."
If you would like to try using the MS-DOS CLI instead of Windows, then just go to the "Start" menu, go to "All Programs," go up to "Accessories," and select "Command Prompt." This opens a CLI to operate your computer.
Macintosh computers also have a CLI, the UNIX shell used in the program called "Terminal." Do a search for "Terminal" and open the app.
With both of these CLIs, you will probably not be able to do anything unless you have prior experience and training.
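That said, a few simple commands are safe to try. The lines below use the UNIX shell (what the Macintosh "Terminal" runs); the MS-DOS equivalents are slightly different (for example, "dir" instead of "ls"):

```shell
# Ask the computer which folder you are currently "in"
pwd

# List the files in that folder (in MS-DOS, the command is "dir")
ls

# Ask the computer to repeat a message back to you
echo "Hello, computer"
```

Notice that you must already know each command's exact name and spelling; the CLI gives you no menus or pictures to remind you, which is exactly why beginners found it so hard.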
The GUI: A Computer Your Mother Could Use
The GUI (Graphical User Interface) began in the 1960's, when Douglas Engelbart developed the mouse, menus, and other GUI elements. These ideas were used, improved upon, and added to by Xerox in a computer called the "Alto," but that computer was never developed for the popular market.
The first popular GUI computers were the "Lisa" and "Macintosh" computers developed by Apple in 1983 and 1984. The 1984 release of the Macintosh computer was the first big release of such an operating system, and changed the way we now use computers. Microsoft followed with the Windows GUI OS in 1985--although the first versions of Windows were really the MS-DOS CLI with a bad imitation-GUI placed on top of it.
A GUI was different from previous interfaces because it used visual metaphors. A "visual metaphor" is a graphic element which makes computer data appear to be an "object" which the user understands. Everybody knows what a "window" is, for example: you look through it to see something. Everyone knows that you work on a desktop. Everyone knows that a folder contains documents. By creating an environment which looks more like the real world, computers became easier to understand.
Important point: The GUI was truly a breakthrough because it allowed almost anyone to use a computer with relative ease. Instead of only computer technicians, now regular people could use a computer without any special training. Your parents probably found it easy to learn, although your grandparents maybe still had difficulty using it.
Multitouch: A Computer Your Grandmother Could Use
Most people consider the GUI to be "the" way to use a computer. It has been used for 25 years ("forever" in computer-time), and most people under 30 have never known anything different.
However, the GUI is far from perfect. The keyboard and the mouse are "disconnected" from what is happening on the screen. To test this, run a drawing program on a computer with a mouse, and try to draw a picture that you can draw well by hand. You will find the mouse-drawn image clumsy and childish-looking, as if you had drawn with the wrong hand. This shows that the mouse is not a very natural or precise controller.
The newest OS type is multitouch. "Multitouch" means that you can touch the screen with more than one finger, and the computer "sees" each separate finger. An ordinary touch screen can only sense one finger or "stylus" (pen) touch at a time; multitouch can sense many different contact points at the same time.
A multitouch OS is superior because it gives the user direct contact with what is on the screen, the most natural way to control a device. A child will take to it naturally, and you can even find videos of animals using multitouch computers! If an animal can learn how to use a computer, it must be a good interface.
Multitouch is still very new, and has only appeared on a few devices. It could be decades before we jump to the next UI after multitouch. One possibility is voice control, but that is limited by what can be spoken and, more importantly, by what the computer can understand; it is usually much easier to "show" the computer what you want using your hands.
Another likely candidate is motion sensors (explained in the "Input" chapter of the Hardware Unit), where the computer watches your actions with a camera, and "sees" where your hands are and what you are trying to do.
For several years, I told my students that the computer of the future would probably be a pair of glasses. The lenses would act as monitors, there would be cameras and radio antennas in the frame, and a microphone and speakers in the temples. Most of the computing would be done in the "Cloud." In 2012, it turned out that Google was thinking along the exact same lines, and introduced Google Glass!
Farther into the future? Already scientists are working on systems where you can control computers directly with your brain, just by thinking about it. That, however, is much farther off, if it ever comes to pass.
Below is a video with examples of computer interfaces.