Widening the Road
All applications are threaded, and by default an application is single threaded. Since a processor can do only one thing at a time (assuming a single-core processor), single threading hasn't been a big issue for the average developer. When it does become an issue, a program can be broken into threads that run in a somewhat concurrent fashion. With the hyperthreading added to processors a few years ago, it became possible to get more work done by breaking an application into threads: hyperthreading let the processor swap threads in and out more efficiently.
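As a rough sketch of what "breaking a program into threads" looks like, Java's built-in java.lang.Thread can run two halves of a computation concurrently. The class and method names here are illustrative, not from any particular library:

```java
// A minimal sketch: split a summation into two halves, each run on its
// own thread. On a multicore processor the halves can run concurrently.
public class TwoThreads {
    // Sum the integers in [from, to).
    static long sumRange(long from, long to) {
        long total = 0;
        for (long i = from; i < to; i++) total += i;
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        final long[] partial = new long[2];
        Thread t1 = new Thread(() -> partial[0] = sumRange(0, 500_000));
        Thread t2 = new Thread(() -> partial[1] = sumRange(500_000, 1_000_000));
        t1.start();   // both threads are now eligible to run at once
        t2.start();
        t1.join();    // wait for both halves to finish
        t2.join();
        System.out.println(partial[0] + partial[1]);
    }
}
```

On a single-core processor the two threads still take turns; the structure only pays off when there is more than one lane to drive in.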
Until recently, most processors for personal computers were like the single-lane roads described previously: they had a single core. Dual-core processors have been released, and more recently quad-core processors have become readily available. These additional cores are like additional lanes on a roadway. If you have a single-threaded application, you are still going to be using a single core. You gain no real speed from the extra lanes because your application knows how to use only one. (You might gain some speed as the operating system and other running applications use the other cores, but this gain is an indirect benefit to your application.)
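If you want to see how many lanes your application actually has available, the standard Java runtime reports the number of logical processors the JVM can see. A one-line illustration (the class name is invented for this example):

```java
// Query how many logical processors the JVM has available.
// On a hyperthreaded or multicore machine this is typically > 1.
public class CoreCount {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Logical processors available: " + cores);
    }
}
```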
In fact, the individual cores within most multicore processors are clocked slower than some single-core processors, which means you may be stuck behind traffic as well as operating at a reduced speed! If your application is single threaded, it may actually run slower on a newer processor because of the lower core speeds.
If you look at dual-core processors from the last few years, you might have noticed that the published speeds were closer to 1.5 or 2.0 GHz than to the 3+ GHz of the single-core machines being released. The recently announced quad-core processors used in the Macintosh can be as fast as 3.0 GHz. As more cores are added to a processor, you shouldn't be surprised to see the initially released chips run at lower clock speeds. As such, if you're building processor-intensive applications, this reduction in speed should concern you unless you do something specific to architect your application to use the added cores.
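One common way to architect for the added cores is to size a pool of worker threads to the core count and split the work across it. The sketch below uses Java's standard ExecutorService; the class and method names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SizedToCores {
    // Sum the integers in [0, n) by splitting the range into one
    // roughly equal chunk per worker thread.
    static long parallelSum(long n, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Long>> parts = new ArrayList<>();
        for (int c = 0; c < threads; c++) {
            final long from = n * c / threads;
            final long to = n * (c + 1) / threads;
            parts.add(pool.submit(() -> {
                long s = 0;
                for (long i = from; i < to; i++) s += i;
                return s;
            }));
        }
        long total = 0;
        for (Future<Long> f : parts) total += f.get();  // collect the chunks
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        // Size the pool to however many cores the JVM reports,
        // so the same binary scales from one lane to many.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println(parallelSum(1_000_000, cores));
    }
}
```

Because the pool size is read at run time rather than hard-coded, the same program uses one core on an old machine and all of them on a quad-core machine.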
Speed Versus Power, Heat Versus Performance
Why don't processor companies like Intel and AMD simply keep increasing the speed? What's changed? As Charles Congdon at Intel said, "It is all about power and heat." When you push up the speed, you generally generate more heat.
|Figure 1. Transistor Switch: A transistor's speed is based on how fast electrons travel from the switch's in side to its out side.|
Processors are based on transistors, or more specifically, transistor switches. In simple terms, the speed of a transistor is measured by how fast electrons can travel from the "in" side of a switch to the "out" side (see Figure 1). A transistor works by having an insulator between two conductors: the in-flow side and the out-flow side of the switch. A gate helps regulate when electrons flow from the in side to the out side. The speed gains of the past have come from shrinking the insulator and thus the gap between the two sides: a shorter distance can be crossed more quickly.
Unfortunately, the insulator is now about as small as current technology allows: only about 10 to 12 atoms thick. If it gets any thinner, it no longer acts as an insulator.