Moore's Law Still Stands, Intel Exec Says

SANTA CLARA, Calif. (TheStreet) -- At a circuits conference in 2003, Gordon Moore told the audience that his Moore's Law -- doubling the number of transistors on a chip roughly every two years -- would likely reach its limit within a decade. An entire industry promptly patted him on the head and resumed ignoring him.

More than 45 years after Moore published his law for cramming the maximum number of transistors onto a chip at minimal cost, the co-founder and former chief executive of Intel (INTC) has watched chip features shrink to double-digit nanometers and the functionality of phones and computers expand in kind. Meanwhile, the cost per function shriveled along with the transistors that made it all possible.

The size of modern computers would astonish the designers of ENIAC, which made its first official calculation in 1946. An Intel executive says computers aren't done with their simultaneous shrinking and improvement.

After Moore's former company took the stage at the Consumer Electronics Show on Wednesday and unveiled a second-generation Core processor that is 62% faster than its predecessor and combines graphics and computing on one chip, TheStreet spoke with Mike Mayberry, Intel's vice president of components research, about the state of Moore's Law and just how long it can continue. Like the tiny technology he develops, the end of Moore's Law, Mayberry says, is increasingly hard to see:

How has Moore's Law kept up its momentum?

Mayberry: The original paper was an economic observation about an emerging technology. The simple version is that if you make integrated circuits, you're making a lot of transistors in parallel and, therefore, the cost of each function or each transistor drops each time you put more and more on the chip. That reduction in cost allows you to implement more functionality within a given envelope. What that drives is people making more interesting products and charging more money for them. You can make things for less, which means you can make more profit, and that's a beneficial cycle. The reason it keeps going is that if you can do it from an economic sense and sell it from an interesting-product sense, there's no reason it can't keep going forever -- in fact, it's been going for decades at this point.
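
To make the arithmetic behind that cycle concrete, here is a minimal sketch of the doubling math. The figures (starting transistor count, flat $100 chip price) are invented for illustration, not Intel's numbers: assume the transistor count doubles every two years while the chip's price stays roughly constant, so the cost per transistor halves each period.

```python
# Hypothetical illustration of the economics Mayberry describes:
# transistor count doubles every two years while the chip's price
# stays roughly flat, so the cost per transistor halves each period.
# All figures below are invented for illustration, not Intel data.

DOUBLING_PERIOD_YEARS = 2

def transistor_count(initial_count: float, years: float) -> float:
    """Projected transistor count after `years`, doubling every period."""
    return initial_count * 2 ** (years / DOUBLING_PERIOD_YEARS)

def cost_per_transistor(chip_cost: float, count: float) -> float:
    """Cost of each transistor when `count` of them share one chip."""
    return chip_cost / count

if __name__ == "__main__":
    start = 1_000_000   # assumed starting transistor count
    chip_cost = 100.0   # assumed flat chip price, in dollars
    for years in (0, 2, 10, 20, 45):
        n = transistor_count(start, years)
        print(f"year {years:2d}: {n:>18,.0f} transistors, "
              f"${cost_per_transistor(chip_cost, n):.12f} each")
```

Run over 45 years -- the span between Moore's paper and this interview -- the same exponential turns a million transistors into trillions while the cost of each one falls toward nothing, which is the "beneficial cycle" in miniature.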

Are there limits to integration?

Mayberry: It comes down to "Are there limits to our ability to keep the technology going?" It's my group's responsibility to make sure that we can keep going. We are currently focusing on what technology features are needed for production in 2015 and 2017. We don't have any reason to believe that we can't deliver a process in those years, so the limit to our visibility is a little bit beyond that. I don't know at this point what goes into the 2019 process, but we have some time to do some invention and figure out what that would be. The other question is "What are the compelling products that we need to make?" I can't begin to guess what will be in fashion in 2019, and I don't know that anyone else really knows, but what people are looking for today are things that are more convenient and combine more function in a smaller form. Rather than putting the economic benefit into making more powerful computers -- although we're still doing that -- we've changed our focus to power efficiency and smaller forms. Rather than making the biggest, baddest chip we can -- like having the biggest, baddest motorcycle -- we're doing things that are smaller, cheaper, consume less power and fit into smaller forms.

For consumer products to be more efficient and functional, does the infrastructure around them -- including server capacity -- need to grow at the rate of Moore's Law as well?

Mayberry: It's relatively easy to point to different workloads that drive different optimization points for the chip. More transistors and more performance benefit all of it. From a technology point of view, my group can start working on better transistors without necessarily knowing what the workloads are. People who architect the chips themselves have to worry about trade-offs, such as how many cores to run in parallel. If you're doing a server taking Web orders, for example, you want to run as many simultaneous customers as possible. That's a different workload than a computer system that does a lot of floating-point calculations to guess what the price of a series of bonds might be. The way you build the computer ends up being different in those two cases -- even though they're both servers and nominally cloud-based -- but the need for more transistors, more performance and fitting within a smaller area remains the same.
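
As a toy illustration of the two workload shapes Mayberry contrasts -- the code and its numbers are hypothetical, not anything Intel ships -- a web-order server wants many small, independent tasks running at once, while a bond-pricing server spends its time in floating-point arithmetic:

```python
# Toy contrast between the two server workloads described above: many
# small independent requests (throughput-bound, favors lots of parallel
# cores) versus one floating-point-heavy calculation (favors fast
# floating-point hardware). Both workloads are invented for illustration.

from concurrent.futures import ThreadPoolExecutor

def handle_order(order_id: int) -> str:
    """One small, independent task -- a single web order among thousands."""
    return f"order {order_id} confirmed"

def price_bond(face_value: float, rate: float, years: int) -> float:
    """A floating-point-heavy task: discount annual coupons plus principal."""
    coupon = face_value * rate
    discounted_coupons = sum(coupon / (1 + rate) ** t
                             for t in range(1, years + 1))
    return discounted_coupons + face_value / (1 + rate) ** years

if __name__ == "__main__":
    # Web-order style: serve as many simultaneous customers as possible.
    with ThreadPoolExecutor(max_workers=8) as pool:
        confirmations = list(pool.map(handle_order, range(1_000)))
    print(f"{len(confirmations)} orders handled in parallel")

    # Bond-pricing style: a single numerically heavy calculation.
    print(f"30-year bond price: ${price_bond(1000.0, 0.05, 30):,.2f}")
```

The first workload scales by adding cores; the second scales by making each core's floating-point units faster. Both, as Mayberry notes, want more transistors in less area.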

From the consumer's perspective, where is the greatest need for further integration of consumer-facing services?

Mayberry: If I look around my office, I can see four or five different computation capabilities. There's computation in my desktop phone, I have a portable computer in the smartphone in my pocket, I have a laptop that's effectively my productivity tool and there are probably several more behind the walls, embedded in the infrastructure. From a consumer point of view, you want these things to all work and, if they need to be connected to one another, to be connected without having to fuss over how. When I drive around with my mobile phone and lose cell service, it's frustrating. When Intel looks at what we can do for integration, some of the stuff we do ends up being invisible -- we try to make the functions within the device turn on and off seamlessly so it consumes less battery power while staying invisible to the user. Some of the things we like to do are more visible, like a higher data rate that lets you watch video with less jerkiness. When you put in more features, you are typically consuming more power and adding cost, but integration is typically more beneficial than doing the same job in separate pieces.

Does the cost of devices and the energy consumed by them limit integration and innovation?

Mayberry: Consumers typically don't do ROI calculations in their heads. They look at how much something costs and whether they can afford it. Integration often benefits cost as well -- not always, but often -- so it can be a push. From a power standpoint, most of my devices are now mobile and rely on a battery at least part of the time, so power matters more than it did five or 10 years ago, when mobile computing was kind of a novelty.

Does the fact that the end of Moore's Law isn't in sight mean it doesn't exist?

Mayberry: Moore's Law isn't a law of nature; it's a law of innovation. We have this expectation that we've been doing this so long that we have to keep inventing at a certain rate. We can never see completely into the future, so we always say, "There's a limit here." I have a bunch of clippings of people saying that Moore's Law will end in the '80s, or the '90s, or the 2000s. We don't know how long we can go, but we have a whole army of people who have grown up and been trained on the notion that if it ends, it's not going to end on our watch. It's going to end after we retire. We have a certain limit to visibility. We know we can keep going at least that long, and we have the confidence that we can continue to invent things and keep going still further.

-- Written by Jason Notte in Boston.

Jason Notte is a reporter for TheStreet.com. His writing has appeared in The New York Times, The Huffington Post, Esquire.com, Time Out New York, the Boston Herald, The Boston Phoenix, Metro newspaper and the Colorado Springs Independent.