The Linley Group’s 2014 Mobile Conference in Santa Clara, California, is an annual event focused on mobile processors, critical components in a wide range of consumer electronics and automotive products: think notebooks, smartphones, and BMWs.
The speakers and panelists agreed that the world desperately needs innovative solutions to reduce power consumption in all types of mobile devices if the industry is to keep providing the ever-more-sophisticated features and applications that users expect. The mantra is reducing power to prolong battery life: people are already frustrated with smartphones and tablets that need recharging daily (at least), and the issue will only become more pronounced as smartwatches and other wearables enter the market.
Battery life is not the only constraint, though; the bigger culprit is heat. Mobile processors work so hard (rendering a video game, for example) that their power consumption inevitably produces a great deal of heat. If a mobile processor were allowed to run full tilt for more than a few seconds, it would overheat and destroy itself. So while advances in battery technology are always welcome, power management is the more pressing technical challenge.
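The "can't run full tilt for long" constraint is typically enforced by a thermal-management loop that steps the clock down as the die heats up. Here is a toy sketch of that idea; the frequency steps, temperature thresholds, and function name are illustrative assumptions, not any real SoC's policy:

```python
# Toy sketch of dynamic thermal throttling (DVFS-style).
# All numbers are made up for illustration.

FREQ_STEPS_MHZ = [2000, 1600, 1200, 800, 400]  # available clock speeds, fastest first
THROTTLE_TEMP_C = 85.0  # back off above this die temperature
RESUME_TEMP_C = 75.0    # speed back up below this temperature

def next_frequency(current_mhz: int, temp_c: float) -> int:
    """Pick the next clock speed based on the measured die temperature."""
    idx = FREQ_STEPS_MHZ.index(current_mhz)
    if temp_c > THROTTLE_TEMP_C and idx < len(FREQ_STEPS_MHZ) - 1:
        return FREQ_STEPS_MHZ[idx + 1]  # too hot: step down one notch
    if temp_c < RESUME_TEMP_C and idx > 0:
        return FREQ_STEPS_MHZ[idx - 1]  # cool again: step back up
    return current_mhz                  # in the dead band: hold steady

# A demanding game pushes the die to 90 °C, so the clock drops;
# once it cools during a menu screen, the clock climbs back.
```

The gap between the two thresholds is deliberate: without that hysteresis, the clock would oscillate rapidly around a single trip point.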
Engineers throughout the industry are approaching this problem from several directions. ARM’s big.LITTLE architecture combines slower, low-power processor cores with more powerful, power-hungry ones. Most of the time the little cores do the work, with a big core brought online only for brief sprints of intense processing. System-on-chip (SoC) designers are experimenting with various combinations (four big, four little; two big, four little; one big, one little; and so on) optimized for particular applications.
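The "sprint on the big core, idle on the little one" decision can be sketched as a simple load-based migration policy. The thresholds and cluster names below are illustrative assumptions, not ARM's actual scheduler logic:

```python
# Minimal sketch of a big.LITTLE-style cluster-selection policy.
# Thresholds are hypothetical, chosen only to show the hysteresis idea.

UP_THRESHOLD = 0.80    # sustained load that justifies waking the big core
DOWN_THRESHOLD = 0.30  # load low enough to drop back to the little core

def pick_cluster(cpu_load: float, on_big: bool) -> str:
    """Return which cluster ('big' or 'little') should run the workload.

    Hysteresis prevents ping-ponging: migrate up only under heavy load,
    and migrate back down only when the load is clearly light.
    """
    if not on_big and cpu_load > UP_THRESHOLD:
        return "big"     # sprint: bring the fast, power-hungry core online
    if on_big and cpu_load < DOWN_THRESHOLD:
        return "little"  # settle: return to the efficient core
    return "big" if on_big else "little"  # otherwise, stay put
```

Because the big core burns far more power per unit of work, the policy is biased toward staying on the little core unless the load clearly demands otherwise.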
More recently, this approach has been expanding into so-called heterogeneous multiprocessor architectures: besides big and little general-purpose processor cores, the SoC can also include graphics processing unit (GPU) cores and digital signal processor (DSP) cores. Each of these has a fundamentally different architecture, better suited to certain types of processing tasks. A general-purpose core could do the same work, but the GPU and DSP cores can do it with greater power efficiency.
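Conceptually, heterogeneous computing comes down to routing each kind of task to the core type that handles it most power-efficiently. A toy dispatch table makes the idea concrete; the task names and mapping here are hypothetical examples, not any real SoC's scheduler:

```python
# Toy sketch of heterogeneous dispatch: send each task type to the
# core type that runs it most power-efficiently. The mapping below
# is an illustrative assumption.

PREFERRED_CORE = {
    "render_frame": "gpu",   # massively parallel pixel work
    "decode_audio": "dsp",   # streaming signal processing
    "run_app_logic": "cpu",  # branchy, general-purpose control flow
}

def dispatch(task_type: str) -> str:
    """Route a task to its preferred core type.

    Anything unrecognized falls back to the general-purpose CPU,
    which can run everything, just less efficiently.
    """
    return PREFERRED_CORE.get(task_type, "cpu")
```

The CPU fallback mirrors the point in the text: the specialized cores aren't doing anything a regular core couldn't, they simply do it for fewer milliwatts.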
On a different front, chip designers are also reducing power consumption by moving to smaller “process geometries” (that is, packing the transistors of the integrated circuit more tightly together). Smaller geometries mean more circuits per square millimeter, which means faster speeds and greater power efficiency. This relentless improvement in manufacturing processes underpins the famous Moore’s Law, and companies such as Intel, IBM, and TSMC innovate continuously to be first to advance the state of the art.
Meanwhile, the DRAM market is increasingly dominated by mobile (that is, lower-power) DRAM. At the conference, Rambus presented the advantages of its R+ LPDDR3 architecture over standard LPDDR3.
All these companies are succeeding through innovation rather than a grim “race to the bottom” of cost cutting. Their innovations are protected by patents, giving them the freedom to license their designs to other companies or to manufacture their own products, as they see fit. In the end everyone benefits, including anyone with a cellphone in their pocket.