By Adolfo Ruiz

TWO DIFFERENT ARCHITECTURES, ARM AND X86: WHAT'S THE DIFFERENCE?

What's the difference between ARM and x86?

For everyday use, there isn't much of a difference between ARM and x86. You can still run Google Chrome and watch YouTube on either one. In fact, you may be doing so right now, as nearly all Android phones and every iPhone use an ARM-based processor.

The biggest difference for most people is that older applications meant for x86 will need to be recompiled to run on ARM as well. For some things this is easy, but not everything will be supported, especially legacy software. However, even that can usually run through x86 emulation, which Windows is starting to support.

For developers, there are a lot of differences in how applications get compiled, but these days most compilers do a good job of supporting the major instruction sets, and you won't really have to make many changes to get code compiling for multiple platforms.
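As a minimal sketch of what that looks like in practice, the portable C program below builds unchanged for either architecture; only rare architecture-specific paths need guarding, here using the __x86_64__ and __aarch64__ macros that GCC and Clang predefine. The program itself is made up for illustration and just prints which target it was built for.

```c
#include <stdio.h>

/* A minimal sketch: portable C code compiles unchanged for x86 or ARM.
 * Only rare architecture-specific paths need guarding; __x86_64__ and
 * __aarch64__ are macros predefined by GCC and Clang. */
int main(void) {
#if defined(__x86_64__)
    puts("Built for x86-64");
#elif defined(__aarch64__)
    puts("Built for 64-bit ARM (AArch64)");
#else
    puts("Built for another architecture");
#endif
    return 0;
}
```

Compiling this file with a native compiler produces an x86 binary, while building it with an ARM cross toolchain (for example, aarch64-linux-gnu-gcc on Linux) produces an ARM binary with no source changes.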


How does ARM run faster?

ARM and x86 are both instruction sets, also known as architectures, which are essentially the lists of machine instructions that a CPU supports. This is why you don’t need to worry about running a Windows app on a specific AMD or Intel CPU; they’re both x86 CPUs, and while the exact designs are different (and perform differently), they both support the same instructions. This means any program compiled for x86 will, in general, run on both CPUs.

CPUs basically execute operations sequentially, like a machine working through a list of tasks. Each instruction is identified by an opcode, and architectures like x86 have a lot of them, especially considering they’ve been around for decades. Because of this complexity, x86 is known as a “Complex Instruction Set Computer,” or CISC, architecture.

CISC architectures generally take the design approach of packing a lot of stuff into a single instruction. For example, an instruction for multiplication may move data from a memory bank to a register, then perform the steps for the multiplication, and shuffle the results around in memory. All in one instruction.

Under the hood, though, this instruction gets unpacked into many “micro-ops,” which the CPU executes. The benefit of CISC is compact code, and since memory was at a premium back in the day, that density made CISC the better choice.

However, that’s not the bottleneck anymore, and this is where RISC comes into play. RISC, or Reduced Instruction Set Computer, does away with complex multi-part instructions. Most instructions can execute in a single clock cycle, though long operations may still need to wait on results from other parts of the CPU or from memory.
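The difference shows up in the code a compiler emits. The C function below is illustrative only: the instruction sequences in the comments are typical, simplified examples of what compilers often generate for each architecture, not exact output from any particular compiler.

```c
/* Illustrative only: the same C statement compiles very differently on a
 * CISC machine versus a load/store RISC machine. The sequences in the
 * comments are typical, simplified examples, not exact compiler output. */
void bump_counter(int *counter, int amount) {
    *counter += amount;
    /* x86-64 (CISC): one read-modify-write instruction can touch memory:
     *     add dword ptr [rdi], esi
     * AArch64 (RISC): memory is only touched by separate loads and stores:
     *     ldr w8, [x0]        (load the counter)
     *     add w8, w8, w1      (add the amount)
     *     str w8, [x0]        (store it back)
     */
}
```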

While this seems like going backwards, it has huge implications for CPU design. CPUs need to load all their instructions from RAM and execute them as fast as possible. It turns out it’s far easier to do that when you have many simple instructions versus a lot of complex ones. The CPU runs faster when the instruction buffer can be filled up, and that’s a lot easier to do when the instructions are smaller and easier to process.

ARM designs also make heavy use of something called Out-of-Order Execution, or OoOE. Essentially, the CPU has a unit inside it that reorders and optimizes the instructions coming into it. For example, if an application needs to calculate two things that don’t depend on each other, the CPU can execute both in parallel. Usually, parallel code is very complicated for developers to write, but at the lowest level the CPU can exploit this kind of instruction-level parallelism automatically to speed things up. The Apple M1 chip uses OoOE to great effect.
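As a small sketch of the kind of independence the hardware exploits, the two multiplications below have no data dependency on each other, so an out-of-order core is free to compute them in overlapping cycles without any effort from the programmer. The function and its names are invented purely for illustration.

```c
/* A sketch of instruction-level parallelism that out-of-order hardware can
 * exploit: the two products have no data dependency on each other, so the
 * CPU may compute them in overlapping cycles before the final add. */
long dot2(long a1, long b1, long a2, long b2) {
    long p1 = a1 * b1;  /* independent of p2 */
    long p2 = a2 * b2;  /* independent of p1 */
    return p1 + p2;     /* depends on both, so it has to wait for them */
}
```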


Modern 64-bit CPU architectures

Today, 64-bit architectures are mainstream across smartphones and PCs, but this wasn’t always the case. Phones didn’t make the switch until 2013, around a decade after PCs. In a nutshell, 64-bit computing leverages registers and memory addresses wide enough to handle data types 64 bits (1s and 0s) long. As well as compatible hardware and instructions, you also need a 64-bit operating system, such as a 64-bit build of Android.

Industry veterans may remember the hoopla when Apple introduced its first 64-bit processor ahead of its Android rivals. The move to 64-bit didn’t transform day-to-day computing. However, it is important for running math efficiently on high-accuracy floating-point numbers. 64-bit registers also improve 3D rendering accuracy and encryption speed, and simplify addressing more than 4GB of RAM.
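As a quick, hedged sketch of what the wider registers and addresses mean in C: 64-bit integer values fit in a single register, and pointers are wide enough to reach beyond the 4GB ceiling of a 32-bit address space. The exact sizes printed below depend on the platform you build for.

```c
#include <stdint.h>
#include <stdio.h>

/* A small sketch of what 64-bit computing buys you: a 64-bit integer fits
 * in a single register, and pointers can address far more than the 4GB
 * limit of a 32-bit address space. Sizes depend on the platform. */
int main(void) {
    uint64_t big = 10000000000ULL;  /* too large for a 32-bit integer */
    printf("64-bit value: %llu\n", (unsigned long long)big);
    printf("pointer size: %zu bytes\n", sizeof(void *));  /* 8 on a 64-bit build */
    return 0;
}
```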


Today, both architectures support 64-bit, but it’s more recent in mobile

PCs moved to 64-bit well before smartphones, but it wasn’t Intel that created the modern x86-64 architecture (also known as x64). That accolade belongs to AMD, whose 1999 announcement extended Intel’s existing x86 architecture to 64-bit. Intel’s alternative IA-64 Itanium architecture fell by the wayside.

Arm introduced its ARMv8 64-bit architecture in 2011. Rather than extending its 32-bit instruction set, Arm offers a clean 64-bit implementation. To accomplish this, the ARMv8 architecture uses two execution states, AArch32 and AArch64. As the names imply, one is for running 32-bit code and one for 64-bit. The beauty of the ARM design is that the processor can seamlessly swap from one mode to the other during normal execution. This means the decoder for the 64-bit instructions is a new design that doesn’t need to maintain compatibility with the 32-bit era, yet the processor as a whole remains backward compatible.


Arm’s Heterogeneous Compute won over mobile

Arm’s low power approach is perfectly suited to the 3.5W Thermal Design Power (TDP) requirements of mobile, yet its performance scales up to match Intel’s laptop chips too. Meanwhile, a typical Intel Core i7, with a TDP around 100W, wins big in servers and high-performance desktops, but has historically struggled to scale down below 5W.

Of course, we mustn’t forget the role that silicon manufacturing processes have played in vastly improving power efficiency over the past decade either. Broadly speaking, smaller CPU transistors consume less power. Intel has been stuck trying to move past its 2014 in-house 14nm process. In that time, smartphone chipsets have moved from 20nm to 14, 10, and now 7nm designs, with 5nm expected in 2021. This has been achieved partly by leveraging competition between the Samsung and TSMC foundries.

However, one unique feature of Arm’s architecture has been particularly instrumental in keeping TDP low for mobile applications: heterogeneous compute. The idea is simple enough: build an architecture that allows different CPU parts (in terms of performance and power) to work together for improved efficiency.


ARM's ability to share workloads across high and low performance CPU cores is a boon for energy efficiency

Arm’s first stab at this idea was big.LITTLE back in 2011, pairing the big Cortex-A15 with the little Cortex-A7 core. The idea of using bigger out-of-order CPU cores for demanding applications and power-efficient in-order CPU designs for background tasks is something smartphone users take for granted today, but it took a few attempts to iron out the formula. Arm built on this idea with DynamIQ and the ARMv8.2 architecture in 2017, allowing different CPU cores to sit in the same cluster and share memory resources for far more efficient processing. DynamIQ also enables the 2+6 CPU design that’s increasingly common in mid-range chips.
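On Android and other Linux-based systems, the kernel scheduler normally decides whether a thread lands on a big or a little core, but a process can also ask for specific cores. The sketch below uses the standard Linux sched_setaffinity() call; the core index 7 is a hypothetical "big" core, since the numbering differs from chip to chip.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* A hedged Linux/Android sketch: the scheduler usually places threads on
 * big or little cores automatically, but sched_setaffinity() lets a
 * process pin itself to specific cores. Core 7 is a hypothetical "big"
 * core; the actual numbering depends on the SoC. */
int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(7, &set);  /* assumed big-core index on this hypothetical chip */

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* 0 = calling thread */
        perror("sched_setaffinity");
        return 1;
    }
    puts("Pinned to the requested core");
    return 0;
}
```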

Intel’s rival Atom chips, sans heterogeneous compute, couldn’t match Arm’s balance of performance and efficiency. It’s taken until 2020 for Intel’s Foveros, Embedded Multi-die Interconnect Bridge (EMIB), and Hybrid Technology projects to yield a competing chip design — the 10nm Lakefield. Lakefield combines a single, high-performance Sunny Cove core with four power-efficient Tremont cores, along with graphics and connectivity features. However, even this package is targeted at connected laptops with a 7W TDP, which is still too high for smartphones.

Today, Arm vs x86 is increasingly fought in the sub-10W TDP laptop market segment, where Intel scales down and Arm scales up increasingly successfully. Apple’s news that it will switch to its own custom Arm chips for Mac is a prime example of the growing performance reach of the Arm architecture, thanks in part to heterogeneous computing along with custom optimizations made by Apple.


Custom ARM cores and instruction sets

Another important distinction between Arm and Intel is that the latter controls its whole process from start to finish and sells its chips directly, while Arm simply sells licenses. Intel keeps its architecture, CPU design, and even manufacturing entirely in-house. Arm, by comparison, offers a variety of products to partners like Apple, Samsung, and Qualcomm. These range from off-the-shelf CPU core designs like the Cortex-A78, to designs built in partnership through its Arm CXC program, to custom architecture licenses that allow companies like Apple and Samsung to build custom CPU cores and even make adjustments to the instruction set.

Building custom CPUs is an expensive and involved process, but done correctly it can clearly lead to powerful results. Apple’s CPUs showcase how bespoke hardware and instructions push Arm’s performance much closer to mainstream x86, although Samsung’s Mongoose cores have been more contentious.


The world’s most powerful supercomputer, Fugaku, runs on Arm


Intel’s architecture remains out in front in terms of raw performance in the consumer hardware space. But Arm is now very competitive in product segments where both high performance and energy efficiency are key, which includes the server market. At the time of writing, the world’s most powerful supercomputer is running on Arm CPU cores for the first time ever. Its A64FX SoC is designed by Fujitsu and is the first to implement the Armv8-A Scalable Vector Extension (SVE).


ARM vs x86

Over the past decade of the Arm vs x86 rivalry, Arm has won out as the choice for low power devices like smartphones. The architecture is now also making strides into laptops and other devices where enhanced power efficiency is in demand. Despite losing out on phones, Intel has improved its low power efforts over the years too, with Lakefield now sharing much more in common with traditional Arm processors found in phones.

That said, Arm and x86 remain distinctly different from an engineering standpoint, and they continue to have individual strengths and weaknesses. However, consumer use cases across the two are becoming blurred as ecosystems increasingly support both architectures. Yet, while there’s crossover in the Arm vs x86 comparison, it’s Arm that is certain to remain the architecture of choice for the smartphone industry for the foreseeable future.


