For many, backward compatibility with 32-bit code is a footnote that does not affect the performance of their PCs, but maintaining that compatibility also means that each 64-bit instruction must share its data path with its 32-bit counterpart and deliver the same performance.
Intel and AMD could rethink their 64-bit architecture: keep it compatible with all existing software, but achieve much higher performance by completely redesigning the way each 64-bit x86 instruction is processed internally.
But what is a 64-bit architecture? A 32-bit architecture is one that can address, and therefore access, up to 2^32 bytes of RAM, in other words 4 GB, while a 64-bit processor can see up to 2^64 bytes and can therefore directly address a far larger amount of memory.
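The arithmetic behind those limits is easy to check. A quick Python sketch (the function name and unit constants are our own, chosen for illustration):

```python
# How many bytes can an address of a given width reach?
def addressable_bytes(bits):
    """Bytes reachable with an address of the given width."""
    return 2 ** bits

GIB = 2 ** 30  # gibibyte
TIB = 2 ** 40  # tebibyte
EIB = 2 ** 60  # exbibyte

print(addressable_bytes(32) // GIB)  # 4   GiB: the 32-bit ceiling
print(addressable_bytes(48) // TIB)  # 256 TiB: typical x86-64 virtual addressing
print(addressable_bytes(64) // EIB)  # 16  EiB: the full 64-bit space
```

The 48-bit figure matters because current x86-64 processors implement 48 bits of virtual address space rather than the full 64.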
Little by little, the standard memory capacity of computers on sale is exceeding the 4 GB mark, which means they can run the growing number of 64-bit versions of programs. Since the lifespan of a computer rarely exceeds five years, we will reach the point where every PC on the market can run 64-bit versions of programs.
This is why the duality between 32-bit and 64-bit applications should disappear at some point, leaving 32-bit apps in the dustbin of history forever.
The origin of 32 bits in x86 processors
The x86 architecture was originally a 16-bit architecture that began with the 8086/8088 series. The first time Intel implemented a 32-bit version of the architecture was in the 80386, and it made a very ingenious design decision regarding the processor registers: instead of creating new registers for 32-bit mode, it extended the existing 16-bit registers to 32 bits, a decision AMD repeated later when developing the 64-bit extension of the architecture.
Using extended versions of the registers instead of new ones allowed Intel, on the one hand, to avoid duplicating the data paths of each instruction and, on the other, to avoid breaking the original assembly instructions. This ensured that programs compiled for 16-bit x86 would run on later processors.
From the 80386 to the Pentium, the data path of each instruction during the fetch and decode stages was the same, but starting with the Pentium Pro, Intel decided to redo the data paths of all 32-bit instructions to optimize their operation, at the expense of 16-bit performance.
The Pentium Pro, a historical precursor of change
In the Pentium Pro, Intel completely renewed the instruction cycle of the full 32-bit instruction set, an improvement that did not apply to 16-bit instructions. As a result, 16-bit MS-DOS or Windows 3.1 programs performed no better than on a Pentium, so getting the most out of this processor required 32-bit applications.
When MMX instructions were added to the Pentium Pro it became the Pentium II, and at that point application and operating system developers understood that it was time to abandon 16 bits once and for all. That process culminated with Windows XP in 2001, the first desktop Windows built on the NT base; by then nobody was developing 16-bit applications anymore.
If a reform of the same kind came from Intel and/or AMD, it could be used to redo the data paths of every instruction. The consequence would be that 32-bit applications would no longer run as fast as 64-bit ones, but there would be no reason for anyone to stay on 32-bit, since the benefits of a full move to 64-bit would be enormous.
The move away from 32-bit PCs is the key to SSD standardization
First of all, we need to understand that, unlike hard drives, the NAND Flash memory used in SSDs can be addressed as if it were RAM. This does not mean its contents are directly accessible, since the SSD sits lower in the memory hierarchy, but the high speed of the PCI Express interface allows data to be copied very quickly and almost transparently into a portion of RAM that serves as a cache.
Until now, large files lived on hard drives, which have much lower access speeds, huge latency and long seek times. A hard drive is not accessed through a RAM memory address but through virtual memory, a slower and more complex access mechanism.
SSDs, on the other hand, allow a program to address their contents directly as if they were part of RAM, which is where enormous address space pays off. To see why, it is best to look at specific examples at the software level.
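The mechanism behind "addressing storage as if it were RAM" already exists in every operating system as memory-mapped files: the file's bytes appear inside the process's address space and arrive through the page cache on demand. A minimal sketch using Python's standard mmap module (the file name and contents are invented for the example):

```python
import mmap
import os
import tempfile

# A throwaway file stands in for a large asset stored on an SSD.
path = os.path.join(tempfile.mkdtemp(), "asset.bin")
payload = bytearray(4096)
payload[1000] = 42
with open(path, "wb") as f:
    f.write(payload)

with open(path, "r+b") as f, mmap.mmap(f.fileno(), 0) as m:
    # Plain indexing: the byte arrives through the page cache as if
    # the file were part of the process's RAM, with no read() call.
    value = m[1000]
    m[2000] = 7  # writes flow back to the file the same way
    print(value)
```

With a 64-bit address space, even a file of hundreds of gigabytes can be mapped whole; under a 32-bit process, anything beyond 4 GB simply does not fit.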
Remember that 32-bit x86 applications cannot directly access more than 4 GB of RAM; with just 48-bit addressing, on the other hand, we are talking about 256 TB of directly addressable memory. Imagine for a moment working with 4K video in a program like Sony Vegas, where the master file is several tens of gigabytes in size, and being able to scrub the timeline with complete ease.
Another example would be a huge database: imagine an application that needs to query it continuously at full speed, taking advantage of the enormous addressing speed that SSD memory allows. The possibilities are numerous, especially in applications that handle large volumes of data.
Finally, at the operating system level, keep in mind that many applications use system libraries; having the entire operating system within the processor's address space greatly speeds up every application.