Well I have to NOW if I want to play Cyberpunk at anything faster than 10 FPS...
Beware that for older motherboards, like the ones with 4XX chipsets, once you update to a new BIOS that allows for the 5000 series, you give up support for some older CPUs, and you can't get it back.

On another note, Anandtech has a nice review of the 4000 series APUs for desktop. They are officially available to big OEMs only, but they may turn up at some online retail sellers.
https://www.anandtech.com/show/16308/testing-the-worlds-best-apus-desktop-amd-ryzen-4750g-4650g-and-4350g
btw, do you think we'll soon use ARM CPUs in desktop PCs? Since the Apple M1 chip released, and it is apparently a very powerful CPU, I wonder whether there will even be a Zen 4, or whether AMD will instead start its own ARM CPU lineup for the desktop. It looks to me like ARM is the future for the desktop.
avatar
apehater: btw, do you think we'll soon use ARM CPUs in desktop PCs? Since the Apple M1 chip released, and it is apparently a very powerful CPU, I wonder whether there will even be a Zen 4, or whether AMD will instead start its own ARM CPU lineup for the desktop. It looks to me like ARM is the future for the desktop.
I can see Apple switching to ARM for desktop if they haven't already.

I don't see the rest of the industry switching anytime soon. There are a lot of Windows games that require x86 CPUs to run, and until enough gets ported or there's CPU emulation capable of running modern games, much of the population, particularly those looking for even semi-decent gaming set-ups, will still require x86. The need for x86 also applies to business and other types of proprietary software.

Linux computers will still be using x86 for the foreseeable future simply because Windows computers, which are the most widely available and can easily be converted to run Linux (more easily than ARM Windows machines, at any rate, and I believe nobody has Linux working on ARM Macs yet), are still dominant. You *can* run ARM Linux with the right hardware; if you want to try it out (without emulation), just get something like a Raspberry Pi or a PineBook Pro.
avatar
apehater: btw, do you think we'll soon use ARM CPUs in desktop PCs? Since the Apple M1 chip released, and it is apparently a very powerful CPU, I wonder whether there will even be a Zen 4, or whether AMD will instead start its own ARM CPU lineup for the desktop. It looks to me like ARM is the future for the desktop.
I'm still not convinced the ISA makes much of a difference. In general, you make a CPU better by engineering it better & using a newer process, not by using a different instruction set. That's not to say the ISA makes no difference whatsoever, but considering how much of a difference deep pipelining, parallel execution units, huge caches, and branch prediction make in modern high-performance CPUs, I'd guess the role of the instruction set is pretty small.

"Someone finally produced a performant ARM chip" isn't really enough of an excuse to throw away all of amd64 and the engineering effort behind it.
Post edited December 19, 2020 by clarry
I guess that with some added extensions ARM could take over a lot of the market, but only over many years.
avatar
apehater: btw, do you think we'll soon use ARM CPUs in desktop PCs? Since the Apple M1 chip released, and it is apparently a very powerful CPU, I wonder whether there will even be a Zen 4, or whether AMD will instead start its own ARM CPU lineup for the desktop. It looks to me like ARM is the future for the desktop.
avatar
clarry: I'm still not convinced the ISA makes much of a difference. In general, you make a CPU better by engineering it better & using a newer process, not by using a different instruction set. That's not to say the ISA makes no difference whatsoever, but considering how much of a difference deep pipelining, parallel execution units, huge caches, and branch prediction make in modern high-performance CPUs, I'd guess the role of the instruction set is pretty small.

"Someone finally produced a performant ARM chip" isn't really enough of an excuse to throw away all of amd64 and the engineering effort behind it.
I see an advantage on ARM's side if you look at all the benchmark results of Apple's M1 chip, even the benchmarks run through their x86 emulator Rosetta 2 on the M1, because the ARM chips are much more power efficient. But of course I'm no engineer or computer scientist, so it's the opinion of a noob.
avatar
Themken: I guess that with some added extensions ARM could take over a lot of the market, but only over many years.
Well, that was what I assumed before all the benchmark results of Apple's M1 ARM chip. Now I think this transition will happen much faster, and that sounds great, because these chips are more power efficient and don't need much cooling. Not having to buy a heavy cooler for one's desktop CPU sounds great.
avatar
apehater: btw, do you think we'll soon use ARM CPUs in desktop PCs? Since the Apple M1 chip released, and it is apparently a very powerful CPU, I wonder whether there will even be a Zen 4, or whether AMD will instead start its own ARM CPU lineup for the desktop. It looks to me like ARM is the future for the desktop.
avatar
dtgreene: I can see Apple switching to ARM for desktop if they haven't already.

I don't see the rest of the industry switching anytime soon. There are a lot of Windows games that require x86 CPUs to run, and until enough gets ported or there's CPU emulation capable of running modern games, much of the population, particularly those looking for even semi-decent gaming set-ups, will still require x86. The need for x86 also applies to business and other types of proprietary software.

Linux computers will still be using x86 for the foreseeable future simply because Windows computers, which are the most widely available and can easily be converted to run Linux (more easily than ARM Windows machines, at any rate, and I believe nobody has Linux working on ARM Macs yet), are still dominant. You *can* run ARM Linux with the right hardware; if you want to try it out (without emulation), just get something like a Raspberry Pi or a PineBook Pro.
Can't say much about Linux on ARM, but when it comes to x86 emulation on ARM, the benchmark results for that are impressive too. It's named Rosetta 2, and here are some bench results.
Post edited December 19, 2020 by apehater
avatar
apehater: btw, do you think we'll soon use ARM CPUs in desktop PCs? Since the Apple M1 chip released, and it is apparently a very powerful CPU, I wonder whether there will even be a Zen 4, or whether AMD will instead start its own ARM CPU lineup for the desktop. It looks to me like ARM is the future for the desktop.
Soon? Definitely not. In the future? Who knows.
Honestly, Apple's chip is remarkable as a first attempt, but it's far from being exceptional. ARM is an ISA just a tad younger than x86: it's been around since 1985, and since then it has had much more significant penetration in low-power markets than in HPC and servers.

That doesn't mean it isn't as powerful as x64. There are a lot of factors at play, but two key points contribute to this decades-old debate:

First it was MIPS, then PowerPC, then Alpha, then SPARC, and then RISC-V that was supposed to replace x86.
Every architecture sooner or later meets its limitations.
Software would have to be rewritten for the new architecture, and this could take many years; not just the act itself, but steering the whole industry to a new standard is a massive task.

The x86/x64-versus-ARM debate is old, and while the transition will happen sooner or later, it is definitely likely to happen later rather than sooner.
Post edited December 19, 2020 by Judicat0r
avatar
clarry: I'm still not convinced the ISA makes much of a difference. In general, you make a CPU better by engineering it better & using a newer process, not by using a different instruction set. That's not to say the ISA makes no difference whatsoever, but considering how much of a difference deep pipelining, parallel execution units, huge caches, and branch prediction make in modern high-performance CPUs, I'd guess the role of the instruction set is pretty small.

"Someone finally produced a performant ARM chip" isn't really enough of an excuse to throw away all of amd64 and the engineering effort behind it.
avatar
apehater: I see an advantage on ARM's side if you look at all the benchmark results of Apple's M1 chip, even the benchmarks run through their x86 emulator Rosetta 2 on the M1, because the ARM chips are much more power efficient. But of course I'm no engineer or computer scientist, so it's the opinion of a noob.
What I was trying to say is it's not "good because it's ARM", it's good because it's well engineered and produced on a modern process. No reason it has to be ARM, and I don't think there's any reason why you couldn't design a chip with similar power / performance characteristics using some other architecture.

Also, power efficiency and performance are slightly at odds... you can't just take a chip, see how well it performs at 2 GHz, and linearly extrapolate to 5 GHz. There's a reason why mobile chips and desktop chips aren't competing with each other. There are always trade-offs involved in brute performance versus power efficiency. For similar reasons, server CPUs are clocked lower than desktop CPUs (they care about power bills and efficiency in large data centers), but you probably don't want to buy one for gaming.
avatar
clarry: I'm still not convinced the ISA makes much of a difference. In general, you make a CPU better by engineering it better & using a newer process, not by using a different instruction set. That's not to say the ISA makes no difference whatsoever, but considering how much of a difference deep pipelining, parallel execution units, huge caches, and branch prediction make in modern high-performance CPUs, I'd guess the role of the instruction set is pretty small.

"Someone finally produced a performant ARM chip" isn't really enough of an excuse to throw away all of amd64 and the engineering effort behind it.
"when you make something idiot proof, the world will find a better idiot"

Crap, now that I think about it, I may be in the latter group O.o

This is to say that the more powerful the CPU is, the more ways there are to bloat a program, be it the operating system or a game.

The good and bad thing about x86/x64 is the backwards compatibility. I can run many 90s games on my 2020 computer with minimal effort.
ARM is "backwards-compatibility free" at the moment, and that means, unlike x86/x64, all the resources get to be fully utilized. IMHO, if someone designed an x64 chip with only one application in mind, I have no doubt it would be more efficient than an ARM chip.
avatar
apehater: I see an advantage on ARM's side if you look at all the benchmark results of Apple's M1 chip, even the benchmarks run through their x86 emulator Rosetta 2 on the M1, because the ARM chips are much more power efficient. But of course I'm no engineer or computer scientist, so it's the opinion of a noob.
avatar
clarry: What I was trying to say is it's not "good because it's ARM", it's good because it's well engineered and produced on a modern process. No reason it has to be ARM, and I don't think there's any reason why you couldn't design a chip with similar power / performance characteristics using some other architecture.

Also, power efficiency and performance are slightly at odds... you can't just take a chip, see how well it performs at 2 GHz, and linearly extrapolate to 5 GHz. There's a reason why mobile chips and desktop chips aren't competing with each other. There are always trade-offs involved in brute performance versus power efficiency. For similar reasons, server CPUs are clocked lower than desktop CPUs (they care about power bills and efficiency in large data centers), but you probably don't want to buy one for gaming.
Interesting, I was assuming linearity. But if ARM is just as hot and power hungry at 5 GHz, then there's no point in a transition. I guess the only advantage of transitioning desktop and notebook CPUs to ARM would be having the same ISA as tablet and mobile CPUs, and therefore easier development for software devs.
avatar
apehater: ...
Clock speed is a major component of CPU power consumption. Voltage is another (even bigger) one, and as overclockers tend to find, you usually have to increase voltage to remain stable at a higher frequency. (Conversely, underclocking and undervolting improve power efficiency.)

Beyond that, there's the issue of gate delay. There's a certain amount of logic that a CPU has been designed to perform during one clock cycle. You can change your CPU frequency, but you cannot change how fast signals propagate through these logic gates. All the gates that are there must settle during a clock period. There's also some (constant) overhead per each clock cycle during which the gates aren't performing any useful work.

What this means is that a CPU designed for high power efficiency is designed for a lower clock speed and a higher number of gates per clock. This reduces the overhead and the power draw from clock switching. But the higher number of gates puts a ceiling on how fast your clocks can be; the gates won't switch any faster, and if you try to clock faster than the gates work, you'll just crash. The performance implications of the number of gates per clock are harder to reason about, but the consensus seems to favor reasonably short stages (6-8 FO4 delays, see the link at the bottom) and deep pipelines. Longer stages can help power efficiency at the expense of peak performance. But this is a sweeping generalization, and there is probably quite a bit of headroom for engineers to optimize things one way or another. Also, gates can be designed to be more or less efficient, which may affect their delay.

So in addition to the inherent power draw of faster clock switching, there is an inherent tradeoff in designing a CPU for performance versus designing a CPU for performance per watt, and that is why one simply cannot take a CPU of a given design and extrapolate how it would perform at some arbitrary clock frequency.

[url=http://www.eecs.harvard.edu/~dbrooks/cs246/deep-pipes.pdf]http://www.eecs.harvard.edu/~dbrooks/cs246/deep-pipes.pdf[/url]
Post edited December 19, 2020 by clarry
avatar
Judicat0r: Software would have to be rewritten for the new architecture, and this could take many years; not just the act itself, but steering the whole industry to a new standard is a massive task.
It's worth noting that this issue isn't as bad as it sounds, and it's mainly a problem for proprietary software that the developer doesn't want to compile for the new architecture, or for abandoned proprietary software (a category that includes a lot of games).

All that needs to be done is as follows:
* Compilers need to support the new architecture as a target; both gcc and clang support aarch64, so this isn't a problem here.
* Assembly programs need to be converted to the new architecture; fortunately, assembly is used very little these days (OS kernels, device drivers, and embedded systems being the cases where you're most likely to see it). (ZSNES is one example of a program that won't transfer because of this, but then again, it doesn't even support amd64, which is currently the most common desktop/laptop ISA.)
* Any bugs related to subtle differences that affect higher level languages need to be taken care of. This shouldn't be too much of an issue, but there might be programs that try to do things like access 4 byte values at addresses that aren't multiples of 4.

Solve those, and the software is now running on the new CPU type.
avatar
Dark_art_: ARM is "backwards-compatibility free" at the moment, and that means, unlike x86/x64, all the resources get to be fully utilized. IMHO, if someone designed an x64 chip with only one application in mind, I have no doubt it would be more efficient than an ARM chip.
Not quite true; most aarch64 CPUs support 32-bit ARM programs.

(The ones in the Raspberry Pi 3 and 4, for example, do so, and in fact Raspberry Pi OS runs in 32-bit mode.)
Post edited December 19, 2020 by dtgreene
All this ARM talk has got me thinking about dual booting with Linux again and exercising my old GNU roots from DOS 5.0 days.

It's certainly tempting, though it would be more for pet projects than anything.
Is the 5000 series better than the 4000 series?
Post edited December 21, 2020 by LeahKerr