wolfsite: Went out today and still no luck for 5000 series CPUs. Did get a 2TB M.2 drive at $100 off, so when my new rig is finished it will be a beast.
Nice! 2TB NVMe drives are expensive, so the savings sound awesome :)
Just a heads up, I've really started to enjoy my 5700 XT now that the current drivers have stabilized it. I still wonder about the 6800/XT and whether it will also take a while before it ages properly.

The 5000 Series Ryzen is really intriguing, but I won't get the BIOS update to run it until January anyway, so like I said, holding off for now is no big deal.

Has anyone else picked up a 5000 series? Or did anyone get lucky and snag one of the new 6800/XTs that went live today?
I have a huge backlog to finish, among other priorities. I'll wait for its laptop iteration, or get a Ryzen 6000 / RDNA 3 / whatever Nvidia's next is.

Big Navi is impressive. However, it is not for people who want ray tracing and DLSS, or who game at 4K.
DLSS maybe, but the ray tracing performance of the 6800XT really depends on the game.

On games that were developed explicitly for nVidia RTX it typically performs badly, which is to be expected since nVidia's and AMD's approaches to raytracing are different (AMD favours culling duplicates with less raw power, nVidia favours raw power). OTOH, Watch Dogs Legion and Dirt 5 have AMD's RT near equivalent to nVidia's, and a 6800XT is actually faster than a 3080 at RT in Dirt 5. Unsurprising, as they're both console ports and consoles use AMD's RT solution.

So looking at current performance is misleading, since you aren't going to get many titles designed explicitly and solely for RTX any more. Any console port utilising RT will be using the AMD version, and the performance will be closer to that seen in Dirt 5/WDL than in Quake 2 RTX or Minecraft RTX.

(Theoretical RT-wise, the 6800XT should fall midway between the 3070 and 3080.)
Post edited November 19, 2020 by Phasmid
AMD's Super Resolution is on the way anyway, and since it's the same technology used in the Xbox and PS5 it shouldn't suck at all, at least on paper.
Judicat0r: AMD's Super Resolution is on the way anyway, and since it's the same technology used in the Xbox and PS5 it shouldn't suck at all, at least on paper.
Super Resolution is the reason I can game in 4k on a 1080p monitor. It's one thing that AMD does really well. It's been around for some time now.

They may be strengthening it, but it already works really great if your card is powerful enough to upscale.

I play H:ZD in 1440p to stress test new patches, and The Sinking City in 4k. Very few games turn it off, but Death Stranding is one of them.
CymTyr: Super Resolution is the reason I can game in 4k on a 1080p monitor. It's one thing that AMD does really well. It's been around for some time now.
Why would you want to do this (other than testing)? Wouldn't this just use up more electricity (and make the computer hotter) without any benefit because the monitor can't actually display 4k?
Judicat0r: AMD's Super Resolution is on the way anyway, and since it's the same technology used in the Xbox and PS5 it shouldn't suck at all, at least on paper.
CymTyr: Super Resolution is the reason I can game in 4k on a 1080p monitor. [...]
Yeah, my bad, I should have been more specific:
AMD Virtual Super Resolution (https://www.amd.com/en/technologies/vsr) is what you are referring to.

FidelityFX Super Resolution is AMD's DLSS equivalent. It is still being worked on, belongs to a suite of new technologies aimed at improving image quality, and will arrive nobody knows exactly when; 2021, if I had to make a rather safe bet.

The technique of rendering an image at a higher resolution and then downscaling it is an old one, aimed at improving general quality and reducing jaggies and aliasing to some extent, because the source has more information than a lower-res image.

Something similar is commonly used in 2D graphics when you need to scale an image while limiting the quality loss.
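
To make the downscaling idea concrete, here is a minimal sketch (not from the thread; the resolutions and the random test frame are placeholder choices): render at a higher resolution, then box-average blocks of pixels back down to the display resolution, which is essentially what VSR-style supersampling does.

import numpy as np

def downscale_by_averaging(img, factor):
    # Average each factor x factor block of pixels into one output pixel (a box filter).
    h, w = img.shape[:2]
    h, w = h - h % factor, w - w % factor          # trim so the dimensions divide evenly
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3))

# "Render" at twice the linear resolution of a 1080p display, then average back down.
hires = np.random.rand(2160, 3840, 3)    # stand-in for a 4K RGB frame
lores = downscale_by_averaging(hires, 2)
print(lores.shape)                        # (1080, 1920, 3): each pixel is the mean of a 2x2 block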
dtgreene: Why would you want to do this (other than testing)? Wouldn't this just use up more electricity (and make the computer hotter) without any benefit because the monitor can't actually display 4k?
You would get less aliasing (and more details).
Post edited November 20, 2020 by clarry
dtgreene: Why would you want to do this (other than testing)? Wouldn't this just use up more electricity (and make the computer hotter) without any benefit because the monitor can't actually display 4k?
clarry: You would get less aliasing (and more details).
How would you get more details, seeing as the screen isn't actually capable of displaying them?
Talking to all my local places, the only way to guarantee getting either the CPU or the GPU is to put down a full payment to be put on a waiting list; right now it's three weeks for the CPU and two to three months for the GPU.

Main concern, though, is that some cities in my area are getting more restrictions put in place due to the current pandemic situation, so if the city where the store I'm working in is located gets pushed into a different phase, they could get shut down.
dtgreene: How would you get more details, seeing as the screen isn't actually capable of displaying them?
Rasterization (and likewise ray tracing) works on what are essentially point samples. If you have a single sample per pixel, you get aliasing and jaggies. Small (sub-pixel) details may not render at all, or render very inconsistently, unless the developer pulls crazy tricks to work around the limited resolution. (There are antialiasing tricks that work at polygon edges but don't work at all, or work poorly, in many other situations.)

For an example that is easy to visualize, imagine a scene with a phone line hanging in the distance. If the line is thinner than a pixel at the resolution you're rendering at, you might see disconnected dots and line segments hanging in the air, depending on whether or not a point sample happens to sit on the polygons that make up the line. If you move or rotate the camera ever so slightly, different parts of the line will appear while others vanish. It is obvious that details that exist there simply vanish.

By working at a higher resolution, you get more point samples per pixel, which are then averaged together. At a sufficiently high res, every pixel along the line hits the line at one or more points, which gives you a continuous line. Thus, less detail is lost.

Although I talked about polygon intersection, the same thing applies to all texture detail inside a polygon. Those are sampled too, and sampling at a higher resolution preserves more detail. You can't ever sample too much, except due to performance issues.

What you see with your eye or a camera isn't point samples. A "pixel" within a camera's sensor will collect all the photons that land on it, which can come from the entire volume of space that the lens focuses onto the sensor. Thus a camera can see, for example, a tiny but bright light source miles away that wouldn't be visible if you rendered the scene by sampling thousands of times per pixel. There's no "phone line is disconnected black chunks floating in the air because eye's resolution isn't high enough to capture all parts of it" in the real world.
Post edited November 20, 2020 by clarry
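
A toy sketch of the phone-line example above (mine, not clarry's; the line position, thickness and sample counts are arbitrary): the "scene" is a nearly horizontal line about 0.3 pixels thick, rendered once with a single point sample per pixel and once with an 8x8 grid of sub-samples averaged per pixel. With one sample per pixel only part of the line happens to be hit, so it shows up as a broken segment; with supersampling, every pixel along it picks up some coverage.

import numpy as np

def scene(x, y):
    # Toy "scene": 1.0 where a thin (0.3 px), nearly horizontal line passes, 0.0 elsewhere.
    return np.where(np.abs(y - (10.2 + 0.02 * x)) < 0.15, 1.0, 0.0)

def render(width, height, samples_per_axis):
    img = np.zeros((height, width))
    offs = (np.arange(samples_per_axis) + 0.5) / samples_per_axis   # sub-sample offsets in [0, 1)
    for py in range(height):
        for px in range(width):
            xs, ys = np.meshgrid(px + offs, py + offs)
            img[py, px] = scene(xs, ys).mean()   # average the point samples inside the pixel
    return img

one_spp  = render(32, 20, 1)   # 1 sample per pixel: the line only shows where a sample lands on it
many_spp = render(32, 20, 8)   # 8x8 samples per pixel: every pixel along the line gets some coverage
print((one_spp[10] > 0).sum(), (many_spp[10] > 0).sum())   # e.g. 14 vs 32 lit pixels in the line's row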
dtgreene: How would you get more details, seeing as the screen isn't actually capable of displaying them?
clarry: Rasterization (and likewise ray tracing) works on what are essentially point samples. [...]
That gives me an idea.

If you zoom into the Mandelbrot set, you might find that parts of it appear disconnected, even though it's been proven that the set is, in fact, connected. So, if you were to render the set at a higher resolution and average the picture down, might the connections that are otherwise not visible at that resolution appear?

Maybe this is something I should try experimenting with.
dtgreene: If you zoom into the Mandelbrot set, you might find that parts of it appear disconnected, even though it's been proven that the set is, in fact, connected. So, if you were to render the set at a higher resolution and average the picture down, might the connections that are otherwise not visible at that resolution appear?
If they appear by zooming in a bit, then they should also appear if you render at a higher resolution. Zooming in is functionally equivalent to increasing the resolution and then cropping the edges to bring you back to the original resolution.

Of course, averaging a huge number of samples into a single pixel means diminishing returns. Chances are that at some point adding more samples doesn't change the resulting image anymore. But that also depends on the dynamic range of the picture. A "hot spot" could change the final pixel value by quite a bit even if it's averaged together with lots of other samples.
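
For anyone who wants to try that experiment, a rough sketch (my own, with an arbitrary viewport, iteration count and supersampling factor): render the same view once with one sample per pixel and once with each pixel averaged from a 4x4 grid of samples. At one sample per pixel the thin filaments tend to show up as scattered dots; averaged, they appear as faint grey but much more continuous.

import numpy as np

def mandelbrot_membership(c, max_iter=200):
    # 1.0 where the point still hasn't escaped after max_iter iterations, else 0.0.
    z = np.zeros_like(c)
    inside = np.ones(c.shape, dtype=bool)
    for _ in range(max_iter):
        z[inside] = z[inside] ** 2 + c[inside]   # only iterate points that haven't escaped yet
        inside &= np.abs(z) <= 2.0
    return inside.astype(float)

def render(center, span, width, height, supersample=1):
    # Render a square viewport; with supersample > 1, average an NxN grid of samples per pixel.
    w, h = width * supersample, height * supersample
    xs = np.linspace(center.real - span / 2, center.real + span / 2, w)
    ys = np.linspace(center.imag - span / 2, center.imag + span / 2, h)
    img = mandelbrot_membership(xs[None, :] + 1j * ys[:, None])
    return img.reshape(height, supersample, width, supersample).mean(axis=(1, 3))

# A zoom near the "seahorse valley", where thin filaments break up at 1 sample per pixel.
center, span = complex(-0.745, 0.11), 0.01
plain  = render(center, span, 200, 200, supersample=1)   # filaments appear as disconnected dots
smooth = render(center, span, 200, 200, supersample=4)   # filaments show up as faint grey instead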
dtgreene: How would you get more details, seeing as the screen isn't actually capable of displaying them?
clarry: Rasterization (and likewise ray tracing) works on what are essentially point samples. [...]
Good stuff as usual.

This pretty much explains why Codemasters games look so "pixelated" but look good when rendered at higher resolutions or with lots of anti-aliasing.