Now, if it were for Doom: Knee-Deep in the Dead, then I'd be impressed.
GameRager: Akin to the VR craze? Because that headset seems bulky and is very pricey for most people (400-500 bucks for most sets).
hedwards: Unlike VR helmets, ray tracing is definitely coming; it's just a question of how long until consumer-grade kit can handle it. If this demo is any indication, that's probably sometime in the next 2-3 years.

So, the GP is likely right, depending upon how you define the foreseeable future and the degree to which the demo was honest.

VR helmets are unlikely to ever be a common thing as the technology as currently designed is inherently incompatible with how human eyes work. You'll always have that disconnect between where the brain thinks your eyes should focus and where they actually want to focus.

It's unlikely that that will ever be solved with a helmet. It may eventually be solved via something like a holodeck with some sort of contraption in the middle to walk around, but probably never via a head mountable unit.
hedwards: lol, good one
I disagree. What we are shown in the games and demos is just a rather simple form of raytracing, and even that is capable of bringing the most powerful consumer-grade graphics cards to their knees.

Raytracing can be of effectively infinite complexity, so no amount of computing power is ever simply enough.
What we see are simpler models of it running on modern hardware that can barely keep up, at frame rates well below what traditional rendering algorithms achieve, and often you have to scan the pictures closely just to spot the differences.
And I'm not even factoring in geometry complexity and the number of light sources.

Clearly, less compute-intensive raytracing settings can be used to draw graphics, but still: what's the result compared to traditional graphics? Can the hardware keep up? Is it really worth it?

Will we be able to see games with decent raytracing complexity playable at high framerates in the near future (say 2-3 years, to cite you)?
I have doubts.

Keep in mind that there are render farms with thousands of CPUs and GPUs dedicated specifically to rendering short clips, the special effects that we enjoy in movies, and those still take days to render.
Judicat0r: I disagree. What we are shown in the games and demos is just a rather simple form of raytracing, and even that is capable of bringing the most powerful consumer-grade graphics cards to their knees.
The tech is evolving, though. This current iteration is just that: the first consumer real-time ray tracing solution, and I can see it only getting better generation by generation, just as GPUs generally do. I presume that by the third generation most games will do most of their lighting and reflections with ray tracing, or at least have an option for it. The next generation of consoles will very likely be able to do real-time ray tracing as well.

Really, the need for real-time ray tracing isn't coming only from the consumer side; it is coming from the industry side as well, as CGI companies would more than welcome a cheaper and faster rendering pipeline. It could be beneficial for the movie industry too, letting directors see, right on set, a closer approximation of what their digital end result will look like.
GameRager: Akin to the VR craze? Because that headset seems bulky and is very pricey for most people (400-500 bucks for most sets).
hedwards: Unlike VR helmets, ray tracing is definitely coming; it's just a question of how long until consumer-grade kit can handle it. If this demo is any indication, that's probably sometime in the next 2-3 years.

So, the GP is likely right, depending upon how you define the foreseeable future and the degree to which the demo was honest.

VR helmets are unlikely to ever be a common thing as the technology as currently designed is inherently incompatible with how human eyes work. You'll always have that disconnect between where the brain thinks your eyes should focus and where they actually want to focus.

It's unlikely that that will ever be solved with a helmet. It may eventually be solved via something like a holodeck with some sort of contraption in the middle to walk around, but probably never via a head mountable unit.
THAT, or a brain interface of some sort.

Sidenote: Did you ever see the full-body suit deal in The Lawnmower Man? I loved that film, but god would that be inconvenient/uncomfortable.
hedwards: Unlike VR helmets, ray tracing is definitely coming; it's just a question of how long until consumer-grade kit can handle it. If this demo is any indication, that's probably sometime in the next 2-3 years.

So, the GP is likely right, depending upon how you define the foreseeable future and the degree to which the demo was honest.

VR helmets are unlikely to ever be a common thing as the technology as currently designed is inherently incompatible with how human eyes work. You'll always have that disconnect between where the brain thinks your eyes should focus and where they actually want to focus.

It's unlikely that that will ever be solved with a helmet. It may eventually be solved via something like a holodeck with some sort of contraption in the middle to walk around, but probably never via a head mountable unit.

lol, good one
Judicat0r: I disagree. What we are shown in the games and demos is just a rather simple form of raytracing, and even that is capable of bringing the most powerful consumer-grade graphics cards to their knees.

Raytracing can be of effectively infinite complexity, so no amount of computing power is ever simply enough.
What we see are simpler models of it running on modern hardware that can barely keep up, at frame rates well below what traditional rendering algorithms achieve, and often you have to scan the pictures closely just to spot the differences.
And I'm not even factoring in geometry complexity and the number of light sources.

Clearly, less compute-intensive raytracing settings can be used to draw graphics, but still: what's the result compared to traditional graphics? Can the hardware keep up? Is it really worth it?

Will we be able to see games with decent raytracing complexity playable at high framerates in the near future (say 2-3 years, to cite you)?
I have doubts.

Keep in mind that there are render farms with thousands of CPUs and GPUs dedicated specifically to rendering short clips, the special effects that we enjoy in movies, and those still take days to render.
You have your doubts, but I don't see anything in your post that would point that way. The processing requirements of raytracing are primarily about the number of rays that have to be processed per frame. The complexity of the environment does matter somewhat, but not nearly to the degree you're suggesting.

The tech demo we're seeing is good enough for the majority of games where you'd want to be ray tracing. It may not be good enough for something like Crysis' jungle scenes, but for the types of 3D games that people typically play, this is already more than good enough.

Also, as far as framerates go, with this kind of technology it's much easier to target specific framerates than with the current method of rendering scenes. You can scale the number of rays up and down to a significant degree without it being as obvious that you're doing it.

As far as those render farms go, yes, they exist, but they're also rendering at higher resolutions and with more frames. They're also doing it for a medium where the only thing that matters is what you see, which is very different from games, where you can vary the number of rays in a scene if you need to. Plus, they're often rendering a much finer level of detail, things like individual hairs, that is not even remotely necessary when this is first showing up in games. Games have been varying detail levels for years to help get through areas where the hardware can't quite keep up.
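To make the "scale the number of rays up and down" idea concrete, here's a minimal sketch of how a frame-time-driven ray budget could work (Python-flavoured, with made-up thresholds; not any real engine's API):

    # Sketch: adjust samples (primary rays) per pixel to chase a frame-time target.
    # All limits and thresholds here are illustrative, not taken from a real renderer.
    TARGET_FRAME_MS = 16.7   # roughly 60 fps
    MIN_SPP, MAX_SPP = 1, 8  # allowed samples-per-pixel range

    def adjust_ray_budget(current_spp: int, last_frame_ms: float) -> int:
        if last_frame_ms > TARGET_FRAME_MS * 1.1 and current_spp > MIN_SPP:
            return current_spp - 1   # over budget: trace fewer rays next frame
        if last_frame_ms < TARGET_FRAME_MS * 0.8 and current_spp < MAX_SPP:
            return current_spp + 1   # spare time: spend it on more rays
        return current_spp           # close enough: keep the current budget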

Judicat0r: I disagree. What we are shown in the games and demos is just a rather simple form of raytracing, and even that is capable of bringing the most powerful consumer-grade graphics cards to their knees.
tomimt: The tech is evolving, though. This current iteration is just that: the first consumer real-time ray tracing solution, and I can see it only getting better generation by generation, just as GPUs generally do. I presume that by the third generation most games will do most of their lighting and reflections with ray tracing, or at least have an option for it. The next generation of consoles will very likely be able to do real-time ray tracing as well.

Really, the need for real-time ray tracing isn't coming only from the consumer side; it is coming from the industry side as well, as CGI companies would more than welcome a cheaper and faster rendering pipeline. It could be beneficial for the movie industry too, letting directors see, right on set, a closer approximation of what their digital end result will look like.
That's the thing there. Compared with 1st generation 3d graphics, 1st generation raytracing is likely to look good for decades after the point where people stop working on the games. The main difference at this point between what we're seeing and what we could see is in the models and the number of rays being used. The models are and will continue to be the thing holding realism back for the foreseeable future as raytracing is a relatively simple thing to do, just very resource intensive.

I'd be very surprised if those first games don't include some method of increasing the number of rays for when future hardware can handle more as it's unlikely to require much effort to include that. Sort of like how some older games had graphics settings that weren't practical at the time for many of the customers.
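To picture what that could look like, a settings table could simply expose ray budgets beyond what launch hardware can drive, the way old games shipped "ultra" presets nobody could run at the time. This is purely hypothetical; the preset names and numbers are invented for illustration:

    # Hypothetical ray-tracing presets, including one aimed at future hardware.
    # Nothing here is from an actual game or driver; values are made up.
    RT_PRESETS = {
        "low":    {"rays_per_pixel": 1,  "max_bounces": 1},
        "medium": {"rays_per_pixel": 2,  "max_bounces": 2},
        "high":   {"rays_per_pixel": 4,  "max_bounces": 3},
        "future": {"rays_per_pixel": 16, "max_bounces": 8},  # headroom for cards that don't exist yet
    }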
hedwards: Unlike VR helmets, ray tracing is definitely coming; it's just a question of how long until consumer-grade kit can handle it. If this demo is any indication, that's probably sometime in the next 2-3 years.

So, the GP is likely right, depending upon how you define the foreseeable future and the degree to which the demo was honest.

VR helmets are unlikely to ever be a common thing as the technology as currently designed is inherently incompatible with how human eyes work. You'll always have that disconnect between where the brain thinks your eyes should focus and where they actually want to focus.

It's unlikely that that will ever be solved with a helmet. It may eventually be solved via something like a holodeck with some sort of contraption in the middle to walk around, but probably never via a head mountable unit.
GameRager: THAT, or a brain interface of some sort.

Sidenote: Did you ever see the full-body suit deal in The Lawnmower Man? I loved that film, but god would that be inconvenient/uncomfortable.
I could see that. I would not want to screw around with something like that until it had been extremely well tested, but I'd be surprised if it doesn't eventually happen.
Post edited May 29, 2019 by hedwards
hedwards: The tech demo we're seeing is good enough for the majority of games where you'd want to be ray tracing. It may not be good enough for something like Crysis' jungle scenes, but for the types of 3D games that people typically play, this is already more than good enough.
I just imagined this tech being applied to all of Crysis and wondered if any modern PC could pull it off (like the meme). :\
hedwards: The tech demo we're seeing is good enough for the majority of games where you'd want to be ray tracing. It may not be good enough for something like Crysis' jungle scenes, but for the types of 3D games that people typically play, this is already more than good enough.
GameRager: I just imagined this tech being applied to all of Crysis and wondered if any modern PC could pull it off (like the meme). :\
Crysis will probably be one of the last games where this technology is used successfully to render. Just the sheer number of leaves in some of those scenes is going to bring this technology to its knees.

But most games won't have anywhere near that number of plants and leaves.

In most games the environment will naturally be a lot less complicated, without having to make things look artificial.
GameRager: THAT, or a brain interface of some sort.

Sidenote: Did you ever see the full-body suit deal in The Lawnmower Man? I loved that film, but god would that be inconvenient/uncomfortable.
hedwards: I could see that. I would not want to screw around with something like that until it had been extremely well tested, but I'd be surprised if it doesn't eventually happen.
One guy I saw on YouTube said that's how VR will likely progress... first these headsets, then body suits, then a brain interface, etc. If so, I will wait for the last one and hope it's given for free to poor people as some sort of health benefit.

GameRager: I just imagined this tech being applied to all of Crysis and wondered if any modern PC could pull it off (like the meme). :\
hedwards: Crysis will probably be one of the last games where this technology is used successfully to render. Just the sheer number of leaves in some of those scenes is going to bring this technology to its knees.

But most games won't have anywhere near that number of plants and leaves.

In most games the environment will naturally be a lot less complicated, without having to make things look artificial.
Yup... I now remember (btw) this one game/tech demo (it's actually a tech demo, I'm not just calling it that) where the guy had like thousands of individually animated palm trees and such in a virtual jungle, and they needed a server farm to run it.
Post edited May 29, 2019 by GameRager
DreamedArtist: I own an RTX 2080 Ti, but tbh I do not like the look of the game and how they got rid of the griddy look of Quake. But hey, I will give it a shot and see how it looks at 4K and post some screens of it here.
exorio: I'm currently using Quake 2 XP. Looks more gorgeous than this, with much lower end hardware requirements obviously.
DreamedArtist: It is legit a PR stunt to get people to adopt RTX with a free download of their childhood classic. I've seen this all before, but hey, there are people who will take this.
Still looks fugly and won't justify an RTX purchase, lol.

Okay, probably the ability to pull 4K+ resolution, but there are sooo many ports able to do 4K by now.

Why not give away a real RTX-only game, like back in the day when you got all those awesome CDs bundled with the cards?
Judicat0r: I disagree. What we are shown in the games and demos is just a rather simple form of raytracing, and even that is capable of bringing the most powerful consumer-grade graphics cards to their knees.
tomimt: The tech is evolving, though. This current iteration is just that: the first consumer real-time ray tracing solution, and I can see it only getting better generation by generation, just as GPUs generally do. I presume that by the third generation most games will do most of their lighting and reflections with ray tracing, or at least have an option for it. The next generation of consoles will very likely be able to do real-time ray tracing as well.

Really, the need for real-time ray tracing isn't coming only from the consumer side; it is coming from the industry side as well, as CGI companies would more than welcome a cheaper and faster rendering pipeline. It could be beneficial for the movie industry too, letting directors see, right on set, a closer approximation of what their digital end result will look like.
The tech is clearly evolving, but so is the complexity of what has to be rendered: it's a snake eating its own tail.
I agree that powerful hardware is welcome, BUT is it really cheaper? Nvidia's top cards cost an arm and a leg.
Judicat0r: I disagree. What we are shown in the games and demos is just a rather simple form of raytracing, and even that is capable of bringing the most powerful consumer-grade graphics cards to their knees.

Raytracing can be of effectively infinite complexity, so no amount of computing power is ever simply enough.
What we see are simpler models of it running on modern hardware that can barely keep up, at frame rates well below what traditional rendering algorithms achieve, and often you have to scan the pictures closely just to spot the differences.
And I'm not even factoring in geometry complexity and the number of light sources.

Clearly, less compute-intensive raytracing settings can be used to draw graphics, but still: what's the result compared to traditional graphics? Can the hardware keep up? Is it really worth it?

Will we be able to see games with decent raytracing complexity playable at high framerates in the near future (say 2-3 years, to cite you)?
I have doubts.

Keep in mind that there are render farms with thousands of CPUs and GPUs dedicated specifically to rendering short clips, the special effects that we enjoy in movies, and those still take days to render.
hedwards: You have your doubts, but I don't see anything in your post that would point that way. The processing requirements of raytracing are primarily about the number of rays that have to be processed per frame. The complexity of the environment does matter somewhat, but not nearly to the degree you're suggesting.

The tech demo we're seeing is good enough for the majority of games where you'd want to be ray tracing. It may not be good enough for something like Crysis' jungle scenes, but for the types of 3D games that people typically play, this is already more than good enough.

Also, as far as framerates go, with this kind of technology it's much easier to target specific framerates than with the current method of rendering scenes. You can scale the number of rays up and down to a significant degree without it being as obvious that you're doing it.

As far as those render farms go, yes, they exist, but they're also rendering at higher resolutions and with more frames. They're also doing it for a medium where the only thing that matters is what you see, which is very different from games, where you can vary the number of rays in a scene if you need to. Plus, they're often rendering a much finer level of detail, things like individual hairs, that is not even remotely necessary when this is first showing up in games. Games have been varying detail levels for years to help get through areas where the hardware can't quite keep up.

tomimt: The tech is evolving, though. This current iteration is just that: the first consumer real-time ray tracing solution, and I can see it only getting better generation by generation, just as GPUs generally do. I presume that by the third generation most games will do most of their lighting and reflections with ray tracing, or at least have an option for it. The next generation of consoles will very likely be able to do real-time ray tracing as well.

Really, the need for real-time ray tracing isn't coming only from the consumer side; it is coming from the industry side as well, as CGI companies would more than welcome a cheaper and faster rendering pipeline. It could be beneficial for the movie industry too, letting directors see, right on set, a closer approximation of what their digital end result will look like.
hedwards: That's the thing there. Compared with 1st generation 3d graphics, 1st generation raytracing is likely to look good for decades after the point where people stop working on the games. The main difference at this point between what we're seeing and what we could see is in the models and the number of rays being used. The models are and will continue to be the thing holding realism back for the foreseeable future as raytracing is a relatively simple thing to do, just very resource intensive.

I'd be very surprised if those first games don't include some method of increasing the number of rays for when future hardware can handle more as it's unlikely to require much effort to include that. Sort of like how some older games had graphics settings that weren't practical at the time for many of the customers.
GameRager: THAT, or a brain interface of some sort.

Sidenote: Did you ever see the full-body suit deal in The Lawnmower Man? I loved that film, but god would that be inconvenient/uncomfortable.
hedwards: I could see that. I would not want to screw around with something like that until it had been extremely well tested, but I'd be surprised if it doesn't eventually happen.
I actually wrote that the complexity of raytracing is infinite: I don't know how to state it in another way.
You overlook a couple of things: yes, the number of rays is one of the heaviest variables in the equation, but you don't take into account the number of steps, which is very impactful both visually and computationally.
That clearly impacts the framerates as well, and if you need to tune down the number of rays and steps until it badly affects the visual result, then what's the point of it all? Having shiny writing on a splash screen or a GPU box that reads RAYTRACING?
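(To put rough numbers on that rays-times-steps scaling, here's a napkin sketch; I'm treating "steps" as bounces per ray, and every figure is made up purely for illustration:)

    # Napkin math: per-frame ray count grows with resolution, samples and bounces.
    # Assumes one secondary ray plus one shadow ray per bounce; illustrative only.
    def rays_per_frame(width, height, samples_per_pixel, bounces):
        per_sample = 1 + bounces * 2
        return width * height * samples_per_pixel * per_sample

    print(rays_per_frame(1920, 1080, 1, 2))    # ~10 million rays: real-time-style settings
    print(rays_per_frame(1920, 1080, 256, 8))  # ~9 billion rays: closer to offline quality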

Geometry complexity usually isn't heavy, but once you factor in displacement and tessellation I can assure you that it becomes a problem, AND geometry detail is not going to decrease year on year; quite the contrary.
Also light sources: keep in mind that if you want to achieve decent results you are going to need a lot of them.

Render farms don't work like that: if they can output a clip in a week today, they will clearly output the same clip in a day five years from now, but the demand for power will increase accordingly as the technology moves on. In five years render farms will produce professional-grade output in times comparable to today's, maybe even longer.
Don't think that they render at a much higher resolution than what's possible today, where you can easily reach south of 50M pixels with just three displays.
And a 'much finer level of detail, things like individual hairs' apparently factors in importantly, so geometry complexity is a thing then. ;)

All of the above is just to address your points, but you need, if you like, to understand that what we are shown is just marketing: raytracing is not viable unless it is a half-assed form of it.

The tech demo features a rendering algorithm based on ray casting that involves a great deal of approximation in the form of multi-sampling to achieve decent results; it's not raytracing, simple as that.
I repeat myself: three years for real-time raytracing?
I'm doubtful.

Edit: fixed spelling.
Post edited May 30, 2019 by Judicat0r
hedwards: You have your doubts, but I don't see anything in your post that would point that way. The processing requirements of raytracing are primarily about the number of rays that have to be processed per frame. The complexity of the environment does matter somewhat, but not nearly to the degree you're suggesting.

The tech demo we're seeing is good enough for the majority of games where you'd want to be ray tracing. It may not be good enough for something like Crysis' jungle scenes, but for the types of 3D games that people typically play, this is already more than good enough.

Also, as far as framerates go, with this kind of technology it's much easier to target specific framerates than with the current method of rendering scenes. You can scale the number of rays up and down to a significant degree without it being as obvious that you're doing it.

As far as those render farms go, yes, they exist, but they're also rendering at higher resolutions and with more frames. They're also doing it for a medium where the only thing that matters is what you see, which is very different from games, where you can vary the number of rays in a scene if you need to. Plus, they're often rendering a much finer level of detail, things like individual hairs, that is not even remotely necessary when this is first showing up in games. Games have been varying detail levels for years to help get through areas where the hardware can't quite keep up.

That's the thing there. Compared with 1st generation 3d graphics, 1st generation raytracing is likely to look good for decades after the point where people stop working on the games. The main difference at this point between what we're seeing and what we could see is in the models and the number of rays being used. The models are and will continue to be the thing holding realism back for the foreseeable future as raytracing is a relatively simple thing to do, just very resource intensive.

I'd be very surprised if those first games don't include some method of increasing the number of rays for when future hardware can handle more as it's unlikely to require much effort to include that. Sort of like how some older games had graphics settings that weren't practical at the time for many of the customers.

I could see that. I would not want to screw around with something like that until it had been extremely well tested, but I'd be surprised if it doesn't eventually happen.
Judicat0r: I actually wrote that the complexity of raytracing is infinite: I don't know how to state it in another way.
You overlook a couple of things: yes, the number of rays is one of the heaviest variables in the equation, but you don't take into account the number of steps, which is very impactful both visually and computationally.
That clearly impacts the framerates as well, and if you need to tune down the number of rays and steps until it badly affects the visual result, then what's the point of it all? Having shiny writing on a splash screen or a GPU box that reads RAYTRACING?

Geometry complexity usually isn't heavy, but once you factor in displacement and tessellation I can assure you that it becomes a problem, AND geometry detail is not going to decrease year on year; quite the contrary.
Also light sources: keep in mind that if you want to achieve decent results you are going to need a lot of them.

Render farms don't work like that: if they can output a clip in a week today, they will clearly output the same clip in a day five years from now, but the demand for power will increase accordingly as the technology moves on. In five years render farms will produce professional-grade output in times comparable to today's, maybe even longer.
Don't think that they render at a much higher resolution than what's possible today, where you can easily reach south of 50M pixels with just three displays.
And a 'much finer level of detail, things like individual hairs' apparently factors in importantly, so geometry complexity is a thing then. ;)

All of the above is just to address your points, but you need, if you like, to understand that what we are shown is just marketing: raytracing is not viable unless it is a half-assed form of it.

The tech demo features a rendering algorithm based on ray casting that involves a great deal of approximation in the form of multi-sampling to achieve decent results; it's not raytracing, simple as that.
I repeat myself: three years for real-time raytracing?
I'm doubtful.

Edit: fixed spelling.
These are things that mostly apply to movies and TV, not video games, as there are other trade-offs to be had. Rendering a video game in real time via ray tracing is a much easier problem than you're letting on, because you get to control the level of detail shown to the player in a way that you can't with other media.

You're not likely to have the budget for the huge numbers of artists needed to generate the assets necessary to get that level of complexity. Even with AA games, you don't see that level of detail because of the costs and storage space associated with such levels of detail. Not to mention the increased cognitive load on the player that has to decide what to pay attention to. That's not going to change just because we start using raytracing.

Granted, you don't quite get the option of doing it via distance the way that raster engines do, but the situations you're describing just won't apply to 1st-gen raytracing games, and may never apply, as those are mostly things that distract the player from playing the game.

As far as the sampling goes, that doesn't make it not raytracing and it's likely a lot easier to remove that than it is to get the raytracing engine going in the first place.

As far as geometric complexity goes, I never said that wasn't an issue, just that it's not something that's likely to be an issue in the first gen raytraced games. They'll just design the games around the constraint the same way they always have. They get to choose what the geometry is like and you're not likely to see them adding realistic hair like that until they have that detail solved.

In short, these seem like overly picky standards and an attempt at rationalizing why it's going to take such a long time. In practice, I doubt very much that once this demo is released it won't be good enough for typical players; it certainly looks better than what we currently have.
DreamedArtist: I own an RTX 2080 Ti, but tbh I do not like the look of the game and how they got rid of the griddy look of Quake. But hey, I will give it a shot and see how it looks at 4K and post some screens of it here.

It is legit a PR stunt to get people to adopt RTX with a free download of their childhood classic. I've seen this all before, but hey, there are people who will take this.
exorio: Still looks fugly and won't justify an RTX purchase, lol.

Okay, probably the ability to pull 4K+ resolution, but there are sooo many ports able to do 4K by now.

Why not give away a real RTX-only game, like back in the day when you got all those awesome CDs bundled with the cards?
You make it sound like it's fuglier than the game was originally. This is a tech demo as in they likely turned the effects up to make it clear to the viewer what the changes were. I'd personally wait until the actual demo is released before making a judgment call as it's pretty clear that they tuned it to make the effects as obvious as possible.

I could be wrong about that, but I'd expect that they'll tone it down a bit to make the experience a bit more consistent with what they're selling.
Post edited May 30, 2019 by hedwards
Quake II runs on a potato.

Oh no... red grapes... artificial grapes with grape flavours splattering everywhere.

Quake II is getting a Potato upgrade: it makes everything look like a potato with light hitting it.

And the shiny, reflective, mirror-like knives cutting into the potato.

Blackberries making fly sounds and moving like flies.

Well, that describes the graphics of that ancient game.
hedwards: The processing requirements of raytracing are primarily about the number of rays that have to be processed per frame. The complexity of the environment does matter somewhat, but not nearly to the degree you're suggesting.
That sounds too optimistic. I'd phrase it like this: the number of rays is a multiplier on scene complexity. Which means, yes, you can trivially blow your processing budget just by throwing more rays at it. But what is the processing that a ray does? Yes, it intersects with the scene!
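A toy calculation of that multiplier, under the common assumption that rays walk an acceleration structure (a BVH) rather than testing every object; all numbers are invented for illustration:

    import math

    # Each ray has to be intersected against the scene, so the ray count multiplies
    # whatever that per-ray test costs. Figures below are illustrative only.
    def naive_tests(num_rays, num_objects):
        return num_rays * num_objects                         # brute force: every ray vs. every object

    def bvh_tests(num_rays, num_objects):
        return num_rays * math.ceil(math.log2(num_objects))   # BVH: ~log2(N) node visits per ray

    rays = 1920 * 1080  # one primary ray per pixel at 1080p
    print(naive_tests(rays, 100_000))  # ~2e11 tests: hopeless within a 16 ms frame
    print(bvh_tests(rays, 100_000))    # ~3.5e7 visits: plausible on a modern GPU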
exorio: Still looks fugly and won't justify an RTX purchase, lol.

Okay, probably the ability to pull 4K+ resolution, but there are sooo many ports able to do 4K by now.

Why not give away a real RTX-only game, like back in the day when you got all those awesome CDs bundled with the cards?
hedwards: You make it sound like it's fuglier than the game was originally. This is a tech demo as in they likely turned the effects up to make it clear to the viewer what the changes were. I'd personally wait until the actual demo is released before making a judgment call as it's pretty clear that they tuned it to make the effects as obvious as possible.

I could be wrong about that, but I'd expect that they'll tone it down a bit to make the experience a bit more consistent with what they're selling.
It still looks out of place/off from the original feel of the game (imo)... also, ANY demo is supposed to make the product look good to potential buyers... that's a good part of the POINT of a tech demo (and trailers as well). If it can't succeed in that, how are we to assume the final product will be any better?