Quote:
Originally Posted by OttoMoto
alex d you must admit that realtime 3d graphics did'nt change much from Half-life 2 time. And i don't care what ambient occlusion shaders you put in - things won't change much. And that's exactly what i was about. Not about new techniques and features, but about really different look. And that means that DNF would look as any other game on the market.
|
What exactly do you mean by "really different look"? So-called "realistic" rendering techniques using shaders and rasterization have exactly the same "look" as the ray tracing you're lauding; the only difference is in quality, not in "looking different." If you compare how things were done in HL2's day with how they're done now, you'll see major changes.
Shaders, being the programmable pipeline they are, allow coders to make their games look as similar to or as different from the norm as they please. If they used completely "revolutionary" methods of coding a rendering engine, the result would still be pretty much the same in terms of "looks", just not in quality or performance. We're at the stage where games look the same because developers want them to look like that, not because of restrictive fixed-function APIs like back in the days of the GeForce 4.
For example, in my engine I've got a shader set up to give a ton of different visual styles just by passing an attribute-map texture as a parameter. Just by changing that texture, my projects can switch from a "generically realistic" Lambertian shading style to a fully NPR cel-shaded look. Those are about as "different" as looks get, and they're using a single shader; it doesn't even require any so-called "upgrading" to achieve.
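To make the idea concrete, here's a minimal CPU-side sketch of that kind of ramp/attribute-map shading (the names and structure are illustrative, not my actual shader code): the diffuse term N·L is remapped through a small lookup table, so the same code gives a smooth Lambertian falloff or a banded cel-shaded look depending purely on which table you feed it.
[code]
// Sketch of the ramp-texture idea (illustrative, not the poster's actual shader):
// the diffuse term N.L is remapped through a 1D "attribute map", so the same
// routine yields smooth Lambertian shading or banded cel shading depending on
// which lookup table/texture is bound.
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// The "attribute map": a small 1D table indexed by clamped N.L.
struct AttributeMap {
    std::vector<float> samples;   // e.g. 256 entries in [0,1]
    float lookup(float t) const {
        t = std::clamp(t, 0.0f, 1.0f);
        size_t i = static_cast<size_t>(t * (samples.size() - 1));
        return samples[i];
    }
};

// One shading evaluation, CPU-side for clarity; the real thing would be a
// pixel shader doing a texture fetch instead of the array lookup.
float shade(const Vec3& normal, const Vec3& lightDir, const AttributeMap& ramp) {
    float ndotl = std::max(dot(normal, lightDir), 0.0f);
    return ramp.lookup(ndotl);
}

// A linear ramp gives the usual Lambertian falloff; a ramp with two or three
// hard steps gives a cel-shaded look -- same shader, different texture.
[/code]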
Quote:
By the way - i'm not impressed by new Crytek engine and it's lighting model. At all. It's not realistic, it's just glamourous... (imho half life lighting still kicks ass). I'd say i more impressed by id software achievement of megatextures. They did something really special, when others just upgrade shaders.
|
Hate to break it to you, but MegaTexturing is just a further extension of the clip-mapping technique for reducing mipmap data and the texture-atlas streaming concept, along with plenty of "upgraded" shader wizardry for figuring out the correct mip levels and texture coordinates. Have you read any of the papers detailing the technique? They're well worth the read, as are Crytek's papers on Light Propagation Volumes.
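If it helps, here's a rough sketch of the virtual-texturing lookup that MegaTexture-style systems are built on (illustrative only, not id's implementation): the huge texture is split into fixed-size pages, a small page table records which pages are resident in a physical cache texture, and the shader remaps virtual UVs into that cache, with mip selection normally driven by UV derivatives.
[code]
// Rough sketch of a virtual-texture page lookup (illustrative, not id's code).
#include <cstdint>
#include <vector>

struct PageEntry {
    bool     resident = false;
    uint16_t cacheX = 0, cacheY = 0;   // location of the page in the cache texture
    uint8_t  mip = 0;                  // mip level actually resident
};

struct VirtualTexture {
    int pageSize;                      // e.g. 128 texels
    int pagesWide, pagesHigh;          // page-table dimensions at mip 0
    std::vector<PageEntry> pageTable;  // one entry per mip-0 page (simplified)

    // Translate a virtual UV (0..1) into cache-texture texel coordinates.
    // A real renderer falls back to a coarser resident mip when the page is
    // missing and records a "request" so the streamer loads it next frame.
    bool resolve(float u, float v, int& outX, int& outY) const {
        int px = static_cast<int>(u * pagesWide);
        int py = static_cast<int>(v * pagesHigh);
        const PageEntry& e = pageTable[py * pagesWide + px];
        if (!e.resident) return false;             // page fault -> request it
        float fu = u * pagesWide - px;             // position inside the page
        float fv = v * pagesHigh - py;
        outX = e.cacheX * pageSize + static_cast<int>(fu * pageSize);
        outY = e.cacheY * pageSize + static_cast<int>(fv * pageSize);
        return true;
    }
};
[/code]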
As far as I'm concerned, LPV is a far greater step forward in realtime rendering techniques than MegaTexturing. MegaTexturing was an incremental step forward, while LPV is a very novel way of simulating ray-traced-style global illumination in a rasterized renderer while maintaining performance on DX9-level kit.
For all intents and purposes it's a completely dynamic hybrid rasterizer/ray-tracer calculating full global illumination every frame in realtime, which is much closer to the "something completely different" ray tracing you're demanding than the pre-processed static lightmapping of HL2. When it was released, HL2's lighting model was far, far closer to HL1's than to modern Deferred/Inferred renderers. It wasn't until Ep2 came out that they actually got a decent renderer in the engine.
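For a feel of how LPV is structured, here's a heavily simplified sketch (the real technique stores spherical-harmonic coefficients per cell and injects them from a reflective shadow map; this scalar version just shows the "inject bounced light, then iteratively spread it to neighbouring cells" idea, with all names made up for illustration):
[code]
// Heavily simplified illustration of a light-propagation-volume style pass.
#include <vector>

struct LightVolume {
    int n;                       // cells per axis
    std::vector<float> cells;    // one scalar "radiance" per cell (real LPV: SH)

    explicit LightVolume(int size) : n(size), cells(size * size * size, 0.0f) {}
    float& at(int x, int y, int z) { return cells[(z * n + y) * n + x]; }

    // Injection: deposit bounced light into cells containing lit surfaces
    // (in the real algorithm this comes from a reflective shadow map).
    void inject(int x, int y, int z, float energy) { at(x, y, z) += energy; }

    // One propagation pass: each cell hands a fraction of its energy to its
    // six axis neighbours. A handful of passes per frame spreads indirect
    // light through the volume.
    void propagate(float fraction) {
        static const int dirs[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
        std::vector<float> next(cells.size(), 0.0f);
        for (int z = 0; z < n; ++z)
        for (int y = 0; y < n; ++y)
        for (int x = 0; x < n; ++x) {
            float e = at(x, y, z);
            next[(z * n + y) * n + x] += e * (1.0f - fraction);
            for (const auto& o : dirs) {
                int nx = x + o[0], ny = y + o[1], nz = z + o[2];
                if (nx < 0 || ny < 0 || nz < 0 || nx >= n || ny >= n || nz >= n)
                    continue;
                next[(nz * n + ny) * n + nx] += e * fraction / 6.0f;
            }
        }
        cells.swap(next);
    }
};
[/code]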
Quote:
Now a word about realtime rendering (i'm sorry guys but i have to enlarge this post a lil). "old comparing to overall computer graphics evolution" means that there are many other things you can include in your realtime renderer. Or maybe change the way you render stuff. And as example there was raytracing. Yeah sure it's damn heavy for modern gpu's, but you gotta start from something. In that case DirectX11 is not a revolution - just another upgrade. So dear mr. alex d - you did'nt get my message at all. But you act angry and blame me in all sins possible. It's not a good behaviour. Don't you think?
|
Anger? Blame? Putting emphasis behind one's point isn't any indication of anger. Any anger or blame you gleaned from my post is just in your head. I also never said DX11 was completely revolutionary; I was merely pointing out that it is a big step forward compared to what came before, not stuck in the mud like you were claiming. As such, it seems clear that you were the one to miss my point, my friend. So, I'll repeat it for you:
It is completely and utterly irrelevant to compare the progress of realtime and non-realtime graphics development, as the two situations have nothing in common! Of course people can develop utterly revolutionary and unique rendering methods for non-realtime purposes pretty quickly, as they have no worries whatsoever about the performance of such techniques. They can leave each individual frame rendering on a server farm for months if they need to, which they often do.
The demands of realtime rendering, especially in games where the processors are occupied by other performance-critical systems, require each frame to be rendered in a matter of milliseconds, and quite often the scene has to be drawn more than once per frame. This is why the greatest engine-coding minds in the world, like Carmack, Sweeney and the Crytek guys, can "only upgrade a few shaders" rather than give you something that looks "completely different" using a completely new rendering method not rooted in already-established tech.
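To put a number on "a matter of milliseconds", here's the back-of-the-envelope budget maths (the split between rendering and everything else is an assumption of mine, not a figure from any particular engine):
[code]
// Back-of-the-envelope frame budget: at 60 fps the whole frame -- game logic,
// physics, AI, audio and rendering combined -- has to fit in roughly 16.7 ms.
#include <cstdio>

int main() {
    const double targetFps   = 60.0;
    const double frameMs     = 1000.0 / targetFps;   // ~16.7 ms total
    const double nonRenderMs = 8.0;                  // assumed logic/physics/AI share
    const double renderMs    = frameMs - nonRenderMs;
    std::printf("Frame budget: %.1f ms, of which rendering gets ~%.1f ms\n",
                frameMs, renderMs);
    return 0;
}
[/code]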
It's not like people aren't developing these new methods; it's the fact that they're [i]simply not possible in realtime game situations yet[/i]. Id Software are actively pursuing ray-casting through Sparse Voxel Octrees for their next engine (idTech 6), but that's not looking likely to go anywhere productive until near the end of the decade at least. Even ignoring the processing-power issues and the fact that it's very difficult to have dynamic geometry, the data sets currently required for a single, relatively tiny scene run into the terabytes! I'm in love with the technique, but damn is it a huge step backwards on current hardware compared to what's possible with the "boring old" rasterization shaders.
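For the curious, this is roughly the kind of node an SVO stores, plus the sort of back-of-the-envelope sum that lands uniquely detailed geometry in terabyte territory (all figures are my own assumptions, not numbers from id's research):
[code]
// Illustrative sparse-voxel-octree node and a crude size estimate showing why
// fine-grained, un-instanced voxelisation eats storage.
#include <cstdint>
#include <cstdio>

// A compact SVO node: a mask of which of the 8 children exist, an index to
// the first child in a node pool, and packed surface attributes for leaves.
struct SvoNode {
    uint8_t  childMask;
    uint32_t firstChild;
    uint32_t packedColour;   // e.g. RGBA8
    uint32_t packedNormal;   // e.g. two packed 16-bit components
};

int main() {
    // Assumed figures: unique (un-instanced) detail over a modest level.
    const double surfaceM2   = 20000.0;  // assumed total surface area of a level
    const double voxPerMetre = 4096.0;   // ~0.25 mm voxels, "unique detail everywhere"
    const double leafNodes   = surfaceM2 * voxPerMetre * voxPerMetre; // surface shell only
    const double allNodes    = leafNodes * 1.33;   // rough extra for interior octree levels
    const double bytes       = allNodes * sizeof(SvoNode);
    std::printf("~%.1f TB of octree data before compression\n", bytes / 1e12);
    return 0;
}
[/code]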
Furthermore, your whole argument against DX11 seems to be along the lines that it's holding developers back from making their own unique rendering paths and escaping the restrictions of rasterisation. That's far from the case, as it's yet another step closer to fully programmable GPUs of the style Larrabee was meant to be.
Eventually, yes, there will be plenty of power (and, more importantly, memory bandwidth) in these GPUs to handle tasks like fully ray-traced scenes, but not right now. Sure, it's a "start" as you say, but a "start" is no use for a commercial game if it means lower quality, a lower frame-rate, more resource use and a far more static environment than the "standard" option.
Basically, what you're saying makes very little sense. You say you want progress and for games not to look the same as they have for years, yet you're utterly unimpressed by the major steps forward across the industry and are loudly demanding a switch to techniques that would set the industry back a decade in capability.
Unless you mean you would prefer games to have lower-quality visuals with lower performance just for the sake of having some revolutionary new way of being drawn to the screen? That would at least be consistent, even if it isn't exactly a realistic point of view, as "how" it's drawn to the screen won't affect you, the gamer, at all.
Using established techniques the developers are very familiar with means they can deliver a far higher-quality product than they could with a brand-new, utterly revolutionary technique they've never used before and don't yet know how to bend to their needs.