:D ..um, well, yes. But quantum computing as a concept relies on producing very complex instruction sets that execute virtually instantly, and so on. And this is very old theory, from the 70s, before the x86 approach proved to be the most useful. PPC and various SIMD and MIPS-type processors survived, though, in anything from game consoles to mp3 players. And it's of course very obvious that relying on linear processing power to increase forever is not realistic.

For example, most if not all of the performance increases on Intel hardware since the Core 2 Duo have been achieved at the microcode level, not through hardware improvement. Advances in hardware engineering have made the cores smaller, but the fundamental problem with performance on silicon, that it only scales up to a certain temperature and power limit, follows it around. It's impressive to get very high performance onto a smaller chip that has massively lower power consumption than before. But the last 10 years have been scraping the barrel, technologically speaking.

So the optimisations essentially come from reducing x86 atomic instructions that recur very often and keeping the product of that process around. At the very low level you have a reasonably complex instruction word, and when it's split up you add one register to another, subtract a third, add the tail, increment, and so on. So when the result completes (which takes several clock cycles), the result of that initial operation is kept and can be produced instantly without actually performing the reduction again. That's efficient, and increased cache size, and therefore more hits, essentially does 90% of that. Since a lot of these operations also run over and over again as part of more complex high-level tasks, the optimisation gain is fairly high.
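
(If it helps to see the shape of that idea in code: here's a rough software analogy, a memoized function with made-up names. The real mechanism sits in the hardware's decode and cache stages, so this is only an illustration of the principle, not of how the silicon does it.)

```cpp
#include <cstdint>
#include <unordered_map>

// Software analogy only -- the caching described above happens in hardware,
// but the principle is the same: if the same work recurs, hand back the
// stored result instead of redoing the multi-cycle computation.
// `expensive_reduce` is a made-up stand-in for a frequently recurring operation.
std::uint64_t expensive_reduce(std::uint64_t a, std::uint64_t b) {
    std::uint64_t acc = a;
    for (int i = 0; i < 64; ++i)               // pretend this burns several cycles
        acc = acc * 6364136223846793005ULL + b;
    return acc;
}

std::uint64_t cached_reduce(std::uint64_t a, std::uint64_t b) {
    static std::unordered_map<std::uint64_t, std::uint64_t> cache;
    const std::uint64_t key = (a << 32) ^ b;   // simplistic key, fine for a sketch
    const auto it = cache.find(key);
    if (it != cache.end())
        return it->second;                     // "hit": result produced instantly
    const std::uint64_t result = expensive_reduce(a, b);
    cache.emplace(key, result);
    return result;
}
```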

Then there are CISC principles. On a mathematical level you can deduce that a certain series of operations always results in the same, less complex instruction, for example. There are numerous sequences of these very common instructions that can be logically reduced to different and less complex ones. So while the instruction set offered by the processor doesn't compute it directly without subdividing it, when that sequence occurs it can be shortened. A good compiler will try to reduce these known sequences into batches, and further increase the optimisations of the kind described in the previous paragraph. This subset is where Intel wins out over AMD in tests.
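
A concrete, well-known case of that kind of reduction is strength reduction, where the compiler swaps an expensive instruction for a cheaper equivalent sequence. A minimal sketch, with the function names made up for illustration:

```cpp
#include <cstdint>

// What the source says: multiply by a constant.
std::uint32_t scale(std::uint32_t x) {
    return x * 9;
}

// What a typical optimising compiler is free to emit instead: a shift and an
// add, because x * 9 == (x << 3) + x. The result is identical; only the
// instruction sequence is cheaper.
std::uint32_t scale_reduced(std::uint32_t x) {
    return (x << 3) + x;
}
```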

At the same time, none of these optimisations are required for the code to run on an x86-compatible processor; it just won't run as fast.

Outside of that there's also the problem, of course, that no amount of platform optimisation can rescue extremely inefficient code. Although people claim otherwise, the truth is that well-written code will always be faster than what an automatic compiler can make of bad code. If we assume programmers generally are not very good, rely on high-level SDKs, and so on, then the real-world output is of course different. Then the human-written code should be as linear as possible, and avoid anything whose running time could potentially blow up, even if it might sometimes finish sooner. That's where we get the "law" that we should always prefer a linear run-time, no matter how high the constant, since "in practice" the linear constant is still lower than an exponential run-time element. We of course know that's not true, but it's still done that way.
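
To make that "linear constant versus exponential" rule concrete, here's a toy sketch. The workloads are made up and only model cost; they don't compute anything useful:

```cpp
#include <cstdint>

// Toy model of the trade-off. Neither function computes anything meaningful;
// they only model cost.

// Linear time, but with a deliberately huge constant factor per element.
std::uint64_t linear_big_constant(int n) {
    std::uint64_t acc = 0;
    for (int i = 0; i < n; ++i)
        for (int k = 0; k < 100000; ++k)     // the "no matter how high" constant
            acc += static_cast<std::uint64_t>(i) * k;
    return acc;
}

// Exponential time: the work roughly doubles with every extra element.
std::uint64_t exponential(int n) {
    if (n <= 1) return 1;
    return exponential(n - 1) + exponential(n - 2) + 1;  // Fibonacci-style blow-up
}
```

Counting rough operation counts only (not a benchmark): at n = 20 the exponential version makes roughly twenty thousand recursive calls, well under the linear version's 2,000,000 inner iterations, so the naive exponential code actually wins; by n = 60 it needs on the order of 10^12 calls and is hopeless. Which is both why the "law" exists and why it isn't literally true.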

There are also the extended instruction sets (encryption, for example, or video encode and decode, which is frequently a target here), and that's probably where the biggest increase in performance will come over the next ten years: in specific compilation optimisations designed to execute specific routines on specific memory areas that the platform will crunch through. It's really no different from Nintendo's black boxes on the SNES cartridges.. But again, the package will still execute all the instructions without those extended instructions, just not as fast.
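
That "runs without the extended instructions, just not as fast" point is how SIMD code usually ships today: one path using the extended instruction set, one plain fallback. A minimal sketch with SSE intrinsics; `__SSE__` is the GCC/Clang macro and the function itself is just made up for illustration:

```cpp
#include <cstddef>

#if defined(__SSE__)            // GCC/Clang convention; other compilers differ
#include <xmmintrin.h>
#endif

// Add two float arrays. With SSE available, four elements go per instruction;
// without it, the plain loop still produces the same result, just not as
// fast -- which is the point made above.
void add_arrays(const float* a, const float* b, float* out, std::size_t n) {
    std::size_t i = 0;
#if defined(__SSE__)
    for (; i + 4 <= n; i += 4) {
        const __m128 va = _mm_loadu_ps(a + i);
        const __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
#endif
    for (; i < n; ++i)           // scalar tail, and the full fallback path
        out[i] = a[i] + b[i];
}
```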

And we want to keep that general approach to programming on a conceptual level, because it means your compiler always has the same target, and that target can always be executed on different hardware. That's the strength of all of this, and it's not a bad thing at all to have a general programming language at the low level. Just imagine if we had one instruction set for AMD and another one for Intel, each with their own horrible in-house compiler possibly working in concert with some programming languages but not well with others, etc. It'd be the end of open source, for one. But it's proven to be the most useful way to go about executing high-level code consistently.

Not least because you know this won't force high-level conventions into the programming languages on account of low-level limitations.

Alas, that is of course the result anyway, when we have such things as particular ways to compile with extended, platform-dependent instruction sets. And arguably, what we've seen with C++ a lot is that we know beforehand that certain types of routines run much, much faster than anything else. So you start making these shortcuts that actually lock you into a specific type of hardware. And that's where we get things such as a particular benchmark becoming the gold standard for measuring performance, while it really gives no indication of how fast the computer will actually finish "real world" tasks. The amount of favoritism and convention here frankly borders on the religious. Although C++ on its own is completely flexible and allows anything from a high-level, object-oriented, completely agnostic approach all the way down to near-assembly implementations, specific memory addressing and manual handling of locks on specific memory areas, and so on. So it's not actually the programming language that causes this, but the practical output you get with a particular approach.
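
Just to illustrate that range inside the one language, a rough sketch with made-up functions: the same sum written with the high-level, agnostic facilities and then with manual memory handling. Both are perfectly legal C++, which is the point that the language itself doesn't force either style:

```cpp
#include <cstdlib>
#include <numeric>
#include <vector>

// The high-level, hardware-agnostic end of the spectrum.
double sum_high_level(const std::vector<double>& values) {
    return std::accumulate(values.begin(), values.end(), 0.0);
}

// The manual end of the same spectrum: raw allocation, raw pointers, and
// every decision about memory is yours. Same result either way.
double sum_low_level(const double* src, std::size_t n) {
    double* buf = static_cast<double*>(std::malloc(n * sizeof(double)));
    if (!buf) return 0.0;                    // sketch-level error handling
    double total = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        buf[i] = src[i];                     // pretend this is some staging step
        total += buf[i];
    }
    std::free(buf);
    return total;
}
```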

What could conceivably replace that? The thing is that before something actually replaces common programming conventions, there first has to be an approach to programming that allows that alternative to exist. It's not going to be a new language that everyone switches to. It's going to have to be a structure and an approach to high-level programming that then, at the low level, potentially benefits in huge and massive amounts from parallel distribution and execution of routines.

And that's complex. It means that you would have to design your code at the high level to actually subdivide the task into what you already determine, up front, is going to be potentially parallelizable. And you need real talent to see that. It's easy with graphics and per-pixel operations, of course, because the result always depends on what's directly next to the origin, without fail. All the graphics card logic, and the way SIMD is used on graphics cards, depends on that. The entire industry of graphics cards with hardware instruction sets (of the "extended instruction set" kind in the example above) comes from that: the ability to parallelize the task with 100% predictability in the graphics context.
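
The per-pixel case is the textbook example of something you can split up front, because every output pixel depends only on a fixed neighbourhood of the input. A sketch of that shape with plain threads and a made-up brightness pass:

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Hypothetical per-pixel pass: every output pixel depends only on the
// corresponding input pixel, so rows can be handed to threads with no
// coordination at all -- the 100% predictable case described above.
// `amount` is assumed non-negative in this sketch.
void brighten(const std::vector<std::uint8_t>& in, std::vector<std::uint8_t>& out,
              int width, int height, int amount) {
    const int workers_wanted =
        static_cast<int>(std::max(1u, std::thread::hardware_concurrency()));
    const int rows_per_worker = (height + workers_wanted - 1) / workers_wanted;
    std::vector<std::thread> workers;
    for (int t = 0; t < workers_wanted; ++t) {
        const int y0 = t * rows_per_worker;
        const int y1 = std::min(height, y0 + rows_per_worker);
        if (y0 >= y1) break;                 // no rows left for this thread
        workers.emplace_back([&, y0, y1] {
            for (int y = y0; y < y1; ++y)
                for (int x = 0; x < width; ++x) {
                    const int v = in[y * width + x] + amount;
                    out[y * width + x] = static_cast<std::uint8_t>(v > 255 ? 255 : v);
                }
        });
    }
    for (auto& w : workers) w.join();
}
```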

But if you want to do this with more complex objects that potentially rely on a lot of other objects, a very large number of functions, complex math, and so on, then you suddenly run into a problem. Because when things are not 100% parallelizable, the performance drops utterly and completely. It's just not useful.
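
That drop-off is basically Amdahl's law: if even a small slice of the work stays serial, the achievable speedup flattens out no matter how many cores you throw at it. A quick sketch of the arithmetic:

```cpp
#include <cstdio>

// Amdahl's law: overall speedup = 1 / (serial_fraction + parallel_fraction / cores)
double amdahl_speedup(double parallel_fraction, int cores) {
    const double serial_fraction = 1.0 - parallel_fraction;
    return 1.0 / (serial_fraction + parallel_fraction / cores);
}

int main() {
    // Even with 95% of the work parallelizable, 1024 cores give under 20x.
    const double fractions[] = {1.0, 0.99, 0.95, 0.80};
    for (double p : fractions)
        std::printf("parallel %.0f%% -> speedup on 1024 cores: %.1fx\n",
                    p * 100.0, amdahl_speedup(p, 1024));
    return 0;
}
```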

So that works against you if you design graphics code like that. You still can't get above a certain level of processing power without essentially making the shader code obsolete. OpenCL is a good answer to that, because it attempts to move incrementally more complex math onto the peripheral cards. But from a more tech-nerd perspective it's a roundabout way of doing it, like crossing the stream to fetch water.

And... what you want to do instead is to program complex instruction sets manually: to create routines that will execute that complex math in, for example, the same pattern as you would execute shader code on a graphics card, but that also rely on that more complex "CPU math".

But it's not something that will turn up with a new language, or even with hardware that is capable of doing this. As explained, the principles have been around since the 70s, and we've had numerous examples of this in consumer electronics to a varying degree (although not to the level of the Cell BE... rip... but be that as it may).

Instead it's going to have to be reliant on a different approach to programming that doesn't rely on very simple solutions that anyone can do. And that's... not an easy sell, right..? Over "predictable platform increase every single year". It's like selling home-made buns as a market concept over a Deli chain. Doesn't happen.
Sounds like the graphics card and the lane are the biggest problem; I know PCIe is still in the shed compared with all the advancement we have seen with CPUs and GPGPUs. I don't think that's the only issue, though: GPUs just don't seem aligned with processors, there is still a major gap. We have seen increases in graphics processing power, we have seen larger RAM, maybe even better cache. But there is a whole larger issue at work besides bottlenecks, partly corporate control and partly because we really haven't hit an actual breakthrough in GPU processing power.

That is only taking in the hardware end of the spectrum, and this is where I think your point shines regarding how software is written and how the compilers actually get used. But then we enter the whole argument about why "silicon sucks in the long run": the fact is it's not just the math, it's also the physical limits of the hardware that needs to run that extra-special Newtonian physics.

That's where we enter The Matrix: what is real, how do you define real?

We could write a beautiful essay or bad poetry on the subject and still not hit the key notes.
Hehe, guess we already did write an essay. But you're right, we don't want real. Not in film, not in books, and not in games. You want something fantastic that's still believable. So the sales-pitch for the cinematograph wouldn't be "here are beautiful photos that you can, incidentally, show in rapid succession so it looks like it's moving. But look at those photos!". It'd be.. "it's just like you're there, seeing it yourself, even though you're not".

And the way to get that sense of realism in semi-VR, I think, is with movement, shadows, expressions, lack of pop-in, no sharp edges, actual freedom to move and look around freely, and.. without blocks for shoulders or being shifted around on a cart. And so on. That'd help immersion long before you need to start worrying about photorealism.
Oh boi we are really kicking the tin can!

You're right, we don't really want real-world realism in games; the whole reason we read books, watch movies and play games is to escape reality, not repeat the everyday.

I have never been a fan of VR, I think the technology is overrated, but that's just me. I think the future lies in concepts like Google Glass, also known as augmented reality. Though honestly we have yet to see a model representation of it, as highly unlikely as that is.

The VR argument is a lot like a kid in the candy store: at some point you eventually get sick of candy.
Some interesting posts in this topic, especially regarding the gameplay which I also have my questions about.

I have been playing some Open World games these last twelve months, and while in the beginning it is all new and interesting the gameplay tends to become rather repetitive after a while once you have a general understanding of how everything works.

You set up some goals like 'building a house' or 'a settlement', set up transport routes or trading routes, work on your reputation. In case resources need to be managed you set up farms and factories and assign NPCs to these if they are available, and create an infrastructure in which you are the decider.

It is interesting to see how good your management skills are at running a city or a civilization. If you suck at it in the beginning, it is an incentive to get better at it until you are successful at the highest difficulty level, but most people probably already turn away once they have had some success at medium difficulty, to play another game or pick up another activity/hobby, perhaps only occasionally coming back to an Open World game (when some new content or mods have been released for it).

I kind of fear that I myself will probably get bored before completing this game, check out on Youtube what the mystery at the galactic center is, and then probably stop playing.

A shame as this does look very nice and I happen to love science fiction involving exploration.
Think that's sort of what they've shown so far: that there's enough stuff going on with the factions and the exploration that navigating through it is the draw on its own, more than "completing objectives" to unlock more things. Or, making the way you play interesting outside of the abstract design.

Seems a bit like the approach they had at Particle Systems with I-War and I-War 2 (versus more traditional approaches to exploration like in Elite, etc.). What they have is a travel and flight system where you feel as if you are exploring within the same constraints as the automated bots, but where it's free enough that you also feel what you are doing is original and imaginative. That when you decide to go to a new system, you think you're breaking the rules in some way. Those were what I thought were the best parts of piracy, as well as of finding the new systems, in I-War 2, for example: you punched through and found a path into a less travelled system, found some new allies, new ships and new targets. As opposed to advancing to the next node and capturing so and so many cargo containers to level up. Arguably, the fact that I-War had a full solar system to travel freely in, where factions sometimes opened up new travel routes as you discovered them, etc., was probably the foundation of that. With a scripted set of locations to be in, the illusion wouldn't work.

Will be interesting to see if they'll pull it off. But it's not like it's impossible to do, or that any open-world game has to stay inside the constraints of your average "open world" title.. And like mentioned a few times, when you actually have the kind of free flight they've demonstrated, with being able to zap between systems and fly down to the planet from space, etc., then the amount of things you need to add to keep the illusion of a fairly dynamic world should be fairly small..

I understand that not everyone played I-War as a space tour across the Sol system (and the Badlands in I-War 2), where people get endless enjoyment out of dropping out of formation and engaging LDS manually to visit the rings of Saturn, and things like that. :D But that was fun to me. Just exploring for no reason. With the entire Sol system in I-War being largely to scale as well, finding out how far it was to Pluto and things like that was probably what I spent most time on >_<

But pretty sure the ones who are most worried about ending up with a scenario like "yay, planet number 182 explored, let's see another cutscene of a frontier base magically popping up out of the ground, now on to the next one" are Hello Games, though. But should be interesting to see what they're putting in to populate the planet-systems.
One interesting point raised blew my mind: planetary bodies have movement, have orbits, have day and night cycles based on what they orbit. The biggest one: landing and taking off is never in the same location. You could potentially enter and exit a planet's atmosphere and realise that the world isn't just a static sphere held up by magical forces. It throws a curveball at everything we have seen so far in sims, and the X series doesn't even allow landing on planets.
I'm very optimistic, I cannot wait for this game to come out :) Sci-fi has always been my weak spot and I don't mind the goods and bads and everything else, I just want to play it and see what happens :)
Shadowcaster787: One interesting point raised blew my mind: planetary bodies have movement, have orbits, have day and night cycles based on what they orbit. The biggest one: landing and taking off is never in the same location. You could potentially enter and exit a planet's atmosphere and realise that the world isn't just a static sphere held up by magical forces. It throws a curveball at everything we have seen so far in sims, and the X series doesn't even allow landing on planets.
One thing that came to mind now is that this means if we take off, stop engines, and wait a few minutes/hours, we should see the planet move off on its own. That to me is awesome.
Shadowcaster787: One interesting point raised blew my mind: planetary bodies have movement, have orbits, have day and night cycles based on what they orbit. The biggest one: landing and taking off is never in the same location. You could potentially enter and exit a planet's atmosphere and realise that the world isn't just a static sphere held up by magical forces. It throws a curveball at everything we have seen so far in sims, and the X series doesn't even allow landing on planets.
bravoman: One thing that came to mind now is that this means if we take off, stop engines, and wait a few minutes/hours, we should see the planet move off on its own. That to me is awesome.
Actually, that's not entirely how space works :D At least not IRL. Not saying it won't work in the game like that.

In space, you're always orbiting something basically, so unless you actually use your engines to hold yourself in a certain position, you will stay near the body you're orbiting.

But a fun way you could see the movement of planets in a game like Elite Dangerous for instance, is to leave the game and come back a few hours later to see that when you log back in, objects in space have indeed moved.

But other than that, yeah I think planets do actually move around in space, around the star(s?) they're orbiting.