Developers Say Memory Is Faster on PS4 Than Xbox One

Lightknight

Mugwamp Supreme
Nov 26, 2008
4,860
0
0
Strazdas said:
Erm, no, not exactly. Cloud computing is kind of special, since it does not even require servers to exist. Basically, in the cloud you have 100 (example number) Xboxes connected together sharing resources. So, for example, when you are at work, Xbox owner A uses your Xbox's power to help with calculations, and when you are home you get to use Xbox owner A's power if he's not using it, and that would be shared between many Xboxes. Usually it needs a server to find other Xboxes, but you can program it to "find on its own" by trying possible address ranges and then asking the Xboxes it finds for a list. This is how P2P downloads work in principle, and they can survive without a master server (tracker) thanks to the sharing of IP lists.
Thing is, no company has ever done a proper cloud yet. At least no one has announced one; who knows, maybe we've got some guy in his basement who made it (the guy in this case being some military system that created a cloud for local use in military operations). What is quite popular now is cluster servers: basically you take a bunch of computers, put them next to each other and make them work together. But that's just a server anyway.
Cloud storage, however, does exist, and it basically stores the file in every place where it is available and then uses the least loaded server to send you the data, so it sort of uses the cloud principle of load sharing. But cloud computing and cloud storage are a long road of difference apart.
That being said, I do not believe Microsoft will even attempt a real cloud and are only throwing buzzwords around. More likely they'll do server computing, where they try to offload some of the computation to the server (hence their 300 million investment in servers), and likely fail due to the horrific state of American internet connections.
The real cloud would be functionally identical to the type of server processing you and I both assume they'll be doing.

Real cloud computing also introduces an entire host of problems including making some already existing issues (like internet connection) even more problematic.

So when companies say cloud computing, I assume they are lying to us until proven otherwise and that they just mean server-side processing. As of right now, server-side processing is just so much more efficient. In fact, for the most part cloud computing would only benefit companies like console manufacturers so that they don't have to host servers.

very much true, and this is why cloud computing can only work when we all have fiber optics. So go bug your ISP to get you one.
Cloud computing would cause some other issues that really shouldn't be the customer's responsibility. I don't want other people to be utilizing any of my bandwidth or my console's power unless I'm in the game with them. I don't think cloud computing should be the way of the future. Not unless it's for some altruistic ventures or something like that where we are giving some of our machines face time willingly for a good cause or if we're being compensated accordingly.

P.S. The sarcasm in that post was quite obvious, not sure why people took it literally. Then again, you never know; some people would post something like that and be serious...
That's the primary reason why Poe's law now applies to any extreme position rather than just fundamentalism. Got to throw that ol' winking smiley on everything or expect it to be missed. But I felt like the context of this one was quite strong.

Micalas said:
I don't understand how the company that made Windows can be so bad at hardware. It's baffling. You know how this stuff works, Microsoft! Christ...
It's important to remember two things:

1. Microsoft did not make a bad piece of hardware. They only made a machine that is weak when compared to its competitor, Sony's PS4. The XBO is likely to be a powerful machine capable of doing quite a bit even if it falls short of the PS4. So don't think that they created crap. They just created something that fails to impress thanks to the competition doing a better job.

2. Sony is a hardware company. They actually should have an advantage here. You'd probably be surprised how badly an OS designed by Sony for a PC would perform.
 

MeChaNiZ3D

New member
Aug 30, 2011
3,104
0
0
As someone who knows very little about GDDR5 as opposed to DDR3, I was confused enough by people arguing that they would be roughly equivalent that I stopped having an opinion. I am glad to now have some professional opinion to set the record straight.
 

Strazdas

Robots will replace your job
May 28, 2011
8,407
0
0
Kross said:
First of all I'd like to say I'm honored the master has spoken to me. I've been stalking some of your comments and it's really nice you took the time.
OK, now on to the arguing.

Current SSDs are both SMART capable
But SMART is a lie. It's designed to show very good health for as long as possible so warranty costs are lower. It hides broken sectors from you as long as it can and even keeps reserve sectors to masquerade as good ones in place of broken ones. If SMART is showing problems, it's time to start panicking, because the hardware has already failed at disguising its own failure; doom is imminent at this point, and yet Windows STILL tries to hide bad clusters and compensate for failures. The whole HDD failure detection scheme is designed to hide it from the end user as well as possible.
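If you want to see how much the drive has already quietly swapped out, a rough sketch like this does it (assuming smartctl from the smartmontools package is installed and you can run it as root; the attribute names vary by vendor, especially on SSDs):
[code]
# Rough sketch: ask smartctl (smartmontools) what the drive will admit to.
# "Reallocated_Sector_Ct" is the usual counter of sectors the drive has quietly
# remapped to its spare area; the exact attribute name can differ by vendor.
import subprocess

def reallocated_sectors(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Reallocated_Sector_Ct" in line:
            return int(line.split()[-1])  # raw value sits in the last column
    return None

print("Sectors already remapped behind your back:", reallocated_sectors())
[/code]
A raw value above zero means the drive has been quietly covering for dying sectors for a while; by the time the overall SMART verdict flips to "failing", the spare area is mostly gone.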
Though I find that most failures of my hard drives happened simply because the motor wore out and started giving up after 7 years until the drive eventually crashed. SSDs, having no moving parts, get around this problem. Still, they are way too expensive for the benefit unless they are used for something like a webserver, which is great for sites like the Escapist but not really that great for a regular gamer.

Cloud computing is virtual machines.

Dynamically sharing computing problems with a client network is a form of Distributed Computing.

Cloud computing operates on a similar resource allocation principle to airlines overbooking flights (maximum hardware utilization with uneven demand). It relies on the theory that 100% of your RAM/CPU/disk IO capacity is rarely used by one operating system + processes, so you put another virtual machine on the same system that can use the idle resources.

Distributed computing solves large or ongoing problems, so it will always use 100% of whatever resources you allocate to it, which is TERRIBLE in a "cloud" environment, as it means a single instance will choke out every other instance on the same machine for a particular resource (usually CPU in traditional distributed computing; in a VM it's more typically things like disk I/O from running a database).

From a user's perspective, "cloud" anything is EXACTLY the same as any other sort of "online service". They are hosted services at a data center somewhere that the administrators can spin up far faster than having to hook up new hardware. There's no end user difference from something being in "the cloud", other than that companies can offer online services for a cheaper outlay than they could previously.
From an end user perspective it's online services regardless, true, but that's the problem to begin with. Apparently most people don't want online services running their SP games. Not that it could even work with the centralized servers MS is trying to do, but it could in theory with distributed computing if you have enough machines in the neighborhood.
Will distributed computing always use 100%, though? What if the supply of resources is greater than the demand? P2P does not always utilize all the resources allocated to it, even though it technically could, because there is not enough demand. Wouldn't that be the same for other calculations if we had supply larger than demand?
Currently distributed computing solves large and ongoing problems[footnote]Hence my comment about it not being used for many things at all yet[/footnote], but it can be used for other stuff too.

Lightknight said:
The real cloud would be functionally identical to the type of server processing you and I both assume they'll be doing.

Real cloud computing also introduces an entire host of problems including making some already existing issues (like internet connection) even more problematic.

So when companies say cloud computing, I assume they are lying to us until proven otherwise and that they just mean server-side processing. As of right now, server-side processing is just so much more efficient. In fact, for the most part cloud computing would only benefit companies like console manufacturers so that they don't have to host servers.
Very much true, they indeed usually just mean server processing. "Cloud" is just a popular buzzword now that doesn't seem to mean anything anymore anyway.

Cloud computing would cause some other issues that really shouldn't be the customer's responsibility. I don't want other people to be utilizing any of my bandwidth or my console's power unless I'm in the game with them.
Yes, and people like you ARE THE PROBLEM. You want to utilize resources from other people, but don't want to give yours in return, hence the choke point: no one is sharing.
If you've got fiber optics, your bandwidth is a non-issue though. Proper fiber gives you so much bandwidth that you can do anything with your internet and it does not choke, and there are no bandwidth caps from ISPs on fiber anyway (if yours imposes one, they are terrible people and should never sell a contract again). Processing power, however, does raise your electricity bill and wear down your hardware, so it's understandable that you don't want to give any, but then you won't receive any either, and we're back to local hardware only.

1. Microsoft did not make a bad piece of hardware.
I disagree. While they try to cover it up, there were leaks showing they had massive problems with their attempt at a different CPU design that was supposed to be cheaper but failed, so they wasted a lot of money, and given the instability in testing we should expect high failure rates.
Now, unless multiple sources flat out lied about it, they did make a bad piece of hardware. Unless they hired a magician who fixed all those problems in the last few months.
 

Lightknight

Mugwamp Supreme
Nov 26, 2008
4,860
0
0
Strazdas said:
Lightknight said:
Cloud computing would cause some other issues that really shouldn't be the customer's responsibility. I don't want other people to be utilizing any of my bandwidth or my console's power unless I'm in the game with them.
Yes, and people like you ARE THE PROBLEM. You want to utilize resources from other people, but don't want to give yours in return, hence the choke point: no one is sharing.
If you've got fiber optics, your bandwidth is a non-issue though. Proper fiber gives you so much bandwidth that you can do anything with your internet and it does not choke, and there are no bandwidth caps from ISPs on fiber anyway (if yours imposes one, they are terrible people and should never sell a contract again). Processing power, however, does raise your electricity bill and wear down your hardware, so it's understandable that you don't want to give any, but then you won't receive any either, and we're back to local hardware only.
I don't want to use the resources from other people either. I don't think any product on the market is good enough at load balancing if any of the machines augmenting my game has low bandwidth or gets shut off in the middle of a computation. I want companies to stop trying to shift all the costs onto consumers and buy their own damn servers just like we've done for decades. If they're going to use consumers' machines then you're talking about making our hardware do their work. If that's the case, we should be compensated.

If I got fiber optic bandwidth, it would still be an issue. Just because I have a fast internet connection does not mean these other machines do. In internet traffic, the weakest link sets the speed you get.
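To put some made-up numbers on that (just a toy sketch, nothing official):
[code]
# Toy illustration with made-up numbers: end-to-end throughput is capped by
# the slowest link in the chain, no matter how fast my own connection is.
my_connection_mbps = 1000            # hypothetical fiber line at my house
helper_links_mbps = [1000, 25, 150]  # hypothetical machines "helping" with the work

effective_mbps = min([my_connection_mbps] + helper_links_mbps)
print(f"Effective rate: {effective_mbps} Mbps")  # 25 Mbps -- the weakest link wins
[/code]
Adding more helpers never raises that minimum; it only adds more chances that one of them is the 25.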

But yes, living in google fiber areas or even areas that google fiber is threatening to enter would be nice. I don't even care about 1GB up/down speeds. 100MBs up/down would already be fantastic for everything but downloading the largest files and even then you'd be cutting download times by several hours for those.

1. Microsoft did not make a bad piece of hardware.
I disagree. While they try to cover it up, there were leaks showing they had massive problems with their attempt at a different CPU design that was supposed to be cheaper but failed, so they wasted a lot of money, and given the instability in testing we should expect high failure rates.
Now, unless multiple sources flat out lied about it, they did make a bad piece of hardware. Unless they hired a magician who fixed all those problems in the last few months.
Another round of high failure rates? If that happened then yes, I would absolutely agree at that point. But as of now with the numbers we have, I think they just made a weaker machine and forcing the kinect (which they admitted costs almost as much as the console) is what has inflated costs. So what we're really looking at is something like a $300 machine compared to Sony's $400 model which by all accounts is looking like quite the machine for that price tag. Except MS is charging us a mandatory $500 for it.
 

Lightknight

Mugwamp Supreme
Nov 26, 2008
4,860
0
0
Kross said:
Cloud computing is virtual machines [http://en.wikipedia.org/wiki/Hypervisor].

Dynamically sharing computing problems with a client network is a form of Distributed Computing [http://en.wikipedia.org/wiki/List_of_distributed_computing_projects].

Cloud computing operates on a similar resource allocation principle to airlines overbooking flights (maximum hardware utilization with uneven demand). It relies on the theory that 100% of your RAM/CPU/disk IO capacity is rarely used by one operating system + processes, so you put another virtual machine on the same system that can use the idle resources.

Distributed computing solves large or ongoing problems, so it will always use 100% of whatever resources you allocate to it, which is TERRIBLE in a "cloud" environment, as it means a single instance will choke out every other instance on the same machine for a particular resource (usually CPU in traditional distributed computing; in a VM it's more typically things like disk I/O from running a database).

From a user's perspective, "cloud" anything is EXACTLY the same as any other sort of "online service". They are hosted services at a data center somewhere that the administrators can spin up far faster than having to hook up new hardware. There's no end user difference from something being in "the cloud", other than that companies can offer online services for a cheaper outlay than they could previously.
That's what I was thinking. There was a period of time when everyone, even the cloud computing leaders, was using the term for any kind of online service at all, including the idea of load balancing across multiple machines (distributed computing). I work with VMs all day long. I'm the sole manager of them for an international corporation and it isn't even part of my job description. Heh.

Of all the companies, Sony's PS3 volunteer computing for Folding@home is the best example of successful distributed computing. I really hope they allow us to do things like that this generation too. I loved going to bed at night or leaving for work in the morning knowing that my PS3 was actually doing something useful. I'd like the ability to set it to work more autonomously.

I'm glad you agree that this would cause a host of other problems where gaming is concerned. As far as I can tell, the optimum scenario would be a large datastore with a very fast internet connection so that users are only bottlenecked by their own speeds. We haven't seen anything better than that aside from the client machine being powerful enough to handle it.

So I think it's a safe assumption that MS is doing what we traditionally think of as the datastore model.
 

Strazdas

Robots will replace your job
May 28, 2011
8,407
0
0
Lightknight said:
I don't want to use the resources from other people either.
Then you don't want clouds. Fair enough, but we need there to be a want for the cloud for this theoretical situation. Without such a need, we should just go without one, quite obviously.

I don't think any product on the market is good enough at load balancing if any of the machines augmenting my game has low bandwidth or gets shut off in the middle of a computation.
hence "it has never been done in reality yet".

I want companies to stop trying to shift all the costs onto consumers and buy their own damn servers just like we've done for decades. If they're going to use consumers' machines then you're talking about making our hardware do their work. If that's the case, we should be compensated.
But your hardware has been doing the work for the entire history of videogames. You always covered the cost of the local machine and your hardware did all the work, locally. If anything, given perfect conditions, distributed computing is cheaper for the end user because you need cheaper hardware. But conditions are not perfect, obviously. Granted, there is OnLive, which does shift the work to servers, but it isn't popular at the moment (mostly because publishers refuse to work with it, not due to lack of demand from users).

If I got fiber optic bandwidth, it would still be an issue. Just because I have a fast internet connection does not mean these other machines do. In internet traffic, the weakest link sets the speed you get.
Yes. EVERYONE needs to have fiber optics in this case. Sadly there are plenty of reasons this is intentionally being sabotaged.

But yes, living in google fiber areas or even areas that google fiber is threatening to enter would be nice. I don't even care about 1GB up/down speeds. 100MBs up/down would already be fantastic for everything but downloading the largest files and even then you'd be cutting download times by several hours for those.
Yes, 100Mbps is good enough. Personally I chose that over 300 since I don't really need 300; they do offer up to 1Gbps here. But you seem to be using GB and MB instead of Gb and Mb, not sure if that's intentional. If it is intentional, then you don't really need 800Mbps (100MB/s) speeds, and 8Gbps (1GB/s) is only available to industry as of yet.
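Just to spell the units out, since the capital letters matter (bits vs bytes):
[code]
# Network speeds are quoted in bits per second, file sizes in bytes,
# and there are 8 bits in a byte -- hence the factor of 8.
def mbytes_per_s_to_mbps(megabytes_per_second):
    return megabytes_per_second * 8

print(mbytes_per_s_to_mbps(100))   # 100 MB/s  -> 800 Mbps
print(mbytes_per_s_to_mbps(1000))  # 1 GB/s    -> 8000 Mbps, i.e. 8 Gbps
[/code]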

Another round of high failure rates? If that happened then yes, I would absolutely agree at that point. But as of now with the numbers we have, I think they just made a weaker machine and forcing the kinect (which they admitted costs almost as much as the console) is what has inflated costs.
Like I said, if the leaked info was correct we will see that; if someone was just trying to make the company look bad, then we don't know. I guess we will have to wait and see.

As for Kinect, I had a theory that the Kinect does some of the processing to compensate for the weak hardware in the box itself, hence the mandatory connection, since it does have to have its own processor for the visual recognition, and quite a powerful one at that. But if the statement that it can be disconnected and the machine will still work is true, then this theory is bust.
 

Lightknight

Mugwamp Supreme
Nov 26, 2008
4,860
0
0
Strazdas said:
Then you don't want clouds. Fair enough, but we need there to be a want for the cloud for this theoretical situation. Without such a need, we should just go without one, quite obviously.
I do want... "clouds". I want these companies to have centrally located server farms that do the necessary processing for my online games.

I specifically do not want ANY cloud-based anything for single player titles. Screw EA for trying it with SimCity (and lying about it), and screw publishers who would offload menial tasks that the XBO could otherwise handle just to shoehorn in always-online gameplay under the guise of "but servers process necessary functions" without any statement as to whether or not the XBO could have done it.

hence "it has never been done in reality yet".
There is no proven advantage to distributed computing, but there is a multitude of known shortcomings. Even if bandwidth became 100 times better, it still wouldn't be better to have distributed computing than dedicated servers, because that bandwidth would improve the dedicated servers too.

Do you think there's some kind of serious problem with how servers work now? As a COD gamer I can tell you that there's nothing distributed computing done perfectly would contribute to gaming as we know it. The demands of gaming would have to become astronomically more advanced than they are now and I don't see that happening out of sync with current tech. We are rapidly approaching a day where the graphics and physics engines are better than necessary. Just as word processing became trivial for a pc to process, so will video games eventually arrive at a degree of realism for which additional processing just isn't necessary. I imagine a future where every game that wants to look realistic, can. Because of common and cheap game engines made after years of having reached that era. Imagine a future where a game can only succeed on its story since every game has access to the most advanced engines necessary to make things as realistic as possible. Heh, we'll probably even see a renewed move towards artistic games too.

An entirely new kind of application would have to be created that demands far more than modern processing seems capable of meeting without vast amounts of computing. Something like a brain interface that adds to the senses in some way, requiring sensory input and output at a highly complex level. Even then, there is no reason why datastore servers wouldn't still do a better job than distributed computing.

So I reject the premise that distributed computing would benefit videogames in any way that a server or cloud computing as we know it doesn't currently meet.

But your hardware has been doing the work for the entire history of videogames. You always covered the cost of the local machine and your hardware did all the work, locally. If anything, given perfect conditions, distributed computing is cheaper for the end user because you need cheaper hardware. But conditions are not perfect, obviously. Granted, there is OnLive, which does shift the work to servers, but it isn't popular at the moment (mostly because publishers refuse to work with it, not due to lack of demand from users).
My hardware has done the work for ME. This is no different from saying that my blender blends food for me. I paid for it to do so and that shouldn't mean that my neighbor can walk in and borrow it to use when I'm not using it. This shift would mean that my XBO would never not be working. When I'm at work or on vacation, if my XBO was left in a sleep state it would be processing work for other gamers, thereby incurring wear and tear on my console at no benefit to me. Even if I don't play the online multiplayer titles. I'm sorry but that simply isn't my responsibility. If I had consoles that did this, I would kill the power to them after each and every play session.

Yes. EVERYONE needs to have fiber optics in this case. Sadly there are plenty of reasons this is intentionally being sabotaged.
I would love to spend my days unplugging my XBO from the internet every few hours. Just imagining someone somewhere seeing lag at just the wrong moment. Regardless, me having fiber optics is relatively simple. Getting the entire gaming community to move to it on any reasonable timeframe in the next ten years? Not likely. That $70 Google Fiber charges for the excellent bandwidth is just $10 less than what I pay for 10Mbps and cable in two rooms of my house per month. I doubt the average gamer would see a need to upgrade to that unless they're doing a ton of downloading. I only download huge files once every few months. Those would be the games I purchase on Steam, and I have no qualms about waiting half a day for those to download.

Yes, 100Mbps is good enough. Personally I chose that over 300 since I don't really need 300; they do offer up to 1Gbps here. But you seem to be using GB and MB instead of Gb and Mb, not sure if that's intentional. If it is intentional, then you don't really need 800Mbps (100MB/s) speeds, and 8Gbps (1GB/s) is only available to industry as of yet.
Nope, just stuck in RAM notation. Common mistake from what I've seen. What's funny is I had it right but "corrected" myself.

Like I said, if the leaked info was correct we will see that; if someone was just trying to make the company look bad, then we don't know. I guess we will have to wait and see.
If the leaked information was correct we MAY see that. The testing done previously wouldn't have just been regarding the CPU. It'd have been testing for all of the components and replacing the CPU wouldn't necessarily have undone that work. These are also relatively old CPUs. The benefit of living in a world where CPUs are now just glorified switchboard operators that offload all the work to the RAM and GPU is that the CPUs don't have to be cutting edge i7s or something. I still regret having sprung for the extra $100 i7 in my pc as my processor is never over 10% utilization. Could have grabbed another video card instead to bridge.

As for Kinect, I had a theory that the Kinect does some of the processing to compensate for the weak hardware in the box itself, hence the mandatory connection, since it does have to have its own processor for the visual recognition, and quite a powerful one at that. But if the statement that it can be disconnected and the machine will still work is true, then this theory is bust.
Well, that statement was from an MS representative's own mouth so we can assume it is bust.

Additionally, having the Kinect process some of the work would make the hardware proprietary again, stealing away the advantage of being x86. So I would have contested that postulation before as well. Not that MS hasn't been making huge mistakes anyway.
 

Strazdas

Robots will replace your job
May 28, 2011
8,407
0
0
Lightknight said:
Strazdas said:
Then you don't want clouds. Fair enough, but we need there to be a want for the cloud for this theoretical situation. Without such a need, we should just go without one, quite obviously.
I do want... "clouds". I want these companies to have centrally located server farms that do the necessary processing for my online games.
What you want is streaming, not clouds.

I specifically do not want ANY cloud-based anything for single player titles. Screw EA for trying it with SimCity (and lying about it), and screw publishers who would offload menial tasks that the XBO could otherwise handle just to shoehorn in always-online gameplay under the guise of "but servers process necessary functions" without any statement as to whether or not the XBO could have done it.
Very much in agreement here.

There is no proven advantage to distributed computing, but there is a multitude of known shortcomings. Even if bandwidth became 100 times better, it still wouldn't be better to have distributed computing than dedicated servers, because that bandwidth would improve the dedicated servers too.
Costs. That's what it all boils down to anyway, right? It's cheaper to use distributed computing than server farms if distributed computing works as intended, which so far it doesn't.

Do you think there's some kind of serious problem with how servers work now? As a COD gamer I can tell you that there's nothing distributed computing done perfectly would contribute to gaming as we know it.
I don't think there is a problem with how servers work. I do not advocate cloud gaming at all; I'm just saying that such a thing is theoretically possible and efficient. Key word here though: theoretically.

You play COD, so you are probably going to buy a machine powerful enough to run the newest title, right? That is a hardware expenditure. If you could use your old hardware, helped by 5 other pieces of old hardware in distributed computing, to play the title, that saves hardware expenditure (and is good for the environment, because you don't make new hardware). I'd say that's a fair contribution.

The demands of gaming would have to become astronomically more advanced than they are now and I don't see that happening out of sync with current tech. We are rapidly approaching a day where the graphics and physics engines are better than necessary. Just as word processing became trivial for a pc to process, so will video games eventually arrive at a degree of realism for which additional processing just isn't necessary. I imagine a future where every game that wants to look realistic, can. Because of common and cheap game engines made after years of having reached that era. Imagine a future where a game can only succeed on its story since every game has access to the most advanced engines necessary to make things as realistic as possible. Heh, we'll probably even see a renewed move towards artistic games too.
But the demands of gaming have become astronomically more advanced than they were, say, 20 years ago, when they used to have 3-man teams for top games.
I would dispute you on the word necessary. What is necessary? For some, 10 year old graphics is the necessary limit; personally I won't stop pushing till we get reality simulation. I'm not a sucker for graphics, heck, they don't even come into consideration when I'm choosing a game, but I've got a dream that one day we will have a virtual reality game with a whole city or similar landscape simulated with realistic physics, destruction mechanics, etc. And our graphical and physics engines aren't even close to that. Our server farms aren't even close to that. We've got nothing that could do this. But maybe, some day, we will. Necessary is intangible. Crytek boasted about how Crysis 1 was photorealistic; turns out it wasn't even close to what, say, Crysis 3 put out. Realism won't exist till we get reality simulation.
So yes, it's a nice dream to have a world where every game has access to realistic looks, but it's one that's not coming soon and one that our computers are far from being able to handle.

So I reject the premise that distributed computing would benefit videogames in any way that a server or cloud computing as we know it doesn't currently meet.
So you reject the idea that more accessible computing power closer to you geographically would be beneficial?

My hardware has done the work for ME. This is no different from saying that my blender blends food for me. I paid for it to do so and that shouldn't mean that my neighbor can walk in and borrow it to use when I'm not using it. This shift would mean that my XBO would never not be working. When I'm at work or on vacation, if my XBO was left in a sleep state it would be processing work for other gamers, thereby incurring wear and tear on my console at no benefit to me. Even if I don't play the online multiplayer titles. I'm sorry but that simply isn't my responsibility. If I had consoles that did this, I would kill the power to them after each and every play session.
That's the kind of thinking that makes clouds impossible, and it's basically the old local hardware processing. Which I perfectly understand, and I want to use it this way as well, but that kind of thinking will have to die before we even attempt clouds.
I would love to spend my days unplugging my XBO from the internet every few hours. Just imagining someone somewhere seeing lag at just the wrong moment.
The point of always online is that you don't unplug it. Now you're just being sinister.

Getting the entire gaming community to move to it on any reasonable timeframe in the next ten years? Not likely.
Oh, I completely agree, we won't see this any time soon.

That $70 Google Fiber charges for the excellent bandwidth is just $10 less than what I pay for 10Mbps and cable in two rooms of my house per month. I doubt the average gamer would see a need to upgrade to that unless they're doing a ton of downloading. I only download huge files once every few months. Those would be the games I purchase on Steam, and I have no qualms about waiting half a day for those to download.
Ech, Google fucked up again it seems. 70 dollars, are they serious? Any company asking that here would be laughed out of the market. I pay 15 dollars for 100/100.
Then still, you get more bandwidth AND it's cheaper than your previous one. Why not upgrade?

Personally, around 30 GB of traffic passes through my PC every day, but I guess I download more than the average person.

Nope, just stuck in RAM notation. Common mistake from what I've seen. What's funny is I had it right but "corrected" myself.
Fair enough, that's what I thought. But just to be sure.

If the leaked information was correct we MAY see that. The testing done previously wouldn't have just been regarding the CPU. It'd have been testing for all of the components and replacing the CPU wouldn't necessarily have undone that work. These are also relatively old CPUs. The benefit of living in a world where CPUs are now just glorified switchboard operators that offload all the work to the RAM and GPU is that the CPUs don't have to be cutting edge i7s or something. I still regret having sprung for the extra $100 i7 in my pc as my processor is never over 10% utilization. Could have grabbed another video card instead to bridge.
But you see, the tests they've done have shown that this new CPU design (although I don't think we can call it just a CPU anymore) is actually failing, and they had trouble keeping it below the failure rates they expect. And if the CPU fails, your console is still not going to work anyway.
And yes, sadly CPUs are not utilized as much as they should be; I often get even my old dual core loaded to 30% while my GPU is choking. I'm going to go for an i5 instead of an i7 when buying a new PC (soon). Makes me wish someone would write a program that would force drivers to offload some processing onto the CPU instead (I know, that would never work, it's just me being stupid); I could do so much more then (my CPU is more powerful than my GPU, sadly).
 

Lightknight

Mugwamp Supreme
Nov 26, 2008
4,860
0
0
Strazdas said:
What you want is streaming, not clouds.
I thought Kross already went over this with you. Cloud computing is just a more efficient method using the exact same structure of server farms, in conjunction with virtualization, to make maximum use of the available resources. So yes, I want clouds. Give me all of the clouds. It's not strictly streaming, as streaming is (thanks to Google's definition I don't have to make one up) a method of relaying data (esp. video and audio material) over a computer network as a steady continuous stream, allowing playback to proceed while subsequent data is being received.

This requires a lot of uploading (the information you're sending to the server to trigger it to run specific calculations) and downloading (the server sending back the response). That's what is already being done.
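From the client's side, that round trip is all this kind of "cloud processing" amounts to. Something like this toy sketch, with a made-up endpoint and payload purely for illustration:
[code]
# Toy client-side view of offloaded processing: upload the inputs, download the result.
# The URL and payload shape are made up for illustration; nothing official.
import json
import urllib.request

def offload_work(inputs):
    payload = json.dumps({"inputs": inputs}).encode()
    req = urllib.request.Request(
        "https://example.com/compute",          # hypothetical server endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:   # upload the inputs...
        return json.loads(resp.read())          # ...and wait to download the answer

# e.g. offload_work([1, 2, 3]) would block until the full round trip completes
[/code]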

Costs. That's what it all boils down to anyway, right? It's cheaper to use distributed computing than server farms if distributed computing works as intended, which so far it doesn't.
It'll always be the same cost. You're just talking about offloading the cost onto console owners for a method that has no benefits for gaming and nothing but disadvantages compared to the current server farm method.

I don't think there is a problem with how servers work. I do not advocate cloud gaming at all; I'm just saying that such a thing is theoretically possible and efficient. Key word here though: theoretically.
I must have missed it, did you disagree with Kross' correction of terminology, where distributed computing is what you're actually talking about? It is not theoretically efficient. It is definitionally less efficient than server farm computing or cloud computing. Cloud computing, as in a server farm filled with multiple VMs to get the most out of resources, is not just possible, it's already here.

You play COD, so you are probably going to buy a machine powerful enough to run the newest title, right? That is a hardware expenditure. If you could use your old hardware, helped by 5 other pieces of old hardware in distributed computing, to play the title, that saves hardware expenditure (and is good for the environment, because you don't make new hardware). I'd say that's a fair contribution.
No. Terrible idea. I do not want distributed computing for gaming. I also do not want my own machines to be used thus. My obligation to COD or any other title ends when I buy the game unless they require an online subscription in which case my ONLY obligation is to pay the fee to continue playing.

I do not owe COD the use of my machine for processing games I'm not playing in. I especially don't owe companies whose games I don't even play the use of my machine.

For something like Folding@home? Sure, distributed processing is great. But not real-time processing for MMOs or FPS titles. Certainly not single player games.

But the demands of gaming have become astronomically more advanced than they were, say, 20 years ago, when they used to have 3-man teams for top games.
So? Computing power has also grown exponentially (doubling every few years). By complexity or demands I'm not talking about the complexity of development, I'm talking about the complexity required to process the game. Most games don't even make use of more than 4GB of RAM. We've only recently been freed by x64 environments and it's going to take a long time for games to really catch up as the average computer is still at 4GB of RAM.

In my house, I have a 16GB RAM machine that I can easily upgrade to 32GBs of RAM. I've got a relatively new i7 with a decent video card that I can likewise bridge if I need to any time soon. The available hardware has vastly outstripped gaming demands and that's only going to get more and more apparent as newer machines become the norm.

I would dispute you on the word necessary. What is necessary? For some, 10 year old graphics is the necessary limit; personally I won't stop pushing till we get reality simulation. I'm not a sucker for graphics, heck, they don't even come into consideration when I'm choosing a game, but I've got a dream that one day we will have a virtual reality game with a whole city or similar landscape simulated with realistic physics, destruction mechanics, etc. And our graphical and physics engines aren't even close to that. Our server farms aren't even close to that. We've got nothing that could do this. But maybe, some day, we will. Necessary is intangible. Crytek boasted about how Crysis 1 was photorealistic; turns out it wasn't even close to what, say, Crysis 3 put out. Realism won't exist till we get reality simulation.
So yes, it's a nice dream to have a world where every game has access to realistic looks, but it's one that's not coming soon and one that our computers are far from being able to handle.
Necessary: what is needed to accomplish a goal. What I mean is that computing power will eventually outstrip the feasible demands of even the most demanding video games. There may be a day when the most powerful video games utilize 32GBs of RAM but the average machine has 64GB or something higher.

I'm not saying "necessary" as in some subjective term. I'm saying that video games require X and the average computer can provide X+1. Pure and mathematical.

Take word processing for example. I have a Word document open right now. It's using around 20MB of RAM. You've got to understand that there was a time when that wasn't even possible on computers, and the word processors of the time had to be designed to require significantly fewer resources. Movies and other media have fallen under those demands now too. Video editing is getting close, and that can be just as demanding as video games.

In that same way, game demands will eventually reach a threshold where processing power far outstrips it. You can only go so far towards realism before you're there. Then there is no additional step to take unless we discover some kind of ultra reality. Even virtual reality (if it's possible to input false senses into the brain) would eventually reach that threshold. Computer technology is already capable of creating 3D environments as we've seen with the Oculus Rift. Already existing games can already use the rift because games no longer process only what you're looking at and actually render the entire room/setting for faster transitions. So the rift, while providing VR, isn't much more than a screen strapped to your head that functions as the camera controller via your head movement.

We already have beautiful games. Games that are getting very close to realistic. Yet the console generation has pushed forward something like 10x the power of the previous console. Understand that this is 10x the machine that is currently capable of games like Skyrim. As the consoles get more and more powerful, that leap will be less and less but not necessarily less impressive (2x 100 isn't less impressive than the 5x 20 that may have led to the 100, it's then 10x 20). I don't even think the next big step is much more than a firming up of graphics. I think the next big step is in NPC AI and physics.

So you reject the idea that more accessible computing power closer to you geographically would be beneficial?
Yes. I categorically reject that premise in the way it would be implemented. Right now, there are only two connections that we are concerned with: mine to the internet and their server's to the internet. This connection is remarkably simple and is already good enough to let people from across the world play games like COD in real time, where a second off is a big difference.

There being much weaker machines nearer doesn't matter. These large multiplayer games benefit greatly from sharing common servers that process the data in real time. Splitting them out would be terrible.

And again, to what benefit? What need isn't being met? We've got some very technically advanced titles that our systems are well within the realm of handling. There's no reason to involve outside processing in single player games, and the big online titles are already more than served by the simple cloud/server processing that's already in place.

That's the kind of thinking that makes clouds impossible, and it's basically the old local hardware processing. Which I perfectly understand, and I want to use it this way as well, but that kind of thinking will have to die before we even attempt clouds.
Don't care. Not my responsibility. There is currently no need that isn't being met by already existing servers. What you're not explaining is any reason at all why we'd benefit from this. We'd get a much less efficient setup with a myriad of issues all to save the developers money on our dime. That's bullshit.

Again, there is NO benefit to this. I don't know what you think distributed computing does, but it isn't good for this sort of processing. A server farm with a good internet connection is perfect. For the kind of traffic and processing demands we have, there is zero need to even change things up. Let alone the fact that distributed computing would be one of the least efficient ways to do this kind of processing: your response time would be at the mercy of however many machines are involved and their relative bandwidths, with the weakest link being the deciding factor. And for what? We wouldn't see any advantages from it. Any game that would need the likes of a supercomputer to participate in wouldn't be something that the masses could partake in, and it would undoubtedly only suffer from distributed computing.
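Back-of-the-envelope, with made-up numbers, just to show the weakest-link effect:
[code]
# Made-up numbers: when a frame's work is fanned out to volunteer machines,
# you can't finish until the slowest one answers.
helper_round_trips_ms = [30, 45, 250, 60]  # hypothetical volunteer consoles
dedicated_server_ms = 40                   # hypothetical datacenter round trip

distributed_wait_ms = max(helper_round_trips_ms)  # the weakest link decides
print(distributed_wait_ms, "ms distributed vs", dedicated_server_ms, "ms dedicated")
[/code]
The dedicated server costs you one round trip; the distributed setup has to wait for its slowest volunteer.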

Ech, Google fucked up again it seems. 70 dollars, are they serious? Any company asking that here would be laughed out of the market. I pay 15 dollars for 100/100.
Then still, you get more bandwidth AND it's cheaper than your previous one. Why not upgrade?
You have 100/100 mbps? Hot damn.

To put this in context, I recently switched from Comcast to CenturyLink because Comcast tried to charge me $70 for 16Mbps, while CenturyLink was offering me 10Mbps along with a full cable selection with HD DVRs, all for around the same price. What's more, the 16Mbps was something I almost never got close to and I functionally had less than 10Mbps, whereas CenturyLink in my area seldom drops below 9Mbps and is even willing to increase my Mbps if my cable TV isn't also running at the time.

$15 for any kind of internet isn't common. I do know that Google fiber has a 5/5 offer for a one time payment of $300 (can be broken up into multiple payments of $25). $70 for the 1gbps is the standard charge right now.

But you see, the tests they've done have shown that this new CPU design (although I don't think we can call it just a CPU anymore) is actually failing, and they had trouble keeping it below the failure rates they expect. And if the CPU fails, your console is still not going to work anyway.
I have not seen any evidence that this is the case.

And yes, sadly CPUs are not utilized as much as they should be; I often get even my old dual core loaded to 30% while my GPU is choking. I'm going to go for an i5 instead of an i7 when buying a new PC (soon). Makes me wish someone would write a program that would force drivers to offload some processing onto the CPU instead (I know, that would never work, it's just me being stupid); I could do so much more then (my CPU is more powerful than my GPU, sadly).
Why sadly? RAM/GPU is far cheaper than the CPU. Advances in CPU have been seriously bottlenecked whereas RAM and GPU continue exploding.
 

Strazdas

Robots will replace your job
May 28, 2011
8,407
0
0
I hope you don't take this the wrong way, I just like discussing things and thus I discuss.
...


Lightknight said:
Strazdas said:
What you want is streaming, not clouds.
I thought Kross already went over this with you. Cloud computing is just a more efficient method using the exact same structure of server farms, in conjunction with virtualization, to make maximum use of the available resources. So yes, I want clouds. Give me all of the clouds. It's not strictly streaming, as streaming is (thanks to Google's definition I don't have to make one up) a method of relaying data (esp. video and audio material) over a computer network as a steady continuous stream, allowing playback to proceed while subsequent data is being received.

This requires a lot of uploading (the information you're sending to the server to trigger it to run specific calculations) and downloading (the server sending back the response). That's what is already being done.
Fair enough.
It'll always be the same cost. You're just talking about offloading the cost onto console owners for a method that has no benefits for gaming and nothing but disadvantages compared to the current server farm method.
No. It would be the same cost if you always used your local hardware 100% effectively (or close to it). Since you don't, a lot of the time your hardware stands unused. If it were used during that time, you would need much less hardware overall across the whole community, thus it would be cheaper.
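Rough made-up numbers for what I mean, assuming perfect sharing with demand spread evenly through the day (which of course never happens):
[code]
# Made-up numbers: if every player buys hardware for their own peak but only
# games a few hours a day, a perfectly shared pool needs far fewer "units".
players = 1000
hours_gaming_per_day = 4                    # hypothetical average per player

local_boxes_needed = players                # one console each, idle ~20 hours a day
shared_units_needed = players * hours_gaming_per_day / 24  # perfect, evenly spread sharing

print(local_boxes_needed, "local boxes vs ~", round(shared_units_needed), "shared units")
[/code]
In reality demand bunches up in the evening, which is exactly the "works as intended" part that doesn't exist yet.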

No. Terrible idea. I do not want distributed computing for gaming. I also do not want my own machines to be used thus. My obligation to COD or any other title ends when I buy the game unless they require an online subscription in which case my ONLY obligation is to pay the fee to continue playing.

I do not owe COD the use of my machine for processing games I'm not playing in. I especially don't owe companies whose games I don't even play the use of my machine.

For something like Folding@home? Sure, distributed processing is great. But not real-time processing for MMOs or FPS titles. Certainly not single player games.
You don't owe anything because you only use local hardware now. If you used their hardware, it would be fair for them to use yours as well. If that situation helps both of you save money on hardware expenditures, that's a real advantage.
You don't owe anything only as long as you do not use it.

Necessary: what is needed to accomplish a goal.
What goal? To make a game? You can do that with NES graphics. To make the game look good? How good is good? You want the game to look how you want? Then you need graphics as good as your imagination.
Intangible.

What I mean is that computing power will eventually outstrip the feasible demands of even the most demanding video games. There may be a day when the most powerful video games utilize 32GBs of RAM but the average machine has 64GB or something higher.
I disagree. I think games will always keep up with the machines they are played on. Remember what happened when DVD-based games came? Games stopped optimizing. When they met hardware limitations (the current consoles) they learned the need to optimize again. Games will always find a way to demand the hardware of the average computer. You have a high end one, so it feels like games aren't catching up, when the reality is that game makers limit themselves in order to allow the average person to run the thing.

I'm not saying "necessary" as in some subjective term. I'm saying that video games require X and the average computer can provide X+1. Pure and mathematical.
And you know that a game requires X when the average computer can provide X+1 how?

Take word processing for example. I have a Word document open right now. It's using around 20MB of RAM. You've got to understand that there was a time when that wasn't even possible on computers, and the word processors of the time had to be designed to require significantly fewer resources. Movies and other media have fallen under those demands now too. Video editing is getting close, and that can be just as demanding as video games.
Video editing is more demanding than video games. Word processing didn't evolve, though. You still write words like you used to; heck, most people still use the exact same formats from 10-15 years ago. If you had 10 new Offices every year, you would see many more 365 monstrosities that can hog your 16 GB of RAM just because they can.
Other media, ah, I remember when MP3s lagged... good days.
Anyway, the thing is, movies and audio ARE catching up to hardware. Not as well, but they try. We've got lossless audio formats that are much more demanding than previous ones, we've got video formats that need a gaming PC to run them at all (for example 4K); sure, if you take a format from 15 years ago and run it on a modern PC, it won't use a lot of resources. Neither would a 15 year old game.
It's just that those are also limited by "average hardware", in this case being televisions (which are statistically still on the SD side) and MP3 players (sure, you can have much better lossless sound, but you still put MP3s on your iPod).
The devices are what limit us; they will never become more powerful than necessary, we will always find ways to utilize them. Unless we're talking about Multivac from science fiction, and even that one had a job it had to calculate until the heat death of the universe to complete.

In that same way, game demands will eventually reach a threshold where processing power far outstrips it. You can only go so far towards realism before you're there.
You are aware that so far the most powerful computers can simulate less than a cubic centimeter if we go for atom-level realism, right? We're still very far away from realism.

Already existing games can already use the rift because games no longer process only what you're looking at and actually render the entire room/setting for faster transitions.
Actually, they always used to render the entire room/setting. It's just that when developers were forced to use 8 year old hardware that couldn't handle it, they had to find shortcuts, such as rendering only what can be seen from the camera perspective and instantly removing things you went past.
Now, as we get new hardware, we may start doing what we did before: render the whole damn room.

We already have beautiful games. Games that are getting very close to realistic.
Beautiful, perhaps; that's subjective. Realism, umm, no. The only game where I saw jeans even come close to looking like actual jeans was The Last of Us, for example. We're still very far from realism.

Understand that this is 10x the machine that is currently capable of games like Skyrim.
10x capable of console Skyrim. Less than 1x capable of PC Skyrim.

I think the next big step is in NPC AI and physics.
I'd love to think that as well, but then reality comes back and we've got the Crytek CEO saying that graphics are the most important part of any game. AI has been at quite a standstill and physics has only been attempted by a few, but graphics seem to be pushed by all...

Don't care. Not my responsibility. There is currently no need that isn't being met by already existing servers. What you're not explaining is any reason at all why we'd benefit from this. We'd get a much less efficient setup with a myriad of issues all to save consumers some money. That's bullshit.
Fixed the quote.

You have 100/100 mbps? Hot damn.
100/100 is a "slow" plan here, but since that's enough for me I'd rather take the cheaper one.

To put this in context, I recently switched from Comcast to CenturyLink because Comcast tried to charge me $70 for 16Mbps, while CenturyLink was offering me 10Mbps along with a full cable selection with HD DVRs, all for around the same price. What's more, the 16Mbps was something I almost never got close to and I functionally had less than 10Mbps, whereas CenturyLink in my area seldom drops below 9Mbps and is even willing to increase my Mbps if my cable TV isn't also running at the time.
So if we count the other services out, the internet was in fact much cheaper than Comcast's. Still, it's sad to hear they try to charge so much for such slow speeds. The services that offered 10Mbps I didn't even look twice at.

I have not seen any evidence that this is the case.
And we won't till launch.

Why sadly? RAM/GPU is far cheaper than the CPU. Advances in CPU have been seriously bottlenecked whereas RAM and GPU continue exploding.
Because I've got a more powerful CPU than GPU. I know, petty reasons.
 

Lightknight

Mugwamp Supreme
Nov 26, 2008
4,860
0
0
Strazdas said:
I hope you don't take this the wrong way, I just like discussing things and thus I discuss.
It is certainly nice to be able to discuss a topic fully. Anything you're not sure on you get to research in order to properly respond and by the end of the discussion you've got a fully researched topic at your disposal. No worries here.

No. It would be the same cost if you always used your local hardware 100% effectively (or close to it). Since you don't, a lot of the time your hardware stands unused. If it were used during that time, you would need much less hardware overall across the whole community, thus it would be cheaper.
It's the same processing cost. But no, it isn't necessarily less hardware. One server can have hundreds of gigs of RAM and so many other things. No, this would require more actual machines for the same amount of processing. Not only that, but every additional machine involved in the processing is another potential weakest link. You're talking about one of the least efficient methods of server-side processing where real-time processing is concerned.

You don't owe anything because you only use local hardware now. If you used their hardware, it would be fair for them to use yours as well. If that situation helps both of you save money on hardware expenditures, that's a real advantage.
You don't owe anything only as long as you do not use it.
Let me make this clear. I bought my PS3 from Sony and my 360 from Microsoft. My hardware does not belong to anyone but me. It is not something I purchased from the makers of COD or Bioshock or any other game (except first party stuff, I suppose). COD does not have the right to use my hardware. Hell, Microsoft doesn't either. I bought the hardware from them, it's mine. Making machines do work does wear on them. It's like letting someone drive my car when I'm not using it. Let's say it's done so efficiently that the inside of the car looks the same, the gas is at the same level, and it's always there when I'm going to use it. It's still grinding down the parts of the engine and the brakes and everything else. No one has the right to incur that cost on me without compensation or my permission. After I've purchased the console, I owe nothing to anyone else.

Now let me ask you something you may not have considered. You say they'd use this while I'm not using it. OK, now who gets to use it? Let's say 100 different games use this. Do all of them have an equal claim to my processing power regardless of whether or not I've even purchased their game? Doesn't this mean that it's not unlikely to see my console getting used at the highest capacity they allow for distributed computing demands? Not to mention it using my bandwidth the entire time, when I might very well be using bandwidth on another device.

They have no right to any of this.

What goal? To make a game? You can do that with NES graphics. To make the game look good? How good is good? You want the game to look how you want? Then you need graphics as good as your imagination.
Intangible.
You can only get so realistic before you reach it. There is no step beyond real. We as humans can't even really tell the difference between high range FPS (frames per second, not first person shooter) at some point. There will be a day when the characters look like actors on the screen and there's nowhere to go once that's reached. My imagination isn't even capable of viewing things with the clarity of realism.

I disagree. I think games will always keep up with the machines they are played on. Remember what happened when DVD-based games came? Games stopped optimizing. When they met hardware limitations (current consoles) they learned the need to optimize again. Games will always find a way to demand the hardware of the average computer. You have a high-end one, so it feels like games aren't catching up, when in reality game makers limit themselves in order to allow the average person to run the thing.
This is merely because games are currently limited by hardware. That doesn't mean they can magically expand into the infinite. There are perhaps some types of games which could. Like a universe simulator that's meant to require more and more resources over time. But that'd only apply to a very specific type. But a game where you're in a town doing whatever the game has you in that town for isn't going to need to process the entire universe. It just needs to process your area and the surrounding areas to a point in such a way that the player can never get outside the given rendered area. That is to say, a finite and known amount of data needs to be processed. The only question is how detailed that data will be. When it is a finite arena, we can absolutely achieve a point where processing outstrips the possible needs of processing. As long as humans are limited by our senses and our brain, that's going to happen.

I'm not saying "necessary" as in some subjective term. I'm saying that video games require X and the average computer can provide X+1. Pure and mathematical.
And you know that a game requires X when the average computer can provide X+1 how?
Let's look at Crysis. The original one. The game was capable of demanding X on its ultra setting. A lot of computers had trouble running that. Last year I built a powerful PC from scratch (with all the resources of a two-income household that favors gaming and all the expertise of a two-IT-background household, so it's nice). Booted right up and played Crysis on all ultra settings. So that PC and those specs are capable of playing it on ultra. Frankly, it wasn't even demanding, but that's not the point. While the exact number is unknown, it does exist. Most companies put out min/req/max specs to play their game. Let's say those are X for all intents and purposes. There could and should be a day when the average computer exceeds that amount. At which point we can thankfully stop messing around with the settings or worrying as much about our hardware. A happy effect of this is that games will only be able to survive on plot, game mechanics, or artistic vision instead of getting a free pass for realistic graphics that everyone would have access to in this utopian gaming environment.

Video editing is more demanding than video games.
Correction, HD video editing is more demanding than most video games at the moment. This is primarily because the last 6-7 years of video game development have largely been bottlenecked by console limitations even though PCs have shot past those hardware specs. Well, not the full 6-7 years. We saw a significant jump in video game tech for the first 4 or so years, and then the past 2-3 years have been the bottleneck as developers try to optimize more and more within the limited space. That's why games like The Last of Us look so impressive despite still having to fit in what is essentially a half-a-decade-old CPU with 512MB divided (unnecessarily) into two components. We occasionally get games made for PC that push the limits, but those are rare because console markets are very lucrative and the 360 at least requires somewhat minimal effort to port. So it's best to first make most games playable on the console and then just provide upscaling utilities for the PC, which only make the game look a little better without really changing the core mechanics that were limited by the console.

This next console advancement should give us another significant leap in gaming demands and capabilities for around the same period of time. We'll also see pc technology get moved along faster because of this (since high-power pcs are only needed for gaming and video editing and a stagnating gaming market does slow things down). With the introduction of higher resolution HD gaming (and even 3D gaming) we'll see gaming demands meet or exceed video editing.

Word processing didn't evolve though. You still write words like you used to. Heck, most people still use the same exact formats from 10-15 years ago. If you had 10 new Offices every year, you would see many more 365 monstrosities that can hog your 16 GB of RAM just because they can.
Other media, ah, I remember when MP3s lagged... good days.
I'm not sure how pointing to the past 10-15 years as word processing barely changing is proving any point. 15 years ago was 1998. Word processing software was first made in 1974 [http://www.computer.org/portal/web/computingnow/annals/extras/wordvol28n4]. You stating that things haven't really changed fundamentally over the past 10-15 years is only proving my point that the demands of word processing are somewhat finite and computer technology has so drastically exceeded its needs that running a word processing application doesn't so much as cause the RAM consumption to noticeably rise on most machines.

So this comment may have been the most incorrect comment you've made in this thread. Word processing has evolved significantly over the past 40 years, and even in the past 10 years we've seen significant advances in word processing features, formats, and security. But at the end of the day it is just typing words on a page and maybe embedding images here and there. Some day, video games will be in the same position, where the environments are realistic and the individual can transit between environments without noticing it. That's all you need, and then processing has advanced beyond gaming. It could be ten years, could be fifty. But it will happen.

Anyway, the thing is, movies and audio ARE catching up to hardware. Not as well, but they try. We got lossless audio formats that are much more demanding than previous ones, we got video formats that need a gaming PC to run them at all (for example 4K). Sure, if you take a format from 15 years ago and run it on a modern PC, it won't use a lot of resources. Neither would a 15-year-old game.
It's just that those are also limited by "average hardware", in this case being televisions (which are statistically still on the SD side) and MP3 players (sure, you can have much better lossless sound, but you still put MP3s on your iPod).
The devices are what limit us; they will never become more powerful than necessary, and we will always find ways to utilize them. Unless we're talking about Multivac from science fiction, and even that one had a job it had to calculate until the heat death of the universe to complete.
No, they are not catching up to hardware. They are increasing in quality and ergo processing demands, but it will never take 4GBs of RAM to play an audio file. As for movies, there are some advancements left to be made but we're getting to definitions that the human eye can't really distinguish between. Beyond that, what's the point?

You are aware that so far the most powerful computers can simulate less than a cubic centimeter if we go for atom-level realism, right? We're still very far away from realism.
Why would we simulate to the atomic level if humans can't see that? That aside, what you just established was a goal. This is a goal that can be reached and then surpassed. I made no allusion to this being near future necessarily. Just that there is a finite distance we can go before arriving at realism or a quality of gaming that is virtually indistinguishable from realism (the only thing that matters). With that knowledge in mind, that goal will eventually be reached.

Actually, they always used to render the entire room/setting. It's just that when developers were forced to use 8-year-old hardware that couldn't handle it, they had to find shortcuts, such as rendering only what can be seen from the camera perspective and instantly removing things you went past.
Now as we get new hardware we may start doing what we did before - render the whole damn room.
Not really, they render the full room now. Currently they delay the pop-in of textures. So in a complex game on limited hardware you can enter the room just fine, but the textures won't pop in until loaded. This is exceedingly apparent in games like RAGE, where the texture file is particularly large. Once loaded, however, it's there. It takes significantly more processing to tie video processing into player actions.
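To make the pop-in idea concrete, here is a minimal Python sketch of deferred texture streaming, assuming a hypothetical TextureStreamer class and a fake disk delay; it is not how RAGE's megatexture system is actually implemented, just the general pattern being described: draw a cheap placeholder immediately and swap in the full texture once a background load finishes.

import threading
import time

class TextureStreamer:
    """Return a low-res placeholder until the full texture has streamed in."""

    def __init__(self):
        self.loaded = {}      # texture name -> full-resolution data
        self.pending = set()  # names currently being loaded in the background

    def _load_from_disk(self, name):
        time.sleep(0.5)       # stand-in for slow disk/optical I/O
        self.loaded[name] = f"<high-res {name}>"
        self.pending.discard(name)

    def get(self, name):
        if name in self.loaded:           # already streamed in
            return self.loaded[name]
        if name not in self.pending:      # kick off one async load
            self.pending.add(name)
            threading.Thread(target=self._load_from_disk, args=(name,)).start()
        return f"<low-res placeholder for {name}>"

streamer = TextureStreamer()
print(streamer.get("wall_diffuse"))  # the room is enterable right away, low-res
time.sleep(0.6)
print(streamer.get("wall_diffuse"))  # the high-res texture has now "popped in"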

Beautiful - perhaps. That's subjective. Realism - umm, no. The only game where I saw jeans even come close to looking like actual jeans was The Last of Us, for example. We're still very far from realism.
Interesting. I mentioned that game above. The thing is, it does really well. There are some areas where the texture doesn't load in time or you see white light along creases in the walls. But the game is pretty damn cool as far as graphics are concerned. Now imagine a machine that is 10x as powerful as a machine able to render that graphically advanced game.

10x as capable as console Skyrim. Less than 1x as capable as PC Skyrim.
It doesn't matter, the console market is what bottlenecks the game development process. Since PCs are an amalgamation of hardware components, those are upgraded regularly, and the moment a console's specs are set in place the average PC starts to catch up and then exceed it. But even after the PC exceeds it, the consoles still have a tremendous enough following (hundreds of millions of console owners) to demand that developers target their capabilities.

I'm also not certain where you're getting the 1x capable of PC Skyrim. The average computer is still stuck at 4GB thanks to how long 32-bit OSes kept us tethered to that number. And a console, even an x86 one, can't be compared on raw numbers. Consoles allow for optimization in a way that the Frankenstein's monster that is the PC, a cobbling of various non-standard hardware, is entirely unable to. This is why the minimum requirement for PC Skyrim was 2GB of RAM on a relatively modern CPU/video card, yet it was still playable on a 512MB machine with 6-year-old CPU/GPU combinations. A computer with 8GB of RAM is not necessarily comparable with the consoles. A sufficiently powerful video card would ultimately decide whether or not it is.

I'd love to think that as well, but then reality comes back and we've got the Crytek CEO saying that graphics are the most important part of any game. AI has been at quite a standstill and physics has only been attempted by a few, but graphics seem to be pushed by all...
AI and physics play into graphics. The way a ball moves or wood splinters makes the world look more graphically impressive.

Shatter physics simulator [https://www.youtube.com/watch?v=ApSTZRXTwMc]

Your brain makes constant calculations about the world around it. It takes note when things don't behave like it knows they should and breaks what's known as atmosphere (aka, real immersion). As such, properly reacting NPCs and correct physics do more for "graphics" than just throwing more polygons will. With games like The Last of Us, we're not far from a polygon count that looks right. To make it really feel real you have to have them behave right. So don't make the mistake of thinking that physics/AI is necessarily unrelated to graphics. Also, Crytek =/= all developers everywhere. All you need is a few big-name companies to create a new graphics engine with advanced physics and you'll start seeing other companies licensing the use of that engine. Bethesda is really good at this and so is Valve. And believe me when I say that Valve cares about physics and AI.

Fixed the quote.
How the Hell does this save the consumer money? By all means, tell me how shifting the cost of a server farm onto local machines will mean any more money in the consumers' wallets. The consoles themselves will be more expensive, the price of games isn't going to drop below $60, and the wear on the console will be heavier, which will mean a shorter lifespan and possible replacement cost of an entire console. This, again, at no perceivable benefit to the end user.

100/100 is a "slow" plan here. but since thats enough for me id rather take the cheaper one.
That's highly specific to your region, where ever it is. The average speed in the US is still around 10mbps. That is getting better but that's still it.

So if we count the other services out, the internet was in fact much cheaper than Comcast's. Still, it's sad to hear they try to charge so much for such slow speeds. The services that offered 10mbps I didn't even look twice at.
Comcast has enjoyed 30+ years of almost complete monopoly in my city. Such is the result here and in any other area where this is true. Even in cities where there are only a few providers, I wouldn't be surprised if we learned of price fixing or collusion to keep the quality of service low.

And we won't till launch.
That is pretty much exactly my point.
 

Strazdas

Robots will replace your job
May 28, 2011
8,407
0
0
Lightknight said:
It's the same processing cost. But no, it isn't necessarily less hardware. One server could have hundreds of gigs of RAM and so many other things. No, this would require more actual machines and the same amount of processing. Not only that, but every additional machine involved in the processing is another potential weakest link. You're talking about one of the least efficient manners of server-side processing where real-time processing is concerned.
But you've already got thousands of pieces of hardware distributed in people's homes, sitting idle most of the time. That's a huge waste of processing power, which, if utilized, would lower practical costs.
Yes, it has other downsides, plenty of them, and thus no one really uses it, but in a controlled environment it could be a good thing.
Then again, I learned some people actually keep their consoles sideways and whatnot, so if they can't get that right...

Let me make this clear. I bought my PS3 from Sony and my 360 from Microsoft. My hardware does not belong to anyone but me. It is not something I purchased from the makers of COD or Bioshock or any other game (except first party stuff, I suppose). COD does not have the right to use my hardware. Hell, Microsoft doesn't either. I bought the hardware from them, it's mine. Making machines do work does wear on them. It's like letting someone drive my car when I'm not using it. Let's say it's done so efficiently that the inside of the car looks the same, the gas is the same level, and it's always there when I'm going to use it. It's still grinding down the parts of the engine and the brakes and everything else. No one has the right to incur that cost on me without compensation or my permission. After I've purchased the console, I owe nothing to anyone else.
Yes, you described the current local hardware situation. This is irrelevant to the situation that would exist if distributed computing existed. You would buy hardware that you would own, but you would have to allow others to use it in order to use hardware from other users, just like they have to let you use theirs. That's the notion of sharing. Neither Microsoft nor whatever studio made the game you play has anything to do with it.
You know who lets someone use their transport when they're not using it? A taxi. And in turn the ride costs less than owning a car.
Yes, there is the wear factor, however, as I think someone pointed out to you already in another topic (correct me if I'm wrong, but I think he did quote you), the most wear and tear happens due to temperature changes and not actual usage in all non-moving parts (which is basically everything but the coolers and the HDD). You also get electricity surges turning it on and off as the capacitors fill up/drain, which contributes to wear and tear, hence it's not advised to turn things off if you're only leaving for an hour or so.
But in essence you are correct, it would get worn out, just like you wear out other people's hardware while you're playing. But if the hardware is made with quality, that is not a problem, as technology moves so fast nowadays that it will become obsolete before it wears out. My 100 MHz Pentium I still works like new, but I won't be using it since it's obsolete hardware already. Of course that means we can't use the cheap hardware made to live for 2 years and break, but then again we shouldn't be using that to begin with.
And you do get compensation, in the form of using other people's hardware when you are gaming. Whether you get more out of it than you lose really depends on how much and what you game, I guess.

Now let me ask you something you may not have considered. You say they use this while I'm not using it. Ok, now who gets to use it? Let's say 100 different games use this. Do all of them have an equal bid to use my processing power regardless of whether or not I've even purchased their game? Doesn't this mean that it's not unlikely to see my console getting used at the highest capacity they allow for distributed computing demands? Not to mention it using my bandwidth the entire time when I might very well be using bandwidth on another device.

They have no right to any of this.
Whoever needs to use it. The optimum way would be your processor being fully loaded, processing however many games it can at that time. It could be 1 or it could be 100, it really does not matter. As far as games are concerned, it could go two ways. If it is merely processing logical algorithms or whatever, it may not even need to know what game it is for, which means your game collection does not matter; but this is harder to do. The other way is that you already have the game engine on your box, so it needs a lot less info to know what to process and what to send back. This would also save bandwidth (which is irrelevant). That would limit it to your game collection and would bring inconsistency, since people have different amounts of games, but it would allow easier processing and could perhaps be utilized more efficiently.
Fiber optics make the bandwidth problem irrelevant. I don't have the best plan my ISP offers, so my fiber is throttled by the ISP, and I can still watch multiple HD streams and game online without an increase in ping if I choose to. Bandwidth becomes a non-problem with fiber optics. Once the infrastructure is in place, fiber is very cheap. It's the infrastructure that's costly.
The rights issue I addressed in the previous quote.

You can only get so realistic before you reach it. There is no step beyond real. We as humans can't even really tell the difference between high range FPS (frames per second, not first person shooter) at some point. There will be a day when the characters look like actors on the screen and there's nowhere to go once that's reached. My imagination isn't even capable of viewing things with the clarity of realism.
Surreal? I'm joking, of course.
Yes, you can't get better graphics than reality. But what I was arguing is that even if we double our computing power every year, it will still take hundreds of years before we truly get real simulation.
Heck, most humans can't tell the difference between 25 and 60 FPS (provided their equipment works correctly, because most "omg it lags at 30 fps" problems are because your GPU and your monitor don't work together). That is a limitation of our eyes, however, and not of reality. And at some point we will go into virtual reality and away from 2D monitors. Heck, we already need double the framerate for this "fake 3D" effect we are trying to make on our TVs.
There's nowhere to go once it's reached, but it will be reached when both you and I are long dead (unless those talks about immortality by 2040 are actually real, ech). So it's really not a wall we are hitting now where we could out-hardware our needs.
Besides, even if we reach real simulation we will just go... bigger. Simulating galaxies, you know, sort of how the X3 games try to do. But "more real".

This is merely because games are currently limited by hardware. That doesn't mean they can magically expand into the infinite. There are perhaps some types of games which could. Like a universe simulator that's meant to require more and more resources over time. But that'd only apply to a very specific type. But a game where you're in a town doing whatever the game has you in that town for isn't going to need to process the entire universe. It just needs to process your area and the surrounding areas to a point in such a way that the player can never get outside the given rendered area. That is to say, a finite and known amount of data needs to be processed. The only question is how detailed that data will be. When it is a finite arena, we can absolutely achieve a point where processing outstrips the possible needs of processing. As long as humans are limited by our senses and our brain, that's going to happen.
Open-world games. We are getting more and more of them not because people like them more, not because they are needed to tell a good story, but because we can, as in our hardware allows it. Plenty of linear shooters would be open-world if the hardware could handle it. Once we can, we will have pretty much every game simulating the whole area. The "never go outside the rendered area" is invisible walls due to technical limitations or, well, the game maker didn't put anything there.
We will eventually achieve a point where we outstrip the hardware limitations and can simulate a closed-in area. But at that point there is no need for the area to be closed in.
Eventually (think millions of years), if we still have video games at that point, they will all simulate whole universes even if everything happens in a single town. Why? Because they can. Why did humans go to the moon? Because they could.

Correction, HD video editing is more demanding than most video games at the moment.
Implying there is a significant portion of the populace that edits SD video. Even local TV networks edit their shows in HD and downscale them for SD transmission, while transmitting in HD for those who can receive it. You will hardly find a YouTube video from the last 2 years that is below 720p unless it is a very specific thing that only needs to show a small area. Video editing pretty much moved to HD some time ago. We are using HD, 2K, and now even 4K is starting to come around.
Humans just like to make things bigger when they can. Heck, not so long ago I remember people inventing special codecs because the average PC couldn't run the average video format - the format was too large. So they invented a codec that made the GPU do part of the CPU's work. Of course modern PCs can now decode 720p without trouble, but try running a 4K video and you will see the same problems.
We make things bigger when technology allows it; the world is not standing still while technology goes forward.
That's why games like The Last of Us look so impressive despite still having to fit in what is essentially a half-a-decade-old CPU with 512MB divided (unnecessarily) into two components.
I think it's worth mentioning that while, yes, RAM is the biggest failure of the current-gen consoles and they seem to be trying to fix that with the next gen, the CPU, while old, is not weak. The PS3's CPU, at full theoretical power, is faster than the current fastest PC CPU. Back then it was a mind-boggling thing. The thing is, no one actually used it fully; it's hell to program for. Not even Naughty Dog, a company working pretty much exclusively with this CPU for a decade, managed that. But they did more than anyone else could, and thus the game looked good in comparison (while still bad compared to how PC games look).
We occasionally get games made for PC that push the limits, but those are rare because console markets are very lucrative and the 360 at least requires somewhat minimal effort to port. So it's best to first make most games playable on the console and then just provide upscaling utilities for the PC, which only make the game look a little better without really changing the core mechanics that were limited by the console.
Yes, the average hardware - the current consoles - is the bottleneck on our ability to simulate.

This next console advancement should give us another significant leap in gaming demands and capabilities for around the same period of time. We'll also see pc technology get moved along faster because of this (since high-power pcs are only needed for gaming and video editing and a stagnating gaming market does slow things down). With the introduction of higher resolution HD gaming (and even 3D gaming) we'll see gaming demands meet or exceed video editing.
Only until we decide that editing 2K video is a good idea. But with video I've noticed a similar tendency as with music: at some point people just stop wanting better. You could make audio of such quality that a modern PC would choke during playback, but the point of it is lost, and something similar may happen with video. Games, however, I don't think will hit that point any time soon due to their interactivity.

I'm not sure how pointing to the past 10-15 years as word processing barely changing is proving any point. 15 years ago was 1998. Word processing software was first made in 1974. You stating that things haven't really changed fundamentally over the past 10-15 years is only proving my point that the demands of word processing are somewhat finite and computer technology has so drastically exceeded its needs that running a word processing application doesn't so much as cause the RAM consumption to noticeably rise on most machines.

So this comment may have been the most incorrect comment you've made in this thread. Word processing has evolved significantly over the past 40 years, and even in the past 10 years we've seen significant advances in word processing features, formats, and security. But at the end of the day it is just typing words on a page and maybe embedding images here and there. Some day, video games will be in the same position, where the environments are realistic and the individual can transit between environments without noticing it. That's all you need, and then processing has advanced beyond gaming. It could be ten years, could be fifty. But it will happen.
Fair enough, you are correct with word processing.

No, they are not catching up to hardware. They are increasing in quality and ergo processing demands, but it will never take 4GBs of RAM to play an audio file. As for movies, there are some advancements left to be made but we're getting to definitions that the human eye can't really distinguish between. Beyond that, what's the point?
Positive reinforcement is the point. We do things because we have the means, not because it is necessary. It could take 4 GB to play an audio file, though I doubt it will ever be the norm; we don't even have the technology to properly put that into sound (as in, no such speakers exist). Video, on the other hand, still has a long way to go. Human eyes are... underrated. We can distinguish a lot even without actually seeing it fully.
Yes, at some point we will get there, but it's not going to be soon.

Why would we simulate to the atomic level if humans can't see that? That aside, what you just established was a goal. This is a goal that can be reached and then surpassed. I made no allusion to this being near future necessarily. Just that there is a finite distance we can go before arriving at realism or a quality of gaming that is virtually indistinguishable from realism (the only thing that matters). With that knowledge in mind, that goal will eventually be reached.
Because physics. We can't see air, but we wouldn't be well off without it. There are plenty of things that could only be "real"ly simulated by simulating down to the atomic level.
As far as "virtually indistinguishable" goes, I've always had a problem with that, as it sounds more like PR "you can do anything" talk than anything else. Whenever they throw out the "your character can do everything" card I think: "Can you rape[footnote]Rape is just an easy example that no game ever allows, so it's easy to prove them wrong[/footnote] people? No? Then stop telling us we can." You either simulate reality, or you don't.

Not really, they render the full room now. Currently they delay the pop-in of textures. So in a complex game on limited hardware you can enter the room just fine, but the textures won't pop in until loaded. This is exceedingly apparent in games like RAGE, where the texture file is particularly large. Once loaded, however, it's there. It takes significantly more processing to tie video processing into player actions.
I consider a thing rendered when it is in the complete state the user is supposed to see, but that's semantics and you are correct here.

Interesting. I mentioned that game above. The thing is, it does really well. There are some areas where the texture doesn't load in time or you see white light along creases in the walls. But the game is pretty damn cool as far as graphics are concerned. Now imagine a machine that is 10x as powerful as a machine able to render that graphically advanced game.
Yes, and it took the most powerful CPU in the world to run it (OK, that's not fair, they didn't utilize even half of its power). 10 times better graphics is, well, just that: 10 times better. Not realistic. Not by a long shot. It's always sad when game makers strut around claiming their game looks realistic when, while beautiful, it's so far away from realism that they shouldn't even mention it. The best example is stones in games. They never look like stones.

It doesn't matter, the console market is what bottlenecks the game development process. Since PCs are an amalgamation of hardware components, those are upgraded regularly, and the moment a console's specs are set in place the average PC starts to catch up and then exceed it. But even after the PC exceeds it, the consoles still have a tremendous enough following (hundreds of millions of console owners) to demand that developers target their capabilities.
What I meant is that even a 10 times more powerful console won't bring us closer to realism, as we already have more powerful machines failing at that. At least that's what I think I meant.

I'm also not certain where you're getting the 1x capable of PC Skyrim. The average computer is still stuck at 4GB thanks to how long 32-bit OSes kept us tethered to that number. And a console, even an x86 one, can't be compared on raw numbers. Consoles allow for optimization in a way that the Frankenstein's monster that is the PC, a cobbling of various non-standard hardware, is entirely unable to. This is why the minimum requirement for PC Skyrim was 2GB of RAM on a relatively modern CPU/video card, yet it was still playable on a 512MB machine with 6-year-old CPU/GPU combinations. A computer with 8GB of RAM is not necessarily comparable with the consoles. A sufficiently powerful video card would ultimately decide whether or not it is.
Minimum requirements and maximum graphics are different things on PC. And while yes, 32-bit OSes have kept us at 4 GB (actually, funnily enough, I've seen plenty of PCs for sale with 8 GB and a 32-bit OS, what the hell?), that by no means makes it the right comparison to make.
You also make the mistake many console gamers do when they think of PCs: they aren't some Frankenstein monstrosities unless the owner makes them so. The "name" ones have standard equipment that is all connected with standard buses, and they even share hardware drivers across many generations. This is no longer 2000; you can build a PC by trial and error and it would work. Heck, as Intel has announced they are going to merge CPUs and motherboards, the hardest part - adding a CPU - is now gone too.
PCs beat consoles in all parameters except raw theoretical[footnote]never put in practice[/footnote] calculating speed. And that's okay, consoles were never meant to be the top hardware runners. But a console is gaming hardware, and thus should be compared to average gaming hardware, just like you don't compare an average phone with an average calculator just because the phone has a calculator option too. And the average gaming PC is the one that's 1x capable of Skyrim with the HD texture pack.

AI and physics play into graphics. The way a ball moves or wood splinters makes the world look more graphically impressive.
I would agree that some game makers consider physics part of graphics, but plenty of them tout that "they are also doing physics", so I don't know.
As far as AI goes, I meant things like enemy soldier AI, not splinter-effect AI - as in, the world not only looks but acts real.

Your brain makes constant calculations about the world around it. It takes note when things don't behave like it knows they should and breaks what's known as atmosphere (aka, real immersion). As such, properly reacting NPCs and correct physics do more for "graphics" than just throwing more polygons will.
That's a very... untraditional way to look at AI. But I guess by that logic it would be part of graphics, just like EVERYTHING else in the game.

How the Hell does this save the consumer money? By all means, tell me how shifting the cost of a server farm onto local machines will mean any more money in the consumers' wallets. The consoles themselves will be more expensive, the price of games isn't going to drop below $60, and the wear on the console will be heavier, which will mean a shorter lifespan and possible replacement cost of an entire console. This, again, at no perceivable benefit to the end user.
Right now you use local hardware. That means you need hardware at least powerful enough to run the heaviest calculations in any game you play (or you'll lag, duh). This costs a lot of money. Now, if you could buy hardware a third as powerful and let the other 2/3 of the processing be done on other people's consoles while they are at work, while yours works when you're at work or whatever, you would only need to buy 1/3 of the hardware. That costs less money.
Hence it is logical that by buying less costly hardware you save money.
You don't get free server farms. You pay for them, through the $60 game price. If by sharing console power you could do away with the server farms, the price of the game would be lower (that is, unless game makers decide to get richer, which they will, but then we can just write off anything with "they will just be corrupted anyway").
I addressed wear and tear before.
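For what it's worth, the arithmetic behind that 1/3 claim only holds under the post's own optimistic assumptions: every console sits idle two thirds of the day, idle capacity pools perfectly, and the coordination, latency, and wear costs raised above are ignored. A toy Python sketch of just that arithmetic (the numbers are illustrative, not measurements):

# A game needs 3 "units" of hardware at peak; each owner plays 8 of 24 hours.
peak_load_units = 3
hours_playing_per_day = 8

# Without sharing, everyone buys the full peak capacity.
cost_without_sharing = peak_load_units

# With perfect sharing, your box only supplies its average share, because two
# idle consoles elsewhere cover the rest of your peak while you play.
utilization = hours_playing_per_day / 24            # 1/3
cost_with_sharing = peak_load_units * utilization   # 1 unit

print(cost_without_sharing, cost_with_sharing)      # 3 vs 1.0 hardware units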

That's highly specific to your region, wherever it is. The average speed in the US is still around 10mbps. That is getting better, but that's still it.
Yes, currently it is. But barring an apocalypse, the world will catch up.

Comcast has enjoyed 30+ years of almost complete monopoly in my city. Such is the result here and in any other area where this is true. Even in cities where there are only a few providers, I wouldn't be surprised if we learned of price fixing or collusion to keep the quality of service low.
Funny thing about monopolies and Google: remember when the monopolies sued Google for laying fiber and competing, claiming that the state should ban Google from providing service because the locals offer inferior service and can't compete?
Once the monopolies are rich enough to buy laws, they remain monopolies. I've noticed that the best scenario for consumers is 3 huge companies. 2 can easily fix prices, but with 3 you will get at least 1 of them trying to profit by stealing the others' customers, and yet 3 companies is still large enough to provide the massive infrastructure and service variety that small and local businesses can't.
 

Lightknight

Mugwamp Supreme
Nov 26, 2008
4,860
0
0
Strazdas said:
But you've already got thousands of pieces of hardware distributed in people's homes, sitting idle most of the time. That's a huge waste of processing power, which, if utilized, would lower practical costs.
Processing cost is the amount of processing required to render or calculate or whatever. It isn't hardware cost so much as it is wear and tear incurred on the existing hardware. So a word document that takes up 4MB of RAM to perform an action will still amount to no less than a total of 4MB across multiple machines. In all honesty, it'll usually amount to more in total.
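A toy illustration of that claim, with invented overhead numbers rather than measurements: splitting one job across several machines never shrinks the total work, and per-machine coordination overhead (duplicated state, message handling) tends to grow the aggregate.

def total_ram_mb(job_mb, machines, per_machine_overhead_mb=0.5):
    """RAM consumed across the whole system when one job is split evenly."""
    per_machine_share = job_mb / machines
    return machines * (per_machine_share + per_machine_overhead_mb)

print(total_ram_mb(4, 1))   # 4.5 MB on a single machine
print(total_ram_mb(4, 4))   # 6.0 MB in aggregate across four machines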

As for the cost, I'm pretty damn sure that every one of those consoles cost money to make. Using the individually purchased client machines would actually be MUCH higher hardware cost. Just because the developers/publishers don't see the cost doesn't mean it isn't there. It's just shifted onto consumers now whether or not they're even buying the game.

And again, these are not a developer's machines, nor a gaming company's machines, nor any business's machines unless some company bought a few. These are individuals' hardware, and no one else has any more claim to it than they have a right to use my car when I'm not using it.

Yes, you described the current local hardware situation. This is irrelevant to the situation that would exist if distributed computing existed. You would buy hardware that you would own, but you would have to allow others to use it in order to use hardware from other users, just like they have to let you use theirs. That's the notion of sharing. Neither Microsoft nor whatever studio made the game you play has anything to do with it.
You know who lets someone use their transport when they're not using it? A taxi. And in turn the ride costs less than owning a car.
The taxi driver or the taxi company owns the taxi, not the person renting it. Anyone renting it also compensates the taxi driver. Anyone driving the taxi either owns it, is an employee of the taxi owner, or is a car thief.

What you're describing is a scenario in which we buy hardware and allow corporations (not consumers) to use them to support their product without our consent or compensation. Why should I provide a free service for a corporation who is charging me for theirs?

Yes, there is the wear factor, however, as I think someone pointed out to you already in another topic (correct me if I'm wrong, but I think he did quote you), the most wear and tear happens due to temperature changes and not actual usage in all non-moving parts (which is basically everything but the coolers and the HDD). You also get electricity surges turning it on and off as the capacitors fill up/drain, which contributes to wear and tear, hence it's not advised to turn things off if you're only leaving for an hour or so.
The HDD and coolers would need to be used in the process. Please explain why a company should be able to use my hardware and my bandwidth to support their software instead of getting their own server farm? ESPECIALLY since a server farm is more efficient at the process than distributed processing.

Fiber optics make the bandwidth problem irrelevant. I don't have the best plan my ISP offers, so my fiber is throttled by the ISP, and I can still watch multiple HD streams and game online without an increase in ping if I choose to. Bandwidth becomes a non-problem with fiber optics. Once the infrastructure is in place, fiber is very cheap. It's the infrastructure that's costly.
That's swell, but the average American is looking at 10mbps, not your 100. Even at 100mbps, why should another company get to use the internet I pay for to run their business? Why should the next GTA get to use my hardware and my bandwidth without compensating me? For that matter, why should any game company, especially those I'm not buying games from, get to use me as their personal server farm without my consent? Maybe I don't want nude beach volleyball online getting supported by my console? It simply isn't anyone's responsibility but the development studio's.

You are also continuing to dismiss the statement that distributed processing is less efficient than a server farm. Even with fiber optics, a server farm also becomes that much more efficient, while distributed processing will always have the issues unique to it that aren't magically waved away by better infrastructure or hardware. No matter how fast the connection is, you're still talking about multiple connections and multiple breaking points, and the slowest of them sets the speed you go at. Why do you think that benefits us? It only hurts the consumer. The only group who would say this is a good idea would be publishers/developers, so they don't have to run servers for it.

Until you address the fact that distributed computing is less efficient than server farm processing where $10,000 in hardware gets you far more than $10,000 in consoles, the discussion of consumer rights is null. You have not known lag until distributed computing is applied to real-time gaming. Even if everyone involved had an instantaneous connection, it would not be better than one central server which would also be benefitting from this "future" internet infrastructure.
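As a rough sketch of that weakest-link point (the latencies below are invented for illustration, not benchmarks): if a frame's worth of work is sharded across peer consoles, you wait for the slowest shard before the frame can be assembled, whereas a dedicated server is a single round trip.

import random
random.seed(1)

def peer_latency_ms():
    # Home connections vary widely; model a round trip as 30-250 ms.
    return random.uniform(30, 250)

def distributed_frame_time(num_peers):
    # The frame is only ready when the slowest participating console replies.
    return max(peer_latency_ms() for _ in range(num_peers))

server_round_trip_ms = 40   # assumed datacenter round trip
print(f"single server : {server_round_trip_ms} ms")
for n in (4, 16, 64):
    print(f"{n:3d} peers     : {distributed_frame_time(n):6.1f} ms")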

Also, don't neglect the idea that with improved infrastructure we may just see the amount of information being transmitted increase. Well, by "may" I really mean that we WILL. So just because 100 mbps seems huge to most of us now (albeit not to you), doesn't mean that it'll be huge once companies can rely on faster speeds.

Yes, you can't get better graphics than reality. But what I was arguing is that even if we double our computing power every year, it will still take hundreds of years before we truly get real simulation.
I assume you're talking about something like the holodeck. For now, let's discuss video games with the possible inclusion of something like the Rift for visual virtual reality. The ability to provide sensory input like touch or smell is limited by the interface, not necessarily processing. It is a guess at best to assume it would require some tremendous amount of processing rather than just additional processing.

Interestingly enough, are you familiar with the first successful human to human interface [http://www.washington.edu/news/2013/08/27/researcher-controls-colleagues-motions-in-1st-human-brain-to-brain-interface/] in which one guy controlled another guy's finger over the internet? What's interesting about that is because the signal was transmitted digitally, there's no reason why signals like that can't be recorded and replayed on demand. This could be a future where sensory input is indeed able to be transmitted digitally and that actually may not require as much processing as you may think so much as pulling up a pre-recorded message at the right time. So you are likely off about the amount of processing it could require with the biggest limitation being the human interface rather than the processing capabilities of the time.

Heck, we already need double the framerate for this "fake 3D" effect we are trying to make on our TVs.
There's nowhere to go once it's reached, but it will be reached when both you and I are long dead (unless those talks about immortality by 2040 are actually real, ech). So it's really not a wall we are hitting now where we could out-hardware our needs.
Double the framerate means a lot less when the monitor is the size of a PSP screen. You'd be surprised what a PC can handle at high framerates on a much smaller screen. The trick of the Oculus Rift is that it mimics the human perspective by offsetting the view. Simple and brilliant at the same time.
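A minimal sketch of that offset-view idea, assuming a hypothetical render_scene call and a typical eye separation of roughly 64 mm; a real HMD runtime obviously does far more (per-eye projection, lens distortion correction, and so on).

IPD_METERS = 0.064   # assumed interpupillary distance, ~64 mm

def eye_positions(head_pos, right_axis):
    """Return (left_eye, right_eye) world positions for a given head pose."""
    half = IPD_METERS / 2
    offset = tuple(half * c for c in right_axis)
    left = tuple(h - o for h, o in zip(head_pos, offset))
    right = tuple(h + o for h, o in zip(head_pos, offset))
    return left, right

def render_scene(camera_pos):
    print(f"rendering scene from camera at {camera_pos}")  # placeholder draw call

head = (0.0, 1.7, 0.0)   # standing eye height, facing down the -z axis
left_eye, right_eye = eye_positions(head, right_axis=(1.0, 0.0, 0.0))
render_scene(left_eye)   # one image per eye, each shown only to that eye
render_scene(right_eye)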

Besides, even if we reach real simulation we will just go... bigger. Simulating galaxies, you know, sort of how the X3 games try to do. But "more real".
It ultimately doesn't matter as long as we can only see what we're looking at during any given time.

Hmm, is that what X3 is? I guess I'll pick up the humble bundle then [https://www.humblebundle.com/weekly]. I'd been on the fence about it.

Open-world games. We are getting more and more of them not because people like them more, not because they are needed to tell a good story, but because we can, as in our hardware allows it. Plenty of linear shooters would be open-world if the hardware could handle it. Once we can, we will have pretty much every game simulating the whole area. The "never go outside the rendered area" is invisible walls due to technical limitations or, well, the game maker didn't put anything there.
We will eventually achieve a point where we outstrip the hardware limitations and can simulate a closed-in area. But at that point there is no need for the area to be closed in.
Eventually (think millions of years), if we still have video games at that point, they will all simulate whole universes even if everything happens in a single town. Why? Because they can. Why did humans go to the moon? Because they could.
Having larger environments doesn't mean an exponentially increasing processing demand. Again, you're limited by how much a human can take in at one time. For example: let's say you're standing on top of the tallest building in a city and can see 20 miles in any direction. What's the point of processing 21 miles out? The other side of the mountain? The other cities and countries or anything else? Why process them in full detail? That would be bad coding. You can have objects change, but when they aren't being viewed by the player a lot less processing demand is in place. There's a significant difference between assets on a list in the software being tracked on pen and paper versus rendering them for a player who isn't even there. As the player begins to move toward other areas, the game begins to load up the objects well before the player gets to them. This is how Skyrim functioned. The player entering a new cell triggered events before the player even got there. Keeping track of more objects does demand more processing, but not exponentially.
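Here's a minimal sketch of that cell-streaming idea (a hypothetical grid and load radius, not Bethesda's actual implementation): only cells near the player are held in detail, while the rest of the world stays as cheap bookkeeping, so cost grows with what the player can reach soon rather than with the size of the whole world.

LOAD_RADIUS = 1   # keep the player's cell plus its immediate neighbours loaded

def cells_to_load(player_cell, world_cells):
    px, py = player_cell
    return {
        (x, y) for (x, y) in world_cells
        if abs(x - px) <= LOAD_RADIUS and abs(y - py) <= LOAD_RADIUS
    }

world = {(x, y) for x in range(10) for y in range(10)}   # a 10x10 cell grid
loaded = cells_to_load(player_cell=(4, 4), world_cells=world)
print(len(world), "cells tracked,", len(loaded), "rendered in detail")
# -> 100 cells tracked, 9 rendered in detail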

Not even Naughty Dog, a company working pretty much exclusively with this CPU for a decade, managed that. But they did more than anyone else could, and thus the game looked good in comparison (while still bad compared to how PC games look).
Naughty Dog is owned by Sony, it is not a separate company. But that only proves your point. They have been Sony-owned since 2001. I don't know when the switch flipped, but they've quickly become one of the best studios I've seen. Their command of coding coupled with their excellent writing has impressed me a lot.

Only until we decide that editing 2K video is a good idea. But with video I've noticed a similar tendency as with music: at some point people just stop wanting better. You could make audio of such quality that a modern PC would choke during playback, but the point of it is lost, and something similar may happen with video. Games, however, I don't think will hit that point any time soon due to their interactivity.
There's a difference between wanting better and needing it. Better is a relative change, and if you cannot tell any difference between what you have and "better", then why is it necessary? We are limited by our senses. It makes no sense to create an audio file that captures ranges we can't hear or videos at a definition beyond what the human eye can perceive. So no, the highest quality that humans can actively perceive is not really going anywhere. It is a stable amount of quality that, if we have not already reached it, we will soon.
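A back-of-the-envelope version of that point for audio, using the commonly cited ~20 kHz upper limit of human hearing and the Nyquist rule that a sample rate of f captures frequencies up to f/2: CD-quality 44.1 kHz already covers the audible range, so ever-higher rates mostly add content nobody can hear.

HEARING_LIMIT_HZ = 20_000   # commonly cited upper bound of human hearing

for sample_rate in (44_100, 96_000, 192_000):
    nyquist = sample_rate / 2
    headroom = nyquist - HEARING_LIMIT_HZ
    print(f"{sample_rate:>7} Hz sampling -> content up to {nyquist / 1000:.1f} kHz "
          f"({headroom / 1000:+.1f} kHz beyond the hearing limit)")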


Positive reinforcement is the point. We do things because we have the means, not because it is necessary. It could take 4 GB to play an audio file, though I doubt it will ever be the norm; we don't even have the technology to properly put that into sound (as in, no such speakers exist). Video, on the other hand, still has a long way to go. Human eyes are... underrated. We can distinguish a lot even without actually seeing it fully.
Yes, at some point we will get there, but it's not going to be soon.
I think we're a lot closer than you suspect.

But no, we don't do a lot of things just because we have the means. Creating a file that is entirely indistinguishable from another file except numerically is not something that deserves to be done en masse. What we're talking about is the average computer here, and that implies the average machine's function. Why would the average machine play music that is well outside the realm of human senses? It's not that we can't make it, just that we would never have content generated at those levels of video or sound quality. It would cost unnecessary time and money in so many areas to do... nothing.

We would literally have to change human senses and these new definitions and sounds would have to be something we actually like.

Because physics. We can't see air, but we wouldn't be well off without it. There are plenty of things that could only be "real"ly simulated by simulating down to the atomic level.
The atomic level would only matter if you're virtually looking into a microscope. Even then, that would be detail in an extremely limited range. Video games already portray air just fine.

You should understand the difference between physics as we know it and quantum physics (mechanics). Objects at an atomic (microscopic) level behave differently than they do at the macro level we all interact in. In other words, our physics often do not apply to quantum physics. So I believe you're being unrealistic here. If we've learned anything from the double slit experiment it's that our own universe doesn't process atomic level actions in detail unless we're observing it...

Yes, and it took the most powerful CPU in the world to run it (OK, that's not fair, they didn't utilize even half of its power). 10 times better graphics is, well, just that: 10 times better. Not realistic. Not by a long shot. It's always sad when game makers strut around claiming their game looks realistic when, while beautiful, it's so far away from realism that they shouldn't even mention it. The best example is stones in games. They never look like stones.
If realism is relative in technological projection towards actual realism then the statement is correct. But yes, I've seen very few examples that I thought actually looked like real life. The characters from the Last of Us are certainly getting there though. The environments have come a long way too. We'll have to see what devs do with the next gen.

And again, a powerful CPU doesn't really matter any more. The PS4's GPU and RAM combination FAR exceeds anything the previous CPU could do, and does it more efficiently, now that processors are glorified switchboard operators for the faster/cheaper processing the RAM/GPU provides.

What I meant is that even a 10 times more powerful console won't bring us closer to realism, as we already have more powerful machines failing at that. At least that's what I think I meant.
The average console/pc being weaker bottlenecks production. A full fledged development team isn't going to spend years of development on a video game that only 1% of gaming computer owners can play. At the very least it'll be playable by most pcs and more than likely will be designed with consoles in mind.

With the average machine being brought up by the console market, we'll begin to see new game engines that push physics, AI, and graphics to new levels. Them all being x86 will do wonders to make those engines easier to apply to other machine types that are also x86. These new engines will then be sold/licensed to other development studios, who will use them to render their own stories while the original company that made the engine gets to work on the next and better version of it.

The reason why humans are bottlenecked like this is because this kind of work is expensive in every way imaginable. There being a profit at the end of the line is the only thing that coordinates such large group efforts. In the end, it will benefit everyone.

Minimum requirements and maximum graphics are different things on PC. And while yes, 32-bit OSes have kept us at 4 GB (actually, funnily enough, I've seen plenty of PCs for sale with 8 GB and a 32-bit OS, what the hell?), that by no means makes it the right comparison to make.
I'm not saying it is. I'm saying that the average machine is actually 4GB. I'm not saying that based on the minimum/maximum requirements of some game. I'm saying that based on consumer reports and looking at the average PC still being sold on sites like Amazon. 4GB machines still account for more than half of the market.

You also make the mistake many console gamers do when they think of PCs: they aren't some Frankenstein monstrosities unless the owner makes them so. The "name" ones have standard equipment that is all connected with standard buses, and they even share hardware drivers across many generations. This is no longer 2000; you can build a PC by trial and error and it would work. Heck, as Intel has announced they are going to merge CPUs and motherboards, the hardest part - adding a CPU - is now gone too.
PCs beat consoles in all parameters except raw theoretical[footnote]never put in practice[/footnote] calculating speed. And that's okay, consoles were never meant to be the top hardware runners. But a console is gaming hardware, and thus should be compared to average gaming hardware, just like you don't compare an average phone with an average calculator just because the phone has a calculator option too. And the average gaming PC is the one that's 1x capable of Skyrim with the HD texture pack.
Actually, I'm thinking of PCs like an IT person thinks of them. Need more disk space? Throw in a new or second HDD/SSD/hybrid. Video card not powerful enough? Replace it, as long as the motherboard and power supply can handle it. Etc.

It doesn't matter. There are too many companies and too many names and too many ways to customize them. From the market perspective, there is NO standard. I think Valve is about to announce something that will change that by giving us the equivalent of a "console" pc for the living room. That'll be interesting and can result in a product that is not limited to traditional console markets if they allow for tiered versions of the steam box.

I would agree that some game makers consider physics part of graphics, but plenty of them tout that "they are also doing physics", so I don't know.
As far as AI goes, I meant things like enemy soldier AI, not splinter-effect AI - as in, the world not only looks but acts real.
That's what I meant about AI. I just talked about the other things as part of my physics side of things. AI has been advancing. You should see the difference between Black Ops 1 bots and Black Ops 2 bots.

That's a very... untraditional way to look at AI. But I guess by that logic it would be part of graphics, just like EVERYTHING else in the game.
I said AI AND Physics. I was talking about both at the same time. So if a ball doesn't fall at the right speed your mind notices and something seems off. The same actually is true with AI though. If a person doesn't respond to something correctly then it also seems off. Like a person hiding from you behind a crate when they should know damn well that you have a clear shot at them.

Yes, currently it is. But barring an apocalypse, the world will catch up.
So will the demands of information being transmitted.

Funny thing about monopolies and Google: remember when the monopolies sued Google for laying fiber and competing, claiming that the state should ban Google from providing service because the locals offer inferior service and can't compete?
Once the monopolies are rich enough to buy laws, they remain monopolies. I've noticed that the best scenario for consumers is 3 huge companies. 2 can easily fix prices, but with 3 you will get at least 1 of them trying to profit by stealing the others' customers, and yet 3 companies is still large enough to provide the massive infrastructure and service variety that small and local businesses can't.
There's always also the hope of an altruistic company. Someone like an Elon Musk who doesn't just want to make money but also make the world a better place. Heralding the future. Shame that money and that kind of mindset don't always run hand in hand. Wish I could help those kinds of people along in some way.
 

Strazdas

Robots will replace your job
May 28, 2011
8,407
0
0
Our posts are getting looong :D


Lightknight said:
Processing cost is the amount of processing required to render or calculate or whatever. It isn't hardware cost so much as it is wear and tear incurred on the existing hardware. So a word document that takes up 4MB of RAM to perform an action will still amount to no less than a total of 4MB across multiple machines. In all honesty, it'll usually amount to more in total.
But spread over 4 machines it will only amount to a 1 MB cost on your machine. Thus buying 4 MB of RAM when 1 MB is enough is an unnecessary expense.
And since we already know that plenty of people waste their processing plenty of the time, as in the machine is on and sitting idle (still wearing down, though), you can save costs by utilizing them for processing someone else needs (meanwhile you also utilize someone else's idle machine).



Lightknight said:
As for the cost, I'm pretty damn sure that every one of those consoles cost money to make. Using the individually purchased client machines would actually be MUCH higher hardware cost. Just because the developers/publishers don't see the cost doesn't mean it isn't there. It's just shifted onto consumers now whether or not they're even buying the game.
I'm not sure what you mean there. You argue that buying more hardware is costly, against me saying that more hardware is costly.


Lightknight said:
And again, these are not a developer's machines. Nor are they a gaming company's machines. These aren't any business's machines unless some company bought a few. These are individuals' hardware, and no one else has a claim to it any more than they have a right to use my car when I'm not using it.
Car sharing is an example. You can buy a $50,000 car and use it yourself for 2 hours a day. Or 5 people can buy 5 pieces of a car (let's imagine that's technically possible), $10,000 each, and each use each other's pieces for 2 hours a day, totaling 10 hours a day of use, but the cost you paid is only $10,000 instead of $50,000 and you still got to drive where you wanted.

What you're describing is a scenario in which we buy hardware and allow corporations (not consumers) to use them to support their product without our consent or compensation. Why should I provide a free service for a corporation who is charging me for theirs?
No. We do an exchange of services between consumers to use each other's hardware. Such a system is made possible by the corporation that designed the game to work this way, as opposed to how it works now.

The HDD and coolers would need to be used in the process. Please explain why a company should be able to use my hardware and my bandwidth to support their software instead of getting their own server farm? ESPECIALLY since a server farm is more efficient at the process than distributed processing.
The company does not use your hardware; other users do. Or other users could make their own servers, and you would have to as well since they won't be sharing either, making it more costly for both them and you.

Lightknight said:
That's swell, but the average American is looking at 10mbps, not your 100. Even at 100mbps, why should another company get to use the internet I pay for to run their business? Why should the next GTA get to use my hardware and my bandwidth without compensating me? For that matter, why should any game company, especially those I'm not buying games from, get to use me as their personal server farm without my consent? Maybe I don't want nude beach volleyball online getting supported by my console? It simply isn't anyone's responsibility but the development studio's.
Like I said, this is only possible when we have good internet infrastructure. Obviously, there is no such thing in America.
You get compensated - you get to use their hardware. Or do you think they should let you use their hardware without compensation?
And companies are not using it, other users are. It's basically a choice of "share" or buy your own server farm. Either way you pay the costs; with sharing, the costs are just lower.

Lightknight said:
You are also continuing to dismiss the statement that distributed processing is less efficient than a server farm. Even with fiber optics a server farm is also made that much more efficient, while distributed processing will always have the issues unique to it that aren't magically waved away by better infrastructure or hardware. No matter how fast the connection is, you're still talking about multiple connections and multiple breaking points, the slowest of which sets the speed you go at. Why do you think that benefits us? It only hurts the consumer. The only group who would say this is a good idea would be publishers/developers so they don't have to run servers for it.
It is true, distributed computing will always have a huge problem - humans. People who do not want to do the sharing and only want to use other people's services without contributing themselves. You can call them weakest links, egoists, whatever. They are the biggest problem and the reason why distributed computing has failed so far.
We may be using different definitions of efficiency. I consider lower costs, the need to manufacture less hardware, and more optimal usage of local hardware to be factors in efficiency.



Lightknight said:
Also, don't neglect the idea that with improved infrastructure we may just see the amount of information being transmitted increase. Well, by "may" I really mean that we WILL. So just because 100 mbps seems huge to most of us now (albeit not to you), does not mean that it'll be huge once companies can rely on faster speeds.
We already do. We got things like HD streaming, we got things like server-side calculations, and plenty of other things we were unable to do back in dial-up times. Thing is, differently from what I argued for hardware, we have far outrun the demands with how much we can provide now. 100 Mbps is slow for a fiber optic cable; we have the ability to give home users 1 Gbps now at economical cost. The industry would need a lot of catching up to do to make full use of such bandwidth.
Though it could be that they are not using it because most people don't have it yet, sort of like how console hardware held down game hardware requirements.
Also, what makes you think distributed processing won't be that new information being transmitted?


Lightknight said:
I assume you're talking about something like the holodeck. For now, let's discuss video games with the possible inclusion of something like the Rift for visual virtual reality. The ability to provide sensory input like touch or smell is limited by the interface, not necessarily processing. It is a guess at best to assume it would require some tremendous amount of processing rather than just additional processing.
I'd prefer Caprica's VR to the holodeck or the Rift. But I meant actual processing of reality, not sensory input. You need to process the world before you can provide the necessary inputs.

Lightknight said:
Interestingly enough, are you familiar with the first successful human to human interface in which one guy controlled another guy's finger over the internet? What's interesting about that is because the signal was transmitted digitally, there's no reason why signals like that can't be recorded and replayed on demand. This could be a future where sensory input is indeed able to be transmitted digitally and that actually may not require as much processing as you may think so much as pulling up a pre-recorded message at the right time. So you are likely off about the amount of processing it could require with the biggest limitation being the human interface rather than the processing capabilities of the time.
Yes I am, and I think it's a great step forward, even if it is a half-working prototype that so far can barely move an object on screen after multiple tries and sends signals only to willing and anticipating participants.
When I said simulating reality I didn't mean a virtual reality interface. I meant actually simulating it.

Lightknight said:
Double the framerate means a lot less when the monitor is the size of a PSP screen. You'd be surprised what a PC can handle at high framerates on a much smaller screen. The trick of the Oculus Rift is that it mimics the human perspective by offsetting the view. Simple and brilliant at the same time.
I know how resolution affects framerate. What I meant is framerate on TVs, as in, those 2K resolution ones.

Lightknight said:
It ultimately doesn't matter as long as we can only see what we're looking at during any given time.

Hmm, is that what X3 is? I guess I'll pick up the humble bundle then. I'd been on the fence about it.
But will we only see what we're looking at? We already get a lot of info about the game world around us on the GUI. What if, not limited by monitor space, we could know a lot more about the world around us? That world would have to be simulated.

Yeah, do pick up the Humble Bundle. I already did; I've heard amazing things about these.
But you have to realize it is a LONG game. As in, a "quick way to get a strong ship" guide details basic things... that take 8 hours even using the in-game fast forward.

Having larger environments doesn't mean an exponentially increasing processing demand. Again, you're limited by how much a human can take in at one time. For example, let's say there is a city in a country that you're in. Let's say you're standing on top of the tallest building in that city and can see 20 miles in any direction. What's the point of processing 21 miles out? The other side of the mountain? The other cities and countries or anything else? Why process them in full detail? That would be bad coding. You can have objects change, but when they're not being viewed by the player a lot less processing demand is in place. There's a significant difference between assets on a list in the software being tracked on pen and paper versus rendering them for a player that isn't even there. As the player begins to move towards other areas the game begins to load up the objects well before the player gets to them. This is how Skyrim functioned. The player entering a new cell triggered events before the player even got to them. Keeping track of more objects does demand more processing, but not exponentially.
I don't think we know how much a human can take in at one time. So far we have not tried to inject knowledge into our brains; we wouldn't know where to start.
The point of processing 21 miles out is processing the life of the NPCs there (or other players if it's MP), so that NPCs on their regular routine would, say, go out of their house, enter their car, and drive towards an area that a player can see, in a routinely realistic fashion.
If we want to actually reach realism, we have to process it all in full detail, sadly. What we can save on is rendering - the creation of visible textures is not necessary. But processing still needs to happen, physics included, or else we will just see cars "spawn" and so on, except very far off screen. That's not realism.
And while the current gen sadly focused on the least-crappy-looking textures they could come up with, actual AI and physics are more demanding.

Naughty Dog is owned by Sony; it is not a separate company. But that only proves your point. It has been Sony-owned since 2001. I don't know when the switch flipped, but they've quickly become one of the best studios I've seen. Their command of coding coupled with their excellent writing has impressed me a lot.
Besides the point. I just wanted to point out that it took almost a decade for a company working exclusively with this CPU to even tackle part of its potential.

There's a difference between wanting better and needing it. Better is a relative change, and if you cannot tell any difference between what you have and "better" then why is it necessary? We are limited by our senses. It makes no sense to create an audio file that manages ranges we can't hear or videos at a definition above what the human eye can perceive. So no, the highest quality that humans can actively perceive is not really going anywhere. It is a stable amount of quality that, if we have not already reached it, we will soon.
Only as long as we are limited by our very limited eyeballs. As technology progresses we will undoubtedly find a way to enhance our input abilities.
Ever heard the saying "I feel like I'm seeing X-rays"?
And of course there is a difference between wanting and needing. We don't need cars. We don't need TVs. Heck, we don't even need games. But we want them, and we have the means, so we make them.

But no, we don't do a lot of things just because we have the means. Creating a file that is entirely indistinguishable from another file except numerically is not something that deserves to be done en masse.
But we do that, as a form of encrypting and hiding files.

What we're talking about is the average computer here, and that implies the average machine's function. Why would the average machine play music that is well outside the realm of human senses? It's not that we can't make it, just that we would never have videos being generated and played at those levels of video or sound quality. It would cost unnecessary time and money in so many areas to do... nothing.

We would literally have to change human senses, and these new definitions and sounds would have to be something we actually like.
You answered your own question there.

The atomic level would only matter if you're virtually looking into a microscope. Even then, that would be detail in an extremely limited range. Video games already portray air just fine.
Or if you want to simulate it realistically, with realistic physics and AI working on it, like fluid dynamics in air.
And no, I don't agree that video games portray air just fine. In fact I don't think they portray anything just fine. I want better, and there are plenty of people like me. Don't confuse that with me thinking this is the most important part of the game, no. I can play a game from SNES times and enjoy it, but if given the choice would I want better simulation? Of course.

If we've learned anything from the double slit experiment it's that our own universe doesn't process atomic level actions in detail unless we're observing it...
Do tell more!

The average console/PC being weaker bottlenecks production. A full-fledged development team isn't going to spend years of development on a video game that only 1% of gaming computer owners can play. At the very least it'll be playable by most PCs and more than likely will be designed with consoles in mind.

With the average machine being brought up by the console market, we'll begin to see new game engines that push physics, AI, and graphics to new levels. Them all being x86 will do wonders to make those engines easier to apply to other machine types that are also x86. These new engines will then be sold/licensed to other development studios who will then use them to render their own stories while the original company that made the engine gets to work on the next and better version of it.

The reason why humans are bottlenecked like this is because this kind of work is expensive in every way imaginable. There being a profit at the end of the line is the only thing that coordinates such large group efforts. In the end, it will benefit everyone.
Yes, costs here are a bottleneck for game engines. However, with this "make one engine, sell to many" approach that the Unreal Engine proved possible, we can keep those costs down, and this is why we will be able to do simulations at a far more acceptable price than just bloating budgets to billions.
But to do that, we must raise the average hardware. More than it is being raised now.

Actually, I'm thinking of PCs like an IT person thinks of them. Need more disk space? Throw in a new or second HDD/SSD/hybrid. Video card not powerful enough? Replace it as long as the motherboard and power supply can handle it. Etc.
And that's... well, not bad... but it won't let you see the perspective of the average user.
There is no single standard configuration, of course, but there is plenty of standardization done for PCs now, so much that a person who has never had experience with PC hardware could build one and succeed. But the myth of "I must keep tinkering with hardware" continues, despite the fact that you can just buy an off-the-shelf PC and have it run for 5 years without anything more than blowing the dust out. Heck, I've seen people who don't even do that, but then they bring it to me asking why it doesn't work and I have to scrape off the dust in order to even see if there's a problem.

What steam will do is going to be interesting to see, and Valve may just be popular enough to pull it off.

There's always also the hope of an altruistic company. Someone like an Elon Musk who doesn't just want to make money but also make the world a better place. Heralding the future. Shame that money and that kind of mindset don't always run hand in hand. Wish I could help those kinds of people along in some way.
Hope, yes; realistic expectation, unlikely. These companies are often small and unheard of because that's not a very... lucrative model. It's nice that they exist, but they will never be on top, sadly.
 

Lightknight

Mugwamp Supreme
Nov 26, 2008
4,860
0
0
Strazdas said:
Our posts are getting looong :D
Yes. I've even been weeding out parts of our discussion that I don't feel are central and going anywhere, or that are just something where we misunderstood each other while agreeing. But it still just gets more involved. I'm personally still enjoying it though.

But spread over 4 machines it will only amount to a 1 MB cost on your machine. Thus buying 4 MB of RAM when 1 MB is enough is an unnecessary expense.
Doesn't matter, they're still incurring 4MB of service on their client base.

Have you considered what this would mean for regions where the console didn't sell well? There would be entire areas of every country where this sucks terribly even if done perfectly.

Have you also considered the security threat this would pose to all console owners and the networks the consoles are on, with every machine being actively available for input and output?

Car sharing is an example. You can buy a $50,000 car and use it yourself for 2 hours a day. Or 5 people can buy 5 pieces of a car (let's imagine that's technically possible), $10,000 each, and each use each other's pieces for 2 hours a day, totaling 10 hours a day of use, but the cost you paid is only $10,000 instead of $50,000 and you still got to drive where you wanted.
Two things:

1. The other customers are not buying my console with me, nor am I paying for their console. What you're talking about is an example where multiple people own the same object. So everyone has legitimate ownership to the car.
2. That is a scenario that individuals can opt out of. You're proposing one where the individuals cannot.

No. We do an exchange of services between consumers to use each other's hardware. Such a system is made possible by the corporation that designed the game to work this way, as opposed to how it works now.
No, the burden of having a working game is on the developer. It is not a burden on the consumer. Why do you think companies currently have server farms? Do you think they're being altruistic? No, it's part of their business. You're right that this is consumers relying on other consumers' machines to function as servers. But this is still the corporations outsourcing their responsibility to individuals' machines without compensation or consent. This would be like a hotdog stand charging you full price for a hotdog (just the dog, not buns or condiments) and expecting other hotdog customers to have spare buns to give you because they're not going to eat right now.

Your position that shifting this onto consumers is fine is surprisingly just like that.

You get compensated - you get to use their hardware. Or do you think they should let you use their hardware without compensation?
Says who? I specifically avoid single player games that require online connections. Diablo III? Didn't buy it, won't buy it. Why should my hardware be used if I'm using it for offline single player games, online multiplayer games (in which we'd all need to be connected to the server and sometimes already function as the host)?

It isn't whether or not other users should be able to use my hardware. It's whether or not a game developer's software should be allowed to use it.

Should consumers be able to use my pc's spare processing power just because I may one day want to use theirs?

It is true, distributed computing will always have a huge problem - humans. People who do not want to do the sharing and only want to use other people's services without contributing themselves. You can call them weakest links, egoists, whatever. They are the biggest problem and the reason why distributed computing has failed so far.
We may be using different definitions of efficiency. I consider lower costs, the need to manufacture less hardware, and more optimal usage of local hardware to be factors in efficiency.
No. Distributed computing has many other problems. The BIGGEST of which is that it is a much less efficient distribution of resources. Everything will take longer. It isn't a matter of writing better software or even just having better bandwidth. It's about distributed processing being a chain in which the weakest link breaks it. And that isn't just a people problem. Not at all.

Have you considered that the cost of servers is already wrapped into the cost of games? That we are already contributing toward servers? But the cost of games isn't going to come down because there's no server cost. Game companies will continue to hit that $60 mark or higher as long as they can. So we're literally talking about there being 0 benefit to the customer. No. We're talking about there being a negative impact to customers with no reduced price. Less efficient gaming at a cost to their bandwidth without any compensation. The ability to "use others' consoles" is not compensation. That's just pushing the cost to someone else again who may or may not be using it.

We already do. We got things like HD streaming, we got things like server-side calculations, and plenty of other things we were unable to do back in dial-up times. Thing is, differently from what I argued for hardware, we have far outrun the demands with how much we can provide now. 100 Mbps is slow for a fiber optic cable; we have the ability to give home users 1 Gbps now at economical cost. The industry would need a lot of catching up to do to make full use of such bandwidth.
HD streaming is nothing compared to 1 Gbps up/down. Even downloading games, which are generally our largest files at the moment, doesn't compare, as the download time would be in minutes, not the hours the average downloader requires. A 40GB file can take 11+ hours to download depending on the parties involved and the speed. At the average speed it is much simpler to drive to a local GameStop (or hopefully a locally owned game store) and pick it up there. You'll probably even get a better deal on the game than you would on the console makers' online stores, which are terrible at deals.

Also, what makes you think distributed processing won't be that new information being transmitted?
Because distributed processing is one of the worst possible ways to process real-time computations for gaming. It is a fundamentally less efficient method in nearly every conceivable way where gaming is concerned. That's what makes me think (aka know) that. It's like asking why we can't all take turns driving a large van instead of flying in a company-owned plane to travel 2,000 miles, only we pay the same amount for both.

You're acting like connecting to a server in your neighborhood would have some drastic advantage over a server farm. It won't. There's a reason why we can already play games like COD without lag; we've figured that model out. Let's say you're playing COD in a distributed computing setup. Here are the problems:

1. There is no promise that all of the individuals in your game would be from the surrounding area. This means that all of the machines involved (the client machines like the one you're using to play on and the server machines which would be the ones just involved for processing) may all have to communicate over similar distances like we do now. And if one of those participants lives in an area with fewer machines then that one person decides your game's latency. In the current setup, everyone is connected to the same central server that is processing the game. If one person's bandwidth is terrible then they are the ones that suffer, not everyone else. That's how it should be.
2. The amount of available processing can easily fluctuate in your area. Let's say the next Halo or COD or whatever comes out and it sells millions of copies on day one. In your scenario EVERY game that uses distributed processing would suffer for the sake of that one game; even that game itself would suffer under its own load, as the machines playing it would not be able to contribute the additional amounts required to play it and would only serve as a negative value. Currently, one gaming server going down does not impact other titles. If Halo 4 is tearing up the Halo servers (as it did, when it came out), what does it matter to the next Gears of War or whatever else?
3. Let's talk about the server machines - the machines that aren't being used by a gamer at the moment and so are being used for server processing. Let's say you have four machines involved here. Your console has to send out a signal to each of those four machines, those four machines have to then process the work immediately and then send the signal back. Your machine then has to combine and implement those computations. This is now and will always be inherently slower than sending your signal to one powerful server that can handle all of that work at once and hand it back. You're talking about one creation of the packets going (packets are the method in which bits of information are packaged and sent across the internet to the intended recipient. They contain control information as headers - intended recipient, failure detection, sender, packet sequence, any necessary handshake/password - plus the data. This is so brilliant because if one packet is ever lost the machine knows exactly which one and can request only that one, which is much faster than redownloading the whole thing. Being split into packets also means that the individual components are much easier for the network to manage), one line to send the information over, one machine doing all the work at once, and then one more creation of packets for the response returning. In the distributed computing example this would be four packets going, four packets coming, four different lines for the information to traverse both ways, four different computations, and four different results to collect and add together. If you honestly believe that this is more efficient for real-time gaming then you're very wrong. They are slower even if we manage to load balance perfectly and combine information perfectly (see the rough sketch below). Distributed processing is only good for larger ongoing projects like Folding@home where it is not as important to get information back and combined immediately and each work unit requires a significant amount of processing.
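To put rough numbers on that, here's a minimal sketch - every latency and compute time in it is invented for illustration, this is nobody's real netcode - showing why the slowest peer dictates your frame in a distributed setup while a central server only depends on your own link:

[code]
# Minimal sketch: distributed peers vs. one central server.
# All numbers are invented for illustration only.
import random

def central_server_roundtrip(my_latency_ms, server_compute_ms=5):
    # one request out, one powerful machine does all the work, one response back
    return 2 * my_latency_ms + server_compute_ms

def distributed_roundtrip(peer_latencies_ms, peer_compute_ms=20):
    # the work is split across peers, but the frame can't be assembled until
    # the SLOWEST peer has answered: latency out + compute + latency back
    return max(2 * lat + peer_compute_ms for lat in peer_latencies_ms)

random.seed(1)
my_latency = 30                                      # ms to a regional data centre
peers = [random.randint(20, 120) for _ in range(4)]  # ms to 4 random consoles

print("central server:", central_server_roundtrip(my_latency), "ms")
print("4 peers at", peers, "->", distributed_roundtrip(peers), "ms")
[/code]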

I meant actually simulating it.
You mean just a graphically correct simulation that looks like reality? We're far closer to that than you may think. Our capabilities may even be there but far from the consumer market. We should see some incredible advances along these lines in the next three years for the reasons I already described.

But will we only see what we're looking at? We already get a lot of info about the game world around us on the GUI. What if, not limited by monitor space, we could know a lot more about the world around us? That world would have to be simulated.
In what way? At any given time we can only look at so much information at once. The field will always be limited to that unless we evolve or are somehow augmented in ways beyond our comprehension at the moment. At which point you're talking about technology so advanced as to be considered magic at the moment. But yes, if magic happens then that will require a lot of pixie dust (aka processing, joke). That will only push the possibilities further down the road, not make them infinite.

I don't think we know how much a human can take in at one time. So far we have not tried to inject knowledge into our brains; we wouldn't know where to start.
Yep, if something that would currently be considered "magic" is created and becomes successfully implemented I'll adjust my responses accordingly. As is, let's assume or at least consider that a society advanced enough to readily transmit knowledge into a human brain would also be in possession of technology powerful enough to do so. Technology that should also be able to render simulations far more efficiently than current tech.

The point of processing 21 miles out is processing the life of the NPCs there (or other players if it's MP), so that NPCs on their regular routine would, say, go out of their house, enter their car, and drive towards an area that a player can see, in a routinely realistic fashion.
If we want to actually reach realism, we have to process it all in full detail, sadly. What we can save on is rendering - the creation of visible textures is not necessary. But processing still needs to happen, physics included, or else we will just see cars "spawn" and so on, except very far off screen. That's not realism.
And while the current gen sadly focused on the least-crappy-looking textures they could come up with, actual AI and physics are more demanding.
Processing the life of NPCs elsewhere in the world is already being done. Like Skyrim. It isn't the same as rendering them for the user. There is a huge gap in processing demands between just tracking objects and rendering them in real time. So let's say I give you your advancement idea if I can have mine.
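To make that tracking-versus-rendering gap concrete, here's a toy sketch - the radius, the NPC fields, and the update logic are all invented, and this is only loosely the idea behind cell-based open worlds, not Skyrim's actual code:

[code]
# Toy sketch: fully simulate only what is near the player, and just advance a
# cheap schedule for everything else. Names and numbers are invented.
from dataclasses import dataclass

@dataclass
class NPC:
    name: str
    distance_from_player: float  # arbitrary world units
    position: float = 0.0

def simulate_in_detail(npc, dt):
    # placeholder for the expensive per-frame work: physics, pathfinding,
    # animation, and ultimately rendering
    npc.position += 0.1 * dt

def update_world(npcs, dt=1.0, full_sim_radius=2.0):
    for npc in npcs:
        if npc.distance_from_player <= full_sim_radius:
            simulate_in_detail(npc, dt)   # expensive path, only a few NPCs
        else:
            npc.position += 0.1 * dt      # cheap path: "on the road to market",
                                          # tracked but never rendered

world = [NPC("guard", 1.0), NPC("farmer 20 miles out", 20.0)]
update_world(world)
[/code]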

Besides the point. I just wanted to point out that it took almost a decade for a company working exclusively with this CPU to even tackle part of its potential.
Did you not play Uncharted 2 or 3? Extremely impressive. The Last of Us just showed a larger mastery of the graphics along with far better writing (and I generally enjoyed Uncharted's writing for what it is). There is a big difference between the ps3's proprietary hardware that made development of it FAR more difficult to do and an x86 environment that every developer cuts their teeth on, so to speak.

Only as long as we are limited by our very limited eyeballs. As technology progresses we will undoubtedly find a way to enhance our input abilities.
Yes, a magical technology that has not yet been created could change things and sufficiently make us into a cyborg race. Let's limit our discussion to humans for the time being though. Conditions that currently exist.

(FYI, I'm just using the statement that any technology advanced enough appears like magic; I'm not calling your ideas impossible or anything, just not necessarily possible)

And of course there is a difference between wanting and needing. We don't need cars. We don't need TVs. Heck, we don't even need games. But we want them, and we have the means, so we make them.
Perhaps "need vs want" was too generous for me to use. Try need vs want something that would be nice vs want that is completely unnoticeable/indiscernable from other available options. Things like that are what we do for fun once or twice. They don't become the norm. That science department that made a stop action movie with individual atoms, that was one such example. It was fun, we did it, that's great. But it isn't suddenly becoming the next big things in movie because, aside from the incredible cost of making a full-length movie with a tunneling microscope, what's the point? People aren't going to add thousands (if not tens of thousands) of dollars to production costs or hours (if not days) of time of processing to make the quality of a movie advanced in a way that is entirely unnecessary and impossible to see the difference of a standard method. Again, why? Because we can? That kind of stuff is a pet project that we do because humans are passionate about things. That's not something a business makes a regular practice at no benefit to anyone and at a great cost to themselves.

But we do that, as a form of encrypting and hiding files.
An encrypted file is distinguishable from an unencrypted file in that one is more secure, which is a feature that people will pay good money for. At least between an SD and an HD file you can say that one looks better. But between the highest quality video that the human eye can perceive and any steps beyond that? No perceivable difference. AKA, no added value.

Or if you want to simulate it realistically, with realistic physics and AI working on it, like fluid dynamics in air.
We can already do this.


http://www.youtube.com/watch?v=KQMZ2mE-8HI (particle/smoke/wind/temperature sim)
http://www.youtube.com/watch?v=LGIttfk5M08 (tsunami simulation)
http://www.youtube.com/watch?v=U3acQ5dDKEs (the first water sim I could find that looked legit)
http://www.youtube.com/watch?v=oyjE5L4-1lQ (something to mention about realism)
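For a sense of why this kind of simulation gets expensive, here's a deliberately tiny sketch of a single diffusion step on a 2D grid - nothing like the solvers behind the videos above, and the grid size and rate are made up - the point is just that the cost scales with cells times steps:

[code]
# Tiny sketch of one diffusion step on a 2D grid. Not a real fluid solver;
# grid size and rate are arbitrary. Cost scales with cells * steps.
import numpy as np

def diffuse_step(field, rate=0.1):
    # each cell moves toward the average of its four neighbours
    up    = np.roll(field,  1, axis=0)
    down  = np.roll(field, -1, axis=0)
    left  = np.roll(field,  1, axis=1)
    right = np.roll(field, -1, axis=1)
    return field + rate * ((up + down + left + right) / 4.0 - field)

density = np.zeros((128, 128))
density[64, 64] = 1.0              # a puff of smoke in the middle
for _ in range(100):               # 100 steps over 128x128 = ~1.6M cell updates
    density = diffuse_step(density)
[/code]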

If we've learned anything from the double slit experiment it's that our own universe doesn't process atomic level actions in detail unless we're observing it...
Do tell more!
http://www.youtube.com/watch?v=Q1YqgPAtzho (the best and most concise explanation of the double slit experiment and its findings)

Shooting electrons randomly through a plate with a single vertical slit towards a receptor behind it behaves the same way shooting paintballs through a single slit in front of a wall behaves. A straight vertical line can be traced/seen where they hit the area behind the slit, because that's where they got through.

Water would travel through the single slit in a different way, providing another smaller wave on the other side, with the focus of the wave's intensity still being a single location if measured on the wall behind it.

Adding a second slit is where everything gets weird. A paintball (matter) creates two different lines as expected and water waves produce multiple different areas of focused intensity due to the two waves made on the other side of the slits where they got through bouncing off one another to create multiple smaller waves.

An electron, however, stops behaving like matter when the second slit is added and starts behaving like a wave. Multiple points of focus, each growing more gradual from the middle point, exactly like waves. Scientists initially thought the electrons were bouncing off each other like the water waves did, and so did the same experiment by shooting one electron at a time. But they got the same wave-like result, which amazed the heck out of them because it's behaving like a wave but for different (unknown) reasons. The math showed that it was going through either slit, both slits, or no slits all at the same time. An apparent impossibility that now has an entire area of science devoted to it.
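(For reference, those "multiple points of focus" follow the standard idealized textbook two-slit intensity pattern - this isn't taken from the video, just the usual formula:

$$I(x) \;\propto\; \cos^2\!\left(\frac{\pi d\, x}{\lambda L}\right)$$

where $d$ is the slit separation, $L$ the distance to the screen, $x$ the position on the screen, and $\lambda$ the wavelength - for electrons, the de Broglie wavelength.)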

So then they tried to observe it with a sensor near one of the slits to see which hole it really goes through, if any. And here's where things get all Hitchcock on us. Not just weird anymore, but batshit crazy. The electrons behaved as if they knew they were being watched, and the resulting pattern was two vertical lines like matter (paintballs), just because they were being observed... My joke here being that if we are being simulated, the act of observing them on the atomic level forced our software to allot more detail/processing to it.

This is why I sometimes jokingly tell my friends that we're in a virtual reality simulator that isn't entirely precise at the atomic level (the same joke I just told you that you asked me to clarify). I laugh, they laugh, and then I go home and drink until it's time to cry myself to sleep...

There you go, the reason that quantum physics is an amazing and mysterious place.

But to do that, we must raise the average hardware. More than it is being raised now.
We're improving at a steady rate. Every new console generation explodes out of the gate and then slows down at the end. This evens out well enough. I don't know how much we'd benefit from a drastically improved hardware since even with the previous console generation, game engines went through a couple different versions and didn't instantly become perfect. A drastic improvement would just prevent the minor bottlenecking we get at the end of the cycle. But I consider that to be a point in the market where we perfect everything we've learned so far in preparation for the next leap.

And that's... well, not bad... but it won't let you see the perspective of the average user.
It doesn't matter. There are simply too many configurations and hardware combinations. If you buy a Dell computer, what does that mean will be in your machine? It doesn't mean anything, is the answer. You could have any range of hardware making up any component. I'm sorry, but you're very wrong about this. Consoles have a standardized chipset and hardware configuration. Developers can push them in ways they can't push a more powerful machine. Until/unless a standard hardware configuration is created, like a super popular Steam Box (aka a console, technically), then games cannot be optimized for the PC in anything but basic architecture and OS, and the two main consoles moving to x86 takes away even that advantage from the PC.

What steam will do is going to be interesting to see, and Valve may just be popular enough to pull it off.
Their wording has me somewhat concerned. If they have partnered with several manufacturers then we'll be in the same boat unless Valve has specified the tiers of performance. If the tiers continue to be specified then this will matter.

It's nice that they exist, but they will never be on top, sadly.
Oh, they will and sometimes are. It's just rare. As consumers increase in social consumerism where it matters who we buy from and what we buy, we should start seeing these types getting rewarded for who they are and what they stand for.
 

Strazdas

Robots will replace your job
May 28, 2011
8,407
0
0
Lightknight said:
Strazdas said:
Our posts are getting looong :D
Yes. I've even been weeding out parts of our discussion that I don't feel are central and going anywhere, or that are just something where we misunderstood each other while agreeing. But it still just gets more involved. I'm personally still enjoying it though.
Yep, I'm weeding out the agreement parts as well, but it seems the text is endless, heh.

Doesn't matter, they're still incurring 4MB of service on their client base.

Have you considered what this would mean for regions where the console didn't sell well? There would be entire areas of every country where this sucks terribly even if done perfectly.

Have you also considered the security threat this would pose to all console owners and the networks the consoles are on, with every machine being actively available for input and output?
Yes, but currently we have a situation where the client base uses only a small part of its available calculating resources. There is room to incur that without extra costs.
If you limit the interconnectivity to a city - yes. But you don't have to. While in Europe, for example, you would see many more consoles, it also means far less demand for calculation, as fewer people use your resources; so in the end, as long as you have the minimum threshold you are good. And if you don't, well, you'll need more hardware then, I guess. Or try connecting to people outside the area, like another city. Considering that fiber optics gets me pings of 40 ms and less across Europe, connecting to another city wouldn't be a problem.
Security risks indeed exist here and they could be exploited. But so can any current machine, although I admit not to that high an extent. But if you want to be 100% secure, never turn it on; otherwise you are always at risk anyway.

Two things:

1. The other customers are not buying my console with me, nor am I paying for their console. What you're talking about is an example where multiple people own the same object. So everyone has legitimate ownership to the car.
2. That is a scenario that individuals can opt out of. You're proposing one where the individuals cannot.
A console is essentially a set of hardware. If you split that hardware among multiple people, you have the same situation. While this is quite a primitive way of thinking, let's say you have 1/5 of the RAM, the other guy has 1/5 too, and so on. Essentially you have a bunch of parts that make up 1/5 of a car, but only if the 5 of you come together to use it.
You can opt out and buy a PC tower, a server, or just a bunch of consoles. The point is that 1/5th of a console would also cost 1/5th (let's imagine companies are not going to overprice it, which of course they will).
And of course you will have some wasted processing power for communication, but that is still paying 1/5th instead of 1/1, even if you actually need 6 people to do it. Together you'll have 6/5, but alone you'll only pay 1/5.
While right now we all pay 5/5, and only use 1/5.
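Here's that arithmetic as a back-of-the-envelope sketch; the $500 price and the "6 people to cover the overhead" are just this thread's invented numbers, not real console economics:

[code]
# Back-of-the-envelope version of the sharing arithmetic above.
# The $500 price and "6 people to cover overhead" are invented numbers.
full_console = 500           # dollars for one whole console
share = full_console / 5     # each person buys "1/5 of a console" = 100.0

cost_alone = full_console    # buying alone: "pay 5/5, use 1/5"

# pooling: 6 people each chip in 1/5, so the group owns 6/5 of a console's
# worth of hardware to cover the communication/coordination overhead
group_hardware = 6 * share   # 600.0, the "6/5" above
cost_shared = share          # still only 100.0 per person

print(cost_alone, cost_shared, group_hardware)   # 500 100.0 600.0
[/code]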

No, the burden of having a working game is on the developer. It is not a burden on the consumer.
So it is the developer's responsibility that you have a console? Should they buy you one?
I don't see what kind of compensation you are wanting here. You will be able to play your game without having to pay $500 for a console. Isn't that enough?
As for your hotdog example: if you will give me your buns when I want to eat, sure.

Says who? I specifically avoid single player games that require online connections. Diablo III? Didn't buy it, won't buy it. Why should my hardware be used if I'm using it for offline single player games, online multiplayer games (in which we'd all need to be connected to the server and sometimes already function as the host)?

It isn't whether or not other users should be able to use my hardware. It's whether or not a game developer's software should be allowed to use it.

Should consumers be able to use my pc's spare processing power just because I may one day want to use theirs?
It seems that we are still talking about different things. Your offline single-player games would not be able to run on your hardware alone, because your hardware is not powerful enough to run the game by itself, so you would want to use others'. And they would use yours for the very same reason. It has nothing to do with a developer.

No. Distributed computing has many other problems. The BIGGEST of which is that it is a much less efficient distribution of resources. Everything will take longer. It isn't a matter of writing better software or even just having better bandwidth. It's about distributed processing being a chain in which the weakest link breaks it. And that isn't just a people problem. Not at all.

Have you considered that the cost of servers is already wrapped into the cost of games? That we are already contributing toward servers? But the cost of games isn't going to come down because there's no server cost. Game companies will continue to hit that $60 mark or higher as long as they can. So we're literally talking about there being 0 benefit to the customer. No. We're talking about there being a negative impact to customers with no reduced price. Less efficient gaming at a cost to their bandwidth without any compensation. The ability to "use others' consoles" is not compensation. That's just pushing the cost to someone else again who may or may not be using it.
But that's not a fault of the system, it's the fault of a greedy publisher. By that logic doing anything will give us 0 benefit.
HD streaming is nothing compared to 1 Gbps up/down. Even downloading games, which are generally our largest files at the moment, doesn't compare, as the download time would be in minutes, not the hours the average downloader requires. A 40GB file can take 11+ hours to download depending on the parties involved and the speed. At the average speed it is much simpler to drive to a local GameStop (or hopefully a locally owned game store) and pick it up there. You'll probably even get a better deal on the game than you would on the console makers' online stores, which are terrible at deals.
Good quality full HD streaming can go as high as 100 Mbps in bandwidth. Now, I've really never seen someone do that yet, because, well, most people wouldn't be able to run it. But streaming in high quality is demanding on the internet.
Our largest files at the moment are Blu-ray movies, full quality ones spanning 30-50 GB; games seem to be 8-16 GB on average now.
I am yet to see a 40 GB game - I believe GTA 5 is the largest at 32 GB now? But regardless, I could download 40 GB in 53 minutes if we take my connection's theoretical maximum speed. Realistically it would be closer to 2 hours. In 2 hours I could watch a movie while the thing is downloading in the background. Or I could go out, drive to a store, incur extra costs for gas, get wet in the rain, argue with the shop owner for half an hour, pay more, drive back, and pray to God the disc works. Which one sounds more appealing?
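For anyone checking the arithmetic (the link speeds are the ones mentioned in this thread, and protocol overhead is ignored):

[code]
# Download-time arithmetic for the speeds mentioned in this thread.
# Decimal units (1 GB = 8000 megabits), no protocol overhead.
def download_minutes(size_gb, link_mbps):
    megabits = size_gb * 8 * 1000
    return megabits / link_mbps / 60

print(round(download_minutes(40, 100)))    # ~53 min on a 100 Mbps line
print(round(download_minutes(40, 1000)))   # ~5 min on a 1 Gbps line
print(round(download_minutes(40, 10)))     # ~533 min (~9 h) on a 10 Mbps line
[/code]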
But why are we even discussing this?
Here are the problems:
These are valid concerns and I can see your point here. But you didn't need to write out what packets are - I've dealt with them before :)

You mean just a graphically correct simulation that looks like reality? We're far closer to that than you may think. Our capabilities may even be there but far from the consumer market. We should see some incredible advances along these lines in the next three years for the reasons I already described.
I mean a graphically and physically correct simulation of reality that can be depicted on screen.

In what way? At any given time we can only look at so much information at once. The field will always be limited to that unless we evolve or are somehow augmented in ways beyond our comprehension at the moment. At which point you're talking about technology so advanced as to be considered magic at the moment. But yes, if magic happens then that will require a lot of pixie dust (aka processing, joke). That will only push the possibilities further down the road, not make them infinite.
Information injection. I don't know if there is a technical term, but basically you get knowledge about something injected into your brain and you just "know" it, sort of like a memory but not really. Yes, augmentation would very likely be required for this - that or very good knowledge of brainwaves and how to control them. And yes, I can see how some could consider that "magic" now.

Yep, if something that would currently be considered "magic" is created and becomes successfully implemented I'll adjust my responses accordingly. As is, let's assume or at least consider that a society advanced enough to readily transmit knowledge into a human brain would also be in possession of technology powerful enough to do so. Technology that should also be able to render simulations far more efficiently than current tech.
Fair enough, technologies often develop together. That still does not mean that what we see is our limit of observation by any means. There were experiments to actually use taste receptors to transfer information to people, and it wasn't a complete failure, though that was mainly planned for handicapped people.

Processing the life of NPCs elsewhere in the world is already being done. Like Skyrim. It isn't the same as rendering them for the user. There is a huge gap in processing demands between just tracking objects and rendering them in real time. So let's say I give you your advancement idea if I can have mine.
But how realistic is the processing? Stand around the market the whole day, and if quest X is finished, teleport to another location?
Of course we can have our ideas; that's all they are - ideas. And both of us could easily be wrong.

Did you not play Uncharted 2 or 3? Extremely impressive. The Last of Us just showed a larger mastery of the graphics along with far better writing (and I generally enjoyed Uncharted's writing for what it is). There is a big difference between the ps3's proprietary hardware that made development of it FAR more difficult to do and an x86 environment that every developer cuts their teeth on, so to speak.
Not sure if I mentioned this before, but due to money and time constraints (there's so little time, so many games) I currently limit myself to PC games, and the Uncharted games are PS3 exclusives.
But I have seen other people play them, and I have seen videos. Impressive compared to what we had previously? Yes. Impressive compared to what I consider a realistic portrayal? No. The Last of Us and Uncharted were made by the same guys and I thought I kind of covered them already, so yeah. Yes, we are advancing, but we aren't there yet.

Yes, a magical technology that has not yet been created could change things and sufficiently make us into a cyborg race. Let's limit our discussion to humans for the time being though. Conditions that currently exist.

(FYI, I'm just using the statement that any technology advanced enough appears like magic; I'm not calling your ideas impossible or anything, just not necessarily possible)
Yes, I know that Clarke quote too :) no problem there. After all, the "magicians" were mere alchemists who knew more than the rest.
And yes, if we take this extreme limitation that is our lump of meat, then the hardware will catch up in the not-that-far future - not in a couple of years, and by no means in just a couple of console generations yet.

Perhaps "need vs want" was too generous for me to use. Try need vs want something that would be nice vs want that is completely unnoticeable/indiscernable from other available options.
I don't understand.
I know people who consider a new iPhone "nice" even if it does pretty much nothing differently (such versions exist), and they actually went out and bought one, because "it's nice." They still use it exactly how they used their previous one. So it can be both wanting something nice and it being indiscernible.

Things like that are what we do for fun once or twice. They don't become the norm.
Erm, motion controls?

Again, why? Because we can? That kind of stuff is a pet project that we do because humans are passionate about things. That's not something a business makes a regular practice at no benefit to anyone and at a great cost to themselves.
Computers were a pet project. The Internet was a military pet project. The mouse was a university student's pet project that Apple bought (and Microsoft stole :D ). Pet projects matter - not all of them, but there are those that become the norm.

An encrypted file is distinguishable from an unencrypted file in that one is more secure, which is a feature that people will pay good money for. At least between an SD and an HD file you can say that one looks better. But between the highest quality video that the human eye can perceive and any steps beyond that? No perceivable difference. AKA, no added value.
I meant it in the sense that encryption hides the file by making it look identical to a normal file. It was a low jab, I admit.

We can already do this.
Not really. The first video of smoke effects: yes, it looks nice, but not realistic, and definitely not real time. Heck, in the Nvidia thread there was a video posted that was shorter and took 11 hours to render, for less than a couple of seconds of similar stuff, on dual GeForce Titans.
Second video: while that looks somewhat okay for basic modeling, the particles are huge, I see some flying off kilometers high (at the video's scale), and the whole thing looks more like an old game than anything. And no textures.
Third video: yeah, that looks a bit closer. The bounciness of the water is still way too high, unless this was meant to be filmed during an earthquake, but it does look much better. The video owner said in the comments that 8 processors and dual graphics cards (doesn't say which) took over 1 minute per frame. That means it's still about 1500 times too slow for real-time simulation of a small-scale fluid (the quick arithmetic is below), and that's the only thing simulated in it - there is no rain, atmosphere, wind, plenty of factors that would increase this at least 50 times more.
Noodles. Gotta love them rubber noodles. The only thing this simulated correctly is the fact that noodles never go where you try to put them and people throw a hissy fit and swing dishes. Hehe.
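That "about 1500 times" figure, spelled out (assuming a 25 fps real-time target; the 60 seconds per frame is the number from the video's comments):

[code]
# The "1500x too slow" figure, assuming a 25 fps real-time target.
offline_seconds_per_frame = 60.0      # ~1 minute per frame on that rig
realtime_budget = 1.0 / 25            # 0.04 s per frame at 25 fps
print(offline_seconds_per_frame / realtime_budget)   # 1500.0
[/code]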

Why yes, I'm very demanding, why do you ask?

http://www.youtube.com/watch?v=Q1YqgPAtzho (the best and most concise explanation of the double slit experiment and its findings)



There you go, the reason that quantum physics is an amazing and mysterious place.
This is interesting and I did not expect it initially. However, after thinking about what I saw, I happen to come to the same conclusion as one of the commenters: "Its not the observer changing reality. ITS THE DETECTION MACHINE interfering with the electron."
In order to get an identical result, the detection machine would have to be completely identical at both slits. This is practically impossible, therefore this experiment is practically impossible.


We're improving at a steady rate. Every new console generation explodes out of the gate and then slows down at the end. This evens out well enough. I don't know how much we'd benefit from a drastically improved hardware since even with the previous console generation, game engines went through a couple different versions and didn't instantly become perfect. A drastic improvement would just prevent the minor bottlenecking we get at the end of the cycle. But I consider that to be a point in the market where we perfect everything we've learned so far in preparation for the next leap.
Indeed we are. But I don't agree that the bottlenecking at the end of the cycle is minor. The consoles we are getting as the "next leap forward" are barely beating the average "PC." If we had console cycles that upgraded consoles faster, let's say every 5 years, we would be able to avoid artificial bottlenecking, which if anything slows things down.

It doesn't matter. There are simply too many configurations and hardware combinations. If you buy a Dell computer, what does that mean will be in your machine? It doesn't mean anything, is the answer. You could have any range of hardware making up any component. I'm sorry, but you're very wrong about this. Consoles have a standardized chipset and hardware configuration. Developers can push them in ways they can't push a more powerful machine. Until/unless a standard hardware configuration is created, like a super popular Steam Box (aka a console, technically), then games cannot be optimized for the PC in anything but basic architecture and OS, and the two main consoles moving to x86 takes away even that advantage from the PC.
If you buy a Dell computer, that means you get a computer where all the components are compatible with each other. The rest is up to the software API. Programmers code to a software API anyway, not directly to the hardware; it's that software API that has to deal with the different hardware, and so far both DirectX and OpenGL manage (and I guess whatever the PS3 uses, if it's different from OpenGL - I don't remember). And now with consoles moving to being small PCs, this will become even more standardized. Yes, you can optimize for specific hardware, but even with a software API written expressly to do that, like on the Xbox, we only see 10-15% optimization at best - unless the game is very, very badly ported, but that's not really the hardware's fault. If anything, since the new consoles are going to write to DirectX and OpenGL (Xbox and PS4 respectively), they are basically writing to the same software APIs as PCs anyway; the optimization gap will become minuscule (a rough sketch of the API point is below).
And let's be honest, save for some exceptions like my friend with an Asus-manufactured GPU (yes, really), most of the "important" hardware has become standard. You either get Intel or AMD, which use the same drivers and architecture across all of their generations (unless we're talking really old ones), and you've got Nvidia and Radeon (AMD renamed them AMD of course, but they are Radeons, damnit), which also use standardized drivers. Heck, my 8600 runs on the exact same drivers as the new cards, and that one was made in 2007!
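A toy illustration of the "code against the API, not the hardware" point - the class and function names here are invented, and this is not how DirectX or OpenGL are structured internally:

[code]
# Toy illustration: the game calls one interface, and the vendor driver behind
# it deals with whatever chip is actually installed. Names are invented.
from abc import ABC, abstractmethod

class GraphicsAPI(ABC):
    @abstractmethod
    def draw_triangles(self, vertices): ...

class VendorADriver(GraphicsAPI):
    def draw_triangles(self, vertices):
        print(f"vendor A path: submitting {len(vertices)} vertices")

class VendorBDriver(GraphicsAPI):
    def draw_triangles(self, vertices):
        print(f"vendor B path: submitting {len(vertices)} vertices")

def render_frame(api: GraphicsAPI):
    # game code: identical no matter which card is installed
    api.draw_triangles([(0, 0), (1, 0), (0, 1)])

render_frame(VendorADriver())
render_frame(VendorBDriver())
[/code]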

Their wording has me somewhat concerned. If they have partnered with several manufacturers then we'll be in the same boat unless Valve has specified the tiers of performance. If the tiers continue to be specified then this will matter.
Valve being vague is nothing new. We find out more from leaks and customer support slip-ups than from Valve themselves most of the time. Although yes, if it is what most people expect it to be, it will be a PC by another name.

Oh, they will and sometimes are. It's just rare. As consumers increase in social consumerism where it matters who we buy from and what we buy, we should start seeing these types getting rewarded for who they are and what they stand for.
Can you name one world renown international company that is like that?
Social consumerism, been there. you know why they do it? because bigger profits. not out of any altruistic goals or some such. even that socialist bakery where EVERY worker is equal part owner in interviews said that after they made this decision their profits increased because everyone wanted more and thus worked harder.
We should, but we won't, because for all our pondering and internet activism we will just end up buying the cheapest option in the evening. I mean, sure, I don't buy from Ubisoft, for example, out of a moral stance I took against them; I haven't bought from them in years. How many people did you see bending over for ME3 on Origin, a known piece of spyware, though?
 

Lightknight

Mugwamp Supreme
Nov 26, 2008
4,860
0
0
Strazdas said:
Yes, but currently we've got a situation where the client base uses only a small part of its available computing resources. There is spare capacity to tap without extra cost.
Yes, and my blender isn't getting used when I'm not using it. My shower either. I'm not seeing why that means there's some imperative to force consumers to let people share their property without consent or compensation.

If you limit the interconnectivity to a city, yes, but you don't have to. While, yes, in Europe for example you would see many more consoles, it also means far less demand for calculation as fewer people use your resources, so in the end, as long as you've got the minimum threshold, you are good. And if you don't, well, you'll need more hardware then, I guess, or try connecting to people outside the area, like another city. Considering that fiber optics gets me pings of 40 ms and less across Europe, connecting to another city wouldn't be a problem.
The only advantage you were touting was the reduced time it takes to communicate with servers close to you (which is a non-issue, considering that we don't currently have trouble communicating in real time with servers across the country).
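For context on what those ping numbers mean for real-time games, here is a back-of-the-envelope sketch. It assumes a 60 fps frame budget and one network round trip per offloaded result; the only figures used are common defaults plus the 40 ms ping quoted above.

```python
# Back-of-the-envelope latency budget for offloading work to a remote machine,
# assuming a 60 fps game and the 40 ms round-trip ping quoted above.
FRAME_BUDGET_MS = 1000 / 60   # ~16.7 ms available to produce each frame
ROUND_TRIP_MS = 40            # the "40 ms across Europe" figure from the post

frames_spanned = ROUND_TRIP_MS / FRAME_BUDGET_MS
print(f"Frame budget: {FRAME_BUDGET_MS:.1f} ms")
print(f"A 40 ms round trip spans about {frames_spanned:.1f} frames")
# Any result that must come back over the network arrives roughly 2-3 frames
# late, which is why only latency-tolerant work is a good offloading candidate.
```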

Security risks do indeed exist here and could be exploited, but so can any current machine, although I admit not to that extent. But if you want to be 100% secure, never turn it on; otherwise you are always at risk anyway.
Listen, I am a security expert. This happens to be one of my areas of expertise. Yes, there is always a risk, but it would be nothing like turning every machine into a server that communicates with other consoles owned by what could be very sketchy people. As it is now, consoles go through a central server that has more than appropriate security, and the user-owned machines never end up talking directly. Your example of distributed computing cannot work over long distances. If they continue to talk through a central server that could be several states away, then you'd be multiplying that distance severalfold. So you're talking about either compounding the security threat by making every console in a region a member of the same domain (equivalent), or taking away any advantage of doing this that you thought you had. If someone with malicious intentions gets on your network, that's usually the end of the story unless you catch them soon enough. Most firms and corporations I work with don't have any kind of internal firewalls. It's all about keeping people out of the house rather than locking the individual rooms. Here, you're giving people a key to the house. It may only open a small room facing a locked door, but at that point you have a manageable challenge if you're a black hat.

A console is essentially a set of hardware. If you split that hardware among multiple people, you have the same situation. While this is quite a primitive way of thinking about it, let's say you have 1/5 of the RAM...snip
Again, your scenario is a situation where people are using someone else's property without their consent and without compensation. Not just people, but the company making the software. They are using console owners' machines instead of their own servers, and that's basically just stealing from consumers. It may not be very much. It may just be a little wear and tear on the device and some bandwidth. But it's something, and that's not only greedy as hell but unethical. Do you remember why a lot of people bought the PS3 originally? It was a really cheap Blu-ray player. The more these consoles become home entertainment systems, the more you'll see people who aren't even gamers owning these devices. This is not an equivalent exchange.

So it is the developer's responsibility that you have a console? Should they buy you one?
What does that have to do with anything? The job of a developer is to make sure that if you put their game in your console, it works. If they require it to use a server, it is their job to make sure that a server exists, not yours. So yeah, if they're going to use MY property as a server for their games, which I may not even ever play, then they should buy it for me or defray the cost.

I don't see what kind of compensation you are wanting here. You will be able to play your game without having to pay 500 dollars for a console. Isn't that enough?
Oh, you think consoles would be cheaper if set up this way? NO. This does nothing to defray console costs. If anything, it would make them more expensive. The consoles would already have to be fairly powerful if they're going to help power games that a single console can't process by itself.

As for your hot dog example: if you will give me your buns when I want to eat it, sure.
So you'd be OK with the hot dog stand charging you full price for a hot dog with the expectation that it's the responsibility of the people eating there to provide the bun that makes the dog into a hot dog? We have a fundamentally different perspective. Maybe this is just the College of Business in me, but that's a way to lose customers and potentially incur consumer-rights lawsuits/investigations. It is not the consumer's job to make a corporation's product work. It is the corporation/vendor's job to do so.

It seems that we are still talking about different things. Your offline single-player games would not be able to run on your hardware because your hardware alone is not powerful enough to run the game, so you would want to use others', and they would use yours for the very same reason. It has nothing to do with the developer.
The developer is responsible for making a game that works. That means either developing a game that the console is capable of handling or providing the additional resources via their own servers. Make no mistake, companies like EA will use server-side processing to calculate the most mundane yet necessary components of their games. Their biggest error with SimCity was not making the server actually process anything, so they got caught in a lie when they claimed the single-player mode needed the server to perform "calculations".

So you're going to have companies that require a server because they're greedy, and yet refuse to provide the resources for the same reason, because this alternative exists. Eventually, with enough games, we could easily see a point where the other machines aren't enough to process the demands of the gamers. Think about just a few games that demand five times as many resources as other games; compare The Last of Us to Journey, for example, in resource demands.

But that's not a fault of the system but a fault of a greedy publisher. By that logic, doing anything will give us 0 benefit.
All publishers are greedy, friend. They are companies trying to make money. But yes, them doing anything to help the consumer for free, other than us handing them money for their product, will give us 0 benefit. That is correct. In fact, I was being too generous with 0 benefit. What I should have said is that it's a net negative: less service for the same cost. Not just less, as stated multiple times, but distributed processing is a terrible method for real-time gaming.

Good quality full-HD streaming can go as high as 100 Mbps in bandwidth. Now, I really haven't seen someone do that yet, because, well, most people wouldn't be able to run it. But streaming in high quality is demanding on the internet connection.
Our largest files at the moment are Blu-ray movies, full quality ones spanning 30-50GB. Games seem to be 8-16GB on average now.
I'm not sure about that. Avatar on Blu-ray (full quality) was 8GB.

The average is expanding. As stated, Uncharted 3 was 43.5GB to download. Interestingly enough, I had to delete Avatar to get it installed. The Last of Us was 25GB. I've heard tell that God of War 3 was 40GB. Currently, games appear to be limited to the 50GB available on dual-layer Blu-ray discs. I do not know if the next generation of consoles will be able to read more layers or more densely packed ones. Or perhaps they'll start combining digital download portions with the games even if you purchase the disc. Not sure on that, but the size demands have certainly shot up in the past few years.

I have yet to see a 40 GB game; I believe GTA5 is the largest at 32GB now? But regardless, I could download 40 GB in 53 minutes if we take the theoretical maximum speed. Realistically it would be closer to 2 hours. In 2 hours I could watch a movie while the thing downloads in the background. Or I could go out, drive to a store, incur extra costs for gas, get wet in the rain, argue with the shop owner for half an hour, pay more, drive back, and pray to God the disc works. Which one sounds more appealing?
But why are we even discussing this?
Because, once again, you have 100 Mbps up/down. That is 10 times the average rate. It's fantastic that it takes you only 2 hours to download something that size. The average person, though? 20 hours.
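For anyone checking the arithmetic behind those figures, here is a minimal sketch. The only inputs are the file size and line rate from the posts above; the 50% "efficiency" factor is an assumption standing in for real-world overhead, not a measured number.

```python
# Rough download-time math for the 40 GB example above. The inputs are just
# the file size and the line rate; the 0.5 efficiency factor is an assumed
# stand-in for real-world overhead, not a measured number.
def download_hours(size_gb, line_rate_mbps, efficiency=1.0):
    bits = size_gb * 8 * 1000**3                              # decimal GB -> bits
    seconds = bits / (line_rate_mbps * 1_000_000 * efficiency)
    return seconds / 3600

print(f"40 GB at 100 Mbps, theoretical: {download_hours(40, 100):.1f} h")       # ~0.9 h (~53 min)
print(f"40 GB at 100 Mbps, ~50% real:   {download_hours(40, 100, 0.5):.1f} h")  # ~1.8 h
print(f"40 GB at 10 Mbps,  ~50% real:   {download_hours(40, 10, 0.5):.1f} h")   # ~17.8 h
```

The theoretical figure matches the "53 minutes" above, and the degraded real-world throughput is what pushes the average connection toward that "20 hours" ballpark.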

I shit you not, just for this conversation I went ahead and looked up prices for internet from Comcast in my area. 3 Mbps down/768 Kbps up will run you $20, and that's only for the first six months. $30 after that.

How about the fastest speed they offer here? 50 Mbps down/10 Mbps up. That's $40/month for the first six months, and then it goes to $60/month. Remember, only a couple of years ago they were trying to charge me over $60 for 15 Mbps down and who knows what up. So at least the new company in town that I switched to is bruising them up a bit.

These are valid concerns and I can see your point here. But you didn't need to explain what packets are; I've dealt with them before :)
I have no idea what your technical background is or the background of anyone who may be perusing this thread and our posts (perhaps as a form of self-punishment). Sorry if that sounded like I was talking down to you. I was just explaining why this stuff takes time by necessity rather than being something that tech will eventually hammer out.

And yes, I can see how some could consider that "magic" now.
So then why are we including this as part of the debate? I could just as easily talk about the day we're converted to energy as a species and able to see in all directions at once and it would mean just as much to this conversation (aka, nothing presently).

Fair enough, technologies often develop together. That still does not mean that what we see is our limit of observation by any means. There were experiments to actually use taste receptors to transfer information to people, and it wasn't a complete failure, though that was mainly intended for handicapped people.
That's still a limited amount of human observation. Not only do we have limited sensory input, but we can only focus on so many things at one time. In fact, aside from some very small things, there is good evidence that humans can only focus on one thing at a time. We can just transition between topics rapidly (aka, no such thing as true multitasking so much as project juggling).

But how realistic is the processing? Stand around the market the whole day, and if quest X is finished, teleport to another location?
In Skyrim, the individuals would actually travel along a road to get to another place. They could even potentially be attacked and killed on the way. If you fast-traveled to where they were on the map, they would be long gone, because you also took time to get there. There are plenty of other such timers, and these things are only getting better. Again, keeping track of characters and progress off screen is entirely possible and in no way requires as much processing as rendering them visually in real time.
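As a minimal illustration of why off-screen tracking is cheap, here is a hypothetical sketch: instead of simulating the traveller every frame, you store when they set off and derive where they are only when the player could actually observe them. The route, speed, and class names are invented; this is not how Skyrim's engine actually works.

```python
# Minimal sketch of cheap off-screen tracking: instead of moving an NPC every
# frame, record when they set off and derive their position only when the
# player could actually observe them. Route, speed, and names are invented.
from dataclasses import dataclass


@dataclass
class TravellingNPC:
    route_length_km: float
    speed_kmh: float
    departed_at_h: float  # in-game hour the NPC set off

    def progress_at(self, now_h):
        """Fraction of the route covered at in-game hour `now_h` (0.0 to 1.0)."""
        travelled = (now_h - self.departed_at_h) * self.speed_kmh
        return min(max(travelled / self.route_length_km, 0.0), 1.0)


courier = TravellingNPC(route_length_km=12.0, speed_kmh=4.0, departed_at_h=8.0)
for hour in (9.0, 10.5, 12.0):
    print(f"At hour {hour:.1f} the courier is {courier.progress_at(hour):.0%} of the way there")
# No per-frame work is spent on the courier; their state is only materialised
# when the player arrives somewhere they could meet.
```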

Not sure if I mentioned this before, but due to monetary and time constraints (so little time, so many games) I currently limit myself to PC games, and the Uncharted games are PS3 exclusives.
Hmm, I assumed your knowledge of The Last of Us implied PS3 ownership. I own the PS3, 360, and a powerful PC. If strapped for cash, the PC by far gives the most access to the widest range of affordable games. So kudos for going that route.

And yes, if we take this extreme limitation that is our lump of meat, then the hardware will catch up in the not-too-distant future. Not in a couple of years, but by no means a couple of console generations away either.
Then we are largely in agreement here. Barring some huge and sweeping technology that the average person acquires, which impacts both our ability to acquire information and our ability to process it dynamically, the bar is set relatively low to exceed processing demands for video games. However, as you agreed, a society technologically advanced enough to directly interface with the human body in that way may already have made technological advances in computing that would exceed the additional demands or make that higher bar far more reachable.

I don't understand.
I know people who consider the new iPhone "nice" even if it does pretty much nothing differently (such versions exist).
Doing the same thing more quickly/easily is worth money to people. Two images that look exactly the same are not. Again, 0 discernible difference is not a value.

Erm, motion controls?
? They are a different way for the user to interface with the game. When I say we do some things just for fun and never again, I'm not talking about all things fun. I'm talking about the arduous and unnecessary. People will achieve those kinds of goals just to do it, but that doesn't mean it'll ever be widespread. You can have one perfectly highest-def movie at 50GB (number pulled out of my ass), or you can have a higher-def version, completely indistinguishable to the human eye, that is 70GB+ and costs tens of thousands in additional equipment, computing time, and editing time to make. It gives no benefit because the human eye can't tell the difference in this scenario. So... why the hell do you think companies would start to do this? Who benefits from commercializing it?

Computers were a pet project. The internet was a military pet project. The mouse was a university student's pet project that Apple bought (and Microsoft stole :D ). Pet projects matter. Not all of them, but some of them become the norm.
But all those things made a difference. We're talking about a format that is only more tedious to make, transfer, and render without posing any benefit. Why would that become mainstream?

I meant it in the sense that encryption hides the file by making it look identical to a normal file. It was a low jab, I admit.
? Encryption does not make the file look identical to a normal file. It jumbles the contents via an algorithm that can only be reversed with the decryption key. When you open the file as a user (after it's decrypted), it looks the same, but when you look at the file's raw bytes without decrypting it, you'll find something very different. Jab or not, it wasn't correct.
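A toy illustration of that point, using a one-time-pad XOR chosen only because it runs with the Python standard library; it is not how real file encryption is implemented:

```python
# Toy demonstration (a one-time-pad XOR, NOT a real-world cipher) of the point
# above: the encrypted bytes look nothing like the original contents, and
# only applying the key recovers them.
import os

plaintext = b"savegame: player_level=42, gold=1337"
key = os.urandom(len(plaintext))                       # random key, same length

ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))

print("plaintext :", plaintext)
print("ciphertext:", ciphertext.hex())                 # looks like random noise
print("recovered :", recovered)
assert recovered == plaintext
```

The hex dump of the ciphertext is indistinguishable from random noise, which is exactly why an encrypted file does not "look like" a normal one at the byte level.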

Why yes, I'm very demanding, why do you ask?
Those were a series of videos that took me a few seconds to find. No telling where we are since then. Either way, I have made no claim that we've arrived. Only that we can and DO simulate real physical scenarios. The trick is just catching up with that technologically.

This is interesting, and I did not expect it initially. However, after thinking about what I saw, I happened to come to the same conclusion as one of the commenters: "Its not the observer changing reality. ITS THE DETECTION MACHINE interfering with the electron."
Why would a detection machine at one slit so drastically change the results? That's the million-dollar question. You can set up the experiment in ways that don't require direct interaction, such as sensors that don't require firing photons at the electrons to observe them. People have done this with a fiber optic disk that measures them without deflecting them.

So, why do the electrons behave as particles while observed and as waves when not? It's easy to say that the observation method is interfering; that's obviously true given the varying results. The question is why and how.

The consoles we are getting as the "next leap forward" are barely beating the average "PC".
That is 100% wrong. Three things:

1. This generation, the average PC still has 4GB of RAM with a relatively weak CPU and video card. And that's the average PC still being sold, so we're looking at that being true for some time to come. Double the power, even on paper, isn't "barely beating the average".
2. Consoles can be optimized in a way that PCs cannot be. This makes the comparison on paper meaningless except to establish a starting point from which the optimization takes place. We've already discussed this, though.
3. Finally and most importantly, it doesn't matter what the average PC is. Gaming development advances are bottlenecked by consoles, not PCs. AAA developers base their game on the weakest link. The PC version of the game is given a nicer look, but under the hood it's still made for the console. So consoles that are 10x their predecessors, which were themselves already looking very good, are a tremendous leap forward, especially in an area where our advancements in PCs appear to be slowing down.

Though they did just announce the successful use of nanotubes in processor construction. It could be huge where smaller and smaller processing is concerned.

If you buy a Dell computer, that means you get a computer where all components are compatible with each other.
Compatibility doesn't matter. They still can't be optimized for by developers. With a console, they know exactly what the CPU, GPU, and RAM are like. They have dev kits specifically made for the consoles and do testing in each area. So they can code in ways that specifically call on the RAM, CPU, or GPU when that's the best one to use, and in the way that is best to use it. In a PC, even the brand and frequency of the RAM aren't a given. Different video cards have different drivers and can be optimized in different ways. As such, if you coded for only one type of video card or CPU, the game would be significantly less capable on other machines, if it's even playable at all.

I'm not sure why you're disagreeing with me on this point. This is one of the most widely known advantages of the console.
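To show what "not a given" looks like in practice, here is a hypothetical Python sketch of the kind of runtime detection a PC build has to do, where a console title could simply hard-code one known configuration. The tier thresholds, setting names, and detected values are all invented for the example.

```python
# Hypothetical illustration of why PC builds can't assume fixed hardware:
# the game probes what it's running on and picks a settings tier, instead of
# hard-coding to one known CPU/GPU/RAM configuration the way a console title can.
# The tier thresholds and setting names are invented for the example.
import os


def pick_settings_tier(ram_gb, gpu_vram_gb, cpu_threads):
    if ram_gb >= 16 and gpu_vram_gb >= 4 and cpu_threads >= 8:
        return {"textures": "ultra", "shadows": "high", "draw_distance": 1.0}
    if ram_gb >= 8 and gpu_vram_gb >= 2:
        return {"textures": "high", "shadows": "medium", "draw_distance": 0.7}
    return {"textures": "low", "shadows": "off", "draw_distance": 0.4}


# In a real engine these values would come from platform queries (DXGI,
# OpenGL extensions, etc.); here they are just example numbers.
detected = {"ram_gb": 8, "gpu_vram_gb": 2, "cpu_threads": os.cpu_count() or 4}
print(pick_settings_tier(detected["ram_gb"], detected["gpu_vram_gb"], detected["cpu_threads"]))
```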

Can you name one world-renowned international company that is like that?
Most of the stuff that Elon Musk has his hands in goes that way. Bill Gates has taken most of the profit he's made from Microsoft and converted it into a hugely philanthropic non-profit bent on ridding the world of what evils it can and encouraging innovation to make life better. Google also invests heavily in those areas of philanthropic spending to reward innovators in most areas of science.

Yeah, I think we're beginning to see a trend where social spending becomes a part of corporate image.