danpascooch said:
CrystalShadow said:
danpascooch said:
monnes said:
danpascooch said:
danpascooch said:
Amethyst Wind said:
Hehehe.
"We could tweak it......but fuck you guys."
My thoughts exactly.
Anyway, I don't believe them, but if I did, it would make sense. Sure, it can't hurt to be more sensitive, but if it makes no real difference, use the development time to improve something else.
monnes said:
danpascooch said:
Amethyst Wind said:
Hehehe.
"We could tweak it......but fuck you guys."
My thoughts exactly.
Anyway, I don't believe them, but if I did, it would make sense. Sure, it can't hurt to be more sensitive, but if it makes no real difference, use the development time to improve something else.
Being more sensitive sure could hurt; I think that's the whole point of why they're not making it more accurate. Just read the last paragraph of the article. Seems to me like most people are misunderstanding this, although I suppose I could be the one who didn't get it.
More sensitive = closer to real life.
I don't have over-sensitivity problems in real life.
For certain games, yes, but for FPS games, for example, registering the slightest movement is just silly, considering that without the weight of the gun you are bound to shake a bit.
That's a good point, which is why the Move should have as much accuracy as possible and let the developers of the games tune the responses on their end as appropriate to the type of game they are making. Limiting everyone for that handful of cases, when the sensitivity could be tuned on the developers' end, is stupid.
Except, given how the Move functions, more accuracy probably means higher CPU utilisation.
Sure, the developers can artificially lower the accuracy if it's too high for their purposes, but that would generally mean computing the highest accuracy first and then lowering it after the fact, so you pay the processing cost either way.
Remember, the position is being tracked by noting where a coloured sphere appears on a webcam.
That implies some degree of image processing, and image processing isn't cheap in terms of processing time.
In any event, the comment makes sense really.
When a controller becomes too sensitive, it has unintended consequences.
If you are a developer dealing with analog sticks on a traditional controller, you have to think very carefully about dead zones, so the control is accurate when it needs to be but doesn't faithfully reproduce every 'precise' yet irritating wobble; the player isn't precise enough when making fast movements, and won't realise that it's their own fault.
As a result, you have to intentionally lower the accuracy a lot of the time, or get blamed for it.
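Just to illustrate what a dead zone ends up looking like in code, here's a toy Python sketch (the function name and the 0.15 cutoff are invented for the example, not from any real SDK):

def apply_deadzone(x, y, deadzone=0.15):
    """Ignore stick input below the dead zone, then rescale the rest
    so the usable range still runs smoothly from 0.0 to 1.0."""
    magnitude = (x * x + y * y) ** 0.5
    if magnitude < deadzone:
        return 0.0, 0.0          # tiny wobble: treat it as no input at all
    # rescale so movement just past the dead zone starts from zero again
    scale = (magnitude - deadzone) / (1.0 - deadzone) / magnitude
    return x * scale, y * scale

# a slightly off-centre stick reads as no movement at all:
print(apply_deadzone(0.05, -0.08))   # -> (0.0, 0.0)
print(apply_deadzone(0.6, 0.0))      # -> (~0.53, 0.0)

The rescaling after the cutoff matters; without it the stick 'jumps' the moment it leaves the dead zone.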
I consider it the same principle as taking a professional photo or filming a major movie, always get it in the HIGHEST resolution possible (or in this case, precision) because it can always be lowered later if need be, but it can't be raised.
Also, it's still processing the same image, so I don't think there would be a large CPU increase. Saying "tweaking" implies not swapping out the camera for a better one, so I think they are talking more about how the software interprets the sensors on the controller, though I could be wrong. But still, since when has "it might cause more of a load on the CPU" ever been a problem with the PS3!? The thing has way TOO much power; that's why it was such a ridiculous price to begin with.
Lol. One thing being a game programmer has taught me is there is NEVER enough power for what you want to do.
And you might ask why processing an image with a finite resolution could require more CPU time if you need higher accuracy...
Well, image recognition isn't a simple task like that. It's not 1 pixel = 1 data point. Typically, such algorithms layer increasingly complex processing steps on top of one another as you push for more accuracy.
For something like Move, you are tracking a coloured sphere. You have the benefit of being able to control its colour, to make the contrast as high as possible, so you can probably almost get away with doing a colour compare for each pixel in the image and, in the most naive approach, determining the centre and the four extreme edge points of the sphere. (The centre gives the x and y positions; the apparent size between opposite edge points can be used to determine depth, with some simple math.)
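To make that concrete, here's roughly what the naive version looks like (toy Python, with an invented frame format and tolerance, obviously nothing to do with Sony's actual code):

def track_sphere(frame, target, tol=60):
    """Naive tracker. frame is a list of rows of (r, g, b) pixels,
    target is the colour we told the sphere to glow."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            # crude per-pixel colour compare against the sphere colour
            if abs(r - target[0]) + abs(g - target[1]) + abs(b - target[2]) < tol:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None                          # sphere not visible this frame
    left, right, top, bottom = min(xs), max(xs), min(ys), max(ys)
    centre = ((left + right) / 2.0, (top + bottom) / 2.0)   # x, y position
    diameter = ((right - left) + (bottom - top)) / 2.0      # apparent size
    # depth follows from the apparent size: a sphere of known physical size
    # that shows up N pixels wide is roughly (focal_length * real_size / N) away
    return centre, diameter

Note that the cost is already one compare per pixel per frame, which is why the camera mode matters so much.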
Now, according to its specs, the camera Sony is using has 2 resolutions, and 2 possible framerates. (320x240@120Hz, 640x480@60Hz, and presumably 320x240@60Hz is also an option.)
This means that, in terms of pixels to process per second, the lowest quality setting for the camera is 1/2 of the medium one, and 1/4 of the maximum one.
So going with the maximum resolution of the camera could increase the CPU load by up to 4 times.
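Rough numbers, just counting the pixels the tracker has to touch every second (trivial Python, but it makes the ratios obvious):

# pixels per second for each PS Eye mode
for name, w, h, fps in [("320x240 @ 60Hz", 320, 240, 60),
                        ("320x240 @ 120Hz", 320, 240, 120),
                        ("640x480 @ 60Hz", 640, 480, 60)]:
    print(name, w * h * fps)   # ~4.6M, ~9.2M and ~18.4M, a 1:2:4 ratio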
But there are other things you can do to increase the effective accuracy:
Firstly, you could process the edges of the sphere in the image to increase the accuracy. This sounds trivial, but it actually means several extra processing steps over the simpler method.
For one thing, previously you could assume the sphere was a single uniform colour, and you only needed to keep track of the handful of pixels that define the extremes of the sphere's position.
If you try and track the edges more accurately, you then have to deal with the pixels that are a blend of the sphere colour and the background. This takes a lot more work than having a single hard colour transition as a boundary.
And if you truly want maximum accuracy, you have to process the entire edge of the sphere, not just a handful of the most extreme outer points.
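The basic trick with the blended pixels is something like this (a toy one-dimensional Python sketch over a single row of the image, numbers invented):

def subpixel_edge(coverage, threshold=0.5):
    """coverage[i] = how much of pixel i looks like the sphere colour
    (0.0 = pure background, 1.0 = pure sphere). Returns the estimated
    edge position as a float, i.e. with sub-pixel precision."""
    for i in range(1, len(coverage)):
        a, b = coverage[i - 1], coverage[i]
        if a < threshold <= b:
            # linearly interpolate where the transition actually crosses 0.5
            return (i - 1) + (threshold - a) / (b - a)
    return None

# a left edge that really sits partway into pixel 3:
print(subpixel_edge([0.0, 0.0, 0.1, 0.6, 0.95, 1.0, 1.0]))   # ~2.8, not just "pixel 3"

That's more maths per boundary pixel, and you'd want it all the way around the sphere, which is where the extra cost creeps in.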
From there, you can do inter-frame processing, e.g. taking into account the motion between one frame and the next, which pushes CPU use up further.
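In its simplest form that's just blending each new measurement with a prediction from the previous frame's velocity (toy Python again, one axis only, with an invented blend constant; real tracking would use something far more involved, like a proper Kalman filter):

class MotionFilter:
    def __init__(self, blend=0.35):
        self.blend = blend   # how much we trust the raw measurement vs the prediction
        self.pos = None
        self.vel = 0.0

    def update(self, measured):
        if self.pos is None:             # first frame: nothing to predict from yet
            self.pos = measured
            return measured
        predicted = self.pos + self.vel                    # where we expected it this frame
        new_pos = predicted + self.blend * (measured - predicted)
        self.vel = new_pos - self.pos                      # velocity for the next prediction
        self.pos = new_pos
        return new_pos

f = MotionFilter()
for x in [100, 101, 99, 103, 102]:       # jittery per-frame positions
    print(round(f.update(x), 2))         # smoother output, but another pass of work per frame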
And I'm sure there's even more that could be possible if you really wanted to bother with it, but it would mostly just be a wasted effort.
How much extra effort would this really take though? I can't say for sure, but I wouldn't be surprised if you gained maybe 10-20% more accuracy from about 2x the CPU usage...
These kinds of problems are rarely linear in terms of how long they take to process for any given level of accuracy.