Escapist Podcast - Science and Tech: 029: The Great Frame Rate Debate

The Escapist Staff

New member
Jul 10, 2006
6,151
0
0
029: The Great Frame Rate Debate

In this episode of The Escapist's Science and Tech podcast, host CJ Miozzi and Escapist writers discuss recent headlines in the world of science and technology, as well as the great debate over 24 FPS vs. 48 FPS in movies, and 30 vs. 60 FPS in video games.

Watch Video
 

Raziel

New member
Jul 20, 2013
243
0
0
Split the baby is a bad metaphor. But his point is that he's got to come up with a plan that could get passed. You saw after the statement from Obama that some politicians spoke out immediately against the idea. No matter how popular net neutrality might be with voters, there is a HUGE amount of industry push against it. And this is America, money talks. We're going to get screwed.
 

madwarper

New member
Mar 17, 2011
1,841
0
0
Raziel said:
Split the baby is a bad metaphor.
Except it's not a metaphor. It's a common idiom that means to make a simple compromise.
http://en.wikipedia.org/wiki/Judgment_of_Solomon#.22Splitting_the_baby.22

The expressions "splitting the baby" or "cutting the baby in half" are sometimes used in the legal profession for a form of simple compromise: solutions which "split the difference" in terms of damage awards or other remedies (e.g. a judge dividing fault between the two parties in a comparative negligence case).
 

gardian06

New member
Jun 18, 2012
403
0
0
Everyone in these discussions looks at graphical hardware or resolution, when there is a lot more going on than even the "video" enthusiasts talk about. Let's talk about what makes up a frame. A game is primarily built around what is referred to as a game loop, which has 3 basic/primary parts: logical update, physics update (typically based on logical operations), and graphical analysis/render. These parts of the game loop run constantly, and can be expanded into: pre-input update, input check/processing, post-input update, physics move, collision detection, collision resolution, calculation of the camera frustum, post-process effect pass(es) (this includes shaders, some of which have to be done one at a freaking time over the ENTIRE scene), and then render.
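The ordering described above can be sketched in a few lines of Python. This is a minimal illustration, not any real engine's loop: all function and field names here are made up for the example.

```python
# Minimal sketch of the game loop described above; names are illustrative.

def frame(state, dt):
    """Run one iteration: logic -> physics -> render, in that order."""
    phases = []
    # logical update: pre-input update, input processing, post-input update
    state["t"] += dt
    phases.append("logic")
    # physics update: move, collision detection, collision resolution
    state["x"] += state["v"] * dt
    phases.append("physics")
    # render: frustum calculation, post-process passes, draw call
    phases.append("render")
    return phases

state = {"t": 0.0, "x": 0.0, "v": 60.0}
for _ in range(3):          # three simulated frames at 60 Hz
    phases = frame(state, 1.0 / 60.0)
```

The point of the sketch is simply that every one of those phases runs every single frame, so cost anywhere in the chain eats into the frame budget.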

Right now I'm just listing the steps a game loop goes through, but in reality these steps happen every "frame" (I will get back to frame locking later), and their cost depends on the number of GameObjects and the complexity of the commands being issued. Your logical operations run on the primary processor, while physics operations may be offloaded to the GPU (since graphics alone isn't enough to keep it busy). So if these commands are rather complex, and there are a large number of them, this "can" get rather slow.

Usually the biggest bottleneck in a game is a trade-off between physics, AI, and graphics. Physics becomes a bottleneck if you have a highly complex physics environment (deformable interactive terrain, interactive terrain, or just a lot of concave collidable surfaces). AI can be a bottleneck if you have a highly complex AI algorithm, or even a lot of simple AI algorithms running concurrently; and mind you, if each one has a unique instruction or decision set, it gets far worse. Graphics only really becomes a bottleneck when we start talking about highly detailed models or complex shaders, but that can still get out of hand quickly.

This is probably quite a bit of babble to many people, but what it amounts to is: if your processor, RAM, or GPU is slower than the "benchmarked" hardware, these bottlenecks become a cascading issue. Under the right circumstances it is possible for an unoptimized game to run at a higher frame rate, and do so consistently, but this is the exception, not the rule. Usually a target frame rate needs to be determined (this determination can be made at any stage of development, but it is typically better to decide sooner rather than later), and then each portion of the game can be profiled and optimized accordingly.

Now, if the game can smoothly run at, say, 75 fps, but every so often drops to 60 fps, there are a couple of options: either let this happen (maybe market the game as 60 fps anyway to save face), or cap the game at 60 fps-render. There is a difference between capping something at XX fps-render and at XX fps. When you cap a game at a certain render limit, you are telling the game to process everything for as many frames as it can handle, but only render at the capped value (run a clock in the background, and every time the clock hits value X, call render). The other form of cap holds the entire update hostage to a clock before processing anything. This can be exaggerated by having a number of things in the game be controlled by "real time" rather than "system ticks" (but that is a lengthier discussion that is trickier to describe without examples, which for the life of me I can't think of at the moment).
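The render-cap idea above can be illustrated with a toy simulation. This is a hedged sketch: the function and its parameters are invented for the example, and a real loop would read the platform's clock rather than derive time from a counter.

```python
# Toy model of a render cap: logic/physics run every iteration, but render
# is only called when a background clock crosses the capped interval.
# All names and numbers here are illustrative.

def run_capped(total_updates, update_hz, render_cap_hz):
    """Return (updates run, frames rendered) for a render-capped loop."""
    renders = 0
    next_render = 0.0
    for i in range(total_updates):
        t = i / update_hz            # simulated background clock
        # logic + physics happen every iteration, regardless of the cap
        if t >= next_render - 1e-9:  # render only when the clock hits the mark
            renders += 1
            next_render += 1.0 / render_cap_hz
    return total_updates, renders

updates, renders = run_capped(120, update_hz=120, render_cap_hz=60)
# one simulated second: 120 updates, but only 60 rendered frames
```

The full-loop cap (XX fps, not XX fps-render) would instead put the clock check around the entire iteration, so updates and renders would both be held to 60.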

All in all, at the end of the day: there is a lot more at play than just the rendering, or just the graphical hardware. You have to consider the entire machine.

Source: actual experience optimizing numerous games (as an uncredited contractor).

In the video there was mention of what it takes to enable motion blur in a rendered video, and the question was raised as to what it would take to do this "on the fly" in a game. If you want motion blur in a game, there are 2 ways to accomplish it effectively. The first is to take the current frame without blur, then, before that frame is rendered, grab the next frame, find what is different, and apply blur to the difference. The other way is interpolation: take frame 1 and frame 2, but instead of rendering frame 2, render frame 1.5, which is halfway in between, with blur (basically what Marla was talking about with the smart TVs). It is not cut and dried which of these is faster. If there is a low number of things on screen, you can easily do either without an extreme impact. If you have a lot of simple objects on screen, you want the first approach, because we are pretty sure the physics has finished completely. But if there are a lot of complex objects, we use the second approach, because if the physics didn't finish all the way, it is possible to hand the renderer a "majority" of the new information plus the old and still be successful; as long as the incomplete data amounts to minor corrections, or sits toward the edges of the frame, the eye will not notice "as much". This is probably the most taxing shader/post-process effect you can do in a game currently.
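The interpolation approach ("render frame 1.5") can be shown with a toy blend. Note the simplification: real motion blur works along per-pixel motion vectors, while this sketch just averages brightness values to illustrate the halfway-frame idea.

```python
# Toy sketch of frame interpolation: blend two frames at the halfway point.

def interpolate(frame_a, frame_b, t=0.5):
    """Blend per-pixel values: t=0 returns frame_a, t=1 returns frame_b."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

frame1 = [0, 0, 100, 200]      # brightness of one row of pixels
frame2 = [0, 100, 200, 200]    # same row, one frame later
frame1_5 = interpolate(frame1, frame2)   # the in-between "frame 1.5"
```

Where the two frames differ (the moving edge), the blended frame lands halfway between them, which is exactly the smoothing effect the eye reads as blur.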

Typically, the reason a rendering program can take such a dramatic increase in compile/render time is that instead of looking at the objects in the scene (unless the program is 100% layer-driven, with every object on a different layer, it just has to ASSuME things, and even then user ability and scene density are still in question), it has to determine on a pixel-by-pixel basis what is and isn't blurred. A simple test for this: in your rendering software, take 2 objects starting on either side of the screen (toward the edges), then over about 2 frames move them to roughly the opposite sides of the screen (best results if each travels only about three quarters of the screen). What you should notice is that around the center the 2 objects seem to merge together as 1. In a game this would not happen, as each object is treated independently from every other object.
 

JustMakingAComment

New member
Jun 25, 2014
29
0
0
Regarding net neutrality, people should read http://www.cnet.com/news/comcast-vs-netflix-is-this-really-about-net-neutrality/ and consider that this is not "the people vs evil corporations", but rather "some corporations tricking people into protesting other corporations so the first corporations can make more profit by forcing costs onto the second corporations".

Since this is the "science" podcast, I am sure you are all familiar with the ultra-high-speed, just-for-scientists network called Internet2? It is, of course, connected to the Internet. All card-carrying net neutrality advocates should be calling for Internet2 networks to be made freely available to Netflix so it can push its content even faster at no cost to itself. Because "neutrality" means scientists and the defense department don't get special fast networks either, right?

Regarding frame rates, you can have 500 frames per second if you're willing to run at a lower resolution and forgo lighting effects, shadows, and other things that consume resources. When people are offered a choice between "better looking" and "refreshes faster than about 30 fps", they want "better looking". What do you think people will want: 4K resolution or 60 fps? General use of 60 fps will happen once display resolution stops increasing and visual effects processing stops improving.
 

ExileNZ

New member
Dec 15, 2007
915
0
0
I'm sure there are perfectly good hardware-related reasons why we have to choose between 60 FPS and 1080p in 2014, but I simply can't get over the fact that the last game I played that was limited to 60 FPS was Quake 2.

Granted, it didn't run at 1080p... oh wait, yes it did.

How did we get here?
 

Hoplon

Jabbering Fool
Mar 31, 2010
1,840
0
0
Hilariously, even cinemas play back at 72 fps.

In games, a higher frame rate is more about compensating for sudden busyness on the screen dropping the frame rate; this is mitigated somewhat by adaptive refresh rate technologies.
 

JustMakingAComment

New member
Jun 25, 2014
29
0
0
If anyone holds that films play at 72 fps, despite knowing that's due to flashing the same frame three times, then they should have nothing to complain about with so-called 30 fps games.

Because everything on a 60Hz monitor is 60 fps, so long as you don't care about the same frame being displayed two or even three times over.
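The arithmetic behind both claims is just frame repetition; a quick sketch (the function name is made up for illustration):

```python
# Each source frame is repeated enough times to fill the display's refresh.

def flashes_per_frame(refresh_hz, content_fps):
    """How many times each content frame is shown per refresh cycle."""
    return refresh_hz // content_fps

film = flashes_per_frame(72, 24)   # projector flashes each film frame 3 times
game = flashes_per_frame(60, 30)   # a 30 fps game frame appears twice at 60 Hz
```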

Problem solved.
 

nadesico33

It's tragically delicious!
Mar 10, 2010
50
0
0
In regards to the FCC's letter to AT&T delaying its fiber buildout: as a concession for its buyout of DirecTV, AT&T agreed to provide high-speed fiber to X number of customers in Y markets by date Z (I don't know the specific numbers), and AT&T had already failed to meet the minimum milestones the FCC had set for this buildout when it decided to postpone it. They are technically in breach of a government order.

More generally, remember that Verizon is treating all its fiber as privately owned and operated, even though it received government subsidies earmarked for the buildout of PUBLIC UTILITIES under the very Title II restrictions it is so vehemently fighting against. So yeah, a decent portion of Verizon's FiOS service is technically a public utility. Why isn't it being treated like one?