The screen is close to your face, but the lenses mitigate that, and also stretch the image across your field of view.
Optics in previous consumer-grade headsets have preserved the geometry of the panel, presenting you with a screen that seems to float in front of you. The Oculus doesn't try to do that: its lenses focus on making the image fill your view, and the software then adjusts the rendered image to compensate for the lens distortion, either with a post-processing displacement filter, or by rendering to a viewplane that isn't flat and rectangular to begin with.
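As a rough illustration of the post-processing route, here is a minimal sketch of the kind of radial barrel warp such a displacement filter might apply; the coefficients K1 and K2 are made-up placeholders, not calibrated values from any actual headset.

    # Hypothetical radial-distortion coefficients; real values would come
    # from the lens calibration, not from this sketch.
    K1, K2 = 0.22, 0.24

    def barrel_warp(u, v, cx=0.5, cy=0.5):
        # Pre-distort a texture coordinate (u, v in [0, 1]) so that the
        # lens's opposite (pincushion) distortion cancels it out.
        dx, dy = u - cx, v - cy
        r2 = dx * dx + dy * dy                  # squared distance from lens centre
        scale = 1.0 + K1 * r2 + K2 * r2 * r2    # displacement grows toward the edges
        return cx + dx * scale, cy + dy * scale

    # Example: where an output pixel near the right edge samples from.
    print(barrel_warp(0.9, 0.5))

In a real renderer this would run per pixel on the GPU, but the maths is the same.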
There will no doubt be a number of settings, such as eye separation, to adjust for comfort, depending on how you are built and what you prefer (maybe you want exaggerated depth in some situations, for instance).
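A separation setting like that could plausibly feed straight into the stereo camera offsets; a toy sketch, where the 0.064 m default and the depth_scale knob are both assumptions of mine:

    # Sketch: derive per-eye camera offsets from an adjustable separation.
    # depth_scale above 1.0 exaggerates perceived depth; below 1.0 flattens it.
    def eye_offsets(ipd_m=0.064, depth_scale=1.0):
        half = (ipd_m * depth_scale) / 2.0
        return (-half, 0.0, 0.0), (+half, 0.0, 0.0)  # left eye, right eye

    left, right = eye_offsets(depth_scale=1.5)       # exaggerated depth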
The biggest issue should be latency: you'll probably want to optimise for performance, so that you get good responsiveness and frame rate, even if that means sacrificing a bit of quality -- temporal fidelity over visual.
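One common way to make that trade automatically is to scale render quality against a frame-time budget; a naive sketch, where the 60 FPS target and the step sizes are arbitrary tuning values:

    # Naive quality governor: drop the render scale when a frame runs over
    # budget, and claw quality back when there is headroom.
    TARGET_MS = 1000.0 / 60.0  # 60 FPS frame-time budget

    def adjust_render_scale(scale, frame_ms, step=0.05, lo=0.5, hi=1.0):
        if frame_ms > TARGET_MS * 1.05:   # over budget: trade pixels for time
            return max(lo, scale - step)
        if frame_ms < TARGET_MS * 0.85:   # headroom: restore quality
            return min(hi, scale + step)
        return scale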
The one thing I can think of, off the top of my head, that currently cannot be worked around is that the screen sits at a single distance from your eyes, both physically and after the lenses' compensation. That means your eyes cannot focus at different depths within the view, which may cause discomfort until your eyes learn to relax that reflex while using the headset.
There may be a halfway solution at some point, with pupil tracking built into the headset, so that the software can simulate depth of field depending on what you are looking at. Your own lenses would still have to stay focused at the depth of the physical screen, but it might become easier to adapt. The problem is what to do in ambiguous situations, such as when you are looking through foliage or a pane of glass: do you want to focus on the room behind the glass, or on the smudge on the pane? Or consider an overview shot, where you're paying as much attention to the periphery as to what's straight ahead.
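To make that concrete, here is a rough sketch of gaze-driven DOF, assuming a tracker that yields a screen-space gaze point and a readable depth buffer; every name in it is hypothetical. Taking the median depth over a small window around the gaze point is one cheap way to keep a single stray sample (a leaf, a smudge on the glass) from yanking focus around, though it only damps the ambiguity rather than resolving it.

    import numpy as np

    def focal_depth(depth_buffer, gaze_xy, window=5):
        # Median depth in a small window around the gaze point, so one
        # stray sample doesn't dominate the focus choice.
        x, y = gaze_xy
        patch = depth_buffer[y - window:y + window + 1,
                             x - window:x + window + 1]
        return float(np.median(patch))

    def circle_of_confusion(depth, focus, strength=0.02):
        # Blur radius grows with distance from the chosen focal plane;
        # a post-process blur pass would consume this per pixel.
        return strength * abs(depth - focus) / max(depth, 1e-6)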