Today Futuremark revealed that they are developing a benchmark for virtual reality hardware and displays. Following the naming style of PCMark and 3DMark, this new virtual reality benchmark will be called VRMark. According to Futuremark, VRMark will use a combination of software and hardware to evaluate a system's ability to provide a high-quality virtual reality experience.

Because virtual reality systems have many components that must all work properly to provide a good experience, benchmarking them is different from testing a computer or a smartphone. VRMark will evaluate a system's ability to deliver a frame rate that is both high and consistent; in virtual reality it is important not only to render many frames per second, but also to keep the timing between those frames even, since irregular frame delivery is immediately noticeable inside a headset.
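
To illustrate that distinction, here is a minimal sketch of how frame pacing might be scored from a log of frame timestamps. This is purely illustrative and not Futuremark's methodology; the 90 Hz target and the stutter threshold are assumptions.

    # Minimal sketch of frame-pacing analysis from a log of frame timestamps.
    # The 90 Hz target and the 1.5x stutter threshold are illustrative
    # assumptions, not VRMark's actual methodology.
    def frame_pacing_stats(timestamps_ms, target_hz=90.0):
        budget_ms = 1000.0 / target_hz  # ~11.1 ms per frame at 90 Hz
        deltas = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
        avg_fps = 1000.0 / (sum(deltas) / len(deltas))
        # A frame that badly overshoots its budget reads as a stutter in VR,
        # even when the average frame rate looks healthy.
        missed = sum(1 for d in deltas if d > budget_ms * 1.5)
        return {"avg_fps": round(avg_fps, 1),
                "missed_frames": missed,
                "worst_frame_ms": round(max(deltas), 1)}

    # Two runs with nearly identical average FPS but different smoothness:
    steady = [i * 11.1 for i in range(90)]
    jittery = [t + (6.0 if i % 10 == 5 else 0.0) for i, t in enumerate(steady)]
    print(frame_pacing_stats(steady))   # no missed frames
    print(frame_pacing_stats(jittery))  # same average, repeated stutters

The point of the toy data is that two runs can report the same average frame rate while one of them stutters every tenth of a second; it is the worst-case numbers, not the average, that capture how a headset actually feels.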

In addition to measuring the hardware's ability to render and display frames smoothly and on time, VRMark will evaluate the sensors in a VR head-mounted display. Reducing sensor latency has been a major part of the development process for VR headsets, and VRMark will help companies and reviewers evaluate this aspect of VR system performance.
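
As a rough illustration of the kind of measurement this involves, consider estimating how long after a known physical impulse a headset's sensor stream first reflects it. The sample format and detection threshold below are assumptions made for the sake of the sketch, not VRMark's procedure.

    # Hypothetical sketch: estimate sensor latency as the delay between a
    # known physical impulse and the first IMU sample that reflects it.
    # The (timestamp, angular velocity) format and the detection threshold
    # are assumptions, not any real test rig's output.
    def sensor_latency_ms(impulse_time_ms, samples, threshold=0.5):
        for ts, angular_velocity in samples:
            if ts >= impulse_time_ms and abs(angular_velocity) > threshold:
                return ts - impulse_time_ms  # sensing plus transport delay
        return None  # the impulse never appeared in the sensor stream

    # Example: impulse applied at t=100 ms, movement first reported at t=122 ms.
    stream = [(t, 0.0) for t in range(0, 122, 2)] + [(122.0, 1.8), (124.0, 2.1)]
    print(sensor_latency_ms(100, stream))  # -> 22.0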

There's currently no word on when VRMark will be released beyond the promise that it will launch in 2015. Virtual reality benchmarks like VRMark will certainly be a useful tool for seeing how the various VR headsets currently in development compare to one another.

Source: Futuremark

Comments

  • Wwhat - Wednesday, June 10, 2015 - link

    Without an agreed-upon unified system, or the interfaces needed to test the many aspects that need testing, I don't see how this can be done successfully at this time, especially with software only.
  • FM-Jarnis - Thursday, June 11, 2015 - link

    It is not software only. Quoting the PR:

    "VRMark is our first major product announcement since we joined forces with UL. Combining the expertise of both companies, we will additionally offer professional lab-based VR testing with precision instruments and verification of performance claims to manufacturers and other customers."
  • ishould - Wednesday, June 10, 2015 - link

    On the mobile side of things, I wonder how they make scoring fairly consistent across devices. I'm sure you can get more efficiency by gearing towards a Kepler, Adreno, Mali, Pascal, etc. Any efficiencies found in the programs would raise the scores across all architectures. How do they do cross-platform benchmarks (ARM vs x86) now?
  • edzieba - Thursday, June 11, 2015 - link

    This either needs pervasive and reliable latency testers built into HMDs (like the DK2's 'pulse the first transmitted pixel' in-display-controller latency timer), or it needs to include a hardware latency tester. IMU testing could be as simple as a small table with a 'thumper' solenoid to provide a nice sharp impulse with a known cycle time (i.e. you know how long the solenoid takes to move, so you know when the IMU should report an impulse).
    Volume tracking system linearity and reliability are going to be a LOT harder to test, though. The 'gold standard' is to stick the HMD on a robotic arm, run it through a set of known movements, and compare those to the tracked movements (a sketch of that comparison follows the comments). That's not something most testers have lying around, let alone end-users.

    Even without testing the tracking system's performance, this is going to be closer to an FCAT setup than a software-only test.
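
As a rough sketch of the robot-arm comparison edzieba describes, the scoring step could be as simple as computing the RMS error between the arm's known positions and the positions the tracking system reported. The data format here is an assumption for illustration, not part of any real test harness.

    # Rough sketch of scoring positional tracking against ground truth.
    # Both inputs are (x, y, z) positions in millimetres, sampled in lockstep;
    # this format is an assumption for illustration only.
    import math

    def tracking_rms_error_mm(ground_truth, tracked):
        errors = [math.dist(g, t) for g, t in zip(ground_truth, tracked)]
        return math.sqrt(sum(e * e for e in errors) / len(errors))

    # Example: the tracker drifts 2 mm on one axis halfway through a sweep.
    truth = [(float(x), 0.0, 0.0) for x in range(0, 100, 10)]
    seen = [(float(x), 2.0 if x >= 50 else 0.0, 0.0) for x in range(0, 100, 10)]
    print(round(tracking_rms_error_mm(truth, seen), 2))  # -> 1.41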
