frame rendered using multiview. As expected, since the CPU is only sending one draw call, we only process it once on the CPU. On the GPU, the vertex job is also smaller, because the non-multiview part of the shader no longer runs twice. The fragment job, however, remains the same: we still need to evaluate every pixel of the screen one by one.

Relative CPU Time

As we have seen, multiview mainly helps the CPU by reducing the number of draw calls you need to issue to draw your scene. Consider an application where the CPU is lagging behind the GPU, in other words one that is CPU bound. In this application the number of cubes changes over time, starting from one and going up to one thousand. Each cube is drawn with its own draw call; we could obviously use batching, but that is not the point here. As expected, the more cubes we add, the longer the frame takes to render. On the graph below (Fig. 5), where smaller is better, we have measured the relative CPU time of regular stereo (blue) and multiview (red). If you recall the timeline, this result was expected: multiview halves the number of draw calls and therefore the CPU time.

Relative GPU Time

On the GPU we run vertex and fragment jobs. As we saw in the timeline (Fig. 3), they are not equally affected by multiview; in fact only vertex jobs are. On Midgard and Bifrost based Mali GPUs, only the multiview-related parts of the vertex shader are executed for each view. In our previous example we looked
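To make "the multiview-related parts of the vertex shader" concrete, here is a minimal sketch of a multiview render target and vertex shader built on the OVR_multiview OpenGL ES extension. The resolution, view count, and uniform names are illustrative rather than taken from the benchmark above, and error handling and the depth attachment are omitted; the point is that the only view-dependent work is the single line indexing a view-projection matrix with gl_ViewID_OVR.

```c
/* Minimal multiview setup sketch (OVR_multiview, OpenGL ES 3.0).
 * Sizes, names and the missing depth attachment are illustrative. */
#include <EGL/egl.h>
#include <GLES3/gl3.h>
#include <GLES2/gl2ext.h>

/* Vertex shader: only the line using gl_ViewID_OVR differs per view,
 * so the per-view vertex cost stays small. */
static const char *kMultiviewVertexShader =
    "#version 300 es\n"
    "#extension GL_OVR_multiview : enable\n"
    "layout(num_views = 2) in;\n"
    "layout(location = 0) in vec4 aPosition;\n"
    "uniform mat4 uModel;\n"
    "uniform mat4 uViewProj[2];\n" /* one view-projection matrix per eye */
    "void main() {\n"
    "    gl_Position = uViewProj[gl_ViewID_OVR] * uModel * aPosition;\n"
    "}\n";

/* Create a framebuffer whose colour attachment is a two-layer texture
 * array (one layer per eye), with both layers attached at once. */
GLuint create_multiview_fbo(GLsizei width, GLsizei height)
{
    /* The extension entry point is fetched through EGL. */
    PFNGLFRAMEBUFFERTEXTUREMULTIVIEWOVRPROC pfnFramebufferTextureMultiviewOVR =
        (PFNGLFRAMEBUFFERTEXTUREMULTIVIEWOVRPROC)
            eglGetProcAddress("glFramebufferTextureMultiviewOVR");

    GLuint colour;
    glGenTextures(1, &colour);
    glBindTexture(GL_TEXTURE_2D_ARRAY, colour);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, width, height, 2);

    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
    pfnFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                      colour, 0 /* level */,
                                      0 /* baseViewIndex */, 2 /* numViews */);
    return fbo;
}
```

Attaching both layers through glFramebufferTextureMultiviewOVR is what allows a single draw call into this framebuffer to be broadcast to both views.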
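The draw-call halving behind the CPU measurements can be sketched as follows. This is not the benchmark's actual code: the function names, the single-MVP stereo shader, and the assumption that the cube geometry, shader program, and per-eye view-projection uniforms are already bound or uploaded are all illustrative.

```c
/* Sketch of the per-frame submission in both paths. Assumes the cube
 * VAO and the relevant shader program are already bound. */
#include <GLES3/gl3.h>

#define CUBE_INDEX_COUNT 36 /* 12 triangles per cube */

/* Regular stereo: one pass per eye, so 2 * numCubes draw calls per frame.
 * Assumes a conventional shader with a single combined MVP uniform. */
void draw_scene_stereo(int numCubes, const GLuint fboPerEye[2], GLint uMvpLoc,
                       const float *mvpPerEyePerCube /* [eye][cube][16] */)
{
    for (int eye = 0; eye < 2; ++eye) {
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboPerEye[eye]);
        for (int i = 0; i < numCubes; ++i) {
            glUniformMatrix4fv(uMvpLoc, 1, GL_FALSE,
                               &mvpPerEyePerCube[(eye * numCubes + i) * 16]);
            glDrawElements(GL_TRIANGLES, CUBE_INDEX_COUNT, GL_UNSIGNED_SHORT, 0);
        }
    }
}

/* Multiview: the eye loop disappears, so only numCubes draw calls are
 * issued; the GPU replays each one for both layers of the multiview
 * framebuffer. Assumes uViewProj[2] was uploaded once for the frame. */
void draw_scene_multiview(int numCubes, GLuint fboMultiview, GLint uModelLoc,
                          const float *modelPerCube /* [cube][16] */)
{
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboMultiview);
    for (int i = 0; i < numCubes; ++i) {
        glUniformMatrix4fv(uModelLoc, 1, GL_FALSE, &modelPerCube[i * 16]);
        glDrawElements(GL_TRIANGLES, CUBE_INDEX_COUNT, GL_UNSIGNED_SHORT, 0);
    }
}
```

With N cubes the stereo path issues 2N draw calls per frame while the multiview path issues N, which is consistent with the roughly halved CPU time shown in Fig. 5.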
Fig. 2: Regular Stereo job scheduling timeline
Fig. 3: Multiview job scheduling timeline.
Fig. 4: Scene used to measure performance
Fig. 5: Relative CPU time of multiview versus regular stereo; smaller is better. Number of cubes on the x-axis, relative time on the y-axis. Multiview in red, regular stereo in blue.