<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>rss.livelink.posts-in-node</title>
    <link>https://research.activision.com/t5/Research-Blog/ct-p/research-blog</link>
    <description>rss.livelink.posts-in-node</description>
    <pubDate>Mon, 17 Jul 2017 13:53:38 GMT</pubDate>
    <dc:creator>research-blog</dc:creator>
    <dc:date>2017-07-17T13:53:38Z</dc:date>
    <item>
      <title>Rendering of Call of Duty: Infinite Warfare</title>
      <link>https://research.activision.com/t5/Research-Papers/Rendering-of-Call-of-Duty-Infinite-Warfare/ba-p/10308283</link>
      <description>&lt;P&gt;This lecture presents a technical deep dive into the COD:IW renderer and the architectural design process behind it. Numerous topics will be covered: shadows, particle rendering, reflections, refractions, global illumination, volumetric rendering, and Forward+ rendering.&lt;BR /&gt;This talk will guide you through the high-level interactions between those core subsystems and their design, and unveil some novel optimizations and image-quality improvements. Finally, the lecture will show specific approaches to performance scaling that allow fluid 60 Hz gameplay at 1080p on current-generation consoles. The target audience is rendering engineers as well as technical artists interested in learning about modern methods for 60 Hz titles.&lt;/P&gt;
&lt;P&gt;Click here to view the full presentation: &lt;A href="https://www.activision.com/cdn/research/2017_DD_Rendering_of_COD_IW.pdf" target="_self"&gt;PDF&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Click here to download the full presentation: &lt;A href="https://www.activision.com/cdn/research/2017_DD_Rendering_of_COD_IW_V3.pptx" target="_self"&gt;PowerPoint&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 08 Jul 2017 00:39:08 GMT</pubDate>
      <guid>https://research.activision.com/t5/Research-Papers/Rendering-of-Call-of-Duty-Infinite-Warfare/ba-p/10308283</guid>
      <dc:creator>mdrobot</dc:creator>
      <dc:date>2017-07-08T00:39:08Z</dc:date>
    </item>
    <item>
      <title>Ambient Dice</title>
      <link>https://research.activision.com/t5/Research-Papers/Ambient-Dice/ba-p/10284641</link>
      <description>&lt;P&gt;We present a family of basis functions designed to represent illumination signals on the unit sphere accurately and efficiently.&lt;BR /&gt;The bases are built from locally supported functions, with three to six basis functions contributing in any given direction.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click here for the complete paper:&amp;nbsp;&lt;A href="https://www.activision.com/cdn/research/ambient_dice_web.pdf" target="_self"&gt;PDF&lt;/A&gt;&lt;/P&gt;
</description>
      <pubDate>Sat, 08 Jul 2017 00:48:11 GMT</pubDate>
      <guid>https://research.activision.com/t5/Research-Papers/Ambient-Dice/ba-p/10284641</guid>
      <dc:creator>miciwan</dc:creator>
      <dc:date>2017-07-08T00:48:11Z</dc:date>
    </item>
    <item>
      <title>Fast Filtering of Reflection Probes</title>
      <link>https://research.activision.com/t5/Research-Papers/Fast-Filtering-of-Reflection-Probes/ba-p/10046672</link>
      <description>&lt;P&gt;Game and movie studios are switching to physically based rendering en masse, but physically accurate filter convolution is difficult to do quickly enough to update reflection probes in real time. Cubemap filtering has also become a bottleneck in the content processing pipeline. We have developed a two-pass filtering algorithm that is specialized for isotropic reflection kernels, is several times faster than existing algorithms, and produces superior results. The first pass uses a quadratic b-spline recurrence that is modified for cubemaps. The second pass uses lookup tables to determine optimal sampling in terms of placement, mipmap level, and coefficients. Filtering a full 128² cubemap on an NVIDIA GeForce GTX 980 takes between 160 μs and 730 μs with our method, depending on the desired quality.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click here to download the complete paper:&amp;nbsp;&lt;A href="https://www.activision.com/cdn/research/paper_egsr.pdf" target="_self"&gt;PDF&lt;/A&gt;&lt;/P&gt;
</description>
      <pubDate>Thu, 06 Jul 2017 00:00:56 GMT</pubDate>
      <guid>https://research.activision.com/t5/Research-Papers/Fast-Filtering-of-Reflection-Probes/ba-p/10046672</guid>
      <dc:creator>CTRND</dc:creator>
      <dc:date>2017-07-06T00:00:56Z</dc:date>
    </item>
    <item>
      <title>Filmic SMAA: Sharp Morphological and Temporal Antialiasing</title>
      <link>https://research.activision.com/t5/Research-Papers/Filmic-SMAA-Sharp-Morphological-and-Temporal-Antialiasing/ba-p/10012720</link>
      <description>&lt;P&gt;From SIGGRAPH 2016, our antialiasing technique for games.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Click here to download the complete presentation:&amp;nbsp;&lt;A href="https://www.activision.com/cdn/research/FilmicSMAAfinal.pptx" target="_self"&gt;PowerPoint PPTX&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 13 Jul 2017 17:41:31 GMT</pubDate>
      <guid>https://research.activision.com/t5/Research-Papers/Filmic-SMAA-Sharp-Morphological-and-Temporal-Antialiasing/ba-p/10012720</guid>
      <dc:creator>CTRND</dc:creator>
      <dc:date>2017-07-13T17:41:31Z</dc:date>
    </item>
    <item>
      <title>ATVI-TR-16-02: Practical Order Independent Transparency</title>
      <link>https://research.activision.com/t5/Tech-Reports/ATVI-TR-16-02-Practical-Order-Independent-Transparency/ba-p/10008126</link>
      <description>&lt;P&gt;Transparencies have always been a tricky problem for games: to render them correctly, you have to draw them in front-to-back order from the view of the camera. Traditionally this means sorting them, and in doing so you have to break apart material batches and sometimes even meshes, which hurts performance. Order Independent Transparency (OIT) solves these problems, but simple implementations severely limit the number of layers of transparency you can have. This document describes a method that combines OIT with software-rasterized sprites via compute shaders, which we call “Compute Sprites”. Most of our transparent layers were due to particles; Compute Sprites efficiently take care of most of them, which allows for a practical implementation of OIT for the remaining mesh transparencies in the scene.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Click here for the full technical report: &amp;nbsp;&lt;A href="https://research.activision.com/eikmo72643/attachments/eikmo72643/tech-reports/19/6/PracticalOIT.pdf" target="_self"&gt;PDF&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
</description>
      <pubDate>Thu, 06 Jul 2017 17:35:23 GMT</pubDate>
      <guid>https://research.activision.com/t5/Tech-Reports/ATVI-TR-16-02-Practical-Order-Independent-Transparency/ba-p/10008126</guid>
      <dc:creator>CTRND</dc:creator>
      <dc:date>2017-07-06T17:35:23Z</dc:date>
    </item>
    <item>
      <title>Efficient GPU Rendering of Subdivision Surfaces</title>
      <link>https://research.activision.com/t5/Research-Papers/Efficient-GPU-Rendering-of-Subdivision-Surfaces/ba-p/10002177</link>
      <description>&lt;P&gt;We present a novel method for real-time rendering of subdivision surfaces whose goal is to make subdivision faces as easy to render as triangles, points, or lines. Our approach uses standard GPU tessellation hardware and processes each face of a base mesh independently, thus allowing an entire model to be rendered in a single pass. The key idea of our method is to subdivide the u, v domain of each face ahead of time, generating a quadtree structure, and then submit one tessellated primitive per input face. By traversing the quadtree for each post-tessellation vertex, we are able to accurately and efficiently evaluate the limit surface.&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Click below&amp;nbsp;to download the complete paper and supporting documents:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;Video:&amp;nbsp;&lt;A title="Video " href="https://www.youtube.com/watch?v=PTGTviYwolE" target="_self"&gt;https://www.youtube.com/watch?v=PTGTviYwolE&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;BibTeX:&amp;nbsp;&lt;A href="http://www.graphics.stanford.edu/~niessner/papers/2016/4subdiv/brainerd2016efficient.bib" target="_blank"&gt;http://www.graphics.stanford.edu/~niessner/papers/2016/4subdiv/brainerd2016efficient.bib&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;PDF:&amp;nbsp;&lt;A href="http://www.graphics.stanford.edu/~niessner/papers/2016/4subdiv/brainerd2016efficient.pdf" target="_self"&gt;http://www.graphics.stanford.edu/~niessner/papers/2016/4subdiv/brainerd2016efficient.pdf&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Stanford Project: &lt;A href="http://www.graphics.stanford.edu/~niessner/brainerd2016efficient.html" target="_self"&gt;http://www.graphics.stanford.edu/~niessner/brainerd2016efficient.html&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 06 Jul 2017 00:03:20 GMT</pubDate>
      <guid>https://research.activision.com/t5/Research-Papers/Efficient-GPU-Rendering-of-Subdivision-Surfaces/ba-p/10002177</guid>
      <dc:creator>CTRND</dc:creator>
      <dc:date>2017-07-06T00:03:20Z</dc:date>
    </item>
    <item>
      <title>Volumetric Global Illumination at Treyarch</title>
      <link>https://research.activision.com/t5/Research-Papers/Volumetric-Global-Illumination-at-Treyarch/ba-p/10002170</link>
      <description>&lt;P&gt;We present a solution for indirect diffuse lighting as an alternative to traditional lightmaps. The primary goals were to reduce&amp;nbsp;light baking times, and to allow&amp;nbsp;the lighting to apply to moving objects and effects with the same quality as the environment. Our solution was to use carefully placed irradiance volumes. These were baked using a unique image-based sampling approach that makes use of multiple input images. We also discuss the evolution of this idea and the many caveats and dead ends we experienced along the way.&lt;/P&gt;
&lt;P&gt;Click here to see the full PowerPoint:&amp;nbsp;&lt;A href="/t5/forums/editpage/board-id/research-papers/message-id/\\kmyers-1064\siggraph\SparseShadowTree.pptx" target="_self"&gt;SparseShadowTree PPTX&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 04 May 2017 22:28:36 GMT</pubDate>
      <guid>https://research.activision.com/t5/Research-Papers/Volumetric-Global-Illumination-at-Treyarch/ba-p/10002170</guid>
      <dc:creator>CTRND</dc:creator>
      <dc:date>2017-05-04T22:28:36Z</dc:date>
    </item>
    <item>
      <title>Practical Real-Time Strategies for Accurate Indirect Occlusion</title>
      <link>https://research.activision.com/t5/Research-Papers/Practical-Real-Time-Strategies-for-Accurate-Indirect-Occlusion/ba-p/10002180</link>
      <description>&lt;P&gt;We present new systems for occlusion of indirect lighting: GTAO, a screen-space ambient occlusion technique that yields superior results to the current state of the art (HBAO) while being as fast as one of the fastest screen-space techniques (HemiAO), and GTSO, a series of approximations that accurately model specular occlusion under probe lighting.&lt;/P&gt;
&lt;P&gt;Click here for the complete presentation:&amp;nbsp;&lt;A href="https://www.activision.com/cdn/research/s2016_pbs_activision_occlusion.pptx" target="_self"&gt;PowerPoint&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 04 May 2017 20:47:53 GMT</pubDate>
      <guid>https://research.activision.com/t5/Research-Papers/Practical-Real-Time-Strategies-for-Accurate-Indirect-Occlusion/ba-p/10002180</guid>
      <dc:creator>CTRND</dc:creator>
      <dc:date>2017-05-04T20:47:53Z</dc:date>
    </item>
    <item>
      <title>Sparse Shadow Trees</title>
      <link>https://research.activision.com/t5/Research-Papers/Sparse-Shadow-Trees/ba-p/10002172</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Lighting large outdoor scenes continues to present a challenge for real-time rendering. Cascaded shadow maps are costly to render over large areas, and baked lightmaps are expensive and require a unique parametrization over the entire scene. We present a technique that allows for baking shadow maps over large outdoor areas with minimal memory consumption and that is amenable to deferred rendering.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Click here to download the complete presentation:&lt;/SPAN&gt;&amp;nbsp;&lt;A href="https://www.activision.com/cdn/research/SparseShadowTree.pptx" target="_self"&gt;&lt;SPAN&gt;P&lt;/SPAN&gt;owerPoint&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Click here to download the complete paper:&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://research.activision.com/eikmo72643/attachments/eikmo72643/research-papers/3/7/SparseShadowTree.pdf" target="_self"&gt;Paper PDF&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 06 Jul 2017 17:37:32 GMT</pubDate>
      <guid>https://research.activision.com/t5/Research-Papers/Sparse-Shadow-Trees/ba-p/10002172</guid>
      <dc:creator>CTRND</dc:creator>
      <dc:date>2017-07-06T17:37:32Z</dc:date>
    </item>
    <item>
      <title>ATVI-TR-16-01: Practical Realtime Strategies for Accurate Indirect Occlusion</title>
      <link>https://research.activision.com/t5/Tech-Reports/ATVI-TR-16-01-Practical-Realtime-Strategies-for-Accurate/ba-p/9998529</link>
      <description>&lt;P&gt;Ambient occlusion is ubiquitous in games and other real-time applications to approximate global illumination effects. However, there is no analytic solution to the ambient occlusion integral for arbitrary scenes, and using general numerical integration algorithms is too slow, so the approximations used in practice are often made empirically to look pleasing even if they don’t accurately solve the AO integral. In this work we introduce a new formulation of ambient occlusion, GTAO, which is able to match a ground-truth reference in half a millisecond on current console hardware. This is done by using an alternative formulation of the ambient occlusion equation and an efficient implementation that distributes computation using spatio-temporal filtering. We then extend GTAO with a novel technique that takes into account near-field global illumination, which is lost when using ambient occlusion alone. Finally, we introduce a technique for specular occlusion, GTSO, symmetric to ambient occlusion, which allows computing realistic specular reflections from probe-based illumination. Our techniques are efficient, give results close to the ray-traced ground truth, and have been integrated in recent AAA console titles.&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Click here to read the full technical report:&amp;nbsp;&lt;A href="https://www.activision.com/cdn/research/PracticalRealtimeStrategiesTRfinal.pdf" target="_self"&gt;PDF&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
</description>
      <pubDate>Fri, 07 Jul 2017 19:06:30 GMT</pubDate>
      <guid>https://research.activision.com/t5/Tech-Reports/ATVI-TR-16-01-Practical-Realtime-Strategies-for-Accurate/ba-p/9998529</guid>
      <dc:creator>CTRND</dc:creator>
      <dc:date>2017-07-07T19:06:30Z</dc:date>
    </item>
  </channel>
</rss>

