URP Customization | Unity3D

Understanding the Basic Fundamentals

SebastianShearer
10 min read · Dec 6, 2020

Unity’s new Render Pipelines come with some pretty awesome new APIs that allow us developers to reach AAA results with great optimization when it comes to rendering and customization.

We plan to release tutorials on how to add your own customizations and create your own render pipeline for future projects, but before we jump into advanced projects we wanted to highlight some important aspects of URP so you can fully grasp how URP works and what its capabilities are.

Table of Contents:
- Universal Render Pipeline
- URP Shaders
- URP Post-Processing
- Customizing URP
- Summary

Universal Render Pipeline

URP itself functions by using a forward renderer, shading models, the camera, and the UniversalRP API. URP implements a rendering loop that checks all of our inputs and determines how to render a frame.

When we add our render pipeline asset in our Graphics settings, Unity starts to use URP to render all Cameras. For each camera, URP executes a camera loop that culls rendered objects, builds data for the renderer, and outputs an image to the framebuffer.

A lot of our information about URP will be coming from the official documentation, which is the best resource to use when learning about Unity!

Camera Loop Steps:

Since a lot of our customization will center around working with the Camera, it's smart to understand the processes and schedule that our new buddy undergoes on the daily/second/nanosecond/frame ;).

Setup Culling Parameters: Configures the parameters that determine how the system culls shadows and lighting; this can be controlled with a custom renderer.

Culling: Uses the culling parameters we set up previously to compute a list of visible renderers, shadow casters, and lights visible to the Camera. Culling parameters and Camera layer distances affect culling and rendering performance.

Build Rendering Data: Gathers information based on the culling output, quality settings from the URP Asset, Camera, and the current running platform to build the “RenderingData”. The Rendering data tells the renderer the amount of rendering work and quality required for the Camera and the currently chosen platform.

Setup Renderer: Builds a list of render passes and queues them for execution according to the rendering data. You can override this part of the render pipeline with a custom renderer.

Execute Renderer: Executes each render pass in the queue. The renderer outputs the Camera image to the framebuffer.

URP Shaders

URP has a different approach when it comes to shaders: custom shaders written for the Built-in Render Pipeline do not work with URP, so Unity includes a new set of shaders made for common use scenarios. Unity describes the best-case scenarios in better detail here.

With the Universal Render Pipeline, you can have real-time lighting with either Physically Based Shaders (PBS) or non-Physically Based Rendering (PBR).

- PBS: Lit Shader (all platforms)
- Non-PBR: Simple Lit Shader (less powerful devices)
- Baked Lighting: Baked Lit Shader
- No Lighting: Unlit Shader

Post-Processing

URP has its own integrated version of post-processing, also known as post-processing v3, which is unrelated to older versions. The Post-Processing Stack v2 from the Package Manager is also compatible with URP, but we won't be covering that version here.

Volumes

URP uses what’s called a Volume framework for its post-processing. Each Volume can either be global or have local boundaries. Volumes contain Scene setting property values that URP interpolates between, depending on Camera position. Local Volumes can be used to change environment settings, like fog color and density.

The Volume component can be added to any GameObject, including a Camera, though Unity recommends adding this component to a dedicated GameObject for each Volume.

The Volume component only references a Volume Profile, which contains the values to interpolate between. Right-click in the Project window and choose Create > Volume Profile to add one to your project.

You can alter properties away from their defaults by adding Volume Overrides: structures containing overrides for the Volume Profile’s default values.

Our scene can contain many Volumes. Generally you always have one Global Volume and can add as many Local Volumes as needed. A Local Volume affects the Camera when the Camera is within the bounds of its Collider (add any Collider to a Volume’s GameObject to use it as a Local Volume).

URP uses the Camera position and Volume properties to calculate the contribution of every Volume in the scene. Any Volume with a non-zero contribution is used to calculate the interpolated final values for every property in all Volume Components.

Blend Distance: The distance away from the Volume’s Collider at which URP starts to blend. 0 means URP applies the overrides the second the Camera enters the Collider. (Local Volumes only)

Weight: The amount of influence the Volume has on the scene. URP applies this factor after calculating Camera position and Blend Distance.

Priority: URP uses this to determine which Volume should be used when Volumes have an equal amount of influence on the Scene. Higher numbers are used first.
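To make these properties concrete, here is a minimal sketch of configuring a Local Volume from script using URP's Volume API (the class name LocalVolumeSetup and the specific values are our own, chosen for illustration):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Hypothetical example component; attach to a dedicated GameObject
public class LocalVolumeSetup : MonoBehaviour
{
    void Start()
    {
        // A Local Volume needs a Collider (set as a trigger) to define its bounds
        var box = gameObject.AddComponent<BoxCollider>();
        box.isTrigger = true;

        var volume = gameObject.AddComponent<Volume>();
        volume.isGlobal = false;    // local: only affects Cameras inside the Collider
        volume.blendDistance = 5f;  // start blending 5 units outside the Collider
        volume.weight = 1f;         // full influence
        volume.priority = 10f;      // beats Volumes with lower priority

        // Create a profile and add a Vignette override with a custom intensity
        var profile = ScriptableObject.CreateInstance<VolumeProfile>();
        volume.profile = profile;

        var vignette = profile.Add<Vignette>();
        vignette.intensity.overrideState = true;
        vignette.intensity.value = 0.4f;
    }
}
```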

Post-Processing Document Links
Tips for Post-Processing
Read more about Volumes
Read more about Volume Overrides
List of Default Effects

Customizing URP

As we mentioned above, URP has its own version of post-processing that comes pre-installed. This is separate from the “Post-Processing Stack v2” in the Package Manager, which is still supported for URP; the main difference when using PPv2 is how you can create custom effects. We will be covering only the pre-installed systems of URP. To learn more about PPv2 customizations, click any of the links below.

PPv2 Documentation
PPv2 Custom Effects
PPv2 Video Tutorial

For making customizations with the default URP packages, we use the “beginCameraRendering” event, which is raised before each active Camera is rendered every frame. The Camera needs to be active in order for this event to be called; subscribing a method to this event allows you to execute custom logic before the Camera is rendered by Unity.

Examples of custom logic are extra Cameras for rendering textures and using those textures for extra effects like planar reflections or surveillance cameras; your imagination is the limit.

Other events exist that can be used for customizations in URP, which we will cover more deeply in a future tutorial; for now, you can read about them here.

Pipeline Callbacks

This is where knowing the Camera Loop comes in handy: for example, when our Camera is setting up its culling parameters, our lights, objects, and probes are being culled for visibility.

Next we build our rendering data: lights, objects, shadows, HDR, post-processing, stereo, target output format, MSAA, and resolution.

Last, our Camera executes our rendering data through render passes and outputs our final result.

Example Script

First, add a GameObject to the scene and create a new script called “BasicCallBackExample”; remember to add “using UnityEngine.Rendering” at the top of your script.

We will be showing two scripts that basically do the same thing, just in different ways.

Example 1
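A minimal sketch of the first approach: subscribing in OnEnable and unsubscribing in OnDisable (the Debug.Log body is a placeholder for your custom logic):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class BasicCallBackExample : MonoBehaviour
{
    void OnEnable()
    {
        // Subscribe so our method runs before each active Camera renders
        RenderPipelineManager.beginCameraRendering += OnBeginCameraRendering;
    }

    void OnDisable()
    {
        // Always unsubscribe to avoid dangling event handlers
        RenderPipelineManager.beginCameraRendering -= OnBeginCameraRendering;
    }

    void OnBeginCameraRendering(ScriptableRenderContext context, Camera camera)
    {
        // Custom logic goes here; runs once per active Camera, per frame
        Debug.Log("About to render: " + camera.name);
    }
}
```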
Example 2
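And a sketch of the second approach, identical except for the point of call: here we assume the subscription happens once in Start and is cleaned up in OnDestroy (the class name BasicCallBackExample2 is our own):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class BasicCallBackExample2 : MonoBehaviour
{
    void Start()
    {
        // Subscribe once when the object starts
        RenderPipelineManager.beginCameraRendering += OnBeginCameraRendering;
    }

    void OnDestroy()
    {
        // Clean up when the object is destroyed
        RenderPipelineManager.beginCameraRendering -= OnBeginCameraRendering;
    }

    void OnBeginCameraRendering(ScriptableRenderContext context, Camera camera)
    {
        Debug.Log("About to render: " + camera.name);
    }
}
```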

In both examples, URP calls our method at the beginning of every camera render; the only difference is the point of call, and you have the freedom to choose how to control your URP customizations.

Example 1 Documentation
Example 2 Documentation

Scriptable Render Feature
Render Features allow us to extend the rendering pipeline without building one from scratch. Unfortunately, the documentation on how to get started is a bit scarce, so let's go over the idea here.

We can create multiple Render Features for our custom pipeline. Each feature holds two classes: one that extends “ScriptableRendererFeature” and one that extends “ScriptableRenderPass”. Our “pass” handles all the rendering work while the “feature” handles communicating with the rest of our pipeline.

Invert Render Feature

We are going to explain how to create your own invert render feature using URP’s Graphics.Blit, which is best for creating post-processing effects. Blit draws a full-screen quad into our “dest” RenderTexture (a null texture results in the screen being grabbed) while applying a material. (More into Kernel Convolution)

Create two new C# scripts, one named “InvertRenderPass” and the other named “InvertFeature”.

Make sure to add these dependencies at the top of your scripts; “InvertFeature” only requires the “Universal” dependency.
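For reference, a sketch of those directives (“InvertRenderPass” uses all three; “InvertFeature” compiles with just the default UnityEngine plus the Universal one):

```csharp
using UnityEngine;                      // Material and other core types
using UnityEngine.Rendering;            // CommandBuffer, RenderTargetIdentifier
using UnityEngine.Rendering.Universal;  // ScriptableRendererFeature, ScriptableRenderPass
```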

Make sure to replace “MonoBehaviour” with “ScriptableRendererFeature” inside your script titled “InvertFeature”; we are going to build the settings that feed our render passes first.

Make sure the field of type “MyFeatureSettings” is titled with a lowercase “settings” in order for our variables to appear. “[System.Serializable]” is important for our class to be readable from SRP as well.

All of these variables will be shown in the inspector when we add our SRF to the pipeline renderer.
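A sketch of what that settings block can look like (the field names isEnabled, whenToInsert, and materialToBlit are our own, chosen for illustration):

```csharp
// Inside the InvertFeature class, which extends ScriptableRendererFeature
[System.Serializable]
public class MyFeatureSettings
{
    // Everything here appears in the inspector on the renderer asset
    public bool isEnabled = true;
    public RenderPassEvent whenToInsert = RenderPassEvent.AfterRendering;
    public Material materialToBlit;
}

// Lowercase "settings", as noted above, so the variables show up
public MyFeatureSettings settings = new MyFeatureSettings();
```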

Next, we add an override Create() method and set up our “InvertRenderPass” & “RenderTargetHandle” inside it. Create() controls the basic parameters of our feature; “My custom pass” refers to the name seen in the Frame Debugger found under “Window > Analysis > Frame Debugger”, which is also going to be our “profilerTag” in our next script.
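A sketch of Create(), assuming the constructor parameters match the settings fields above:

```csharp
// Inside InvertFeature
InvertRenderPass myRenderPass;
RenderTargetHandle renderTextureHandle;

public override void Create()
{
    // "My custom pass" is the label shown in the Frame Debugger and
    // becomes the profilerTag inside InvertRenderPass
    myRenderPass = new InvertRenderPass(
        "My custom pass",
        settings.whenToInsert,
        settings.materialToBlit
    );
}
```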

Lastly, we complete our script with our “AddRenderPasses” override, returning early if our settings show us disabled and letting our renderer handle the rest. Let's dig into that.
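Roughly, that looks like this (Setup is a custom method we define on the pass shortly):

```csharp
// Inside InvertFeature; called once per Camera, every frame
public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
{
    if (!settings.isEnabled)
        return; // our settings show us disabled, so do nothing

    // Hand the Camera's color target to the pass, then queue the pass to run
    myRenderPass.Setup(renderer.cameraColorTarget);
    renderer.EnqueuePass(myRenderPass);
}
```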

Open your “InvertRenderPass” script and replace “MonoBehaviour” with “ScriptableRenderPass”.

Create new variables for our render pass to hold what is being sent from our last script.
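A sketch of those variables, plus a constructor to receive them (the names tempTexture and cameraColorTargetIdent are our own):

```csharp
// Inside InvertRenderPass, which extends ScriptableRenderPass
string profilerTag;
Material materialToBlit;
RenderTargetIdentifier cameraColorTargetIdent;
RenderTargetHandle tempTexture;

public InvertRenderPass(string profilerTag, RenderPassEvent renderPassEvent, Material materialToBlit)
{
    this.profilerTag = profilerTag;
    this.renderPassEvent = renderPassEvent; // field inherited from ScriptableRenderPass
    this.materialToBlit = materialToBlit;
}
```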

“Setup” is called as a custom addition to receive the color target of the camera, which we grabbed with “renderer.cameraColorTarget”.
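For example:

```csharp
// Inside InvertRenderPass; called from the feature's AddRenderPasses
public void Setup(RenderTargetIdentifier cameraColorTargetIdent)
{
    this.cameraColorTargetIdent = cameraColorTargetIdent;
}
```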

We gave our pass permission to run when we called “renderer.EnqueuePass(myRenderPass)” in the feature.

Our next step is to create another override, “Configure()”, which handles the creation of our temporary render texture for each updated frame.
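A sketch of Configure(), allocating a temporary texture that matches the Camera:

```csharp
// Inside InvertRenderPass
public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
{
    // Allocate a temporary render texture with the Camera's dimensions and format
    cmd.GetTemporaryRT(tempTexture.id, cameraTextureDescriptor);
}
```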

After “Configure()” we need another override called “Execute()”, which is responsible for using the parameters we plugged into our settings. First we grab our CommandBuffer, named after our feature, and clear any commands that could be left over from earlier features.

Next, we apply our material to the camera color using “cmd.Blit”, writing into a temporary texture. We “cmd.Blit” one more time to copy the temporary texture back, then finally execute our commands, clear the buffer, and release it to the “CommandBufferPool”.
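Putting those two steps together, Execute() can look like this:

```csharp
// Inside InvertRenderPass
public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
{
    // Grab a command buffer named after our feature and clear any leftovers
    CommandBuffer cmd = CommandBufferPool.Get(profilerTag);
    cmd.Clear();

    // Apply our material while copying the camera color into the temporary texture
    cmd.Blit(cameraColorTargetIdent, tempTexture.Identifier(), materialToBlit, 0);

    // Copy the processed image back to the camera's color target
    cmd.Blit(tempTexture.Identifier(), cameraColorTargetIdent);

    // Execute the queued commands, then tidy up
    context.ExecuteCommandBuffer(cmd);
    cmd.Clear();
    CommandBufferPool.Release(cmd);
}
```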

And the final method, “FrameCleanup()”, is used for releasing anything we allocated in Configure().
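And the cleanup itself:

```csharp
// Inside InvertRenderPass
public override void FrameCleanup(CommandBuffer cmd)
{
    // Release the temporary texture we allocated in Configure()
    cmd.ReleaseTemporaryRT(tempTexture.id);
}
```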

Now all that's left is applying it! Create a new “Unlit Graph” so we can create a shader to fill in our new texture (to really understand the magic, we explain that in another article here).

Create a Texture2D property and make sure to change the reference name to “_MainTex” so our feature can find it. Generally, imagine your scene turned into a picture that you're editing in an image editor; this is how post-processing works, whether bloom, sharpen, invert, etc. It's all changing pixels, and with some Unity3D magic we can be more unique and dynamic than other programs allow.

Create a new material, add your shader to it, and open your UniversalRenderPipeline_Renderer asset; there you will see an option for “Render Features” with a button to add a new one. Your feature should automatically pop up.

You can edit the shader to play around with ideas for new post-processing effects.

Thanks to Sam Driver for the deeper understanding

Camera Stacking
Camera Stacking is used for mixing many cameras into one view and is capable of many different possibilities; we plan to cover examples of that in further tutorials. For now, if this is something you're interested in, we have linked Unity’s official tutorial below.

Unity Tutorial on Camera Stacking

Summary

We appreciate you reading our tutorial! This is just a short breakdown and a few examples of how URP can be used to create amazing effects. We are excited to release future tutorials as we create our own render features and URP customizations.

Join GooSlimeStudios Discord
Download Scripts

https://www.patreon.com/GooSlime?fan_landing=true
