Renderers and shapes#

This page describes how renderers turn your data into GPU draw calls and how to work with shapes and instance data.

Built-in renderers#

The library ships several renderers for common visualization tasks, for example webgpu.shapes.ShapeRenderer for instanced shapes and webgpu.triangles.TriangulationRenderer for triangle meshes.

The tutorials show concrete combinations of these building blocks for instanced geometry, vector fields and selection highlights.

Instance data#

Many renderers, in particular webgpu.shapes.ShapeRenderer, can render thousands of instances efficiently. Typical instance attributes include:

  • positions (3D translation per instance)

  • directions (orientation or direction vectors)

  • values (scalars mapped to colors via a colormap)

  • colors (explicit per-instance RGBA values)

These attributes can be passed either as NumPy arrays or as pre-created GPU buffers. For example:

import numpy as np

renderer.positions = np.random.randn(N, 3)  # one translation per instance
renderer.values = my_scalar_field           # scalars, mapped through the colormap

Changing instance data usually requires a redraw. Call renderer.set_needs_update() after updating large arrays to ensure the next frame rebuilds the relevant GPU buffers.
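The deferred-rebuild pattern can be sketched with a stand-in class (plain Python, independent of the library); the method names mirror the API described above, but the implementation is illustrative only:

```python
import numpy as np

class FakeRenderer:
    """Stand-in mimicking the needs-update pattern; not the real Renderer."""

    def __init__(self):
        self.positions = None
        self._needs_update = True   # first frame always builds buffers
        self.rebuild_count = 0

    def set_needs_update(self):
        self._needs_update = True

    def draw_frame(self):
        # A real renderer would rebuild GPU buffers here; we only count rebuilds.
        if self._needs_update:
            self.rebuild_count += 1
            self._needs_update = False

r = FakeRenderer()
r.draw_frame()                        # frame 1: initial build
r.positions = np.random.randn(100, 3)
r.set_needs_update()                  # flag the change
r.draw_frame()                        # frame 2: rebuilds buffers
r.draw_frame()                        # frame 3: nothing changed, no rebuild
print(r.rebuild_count)                # → 2
```

The point of the flag is that several attribute assignments between frames still trigger only one rebuild.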

Writing a custom renderer#

For data types that do not fit the existing renderers, you can implement your own by subclassing webgpu.renderer.Renderer (most common) or, for advanced use, webgpu.renderer.BaseRenderer.

High‑level lifecycle#

Renderers live inside a webgpu.scene.Scene:

  • you create one or more renderers and pass them to Scene,

  • Scene creates a webgpu.renderer.RenderOptions object that contains the camera, light, canvas and command encoder,

  • on every render, each renderer is asked to update (if needed) and then render using those options.
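In pseudocode, that per-frame flow looks roughly like the following; RenderOptions fields follow the list above, while the class bodies are stand-ins, not the real Scene implementation:

```python
class RenderOptions:
    """Stand-in for webgpu.renderer.RenderOptions (fields per the list above)."""
    def __init__(self, camera, light, canvas, command_encoder):
        self.camera = camera
        self.light = light
        self.canvas = canvas
        self.command_encoder = command_encoder

class Scene:
    """Illustrative sketch of the render loop, not the real Scene."""
    def __init__(self, renderers):
        self.renderers = renderers
        self.options = RenderOptions(camera="camera", light="light",
                                     canvas="canvas", command_encoder="encoder")

    def render_frame(self):
        for renderer in self.renderers:
            if renderer.needs_update:        # only rebuild when flagged
                renderer.update(self.options)
            renderer.render(self.options)

class DummyRenderer:
    def __init__(self):
        self.needs_update = True
        self.calls = []
    def update(self, options):
        self.needs_update = False
        self.calls.append("update")
    def render(self, options):
        self.calls.append("render")

r = DummyRenderer()
Scene([r]).render_frame()
print(r.calls)   # → ['update', 'render']
```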

The base classes handle most of this for you:

  • BaseRenderer tracks a timestamp and only calls update(options) when something changed (for example after set_needs_update()),

  • Renderer builds a RenderPipeline and, by default, performs a simple non‑indexed draw call plus an optional separate selection pass.
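The timestamp tracking in BaseRenderer can be illustrated with a small stand-in; the real class differs, this only shows the skip-when-unchanged idea:

```python
class TimestampGated:
    """Sketch of timestamp-based update gating (illustrative only)."""

    def __init__(self):
        self._last_seen = -1
        self.updates = 0
        self.scene_timestamp = 0        # bumped by set_needs_update()

    def set_needs_update(self):
        self.scene_timestamp += 1

    def maybe_update(self):
        # Equivalent of the base-class check: skip when nothing changed.
        if self._last_seen != self.scene_timestamp:
            self._last_seen = self.scene_timestamp
            self.updates += 1           # real code would rebuild buffers here

r = TimestampGated()
r.maybe_update(); r.maybe_update()      # second call is skipped
r.set_needs_update()
r.maybe_update()
print(r.updates)                        # → 2
```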

In a custom renderer you mainly describe what to draw (WGSL shader, buffers and bindings) and when to (re)build GPU resources.

Choosing a base class#

  • Subclass Renderer when you want a standard graphics pipeline (color + depth, optional selection) and are happy with the default render and select implementations. This is appropriate for most visualization use cases.

  • Subclass BaseRenderer directly only when you need full control over pipeline creation or draw calls (for example multiple passes, unusual topologies or compute‑driven rendering). Then you are responsible for implementing create_render_pipeline and render yourself.

  • Use webgpu.renderer.MultipleRenderer to group several renderer instances into one logical object that shares selection behaviour and on_select callbacks.

Core responsibilities of a renderer#

Regardless of the base class, a renderer is responsible for three things:

  • defining shader code,

  • defining GPU bindings (buffers, textures, samplers),

  • updating those GPU resources when the Python‑side data changes.

The key methods and attributes are:

  • get_shader_code(self) -> str

    Return WGSL shader source. The recommended pattern is to store WGSL in your_module/shaders/ and load it via webgpu.utils.read_shader_file(). The string is run through webgpu.utils.preprocess_shader_code(), which supports #import (for shared code like camera and lighting) and simple #ifdef/@TOKEN@ replacement.

    Two defines are injected automatically:

    • RENDER_OBJECT_ID – a unique integer for this renderer. In shaders you typically use @RENDER_OBJECT_ID@ when writing to the selection buffer.

    • SELECT_PIPELINE – defined only for the selection pipeline, which makes it easy to branch between normal and selection output in the same WGSL file if desired.

  • get_bindings(self) -> list[webgpu.utils.BaseBinding]

    Return a list of bindings (uniform buffers, storage buffers, textures, samplers) that the renderer needs. Use helper classes such as webgpu.utils.BufferBinding, webgpu.utils.UniformBinding and webgpu.utils.TextureBinding. These bindings are combined with scene‑wide bindings from the camera and lights to build the final bind group.

  • update(self, options: webgpu.renderer.RenderOptions) -> None

    Prepare or refresh GPU‑side state when the scene timestamp changes. This often means:

    • creating or updating vertex/index/instance buffers from NumPy arrays using webgpu.utils.buffer_from_array(),

    • updating uniform buffers that depend on current camera or light configuration,

    • filling self.vertex_buffers and self.vertex_buffer_layouts (when using classic vertex attributes),

    • setting self.n_vertices, self.n_instances, self.topology and, if needed, alternative WGSL entry points via self.vertex_entry_point, self.fragment_entry_point and self.select_entry_point.

    The base class decorator ensures update is only called when necessary. When you change large arrays from Python, call renderer.set_needs_update() so the next frame rebuilds buffers.
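The @TOKEN@ and #ifdef handling mentioned under get_shader_code can be illustrated with a toy substitution pass; the real preprocess_shader_code also resolves #import, and this sketch's exact syntax handling is an assumption:

```python
import re

def toy_preprocess(source: str, defines: dict) -> str:
    """Toy @TOKEN@ / #ifdef handling; NOT webgpu.utils.preprocess_shader_code."""
    # Replace each @TOKEN@ with its defined value.
    for token, value in defines.items():
        source = source.replace(f"@{token}@", str(value))
    # Keep or drop "#ifdef NAME ... #endif" blocks based on the defines.
    def ifdef(match):
        name, body = match.group(1), match.group(2)
        return body if name in defines else ""
    return re.sub(r"#ifdef (\w+)\n(.*?)#endif\n", ifdef, source, flags=re.S)

wgsl = (
    "let obj_id = @RENDER_OBJECT_ID@u;\n"
    "#ifdef SELECT_PIPELINE\n"
    "// write obj_id into the selection buffer here\n"
    "#endif\n"
)
# The selection pipeline defines SELECT_PIPELINE, so its branch is kept:
print(toy_preprocess(wgsl, {"RENDER_OBJECT_ID": 7, "SELECT_PIPELINE": ""}))
```

Running the same source without SELECT_PIPELINE in the defines drops the branch, which is how one WGSL file can serve both the color and the selection pass.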

How drawing and selection work#

When you subclass Renderer, you normally do not override render and select:

  • Renderer.create_render_pipeline() compiles your shader code, creates a bind group from options.get_bindings() + self.get_bindings() and sets up a graphics pipeline for both color and selection passes.

  • Renderer.render() opens a render pass from RenderOptions, binds the pipeline, bind group and any vertex buffers in self.vertex_buffers, then calls draw(self.n_vertices, self.n_instances).

  • Renderer.select() does the same using self._select_pipeline and writes into the offscreen selection texture. The Scene then reads back a single pixel, decodes obj_id and forwards a webgpu.renderer.SelectEvent to any renderer that registered on_select callbacks.

For more advanced layouts you can override these methods. For example, the shape renderer in webgpu.shapes performs indexed drawing (drawIndexed) and sets up multiple vertex buffers for positions, directions, per‑instance colors/values and additional per‑mesh data.

Patterns for custom renderers#

The built‑in renderers show common patterns you can copy:

  • webgpu.shapes.ShapeRenderer uses classic vertex and instance attributes. In update it:

    • converts NumPy arrays into GPU vertex buffers,

    • fills self.vertex_buffers and self.vertex_buffer_layouts to describe positions, directions, colors and per‑shape data,

    • chooses fragment entry points depending on whether you pass scalar values (colormap look‑up) or explicit RGBA colors.

  • webgpu.triangles.TriangulationRenderer uses storage buffers only: it uploads vertex positions and normals via webgpu.utils.BufferBinding and accesses them from WGSL using @builtin(vertex_index) and @builtin(instance_index). This is a good template when you prefer to keep all geometry in storage buffers instead of vertex attributes.
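The index arithmetic behind the storage-buffer pattern can be sketched on the CPU side with hypothetical data; in WGSL the same lookup would use @builtin(vertex_index):

```python
import numpy as np

# Hypothetical triangle mesh: 4 points, 2 triangles, positions stored flat
# in a storage buffer rather than as classic vertex attributes.
positions = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]],
                     dtype=np.float32)
triangles = np.array([[0, 1, 2], [2, 1, 3]], dtype=np.uint32)

def fetch_vertex(vertex_index: int) -> np.ndarray:
    """CPU analogue of positions[triangles[i / 3][i % 3]] in the shader."""
    tri, corner = divmod(vertex_index, 3)
    return positions[triangles[tri][corner]]

# A non-indexed draw(n_vertices=6) would run the vertex shader for 0..5:
verts = [fetch_vertex(i) for i in range(6)]
print(verts[3])   # first corner of the second triangle → [0. 1. 0.]
```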

Adapting these patterns#

To create your own renderer:

  1. Decide whether you want vertex attributes (vertex_buffers) or storage buffers (BufferBinding) or a combination.

  2. Subclass Renderer, set up your CPU‑side attributes and determine n_vertices and n_instances.

  3. In update, create or update GPU buffers, vertex layouts and any uniforms. Call set_needs_update whenever Python‑side data changes.

  4. Implement get_shader_code (and, if needed, get_bindings) so the WGSL code and bindings match the buffers you created.

  5. Optionally customise selection by providing a select_entry_point that writes @RENDER_OBJECT_ID@ and any per‑instance information you want to receive in SelectEvent.
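Putting the steps together, a custom renderer skeleton might look as follows. Renderer here is a stand-in base class so the sketch runs on its own, and PointCloudRenderer with its attribute values is hypothetical; only the attribute and method names follow the description above:

```python
import numpy as np

class Renderer:
    """Stand-in for webgpu.renderer.Renderer so the sketch is self-contained."""
    def set_needs_update(self):
        self._needs_update = True

class PointCloudRenderer(Renderer):
    """Hypothetical renderer drawing one quad (4 vertices) per point."""

    def __init__(self, points: np.ndarray):
        self.points = np.asarray(points, dtype=np.float32)
        self.n_vertices = 4                   # a quad, expanded in the shader
        self.n_instances = len(self.points)
        self.set_needs_update()

    def update(self, options):
        # Step 3: (re)create GPU buffers. With the real library this would be
        # something like buffer_from_array(self.points); here we just record it.
        self._gpu_data = self.points.tobytes()
        self.n_instances = len(self.points)

    def get_shader_code(self) -> str:
        # Step 4: return WGSL matching the buffers created in update().
        return "// WGSL source would be loaded via read_shader_file() here"

r = PointCloudRenderer(np.zeros((1000, 3)))
r.update(options=None)
print(r.n_instances)   # → 1000
```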

With these pieces in place, your renderer can be dropped into any existing Scene alongside the built‑in renderers.