# Renderers and shapes
This page describes how renderers turn your data into GPU draw calls and how to work with shapes and instance data.
## Built-in renderers
The library ships several renderers for common visualization tasks:
- Shape renderers in `webgpu.shapes` render many copies of a base shape (cylinder, cone, circle). They support per-instance positions, directions, scalar values and explicit colors, and integrate with `webgpu.colormap.Colormap` to map scalar values to colors.
- Triangle-based renderers in `webgpu.triangles` work with arbitrary meshes defined by vertex positions, normals and triangle indices. They are useful when you already have a mesh from another tool or solver.
- Additional helpers such as `webgpu.labels`, `webgpu.vectors`, `webgpu.colormap` and `webgpu.clipping` provide labels, vector glyphs, colormaps and clipping planes.
The tutorials show concrete combinations of these building blocks for instanced geometry, vector fields and selection highlights.
## Instance data
Many renderers, in particular `webgpu.shapes.ShapeRenderer`, can
render thousands of instances efficiently. Typical instance attributes
include:
- `positions` (3D translation per instance)
- `directions` (orientation or direction vectors)
- `values` (scalars mapped to colors via a colormap)
- `colors` (explicit per-instance RGBA values)
These attributes can be passed either as NumPy arrays or as
pre-created GPU buffers. For example:

```python
import numpy as np

# N instances with random positions; my_scalar_field holds one scalar per instance
renderer.positions = np.random.randn(N, 3)
renderer.values = my_scalar_field
```
Changing instance data usually requires a redraw. Call
`renderer.set_needs_update()` after updating large arrays to ensure the
next frame rebuilds the relevant GPU buffers.
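As a sketch of this update pattern (using a stand-in object rather than the real `webgpu.shapes.ShapeRenderer`, so the example is self-contained; only the attribute names and `set_needs_update()` come from the library's documented interface):

```python
import numpy as np

class FakeShapeRenderer:
    """Stand-in for webgpu.shapes.ShapeRenderer, for illustration only."""
    def __init__(self):
        self.needs_update = False

    def set_needs_update(self):
        # In the real library this marks the renderer so the next frame
        # rebuilds its GPU buffers from the Python-side arrays.
        self.needs_update = True

N = 1000
renderer = FakeShapeRenderer()
renderer.positions = np.random.randn(N, 3).astype(np.float32)  # one translation per instance
renderer.values = np.linalg.norm(renderer.positions, axis=1)   # scalars, later mapped via a colormap
renderer.set_needs_update()                                    # request a buffer rebuild
```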
## Writing a custom renderer
For data types that do not fit the existing renderers, you can implement
your own by subclassing `webgpu.renderer.Renderer` (most common)
or, for advanced use, `webgpu.renderer.BaseRenderer`.
### High‑level lifecycle
Renderers live inside a `webgpu.scene.Scene`:

- you create one or more renderers and pass them to `Scene`,
- `Scene` creates a `webgpu.renderer.RenderOptions` object that contains the camera, light, canvas and command encoder,
- on every render, each renderer is asked to `update` (if needed) and then `render` using those options.
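This update-then-render handshake can be sketched with stand-ins (the `Fake*` classes below are hypothetical simplifications; the real classes live in `webgpu.scene` and `webgpu.renderer` and track staleness via a scene timestamp):

```python
class FakeOptions:
    """Stand-in for webgpu.renderer.RenderOptions (camera, light, canvas, encoder)."""
    def __init__(self, timestamp):
        self.timestamp = timestamp

class FakeRenderer:
    """Stand-in renderer mimicking the timestamp check done by BaseRenderer."""
    def __init__(self):
        self._last_update = -1
        self.calls = []

    def update(self, options):
        self.calls.append("update")
        self._last_update = options.timestamp

    def render(self, options):
        self.calls.append("render")

class FakeScene:
    """Stand-in for webgpu.scene.Scene: holds renderers and drives each frame."""
    def __init__(self, renderers):
        self.renderers = renderers
        self.timestamp = 0

    def render_frame(self):
        options = FakeOptions(self.timestamp)
        for r in self.renderers:
            if r._last_update < options.timestamp:  # update only when stale
                r.update(options)
            r.render(options)

scene = FakeScene([FakeRenderer()])
scene.render_frame()  # first frame: update + render
scene.render_frame()  # nothing changed: render only
scene.timestamp += 1  # simulates a data change (set_needs_update in the real library)
scene.render_frame()  # stale again: update + render
```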
The base classes handle most of this for you:
- `BaseRenderer` tracks a timestamp and only calls `update(options)` when something changed (for example after `set_needs_update()`),
- `Renderer` builds a `RenderPipeline` and, by default, performs a simple non‑indexed `draw` call plus an optional separate selection pass.
In a custom renderer you mainly describe what to draw (WGSL shader, buffers and bindings) and when to (re)build GPU resources.
### Choosing a base class
- Subclass `Renderer` when you want a standard graphics pipeline (color + depth, optional selection) and are happy with the default `render` and `select` implementations. This is appropriate for most visualization use cases.
- Subclass `BaseRenderer` directly only when you need full control over pipeline creation or draw calls (for example multiple passes, unusual topologies or compute‑driven rendering). You are then responsible for implementing `create_render_pipeline` and `render` yourself.
- Use `webgpu.renderer.MultipleRenderer` to group several renderer instances into one logical object that shares selection behaviour and `on_select` callbacks.
### Core responsibilities of a renderer
Regardless of the base class, a renderer is responsible for three things:
- defining shader code,
- defining GPU bindings (buffers, textures, samplers),
- updating those GPU resources when the Python‑side data changes.
The key methods and attributes are:
`get_shader_code(self) -> str`

Return WGSL shader source. The recommended pattern is to store WGSL in `your_module/shaders/` and load it via `webgpu.utils.read_shader_file()`. The string is run through `webgpu.utils.preprocess_shader_code()`, which supports `#import` (for shared code like camera and lighting) and simple `#ifdef` / `@TOKEN@` replacement. Two defines are injected automatically:

- `RENDER_OBJECT_ID` – a unique integer for this renderer. In shaders you typically use `@RENDER_OBJECT_ID@` when writing to the selection buffer.
- `SELECT_PIPELINE` – defined only for the selection pipeline, which makes it easy to branch between normal and selection output in the same WGSL file if desired.
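A hedged WGSL sketch of how these defines are commonly used (the `#import camera` module name, the entry-point name and the way the id is packed into the output are illustrative assumptions, not the library's exact interface):

```wgsl
// shared code pulled in by the preprocessor (module name assumed)
#import camera

@fragment
fn fragment_main() -> @location(0) vec4<f32> {
#ifdef SELECT_PIPELINE
    // selection pass: write this renderer's id (illustrative encoding)
    return vec4<f32>(f32(@RENDER_OBJECT_ID@), 0.0, 0.0, 1.0);
#else
    // normal color pass
    return vec4<f32>(1.0, 0.5, 0.0, 1.0);
#endif
}
```

Because `SELECT_PIPELINE` is only defined for the selection pipeline, one WGSL file can serve both passes.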
`get_bindings(self) -> list[webgpu.utils.BaseBinding]`

Return a list of bindings (uniform buffers, storage buffers, textures, samplers) that the renderer needs. Use helper classes such as `webgpu.utils.BufferBinding`, `webgpu.utils.UniformBinding` and `webgpu.utils.TextureBinding`. These bindings are combined with scene‑wide bindings from the camera and lights to build the final bind group.

`update(self, options: webgpu.renderer.RenderOptions) -> None`

Prepare or refresh GPU‑side state when the scene timestamp changes. This often means:

- creating or updating vertex/index/instance buffers from NumPy arrays using `webgpu.utils.buffer_from_array()`,
- updating uniform buffers that depend on the current camera or light configuration,
- filling `self.vertex_buffers` and `self.vertex_buffer_layouts` (when using classic vertex attributes),
- setting `self.n_vertices`, `self.n_instances`, `self.topology` and, if needed, alternative WGSL entry points via `self.vertex_entry_point`, `self.fragment_entry_point` and `self.select_entry_point`.
The base class decorator ensures `update` is only called when necessary. When you change large arrays from Python, call `renderer.set_needs_update()` so the next frame rebuilds buffers.
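Putting the three responsibilities together, a minimal point-cloud renderer might look like the sketch below. The `Renderer` base and `buffer_from_array` are stubbed so the sketch is self-contained and runnable; in real code you would import them from `webgpu.renderer` and `webgpu.utils`, and `get_shader_code` would load an actual WGSL file via `read_shader_file()`:

```python
import numpy as np

# --- stand-ins for the real library, illustration only ---
class Renderer:
    """Stub for webgpu.renderer.Renderer."""
    def set_needs_update(self):
        self.needs_update = True

def buffer_from_array(array):
    # The real webgpu.utils.buffer_from_array uploads to a GPU buffer;
    # here we just pass the array through.
    return array

# --- the actual sketch ---
class PointCloudRenderer(Renderer):
    def __init__(self, points):
        self.points = np.ascontiguousarray(points, dtype=np.float32)
        self.n_vertices = 6                  # e.g. two triangles per billboarded point
        self.n_instances = len(self.points)  # one instance per point

    def get_shader_code(self) -> str:
        # real code: load WGSL via webgpu.utils.read_shader_file()
        return "/* WGSL source */"

    def get_bindings(self) -> list:
        # real code: return BufferBinding/UniformBinding instances here
        return []

    def update(self, options) -> None:
        # (re)upload geometry whenever the scene timestamp says we are stale
        self._point_buffer = buffer_from_array(self.points)

renderer = PointCloudRenderer(np.random.randn(500, 3))
renderer.update(options=None)
```

The counts set in `__init__` are what the default `draw(self.n_vertices, self.n_instances)` call consumes.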
### How drawing and selection work
When you subclass `Renderer`, you normally do not override
`render` and `select`:

- `Renderer.create_render_pipeline()` compiles your shader code, creates a bind group from `options.get_bindings() + self.get_bindings()` and sets up a graphics pipeline for both the color and selection passes.
- `Renderer.render()` opens a render pass from `RenderOptions`, binds the pipeline, bind group and any vertex buffers in `self.vertex_buffers`, then calls `draw(self.n_vertices, self.n_instances)`.
- `Renderer.select()` does the same using `self._select_pipeline` and writes into the offscreen selection texture. The `Scene` then reads back a single pixel, decodes `obj_id` and forwards a `webgpu.renderer.SelectEvent` to any renderer that registered `on_select` callbacks.
For more advanced layouts you can override these methods. For example,
the shape renderer in `webgpu.shapes` performs indexed drawing
(`drawIndexed`) and sets up multiple vertex buffers for positions,
directions, per‑instance colors/values and additional per‑mesh data.
### Patterns for custom renderers
The built‑in renderers show common patterns you can copy:
- `webgpu.shapes.ShapeRenderer` uses classic vertex and instance attributes. In `update` it:
  - converts NumPy arrays into GPU vertex buffers,
  - fills `self.vertex_buffers` and `self.vertex_buffer_layouts` to describe positions, directions, colors and per‑shape data,
  - chooses fragment entry points depending on whether you pass scalar values (colormap look‑up) or explicit RGBA colors.
- `webgpu.triangles.TriangulationRenderer` uses storage buffers only: it uploads vertex positions and normals via `webgpu.utils.BufferBinding` and accesses them from WGSL using `@builtin(vertex_index)` and `@builtin(instance_index)`. This is a good template when you prefer to keep all geometry in storage buffers instead of vertex attributes.
### Adapting these patterns
To create your own renderer:
1. Decide whether you want vertex attributes (`vertex_buffers`), storage buffers (`BufferBinding`) or a combination.
2. Subclass `Renderer`, set up your CPU‑side attributes and determine `n_vertices` and `n_instances`.
3. In `update`, create or update GPU buffers, vertex layouts and any uniforms. Call `set_needs_update` whenever the Python‑side data changes.
4. Implement `get_shader_code` (and, if needed, `get_bindings`) so the WGSL code and bindings match the buffers you created.
5. Optionally customise selection by providing a `select_entry_point` that writes `@RENDER_OBJECT_ID@` and any per‑instance information you want to receive in `SelectEvent`.
With these pieces in place, your renderer can be dropped into any
existing `Scene` alongside the built‑in renderers.