| author    | soaos <soaos@soaos.dev>                  | 2025-10-02 14:47:39 -0400    |
|-----------|------------------------------------------|------------------------------|
| committer | soaos <soaos@soaos.dev>                  | 2025-10-02 14:47:39 -0400    |
| commit    | 4668221cfe67e32e892cd9a157a43edd426d5b65 |                              |
| tree      | 5b018df636e95f1c8e69081ce2f6ee8f2d21ac33 | /blog/terminal_renderer_mkii |
| parent    | d0771c83024be1bfac81753eb6f4db817cff93db |                              |
Mostly finished the GP text renderer post, need to finish on desktop
Diffstat (limited to 'blog/terminal_renderer_mkii')
| -rw-r--r-- | blog/terminal_renderer_mkii/david.png          | bin | 0 -> 27218 bytes |
| -rw-r--r-- | blog/terminal_renderer_mkii/davidbayer.png     | bin | 0 -> 4213 bytes  |
| -rw-r--r-- | blog/terminal_renderer_mkii/davidthreshold.png | bin | 0 -> 3324 bytes  |
| -rw-r--r-- | blog/terminal_renderer_mkii/index.html         |     | 156              |
4 files changed, 148 insertions, 8 deletions
diff --git a/blog/terminal_renderer_mkii/david.png b/blog/terminal_renderer_mkii/david.png
new file mode 100644
index 0000000..6cfa884
--- /dev/null
+++ b/blog/terminal_renderer_mkii/david.png
Binary files differ
diff --git a/blog/terminal_renderer_mkii/davidbayer.png b/blog/terminal_renderer_mkii/davidbayer.png
new file mode 100644
index 0000000..af4bfc4
--- /dev/null
+++ b/blog/terminal_renderer_mkii/davidbayer.png
Binary files differ
diff --git a/blog/terminal_renderer_mkii/davidthreshold.png b/blog/terminal_renderer_mkii/davidthreshold.png
new file mode 100644
index 0000000..6c6e014
--- /dev/null
+++ b/blog/terminal_renderer_mkii/davidthreshold.png
Binary files differ
diff --git a/blog/terminal_renderer_mkii/index.html b/blog/terminal_renderer_mkii/index.html
index bd37efc..78171f1 100644
--- a/blog/terminal_renderer_mkii/index.html
+++ b/blog/terminal_renderer_mkii/index.html
@@ -19,26 +19,166 @@
     <!-- Header Section -->
     <h1>Terminal Renderer - Rendering to Text with Compute</h1>
     <p>October 2, 2025</p>
     <p>This week I brought my terminal renderer to the next level by performing text rendering on the GPU.</p>
 </div>
 <figure class="cover-image">
     <img src="cover.png" alt="">
     <figcaption>The Stanford Dragon, outlined and rendered as Braille characters in a terminal emulator.</figcaption>
 </figure>
 </section>
 <section class="text-section">
-    <h2>Preamble: Unicode Braille and Ordered Dithering</h2>
+    <h2>Context</h2>
+    <h3>Unicode Braille</h3>
     <p>
         I first messed around with rendering images to the terminal with Braille characters in like 2022, I think? I wrote a simple
         CLI tool that applied a threshold to an input image and output it as Braille characters in the terminal.
+        <a href="https://tv.soaos.dev/w/twpHAu4Jv8LJc9YjZbfw5g" target="_blank">Here's a recording I took back when I did it.</a>
     </p>
+    <p>
+        <figure class="fig fig-right">
+            <div class="centered">
+                <table class="schema-table">
+                    <tbody>
+                        <tr><td>0</td><td>3</td></tr>
+                        <tr><td>1</td><td>4</td></tr>
+                        <tr><td>2</td><td>5</td></tr>
+                        <tr><td>6</td><td>7</td></tr>
+                    </tbody>
+                </table>
+            </div>
+            <figcaption>The corresponding bit position for each Braille dot.</figcaption>
+        </figure>
+        This effect is pretty cool, and it was pretty easy to implement as well. The trick lies in how the
+        <a href="https://en.wikipedia.org/wiki/Braille_Patterns#Block" target="_blank">Unicode Braille block</a>
+        is laid out. The 8-dot Braille patterns happen to add up to exactly 256 combinations, the perfect amount to
+        fit in the range between <code>0x2800</code> (⠀) and <code>0x28FF</code> (⣿). In other words, every
+        character within the block can be represented by changing the value of a single byte.
+    </p>
+    <p>
+        The lowest 6 bits of the pattern map onto a 6-dot Braille pattern. However, for historical reasons
+        the 8-dot values were tacked on after the fact, which adds a slightly annoying mapping to the
+        conversion process. Either way, it's a lot easier than it could be: just read a pixel value, check
+        its brightness, and then use a bitwise operation to set or clear a dot.
+    </p>
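Concretely, the dot-to-bit mapping in the table above boils down to a few lines. A minimal Rust sketch (the function name and the pre-thresholded 2x4 `bool` block are illustrative assumptions, not code from the original tool):

```rust
/// Convert a 2x4 block of thresholded pixels into a Braille character.
/// `block[y][x]` is true when the dot in column `x`, row `y` is "on".
fn braille_char(block: [[bool; 2]; 4]) -> char {
    // Bit position of each dot: rows 0-2 use bits 0-5 (the legacy 6-dot
    // layout), while the bottom row's dots were tacked on as bits 6-7.
    const BITS: [[u32; 2]; 4] = [[0, 3], [1, 4], [2, 5], [6, 7]];
    let mut byte = 0u32;
    for y in 0..4 {
        for x in 0..2 {
            if block[y][x] {
                byte |= 1 << BITS[y][x];
            }
        }
    }
    // Every byte value 0..=255 lands on an assigned character inside the
    // Braille block, so the offset from 0x2800 can never fail.
    char::from_u32(0x2800 + byte).unwrap()
}

// braille_char([[false; 2]; 4]) == '\u{2800}' (⠀)
// braille_char([[true; 2]; 4])  == '\u{28FF}' (⣿)
```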
+    <h3>Ordered Dithering</h3>
+    <p>
+        Comparing the brightness of a pixel against a constant threshold is a fine way to display black and
+        white images, but it's far from ideal and often results in the loss of a lot of detail from the
+        original image.
+    </p>
+    <figure class="fig fig-horizontal">
+        <div class="horizontal-container">
+            <img src="david.png" alt="" />
+            <img src="davidthreshold.png" alt="" />
+            <img src="davidbayer.png" alt="" />
+        </div>
+        <figcaption>From left to right: original image, constant threshold, and ordered dither.
+            <a href="https://en.wikipedia.org/wiki/Dither" target="_blank">Wikipedia</a></figcaption>
+    </figure>
+    <p>
+        By using <a href="https://en.wikipedia.org/wiki/Ordered_dithering" target="_blank">ordered dithering</a>,
+        we can preserve much more of the subtlety of the original image. While not the "truest" version of
+        dithering possible, ordered dithering (and <i>Bayer</i> dithering in particular) provides a few
+        advantages that make it very well suited to realtime computer graphics:
+        <ul>
+            <li>Each pixel is dithered independently of every other pixel in the image, making it extremely
+                parallelizable and a great fit for shaders.</li>
+            <li>It's visually stable: changes to one part of the image won't disturb other areas.</li>
+            <li>It's dead simple.</li>
+        </ul>
+        Feel free to read up on the specifics of threshold maps, but for the purposes of this little
+        explanation it's enough to know that a threshold map is basically just an 𝓃⨉𝓃 matrix of values
+        between 0 and 1; to determine whether a pixel (𝓍,𝓎) is white or black, you check its brightness
+        against the threshold value at (𝓍%𝓃,𝓎%𝓃) in the map.
     </p>
 </section>
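As a sketch of that lookup, using the standard 4x4 Bayer index matrix (the matrix size and the half-step centering are illustrative choices, not necessarily what the renderer uses):

```rust
/// The classic 4x4 Bayer index matrix; adding 0.5 to an entry and dividing
/// by 16 yields thresholds strictly inside (0, 1).
const BAYER_4X4: [[f32; 4]; 4] = [
    [ 0.0,  8.0,  2.0, 10.0],
    [12.0,  4.0, 14.0,  6.0],
    [ 3.0, 11.0,  1.0,  9.0],
    [15.0,  7.0, 13.0,  5.0],
];

/// Dither a single pixel: white if its brightness (0..1) clears the
/// threshold at (x % n, y % n) in the map. No neighboring pixels are
/// consulted, which is what makes this trivially parallel on a GPU.
fn dither_pixel(x: usize, y: usize, brightness: f32) -> bool {
    let threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0;
    brightness > threshold
}
```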
 <section class="text-section">
-    <h2>Generating and Parsing Logs</h2>
+    <h2>The old way™</h2>
+    <p>
+        My first attempt at <i>realtime</i> terminal graphics with ordered dithering
+        (<a href="https://www.youtube.com/watch?v=tXP6sL9D0gY" target="_blank">I put a video up at the time</a>)
+        ran entirely on the CPU. I pre-calculated the threshold map at the beginning of execution and ran
+        each frame through a sequential function to dither it and convert it to Braille characters.
+    </p>
     <p>
+        To be honest, I never noticed any significant performance issues doing this; as you can imagine,
+        the image size required to fill a terminal screen is significantly smaller than that of a normal
+        window. However, I knew I could easily perform the dithering on the GPU as a post-processing
+        effect, so I eventually wrote a shader to do that. In combination with another effect I used to
+        add outlines to objects, I was able to significantly improve the visual fidelity of the
+        experience. A good example of where the renderer was at until about a week ago can be seen in
+        <a href="https://www.youtube.com/watch?v=BNgteRpLAP0" target="_blank">this video</a>.
+    </p>
+    <p>
+        Until now I hadn't really considered moving the text conversion to the GPU. I mean, <i>G</i>PU is
+        for graphics, right? I just copied the <i>entire framebuffer</i> back onto the CPU after dithering
+        and used the same sequential conversion algorithm. Then I had an idea that would drastically
+        reduce the amount of copying necessary.
+    </p>
+</section>
+<section class="text-section">
+    <h2>Compute post-processing</h2>
+    <p>
+        What if, instead of extracting and copying the framebuffer every single frame, we "rendered" the
+        text on the GPU and read <i>that</i> back instead? Assuming each pixel in a texture is 32 bits
+        (RGBA8), and knowing that each Braille character covers a block of 8 pixels, could we not
+        theoretically shave off <i>at least</i> 7/8 of the bytes copied?
+    </p>
+    <p>
+        As it turns out, it's remarkably easy to do. I'm using the
+        <a href="https://bevy.org" target="_blank">Bevy engine</a>, and hooking a compute node into my
+        existing post-processing render pipeline worked right out of the box. I allocated a storage buffer
+        large enough to hold the necessary number of characters, read it back each frame, and dumped the
+        contents into the terminal.
+    </p>
+    <p>
+        I used UTF-32 encoding for the storage buffer because I knew I could easily convert a "wide
+        string" into UTF-8 before printing it, and 32 bits gives each workgroup in the shader a
+        consistently sized slot to fill, versus a variable-length encoding like UTF-8. Although now that I
+        think about it, I could probably switch to UTF-16: all the Braille characters can be represented
+        in 2 bytes, so the text would be half the size of the UTF-32 version, which is half empty bytes
+        anyway.
+    </p>
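For scale: at RGBA8, one Braille cell's 8 pixels are 32 bytes, versus a single 4-byte UTF-32 code unit, hence the 7/8 figure above. The CPU side of the readback conversion might look like this sketch (`utf32_to_utf8` is a hypothetical helper; the Bevy buffer-mapping that produces `readback` is omitted):

```rust
/// Turn the UTF-32 code units read back from the GPU storage buffer into
/// a UTF-8 string for the terminal. Each `u32` the compute shader wrote
/// is one Braille character's code point.
fn utf32_to_utf8(readback: &[u32]) -> String {
    readback
        .iter()
        .map(|&cp| char::from_u32(cp).unwrap_or('\u{2800}')) // blank Braille cell on bad data
        .collect()
}
```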
+    <p>
+        Okay, so I went and did that, and it seems to work great. Wow. This little side quest has been
+        part of my broader efforts to revive a project I spent a lot of time on. I'm taking the
+        opportunity to really dig in and rework some of the stuff I'm not totally happy with, so there
+        might be quite a few of this kind of post in the near future. Stay tuned.
     </p>
 </section>
 </article>
 </body>
-</html>
\ No newline at end of file
+</html>