In this post I talk about how I set up a stats visualization page for my Rockbox MP3 player.

A static site generation experiment

Preamble: Digital Sovereignty & Rockbox

I've been building up a pretty sizeable collection of digital music over the last couple of years. I think there's a lot of value in owning the music I pay for and being able to choose how I listen to it. Purchasing music also allows me to support artists in a more direct and substantial way than the fractions of a cent they get from streaming services, but that's more of a happy consequence than some moral obligation I feel.
Over the years, I've enjoyed listening to my music in a variety of ways. For years I kept all of my music files on all of my devices and used various local music clients depending on the platform, most notably mpd and ncmpcpp on Linux. Eventually, as I charged headlong into the glorious world of self-hosting, I began using a central Jellyfin media server that I stream music and video from. It's super convenient, and works on all of my devices (including my TV!).
My media server is great, and it's been the primary way I listen to music for a while now. But it has limitations. For example, I don't expose my media server to the internet, so I'm unable to stream from it while I'm out and about. And even if I could, the bandwidth requirements would be pretty high. I figured I would need a dedicated music player if I wanted to take my music library on the go, and settled on the HIFI Walker H2 after reading some online recommendations. The ability to install Rockbox, an open-source firmware, was a big factor in my decision. I couldn't tell you how the device works out of the box, since I flashed the firmware pretty much immediately after I got it, but I've been super impressed with how it performs while running Rockbox.
I'm using a modified version of the InfoMatrix-v2 theme, which looks great.
Rockbox comes with codecs for many common audio formats, including FLAC and MP3. The device boots extremely quickly, and the interface is snappy. Virtually every aspect of the user experience is tweakable and customizable to a crazy degree. I've even begun listening to music on my player at home, since a device built specifically for the purpose provides less distraction while I'm trying to be productive.
All this to say I'm pretty much sold on Rockbox. But there are certain things I still miss from my days of using popular services like Spotify, with their fancy APIs and data tracking. Things like Spotify Wrapped or third-party apps for visualizing playback statistics are a fun way to see what my listening history looks like, and could potentially help me find more music that I'd enjoy. This is why, when I noticed that Rockbox has a playback logging feature, a little lightbulb lit up over my head.
Generating and Parsing Logs

The logging feature can be accessed through the settings menu.
Rockbox has a feature that logs playback information to a text file. It can be enabled by setting Playback Settings > Logging to "On". With this setting enabled, a new line gets appended to the .rockbox/playback.log file every time you play a track, containing info about what you played and when.

The logging feature is already used by the Last.fm scrobbler plugin that comes preloaded with Rockbox, which is probably the simplest way to get insights into your playback. However, I personally want to avoid relying on third-party services as much as possible, because it's more fun.
If I take a look at a logfile generated after a bit of listening, I'll see that I've wound up with a series of lines that each look something like this:
1758478258:336689:336718:/<microSD0>/Music/This Is The Glasshouse/This Is The Glasshouse - 867/This Is The Glasshouse - 867 - 01 Streetlight By Streetlight.flac
An example of a log entry for "Streetlight By Streetlight" by This Is The Glasshouse.
I wasn't really able to find any information online about the format of these logs, but they appear to be simple enough to figure out. From what I can tell, each event is broken up into 4 pieces:
Timestamp: The number of seconds since the UNIX epoch.

Playback Duration: The amount of the track that was played, in milliseconds.

Total Track Length: The length of the played track, in milliseconds.

File Path: An absolute path to the file containing the track on the player's filesystem.
All of this is enough to know what I was listening to and when. I can also use the file path to read the file's audio tags, which help glean even more information about my listening habits.
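To sanity-check the format, the four fields can be pulled apart with nothing but the standard library. This is just a sketch of the format as I've described it above; the struct and field names are my own, not anything Rockbox defines.

```rust
/// A single playback event, reconstructed from one line of playback.log.
/// Field names are my own; Rockbox doesn't document this format.
#[derive(Debug, PartialEq)]
struct PlaybackEvent {
    timestamp_secs: i64, // seconds since the UNIX epoch
    played_ms: i64,      // how much of the track was played
    length_ms: i64,      // total track length
    path: String,        // absolute path on the player
}

fn parse_line(line: &str) -> Option<PlaybackEvent> {
    // The path could itself contain ':', so only split off the first three fields.
    let mut parts = line.splitn(4, ':');
    Some(PlaybackEvent {
        timestamp_secs: parts.next()?.parse().ok()?,
        played_ms: parts.next()?.parse().ok()?,
        length_ms: parts.next()?.parse().ok()?,
        path: parts.next()?.to_string(),
    })
}
```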
Now that I have this information and know how to interpret it, I'm ready to start processing it!
Analyzing Playback History

To get some useful information out of my playback history, I figured it was a good idea to start by building a database. I created a SQLite database with the following tables:
songs

  id        i64       PK
  title     String
  artists   JSON
  album_id  i64?
  genre     String?

albums

  id        i64       PK
  title     String
  artist    String
  cover_art Blob?

history

  id        i64       PK
  timestamp Datetime
  duration  i64
  song_id   i64
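Sketched as SQLite DDL, the schema looks roughly like this (the exact constraints here are my illustration, not necessarily the statements I ran):

```sql
CREATE TABLE IF NOT EXISTS albums (
    id        INTEGER PRIMARY KEY,
    title     TEXT NOT NULL,
    artist    TEXT NOT NULL,
    cover_art BLOB
);

CREATE TABLE IF NOT EXISTS songs (
    id       INTEGER PRIMARY KEY,
    title    TEXT NOT NULL,
    artists  TEXT NOT NULL, -- JSON array of artist names
    album_id INTEGER REFERENCES albums(id),
    genre    TEXT
);

CREATE TABLE IF NOT EXISTS history (
    id        INTEGER PRIMARY KEY,
    timestamp TEXT NOT NULL, -- ISO-8601 datetime
    duration  INTEGER NOT NULL,
    song_id   INTEGER NOT NULL REFERENCES songs(id)
);
```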
I can add more columns later, but this is a good place to start.
Now, as I read through the logfile line by line, I can check whether each album already exists before inserting it into the database:
for line in log_file.lines().flatten() {
    // Skip comments
    if line.starts_with('#') {
        continue;
    }
    let chunks = line.split(':').collect::<Vec<_>>();

    let timestamp = DateTime::from_timestamp_secs(
        chunks[0].parse::<i64>().context("Failed to parse timestamp")?,
    )
    .context("Failed to convert timestamp")?;

    // Load tags from the file on the device, stripping the leading
    // "/<microSD0>" volume prefix from the logged path
    let file_path = chunks[chunks.len() - 1][1..]
        .split_once('/')
        .context("Missing file")?
        .1;
    let tags = Tag::new()
        .read_from_path(args.mount_point.join(file_path))
        .context("Failed to read audio tags")?;

    //...
}
if let Some(existing_album) =
    sqlx::query("SELECT id FROM albums WHERE title=$1 AND artist=$2")
        .bind(album_title)
        .bind(album_artist)
        .fetch_optional(&mut *db)
        .await
        .context("Failed to execute query to find existing album")?
{
    let album_id: i64 = existing_album.get("id");
    info!("Album already exists, id {album_id}");
    //...
} else {
    info!("Inserting new album: {album_title} by {album_artist}");
    //...
    let result = sqlx::query(
        "INSERT INTO albums (title, artist, cover_art) VALUES ($1, $2, $3);",
    )
    .bind(album_title)
    .bind(album_artist)
    .bind(cover)
    .execute(&mut *db)
    .await
    .context("Failed to execute query to insert album into database")?;

    //...
}
Checking for an album with matching artist and title before creating a new row in the database.
I did something similar with the songs and history tables, basically building up a cache of history information and skipping anything that's already in the database on repeat runs.

With this database constructed, it's pretty easy to get a bunch of different information about my listening. For example (forgive me if my SQL skills are kind of ass lol):
SELECT
    songs.title AS song_title,
    songs.artists AS song_artists,
    songs.genre AS song_genre,
    albums.title AS album_title,
    albums.artist AS album_artist,
    history.timestamp AS timestamp,
    history.duration AS duration
FROM history
JOIN songs ON songs.id = history.song_id
JOIN albums ON albums.id = songs.album_id
ORDER BY timestamp DESC;

Querying for a list of each history entry along with its track metadata, sorted from most to least recent.
SELECT
    songs.genre,
    SUM(history.duration) AS total_duration
FROM history
JOIN songs ON history.song_id = songs.id
GROUP BY genre
ORDER BY total_duration DESC
LIMIT 10;

Querying for the top 10 most-listened genres by playtime.
It's all well and good to be able to view this information using a database client, but it would be really cool if I could visualize this data somehow.
Visualizing this Data Somehow

I wanted to make this data available on my website for people to view, and for a bunch of mostly trivial reasons I won't get into here, I have a couple of requirements for pages on this site:
Pages need to be static.

Pages need to be JavaScript-free.

This means any chart rendering needs to happen at build time, before deploying. I don't currently use a static site generator for my site (just for fun), so I'm basically going to need to write one specifically to generate this page.
I won't get too deep into the specifics of how I queried the database and generated each visualization on the page, but I can explain the visualizations I created using the queries from the previous section. For the listening history I wanted to generate a table displaying the information. To accomplish this, I first used a combination of sqlx's ability to convert a row into a struct and serde to serialize the rows as JSON values.
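The shape of that struct is roughly the following. This is a sketch: in the real code it would additionally derive sqlx::FromRow (for the row-to-struct conversion) and serde's Serialize/Deserialize (for JSON), and the field names here simply mirror the column aliases from the history query in the previous section.

```rust
// Sketch of the history-row struct. The real version would also carry
// #[derive(sqlx::FromRow, serde::Serialize, serde::Deserialize)];
// those are omitted here so the snippet stands alone.
#[derive(Debug, Clone, PartialEq)]
struct HistoryEntry {
    song_title: String,
    song_artists: String, // JSON array of artist names
    song_genre: Option<String>,
    album_title: String,
    album_artist: String,
    timestamp: String, // ISO-8601 datetime
    duration: i64,     // milliseconds played
}
```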
Struct definition for a history entry, allowing conversion from a sqlx row and de/serialization from/to JSON.
To keep the generation as painless as possible, I decided to use the Tera template engine, which lets me define a template HTML file and substitute in values from a context I define before rendering. In the case of the table, I can just generate a <tr> matching the data for each item:
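I haven't reproduced my exact template here, but a minimal version of the idea looks like this. The macro name, the entries argument, and the fields are assumptions; the fields match the column aliases from the queries earlier.

```html
{# A minimal sketch of a Tera macro that renders history rows as a table. #}
{% macro history_table(entries) %}
<table>
  <tr><th>Song</th><th>Album</th><th>Played at</th></tr>
  {% for entry in entries %}
  <tr>
    <td>{{ entry.song_title }}</td>
    <td>{{ entry.album_title }} by {{ entry.album_artist }}</td>
    <td>{{ entry.timestamp }}</td>
  </tr>
  {% endfor %}
</table>
{% endmacro history_table %}
```

A macro defined in the same file can then be invoked with `{{ self::history_table(entries=history) }}`, where history is a list in the render context.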
A Tera macro for generating a table from a list of playback history items. I used a macro so I can re-use it later if I want to add time range views (last month, year, etc.).
I wrote similar macros for each of the visualizations I wanted to create. Most are easy, but for my top 10 genres I wanted to display a pie chart. I found a pretty decent data visualization crate called charming that's able to render to HTML; however, the output contains JavaScript, so it's a no-go for me. Luckily, it can also render to an SVG, which I can embed nicely within the page.
Here's one I generated just now.
And that's pretty much all there is to it! The finished thing can be found here.
Terminal Renderer Mk. II - Rendering to Text with Compute

October 2, 2025

This week I brought my terminal renderer to the next level by performing text rendering on the GPU.
The Stanford Dragon, outlined and rendered as Braille characters in a terminal emulator. Full video
Context

Unicode Braille
I first messed around with rendering images to the terminal with Braille characters in like 2022, I think? I wrote a simple CLI tool that applied a threshold to an input image and output the result as Braille characters. Here's a recording I took back when I did it.
0  3
1  4
2  5
6  7

The corresponding bit position for each Braille dot.
This effect is pretty cool, and it was pretty easy to implement as well. The trick lies in how the Unicode Braille block is laid out. The 8-dot Braille patterns add up to exactly 256 combinations, the perfect amount to fit in the range between 0x2800 (⠀) and 0x28FF (⣿). In other words, every character within the block can be represented by changing the value of a single byte.
The lowest 6 bits of the pattern map onto a 6-dot Braille pattern. However, for historical reasons the 8-dot values were tacked on after the fact, which adds a slightly annoying mapping to the conversion process. Either way, the conversion stays simple: just read a pixel value, check its brightness, and then use a bitwise operation to set or clear a dot.
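As a concrete sketch of that conversion (my own illustration, not the renderer's actual code): given a 2-wide by 4-tall block of thresholded pixels, the bit layout described above maps each pixel straight to a bit of the offset into the Braille block.

```rust
/// Map a 2x4 block of on/off pixels to a Unicode Braille character.
/// Bit layout per dot, matching the table above:
///   col 0, rows 0..2 -> bits 0..2;  col 1, rows 0..2 -> bits 3..5;
///   bottom row: col 0 -> bit 6, col 1 -> bit 7.
fn braille(block: [[bool; 2]; 4]) -> char {
    let mut byte = 0u32;
    for row in 0..3 {
        if block[row][0] { byte |= 1 << row; }
        if block[row][1] { byte |= 1 << (row + 3); }
    }
    if block[3][0] { byte |= 1 << 6; }
    if block[3][1] { byte |= 1 << 7; }
    // 0x2800 is the base of the Unicode Braille block, so this is always valid.
    char::from_u32(0x2800 + byte).unwrap()
}
```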
Ordered Dithering

Comparing the brightness of a pixel against a constant threshold is a fine way to display black and white images, but it's far from ideal and often results in the loss of a lot of detail from the original image.

From left to right: Original image, threshold, and ordered dither. Wikipedia
By using ordered dithering, we can preserve much more of the subtleties of the original image. While not the "truest" version of dithering possible, ordered dithering (and Bayer dithering in particular) provides a few advantages that make it very well suited to realtime computer graphics:

Each pixel is dithered independently of every other pixel in the image, making it extremely parallelizable and good for shaders.

It's visually stable: changes to one part of the image won't disturb other areas.

It's dead simple.
Feel free to read up on the specifics of threshold maps and stuff, but for the purposes of this little explanation it's enough to know that a threshold map is basically just an n×n matrix of values between 0 and 1; to determine whether a pixel (x, y) is white or black, you check its brightness against the threshold value at (x % n, y % n) in the map.
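For illustration, here's that lookup with the smallest 2x2 Bayer matrix (the actual renderer presumably uses a larger map, but the mechanics are identical):

```rust
// Ordered dithering against a 2x2 Bayer threshold map.
// Entries are the ranks 0..4; adding 0.5 and dividing by 4 normalizes
// them into thresholds strictly between 0 and 1.
const BAYER_2X2: [[f32; 2]; 2] = [[0.0, 2.0], [3.0, 1.0]];

/// Returns true (white) if the pixel's brightness (0.0..=1.0)
/// beats the threshold at (x % 2, y % 2).
fn dither(brightness: f32, x: usize, y: usize) -> bool {
    let threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) / 4.0;
    brightness > threshold
}
```

A flat mid-gray input ends up white at exactly half the positions in each 2x2 tile, which is the checkerboard pattern you'd expect from a 50% dither.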
The old way™

My first attempt at realtime terminal graphics with ordered dithering (I put a video up at the time) ran entirely on the CPU. I pre-calculated the threshold map at the beginning of execution and ran each frame through a sequential function to dither it and convert it to Braille characters.
To be honest, I never noticed any significant performance issues doing this; as you can imagine, the image size required to fill a terminal screen is significantly smaller than a normal window. However, I knew I could easily perform the dithering on the GPU as a post-processing effect, so I eventually wrote a shader to do that. In combination with another effect I used to add outlines to objects, I was able to significantly improve the visual fidelity of the experience. A good example of where the renderer was at until like a week ago can be seen in this video.
Until now I hadn't really considered moving the text conversion to the GPU. I mean, GPU is for graphics, right? I just copied the entire framebuffer back onto the CPU after dithering and used the same sequential conversion algorithm. Then I had an idea that would drastically reduce the amount of copying necessary.
Compute post-processing

What if, instead of extracting and copying the framebuffer every single frame, we "rendered" the text on the GPU and read that back instead? Assuming each pixel in a texture is 32 bits (RGBA8), and knowing that each Braille character covers a block of 8 pixels, could we not theoretically shave off at least 7/8 of the bytes copied?
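To put rough numbers on that, here's the arithmetic for a hypothetical 640x480 framebuffer (my example dimensions, not the renderer's):

```rust
// Bytes read back per frame: the full RGBA8 framebuffer versus one
// 4-byte UTF-32 code unit per 2x4-pixel Braille cell.
fn rgba8_bytes(width: usize, height: usize) -> usize {
    width * height * 4
}

fn utf32_text_bytes(width: usize, height: usize) -> usize {
    (width / 2) * (height / 4) * 4
}
```

At 640x480 that's 1,228,800 bytes of framebuffer versus 153,600 bytes of UTF-32 text: exactly the 8x reduction.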
As it turns out, it's remarkably easy to do. I'm using the Bevy engine, and hooking a compute node into my existing post-processing render pipeline worked right out of the box. I allocated a storage buffer large enough to hold the necessary number of characters, read it back each frame, and dumped the contents into the terminal.
I used UTF-32 encoding for the storage buffer because I knew I could easily convert a "wide string" into UTF-8 before printing it, and 32 bits provides a consistent space to fill for each workgroup in the shader, versus a variable-length encoding like UTF-8. Here's a video of the new renderer working. Although now that I think about it, I could probably switch to UTF-16, since all the Braille characters can be represented in 2 bytes; that would be half the size of the UTF-32 text, which is half empty bytes anyway.
Okay, so I went and tried that, but remembered that shaders only accept 32-bit primitive types, so it doesn't matter anyway. This little side quest has been part of my broader efforts to revive a project I spent a lot of time on. I'm taking the opportunity to really dig in and rework some of the stuff I'm not totally happy with, so there might be quite a few of this kind of post in the near future. Stay tuned.