EXPI
EXPI lives in the Brotli dictionary. RIP JPEG decoder. RIP Compo stream.
Assembly has been going on for over 30 years. It is the demoscene event with the most anticipated 1kb, 4kb and full size demo competitions. I am so happy to have joined the competition with Veikko "pestis" Sariola, winning the 1kb competition and earning the second highest score of the entire event, only behind the phenomenal demo from Andromeda Software Development.
Exploring JPEG artifacts, Brotli, and a beautiful synth
EXPI started from the idea of exploring JPEG artifacts, with a first prototype back in March 2023. While it worked perfectly, life happened, and it is only when Veikko "pestis" Sariola offered to team up that things went to the next level and we made EXPI in just 10 days.
Pestis delivered a beautiful and moody soundtrack, with the cleanest sound I've heard in 1kb. He used his experience developing pakettic, the best packer for TIC-80, to write a custom JavaScript parser and minifier, and started diving into the default dictionary of Brotli.
Working with more tooling was a different experience for me, but it felt much smoother than M22. It also gave us many ideas to develop tools dedicated to 1k and 4k intros using Brotli.
HD video capture
Here's an HD video capture to watch EXPI in the best possible way, in case your current web browser or setup isn't suitable.
Technical breakdown
Let's dive into a technical breakdown of EXPI, but don't hesitate to reach out if you are curious about anything or see some gaps.
Source code
The entire source code is 5562 bytes long and compresses down to 1024 bytes using our custom minifier and Brotli-11.
Click<canvas id=c>
<script>
// EXPI by Mathieu 'p01' Henri + Veikko 'pestis' Sariola
// 1kb intro for Assembly 2023
//
// EXPI lives in the Brotli dictionary. RIP Jpeg decoder. RIP Compo live stream.
//
// This file compresses down to 1024 bytes using a custom made packer/minifier + Brotli-11
onclick = d => {
b = c.getContext`2d`;
S = Math.sin;
min = Math.min;
P = Math.PI;
A = new AudioContext;
// A single channel buffer with chunks of 2048 samples (it must be a power of 2), and 1024 can be tight
// The sampleFrequency depends on the browser and audio drivers, but it's typically 44100 or 48000Hz
a = A.createScriptProcessor(2048, onclick = delayIndex = T = t = 0, 1);
a.connect(A.destination);
// Abuse the audioprocess to update the audio buffer, and the visuals
// This gives a funky framerate due to the number of samples (1024, 2048, ...) and sampleRate (44100, 48000)
// But that's what I've used for a while. It works fine and no one complained :p
a.onaudioprocess = d => {
// clear and reset the canvas
c.width = 1024;
c.height = 576;
d = d.outputBuffer.getChannelData(0);
M = min(1, .1 + S(min(1, t / 128) * P) * 4); // Master volume
// πΉπΊπ»π Sound Synth
for (i = 0; i < 2048; i++) {
b[i] = syncsSum = 0; // b is the canvas, but we reuse it as an array to store the envelopes of music channels for syncs
part = t / 16;
y = A[++delayIndex] || 0; // y = signal output, seeded with the delayed signal. A is the AudioContext, but we reuse it as an array to store the delay buffer
a = min(part, 7) & 7;
// π» The additive synthesizer
for (j = "20020002"[a]; j < "41446664"[a]; j++) { // only some of the channels are active, depending on the part
r = part * (1 << 5 - j); // r = row within pattern
v = r % 1; // v = position within row
r = part * (132 << j) * P * [10, 0, 15, 10, 18, 10, 15, 20][r & 7]; // r = frequency
syncsSum += b[j] = v = r && 3 ** -(.006 / v + j / 2 + v * 2); // b[j] = v = channel envelope
for (f = 1; f < 7; f++)
y += S(part * f / 8) * S(r * f) * v / 3 // S(r * f) is the "partial", S(part * f / 8) is the slowly changing weight of the partial
}
// π₯ Drums
r = t * 8 + 1; // r = row within pattern
v = r % 1; // v = position within row
j = (r & 15) ** "14212421"[a] % "19939971"[a]; // j = what drum to play, 0 = no drum 1 = kick 2+ = noisy snare/hihats
r = 3 ** -(.006 / v + j / 4 + v * j); // r = envelope
syncsSum += b6 = j == 1 && r; // store the kick drum envelope to b6 for syncs
y += S(256 * j ** 16 * v ** .1) * r; // add the kick drum to signal output
A[delayIndex %= A.sampleRate / 2] = y / 3; // store the signal to the delay buffer
d[i] = y * .6 * M; // store the signal into "d" the output buffer and move "t" forward
t += 1 / A.sampleRate
}
// ββ Horizon
b.strokeStyle = "#fc9";
b.translate(512, 288);
for (i = r = 8 + part; i < 64; i++) {
b.beginPath();
b.lineWidth = min(part, 1) ** 8 * r / 1024;
// Draw a 1x or 33x wide ellipse to get a circle or something akin to horizontal lines
b.ellipse(0, 0, (t & 32) * r + r, r, 0, 0, P * 2);
b.stroke();
r += r / 8
}
zzzz = min(4, t / 4) + syncsSum ** 4 * 2;
// π₯¨ Twister
for (i = 0; i < 2; i += 1 / 256) {
a = t + S(i * P + S(i * P + S(t)));
b.rotate(P / 256);
for (v = j = 0; j <= 4; j++) {
f = v;
y = S(a + P / 2 * j);
v = min(part - 1, M) * (t / 4 % 2 + 1) * (32 + zzzz + 8 * S(a + P * S(t / 3)) - min(-M, 8 * S(a) - 8 * b6) * y);
// Do we need to draw this edge ?
if (v * j > 0 & v > f) {
b.fillStyle = `hsl(${i * 180 + j * 90},50%,${48 + 48 * y}%`;
b.fillRect(0, f, 2, v - f)
}
}
b.fillStyle = b.shadowColor = "#fc9";
b.fillRect(0, S(i * 64) * (512 + t), 2, 2) // stars
}
// π Zoom blur the visuals
b.filter = "blur(10px";
b.globalAlpha = .2;
r = 1024;
b.drawImage(c, -r, -r / 2, r * 2, r);
b.globalAlpha = .1;
r = r * 4;
b.drawImage(c, -r, -r / 2, r * 2, r);
b.filter = "none";
// ππ Tiny EXPI text
b.globalAlpha = b[1] + b[2];
for (i = 0; i < 16; i++)
b.fillText("EXPI"[t / 2 & 3], i * 200 - 512, 0);
// Β©οΈ End credits
b.globalAlpha = min(1, t / 128) ** 32;
b.fillText("p01+pestis", -t, 8);
b.font = "900 30px sans-serif";
b.fillText("EXPI", -t, 0);
// ππ Big EXPI text
b.globalAlpha = b[4];
b.font = "900 250px sans-serif";
for (i = 0; i < 16; i++)
b.fillText("EXPIEXPI", i * 200 - t * 16 - 800, i * 200 - t * 16);
// πΊ Grab jpeg
if (t > T)
T++,
jpg = c.toDataURL("image/jpeg").split("");
// βοΈ Sun
b.beginPath(
b.shadowBlur = 30);
b.arc(0, 0, 8 * zzzz, b.globalAlpha = 1, 8);
b.fill();
// π₯ Glitch the jpg
if (part > 3 & syncsSum > .1)
jpg[Math.random() * jpg.length | 0] = 0;
// π₯οΈβ«βͺ Display the jpg, sometime in grayscale
c.style = `position:fixed;top:0px;left:0px; height:100%;width:100%;filter:grayscale(${(part - 1) % 3 >> 1}); background:radial-gradient(#303,#0004),url(${jpg.join("")})50%/90%`
}
}
</script>
That's it.
You see, no external libraries or resources.
Copy and paste this source code into a new file or even some web playground if you want to see for yourself.
Visuals: Horizon, Twister, zoom blur and texts
To simplify the code, all the visuals that get blurred are rendered in polar coordinates. Using the 2D Canvas API, we move to the center using b.translate(512, 288), then slowly rotate using b.rotate(P / 256), where P is PI, to draw each element at an increasing angle until getting back to the origin.
Horizon
The horizon is a list of exponentially growing ellipses that are either 1x or 33x wide: the 1x wide ellipses produce a "tunnel" vision made of circles, while the 33x wide ellipses produce a "plane" horizon of near-horizontal lines.
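As a standalone sketch of the same trick (the canvas lookup and the wide flag are illustrative; the numbers mirror the intro):
// Standalone sketch of the horizon. Exponentially growing radii, drawn either
// as circles or as 33x wide ellipses (EXPI flips between the two via (t & 32)).
const b = document.querySelector("canvas").getContext("2d");
b.canvas.width = 1024;
b.canvas.height = 576;
b.translate(512, 288);                  // move to the center of the screen
b.strokeStyle = "#fc9";
const wide = true;                      // illustrative flag, toggled by time in EXPI
for (let r = 8; r < 4096; r += r / 8) { // exponentially growing radius
  b.beginPath();
  b.ellipse(0, 0, wide ? r * 33 : r, r, 0, 0, Math.PI * 2);
  b.stroke();
}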
Twister and stars
Interestingly, the twister was a placeholder effect from the initial prototype, but the various experiments with other visuals just never sat right.
The dimensions of the twister follow the master and current volume of the music, and it spins according to some channels of the music.
As everything is drawn in polar coordinates, we draw one "slice" of the twister at a time, and go round the inner circle, comparing whether the current edge is facing the camera or not to determine if we need to draw it, using if (v * j > 0 & v > f) { where j is the index of the vertex, and v and f are the current and previous vertex respectively. If you remember, this is exactly the same idea as MINAMI DISTRICT.
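To make the test concrete, here is a minimal sketch of one slice, with illustrative names (prev/curr instead of f/v) and the radius passed in rather than derived from the music:
// One slice of a square twister. A face between two consecutive corners is
// drawn only when it faces the camera, i.e. when its projected end is further
// out than its projected start.
function drawTwisterSlice(b, a, radius) {
  let prev = 0;
  for (let j = 0; j <= 4; j++) {
    const y = Math.sin(a + Math.PI / 2 * j); // projected corner j, in -1..1
    const curr = radius * y;
    if (j > 0 && curr > 0 && curr > prev) {  // front-facing, outward half only
      b.fillStyle = `hsl(${j * 90}, 50%, ${50 + 40 * y}%)`;
      b.fillRect(0, prev, 2, curr - prev);   // a 2px wide strip along the current ray
    }
    prev = curr;
  }
}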
After drawing the current slice of the twister, we draw a single star at a pseudo-random position along the current ray/angle, using b.fillRect(0, S(i * 64) * (512 + t), 2, 2).
Zoom blur
To give more depth to the visuals, we apply a zoom blur: using b.filter = "blur(10px" and b.globalAlpha = .2, a 10px blur and 0.2 opacity, we copy the canvas twice onto itself at different sizes. Note that the closing ) in the filter is not required, so we skipped it to save 1 byte.
This simple zoom blur adds so much to the visuals.
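Isolated as a helper, and with the filter's closing parenthesis restored, the feedback looks like this (same numbers as the intro; it assumes the context is already translated to the center of a 1024x576 canvas):
// The zoom blur feedback: draw the canvas onto itself, blurred, at growing scales.
function zoomBlur(b) {
  b.filter = "blur(10px)";
  b.globalAlpha = 0.2;
  b.drawImage(b.canvas, -1024, -512, 2048, 1024);  // roughly 2x scale copy
  b.globalAlpha = 0.1;
  b.drawImage(b.canvas, -4096, -2048, 8192, 4096); // 8x scale copy, long streaks
  b.filter = "none";
  b.globalAlpha = 1;
}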
Texts
After that, we add some layers of text and glyphs that follow the time or music channels, like the individual letters of the word EXPI, or the big EXPI that scrolls in the background in the last parts of the intro.
And of course the credits and end title.
These do take some precious bytes but add so much to the overall style, and are something that is almost never seen in Windows, Mac and Linux 1kb intros.
Sunburn
The sunburn is but a glowing circle using shadowBlur and following the beat.
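In isolation it boils down to something like this (the pulsing radius is passed in; in EXPI it is 8 * zzzz, which follows the music):
// The sun, isolated: a filled circle whose glow comes entirely from shadowBlur.
function drawSun(b, radius) {
  b.beginPath();
  b.shadowColor = "#fc9";
  b.shadowBlur = 30;
  b.globalAlpha = 1;
  b.arc(0, 0, radius, 0, Math.PI * 2);
  b.fill();
}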
JPEG glitches
JPEG glitches happen when a browser or image viewer tries to decode a broken JPEG image and has incomplete information about how to decode the following portion of the image. It typically looks like strange colorful bands in the image, parts of the image being shifted a few blocks away, loss of chromatic information, or some portions of the image being reduced to a very harsh approximation.
Several full size demos have replicated the look of JPEG glitches but to my knowledge none have done that using real JPEG codecs, or even done that on the web.
Interestingly, it is rather trivial to achieve on the web as web browsers are built to be resilient to all kinds of broken resources. EXPI highlights the differences between the image codecs of browsers and how they deal with corrupted images, in a more or less beautiful/artsy manner. For a fun exercise, you should really check EXPI in different browsers.
The Canvas API comes with the toDataURL() method to encode a 2D, or 3D, canvas to a data: URL in PNG or JPEG.
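For reference, the method defaults to PNG, and takes an optional MIME type and quality (EXPI passes the MIME type only and keeps the default JPEG quality):
const canvas = document.querySelector("canvas");
const png = canvas.toDataURL();                  // "data:image/png;base64,..."
const jpg = canvas.toDataURL("image/jpeg", 0.8); // "data:image/jpeg;base64,..."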
In EXPI, we take a JPG screenshot of EXPI itself every second using:
if (t > T)
T++,
jpg = c.toDataURL("image/jpeg").split("");
Where t and T are the time in seconds and the time to take the next JPG screenshot. So here we get an array with every character that makes up the data: URL of the most recent JPG screenshot of EXPI.
Later on, we glitch/break this array by setting a random character in that array to 0, based on the time and music:
if (part > 3 & syncsSum > .1)
jpg[Math.random() * jpg.length | 0] = 0;
And finally we set the style of our Canvas to apply the broken JPG screenshot as background, add a vignetting radial gradient, size the canvas and apply some optional filter for style reasons using:
c.style = `position:fixed;top:0px;left:0px; height:100%;width:100%;filter:grayscale(${(part - 1) % 3 >> 1}); background:radial-gradient(#303,#0004),url(${jpg.join("")})50%/90%`
Also note how we center the broken JPG using 50%/, but size it at 90% rather than 100% to get these nice repeating effects in the edges and corners of the screen.
Sound Synthesizer
Kudos to Veikko "pestis" Sariola for developing the soundtrack for EXPI, and producing some of the cleanest and richest sound I have heard in JavaScript 1kb productions.
The melodic instrument synthesizer is based on the Lovebyte 2023 bytebeat entry Etude de la synthése additive, and the ideas were more thoroughly explained in the accompanying seminar.
The song was composed in the dollchan bytebeat tool.
In EXPI, there is an additive synthesizer, where each note is synthesized by adding together multiple sine waves, each running at integer multiples of the fundamental frequency, but with slightly different weights varying slowly as the song progresses. Additionally, each note has an envelope, with relatively sharp attack and slow decay.
We had to limit ourselves to the first 6 partials because of the computational load: the synth peaked at ~35% of the available time budget we had per frame, and this would scale almost linearly with the number of partials, as that inner loop was the hottest loop in the synth.
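Stripped of the song logic, one voice looks roughly like this (illustrative names; the real code inlines everything into the audio loop):
// One additive voice: each partial is a sine at an integer multiple of the
// fundamental, with a slowly drifting weight, all shaped by the note envelope.
function additiveVoice(phase, songTime, envelope, partials = 6) {
  let sample = 0;
  for (let f = 1; f <= partials; f++) {
    const weight = Math.sin(songTime * f / 8); // slowly changing weight of partial f
    sample += weight * Math.sin(phase * f) * envelope / 3;
  }
  return sample;
}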
There are 6 channels, each playing the exact same melody, which is easiest to hear in the bassline. The bassline (channel 1) plays it as quarter notes. Channel 2 plays the same melody, but one octave higher and as half notes. Channel 3 again one octave higher and as whole notes, etc.
Given that all channels are playing the same melody at different speeds, pretty much all the combinations of the different notes in the melody will appear at some point, so all notes had to harmonize together well. Therefore, the melody was kept very simple: the whole song uses only e, b, d and E (one octave up). This suggests some kind of bluesy sound with the minor 7th, but we never give the third, so it's not clear if it's a minor or a major scale.
To give the song a bit of structure, we vary which channels are active by tabulating the lowermost and uppermost active channel for each part. When the song starts, only channels 3-4 are active. In the second part, only the bass is playing (channels 1 to 2). In the third part, channels 1-4 are active, etc.
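In the source this is just two strings indexed by the part number; as a readable sketch (the strings are lifted from the source, and j here is the 0-indexed channel):
const LOWEST  = "20020002"; // lowest active channel, per part
const HIGHEST = "41446664"; // highest active channel (exclusive), per part
function activeChannels(part) {
  const a = Math.min(part, 7) & 7; // part = t / 16, clamped to the 8 tabulated parts
  const channels = [];
  for (let j = +LOWEST[a]; j < +HIGHEST[a]; j++) channels.push(j);
  return channels;
}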
The drum synthesizer started with getting the kick going. The kick is synthesized with the classic sin(C * v ** p) function, where v is the time within the kick (e.g. from 0 to 1, where 1 is the end of the kick), p is typically in the range of .05 to .2, and C is a constant controlling the overall pitch of the kick. The effect of the power is a sharp pitch drop in the kick, exactly what we want. On top of the sine, an envelope was added, so there's a bit of attack and decay.
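In isolation, and with a simplified envelope, the kick is just:
// The kick: a sine whose phase is v ** p, so the pitch starts high and drops
// sharply as v goes from 0 to 1. C and p match the values the intro ends up
// using; the envelope here is a simplified version.
function kick(v, C = 256, p = 0.1) {
  const envelope = 3 ** -(0.006 / v + v); // short attack, exponential decay
  return Math.sin(C * v ** p) * envelope;
}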
For "hihats" and "snares", we want something more noisy and reused (abused) the same equation: if you multiply C
by a large factor, the whole sinewave just becomes almost like a random number generator and thus noise. We added sin(C * j ** 16 * v ** p)
to the equation.
If j == 0, the whole equation just gives 0, i.e. silence. With j == 1, it's the kick equation, and with j >= 2 it's just noise.
All that remained was to make the envelope shorter as j increases, so j == 2 is longer and more like a snare, while j == 5 is already very short and more like a hihat.
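Putting the pieces together, the whole drum channel reduces to one function of j and v, lifted almost verbatim from the source:
// j picks the drum, j ** 16 turns the sine into noise for j >= 2, and the
// j / 4 + v * j terms shorten the envelope as j grows.
function drum(j, v) {
  if (j === 0) return 0; // no drum on this row
  const envelope = 3 ** -(0.006 / v + j / 4 + v * j);
  return Math.sin(256 * j ** 16 * v ** 0.1) * envelope;
}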
At this point, we could tabulate different drum loops (values of j) and just choose the drum loop based on the part of the song, i.e. use the classic "orderlist" and "patterns" approach of trackers.
However, to save bytes, we wanted to generate the patterns using an equation, and just tabulate the parameters of the equation depending on the part. The equation was j = (r+1&15) ** a % b, where r is the row within the drum loop (from 0 to 15) and the constants a and b depend on the part of the song.
Then we just listened to different combinations of a and b to find the ones that gave decent sounding drum loops.
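As a sketch, with the two strings lifted from the source, the whole drum sequencing boils down to:
const DRUM_EXP = "14212421"; // exponent a, per part
const DRUM_MOD = "19939971"; // modulus b, per part
function drumForRow(row, part) {
  const a = Math.min(part, 7) & 7;
  // 0 = rest, 1 = kick, 2+ = increasingly short noise (snare / hihat)
  return ((row + 1) & 15) ** +DRUM_EXP[a] % +DRUM_MOD[a];
}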
Taming Brotli with a custom minifier
To avoid spending time on mind-numbing variable name juggling, we wrote a simple packer/minifier that follows the lead of pakettic. It first parses the code looking for variables and replaces them with single-letter ones, ignoring whitespace and comments. Then it juggles these single-letter variables around to find which assignment gives the smallest file size once compressed with Brotli.
The parser did not attempt to produce a full abstract syntax tree, but just detected alphanumeric tokens, property names, strings, comments and template strings. It also had a list of reserved words: every alphanumeric token not in that list was considered a variable name.
After manually adding a bunch of things to the list of reserved words as needed, this solution was good enough to pack the intro without errors.
The variable juggling typically saved about 20 bytes, but things started to get difficult when we were around 1040 bytes. Brotli is more complicated to work with than classic DEFLATE and the results were rather unpredictable: removing a line might increase the file size, while adding whitespace somewhere might decrease it.
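The measuring half of that loop is simple enough; here is a minimal Node.js sketch (the actual renaming/permutation search lives in pestis' tool and is not shown):
// Compress a candidate source with Brotli at quality 11 and keep the smallest one.
const { brotliCompressSync, constants } = require("zlib");

const brotliSize = (source) =>
  brotliCompressSync(Buffer.from(source), {
    params: { [constants.BROTLI_PARAM_QUALITY]: 11 },
  }).length;

const smallest = (candidates) =>
  candidates.reduce((a, b) => (brotliSize(a) <= brotliSize(b) ? a : b));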
The solution to make more sense of the inner life of Brotli came when we found the Brotli dictionary, the default list of tokens that the compressed stream can reference before even looking at the input data.
Carefully looking for close matches in the dictionary and then editing the code to match saved another 10-20 bytes.
For example, after minification, our code had a snippet =new, but only = new is in the dictionary, so we hacked the parser to replace any instance of =new with = new to save a few bytes.
Similarly, the minifier used to output }</script> but only }\n</script>\n is in the dictionary, so we replaced the minified output with the exact match, and saved more bytes.
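A quick way to run such searches yourself is to grab the raw dictionary bytes (the reference implementation ships them, e.g. as a dictionary.bin file) and look for substrings; a minimal Node.js sketch:
// Load a local copy of the raw Brotli dictionary and check snippets against it.
// latin1 keeps a 1:1 byte-to-character mapping.
const { readFileSync } = require("fs");
const dictionary = readFileSync("dictionary.bin").toString("latin1");

const inDictionary = (snippet) => dictionary.includes(snippet);

console.log(inDictionary("=new"));           // false: the minifier must emit "= new"
console.log(inDictionary("= new"));          // true
console.log(inDictionary("}\n</script>\n")); // true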
Finally, for typographical reasons, we wanted the intro name to be in ALL CAPS, but also wanted to save bytes, so we searched the dictionary for ALL CAPS words. There weren't too many: HTML, JSON, FREE, POST, ISBN, LINE, EXPI, HTTP, BODY, HEAD, START, BEGIN, DATA[, and FFFFFF. So we went with EXPI.
Final notes
Again, it's been a fantastic experience to work with Veikko "pestis" Sariola on EXPI and see the reactions during and after the competition from folks who could not believe that what they were seeing and hearing came from just 1024 bytes of HTML, JavaScript and CSS.
The next amazing feedback came from the results of the competition. Seeing EXPI win the 1kb competition, with the second highest score of the entire Assembly competition, only behind the phenomenal full size demo from Andromeda Software Development, was just unbelievable.
Thank you. THANK YOU. THANK YOU.
See you next year at Assembly 2024.
Additional links
- HD YouTube capture
- Full archive submitted at the Assembly 2023
- EXPI on Pouet.net, the demoscene portal, where comments and a thumb up, or down, are very welcome to help us learn how to improve for next year.