The previous exploration, Text 1: Rendering and Editing, used a simple pixel buffer for rendering. The primary goal of this exploration is to build a simple GPU-rendered, monospaced text editor and to learn graphics programming. It's in reverse chronological order.
This is the best stopping point for this exploration. Getting here has provided me with hands-on experience in shader programming, UV image mapping, texture rendering, orthographic projection, camera movement, texture atlases, occlusion culling, geometry instancing, font rendering, text manipulation, simple file management, and building a command interface. Each item was discovered at the speed of need. In other words, when looking at my list of requirements and expectations, I chose a balance between the perceived difficulty and logical progression. That said, I didn’t spend too much time overthinking it. The process was all about discovery and finding answers to questions.
You may be wondering if I wrote tests along the way. I didn't. Tests are for solidified code meant for customers. Instead, notes, comments, and simplicity became essential in this recreational project. I can't imagine how challenging this would have been if I also had to redo tests based on false assumptions and decisions made from ignorance. If I were to build a product now, I would write tests with the lessons learned from this experience in mind.
Thanks
The delete command requires the full text and the file name to execute. It's not a permanent feature. In a larger application, I'll move the file to the operating system's trash and have the user perform deletion.

I miss the H and L keys for cursor navigation. It completely disrupts my flow.

Pressing : while in normal mode activates the command line in insert mode. Pressing ESC returns it to normal mode, and pressing ESC again will return you to the document.

Normal Mode
Insert Mode
Press g in major_normal_mode to set the minor_mode to goto, then g again to execute the goto_file_start() command. For goto_last_line(), press ge while also in major_normal_mode.

I'd also like to scroll with J or K. Helix allows for this with the following configuration:

[keys.normal]
S-j = "scroll_up"
S-k = "scroll_down"
# ...
Unwrapped
Line Wrapped
Word Wrapped
ALT+HOME moves the cursor to the first character, ALT+END moves it to the last. HOME and END move the camera to the start and end of the document, respectively.

I generated a large test file with getlorem --units bytes --count 10000000 --swl > 10MB.txt, but it produces a single line, so be sure to select all of the sentence endings (\.) and replace them with new lines (\.\n).

col_count and row_count are just enough to allow glyphs to peek into visibility. Based on the camera's distance from 0, I can offset which rows and columns to extract from the text. To clarify, if the camera moves the distance of a cell along the y axis, then the row_offset is set to 1. Then, all that is left is to iterate from 1 to the number of visible columns or the last line index of the file. This should work for horizontal scrolling as well.

The problem was that each cell had its own buffer, which required iterating through every one of them for re-positioning when the window was resized. Before delving into threads and polling, and recalling Casey Muratori's lectures on optimization, I decided to see how far I could go without increasing the system's complexity. After all, computers today are orders of magnitude faster than what was available in the days of the VT100, so the issue must have been with my implementation.
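The culling math above can be sketched in a few lines. This is Python purely for illustration; the cell size and every name here are mine, not the project's actual code:

```python
# Illustrative sketch of offset-based culling: derive which rows and
# columns of the text to extract from the camera position.
CELL_W, CELL_H = 8, 16  # assumed cell size in world units

def visible_range(camera_x, camera_y, viewport_w, viewport_h,
                  line_count, max_cols):
    """Return (row_offset, col_offset, row_count, col_count)."""
    # Moving the camera one cell height along the y axis bumps the
    # row offset by one, as described above.
    row_offset = int(camera_y // CELL_H)
    col_offset = int(camera_x // CELL_W)
    # The +1 lets partially visible glyphs peek in at the edges.
    row_count = min(viewport_h // CELL_H + 1, line_count - row_offset)
    col_count = min(viewport_w // CELL_W + 1, max_cols - col_offset)
    return row_offset, col_offset, row_count, col_count
```

The same offsets work for horizontal scrolling, since columns are handled symmetrically.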
While learning about instancing, I discovered that there is no need to send the entire 4 by 4 transformation matrix to the GPU. These debug cells only require the x and y scales and translations. So I opted to send just these four values over and build the matrix within the shader. As a test, I then lowered the cell size to 8 by 16 units and, to my surprise, discovered that the application was able to transition from rendering 100 cells to over 60,000 without stuttering.
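For illustration, rebuilding the transform from those four values amounts to placing them into an otherwise-constant matrix. This is a Python sketch of the math the shader would perform, with names of my own choosing:

```python
# Sketch: reconstruct a per-instance transform from four floats
# (x/y scale and x/y translation), as the vertex shader would.
def cell_matrix(sx, sy, tx, ty):
    # Column-major 4x4, matching the usual GLSL mat4 layout.
    return [
        [sx,  0.0, 0.0, 0.0],
        [0.0, sy,  0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [tx,  ty,  0.0, 1.0],
    ]

def transform(m, x, y):
    # Apply the matrix to the point (x, y, 0, 1).
    return (m[0][0] * x + m[3][0], m[1][1] * y + m[3][1])
```

Sending 4 floats per instance instead of 16 cuts the instance buffer to a quarter of its size.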
That was easy, but it's a bit slow. Up until this point, the application rendered large cells, which hid the performance problem that occurs when the cells are set to a reasonable size. What's the deal? It's just rendering UV quads. Along the way, I finally got around to maintaining a consistent cell size across screen DPIs. As mentioned before, macOS screens have twice the DPI of standard monitors, so a box width of 40 units will appear to be 20 units.
While I've learned a great deal to get to this point, I realize that I made a wrong turn back at step 21. The problem lies with the LineLattice. Every line of the document is created and placed into the world even though only a fraction of the lines will ever be visible at once. I should have trusted my instincts and gone with a row count that is dependent on the height of the viewport. With that, I should be able to add extra rows on both the top and bottom of the grid as a buffer to allow for smooth scrolling. Once an off-screen line enters the viewport, I'll then take the furthest row from the opposite side and append it to the side of the scroll direction.
To clarify, let's say the screen supports 20 visible rows. The plan is to append and prepend, for example, 5 to each end, bringing the total row count to 30. 10 rows will be culled from rendering. Scrolling down will cause 1 buffered bottom row to become visible. At which point, I'll take the topmost row and position it at the bottom. The same can be done with the columns.
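The recycling plan above can be sketched with a ring buffer of row ids. This is an illustrative Python sketch using the buffered-row counts from the example; the real rows would be GPU-side buffers:

```python
# Sketch of row recycling for smooth scrolling: a fixed pool of rows
# acts as a ring buffer, so a scroll step moves one row, not all 30.
from collections import deque

def make_rows(visible=20, buffer=5):
    # 20 visible + 5 above + 5 below = 30 rows total.
    return deque(range(visible + 2 * buffer))

def scroll_down(rows):
    # Take the furthest row from the top and append it to the bottom;
    # its cells would then be refilled with the newly visible line.
    recycled = rows.popleft()
    rows.append(recycled)
    return recycled
```

Scrolling up is the mirror image: pop from the bottom and prepend to the top.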
They say a good night's rest is the best debugger. Searching for "NES Tilemaps" on YouTube led me to a video illustrating what I'm trying to achieve: The Nintendo Entertainment System's Loading Seam - Retro Game Mechanics Explained.
I'm calling the grid of cells the LineLattice. This should make line-wrapping, culling, and possibly rendering line numbers much easier. The project now contains a file named lorem.txt. It has seven paragraphs across thirteen lines, with the longest line containing 1,916 characters. The file is 9 KB. If the grid were set to have a column count of 1,916 and a row count of thirteen, it would contain 24,908 cells. That would use far too much memory for off-screen characters. If I needed to use a 1 MB file, the cell count for unwrapped lines would be 1,277,783.
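The arithmetic behind those numbers is simply columns times rows, since an unwrapped grid must be sized to the longest line (an illustrative sketch):

```python
# Cell count for an unwrapped grid: every line pays for the longest one.
def grid_cells(longest_line_len, line_count):
    return longest_line_len * line_count
```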
The cells have a 2:1 ratio, where the height is twice the size of the width. Their dimensions are used to determine the number of cells that can fit on the screen, as shown in sections 13 (Cell Column Count) and 16 (Line Breaking/Word Wrapping).
However, this speculative little calculation is a distraction. The next step is to render a single unwrapped line of the file. The lingering questions from the previous section are not yet relevant.
I've had to backtrack and update my assumptions about text editors. For example, in section 14, I assumed that I'd always know how many cells I'd need based on the length of the body of text. That proved to be false once I reached the point of line wrapping, where I'd have to skip cells and move to a new line to avoid breaking a word across lines.
I'm sure I'll have to backtrack more as the project grows in complexity, so now is a good time to take a break, review the lessons learned, and try to think a few steps ahead.
I recently earned an ITIL v4 certification. Rather than brain-dump the material (forget it until I'm questioned on the matter) in preparation for the next course, I've decided to put it to good use with 4 of the 7 steps of their Continual Improvement Model.
The editor should be able to do the following:
We currently have:
The proper next step is to render an essay. So far, I can only render a sentence. The demos don't show the ability to render a new line if the text string has a new-line character (\n).
There's only one way to find out.
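The missing piece can be sketched as a pass that maps characters to (row, column) cells, resetting the column and advancing the row on each \n. An illustrative Python sketch, not the project's code:

```python
# Lay out a string into cell positions, consuming newline characters.
def layout(text):
    positions = []  # (char, row, col) per renderable glyph
    row = col = 0
    for ch in text:
        if ch == "\n":
            row += 1
            col = 0
            continue  # the newline itself is never rendered
        positions.append((ch, row, col))
        col += 1
    return positions
```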
The font is VictorMono-Regular.
Rendering a single glyph requires simply selecting the correct block index, UV coordinates, and transforming the quad to the dimensions of the image.
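As an illustration of the UV step, converting a glyph's pixel rectangle inside an atlas block into normalized coordinates might look like the following. The atlas size and all names here are assumptions, not the project's code:

```python
# Map a glyph's pixel rectangle in the atlas to normalized UVs.
def glyph_uvs(x, y, w, h, atlas_w=512, atlas_h=512):
    u0, v0 = x / atlas_w, y / atlas_h
    u1, v1 = (x + w) / atlas_w, (y + h) / atlas_h
    # One UV pair per quad corner, with v increasing downward:
    # top-left, top-right, bottom-right, bottom-left.
    return [(u0, v0), (u1, v0), (u1, v1), (u0, v1)]
```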
Since I have 1x and 2x pixel density monitors, I'll have to generate another atlas with a doubled font size for moving windows across them. Resizing the camera's viewport does not yield crisp results.
Supporting multiple font sizes will require additional atlases. As you can imagine, the memory costs would outweigh the benefits, which could explain why the elder_devs found other means to render text, even if they limited the glyph palette.
An elder_dev is someone who cut through the forest of tough software and hardware problems of the past. They laid the foundation of what we build upon.
Text rendering with a GPU is a bit of a rabbit hole. Among single and multichannel signed distance fields, curve tessellation, and Bézier curve outlines in the shader, the most direct approach is to use an atlas.
This glyph atlas is composed of the rendered images for each glyph available in the font file. Since the total number of glyphs can vary between fonts, it seems best to generate the atlas as several vertical slices, as needed. For the sake of the screenshot, the height of each block is 512, which generates 108 blocks for a font size of 32. A larger block size would require fewer blocks.
The implementation is simple. Place each glyph in the first block with available space and remember its location. There is a one-pixel gap between the glyphs, and each block has its own transformation for debugging purposes.
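A minimal sketch of that placement strategy, first-fit rows with a one-pixel gap; the block sizes and names are illustrative, not the project's code:

```python
# Naive row-based atlas packing: fill left to right, wrap to a new
# row when the block width runs out, start a new block when the
# block height runs out. No optimal packing, just first-fit.
BLOCK_W, BLOCK_H, GAP = 512, 512, 1

def pack(glyph_sizes):
    placements = []                  # (block, x, y) per glyph
    block, x, y, row_h = 0, 0, 0, 0
    for w, h in glyph_sizes:
        if x + w > BLOCK_W:          # wrap to the next row
            x, y, row_h = 0, y + row_h + GAP, 0
        if y + h > BLOCK_H:          # start a new block
            block, x, y, row_h = block + 1, 0, 0, 0
        placements.append((block, x, y))
        x += w + GAP
        row_h = max(row_h, h)
    return placements
```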
Game engines would typically use a texture packer, which finds the optimal configuration to save on space. They may also choose to include only the most common characters, depending on factors like distribution region and scope.
320 × 320
480 × 320
The use of an orthographic camera resolves the distortion issues. The window can be resized and moved across screens of differing pixel densities while maintaining the image's display size.
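For illustration, the core of a pixel-space orthographic mapping looks like this: the same window coordinate lands on the same NDC fraction regardless of aspect ratio, which is exactly what removes the distortion. A sketch, with a top-left origin assumed:

```python
# Orthographic mapping from pixel coordinates (0..w, 0..h) to NDC
# (-1..1). The y axis is flipped so the origin is at the top-left.
def ortho_point(x, y, w, h):
    return (2.0 * x / w - 1.0, 1.0 - 2.0 * y / h)
```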
320 × 240
320 × 320
The source image is 256 by 256, scaled by 0.5 to 128 × 128. Its distortion in the first screenshot is due to the proper mapping of its edges to the UV coordinates of the quad. Additionally, the vertices are still hard-coded to the normalized device coordinate (NDC) system, which allows the image to appear undistorted only if the window's width and height are equal, as shown in the second screenshot.
1.0 DPI
2.0 DPI
Apple displays have twice the pixel density of standard displays. Opening the window on a standard display and then moving it to the Apple display will cause the image to appear smaller due to the doubling of the available pixels in the window. The solution is simple: when the scale factor changes, adjust the dimensions of the render surface just as you would for a resized window.
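A sketch of that adjustment, assuming a logical window size and a scale factor reported by the windowing system (the names are illustrative):

```python
# Resize the render surface in physical pixels when the DPI scale
# factor changes; the logical (layout) size stays the same.
def surface_size(logical_w, logical_h, scale_factor):
    return int(logical_w * scale_factor), int(logical_h * scale_factor)
```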