Friday, April 17, 2026
| Summary | ⛅️ Mostly cloudy until night. |
|---|---|
| Temperature Range | 18°C to 23°C (65°F to 73°F) |
| Feels Like | Low: 59°F / High: 76°F |
| Humidity | 53% |
| Wind | 18 km/h (11 mph), Direction: 64° |
| Precipitation | Probability: 56%, Type: rain |
| Sunrise / Sunset | 🌅 06:14 AM / 🌇 07:21 PM |
| Moon Phase | Unknown (100%) |
| Cloud Cover | 68% |
| Pressure | 1005.15 hPa |
| Dew Point | 52.4°F |
| Visibility | 5.69 miles |
I have updated Chapter 4, "Structs and Enums", of the English edition of the book 《Learn Rust through Examples》.
You can read it here: https://rustbyexample.gfor.xyz/chapter_04_structs_enums
github: https://github.com/SiliconAwakening/Learn-Rust-through-examples
I've been iterating on a terminal markdown reader in Rust over the last few weeks and it's at a point where I'd love some outside eyes. Main goal: browse a folder of markdown notes (research, specs, project docs) without leaving the terminal, and actually see mermaid diagrams and tables inline.
cargo install markdown-tui-explorer → binary is markdown-reader.
What it does
- .gitignore-respecting tree of markdown files; live-reloads on change; restores tabs + scroll per directory
- Syntax-highlighted code blocks (syntect + pure-Rust regex)
- V/y/yy for line-wise visual selection and clipboard yank via OSC 52
- i drops into an in-place vim-style editor (edtui) at the exact source line the cursor was on; :w/:q/:wq/:q! work as expected

Stack

- ratatui + crossterm for the TUI
- pulldown-cmark for parsing, syntect (fancy-regex backend) for highlighting
- mermaid-rs-renderer + resvg for mermaid → PNG, ratatui-image for inline display
- edtui for the embedded editor
- tokio for async file I/O and background search

What's rough

- The editor is edtui, so some vim-power-user features (registers, macros, regex search) aren't there yet

Meta: how this was built
The project is a small experiment in agent-driven development — the backlog, architectural decisions, release notes, and feature specs all live in markdown inside the repo, and that documentation is the source of truth that both I and a coding agent work from. The agent reads the docs, implements changes, and the human (me) reviews the diff; the docs get updated first whenever the direction changes. No issue tracker, no separate spec tool — just markdown the agent and I both read.
Mildly circular: the tool I built to read project docs in a terminal is itself most often used to read the project docs that drive its own development. Which is also why inline mermaid rendering, fast project-wide search with preview, and a visible cursor that tracks which source line you're on all existed as scratch-your-own-itch features before they were framed as a product.
Links
Feedback welcome — especially on the editing experience and anything that feels un-idiomatic on the Rust side.
Say you're starting a project and find a nice dependency, but it's outdated and contains panics. You check the repo and find that maintenance is inactive (there's an issue asking "is this project still active?" with no answer).
You'd like to use this dependency, upgraded and fixed, in your project.
Since you cannot publish a crate that relies on a [patch] section in Cargo.toml, what would you do?
- Create a fork and publish it? How do you name it?
- Create a fork and publish it under yourproject-dependency?
- Rewrite it without forking (since you're editing a lot of the code, is this OK)?
- Clone it inside your src/ and use it?
- Another option?
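For context on why the patch route dead-ends, here's a minimal sketch (the crate name and fork URL are placeholders): a `[patch.crates-io]` override only takes effect in the top-level manifest of whoever runs the build, so Cargo ignores it whenever your crate is consumed as a dependency from crates.io.

```toml
# Works locally: every use of `somecrate` in the dependency graph is
# replaced by your fork. But `[patch]` is only honored in the top-level
# (workspace root) manifest of the build, so it cannot ship with a
# published crate. (`somecrate` and the URL are placeholders.)
[dependencies]
somecrate = "1.2"

[patch.crates-io]
somecrate = { git = "https://github.com/you/somecrate", branch = "fixes" }
```

That's why the practical choices reduce to the fork-and-rename or vendoring options below.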
thanks
I spent a while trying to figure out how to ship a mobile app with Rust as the primary language. Posting this in case anyone else is doing the same evaluation. Caveat up front: the Rust mobile UI space is moving fast and anything I say about a specific library may already be out of date. This is my take from the last couple months of looking.
What I looked at for Rust-native UI
Slint: The most "ready for apps" of the bunch. iOS/Android targets work and 1.15 added safe-area insets + virtual-keyboard positioning, which were real blockers for me earlier. DSL is nice. Limitations I bumped into: mobile is currently Rust-only (no language-binding story on mobile), and platform integrations (IAP, haptics, ads, store review prompts, accessibility, localization) are still mostly DIY. For a shipping consumer app that needed all those, I'd be writing a lot of glue.
Dioxus: A React-like UI I'm already familiar with, and steady mobile progress. 0.7 just landed with hot-patching on mobile, which was very tempting. iOS is decent, but Android is still "experimental" per the docs. For a heavily interactive UI with lots of input-tied animations and haptics, I wasn't confident enough to bet on it. But it's probably fine for content-heavy apps.
egui: The tools and debug UIs look great. Technically it runs on iOS and Android. But it's immediate-mode, which felt like learning to walk again for me, and it doesn't really do mobile well by default: gestures, keyboard avoidance, safe areas, and accessibility are all doable, but all on you. It's the wrong tool for shipping a mobile app IMO.
Bevy: Tempting for a game. But Bevy UI is game UI, and mobile shipping (signing, splash screens, IAP plugins, ad SDKs, store metadata) is a lot of DIY. People have shipped with it, but it's not easy. Great engine, just not trying to be a mobile app platform.
Tauri 2 (Mobile): Doesn't solve the problem I was solving. UI is HTML/JS/CSS in WKWebView (iOS) or System WebView (Android); Rust is the backend. If I'm writing a web UI, I'd rather use a proper mobile framework.
What I ended up doing
React Native for the UI and platform stuff, and a Rust crate for literally everything else. I was already familiar with React, which made this an easier bet than embedding a Rust module in some other multi-platform framework.
The bridge is uniffi-bindgen-react-native (ubrn), which sits on top of Mozilla's UniFFI and generates TS bindings for Android/iOS from proc-macro-annotated Rust.
What Rust actually does
- Board generation, solver, hint deduction, and the whole game flow. I could pull it out and plug it into any other thin UI, and the entire game and its related features would still work.
- An async request_game() that runs on a background thread and notifies TS via callback when ready. UniFFI's async support is good here: no polling, no JS Promises across the bridge.
- TS holds zero game state. It subscribes to a "phase changed" callback and re-renders.
What's nice (the Rust side)
- cargo test runs the whole engine in under a second. No React renderer, no emulator, no act().
- Enums + match for state machines is exactly right. Exhaustiveness checking catches bugs while developing.
- Result at the FFI boundary is clean. TS gets a thrown error with the message; Rust keeps strongly-typed variants.
- Callbacks from Rust to TS work well for push-style updates.
- Cross-platform for free. The same Rust code cross-compiled for iOS and Android targets.
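A minimal, self-contained sketch of the two patterns praised above (GamePhase, advance, and GameError are invented names, not from the actual app): an exhaustive phase state machine, plus a Display-able error that a binding layer such as UniFFI could surface to TS as a thrown error's message.

```rust
use std::fmt;

// Hypothetical game-phase state machine; names are illustrative.
#[derive(Debug, Clone, Copy, PartialEq)]
enum GamePhase {
    Loading,
    Playing,
    Solved,
}

#[derive(Debug, PartialEq)]
enum GameError {
    NotReady,
}

impl fmt::Display for GameError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // At the FFI boundary, this message is what the TS side would
        // see as the thrown error's text.
        match self {
            GameError::NotReady => write!(f, "board is still generating"),
        }
    }
}

// Exhaustive match: adding a new phase is a compile error until every
// transition handles it, which is the bug-catching the post describes.
fn advance(phase: GamePhase) -> Result<GamePhase, GameError> {
    match phase {
        GamePhase::Loading => Err(GameError::NotReady),
        GamePhase::Playing => Ok(GamePhase::Solved),
        GamePhase::Solved => Ok(GamePhase::Solved),
    }
}

fn main() {
    assert_eq!(advance(GamePhase::Loading), Err(GameError::NotReady));
    assert_eq!(advance(GamePhase::Playing), Ok(GamePhase::Solved));
    println!("{}", GameError::NotReady); // prints "board is still generating"
}
```

All of this is testable with plain `cargo test`, with no renderer or emulator in the loop.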
What's annoying
- UniFFI through ubrn is pre-1.0 and you'll hit some growing pains at the codegen layer.
- async across the FFI is supported, but the debugging story isn't great. A panic coming through the bridge loses most of the trace. I ended up adding a log callback and tracing on the Rust side that spits everything out on the TS side.
- Type translation has edge cases (custom types through callbacks, nested enums in some positions) where you end up flattening.
- The Rust-plus-RN combo has a much smaller community with fewer answers online, so you should be comfortable reading the source code of your libraries yourself.
Would I do it again
Yes. Rust is doing the load-bearing work and RN is a fairly thin render layer. The split respects what each ecosystem is actually good at today. If Slint or Dioxus close the gap on platform integrations for polished consumer apps, I'd re-evaluate for my next thing.
If anyone's further into the Rust-native UI path, I'd genuinely love to hear about it for my next project :)
I ran into this blog post, and I didn't see it posted on this sub yet. It was very interesting to me for several reasons:
Normally I don't share links and I have no affiliation with the author, but this was very insightful and might be to others as well.
There's now a Rust maintainer team: to start with, 8 people who had already been maintaining the Rust compiler and standard library get a source of stable funding to continue maintaining the language!
Isn't it wonderful when a niche tool you know about but would rarely ever touch just so happens to be the perfect solution for a completely unrelated problem? This just happened to me lol.
Normally you would only use PhantomData for type erasure and lifetime shenanigans. But today, while coding a UI widget that has none of that stuff, I surprisingly found myself in a weird situation where PhantomData was exactly what I needed.
This is my existing code, minimised:
```rust
use std::fmt::Display;

pub struct DropDownMenu<T> {
    pub selected: T,
}

impl<T: strum::IntoEnumIterator + Display> DropDownMenu<T> {
    pub fn render(&self) {
        for choice in T::iter() {
            render_item(choice);
        }
    }
}
```
Now I need to change it so that DropDownMenu can also accept an arbitrary list of choices at runtime, say a Vec<String>. But it still needs to work with enums. So naturally I reached for traits:
```rust
pub trait ToChoices {
    type Choice: Display;
    fn get_choices(&self) -> impl Iterator<Item = Self::Choice>;
}

// this would actually be a generic impl, but let's keep it simple here
impl ToChoices for Vec<String> {
    // again, I would actually use &str, but let's not involve lifetimes
    type Choice = String;
    fn get_choices(&self) -> impl Iterator<Item = Self::Choice> {
        self.iter().cloned()
    }
}

// workaround for the "conflicting impl because upstream may add a new impl" problem
// specialization would make this unnecessary
pub trait ChoosableEnum: strum::IntoEnumIterator + Display {}

impl<T: ChoosableEnum> ToChoices for T {
    type Choice = T;
    fn get_choices(&self) -> impl Iterator<Item = Self::Choice> {
        T::iter()
    }
}

pub struct DropDownMenu<T: ToChoices> {
    pub choice_provider: T,
    pub selected: T::Choice,
}

impl<T: ToChoices> DropDownMenu<T> {
    pub fn render(&self) {
        for choice in self.choice_provider.get_choices() {
            render_item(choice);
        }
    }
}
```
See the problem here? In making DropDownMenu contain a runtime value, its users are forced to always pass in an arbitrary value for choice_provider when constructing it. But in the case of a ChoosableEnum, such a runtime "provider" is not actually necessary. So how do you semantically encode the concept "for some types a runtime provider is necessary but for some others it's not"?
PhantomData to the rescue! Simple change and voilà:
```rust
impl<T: ChoosableEnum> ToChoices for PhantomData<T> {
    type Choice = T;
    fn get_choices(&self) -> impl Iterator<Item = Self::Choice> {
        T::iter()
    }
}
```
Now constructing a DropDownMenu<MyEnum> is just DropDownMenu { choice_provider: PhantomData, selected: MyEnum::DefaultChoice }, which can be made nicer even further by wrapping it in a constructor associated function.
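To make that concrete, here's a compilable miniature of the final design, including the constructor that hides the PhantomData. A hand-rolled variants() stands in for strum::IntoEnumIterator so the sketch is dependency-free, and Fruit is a made-up example enum.

```rust
use std::fmt::Display;
use std::marker::PhantomData;

pub trait ToChoices {
    type Choice: Display;
    fn get_choices(&self) -> impl Iterator<Item = Self::Choice>;
}

// Stand-in for strum::IntoEnumIterator, to keep the sketch self-contained.
pub trait ChoosableEnum: Display + Sized {
    fn variants() -> Vec<Self>;
}

// The trick from the post: implement ToChoices on PhantomData<T>, so
// enum-backed menus need no runtime provider value at all.
impl<T: ChoosableEnum> ToChoices for PhantomData<T> {
    type Choice = T;
    fn get_choices(&self) -> impl Iterator<Item = Self::Choice> {
        T::variants().into_iter()
    }
}

pub struct DropDownMenu<T: ToChoices> {
    pub choice_provider: T,
    pub selected: T::Choice,
}

impl<T: ChoosableEnum> DropDownMenu<PhantomData<T>> {
    // Constructor hides the PhantomData from callers entirely.
    pub fn new(selected: T) -> Self {
        DropDownMenu { choice_provider: PhantomData, selected }
    }
}

#[derive(Debug)]
enum Fruit { Apple, Banana }

impl Display for Fruit {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{:?}", self)
    }
}

impl ChoosableEnum for Fruit {
    fn variants() -> Vec<Self> { vec![Fruit::Apple, Fruit::Banana] }
}

fn main() {
    let menu = DropDownMenu::new(Fruit::Apple);
    let names: Vec<String> =
        menu.choice_provider.get_choices().map(|c| c.to_string()).collect();
    assert_eq!(names, vec!["Apple", "Banana"]);
    println!("{}", names.join(",")); // prints "Apple,Banana"
}
```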
Honestly, taking a step back after going through all this, I must say, what a monstrosity 😅. But well, generic code does tend to grow into this when the requirement grows complicated so I guess it's not too bad after all.
Suggestions for improvements are very welcomed!
Turns out, I have in fact dug myself into a hole; see my comment. Alright, maybe this weird use of PhantomData is stupid after all. Let this be a cautionary tale then. Of what? I don't know. I should get some sleep.
We've been circling this for a long time, and it had become a bit of a white whale on the team. But we finally shipped it. Meilisearch is a search engine written in Rust, and this feature touches some of the hardest parts of the codebase: distributed state management, async I/O, and careful handling of node failures, so I thought it was worth sharing.
Until now, when you maxed out a single Meilisearch machine, you were stuck. We now let you distribute an index across N machines with replicas for failover. We ran scale tests with five shards for some customers (can't name them), and performance was much better.
It took a long time to ship because we tried maaaany different replication algorithms over the years, and none of them worked well enough. The architecture now works this way:
We are currently running the Meilisearch Launch Week. If you want to learn more about the latest released features, check out our dedicated announcement page.
Hi everyone, I've been learning Rust for the past year, and after a few small projects I shot my shot with this big boy.
**The project isn't complete yet and has a lot of //TODO: comments that you can grep for and contribute to if you want.**
ZVM: an educational, zero-dependency, single-threaded, garbage-collected Rust implementation of the official Oracle Java Virtual Machine Specification.
After 9 months of part-time work, I have successfully:
1- Read and studied a big part of the specs, especially chapters 1, 3, 4, and 6. (1 month)
2- Planned and designed the high-level architecture for the project. ( 1 month )
3- Implemented the first three stages of the project: parsing class files, introducing base vm components, and writing the instruction set. ( 7 months )
ZVM can now take a class file as input, along with any number of arguments, and fully parse its contents into the in-memory data structures that serve VM execution. It then starts executing: reading bytecode, managing the runtime, the call stack, and the operand stack, accessing the constant pool, executing instructions, and more.
The fourth and next stage is memory management in the virtual machine: introducing a heap memory manager (HMM) and a garbage collector with a simple algorithm like mark-and-sweep.
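For readers curious what that stage involves, a toy mark-and-sweep pass fits in a few lines. This is a generic illustration with made-up types, not ZVM's actual design: mark everything reachable from the roots, then sweep away the rest.

```rust
use std::collections::{HashMap, HashSet};

/// Toy heap: each object is an id with a list of outgoing references.
struct Heap {
    objects: HashMap<u32, Vec<u32>>,
}

impl Heap {
    /// Mark phase: flag every object reachable from the roots.
    fn mark(&self, roots: &[u32]) -> HashSet<u32> {
        let mut marked = HashSet::new();
        let mut stack: Vec<u32> = roots.to_vec();
        while let Some(id) = stack.pop() {
            // insert() returns true only on first visit, so cycles terminate.
            if marked.insert(id) {
                if let Some(children) = self.objects.get(&id) {
                    stack.extend(children.iter().copied());
                }
            }
        }
        marked
    }

    /// Sweep phase: drop everything unmarked; returns the number freed.
    fn sweep(&mut self, marked: &HashSet<u32>) -> usize {
        let before = self.objects.len();
        self.objects.retain(|id, _| marked.contains(id));
        before - self.objects.len()
    }

    fn collect(&mut self, roots: &[u32]) -> usize {
        let marked = self.mark(roots);
        self.sweep(&marked)
    }
}

fn main() {
    let mut heap = Heap { objects: HashMap::new() };
    heap.objects.insert(1, vec![2]); // root points at 2
    heap.objects.insert(2, vec![]);
    heap.objects.insert(3, vec![4]); // 3 and 4 are unreachable garbage
    heap.objects.insert(4, vec![]);
    let freed = heap.collect(&[1]);
    assert_eq!(freed, 2);
    assert!(heap.objects.contains_key(&1) && heap.objects.contains_key(&2));
    println!("freed {}", freed); // prints "freed 2"
}
```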
I have fully documented the project in the README.md file if you are curious about how things are done or if you want to understand stuff for contributing, which I will be very happy with :).
gh: https://github.com/muhammadzkralla/zvm
What are your thoughts?
Hi,
I’ve been working on a high-performance, cross-platform 3D rendering engine called Myth in Rust on top of wgpu.
The goal is to combine low-level performance with a more ergonomic, Three.js-like API.
Some highlights:
Repo: https://github.com/panxinmiao/myth
Showcase: https://panxinmiao.github.io/myth/showcase
Gallery: https://panxinmiao.github.io/myth
Enums in detail: tagged union representation, why enums are important for type safety, designing with enums to make invalid states unrepresentable, syntax walkthrough, and a practice exercise.
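As a taste of the "invalid states unrepresentable" idea the chapter covers, here's a small sketch of my own (the Connection enum is not from the chapter): each variant carries only the data that is valid in that state, and match forces every state to be handled.

```rust
// Instead of a struct with optional fields that allow nonsense
// combinations (e.g. a session_id while disconnected), each variant
// carries exactly the data valid for that state.
enum Connection {
    Disconnected,
    Connecting { attempt: u32 },
    Connected { session_id: u64 },
}

fn describe(c: &Connection) -> String {
    // Exhaustive match: adding a variant is a compile error here
    // until it is handled.
    match c {
        Connection::Disconnected => "offline".to_string(),
        Connection::Connecting { attempt } => format!("connecting (attempt {attempt})"),
        Connection::Connected { session_id } => format!("session {session_id}"),
    }
}

fn main() {
    assert_eq!(describe(&Connection::Connecting { attempt: 2 }), "connecting (attempt 2)");
    assert_eq!(describe(&Connection::Connected { session_id: 7 }), "session 7");
    println!("{}", describe(&Connection::Disconnected)); // prints "offline"
}
```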
Hi everyone! Thanks for being such a great community.
Recently, I developed a new Rust-to-C compiler, based on rustc, as a hobby project.
https://github.com/rustic-compiler/rustc_codegen_c
I know there are already a lot of projects trying to do the same thing. This compiler is capable of compiling rustc and std itself into C code.
It's just a hobby project, so it may have plenty of rough edges. I'd love to hear your thoughts and feedback! If it sounds interesting to you, I'd be thrilled if you gave it a try!
Note that this post was proofread with AI, as I'm not a native English speaker. This project also makes use of AI, which has become fairly common these days, and I have carefully reviewed its output. I appreciate your understanding!
I discuss streams. It's a pretext to learn about higher-order streams like flatten, and to introduce a new stream: switch! It's very useful, and it will hold no secrets for you.
Just my personal hobby project that I thought was cool.
It also romanizes Chinese, Japanese, and Korean lyrics.
Hello everyone!
I wanted to share this open-source project I worked on. I've been using DuckDB embedded in Go applications for a while and it's a really useful tool. But Go is really good at I/O, and DuckDB adds its own I/O stack, which doesn't integrate well with Go: you have to replicate the authentication and instrumentation layers (you can't use net/http, you get a different secrets manager when reading off of S3, a different TLS stack, etc.). It's a big bag of problems that Go already solves, and I wanted to avoid that.
Thankfully, DuckDB is really well designed, and it has a concept of virtual file systems. So with a bit of CGO glue, I was able to bridge with the standard fs.FS interface to completely sandbox the I/O layer and delegate it entirely to Go.
I hope it's useful, I enjoyed working on it and it's been a foundational building block everywhere I've used DuckDB since. Leave feedback if you have any!
Basically, I've always been interested in doing something cool in a real compiler; it's one of my areas of interest. One day, I thought it would be more convenient in some respects if Go had tuples instead of multiple return values (check here). From that moment on, I became even more passionate about the idea.
As a result, I was able to implement step-by-step
Yes, the features are not production-ready; some bugs and limitations remain. But I am not a member of the Go development team, and in general I worked alone. Still, these features are composable.
I wrote an article on Medium about how to implement a conditional expression. An article about tuples could be written as well, but the conditional expression affects several stages of compilation in various interesting ways. If you are interested in Go compiler internals, I'd suggest taking a look at the article and its sources :)
I hope this material will be interesting to some compiler-internals enthusiasts.
Let me say right away that I'm not imposing my opinion about the Go language, its features, or anything else. I respect your opinion if it differs from mine; but I find comments like «I think the language doesn't need this feature» simply pointless: they don't lead to discussion.
Rotating 3D objects in game engines has always been a math-heavy process. Initially, Euler angles (pitch, yaw, roll) seem easy, but we must strictly reject them: their biggest flaw is gimbal lock, a condition where rotation axes collapse onto each other and the math breaks down. With Euler angles rejected for that failure, an engine architect is left with only two hardcore ways to handle rotations: quaternions and rotation matrices. Quaternions (the king of rotation, for me) are preferred because the math is flexible, gimbal-lock safe, and can be made cache-friendly. On the other hand, the standard 3D math and rendering world runs by default on rotation matrices. The problem is that when you put those matrices into real-time physics and high-performance computation, a new engineering horror starts.
This engineering horror first shows up as non-orthogonal drift. A rotation matrix should always have three orthogonal axes, all at exactly 90 degrees to one another. As floating-point multiplications accumulate across every frame, rounding errors push those axes away from a strict 90 degrees. The result: your perfectly square character starts to look squashed, distorted, like a skewed box. Fixing the drift requires re-orthogonalization: the CPU has to stop and straighten the matrix back out with extra math. That CPU penalty slows the game down, especially when you have 1000 objects on screen.
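Re-orthogonalization itself is essentially Gram-Schmidt over the drifted axes. A minimal sketch follows (engines often use cheaper approximations; these helper names are my own):

```rust
// Gram-Schmidt re-orthogonalization of two drifted rotation axes.
fn normalize(v: [f32; 3]) -> [f32; 3] {
    let len = (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]).sqrt();
    [v[0] / len, v[1] / len, v[2] / len]
}

fn dot(a: [f32; 3], b: [f32; 3]) -> f32 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

fn cross(a: [f32; 3], b: [f32; 3]) -> [f32; 3] {
    [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    ]
}

/// Rebuild three strictly orthonormal axes from two drifted ones.
fn reorthogonalize(x: [f32; 3], y: [f32; 3]) -> ([f32; 3], [f32; 3], [f32; 3]) {
    let x = normalize(x);
    // Remove x's component from y, then renormalize.
    let d = dot(y, x);
    let y = normalize([y[0] - d * x[0], y[1] - d * x[1], y[2] - d * x[2]]);
    // The third axis is fully determined by the other two.
    let z = cross(x, y);
    (x, y, z)
}

fn main() {
    // Slightly drifted axes: no longer exactly 90 degrees apart.
    let (x, y, z) = reorthogonalize([1.0, 0.001, 0.0], [0.002, 1.0, 0.0]);
    assert!(dot(x, y).abs() < 1e-6);
    assert!(dot(x, z).abs() < 1e-6);
    assert!((dot(z, z) - 1.0).abs() < 1e-6);
    println!("ok");
}
```

Doing this per object, per frame, is exactly the CPU penalty described above.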
This math overhead is only half the story. The real bottleneck hits when the CPU has to read those 1000 objects' data from memory and write it back after fixing it, because from memory's perspective a matrix is very heavy. A standard 3x3 f32 rotation matrix takes 36 bytes (288 bits). But while pure rotation only needs 3x3, game engines always use a 4x4 matrix so that translation (movement) and scaling (size change) can be stored alongside the rotation. Its total size comes to 64 bytes, exactly the number that fills an entire L1 cache line and eats CPU bandwidth. At first that sounds fine: a 64-byte matrix fits perfectly into the CPU's 64-byte cache line, so what's the problem? The problem is that when a piece of data is exactly the size of its memory container, the margin for error becomes absolutely zero.
If the matrix's starting address in memory is not aligned, that perfect fit suddenly becomes a hardware nightmare. Take a bare-metal memory address example: suppose the CPU's first cache line covers addresses 0 to 63 and the second covers 64 to 127. If your 64-byte matrix is perfectly aligned (it starts at address 0), it fits inside 0 to 63 in a single shot. But if the memory allocator shifts it even a little and starts it at address 16, the data crosses the boundary. Result? The first 48 bytes of the matrix stay in the first cache line, and the remaining 16 bytes spill into the second. To process this unaligned data, the hardware now has to fetch two separate cache lines and stitch them together. If you are using SIMD instructions, missing strict alignment means either a straight segmentation fault (crash) from an aligned load, or, with the unaligned load instruction (movups), a stalled pipeline and doubled load latency. And if by mischance the unaligned data crosses a 4 KB page boundary, a TLB miss triggers and the CPU has to do a page walk, which can literally drop your speed by up to 100x.
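This is why math types in Rust get explicit alignment attributes. A small sketch, assuming 64-byte cache lines (typical on x86_64):

```rust
// Force 64-byte alignment so a 4x4 f32 matrix can never straddle a
// cache line (assuming 64-byte lines, typical on x86_64).
#[repr(C, align(64))]
struct Mat4 {
    m: [f32; 16], // 64 bytes of payload
}

fn main() {
    assert_eq!(std::mem::size_of::<Mat4>(), 64);
    assert_eq!(std::mem::align_of::<Mat4>(), 64);
    let m = Mat4 { m: [0.0; 16] };
    // The address is a multiple of 64, so the matrix occupies exactly
    // one cache line instead of spilling into two.
    assert_eq!(&m as *const Mat4 as usize % 64, 0);
    println!("aligned");
}
```

The attribute guarantees the "starts at address 0 mod 64" case from the example above, for every allocation of the type.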
After this battle of cache lines, when the data finally reaches the CPU core for execution, it hits another hardware limit: registers. XMM registers are only 128 bits wide, which directly means a single register holds only 4 floating-point values. When you sit down to process a 4x4 matrix's 16 values, you end up doing messy loading across multiple registers, which slows the pipeline.
By contrast, how clean and fast the quaternion is in memory is a masterstroke in itself. A quaternion's layout is absolutely precise: [w, x, y, z], four floats, exactly 16 bytes. That compact size saves memory fetches; we avoid gimbal lock anyway, and we also use the L1 cache very efficiently. In fact, the entire 16-byte [w, x, y, z] is a native hardware fit. Modern CPUs have 128-bit registers (SSE's XMM on Intel, NEON's Q on ARM), and 4 floats times 4 bytes is 16 bytes, exactly 128 bits. That directly means the CPU can load the whole quaternion into a register with a single instruction and multiply it there, which makes its math much faster than the matrix's.
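The 16-byte claim is easy to verify; a tiny sketch (the Quat layout here is illustrative, not from any particular engine):

```rust
// Four f32 fields = 16 bytes = one 128-bit XMM/NEON register; the
// align(16) makes aligned SIMD loads legal on every instance.
#[repr(C, align(16))]
struct Quat {
    w: f32,
    x: f32,
    y: f32,
    z: f32,
}

fn main() {
    assert_eq!(std::mem::size_of::<Quat>(), 16);
    assert_eq!(std::mem::align_of::<Quat>(), 16);
    println!("16 bytes");
}
```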
But here is a very big catch. Loading data perfectly into a register is only a storage and bandwidth advantage; computation is a different game. To rotate a 3D vector you perform the quaternion multiplication qvq⁻¹ (the sandwich approach), and that requires cross and dot products among w, x, y, and z. Right there, memory layout becomes the biggest obstacle. When your objects are stored row by row as an array of structures (the AoS layout), fetching the xyz values means hopping around in memory. You can write branchless code with SIMD, but if you resort to horizontal processing (manipulating data inside a single register), the overhead is high enough to defeat the purpose of using SIMD at all. Real-time physics gives you a window of only about 2 ms. There is exactly one way to hit that frame budget: kill the shuffle-and-align overhead by swizzling the data so that it streams straight into the registers.
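For reference, the sandwich product itself is plain scalar code before any SIMD enters the picture. A minimal sketch (deliberately scalar, no swizzling, Hamilton convention):

```rust
#[derive(Clone, Copy, Debug)]
struct Quat { w: f32, x: f32, y: f32, z: f32 }

// Hamilton quaternion product: note the mix of dot- and cross-like
// terms across w, x, y, z that forces horizontal work in SIMD code.
fn mul(a: Quat, b: Quat) -> Quat {
    Quat {
        w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
        x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
        y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
        z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
    }
}

fn conjugate(q: Quat) -> Quat {
    // For a unit quaternion, the conjugate equals the inverse.
    Quat { w: q.w, x: -q.x, y: -q.y, z: -q.z }
}

fn rotate(q: Quat, v: [f32; 3]) -> [f32; 3] {
    // The "sandwich": v' = q * v * q^-1, with v embedded as (0, x, y, z).
    let p = Quat { w: 0.0, x: v[0], y: v[1], z: v[2] };
    let r = mul(mul(q, p), conjugate(q));
    [r.x, r.y, r.z]
}

fn main() {
    // 90-degree rotation about the z axis: half-angle is 45 degrees.
    let half = std::f32::consts::FRAC_PI_4;
    let q = Quat { w: half.cos(), x: 0.0, y: 0.0, z: half.sin() };
    let v = rotate(q, [1.0, 0.0, 0.0]);
    // (1, 0, 0) maps to (0, 1, 0).
    assert!(v[0].abs() < 1e-5);
    assert!((v[1] - 1.0).abs() < 1e-5);
    assert!(v[2].abs() < 1e-5);
    println!("rotated");
}
```

Every output component mixes all four input lanes, which is exactly the horizontal-processing problem the paragraph above describes.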
Efficient data alignment and SIMD execution is itself the bar that separates an average engine from a high-performance bare-metal engine.