Tuesday, April 14, 2026
| Summary | ⛅️ Mostly clear until afternoon, with clouds returning overnight. |
|---|---|
| Temperature Range | 11°C to 21°C (51°F to 70°F) |
| Feels Like | Low: 46°F / High: 74°F |
| Humidity | 64% |
| Wind | 8 km/h (5 mph), Direction: 182° |
| Precipitation | Probability: 0%, Type: No precipitation expected |
| Sunrise / Sunset | 🌅 06:18 AM / 🌇 07:18 PM |
| Moon Phase | Waning Crescent (89%) |
| Cloud Cover | 33% |
| Pressure | 1021.57 hPa |
| Dew Point | 48.0°F |
| Visibility | 5.81 miles |
I’ve been coding for a while, but I feel like I’m wasting a ton of time when starting something new (side project, feature at work, etc.). My current “design process” is basically:
For people who build maintainable systems, what does your actual design phase look like before you write code? Do you use diagrams? Do you just write a plaintext list of modules?
I've been working on a unikernel called Oreulius for a while now and I'd like some honest eyes on it. I'm at a point where I need feedback more than I need praise.
If you're thinking "oh, another Rust+WASM OS like Munal OS", I'd point out that Munal OS and Oreulius are very different architectures at a deep level, despite some surface similarities. Munal is a graphical desktop environment, built for different purposes. Oreulius has no GUI, no window manager, nothing like that. It's built for something different, and the way it handles WASM in the environment is different and much more primitive. Both host code and workloads run as isolated WASM modules. The kernel is built around WASI from the ground up, not bolted on.
It is a capability-based kernel, like Google's Fuchsia in that sole sense: it uses capabilities rather than ambient, system-wide permissions. These capabilities can be transferred between modules and between other Oreulius instances through its own peer-to-peer network, CapNet.
Kind of like how macOS has Time Machine, which saves backups you can roll back to, Oreulius has a similar concept built into the kernel's ring 0, called temporal replay. It doesn't just capture the whole OS; it captures a point-in-time state of any workload, at all times, even across migration.
Aside from micro capabilities, it has a macro capability set: every workload's micro capabilities are checked against a macro, policy-level capability.
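To make the micro/macro idea concrete, here is a minimal sketch of that two-level check. All names and types here are hypothetical illustrations of the described model, not Oreulius's actual API:

```rust
use std::collections::HashSet;

// Hypothetical capability model: a workload's fine-grained (micro)
// capabilities only take effect if the system-wide (macro) policy
// also allows them.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum Capability {
    NetConnect,
    FsRead,
    FsWrite,
}

struct MacroPolicy {
    allowed: HashSet<Capability>,
}

struct Workload {
    micro: HashSet<Capability>,
}

impl MacroPolicy {
    // A request succeeds only if the workload holds the micro
    // capability AND the macro policy permits it.
    fn permits(&self, workload: &Workload, cap: Capability) -> bool {
        workload.micro.contains(&cap) && self.allowed.contains(&cap)
    }
}
```

The point of the double check is that no workload can grant itself more than the policy level allows, even if its own micro set is misconfigured.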
The playground is live at oreulius.com/try
The repo is github.com/reeveskeefe/Oreulius-Kernel.
These are the criticisms I'm looking for (keep in mind it's still in alpha; if something doesn't look totally polished, it just isn't there yet. I'm not looking for criticism of architectural gaps that are aligned with the purpose of the system, though any criticism is appreciated):
Does the capability model hold up, or is it overengineered?
Any red flags in the architecture I'm not seeing?
Is this even solving a real problem, or am I building something nobody needs while only thinking I'm solving one?
I'm personally thinking in terms of a future need for edge clients on servers, as well as anyone needing an intensely secure and fast place to run isolated workloads outside of the main stack.
I'm a plumbing apprentice, not a CS student, though I've been coding for longer than I've been plumbing. Insight on technical soundness would be really valuable.
Thank you so much!
Hope you found the project idea cool!
I needed something like LangGraph for a project but couldn’t find a Rust-native option that felt right.
So I built a Rust implementation based on the original design.
Focused on:
Added tests and some basic benchmarks to validate behavior.
Would appreciate feedback from you guys.
Hello, internet inhabitants (yes, you too, LLMs)!
Today I want to share another of my projects: pyregistry. As the title says, it's a private server where you can push your Python code, and it's written in Rust.
It's written in Rust because there are a lot of Rust CLI tools that help developers check their code for vulnerabilities, such as PySentry or FoxGuard, so pyregistry can reuse their code.
My motivation was to bring these tools into the registry itself, because why not? We should be sure about each dependency we use in our projects anyway. I also found that the VirusTotal folks made a project for scanning files for viruses using pre-defined signatures, so I included that too.
what the project does:
- multi-tenancy
- dark mode (!)
- you can store files locally or in S3.
- audit trail for all actions
- API token issuance and revocation
- access to UI/API by network CIDRs
- trusted publishers
- parallel file handling (so large projects utilize available cores to scan the files)
One important feature: there are Windows users out there, and I wanted to give them the ability to use the app too, so it also supports SQL Server.
- Comparison: a very small feature set; my intent is to keep it as small as possible rather than build a PyPI clone.
- Target audience: anyone who wants a private registry with a high level of security.
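On the parallel file handling bullet above: here is a rough, standard-library-only sketch of the idea (scan many files across threads so large projects use the available cores). The `scan` function below is a toy byte-sum stand-in for a real vulnerability/signature scan, and none of this is pyregistry's actual code:

```rust
use std::thread;

// Toy stand-in for a real scan: here we just sum the bytes.
fn scan(content: &[u8]) -> u64 {
    content.iter().map(|&b| b as u64).sum()
}

// Scan many files' contents in parallel, one chunk of files per worker.
fn scan_all(files: &[Vec<u8>], workers: usize) -> Vec<u64> {
    let chunk = files.len().div_ceil(workers).max(1);
    let mut results = vec![0u64; files.len()];
    thread::scope(|s| {
        for (slots, batch) in results.chunks_mut(chunk).zip(files.chunks(chunk)) {
            s.spawn(move || {
                for (r, f) in slots.iter_mut().zip(batch) {
                    *r = scan(f);
                }
            });
        }
    });
    results
}
```

In a real registry you'd likely reach for rayon instead of hand-rolled scoped threads, but the shape is the same: partition the file list, scan partitions concurrently, collect results in order.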
https://github.com/iddm/strict-typing
This post was inspired by a neighbouring post: https://www.reddit.com/r/rust/comments/1skg83f/everything_should_be_typed_scalar_types_are_not/
I wrote this crate for my own projects about a year ago and found it really useful. It's a simple procedural macro attribute that either forbids or explicitly allows certain (or all) primitive types in type definitions (enums, structs, etc.) or function arguments.

The reason: while writing the backend for my service, I had to interoperate between domain types, other internal types, input types, and user-facing output types, and I found I just couldn't trust myself. And why should I? There is a compiler to do that job for me. I have been using strong types in C++ for a really long time, and in Rust since it appeared, more and more; it has become second nature. Still, it's easy to be lazy and forget. Clippy won't catch it. Reviewers mostly don't care. But I do. I want correctness, and compile-time correctness is the best kind, especially with ZSTs.

When I was first learning Rust in 2015, it was a bit painful with all the lifetimes, borrow-checker fights, and other adjustments. I decided I could survive one more: forcing myself to always use strict (strong) types in my code bases.
I want to produce only code that 1) follows SRP, 2) provides clarity of purpose through its name and interface, 3) does not leak "wrong values", and 4) as a consequence of 3), can only hold correct values at all times, or as close to that as possible.
There were times, honestly, when I fought **against** it. It felt like too much. But each time I soon realised I was wrong. I caught numerous bugs. I carefully crafted the limits for my types. Even if you think your type is fine, say a bunch of numbers that fit into u8, if their real limit is lower than 255, then to stay correct you'd always have to check that the input is within range. So you create another new type. You try to justify it ("maybe this is too much?"), but then you realise it makes sense: now you can be much more precise about what the type is for, what it represents, how and where it can and should be used, how to document and test it, what its limits are, what the correct and incorrect values are, and what else you could do with it once you give yourself a moment to think about it. Sometimes you even come back to enums with real names instead of integers, or use the bitflags crate. With time, I became so used to it that I couldn't stop using it.
The scalar types are just the building blocks. The engineer uses those to create a world based on logic. You are given bricks to build a house.
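The u8-with-a-lower-limit example above is just the classic newtype pattern. A minimal sketch (this is plain Rust, not the strict-typing crate's API, and `Percent` is a made-up example type):

```rust
// A value that can only ever hold 0..=100, checked once at the
// boundary instead of re-checked everywhere it's used.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Percent(u8);

impl Percent {
    // The single place where a raw u8 can become a Percent.
    fn new(value: u8) -> Result<Self, String> {
        if value <= 100 {
            Ok(Percent(value))
        } else {
            Err(format!("{value} is out of range 0..=100"))
        }
    }

    fn get(self) -> u8 {
        self.0
    }
}
```

Every function that takes a `Percent` instead of a `u8` gets the range invariant for free, which is exactly the "can only have correct values at all times" property from the list above.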
My diagnostics only update on save. Is this how rust-analyzer is meant to work? Is there a way to get them updating while I type?
Hi everyone. I've spent a few weeks deriving a strict type model for color in Rust. If you've ever been bitten by gamma encoding or colors not matching across applications, or if you just want to work with color on the CPU without having to know all the intricate details, this crate could be for you! I personally wrote it for more rigorous image loading, and for integration with my own render graph.
Licensed under Apache 2.0 or MIT at your discretion.
I talk all about the science of color in my blog post below, and touch briefly on how it relates to the crate itself. I encourage reading if you're at all interested in the perception of color or computer graphics. I tried to include fun facts for those of you who already know this stuff.
Solving color in rust with entirely too much color science
This is my first blog post as well so feedback is greatly appreciated!
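For readers who haven't hit the gamma-encoding trap yet: sRGB values as stored in images are nonlinear, and math on them (blending, averaging) is wrong unless you first decode to linear light. These are the standard sRGB transfer functions, shown here for background rather than as this crate's API:

```rust
// Decode a nonlinear sRGB component (0.0..=1.0) to linear light,
// using the standard piecewise sRGB transfer function.
fn srgb_to_linear(encoded: f32) -> f32 {
    if encoded <= 0.04045 {
        encoded / 12.92
    } else {
        ((encoded + 0.055) / 1.055).powf(2.4)
    }
}

// Encode linear light back to nonlinear sRGB.
fn linear_to_srgb(linear: f32) -> f32 {
    if linear <= 0.003_130_8 {
        linear * 12.92
    } else {
        1.055 * linear.powf(1.0 / 2.4) - 0.055
    }
}
```

A strict type model makes the compiler track which of these spaces a value is in, so you can't accidentally average two encoded values.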
Code (gist): https://gist.github.com/aluqas/c7209b8990762db72620a87200f3e2aa
Article in Japanese: https://zenn.dev/saqula/articles/2361ce8de47570
All in stable Rust. Associated constants in traits are used only for debugging and tests.
I tried to write a proper explanation in English, but... Rustc fried my brain.
I might write it up properly at some point, if I recover.🦀
My anchored-leveldb project is now fully capable of reading LevelDB databases.
Source code: https://github.com/robofinch/anchored-leveldb/tree/alpha
Discussion on URLO: https://users.rust-lang.org/t/anchored-leveldb-read-leveldb-databases-quickly-and-correctly/139562
LevelDB is a popular database format created by Google which is used in Google Chrome, Minecraft: Bedrock, and Bitcoin, among many other projects. Nevertheless, Google has not spent much effort on improvements or even bug fixes for LevelDB in the past years. While working on a project that involves reading Minecraft: Bedrock worlds' LevelDB databases, I found that I was bottlenecked by rusty-leveldb (a thread dedicated to database iteration could not feed my other worker threads quickly enough). After looking through the rusty-leveldb and google/leveldb codebases, I set out to make a better LevelDB implementation than what was available.
anchored-leveldb patches multiple bugs in Google's leveldb library as well as Mojang's fork, while achieving ~16% better sequential read performance under low memory pressure. Under high memory pressure, anchored-leveldb is substantially (~33%) faster than the others.
However, this release is anchored-leveldb v0.0.1-alpha; my implementation is not yet capable of writing LevelDB databases (though the groundwork is there). The documentation is lacking, error enums are messy, setting the database config is not as ergonomic as I want, some unsafe code is still undocumented... I had to rush this out the door for an undergraduate senior thesis. The thesis analyzes LevelDB codebases to find possible improvements in performance and correctness, as well as providing this proof-of-concept implementing some of those improvements. I was shocked to see that my code is already faster than the others, when there are still more improvements planned.
If you decide to use anchored-leveldb v0.0.1-alpha to read LevelDB databases, I'd recommend operating on a copy of the database, in case there are uncaught bugs capable of corrupting the database.
In a few months, I'll have a fully-featured (and fully-tested) beta release. The intent of this post is more-or-less to put this project on your radar.
When I release the beta, I'll also provide simple programs that scan through LevelDB databases or Minecraft: Bedrock worlds in particular, for the sake of finding possible flaws in this implementation and, critically, finding real-world examples of corrupted LevelDB databases or Minecraft worlds (which will inform the creation of tools for recovering corrupted data). For any Minecraft: Bedrock players (or anyone else who happens to have small or medium LevelDB databases lying around), I'd greatly appreciate any help in gathering data at that time.
Voice activity detection (VAD) is super handy for VoIP/speech processing. Discord uses it to only send audio packets over the network while you are talking. Voice assistants like Siri use it to know when to stop listening & start executing a command. Speech-to-text systems use it to prevent wasting unnecessary compute trying to transcribe non-speech.
There are two things that a VAD model must be:
Obviously, this is really hard to balance! In the Rust world, we've had two main options for a while:
ort), which adds ~8 MB to binary size & ~12 MB of RAM usage.

Earshot (GitHub) is the best of both worlds - it's super fast and super accurate!
Like Silero, it uses a recurrent neural network, but 1) the architecture is way smaller and simpler, and 2) it's implemented entirely in pure Rust with no ONNX Runtime dependency. I put barely any effort into optimizing it (putting most of my trust into autovectorization) and it runs at a little over 10μs per 16ms frame. I'm confident that could be sub-7μs with a bit more effort.
Thanks to minGRU allowing me to quickly train on huge amounts of data and my THORN optimizer squeezing out a little extra %, Earshot is also the most accurate VAD I tested:
Earshot takes up just 100 KiB of your binary. Each Detector uses 8 KiB of memory to store state. You could probably run it on a microcontroller if it has an FPU - Earshot supports #![no_std].
I hope someone out there finds it useful =)
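For anyone new to VAD, here is the crudest possible baseline for intuition. This is *not* Earshot's recurrent model (which is vastly more robust), just an energy gate over one frame of samples:

```rust
// Toy energy-gate VAD: a frame counts as "speech" if its RMS energy
// exceeds a threshold. Real VADs like Earshot's RNN are needed
// because an energy gate fires on any loud noise, not just speech.
fn is_speech(frame: &[f32], threshold: f32) -> bool {
    if frame.is_empty() {
        return false;
    }
    let mean_sq: f32 =
        frame.iter().map(|s| s * s).sum::<f32>() / frame.len() as f32;
    mean_sq.sqrt() > threshold
}
```

The gap between this and a trained model is exactly the accuracy/footprint tradeoff the post describes.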
I've been looking for a glob-like crate to have expressions like www.**.com match www.sub.domain.com - do you know any?
Hey,
I have been building a zero-copy, GPU-accelerated screen recorder for Wayland for a while. A lot of it was inspired by gpu-screen-recorder and screenstudio. My goal is to have something akin to screenstudio for Linux; why should only macOS have all the fun?
I started with a simple DRM-KMS based capture backend, then offloaded the privileged part to a separate service: I grab framebuffers directly from the /dev/dri* card and export them as dmabufs. I import them into an EGLImage, export them as a texture, and throw them into a GStreamer-based pipeline that supports multiple hardware-backed encoders (VAAPI, Quick Sync) as well as CPU encoding. It supports multiple rate-control methods, quality profiles, VFR/CFR, multiple codecs, bitrate controls, colorimetry, and some partial HDR support (I can't get some things right yet).
Since Wayland discourages any global tracking, I built a workaround with some zwlr_layer_shell_v1 and libinput sorcery. It works great: I can track the cursor and compose it with the primary plane (I explicitly ignore the cursor plane) to get cursor effects like smoothing and smearing in realtime (with a few caveats, but nothing bad).
I added xdg-desktop-portal (pipewire) capture support too with ashpd abstracting most of the zbus boilerplate. This means you can capture specific windows too.
So far everything is CLI-driven and done in realtime (single pass). I am working on an iced.rs based GUI, but I have to make some decisions first, so I wanted to ask a few questions here, since the folks on the Linux sub are scarier.
Say you need screen recording software, which would you prefer?
If I go with 2, I'll have to work with either higher-bitrate intermediate exports or mezzanine codecs (unlikely, since most aren't supported for hardware-backed encoding); decoding seems like more pain than encoding.
If I go with 1, I'll have significantly less work but lower control as well.
Repository: https://github.com/wyfo/yuniq
Hello guys, I was looking for something to do this weekend (more context at the end), and I read a post about a new blazingly fast deduplicator. I found the idea interesting, so I made my own implementation.
But I didn't just stop at deduplication: it supports counting + ordering with `yuniq -c`, and other options that uniq supports like -w/-f/etc.
The result is quite fast, a lot faster than many other alternatives. Several reasons behind its performance:
- Stdin with libc to use its own growable buffers, allowing zero-copy reads; lines are stored as pointers into the buffers, without extra allocation or copying
- Stdout with libc to avoid the built-in LineWriter

By the way, xuniq is not collision-safe by default; xuniq --safe was added later, but is 3x slower. Even without --fast mode, yuniq is faster than xuniq. yuniq -c is also faster than hist.
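For context on what the tool computes (as opposed to how fast it computes it), here is a naive sketch of the `-c` behavior: global dedup with counts, preserving first-seen order. This is the concept only, not yuniq's optimized implementation:

```rust
use std::collections::HashMap;

// Deduplicate lines while counting occurrences, keeping first-seen
// order. (Classic `uniq` only collapses *adjacent* duplicates; this
// is the global variant.)
fn dedup_count<'a>(lines: impl Iterator<Item = &'a str>) -> Vec<(&'a str, usize)> {
    let mut index: HashMap<&str, usize> = HashMap::new();
    let mut out: Vec<(&str, usize)> = Vec::new();
    for line in lines {
        match index.get(line) {
            Some(&i) => out[i].1 += 1,
            None => {
                index.insert(line, out.len());
                out.push((line, 1));
            }
        }
    }
    out
}
```

The performance tricks listed above (zero-copy line pointers, custom hashing, libc stdout) all attack the constant factors around this same core loop.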
AI disclaimer: with this project I tried to force myself to generate 100% of the code with an LLM (as Mitchell Hashimoto said, you have to force yourself at the beginning). I failed, but a good part of it is still LLM-generated. However, 100% of the generated code was very carefully reviewed, all the optimization ideas came from my own head, and I'm as satisfied with it as if I had written it all myself. (Reviewing carefully can be slower than writing, which is why I gave in sometimes but kept trying.) And I ended up rewriting the complex algorithmic parts of process_chunk and process_stream myself anyway.
To be honest, I didn't take this project seriously at the beginning (it was more of an exercise), hence the crappy name. I don't even remember the last time I used uniq on the command line. But in the end I liked it a lot; I'm very satisfied with the code and the performance, so I'm publishing it here since you all seemed interested in xuniq. If some of you want me to release it, and can maybe suggest a better name, I'll be glad to.
P.S. --fast mode uses foldhash::quality (not foldhash::fast, for "more" safety), which is faster than the twox_hash::XxHash3_128 used by xuniq without --safe, but is only 64-bit. The first yuniq versions used XXHash3 too, and performance was around 140ms in the same benchmark.
EDIT: I've updated --fast mode to use XXHash128 with a random secret to reduce collision risks. I also added a --lean mode that reduces memory consumption at the cost of bump-allocating each line.
We're intending to switch the default implementation of flate2 to use the zlib-rs backend soon.
zlib-rs is a pure-Rust implementation of zlib that (in our benchmarks) beats the C implementations. It will replace the current default of miniz_oxide.
Making zlib-rs the default is a free performance boost for big parts of the Rust ecosystem.
You can already explicitly enable zlib-rs via feature flags today for that performance boost, let us know if you encounter any issues!
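For reference, opting in ahead of the default switch looks something like this in Cargo.toml (double-check the exact feature name against the flate2 README for your version):

```toml
[dependencies]
flate2 = { version = "1", default-features = false, features = ["zlib-rs"] }
```

Once the default flips, the plain `flate2 = "1"` dependency should pick up the zlib-rs backend with no changes.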
New week, new Rust! What are you folks up to? Answer here or over at rust-users!
Mystified about strings? Borrow checker has you in a headlock? Seek help here! There are no stupid questions, only docs that haven't been written yet. Please note that if you include code examples to e.g. show a compiler error or surprising result, linking a playground with the code will improve your chances of getting help quickly.
If you have a StackOverflow account, consider asking there instead! StackOverflow shows up much higher in search results, so having your question there also helps future Rust users (be sure to give it the "rust" tag for maximum visibility). Note that the site is very interested in question quality; I've even been asked to read an RFC I authored once. If you want your code reviewed, or want to review others' code, there's a Code Review StackExchange, too. If you need to test your code, maybe the Rust Playground is for you.
Here are some other venues where help may be found:
/r/learnrust is a subreddit to share your questions and epiphanies learning Rust programming.
The official Rust user forums: https://users.rust-lang.org/.
The official Rust Programming Language Discord: https://discord.gg/rust-lang
The unofficial Rust community Discord: https://bit.ly/rust-community
Also check out last week's thread with many good questions and answers. And if you believe your question to be either very complex or worthy of larger dissemination, feel free to create a text post.
Also if you want to be mentored by experienced Rustaceans, tell us the area of expertise that you seek. Finally, if you are looking for Rust jobs, the most recent thread is here.
sqlc-gen-sqlx is a sqlc plugin that generates sqlx-oriented Rust code from SQL queries.
Its scope is intentionally narrow: PostgreSQL only, sqlx only. The aim is to keep SQL as the source of truth, stay within the sqlc workflow, and generate a thin Rust layer for sqlx rather than writing the surrounding rows, params, and executor code by hand.
The generated API is small: query string constants, typed row and params structs, and methods on a Queries<E> wrapper that can run against a pool, connection, or transaction.
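To illustrate the *shape* of that generated API (query constant, typed row struct, executor wrapper), here is a hand-written sketch. All names are invented for illustration, and real output would use sqlx types and async methods:

```rust
// Hypothetical illustration of generated output; not actual
// sqlc-gen-sqlx output.
pub const GET_AUTHOR: &str = "SELECT id, name FROM authors WHERE id = $1";

// A typed row struct matching the query's columns.
pub struct GetAuthorRow {
    pub id: i64,
    pub name: String,
}

// A thin wrapper generic over the executor, so the same methods can
// run against a pool, a connection, or a transaction.
pub struct Queries<E> {
    pub executor: E,
}
```

In real generated code, `Queries<E>` would carry an `E: sqlx::Executor` bound and expose an async `get_author(id) -> Result<GetAuthorRow, _>` style method per query.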
Compared with sqlc-gen-rust:
- sqlc-gen-rust aims much wider: multiple Rust database crates and multiple SQL backends
- sqlc-gen-rust lists sqlc.embed, sqlc.slice, and :execlastid as unsupported, with :copyfrom only available for some backends
- sqlc-gen-sqlx supports fewer databases, but it covers more of the PostgreSQL + sqlx path and generates a smaller interface closer to ordinary sqlx code

Compared with Clorinde or Cornucopia:

- sqlc output into the target crate
- sqlc-gen-sqlx is aimed at teams that want to stay inside the sqlc workflow and keep the output close to ordinary sqlx code in the application crate

Compared with Diesel:

- schema.rs, the DSL, and the ORM/query-builder layer

Compared with raw sqlx:

- sqlx already gives compile-time checked SQL
- sqlc-gen-sqlx is more useful when the query set is stable and the repetitive Rust around those queries becomes the main source of friction

Current scope is PostgreSQL. It supports enums, composites, batch queries, COPY FROM, sqlc.slice(), sqlc.embed(), and type overrides.
I'm currently following a Build Your Own Redis tutorial in Go, and I've reached a point where the code and architecture are starting to feel too concentrated and messy, and I'm unsure whether I'm approaching the design correctly.
Right now, I have a Command interface:
```go
type Command interface {
    Execute(args []string) string
}
```

And a command registry:
```go
var commands = map[string]Command{
    "PING":   PingCommand{},
    "ECHO":   EchoCommand{},
    "SET":    SetCommand{},
    "GET":    GetCommand{},
    "RPUSH":  RpushCommand{},
    "LRANGE": LRangeCommand{},
}
```

This works well for simple commands like PING and ECHO. However, commands like SET, GET, RPUSH, and LRANGE require access to shared storage, such as maps used for key-value data and list data.
The challenge I'm running into is:
```go
Execute(store *Store, args []string) string
```

Then all commands must implement this, even those that don't need storage (like PING or ECHO).
So I'm wondering:
I'm also unsure whether this confusion comes from trying to apply OOP-style design patterns too directly in Go (I come from C++), when Go tends to favor simpler, composition-based designs.
Would love to hear how you approached this when building Redis- or shell-like systems in Go.
Used AI to make my intent clearer.
This is the weekly thread for Small Projects.
The point of this thread is to have looser posting standards than the main board. As such, projects are pretty much only removed from here by the mods for being completely unrelated to Go. However, Reddit often flags posts full of links as spam, even when they are perfectly sensible things like links to projects, godocs, and an example. The r/golang mods are not the ones removing things from this thread, and we will re-approve posts as we see the removals.
Please also avoid comments like "why?", "we've got a dozen of those", "that looks like AI slop", etc. This is the place to share any project people feel like sharing, without worrying about those criteria.
Another common trope I've been meaning to write about is abstraction leakage in layered services, especially when low-level errors bleed into higher layers.
This is nothing new, but I've pointed it out a few times during code review and wanted a reference that shows how to keep database-driver errors from leaking into HTTP or gRPC handlers.
The TL;DR: translate infrastructure errors (storage & friends) into domain errors, and then translate domain errors into protocol (HTTP & gRPC) errors.
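The two-step translation can be sketched in a few lines. This is a generic illustration of the pattern (shown in Rust here; the error names are invented), not code from the referenced write-up:

```rust
// Infrastructure layer: raw storage failures.
#[derive(Debug, PartialEq)]
enum StorageError {
    NotFound,
    ConnectionLost,
}

// Domain layer: errors in the vocabulary of the business logic.
// Handlers never see StorageError, only DomainError.
#[derive(Debug, PartialEq)]
enum DomainError {
    UserMissing,
    Unavailable,
}

impl From<StorageError> for DomainError {
    fn from(e: StorageError) -> Self {
        match e {
            StorageError::NotFound => DomainError::UserMissing,
            StorageError::ConnectionLost => DomainError::Unavailable,
        }
    }
}

// Protocol layer: map domain errors to HTTP status codes at the edge.
fn http_status(e: &DomainError) -> u16 {
    match e {
        DomainError::UserMissing => 404,
        DomainError::Unavailable => 503,
    }
}
```

Each layer only knows the vocabulary of the layer directly below it, so swapping the database driver never ripples into handler code.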
I have been working on something in the messaging space and wanted to get a feel for what the Go community is actually using in production.
A few questions:
- amqp091-go directly, or a higher-level wrapper (streadway, wagslane, rabbitmq-go-client, something internal)?

Not looking to start a framework debate, just curious where people feel the friction. Appreciate any honest takes.
Doors: Server-driven UI framework + runtime for building stateful, reactive web applications in Go. Some highlights:

How it works:
Security model:
Mental model:
Limitations:
Where it fits best:
Peculiarities:
From the author (me):
Code Example:
https://github.com/valvejanitor/Waypoint

Building a small tower defense to see how a structure-of-arrays layout performs for entity iteration compared to array-of-structs. The difference was substantial even at low entity counts.
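The layout difference being measured is language-agnostic; a minimal sketch of the two shapes (in Rust here, though the project above is in Go):

```rust
// Array-of-structs: one array of whole entities. Iterating positions
// also drags velocity and hp through the cache.
#[allow(dead_code)]
struct EntityAoS {
    pos: f32,
    vel: f32,
    hp: i32,
}

// Struct-of-arrays: one array per field. A tight loop over positions
// touches only position memory, which is why SoA iteration tends to
// be faster for per-field passes.
struct EntitiesSoA {
    pos: Vec<f32>,
    vel: Vec<f32>,
    hp: Vec<i32>,
}

impl EntitiesSoA {
    // A typical per-field pass: integrate positions from velocities.
    fn integrate(&mut self, dt: f32) {
        for (p, v) in self.pos.iter_mut().zip(&self.vel) {
            *p += *v * dt;
        }
    }
}
```

The SoA loop is also easier for the compiler to autovectorize, which compounds the cache advantage even at low entity counts.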