Friday, April 3, 2026
| Summary | ⛅️ Breezy in the afternoon. |
|---|---|
| Temperature Range | 12°C to 19°C (54°F to 66°F) |
| Feels Like | Low: 49°F, High: 60°F |
| Humidity | 70% |
| Wind | 20 km/h (13 mph), Direction: 216° |
| Precipitation | Probability: 94%, Type: No precipitation expected |
| Sunrise / Sunset | 🌅 06:32 AM / 🌇 07:10 PM |
| Moon Phase | Waning Gibbous (55%) |
| Cloud Cover | 35% |
| Pressure | 1014.25 hPa |
| Dew Point | 50.35°F |
| Visibility | 6.14 miles |
I have been learning Rust for some time now. I have developed a few personal projects, solved Advent of Code, and completed some challenges from CodeCrafters. I am working on a pretty cool project that I just started, but I feel like the code I write isn't the most idiomatic, performant, or production-ready. I can use AI, but I can't judge its output when I can't evaluate for myself what the best way to do something is.
I am looking for recommendations of open source projects with really high-quality Rust code (basically, how a senior Rust engineer would code).
My plan is to read through the code, learn about Rust-specific design patterns and good Rust coding practices, and develop a feel for how Rust is programmed for production applications. Any recommendations are highly appreciated.
Thank you
Hello, I just can't find an idea for a complex enough project in Rust. At this point I am willing to program for free. I can also debug. Does anyone have any work for me?
I’m in a position to get to choose my tech stack at my job and I’ve really enjoyed using Rust. I’m working on building a back-end API for internal services. Although I have a lot of flexibility, the one thing that is non-negotiable is utilizing MS SQL Server as that’s the only DBMS that has organizational support (no matter how badly I want to use Postgres).
I feel like I can’t justify using Rust though because I don’t want to rely on abandonware (sqlx no longer supports MS SQL, Tiberius is no longer maintained). Am I missing anything? Is anyone else using Rust for applications which interface with MS SQL Server? I know I could set up odbc-api, but it just feels like a lot of workaround when there’s first class support for Go or Python.
Not trying to complain, just trying to figure out if it’s justifiable to use my preferred language or if I gotta let our DBMS make decisions for me.
Hi,
I've made a Dis virtual machine and Limbo programming language compiler (called RiceVM) in Rust. It can run Dis bytecode (for example, Inferno OS applications), compile Limbo programs, and includes a fairly complete runtime with garbage collection, concurrency features, and many of the standard modules from Inferno OS's original implementation.
The project is still in an early stage, but if you're interested in learning more about RiceVM or trying it out, you can check out the links below:
Project's GitHub repo: https://github.com/habedi/ricevm
RiceVM documentation: https://habedi.github.io/ricevm/
I want to build a command-line REPL (one that works in a terminal) where the command language is a simple scripting language. Embedded commands would do things like launch graphics windows and draw things inside them. Any suggestions on crates that would support this, including an embedded language that is easy to add functions to?
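For the embedded language, crates like rhai or mlua are the usual suggestions, and for graphics windows something like macroquad or minifb. As a std-only sketch (all command names here are made up for illustration — a real build would hand dispatch over to the scripting engine instead of a hand-rolled matcher), the REPL core is just a read-dispatch-print loop:

```rust
// A minimal REPL command dispatcher (std-only sketch; command names are
// hypothetical). In a real project the body of `eval_command` would be
// replaced by an embedded scripting engine such as rhai.
fn eval_command(line: &str) -> String {
    let mut parts = line.split_whitespace();
    match parts.next() {
        // Hypothetical built-in: `add 1 2` sums its integer arguments.
        Some("add") => {
            let sum: i64 = parts.filter_map(|p| p.parse::<i64>().ok()).sum();
            sum.to_string()
        }
        // Hypothetical built-in that would open a graphics window.
        Some("window") => "opening window (stub)".to_string(),
        Some(cmd) => format!("unknown command: {cmd}"),
        None => String::new(),
    }
}

fn main() {
    use std::io::{self, BufRead, Write};
    let stdin = io::stdin();
    print!("> ");
    io::stdout().flush().unwrap();
    // Classic read-eval-print loop over stdin lines.
    for line in stdin.lock().lines() {
        let line = line.unwrap();
        if line.trim() == "quit" {
            break;
        }
        println!("{}", eval_command(&line));
        print!("> ");
        io::stdout().flush().unwrap();
    }
}
```

The nice property of routing everything through one `eval_command`-style entry point is that registering new functions with the scripting engine later (e.g. rhai's `Engine::register_fn`) doesn't change the loop itself.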
Pre-Release Performance Metrics
For the past few months we've been building AlBDO, a Rust-native compiler + runtime for JSX/TSX apps. The core idea: instead of shipping a JavaScript bundler that talks to a JS runtime, the entire pipeline — parse, compile, render, serve — is a single Rust binary.
What it actually does:
- Tier A → zero JS emitted, pure static HTML
- Tier B → selective hydration only for the parts that need it
- Tier C → full hydration where unavoidable
The result: cached response times around *0.07ms*. No Node.js process, no V8 in the request path.
The scheduler uses what we call a PiArchKernel — a 4-lane Lagrange scoring system paced by a sin(πt) arch curve for smooth burst absorption without queue starvation. Probably overkill for most apps, but the math was too fun not to build.
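The PiArchKernel internals aren't shown in the post, but the sin(πt) pacing idea itself is easy to sketch: treat t as normalized progress through a burst window, and weight admission by the arch curve so intake ramps up, peaks mid-window, and ramps down. A minimal illustration (my own guess at the curve's shape, not AlBDO's code):

```rust
use std::f64::consts::PI;

// Illustration only: a sin(pi * t) "arch" used as a pacing weight.
// `t` is normalized window progress in [0, 1]; the weight is 0 at the
// window edges and 1 at mid-window, so a burst is absorbed gradually
// rather than all at once.
fn arch_weight(t: f64) -> f64 {
    (PI * t.clamp(0.0, 1.0)).sin()
}

fn main() {
    // Sample the curve across one burst window.
    for i in 0..=4 {
        let t = i as f64 / 4.0;
        println!("t = {t:.2} -> weight = {:.3}", arch_weight(t));
    }
}
```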
Current state:
Development Roadmap (for now):
Happy to answer questions about the compiler architecture, the effect lattice design, or why we chose axum over alternatives.
GitHub repo: AlBDO/AlBDO-v-0.1.0 (pre-release version of AlBDO, our Rust-first JavaScript/TypeScript framework)
Also check out our website :D
www.albdo.dev
Hey folks,
I’ve always found reqwest-middleware kind of cursed.
It feels like a parallel universe on top of reqwest: separate client type, middleware built around async_trait, and useful stuff spread across extra crates. Even retry has sharp edges there — for example, reqwest-retry explicitly documents that it fails on streaming bodies because the request isn’t cloneable. 
So I built this:
https://crates.io/crates/tower-http-client
The idea is pretty simple:
Use tower services and layers with reqwest, but keep the result ergonomic enough to use as a normal HTTP client.
So you get:
Example from the repo:
```rust
use http::{header::USER_AGENT, HeaderValue};
use tower::{ServiceBuilder, ServiceExt};
use tower_http::ServiceBuilderExt;
use tower_http_client::{ServiceExt as _, ResponseExt as _};
use tower_reqwest::HttpClientLayer;
use wiremock::{
    matchers::{method, path},
    Mock, MockServer, ResponseTemplate,
};

/// Implementation agnostic HTTP client.
type HttpClient = tower::util::BoxCloneService<
    http::Request<reqwest::Body>,
    http::Response<reqwest::Body>,
    anyhow::Error,
>;

/// Creates HTTP client with Tower layers on top of the given client.
fn make_client(client: reqwest::Client) -> HttpClient {
    ServiceBuilder::new()
        // Add some layers.
        .override_request_header(USER_AGENT, HeaderValue::from_static("tower-http-client"))
        // Convert a generic body type into `reqwest::Body`.
        .map_request_body(reqwest::Body::wrap)
        // Make client compatible with the `tower-http` layers.
        .layer(HttpClientLayer)
        .service(client)
        .map_err(anyhow::Error::from)
        .boxed_clone()
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Start a mock server for testing.
    let mock_server = MockServer::start().await;

    // Configure mock response.
    Mock::given(method("GET"))
        .and(path("/hello"))
        .respond_with(ResponseTemplate::new(200).set_body_string("Hello, World!"))
        .mount(&mock_server)
        .await;

    // Create a new client.
    let mut client = make_client(reqwest::Client::new());

    // Execute request by using this service.
    let response = client
        .get(format!("{}/hello", mock_server.uri()))
        .send()
        .await?;

    let text = response.body_reader().utf8().await?;
    println!("{text}");
    Ok(())
}
```

I've just released 0.6.0.
Main changes:
- reqwest use-cases

I've been building my file search in Rust for a while as a Neovim plugin, and it turned out to be so good that I have finally released it as an SDK, and also as an official Rust crate.
It is already adopted by actively used applications like opencode.
In short, this is best-in-class file name search and content search. It doesn't require long-running indexing, is very fast, and is extremely accurate (we even support fuzzy content search on code, and it actually works).
There is a public demo with search across 3 repositories of different sizes (2k, 100k, and 500k files) that you can try yourself - https://fff.dmtrkovalenko.dev/
The biggest flex: on this specific search it is over 500x faster than ripgrep, while running on a much slower machine (only a 2-core VPS).
over 500x faster on the same query
FFF is used both for grep and for typo-resistant fuzzy file search. The filename fuzzy search is extremely efficient and can pinpoint almost any query to the right file name; for example, the query "scripstcocoentetess unsignelessrehangzerococci" will be matched to the file scripts/coccinelle/tests/unsigned_lesser_than_zero.cocci
you can try it yourself
https://fff.dmtrkovalenko.dev/?repo=1&q=unsignelessrehangzerococci
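For intuition, the classic baseline that fuzzy finders build on is in-order subsequence matching; fff's matcher clearly goes further (the example query above contains typos and is not a strict subsequence of the path), but a naive sketch of that baseline looks like this:

```rust
// Naive fuzzy-finder baseline: does the query appear in the path as an
// in-order subsequence of characters? This is NOT fff's algorithm (which
// is typo-resistant and scored), just the simplest building block that
// fuzzy file-name matchers traditionally start from.
fn is_subsequence(query: &str, path: &str) -> bool {
    let mut path_chars = path.chars();
    // For each query char, advance through the remaining path chars
    // until a match is found; fail if the path is exhausted first.
    query.chars().all(|q| path_chars.any(|p| p == q))
}

fn main() {
    let path = "scripts/coccinelle/tests/unsigned_lesser_than_zero.cocci";
    assert!(is_subsequence("unsignedzero", path));
    assert!(!is_subsequence("zzzz", path));
    println!("subsequence checks pass");
}
```

A real matcher then layers scoring (match positions, word boundaries, typo tolerance) on top of a check like this.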
Essentially, this is just the fastest file search implementation that doesn't require an index (there is a startup cost, but for the 100k-file Linux repo it is ~3 seconds). It supports file finding, content search, git status, frecency of file opens, persistent query history, and actually much more.
https://github.com/dmtrKovalenko/fff.nvim
P.S. The repo looks like a Neovim plugin, but it is actually a monorepo for everything. The Rust crate is here: https://crates.io/crates/fff-search
Your functions or data structure methods implement an algorithm; how do you know whether they're scaling as expected? For example, imagine you've implemented a new sorting algorithm. Decent sorting is typically O(n log n), but what about your implementation? Maybe there's a bug somewhere that makes it worse.
With my new crate, bigoish, pronounced “big-o-ish”, you can write tests that assert that a particular function has a particular empirical computational complexity. More accurately, you can assert that an expected complexity model you provide is empirically the best fit for measured runtime, when compared to the fits of a specific set of common complexity models. Real big-O, in contrast, is a mathematically-proven upper bound.
Here’s an example, asserting that Rust’s built-in sort() fits n*log(n) best:
```rust
use bigoish::{assert_best_fit, growing_inputs, Log, N};

// The function whose complexity we want to assert.
fn sort(mut v: Vec<i64>) -> Vec<i64> {
    v.sort();
    v
}

// Creates a test input of a particular size for `sort()`.
fn make_vec(n: usize) -> Vec<i64> {
    // Random number generation library:
    use fastrand;
    std::iter::repeat_with(|| fastrand::i64(..)).take(n).collect()
}

// Assert that n*log(n) is the complexity model that fits `sort()` best.
assert_best_fit(
    // The expected best-fitting computational complexity model:
    N * Log(N),
    // The function being tested:
    sort,
    // Pairs of (input length, input); the inputs will be passed to `sort()`.
    [
        (10, make_vec(10)),
        (100, make_vec(100)),
        (1000, make_vec(1000)),
        (10_000, make_vec(10_000)),
        (100_000, make_vec(100_000)),
    ],
);

// You can use `growing_inputs()` to generate input pairs more easily.
assert_best_fit(
    N * Log(N),
    sort,
    // Starting with input size 10, generate 25 increasingly larger inputs
    // using `make_vec()`.
    growing_inputs(10, make_vec, 25),
);
```

If you were to erroneously assert that `sort()` uses model N:
```rust
assert_best_fit(N, sort, growing_inputs(10, make_vec, 25));
```

You would then get a panic that looks like this:
All code written by a human.
We just shipped Java as a fully supported target in BoltFFI. It already generates Swift, Kotlin, and TypeScript/WASM bindings, and now it also generates Java.
A few highlights:
- Java 16+ gets records, sealed classes for data enums, and pattern matching. Java 8+ gets equivalent final classes with public fields, depending on the specified min version.
- Async Rust functions map to `CompletableFuture<T>` on Java 8-20, or blocking virtual threads on Java 21+.
- Streams with backpressure support (batch pull, callback push, or `Flow.Publisher` on Java 9+).
- Callbacks and trait objects map to Java interfaces.
- Result<T, E> maps to typed exceptions. Option<T> maps to Optional<T>.
- Both JVM and Android are supported.
Repo & Demo: https://github.com/boltffi/boltffi
Hey, I've been working on this for the past two years and today is the 0.1.0 beta release.
Flow-Like is a visual workflow engine where every node is a sandboxed WASM Component running on Wasmtime 43 with full Component Model + WIT interfaces. But it's not just automation — it also ships a frontend builder, so you can wire up dashboards, internal tools, or full apps directly from workflow outputs. AI agent orchestration is a first-class use case too.
The cloud backend is AWS (though it can be deployed anywhere, or you can use the Kubernetes or docker-compose setups), but it's entirely opt-in: everything runs local-first by default.
Fully typed workflows with complete data lineage — every input/output has a schema, every execution is traceable. Cross-platform: macOS, Windows, Linux, iOS (in App Store review), Android (in verification).
The desktop release is currently building and will land over the weekend. If you grab the current version now you'll get the auto-update. The web app already serves 0.1.0 if you want to try it right away.
GitHub: https://github.com/TM9657/flow-like Web app: https://flow-like.com
OK, so I am trying to learn Leptos (I'm learning it to use for the frontend in Tauri, for the GUI of a project I'm working on), and I have realized that it makes heavy use of closures, which I never really understood much about.
I know they are anonymous functions that capture the environment they are defined in, either by immutable reference, by mutable reference, or by taking ownership, but I don't really understand much more than that. So I was wondering if I could get a nice explanation of them.
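Not an authoritative answer, but a compact sketch of the three capture modes and the closure traits they correspond to (Fn for shared borrows, FnMut for mutable borrows, FnOnce for consuming captures), which is most of what frameworks like Leptos lean on:

```rust
// Returns a counter closure: a `move` closure that owns its captured
// state and implements FnMut, because it mutates `n` on each call.
fn make_counter() -> impl FnMut() -> i32 {
    let mut n = 0;
    move || {
        n += 1;
        n
    }
}

fn main() {
    // Capture by immutable reference (implements Fn):
    // the closure only reads `greeting`, so it borrows it shared.
    let greeting = String::from("hello");
    let print_it = || println!("{greeting}");
    print_it();
    print_it(); // callable repeatedly; `greeting` is still usable afterwards

    // Capture by mutable reference (implements FnMut):
    // the closure mutates `count`, so it holds a mutable borrow.
    let mut count = 0;
    let mut bump = || count += 1;
    bump();
    bump();
    assert_eq!(count, 2); // borrow by `bump` has ended here

    // Capture by value (only FnOnce): `move` forces ownership, and
    // returning the owned value out means it can run at most once.
    let owned = String::from("taken");
    let consume = move || owned;
    assert_eq!(consume(), "taken");
    // Calling `consume()` again would not compile: `owned` was moved out.

    // Owned mutable state escaping its scope via a returned closure:
    let mut next = make_counter();
    assert_eq!(next(), 1);
    assert_eq!(next(), 2);
}
```

The compiler picks the weakest capture that works (shared borrow, then mutable borrow, then move), and `move` overrides that choice, which is why UI frameworks ask for `move` closures so the closure can outlive the scope that created it.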
Hello! Some time ago, while making my game, I got unjustifiably angry at rapier3d and decided to go a bit overboard and implement my own physics library, collisions and all. It is pretty barebones, but rather fast, and has almost no dependencies.
I wonder if I should publish it, as it is already a standalone project and can be used without trouble.
But I've never published anything to crates.io before, I'm not sure what the good practices are, and I really wonder whether there is any point, because rapier exists.
As for the rapier problem: it was actually my fault, but I only discovered this after swapping rapier out for my library, heh…
I’ve just published gcode v0.7.0, which is basically a ground-up rewrite.
The old crate worked, but the API had accumulated some awkwardness over time. This release is the version I wanted it to be from the start:
- nice high-level parsing into an owned AST
- a proper zero-allocation core API
- source spans everywhere
- recoverable diagnostics
- still embedded-friendly and O(n) with no backtracking
If you’ve looked at gcode before and bounced off it, this is the version I’d recommend checking out.
For those who haven't heard of G-code before, it's kinda like the assembly language for 3D printers and CNC machines.
As we know, the go keyword runs goroutines. Or rather, this is how we often think about it.
The go keyword in the Go programming language is one of its core elements, distinguishing it from other programming languages. It is used to launch a function in a separate goroutine. This allows the program to continue executing the current code without waiting for the invoked function to complete.
In this article, we will examine the exact role of the go keyword and explore how its use affects program execution.
```go
package main

import (
	"fmt"
	"os"
)

func main() {
	go func() {
		fmt.Println("Hello World")
	}()

	os.Exit(0)
}
```
What does this code print? If the go keyword actually ran the goroutine, we would see "Hello World" as the program's output most of the time.
But in reality the result is not deterministic. To figure out why this happens, we need to recall what the goroutine queues in the GMP model are.
In this picture, we can see the LRQ and the GRQ: the local run queue and the global run queue. Every "P" has its own local queue, and a "P" hands a "G" (which means goroutine) to an "M" from that queue. But how do goroutines get into this queue in the first place?
To explain this, we need to understand what the go keyword actually does. How do we find out? Of course, by disassembling our program.
```shell
GOOS=linux GOARCH=amd64 go build -gcflags='-S' -o /dev/null ./main.go &> asm.S
```
As a result, we get the following Go assembly code.
```asm
main.main STEXT size=50 args=0x0 locals=0x10 funcid=0x0 align=0x0
	0x0000 00000 (:5)	TEXT	main.main(SB), ABIInternal, $16-0
	0x0000 00000 (:5)	CMPQ	SP, 16(R14)
	0x0004 00004 (:5)	PCDATA	$0, $-2
	0x0004 00004 (:5)	JLS	43
	0x0006 00006 (:5)	PCDATA	$0, $-1
	0x0006 00006 (:5)	PUSHQ	BP
	0x0007 00007 (:5)	MOVQ	SP, BP
	0x000a 00010 (:5)	SUBQ	$8, SP
	0x000e 00014 (:5)	FUNCDATA	$0, gclocals·g2BeySu+wFnoycgXfElmcg==(SB)
	0x000e 00014 (:5)	FUNCDATA	$1, gclocals·g2BeySu+wFnoycgXfElmcg==(SB)
	0x000e 00014 (:6)	LEAQ	main.main.func1·f(SB), AX
	0x0015 00021 (:6)	PCDATA	$1, $0
	0x0015 00021 (:6)	CALL	runtime.newproc(SB)
	0x001a 00026 (:9)	XORL	AX, AX
	0x001c 00028 (:9)	NOP
	0x0020 00032 (:9)	CALL	os.Exit(SB)
	0x0025 00037 (:10)	ADDQ	$8, SP
	0x0029 00041 (:10)	POPQ	BP
	0x002a 00042 (:10)	RET
	0x002b 00043 (:10)	NOP
	0x002b 00043 (:5)	PCDATA	$1, $-1
	0x002b 00043 (:5)	PCDATA	$0, $-2
	0x002b 00043 (:5)	CALL	runtime.morestack_noctxt(SB)
	0x0030 00048 (:5)	PCDATA	$0, $-1
	0x0030 00048 (:5)	JMP	0
	0x0000 49 3b 66 10 76 25 55 48 89 e5 48 83 ec 08 48 8d  I;f.v%UH..H...H.
	0x0010 05 00 00 00 00 e8 00 00 00 00 31 c0 0f 1f 40 00  ..........1...@.
	0x0020 e8 00 00 00 00 48 83 c4 08 5d c3 e8 00 00 00 00  .....H...]......
	0x0030 eb ce
	.....
```
In this assembly code, two instructions interest us:
```asm
0x000e 00014 (:6)	LEAQ	main.main.func1·f(SB), AX
```
What does this instruction do? It puts the anonymous function func1 into the AX register. This is how the ABI works: the calling convention requires arguments to be passed via registers.
The second instruction of interest is:
```asm
0x0015 00021 (:6)	CALL	runtime.newproc(SB)
```
This instruction calls the runtime.newproc function, and this function answers our question of what the go keyword actually does. Let's look at its source code:
```go
// Create a new g running fn.
// Put it on the queue of g's waiting to run.
// The compiler turns a go statement into a call to this.
func newproc(fn *funcval) {
	gp := getg()
	pc := getcallerpc()
	systemstack(func() {
		newg := newproc1(fn, gp, pc)

		pp := getg().m.p.ptr()
		runqput(pp, newg, true)

		if mainStarted {
			wakep()
		}
	})
}
```
Let's skip the unnecessary details.

```go
newg := newproc1(fn, gp, pc)
```

This line creates the runtime's internal representation of the new goroutine.
And finally:
```go
runqput(pp, newg, true)
```

This puts the goroutine into the local queue:

```go
// runqput tries to put g on the local runnable queue.
// If next is false, runqput adds g to the tail of the runnable queue.
// If next is true, runqput puts g in the pp.runnext slot.
// If the run queue is full, runnext puts g on the global queue.
// Executed only by the owner P.
func runqput(pp *p, gp *g, next bool) {
	....
}
```

And… nothing more. There is nothing else here about actually running the goroutine; it is simply put into the local queue.
In conclusion, the go keyword creates a goroutine and places it into a local queue. The actual execution is handled by the Go scheduler, which manages goroutines using the GMP model. Understanding this process is key to optimizing concurrency in Go.
Hey!
A couple of weeks ago I announced glyph, a declarative terminal UI framework, to the resounding feedback of "cool, but what does it look like?", which is totally valid criticism for the docs of a UI framework...
So for the last couple of weeks I've been combing through the docs and making sure it's suitably illustrated. You can check out the docs at https://useglyph.sh, where most pages now contain plenty of example screenshots.
Appreciate the honesty last time around, it made the project better. Happy to answer any questions again
Cheers!
Does anyone know how I can go about doing custom protocol association in Go?
I'm looking for a way of opening up my Go app after a user opens up a specific link, in their browser for example.
I saw that Wails v3 offers this but my Go app doesn't need a frontend, so I'm wondering if there are any other options.
Thanks!
Hey everyone,
I've been contributing a lot lately to Typo, and it's become a very useful tool in my day-to-day developer workflow.
What Typo does: it auto-corrects terminal command typos so you can fix mistakes quickly instead of retyping everything.
You can press Esc Esc to fix failed commands, teach it your own corrections (typo learn), and it also uses command history, built-in rules, and smart subcommand matching (for tools like git, docker, npm, kubectl, terraform, and more).
GitHub: https://github.com/yuluo-yx/typo
If this sounds useful, check it out and give it a try. And if you like contributing to OSS, we’d love more help from contributors - bug fixes, rule improvements, docs, tests, and feature ideas are all welcome.
Would love feedback from anyone who tries it.
Most agent frameworks today treat inference time, cost management, and state coordination as implementation details buried in application logic. This is why we built Orla, an open-source framework for developing multi-agent systems that separates these concerns from the application layer.

Orla lets you define your workflow as a sequence of "stages" with cost and quality constraints, and then it manages backend selection, scheduling, and inference state across them. Orla is the first framework to deliberately decouple workload policy from workload execution, allowing you to implement and test your own scheduling and cost policies for agents without having to modify the underlying infrastructure. Currently, achieving this requires changes and redeployments across multiple layers of the agent application and inference stack.

Orla supports any OpenAI-compatible inference backend, with first-class support for AWS Bedrock, vLLM, SGLang, and Ollama. Orla also integrates natively with LangGraph, allowing you to plug it into existing agents.

Our initial results show a 41% cost reduction on a GSM-8K LangGraph workflow on AWS Bedrock with minimal accuracy loss. We also observe a 3.45x end-to-end latency reduction on MATH with chain-of-thought on vLLM with no accuracy loss.

Orla currently has 210+ stars on GitHub and numerous active users across industry and academia. We encourage you to try it out for optimizing your existing multi-agent systems, building new ones, and doing research on agent optimization. Please star our GitHub repository to support our work; we really appreciate it! We would greatly appreciate your feedback, thoughts, feature requests, and contributions. Thank you!
Lately I’ve been thinking more about when a full database server (MySQL/Postgres, etc.) is actually necessary vs. when an embedded DB might be the better choice.
Traditionally, DB servers made sense because of:
But a few things seem to be changing.
Disk I/O used to dominate performance, so DB servers optimized around caching and batching. Now with SSDs, random I/O is fast enough that network latency to a remote DB often becomes the bigger bottleneck.
Also, running a DB server comes with a lot of operational overhead — backups, replication, monitoring, and so on. With embedded DBs, everything ships with the application, which makes things much simpler, especially for small teams or fast iteration.
I’ve been using PebbleDB and BadgerDB instead of a database server in several projects, and it’s been working well so far. What do you think about this?
Lately I keep seeing people (especially coming from other stacks) treating goroutines like a magic performance button. Goroutines are spawned like there’s no tomorrow - no limits, no backpressure, just vibes. No worker pools, semaphores, wait groups, nada.
What’s funny (or probably better to say more scary) is that this isn’t just beginners. I’ve seen pretty experienced devs do this too. Recently I’ve caught myself suggesting semaphore patterns in multiple PRs in a single day - so now I’m honestly questioning myself a bit.
Do you/your teams have actual rules/guidelines for goroutines? How do you review this stuff in PRs without going insane? Any good “we learned this the hard way” stories?
Not trying to rant - just trying to understand what “good” looks like in practice.
hey all
i'm rotem, previously co-created atlasgo.io and was a maintainer for entgo.io
one gap i keep running into with claude: they say the task is done, tests pass, but did the thing actually work?
we've all seen agents game their own tests. write mocks that test nothing. even modify the app so their tests pass. green checkmarks don't mean much when the agent controls both sides.
for web UIs, tools like playwright give agents eyes. but for TUI apps? nothing. the agent builds a terminal app, claims it works, and you're stuck manually checking.
so i built virtui , TUI automation for AI agents. open source, written in Go.
the idea is simple: give the agent a real terminal it can drive programmatically.
the agent drives a real terminal session (e.g. virtui run bash) and records it; the recording goes alongside the PR. reviewers can replay exactly what happened. no blind trust required.
better agent performance
anecdotally (and seems to be backed by recent anthropic research):
when you give agents a way to verify their own work, they do a much more accurate job.
when working on TUI interactions with bubbletea, Claude was wreaking all sorts of havoc. now it has eyes and a way to correct itself
how it fits into an agent workflow:
this is especially useful for TUI frameworks (bubbletea, ratatui, textual, ink) where there's no browser to screenshot.
still early (just launched on HN), but it's been useful for us internally. curious what verification approaches others are using in their pipelines.
GitHub repo in first comment
Install: brew install honeybadge-labs/tap/virtui
I'm using pgx for db interactions.
This is what I usually do to fetch data (example code)
```go
type postRow struct {
	ID   int    `db:"id"`
	Name string `db:"name"`
	// (....)
}

rows, _ := pool.Query(...)
postRows, _ := pgx.CollectRows(rows, pgx.RowToStructByName[postRow])
return mapToPostDomain(postRows)
```

Clean and easy (I hope).
The problem arises when I have to fetch many rows from different tables (with one-to-many and many-to-many relations). I have to map each result to its domain model and then, if I have a list of posts, attach each fetched list to its post.
The mapping is not very complex, but the amount of boilerplate is not trivial when I have a high number of tables, and it's killing my will to continue the project.
Besides using an ORM, is there a better and faster method? Or do I just have to map them by hand?
Hey everyone, in a few weeks I'll start a project using Go for my backend. Initially, the database I was going to use is Supabase. The clients for other programming languages are being worked on almost daily in their repositories, but the Go one seems to have stopped?
https://github.com/supabase-community/supabase-go
The last commit was 6 months ago. I'm a little confused about whether I should use the package or just write my own implementation in the project.
I wanted to know if anyone is using this package and has had (or is having) any kind of issues.