Wednesday, April 8, 2026
| Summary | ⛅️ Breezy in the afternoon. |
|---|---|
| Temperature Range | 12°C to 20°C (54°F to 69°F) |
| Feels Like | Low: 48°F / High: 64°F |
| Humidity | 76% |
| Wind | 20 km/h (12 mph), Direction: 237° |
| Precipitation | Probability: 0%, Type: No precipitation expected |
| Sunrise / Sunset | 🌅 06:25 AM / 🌇 07:14 PM |
| Moon Phase | Waning Gibbous (70%) |
| Cloud Cover | 22% |
| Pressure | 1016.47 hPa |
| Dew Point | 53.56°F |
| Visibility | 6.08 miles |
As many of us here know, Tantivy is a very popular Rust search library inspired by Apache Lucene. We sat down with Paul, its main author, to discuss how he got started with Rust and Tantivy, and his journey since. I figured it would be interesting to folks here :)
TrailBase is an open, fast, Firebase-like server for building apps. It provides type-safe REST APIs + change subscriptions, auth, multi-DB, a WebAssembly runtime, geospatial support, an admin UI... It's a self-contained, easy-to-self-host single executable built on Rust, SQLite & Wasmtime.
Moreover, it comes with client libraries for JS/TS, Dart/Flutter, Go, Rust, .NET, Kotlin, Swift, and Python.
Just released v0.26. Some of the highlights since last time posting here include:
Check out the live demo, our GitHub, or our website. TrailBase is only about 1.5 years young and rapidly evolving; we'd really appreciate your feedback 🙏
Hello, I'm currently learning how to set up Neovim and I want to configure it for Rust, since I also want to learn Rust. I got autocompletion with blink.cmp working and rustaceanvim set up, but I don't get the RustLsp commands.
Next to blink.cmp and nvim-lspconfig, I only configured the following for Rust:
```lua
{
  "rust-lang/rust.vim",
  ft = "rust",
  init = function()
    vim.g.rustfmt_autosave = 1
  end,
},
{ "mrcjkb/rustaceanvim", lazy = false },
```
rust-analyzer is installed globally through rustup. Running `:checkhealth vim.lsp` doesn't show rust-analyzer, though autocompletion works without issues.
What do I have to change to get the RustLsp commands, and ideally (but optionally) also get rust-analyzer to show up in `:checkhealth vim.lsp`?
Hello people, small question. In October 2025 I developed a small project to practice data structures in C++. Is it worth refactoring the whole project to Rust (I'm a complete newbie in this language)? My goal is to learn some basics of Rust. I was also wondering whether it's a good idea to invite some friends to work on this refactor; they are completely new to coding and Git, and I would like to teach them Git usage.
repo: https://github.com/Tasesho/clip_cloudshare
The goal is to migrate everything there to a new repo, or to update another repo that has an old version (please don't make fun of me, I just want to get better and learn new things) :)))
PS: sorry for my English.
Does anyone have any production experience with iced_rs? What did you write? How was the experience of using it? How was the compilation time? How good was the rendering engine? Does the app look native enough?
Hey everyone,
I kept running into the same problem: every time I needed to process JSON, logs, or HTML, I had to either paste data into random websites or write scripts/CLI commands.
So I built IsoBrowse: a local-first, sandboxed pipeline that runs small WASM tools directly in your browser.
You can chain commands like this:
```
/echo "hello world" | /run uppercase
/get https://jsonplaceholder.typicode.com/users | /run jq "0.name"
/read ~/Desktop/server.log | /run grep "ERROR"
```
No installs, no copy-paste, no setup—just pipe data through lightweight tools.
Still early, but it’s been useful for:
- API inspection
- Log filtering
- Quick data transformations
Repo + demo:
I don't want to have to pull in hundreds of third-party crates that neither I nor anyone else audits for a single program.
Yes, you can put mitigations in place, but by the time you detect malware in a dependency three levels deep, your secrets might already have been exfiltrated!
Look at how Go does its standard library: you can build complex programs without depending on many third-party packages.
The usual argument for keeping std lean is about not wanting to make breaking changes, and thus not wanting to ossify the progress/development of a given module.
I don't want to pick on a single crate here, but just look at the all-time crate downloads and you'll get an idea. It's not as bad as the npm left-pad situation, granted.
Again looking at Go for inspiration, one idea could be a `std::x` (like Go's golang.org/x) where experimental modules that are allowed (and expected) to break can live, and from which APIs can be moved into std proper as they stabilize and mature.
I know people usually just `cargo add` whatever, and as long as the crate is "blessed" they don't pay much attention. But many fundamental crates still pull in further dependencies you might never have heard of, maintained by people who could get compromised without anyone noticing (as opposed to being maintained by the Rust team).
While we're at it with unpopular opinions, can Rust steal Zig's IO idea so we don't need to divide the ecosystem between Tokio and non-Tokio async crates?
With the toml-spanner 1.0 release, it is now a fully featured TOML crate with all the features you'd expect but one: Serde.
When I posted earlier in development, one of the feature requests I got was full Serde integration; at the time I explained I had other plans. Now that those plans have come to completion, this post evaluates the costs and benefits of going our own way, looking at the incremental compilation benefits, workarounds we can avoid, and bugs fixed.
Edit: Blog Post: https://i64.dev/toml-spanner-no-we-have-serde-at-home/
(Apparently on new Reddit, it looks like an image post.)
Hello r/rust fellows.
Initially I started this as a hobby project to learn Rust, but it very soon became a real workhorse, with lots of handy features and great performance.
This is not AI slop, not a vibe-coded toy. A real project from a human and for humans :-)
If you have ideas and suggestions, I would be glad to hear from you. My goal is a 100% open-source, 100% Rust, modern load balancer which solves problems.
I wanted to share the graphics API that I am using to build my game (Penta Terra). Quick summary:
Problem: Developing an application using existing graphics APIs is incredibly time-consuming to write, understand, and maintain.
Concept: Graphics are conceptually simple: load data and execute code using references to that loaded data.
Prototype Implementation:
Demo: Depth of Field demo and associated code
Details:
This library utilizes Rust's type system to avoid the complexities and redundancies associated with writing GPU applications.
For comparison purposes, I implemented two of the wgpu examples. Please note, this is not a criticism of the wgpu library, but rather how we can build efficient higher-level APIs on complex lower-level ones.
Boids example: pgfx boids | original | demo
The boids example highlights compute shaders. In addition to the smaller code base, this example is more explicit about how the uniform buffer is created and how the boid model data is setup and loaded.
Shadow example: pgfx shadow | original | demo
The shadow example highlights using texture arrays as render targets, indexing into texture arrays and uniform arrays, dynamic inputs (fallback to uniform buffers if storage buffers are unsupported), and custom backend types (depth sampler) and configurations. Honestly, when setting up this example, it took me a long time to fully understand what the original was doing. In addition to the new one being clearer, it actually performs better (likely due to the smaller number of copy operations).
The same concepts can be carried over to other things, like the DirectX shared root signature (improves performance due to less binding):
```rust
let root = root_signature
    .run(&device)
    .load_input(&my_uniforms)
    .execute(|cfg| {
        // One-time load
        cfg.load_input(&texture_atlas)?;
        cfg.load_input(&skybox)?;
        cfg.load_input(&Sampler::linear_repeat())?;
        cfg.load_input(&Sampler::nearest_repeat())?;
        Ok(())
    })?;

render_pass
    .run(&my_pipeline)
    .input(&root)
    .load_input(&MyConstants { .. })
    .execute(|cfg| { ... })?;
```
Currently, I do not have the capacity to publish and maintain this, but I wanted to throw this out there in case it is useful to others. If you are interested in using this as a starting point for a maintained library, then go for it!
Hello rustaceans. I am trying to understand the "right" way to program in Rust. I'm reading The Rust Book and a few others; they're great for learning but not quite a handy reference or cheat sheet, and not so community-backed. I'm wondering what the community at large considers Rust "best" practices.
Any tricks, tips, must do, must not do, great patterns, anti-patterns appreciated.
Are these generally good?
https://rust-lang.github.io/api-guidelines/
https://doc.rust-lang.org/stable/book/ch03-00-common-programming-concepts.html
https://github.com/apollographql/rust-best-practices
https://microsoft.github.io/rust-guidelines/guidelines/index.html
Thanks
Yesterday I asked the question “How often do you use unsafe in prod?” (post below), and a lot of you seem to not really use it directly.
It seems most usage comes from FFI, embedded, or “exotic” data structures, which many of you don't need in prod, at least not often.
Some pointed out that you technically use unsafe through the underlying libraries you import.
Now, I had lived under the impression that most prod Rust code that is not crypto will use unsafe or otherwise require very intimate low-level knowledge, so yesterday's answers were quite eye-opening (unless a lot of you work in crypto).
I think it's relevant to know what sort of Rust jobs you all have. Or, if you don't have a Rust job per se, what sort of work do you use Rust for?
Again, I'd like to focus on prod work if possible. Thank you all for the answers so far.
Hi, I've been building a library that fills the gap between serde_json::Value and #[derive(Deserialize)]. It coerces types progressively, tells you what it changed, and doesn't bail on the first surprise.
```rust
use laminate::FlexValue;

let data = FlexValue::from_json(r#"{"port": "8080", "debug": "true"}"#)?;
let port: u16 = data.extract("port")?;    // "8080" → 8080
let debug: bool = data.extract("debug")?; // "true" → true
```
Four coercion levels from Exact to BestEffort. Every coercion emits a diagnostic with a risk level so you can see exactly what happened.
It also does type detection (guess_type() returns ranked guesses with confidence scores for 16+ types), has a derive macro for struct shaping, and includes domain packs for dates, currencies, medical lab values, and identifier validation.
I'd appreciate any feedback, especially on the coercion level design and the API surface.
cargo add laminate --features full
Thanks.
First and foremost, please forgive me if this isn't the place to ask about such things; perhaps r/Montreal is better suited, and if so, just let me know and I'll take this down.
-----
I'm from New Zealand, and the Rust community is really small here, so my learning journey has been completely self-motivated.
I've recently been offered a place to stay in Montréal from June to September, and I have since learned that RustConf is being hosted in Montréal from September 8 to 11, which is really exciting and I'd love to attend!
This accommodation offer, in some respects, feels like a rare opportunity to find my community.
In that, my questions are quite simple:
Cheers, crabs :)
-----
Again, if you'd like for me to remove this post, I will without hesitation.
I shared GoAI SDK here a few weeks ago. Since then it's added MCP support, prompt caching, and OpenTelemetry integration.
I wrote up the design decisions behind it: why minimal dependencies matter for supply chain security, how Go generics enable type-safe structured output, and how the provider abstraction works across OpenAI-compatible and non-compatible APIs.
https://dev.to/vietanh/why-and-how-i-built-a-go-ai-sdk-26ob
Next up: I'm thinking about adding an agent orchestration framework on top of the SDK.
Let me know your thoughts.
I've been building a tool (wile-goast) that exposes Go's compiler internals — AST, SSA, CFG, call graphs — as composable Scheme primitives. The idea is that AI agents (and humans) can write short scripts to ask structural questions about Go code that grep and file reading can't answer reliably.
The most interesting part is a "belief DSL" based on Engler et al.'s observation that bugs are deviations from statistically common behavior. You define a pattern you expect to hold across a codebase, and the tool finds the violations.
Here's a belief I ran against etcd's server package:
```scheme
(define-belief "raft-dispatch-convention"
  (sites (functions-matching
           (any-of (contains-call "raftRequest")
                   (contains-call "raftRequestOnce"))))
  (expect (contains-call "raftRequest"))
  (threshold 0.75 5))

(run-beliefs "go.etcd.io/etcd/server/v3/etcdserver")
```
This says: "find every function that calls either raftRequest or raftRequestOnce. If 75%+ use one over the other (with at least 5 sites), treat that as the convention and report deviations."
Output:
```
── Belief: raft-dispatch-convention ──
Pattern: present (24/31 sites)
  DEVIATION: etcdserver.NewServer -> absent
  DEVIATION: etcdserver.LeaseGrant -> absent
  DEVIATION: etcdserver.LeaseRevoke -> absent
  DEVIATION: etcdserver.Alarm -> absent
  DEVIATION: etcdserver.AuthEnable -> absent
  DEVIATION: etcdserver.Authenticate -> absent
  DEVIATION: etcdserver.raftRequest -> absent
```
24 of 31 callers use raftRequest. 7 use raftRequestOnce directly. The thing is, raftRequest is a one-liner that just delegates to raftRequestOnce — no retry logic, no wrapping, nothing. The naming implies a semantic distinction that doesn't exist. Sibling methods like AuthEnable and AuthDisable make different choices for no reason.
Filed as etcd-io/etcd#21515.
What the tool actually does. Go's compiler toolchain (go/ast, go/types, x/tools/go/ssa, go/callgraph, go/cfg, go/analysis) is exposed as Scheme primitives through Wile, an R7RS interpreter. The belief DSL sits on top — you declare what you expect, it loads the relevant analysis layers lazily, and reports where reality diverges from the majority pattern.
It runs as an MCP server (wile-goast --mcp), so Claude Code or any MCP client can use it as a tool during code review.
What this is not. It's not a replacement for gopls, golangci-lint, or Serena. Those are better for navigation, predefined lint rules, and symbol-level code access respectively. This is for project-specific questions — conventions unique to your codebase that no predefined rule would check.
Hey r/golang,
I shared kumo here a while back and got great feedback. Here's what's new in v0.8.0.
GitHub: https://github.com/sivchari/kumo
Optional data persistence - Set KUMO_DATA_DIR and your data survives restarts. No more recreating test fixtures every time you restart the emulator. Works with both Docker volumes and local directories:
```bash
docker run -p 4566:4566 -e KUMO_DATA_DIR=/data -v kumo-data:/data ghcr.io/sivchari/kumo:latest

KUMO_DATA_DIR=./data kumo
```
Without KUMO_DATA_DIR, kumo stays fully in-memory - zero disk I/O, ideal for CI.
73 AWS services supported (was 71) - Added Location Service and Macie2.
New operations on existing services:
Other improvements:
You can import kumo directly in your Go tests. No Docker, no port management:
```go
import "github.com/sivchari/kumo"
```
Docker: docker run -p 4566:4566 ghcr.io/sivchari/kumo:latest
Homebrew: brew install sivchari/tap/kumo
All services are tested with integration tests using the actual AWS SDK v2. Feedback, issues, and contributions welcome!
When testing gRPC services or inspecting binary data from databases/queues, it's often painful to decode Protobuf payloads quickly. Built this tool to solve that.
https://protobufjsondecoder.netlify.app/
Useful when you need to quickly inspect what's actually inside a Protobuf message during development or QA.
Happy to hear feedback!
Ark is an archetype-based Entity Component System (ECS) for Go.
This release brings several performance improvements, but the standout feature is a new query iteration mechanism. Instead of iterating entity-by-entity, Ark can now expose entire component columns directly to the user. Query iteration is roughly 2× faster with this pattern.
Example:
```go
for query.NextTable() {
    positions, velocities := query.GetColumns()
    for i := range positions {
        pos, vel := &positions[i], &velocities[i]
        pos.X += vel.X
        pos.Y += vel.Y
    }
}
```
This pattern will also make it easier to leverage Go's upcoming SIMD support.
For a full list of changes, see the changelog: https://github.com/mlange-42/ark/blob/main/CHANGELOG.md
Feedback and contributions are always welcome. If you're using Ark in your game, simulation or engine, we'd love to hear about it.
We are working on an open-source tool in Go that wraps package managers (npm, pip, etc.), and one of the interesting problems was: even if you detect a malicious package, what about unknown threats? Postinstall scripts can still read your .env, .ssh, and .aws before you've caught them.
So we sandboxed the install process itself. Here's roughly how it works:
On Linux: we use bubblewrap (bwrap) under the hood. The install process runs inside a mount namespace where we selectively bind-mount only what's needed. Credential directories like .ssh and .aws get hidden via --tmpfs so they don't even appear inside the sandbox. Deny rules for individual files use --ro-bind /dev/null <path> to shadow them.
One non-obvious edge case: if a deny target doesn't exist yet, you can't use that trick; bwrap would create an empty file as a mount point on the host, leaving ghost files in your home dir after the sandbox exits. So non-existent deny targets get skipped.
Glob patterns like ${HOME}/.cache/** have a fallback: if expansion yields too many paths, it coarse-grains to the parent dir instead, to avoid hitting bwrap's argument-list limit.
Landlock- and seccomp-based sandbox enforcement is under development and will become the default choice on Linux, with fallback to bubblewrap when Landlock is not available in the kernel.
On macOS: we use the native sandbox-exec with a seatbelt profile.
Policies are declarative, so you can allow or deny specific paths depending on your setup. The bwrap translation code is at sandbox/platform/
The tool is https://github.com/safedep/pmg