Tuesday, April 21, 2026
| Summary | ⛅️ Clear until afternoon, returning overnight. |
|---|---|
| Temperature Range | 11°C to 20°C (52°F to 68°F) |
| Feels Like | Low: 48°F / High: 69°F |
| Humidity | 68% |
| Wind | 13 km/h (8 mph), Direction: 230° |
| Precipitation | Probability: 0%, Type: No precipitation expected |
| Sunrise / Sunset | 🌅 06:09 AM / 🌇 07:24 PM |
| Moon Phase | Waxing Crescent (15%) |
| Cloud Cover | 10% |
| Pressure | 1014.3 hPa |
| Dew Point | 50.37°F |
| Visibility | 6.11 miles |
I've been thinking about what a solid Go learning path looks like. Curious which topics you all think are underserved or poorly explained in existing resources. Concurrency? Error handling patterns? Project structure? What tripped you up the most?
I've already created two that, in my opinion, deserve to come first:
- Go Essentials
- Go Concurrency Patterns
Concurrency especially is something I wish I'd had a good resource for when I was starting out.
Please share your opinions and let me know what you think. Even if you think we don't need any new resources because we already have enough, share that thought as well; it might shed some light on what I'm doing. Thanks so much.
| batch | CUDA (FP32) qps | TensorRT (FP16) qps | speedup | CUDA (FP32) p99 ms | TensorRT (FP16) p99 ms |
|---:|---:|---:|---:|---:|---:|
| 1 | 629 | 2,117 | **3.37x** | 45.4 | 18.3 |
| 5 | 613 | 2,085 | **3.40x** | 44.2 | 14.4 |
| 10 | 590 | 1,998 | **3.39x** | 42.2 | 10.1 |
| 25 | 541 | 1,894 | **3.50x** | 47.8 | 15.3 |
| 50 | 454 | 1,620 | **3.57x** | 58.5 | 18.7 |
| 100 | 329 | 1,207 | **3.67x** | 83.1 | 26.1 |
| 200 | 212 | 697 | **3.29x** | 122.2 | 42.4 |
This is the weekly thread for Small Projects.
The point of this thread is to have looser posting standards than the main board. As such, projects are pretty much only removed from here by the mods for being completely unrelated to Go. However, Reddit often labels posts full of links as spam, even when they are perfectly sensible things like links to a project, its godocs, and an example. The r/golang mods are not the ones removing posts from this thread, and we will re-approve them as we see the removals.
Please also avoid posts like "why", "we've got a dozen of those", "that looks like AI slop", etc. This is the place to put any project people feel like sharing without worrying about those criteria.
Guac (GB/GBC/GBA/NDS) now supports DS emulation! It includes a JIT compiler, a 3D scene exporter, direct boot without BIOS or firmware files, and a ton of configurable options. Additionally, GB/GBC performance and accuracy have improved 2x. A big thank you to the emudev and golang communities!
Hello everyone,
I just wanted to share this small library for anyone who might find it useful.
If you are already running in a Kubernetes cluster, you can probably get metrics from the apiserver. However, there are a few reasons why you might not want to:
Disclaimer: containerstats is being used in Reqfleet, a new load testing as a service platform.
submitted by /u/SnooWords9033
I'm currently building a 2D MMORPG using TypeScript for both the game client and the backend. I'm thinking about rewriting my backend in Go, since it handles concurrency well and scales well. I'm still new to Go, though, and I don't know whether it's a well-supported language for game development. Is there a physics engine for Go equivalent to Matter.js? And in general, would you recommend Go for this use case?
I am trying to create an app that reads PDF files, creates an outline of the shapes in them, and then allows you to color those shapes. The thing is, I want to filter out certain features, but I'm not sure how to do that. I'm new to Rust, so I know this is really ambitious, but I was hoping someone could point me in the right direction. I am currently planning on using egui, pdfium, and tiny-skia to accomplish this.
Hello everyone,
For the longest time, I've been using pure-Python parsers to get oscilloscope data into NumPy for analysis in my lab. While they work, the execution latency started getting on my nerves as our datasets grew: waiting for the interpreter to comb through hundreds of deep-memory binary files.
As one does when they hit a wall with Python, I started looking into faster alternatives. Naturally, Rust was at the top of my list. I wanted to see if I could build a backend that made the parsing process feel instant, so I started working on this little project.
I've been using it around the lab and with a few friends for a while now. It turned out significantly faster than I expected, so I decided to generalize it and put it on GitHub for anyone else in the same situation.
Some things I added:
Virtual Memory Mapping: I used memmap2 to map binary files directly into virtual memory. This avoids the standard RAM spikes and overhead of loading raw payloads into memory.
Parallel Extraction: By releasing the Python GIL and utilizing rayon, the parser can de-interleave ADC bytes across every available CPU core simultaneously.
Zero-Copy Handover: The Rust core writes data directly into a contiguous memory buffer that is handed to the Python runtime as a float32 NumPy array without any secondary copying.
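As a rough illustration of the de-interleaving step, here is a stdlib-only sketch: it uses `std::thread::scope` in place of rayon, assumes a made-up two-channel, little-endian i16 frame layout, and skips the memory mapping and GIL handling the real crate does.

```rust
use std::thread;

/// De-interleave two-channel, little-endian i16 ADC samples into
/// per-channel f32 vectors, one thread per channel.
/// (Sketch only: the real crate uses rayon across all cores and memmap2.)
fn deinterleave(raw: &[u8]) -> (Vec<f32>, Vec<f32>) {
    // Each 4-byte frame is [ch0_lo, ch0_hi, ch1_lo, ch1_hi].
    let extract = |offset: usize| -> Vec<f32> {
        raw.chunks_exact(4)
            .map(|f| i16::from_le_bytes([f[offset], f[offset + 1]]) as f32)
            .collect()
    };
    thread::scope(|s| {
        let ch0 = s.spawn(|| extract(0));
        let ch1 = s.spawn(|| extract(2));
        (ch0.join().unwrap(), ch1.join().unwrap())
    })
}

fn main() {
    // Two frames: ch0 samples are [1, -2], ch1 samples are [300, 400].
    let raw: Vec<u8> = [1i16, 300, -2, 400]
        .iter()
        .flat_map(|v| v.to_le_bytes())
        .collect();
    let (ch0, ch1) = deinterleave(&raw);
    println!("{:?} {:?}", ch0, ch1);
}
```

The same per-channel split is what rayon parallelizes across all cores once the GIL is released.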
I tested this on my daily driver, a ThinkPad T470s (Intel i5-6300U), to see what it could do on resource-constrained lab hardware. I was kind of blown away; Rust blew my mind again. I got sub-millisecond execution parsing the metadata, and an end-to-end extraction of a 12 MB Rigol capture that took 375.2 ms in pure Python now finishes in 53.5 ms on my 9-year-old laptop (about a 7x speedup).
It’s been tailored for our specific needs, but I’ve tried my best to make it flexible for others. It currently supports Rigol (DS1000Z, DS1000E/D, DS2000) and Tektronix (WFM#001-003) families.
If anybody wants to check it out, here's the GitHub: https://github.com/SGavrl/WfmOxide
Feedback is more than welcome. Especially if you have different .wfm file versions or suggestions on the PyO3/Rust bridge implementation.
Hello,
When I started building allumette, which means "match" (the thing you strike to light things up) in French, I had three goals in mind:
Now, more than a year later, allumette has a tensor library with built-in autodifferentiation that can train neural networks, all on 3 backends:
It also has a TUI built with ratatui so that you can visualize the neural network training process.
If you're interested in these topics maybe you will find the project useful: https://github.com/BenFradet/allumette
Thank you,
Disclaimer: this was built without AI.
If you've used sqlx's query! macro, you know the drill: run cargo sqlx prepare, commit sqlx-data.json, keep a database running in CI. It works, but it's not frictionless.
qusql-sqlx-type is an (almost) drop-in replacement for sqlx::query! that:
- needs nothing beyond a dependency in your `Cargo.toml`
- checks your queries at `cargo check` time, with no database connection

When you typo a column name, you get this at compile time:
```
error: ╭─[ query:1:8 ]
       │
     1 │ SELECT titl, body FROM notes WHERE id = $1
       │        ──┬─
       │          ╰─── Unknown identifier
       │
       │ Help: did you mean `title`?
───────╯
  --> src/main.rs:7:24
```

This currently works for MySQL and PostgreSQL. Read more in this post.
For MySQL/MariaDB there is qusql-mysql-type, which wraps qusql-mysql, a cancellation-safe async driver that benchmarks roughly 1.5–2x faster than sqlx on MySQL workloads. More details here.
We've been using this in production at Scalgo for years, and now feel it's ready to share with a broader audience.
Feedback, questions, and contributions are very welcome!
Hi everyone,
I’ve been working on a music application written in Rust called Audium, and I’d really appreciate some feedback from the community.
Repository: https://github.com/takashialpha/audium
If you have a moment, please take a look at the README and TODO files to get a sense of the project’s current direction and planned features. I’m especially interested in:
Pull requests, issues, and discussions are all very welcome. I’m hoping to make this a solid, idiomatic Rust project and would love input from more experienced Rust developers.
Thanks in advance!
If you've ever tried to write complex compute shaders in raw wgpu, you know how much boilerplate you need for pipelines and bind groups. Some of my old multi-pass shaders required 1000+ lines of Rust just for the setup, and each one required a different approach. A while ago I got tired of writing the same boilerplate every time, and I recently finalized a declarative builder to solve it. I saw a recent post with people complaining about graphics-library boilerplate, so I thought it was a good time to share my personal project :)
cuneus uses a declarative builder and a strict "4-group" binding convention. It handles all the layout caching, dependency tracking, and ping-pong buffer flipping automatically (you can also take manual control if you don't want that much automation). You just define your passes in Rust, and the engine connects everything to your WGSL. For example, a 17-pass Navier-Stokes fluid sim (which also takes an input media texture) is only ~180 lines of Rust, and most of that is just setting up egui:
https://github.com/altunenes/cuneus/blob/main/examples/fluid.rs
https://github.com/altunenes/cuneus/blob/main/examples/shaders/fluid.wgsl
So you can focus on your math and shaders more easily while still getting the full benefit of the Rust side.
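The ping-pong flipping mentioned above can be pictured with a CPU-side sketch: plain `Vec<f32>` buffers stand in for GPU textures, and each pass reads the previous result while writing the other buffer. This is illustrative only, not cuneus's actual API.

```rust
/// Minimal sketch of ping-pong buffer flipping: each pass reads the
/// buffer holding the latest result and writes the other one, then the
/// roles swap.
struct PingPong {
    buffers: [Vec<f32>; 2],
    current: usize, // index of the buffer holding the latest result
}

impl PingPong {
    fn new(size: usize) -> Self {
        Self { buffers: [vec![0.0; size], vec![0.0; size]], current: 0 }
    }

    /// Run one pass: `f(src, dst)` computes dst from src, then flip.
    fn pass(&mut self, f: impl Fn(&[f32], &mut [f32])) {
        let (a, b) = self.buffers.split_at_mut(1);
        let (src, dst) = if self.current == 0 {
            (&a[0], &mut b[0])
        } else {
            (&b[0], &mut a[0])
        };
        f(src.as_slice(), dst.as_mut_slice());
        self.current = 1 - self.current;
    }

    fn result(&self) -> &[f32] {
        &self.buffers[self.current]
    }
}

fn main() {
    let mut pp = PingPong::new(4);
    // Three passes that each add 1.0 to every "texel".
    for _ in 0..3 {
        pp.pass(|src, dst| {
            for (d, s) in dst.iter_mut().zip(src) {
                *d = s + 1.0;
            }
        });
    }
    println!("{:?}", pp.result());
}
```

A multi-pass engine automates exactly this bookkeeping per pass, plus the bind-group rebinding that goes with it on the GPU.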
It also has integrated GStreamer, so sending videos, webcams, or audio FFTs directly to your shaders is very easy.
To test its limits, I made a few examples (mostly complex shaders demonstrating what cuneus can handle): https://github.com/altunenes/cuneus/tree/main/examples
For instance: a 3D Gaussian splatting inference engine (using my optimized custom 16-bit GPU radix sort; note that I "optimized" the excellent work from wgpu_sort (BSD 2-Clause License) by Simon Niedermayr and Josef Stumpfegger). Old-school real-time 2D Gaussian training (live Adam) with an input texture. A playable 3D block-tower game where the game state lives entirely in a compute storage buffer. A Daft Punk (Veridis Quo) software synth that generates audio per sample directly on the GPU (basically I tried to code the sound to reproduce Veridis Quo: https://github.com/altunenes/cuneus/blob/main/examples/shaders/veridisquo.wgsl to demonstrate that you can create simple melodies by coding shaders). Or complex physarum experiments: https://github.com/altunenes/cuneus/blob/main/examples/physarum.rs (see how simple the Rust file stays even for that complex pipeline) https://github.com/altunenes/cuneus/blob/main/examples/shaders/physarum.wgsl
I also know that bundling native media frameworks across platforms is usually really hard, especially for GStreamer, so I set up a CI/CD pipeline to make sharing easy, which is what cuneus was really made for. Most examples compile into standalone click-and-run executables, and the pipeline automatically bundles GStreamer for Windows, macOS, and Linux, so you don't need to set up any build environment; just download a release and open it.
check releases to download and test my experiments:
https://github.com/altunenes/cuneus/releases
It's also important to me because I sometimes ship small GPU apps (and art projects), and I've been using my own engine for my commercial projects for the last 3 years, so I keep upgrading it whenever I need something. I hope it will be useful or inspiring to someone :-)
I have been trying to learn how to program an ESP32 for casual home projects, but the amount of information seems to be really lacking. I tried esp-hal, but now I am going with esp-idf since it is a bit easier to work with. The problem is that there is zero documentation. Whenever I google things, it feels like shouting into the void. Besides that, the surrounding ecosystem has the same problem: libraries with no docs that haven't had their 1.0 release.
Do any of you experienced with embedded programming have any advice? I constantly find myself using AI, because at least it has plagiarized some repos and can answer some of my questions.
I also wonder whether I should have gone with the Raspberry Pi Zero, but I really like the idea of the ESP32.
One thing I think is not talked about enough is how smooth learning Rust is: not in the sense that the syntax is easy (for me at least it isn't), but in the sense of how easy it is to actually find the material you need to learn Rust.
A few months ago I started learning C++ and was immediately stressed out by how many books and sites there were about it. I went on Reddit, everyone was recommending different books and different materials, and I went down a big rabbit hole before actually starting to learn anything.
With Rust it was different: I went to the site, downloaded the compiler and Cargo, and started reading The Book. Everything felt so natural and comforting, and I absolutely love Cargo with all my heart (f#ck you, CMake).
I hope I was able to express this opinion well enough; I'm Italian, so my English isn't great.
stet is a ground-up Rust implementation of a PostScript Level 3 interpreter, a PDF reader, and a print-quality PDF writer — all three converging on a single DisplayList representation, so every output device (PNG, egui viewer, PDF, WASM) works with any source.
Try it in your browser: https://andycappdev.github.io/stet/ — drop a PS, EPS, or PDF and watch it render client-side.
Install:
```
cargo install stet-cli   # CLI binary
cargo add stet           # library
```
What's there
- PostScript interpreter — ~320 operators, Level 3 complete. Type 1 / CFF / TrueType / CID / Type 3 fonts. All 7 shading types (axial, radial, Gouraud mesh, Coons/tensor patch). CIE color spaces. ICC with system CMYK profile auto-detection. Filters (Flate, LZW, DCT, ASCII85, eexec, …). 35 URW fonts embedded.
- PDF reader — PDF 1.0–2.0, RC4/AES-128/AES-256 encryption, xref streams, standard filters. No dependency on the PS interpreter — use it standalone.
- PDF writer — print-workflow quality with native CMYK + spot colour preservation, subsetted font embedding, all 7 shading types round-tripped.
Why a shared display list matters
Most projects in this space are only one of those three things. stet's interpreters produce an intermediate DisplayList; rendering is a separate pass that consumes it. That gives you viewport rendering at any zoom without re-interpreting, pipelined multi-page output (the rasterizer consumes while the interpreter produces), trivial cancellation between bands, and multiple output formats from one interpretation pass, and it makes adding new output formats much easier.
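The shared-display-list idea can be pictured with a toy sketch. The names here are illustrative, not stet's actual API: interpreters emit device-independent ops once, and each output device is just another consumer of the same list.

```rust
/// A tiny device-independent display list (illustrative ops only).
#[derive(Debug, Clone)]
enum DisplayOp {
    MoveTo(f32, f32),
    LineTo(f32, f32),
    Fill { rgba: [u8; 4] },
}

type DisplayList = Vec<DisplayOp>;

/// One consumer: count path-building ops. A real consumer would
/// rasterize to PNG, emit PDF objects, or draw into an egui viewer.
/// Re-rendering at a new zoom only re-runs a consumer; the list
/// itself is produced once by the interpreter.
fn count_path_ops(list: &DisplayList) -> usize {
    list.iter()
        .filter(|op| matches!(op, DisplayOp::MoveTo(..) | DisplayOp::LineTo(..)))
        .count()
}

fn main() {
    let list: DisplayList = vec![
        DisplayOp::MoveTo(0.0, 0.0),
        DisplayOp::LineTo(10.0, 0.0),
        DisplayOp::LineTo(10.0, 10.0),
        DisplayOp::Fill { rgba: [255, 0, 0, 255] },
    ];
    println!("{}", count_path_ops(&list));
}
```

The decoupling is the point: a new output format is one new consumer function, with no change to the interpreters.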
Architecture
13-crate Cargo workspace (14 with stet-wasm). stet-pdf-reader depends only on stet-fonts + stet-graphics, not on stet-core — use it without pulling in the PS interpreter. Rasterizer is a vendored tiny-skia fork with analytical-AA hairline modifications. WASM build is single-threaded and runs at ~1.8× native overhead.
Honest limitations
- The WASM demo is a capability sampler, not a production viewer — no system fonts, fixed zoom stops, single-threaded. For real work, use the native crates.
- Output devices: PNG, PDF, egui viewer, WASM. No SVG or TIFF yet.
- 762 passing Rust tests + a 119-baseline visual regression suite (3 known-diff baselines). Not every obscure PostScript corner is perfect.
License
Apache-2.0 OR MIT.
Repo: https://github.com/AndyCappDev/stet
Crates: https://crates.io/crates/stet
I just released version 1.0.0 of cargo-aprz. It's a cargo plugin that lets you appraise the quality of dependencies. For any given crate, it collects a large number of metrics, such as the number of open issues, the frequency of releases, the existence of security advisories, the number of examples, the code coverage percentage, and many more. You can view nice reports showing you all of these metrics in an easy-to-consume form.
Please let me know if you'd like more metrics or other features.
Shamelessly plugging Rasant, a high-performance structured logging library for Rust that I'm currently working on 👉👈
https://github.com/plisandro/rasant
Rasant was born as an ad-hoc logging solution for another project, fueled by a bit of frustration with the existing loggers for Rust. Eventually, I decided to turn it into a general-purpose library, mostly to get acquainted with Rust's crate system.
The main goal is performance: the library is opinionated (e.g., on output formats) but otherwise quite flexible and configurable. The latest stable release already compares very favorably to popular Rust logging solutions, especially regarding throughput and heap usage.
Comments, feedback, and bug reports are very welcome! There are a bunch of features and performance tweaks I want to implement before v1.0.0, but every minor release of the crate is stable and functional.
I'm messing around with a personal project, laying some foundations and seeing how things work under the hood. I came across some repos relevant to my project that were also perfect for building low-level foundational understanding, since for most of us it may have been a while, given how many ready-made frameworks there are today.
For this project I needed the Bosch Sensortec BNO055, which is basically a gyroscope sensor, or more precisely a "9-axis IMU", that you can connect to boards like the Raspberry Pi and Pico. I forked this repo: https://github.com/eupn/bno055 . It isn't maintained anymore, which is understandable since this is pretty basic stuff that doesn't need constant updates or optimization, but it had some bugs in it, and I saw some minor optimization opportunities.
So my approach was pretty simple: create a thorough test harness around it, make sure it works as expected, do a small refactor, optimize, and test again.
### Before optimizations (upstream baseline)
| Read type | I2C bytes | Time | Max throughput |
|---|---|---|---|
| temperature | 1 | 1.02 ms | ~980 Hz |
| calibration_status | 1 | 1.04 ms | ~960 Hz |
| accel_data | 6 | 1.69 ms | ~590 Hz |
| gyro_data | 6 | 1.67 ms | ~600 Hz |
| mag_data | 6 | 1.71 ms | ~585 Hz |
| euler_angles | 6 | 1.70 ms | ~588 Hz |
| linear_acceleration | 6 | 1.70 ms | ~588 Hz |
| gravity | 6 | 1.68 ms | ~595 Hz |
| quaternion | 8 | 1.97 ms | ~508 Hz |
| all 6 sensors (individual) | ~33 | 9.83 ms | ~102 Hz |
| calibration_profile | 22 + mode switch | 43.9 ms | — |
| init | reset + configure | 653.6 ms | — |
> **Notice:** Every sensor read wasted an I2C write to set the register page, even when already on the correct page. A full sensor loop barely fit in a 10 ms window (100 Hz).
### After optimizations (this fork)
| Read type | I2C bytes | Time | Improvement |
|---|---|---|---|
| temperature | 1 | 0.60 ms | -41% |
| calibration_status | 1 | 0.60 ms | -42% |
| accel_data | 6 | 1.26 ms | -25% |
| gyro_data | 6 | 1.24 ms | -26% |
| mag_data | 6 | 1.26 ms | -26% |
| euler_angles | 6 | 1.24 ms | -27% |
| linear_acceleration | 6 | 1.26 ms | -26% |
| gravity | 6 | 1.26 ms | -25% |
| quaternion | 8 | 1.53 ms | -22% |
| all 6 sensors (individual) | ~33 | 7.16 ms | -27% |
| all_sensor_data (bulk) | 45 | 6.28 ms | -36% |
| calibration_profile | 22 + mode switch | 42.6 ms | -3% |
| init | reset + configure | 652.0 ms | no change |
## Changes from upstream
### Bug fixes
- **`AxisRemap::y()` returned wrong axis** — the getter returned `self.x` instead of `self.y`, hidden by `#[allow(clippy::misnamed_getters)]`. Fixed and lint allow removed.
### Safety
- **Removed both `unsafe` blocks** in `BNO055Calibration`. `from_buf()` now constructs field-by-field. `as_bytes()` now returns an owned `[u8; 22]` instead of an unsafe `&[u8]` tied to a raw pointer cast.
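A hedged sketch of that pattern, with field names that are illustrative only (based just on the 22-byte profile the post describes, not the crate's actual layout): serialization goes field-by-field through `to_le_bytes`/`from_le_bytes` instead of an unsafe pointer cast, and `as_bytes()` returns an owned array.

```rust
/// Illustrative calibration profile: 9 offset words + 2 radius words
/// = 11 i16 values = 22 bytes, serialized without any unsafe code.
#[derive(Debug, PartialEq)]
struct Calibration {
    accel_offset: [i16; 3],
    gyro_offset: [i16; 3],
    mag_offset: [i16; 3],
    accel_radius: i16,
    mag_radius: i16,
}

impl Calibration {
    /// Owned buffer out: no lifetime tied to a raw pointer cast.
    fn as_bytes(&self) -> [u8; 22] {
        let mut out = [0u8; 22];
        let words = self
            .accel_offset.iter()
            .chain(&self.gyro_offset)
            .chain(&self.mag_offset)
            .chain([&self.accel_radius, &self.mag_radius]);
        for (i, w) in words.enumerate() {
            out[2 * i..2 * i + 2].copy_from_slice(&w.to_le_bytes());
        }
        out
    }

    /// Field-by-field construction instead of transmuting the buffer.
    fn from_buf(buf: &[u8; 22]) -> Self {
        let w = |i: usize| i16::from_le_bytes([buf[2 * i], buf[2 * i + 1]]);
        Self {
            accel_offset: [w(0), w(1), w(2)],
            gyro_offset: [w(3), w(4), w(5)],
            mag_offset: [w(6), w(7), w(8)],
            accel_radius: w(9),
            mag_radius: w(10),
        }
    }
}

fn main() {
    let cal = Calibration {
        accel_offset: [1, -2, 3],
        gyro_offset: [4, 5, 6],
        mag_offset: [-7, 8, 9],
        accel_radius: 1000,
        mag_radius: 480,
    };
    // Round-trip: serialize then parse back, no unsafe anywhere.
    assert_eq!(Calibration::from_buf(&cal.as_bytes()), cal);
    println!("round-trip ok");
}
```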
### Performance
- **Page tracking** — `set_page()` tracks the current page and skips the I2C write when the requested page is already active. After `soft_reset()`, the tracker resets to page 0.
- **Bulk sensor read** — new `all_sensor_data()` method reads all sensor registers in one I2C transaction. Returns `AllSensorData` with `Option` fields based on mode availability.
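The page-tracking optimization boils down to caching the last page written and skipping the redundant I2C write. A sketch with a mock bus (types and method names are made up for illustration; the real driver talks to embedded-hal I2C):

```rust
/// Mock I2C bus that just counts how many page writes actually happen.
struct MockBus {
    writes: u32,
}
impl MockBus {
    fn write_page_register(&mut self, _page: u8) {
        self.writes += 1;
    }
}

struct Driver {
    bus: MockBus,
    current_page: Option<u8>, // None until the first write
}

impl Driver {
    /// Skip the I2C write when the requested page is already active.
    fn set_page(&mut self, page: u8) {
        if self.current_page != Some(page) {
            self.bus.write_page_register(page);
            self.current_page = Some(page);
        }
    }

    /// After a soft reset the device is back on page 0, so the tracker
    /// resets to page 0 as well (mirroring the fork's behavior).
    fn soft_reset(&mut self) {
        self.current_page = Some(0);
    }
}

fn main() {
    let mut d = Driver { bus: MockBus { writes: 0 }, current_page: None };
    d.set_page(0); // real write
    d.set_page(0); // skipped: already on page 0
    d.set_page(1); // real write
    d.set_page(1); // skipped
    println!("{}", d.bus.writes);
}
```

Since most reads stay on one page, nearly every per-read page write disappears, which is where the ~25-40% latency wins above come from.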
### API changes
- `BNO055Calibration::as_bytes()` returns `[u8; 22]` instead of `&[u8]`.
- `AxisRemapBuilder::build()` returns `Result<AxisRemap, InvalidAxisRemap>` instead of `Result<AxisRemap, ()>`.
- New `all_sensor_data()` method and `AllSensorData` struct.
### Dependencies removed
- `byteorder` — replaced with `i16::from_le_bytes()` / `u16::from_le_bytes()` from core.
- `num-derive` — replaced `FromPrimitive` derive with manual match arms.
- `num-traits` — no longer needed without `FromPrimitive`.
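Both replacements are small enough to show in one sketch. The enum and register values below are illustrative, not the crate's actual ones: the point is that a manual `match` replaces the `FromPrimitive` derive, and core's `from_le_bytes` replaces `byteorder`.

```rust
/// Manual replacement for #[derive(FromPrimitive)]: an explicit match.
#[derive(Debug, PartialEq, Clone, Copy)]
enum PowerMode {
    Normal = 0x00,
    LowPower = 0x01,
    Suspend = 0x02,
}

impl PowerMode {
    fn from_u8(v: u8) -> Option<Self> {
        match v {
            0x00 => Some(Self::Normal),
            0x01 => Some(Self::LowPower),
            0x02 => Some(Self::Suspend),
            _ => None, // unknown register value
        }
    }
}

fn main() {
    assert_eq!(PowerMode::from_u8(0x01), Some(PowerMode::LowPower));
    assert_eq!(PowerMode::from_u8(0xFF), None);
    // byteorder's LittleEndian::read_i16 becomes core's from_le_bytes:
    assert_eq!(i16::from_le_bytes([0x34, 0x12]), 0x1234);
    println!("ok");
}
```

Dropping three dependencies for a dozen lines of core-only code is a common trade in embedded crates, where build footprint matters.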
### Architecture
- `lib.rs` (996 lines) split into 9 focused modules. No breaking public API change.
- Internal fields and helpers changed from private to `pub(crate)` to support the split.
LICENSE: MIT
Basically what the title says. Right now it's a decision between making your users manually call `Box::pin(foo)` or crippling rust-analyzer by using `async_trait`; both have DX trade-offs.
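For concreteness, here is the manual side of that trade-off: the trait method returns a boxed future, so each implementor writes the `Box::pin` themselves. Trait and type names are made up for illustration; the no-op waker at the bottom is only there so the sketch can poll the future without pulling in an executor.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Manual desugaring of `async fn fetch(&self) -> u32` in a trait:
// no async_trait macro, but implementors must box and pin by hand.
trait Fetch {
    fn fetch(&self) -> Pin<Box<dyn Future<Output = u32> + '_>>;
}

struct Fixed(u32);

impl Fetch for Fixed {
    fn fetch(&self) -> Pin<Box<dyn Future<Output = u32> + '_>> {
        Box::pin(async move { self.0 }) // the Box::pin users must write
    }
}

/// Minimal no-op waker so we can poll without an executor dependency.
fn noop_waker() -> Waker {
    fn clone(p: *const ()) -> RawWaker {
        RawWaker::new(p, &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let f = Fixed(7);
    let mut fut = f.fetch();
    // An async block with no awaits completes on the first poll.
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => println!("{v}"),
        Poll::Pending => unreachable!(),
    }
}
```

`async_trait` generates essentially this same boxing under the hood; the question is only who writes it and how well tooling copes with the macro expansion.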
I need a proc-macro crate for the following:
```rust
enum MyEnum {
#[attribute(function_name: return_value, ...)]
Variant,
...
}
```
And the output would be something like:
```rust
enum MyEnum {
    Variant,
    // ...
}

impl MyEnum {
    fn function_name(&self) -> return_type {
        match self {
            MyEnum::Variant => return_value,
            // ...
        }
    }
    // ...
}
```
I've searched for a while and couldn't find any. Are there alternatives? Can I do it better? If all else fails, I'd just have to write and publish my own crate.
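One alternative worth considering before publishing a proc-macro crate: a declarative `macro_rules!` macro can generate the enum together with match-based getters. A hedged sketch (hypothetical macro, with a different invocation syntax than the attribute form asked about, and less flexible):

```rust
/// Generates an enum plus a getter method whose body is a match over
/// the variants. Sketch only: one getter per invocation, no attributes.
macro_rules! enum_with_getter {
    (
        enum $name:ident { $($variant:ident),+ $(,)? }
        fn $getter:ident(&self) -> $ret:ty { $($v:ident => $val:expr),+ $(,)? }
    ) => {
        #[derive(Debug)]
        enum $name { $($variant),+ }

        impl $name {
            fn $getter(&self) -> $ret {
                match self {
                    $(Self::$v => $val),+
                }
            }
        }
    };
}

enum_with_getter! {
    enum MyEnum { A, B }
    fn label(&self) -> &'static str { A => "first", B => "second" }
}

fn main() {
    println!("{}", MyEnum::B.label());
}
```

The proc-macro version buys you the nicer `#[attribute(...)]` syntax on variants and better error messages; the declarative version costs nothing to ship and needs no extra crate.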
I’ve been working on a small open-source project to monitor Postgres from the terminal
https://github.com/nbari/pgmon
pgmon is a real-time PostgreSQL TUI monitoring tool inspired by pg_activity.
I’ve been maintaining several legacy environments (including RHEL7 VMs) and later dealing with Kubernetes setups where connection pooling wasn’t always properly tuned. In many cases:
So I started building a tool to make this workflow faster and more observable directly from the terminal.
What it does:
The goal is to make it easier to debug connection issues and query behavior without adding extra components or leaving the terminal.
It’s still early, but I’d really appreciate feedback, ideas, or contributions
Especially interested in:
Thanks!
New week, new Rust! What are you folks up to? Answer here or over at rust-users!
Mystified about strings? Borrow checker has you in a headlock? Seek help here! There are no stupid questions, only docs that haven't been written yet. Please note that if you include code examples to e.g. show a compiler error or surprising result, linking a playground with the code will improve your chances of getting help quickly.
If you have a StackOverflow account, consider asking it there instead! StackOverflow shows up much higher in search results, so having your question there also helps future Rust users (be sure to give it the "Rust" tag for maximum visibility). Note that this site is very interested in question quality. I've been asked to read an RFC I authored once. If you want your code reviewed or want to review others' code, there's a codereview stackexchange, too. If you need to test your code, maybe the Rust playground is for you.
Here are some other venues where help may be found:
/r/learnrust is a subreddit to share your questions and epiphanies learning Rust programming.
The official Rust user forums: https://users.rust-lang.org/.
The official Rust Programming Language Discord: https://discord.gg/rust-lang
The unofficial Rust community Discord: https://bit.ly/rust-community
Also check out last week's thread with many good questions and answers. And if you believe your question to be either very complex or worthy of larger dissemination, feel free to create a text post.
Also if you want to be mentored by experienced Rustaceans, tell us the area of expertise that you seek. Finally, if you are looking for Rust jobs, the most recent thread is here.
The lsp-types crate is widely used for language server development in Rust, but it is no longer maintained and all types are hand-written. This makes it outdated, and it also contains a few errors/inconsistencies with respect to the spec.
I created gen-lsp-types as a codegen alternative to the crate. All types are generated from the official LSP Metamodel, with some various improvements listed on the README, including:
- additional derives such as `Eq`, `Hash`, `Copy`, and `Default` where applicable
- distinguishing `null` and `undefined` properties (which have different meanings in the spec)
- `.into()` support for all helper enums generated for "or" types
- ...etc.
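For the null/undefined point, one common representation is an explicit three-state enum. This is only a sketch of the idea; gen-lsp-types' actual types may differ.

```rust
/// Distinguishes a property that is absent from one that is explicitly
/// null: Option<T> alone collapses these two cases, losing a
/// distinction the LSP spec cares about.
#[derive(Debug, PartialEq)]
enum Tristate<T> {
    Undefined, // property absent from the JSON object
    Null,      // property present with value null
    Value(T),  // property present with a real value
}

fn describe(v: &Tristate<i32>) -> &'static str {
    match v {
        Tristate::Undefined => "absent",
        Tristate::Null => "explicitly null",
        Tristate::Value(_) => "set",
    }
}

fn main() {
    println!("{}", describe(&Tristate::Null));
    println!("{}", describe(&Tristate::Value(3)));
}
```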
Hopefully someone here finds it useful! Please leave issues/comments if you see any issues with the project.
P.S. no LLM-generated code was used in this project.
https://github.com/AndrewOfC/rust_flatbuffer_macros
FlatBuffers is a data exchange protocol with Rust support. These macros simplify a typical use case where you have a schema like:
```
table AddRequest {
    addend_a: int32;
    addend_b: int32;
}

table MultiplyRequest {
    multiplicand: int32;
    mutiplier: int32;
}

union Payload {
    AddRequest,
    MultiplyRequest,
}

table Message {
    payload: Payload; // Note: fieldname must be same as field name in snake case
}
```
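For readers unfamiliar with FlatBuffers unions, a hand-written Rust analogue of the decoded dispatch looks roughly like the following. This is illustrative only, not the code these macros (or flatc) actually generate.

```rust
/// Rust-side analogue of the schema's `union Payload`: a message
/// carries exactly one of the request variants, and a handler
/// dispatches on which one is present.
#[derive(Debug)]
enum Payload {
    Add { addend_a: i32, addend_b: i32 },
    Multiply { multiplicand: i32, multiplier: i32 },
}

fn handle(payload: &Payload) -> i32 {
    match payload {
        Payload::Add { addend_a, addend_b } => addend_a + addend_b,
        Payload::Multiply { multiplicand, multiplier } => multiplicand * multiplier,
    }
}

fn main() {
    println!("{}", handle(&Payload::Add { addend_a: 2, addend_b: 3 }));
    println!("{}", handle(&Payload::Multiply { multiplicand: 4, multiplier: 5 }));
}
```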