Monday, April 20, 2026
| Summary | ⛅️ Breezy in the afternoon. |
|---|---|
| Temperature Range | 13°C to 21°C (55°F to 70°F) |
| Feels Like | Low: 52°F, High: 70°F |
| Humidity | 71% |
| Wind | 15 km/h (10 mph), Direction: 264° |
| Precipitation | Probability: 82%, Type: No precipitation expected |
| Sunrise / Sunset | 🌅 06:10 AM / 🌇 07:23 PM |
| Moon Phase | Waxing Crescent (11%) |
| Cloud Cover | 22% |
| Pressure | 1011.06 hPa |
| Dew Point | 52.45°F |
| Visibility | 6.03 miles |
I am trying to learn backend and distributed systems with Go. I have no idea about Go; I know TS and C++, and I've been in the Android community for about a year or more, so I also know Kotlin and Dart a little bit.
I mainly just want to switch over and learn more about backend.
So, any roadmap? How can I do this fast and actually learn this tech?
Any help will be much appreciated. Thanks in advance...
I work at a small company, no PM, just devs + the boss.
Recently the boss got really into AI tools (like lovable and similar stuff), and now he keeps iterating on ideas constantly. Like… every day there’s a new tweak, new direction, new “what if we just change this part real quick”.
The weird thing is — yeah, AI made us faster at coding.
But now it feels like it just raised expectations instead of reducing workload.
We can generate code faster, sure, but we still have to:
So instead of being “faster”, it just feels like we’re chasing a moving target all the time.
Honestly it’s exhausting.
Feels like we went from “building things” to just constantly reacting.
Anyone else dealing with this? Or is this just a small team / no PM problem?
Hi there.
I'm looking to pivot to SWE from Data Science (I'm early in my career), and I have an interest in decentralised networks and P2P systems in particular.
I'm working through "Distributed Services With Go" to build up some knowledge of distributed systems (I know it's far from a complete resource) more generally.
I'm aware that this is a tough enough sub-field that getting real knowledge in it is a massive undertaking, especially for someone who lacks work experience. I'm not expecting that learning this stuff will increase my chances of getting hired. However, I've decided to work through the book anyway out of interest.
If anyone shares my interests and would like to work through it with me, or potentially work on related projects afterwards, please DM me.
Hi there,
I was writing a backend service in Go, and I was always under the impression that you should do everything related to your setup in the main function. I've read that here, and in books/blog posts, multiple times. Is that still true in loosely coupled serverless apps?
I have this question because I am currently writing an Azure Functions app in Go which needs different configuration options for some functions. An oversimplification of what I mean:
```go
type App struct {
	dbClient   *DBClient
	taskClient *TaskClient
}

func (a *App) HandleDBRequest(w http.ResponseWriter, r *http.Request)   { /* ... */ }
func (a *App) HandleTaskRequest(w http.ResponseWriter, r *http.Request) { /* ... */ }
func (a *App) HandleOther(w http.ResponseWriter, r *http.Request)       { /* ... */ }

func main() {
	dbClient, err := GetDBClient()
	if err != nil {
		panic(err)
	}
	taskClient, err := GetTaskClient()
	if err != nil {
		panic(err)
	}
	app := &App{
		dbClient:   dbClient,
		taskClient: taskClient,
	}

	mux := http.NewServeMux()
	mux.HandleFunc("/db/", app.HandleDBRequest)     // Only needs the db config
	mux.HandleFunc("/task/", app.HandleTaskRequest) // Only needs the task config
	mux.HandleFunc("/otherroute", app.HandleOther)  // Might need both configs
	http.ListenAndServe("localhost:8080", mux)
}
```

When the app is not being used, it'll automatically shut off completely and do a full cold start on the next request. If that request is a /task/ request, it will still set up the DB client too, even though I don't actually need it. The app will be "lightly" used, so cold starts are kind of frequent; in Azure they happen every 30 minutes, AFAIK.
Should I move this logic to the handlers? Or is a different structure altogether better?
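One common way to avoid paying for an unused client on a cold start is lazy initialization, e.g. with `sync.OnceValues` (Go 1.21+): the handler that needs the client asks for it, and the expensive dial happens at most once, the first time it is requested. A minimal sketch with hypothetical names (`dialDB` and `dbClient` are stand-ins, not your actual setup):

```go
package main

import (
	"fmt"
	"sync"
)

// dbClient stands in for an expensive client; names are illustrative.
type dbClient struct{ dsn string }

var dials int // counts how many real initializations happen

func dialDB() (*dbClient, error) {
	dials++
	return &dbClient{dsn: "example-dsn"}, nil
}

// getDB runs dialDB at most once, the first time a handler calls it.
// A /task/ cold start that never calls getDB never pays for DB setup.
var getDB = sync.OnceValues(dialDB)

func main() {
	fmt.Println("dials before any handler:", dials) // 0

	c1, _ := getDB()
	c2, _ := getDB()
	fmt.Println("dials after two calls:", dials, "same client:", c1 == c2) // 1 true
}
```

With this shape, each handler calls `getDB()` internally instead of receiving a pre-built client from main, so only the routes that actually need the database trigger the dial.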
Is there any library that implements a ring buffer in an IPC context?
Hey everyone,
I recently had to pick up Go from scratch at my current job. Figuring out the syntax was straightforward enough, but wrapping my head around how to structure a scalable backend in a completely new ecosystem was the real challenge.
As the app grew—adding databases, repositories, services, handlers, and middlewares—manually wiring everything together in main.go started getting messy fast.
I ended up looking into DI solutions and went with google/wire to handle compile-time dependency injection. Instead of passing dependencies manually or relying on reflection (which always makes me nervous about unexpected runtime panics), wire generates the dependency graph during the build process.
https://i.ibb.co/9LGV3Mm/wire-post.png (screenshot)
As you can see in the wire.go setup, it forces a really strict separation of concerns and keeps the actual initialization incredibly clean. Huge credit to PK for pointing me in this direction when I was getting lost in the weeds.
It’s humbling to realize there is always a better way to structure your code. Since I'm still relatively new to the "Go way" of doing things, I'm curious to hear from more experienced Gophers:
How are you all handling DI in your larger production Go projects? Do you stick strictly to manual injection, use wire, lean towards uber-go/fx, or something else entirely?
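For anyone unfamiliar with what wire actually produces: conceptually, it writes the constructor-chaining you would otherwise maintain by hand. Here is a hand-written sketch of that shape (all names here are hypothetical, not the wire API itself; with wire, `InitializeHandler` would be generated at build time from a provider set):

```go
package main

import "fmt"

// Hypothetical layers; each constructor declares its dependencies explicitly.
type Repo struct{ dsn string }
type Service struct{ repo *Repo }
type Handler struct{ svc *Service }

func NewRepo(dsn string) *Repo       { return &Repo{dsn: dsn} }
func NewService(r *Repo) *Service    { return &Service{repo: r} }
func NewHandler(s *Service) *Handler { return &Handler{svc: s} }

// InitializeHandler plays the role of a wire-generated injector: given
// the providers {NewRepo, NewService, NewHandler}, wire emits this kind
// of chaining at build time; here it is written by hand for clarity.
func InitializeHandler(dsn string) *Handler {
	return NewHandler(NewService(NewRepo(dsn)))
}

func main() {
	h := InitializeHandler("postgres://localhost/app")
	fmt.Println(h.svc.repo.dsn)
}
```

Because the generated injector is ordinary Go code, there is no reflection at runtime: a missing provider is a compile error, not a panic.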
When you run the `go help` command, you get a list of commands that you can run in "alphabetical" order, but "work" doesn't follow the order:
```
install     compile and install packages and dependencies
list        list packages or modules
mod         module maintenance
work        workspace maintenance   <-----
run         compile and run Go program
telemetry   manage telemetry data and settings
vet         report likely mistakes in packages
```

Why?
I ask this out of curiosity as I find a lot of developers, once they reach a saturation point move away from Go to "better" languages. I use the word "better" in quotes because I'm simply quoting these developers. Of course, some are Internet "influencers" and active on social media.
However, some of them move to languages that offer them better control over the hardware (languages such as Zig, Rust, etc.).
In this scenario, what is the Go core team's idea to retain users in the ecosystem (which is great, IMO: you can do a lot with less)?
I see that Go introduced things like the experimental `arena` package, which offers some more control. But when it comes to actual systems programming (drivers, kernels, etc.) and embedded (I've heard of TinyGo and TamaGo), what is the Go core team doing to bring in features that improve Go's access to low-level capabilities?
Or will they never be offered?
I see a lot of opinionated patterns out there. But they are mostly too complex for a new project and can lead to early over-engineering.
Which ones are important to adopt early on? Which would be costly to bolt on as an afterthought?
I depend entirely on the standard library as I expand my skills and learn the Go way (aka its idioms). Some of the things I do think are a must: DI, interfaces for abstraction, and of course using std for common tasks.
I have only worked with unit tests, but never a form of testing that works with a user interface.
How can we pseudo-test the behavior of a user to verify the app logic?
For example: after a bunch of key presses, if it doesn't show the correct UI we want, then show "FAIL" or something like that.
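One common approach is to keep the app logic in a plain model that consumes key events and renders its state to a string, so a test can feed simulated key presses and compare the rendered output to what the UI should show. A minimal sketch (all names hypothetical, not tied to any particular UI framework):

```go
package main

import (
	"fmt"
	"strings"
)

// Model holds the app logic, decoupled from the real screen, so tests
// can drive it with key events and inspect the rendered state.
type Model struct {
	cursor int
	items  []string
}

// HandleKey applies one simulated key press to the model.
func (m *Model) HandleKey(key string) {
	switch key {
	case "down":
		if m.cursor < len(m.items)-1 {
			m.cursor++
		}
	case "up":
		if m.cursor > 0 {
			m.cursor--
		}
	}
}

// View renders the state as text; a test compares it to the expected UI.
func (m *Model) View() string {
	var b strings.Builder
	for i, it := range m.items {
		if i == m.cursor {
			b.WriteString("> ")
		} else {
			b.WriteString("  ")
		}
		b.WriteString(it + "\n")
	}
	return b.String()
}

func main() {
	m := &Model{items: []string{"new", "open", "quit"}}
	for _, k := range []string{"down", "down", "up"} {
		m.HandleKey(k)
	}
	want := "  new\n> open\n  quit\n"
	if m.View() != want {
		fmt.Println("FAIL")
	} else {
		fmt.Println("PASS")
	}
}
```

This is essentially the Elm/Bubble Tea-style update/view split: because rendering is a pure function of the model, "a bunch of key presses then check the screen" becomes an ordinary unit test.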
I have always been enthusiastic about learning the under-the-hood part of technologies. So recently I was working with Docker and wanted to know how containers actually run.
I mean, if they run on the same machine using flexible resources, and the Linux kernel is what manages them, how does it isolate them? How is there no way for a container to directly access host resources without a bug being specifically in the Linux kernel?
I first learnt about namespaces and cgroups, but theory was not enough, so I went ahead and implemented it in Go.
So I wrote a minimal container runtime in Go from scratch. No Docker. No helper libraries. Just raw Linux syscalls to see exactly what is going on under the hood.
While doing this, a few things finally clicked:
- …fork() the way you might expect, and what it does instead
- how /proc works, and why containers need their own mounted version of it

I wrote a blog about the same and have put the code on GitHub.
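As a small, hedged illustration of the namespace idea (Linux-specific; a sketch, not the post's actual code): the kernel exposes each process's namespace memberships as symlinks under /proc/<pid>/ns, and two processes share a namespace exactly when those links point to the same inode.

```go
package main

import (
	"fmt"
	"os"
)

// nsLink reports which namespace of the given kind this process is in,
// by reading the /proc/self/ns symlink (e.g. "uts:[4026531838]").
func nsLink(kind string) string {
	target, err := os.Readlink("/proc/self/ns/" + kind)
	if err != nil {
		// Not on Linux, or /proc is not mounted.
		return "unavailable: " + err.Error()
	}
	return target
}

func main() {
	for _, kind := range []string{"uts", "pid", "mnt", "net"} {
		fmt.Printf("%-4s -> %s\n", kind, nsLink(kind))
	}
}
```

A container runtime creates a child whose links differ from the host's (via clone flags like CLONE_NEWUTS or CLONE_NEWPID), which is why the child can, say, change its hostname without the host noticing.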
I'd really appreciate feedback from people who have gone deeper into this space. Part 2 is coming, where I implement cgroups and chroot to show how resource isolation works.
How is everyone's work going these days? Are you still writing code the old-fashioned way every day? Or have you all started letting AI do the work?
I have recently been working on Deploid: a scripted TUI installation wizard builder written in Rust and controlled by Rhai scripts, addressing the same use cases as InstallShield/InnoSetup.
The project is still in the alpha phase and not ready for production use. It is licensed under the MIT License.
Hi! I'm working on a hobby project: an STM32-based optimal controller to stabilize a bicycle. Embassy has always looked super interesting, though I'm unsure if bare metal (cortex-m) is the better choice for a faster control loop.
As long as background tasks are shut off, do I lose anything by writing my control loop in embassy?
From a career standpoint, would it be a better learning experience to try writing something at the lower level or using the abstractions?
Thanks in advance
This was my first time using Bevy and my second time using Rust (I used it with Tauri before), and in this challenge, I would try to make a vampire-survival-like game where each line would cost $1. Then, after certain thresholds, I would face a punishment, such as after 500 lines, I would have to use a light theme, and at 1.5k, I would have to use Notepad.
If anyone is using Bevy or Rust for the first time (especially Bevy), you kinda need to immerse yourself in the way they do things, which is their ECS, or Entity Component System. Because I mainly use Unreal Engine, I feel as if I didn't use it to its full potential.
I had to get used to Rust's borrow checker / passing references, where I had to write code like:

```rust
let Ok((mut player, transform)) = player_query.single_mut() else {
    return; // no (or multiple) players matched the query
};
// `player` and `transform` are safe to use from here
```
As well as using snake_case instead of PascalCase
One thing that's nice about Rust is that everything is immutable by default.
Here's the full source code, and the video I made alongside it.
We needed multi-tenancy, but in a safe and light way, because RAM is expensive. Containers are unsafe for multi-tenancy, and anything weaker than namespaces is a non-starter. VMs are heavy, but for multi-tenancy they had to be the baseline, so that was the direction we went.
Clone can fork virtual machines, sharing memory via CoW across all of them, so you can run ten 8 GB VMs and literally use only ~1-2 GB of RAM. In some cases, it's more performant than containers. It also attempts to reclaim RAM whenever the system is idle.
It outperforms some of the other major VMMs (Firecracker, etc.) in our use cases, and we've been using it in production. REAL benchmarks are in the README and docs.
https://github.com/unixshells/clone
It's open source and MIT licensed so feel free to do what you need to do! The hope is that, instead of many working on silo'd VMMs, we can do this right, together.
I recently started to learn rust and so far I am absolutely loving it.
Today I encountered a situation where I was working with a few Option values inside a function. My first instinct was to use the ? operator to let the caller handle the Option, but then I worried that I would lose too much information by doing that: the caller would not know why there is a None variant or how to handle it.
This got me wondering if I should introduce a custom error type and convert the Option into a Result to enrich the return type information.
I would love to hear your thoughts on this
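For what it's worth, the usual pattern for this is `Option::ok_or` (or `ok_or_else`), which converts a `None` into a named error right where the meaning is known, so `?` keeps the context. A sketch with hypothetical names (`LookupError`, `profile_city` are illustrative, not from any particular crate):

```rust
use std::fmt;

// Hypothetical error type naming *why* each lookup can be absent.
#[derive(Debug, PartialEq)]
enum LookupError {
    MissingUser,
    MissingProfile,
}

impl fmt::Display for LookupError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            LookupError::MissingUser => write!(f, "user not found"),
            LookupError::MissingProfile => write!(f, "profile not found"),
        }
    }
}

impl std::error::Error for LookupError {}

fn profile_city(
    user: Option<&str>,
    profile: Option<&str>,
) -> Result<String, LookupError> {
    // ok_or turns each None into a distinct, named error,
    // so the caller can tell the two failure modes apart.
    let user = user.ok_or(LookupError::MissingUser)?;
    let profile = profile.ok_or(LookupError::MissingProfile)?;
    Ok(format!("{user}@{profile}"))
}

fn main() {
    assert_eq!(profile_city(Some("ada"), Some("paris")).unwrap(), "ada@paris");
    assert_eq!(profile_city(None, Some("paris")), Err(LookupError::MissingUser));
    println!("ok");
}
```

A rough rule of thumb: keep `Option` when absence is an expected, self-explanatory state, and upgrade to `Result` with a custom error as soon as there is more than one reason a value can be missing.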
Hello Reddit!
After several weeks and with an active demo, I have started working on packages (libraries) for my compiler!
Initially, I considered adding them to the web experience, but after seeing that the current compiler weighs around 90kb with Brotli compression on the web, I want to take the next step. Any recommendations? Which standard libraries do you consider essential in any language?
Thanks!
Dear r/rust,
I have always loved the semantics of OpenGL, ever since I started making triangles in Python via ModernGL.
When I started using Rust, I immediately attempted to learn wgpu, thinking it was some sort of “easy going” graphics tool. Unfortunately I was blown away by the amount of boilerplate required just to clear the screen. It makes sense (you need a significant amount of configuration to ensure you get an ultra-fast and efficient graphics api) but it just made me frustrated.
Every moment of it was nothing like OpenGL, and I guess that's probably because it's 2026, not 1992. OpenGL dominated for so long (and still does, for the most part) because of its simple semantics and ease of use.
So I folded, and spent the last month writing a library that brings back those semantics, without sacrificing performance. Between two snowboard competitions, a concussion, and living in a camper, I was able to design a layer over wgpu that recreates many different types like “contexts”, "programs", or “vertex array objects” inspired by ModernGL. The list goes on of what I've reworked/implemented.
https://github.com/motivation-inc/yourgpu
Rather than being "a wgpu abstraction", yourgpu attempts to keep you as far as possible from any sort of wgpu call or operation. The only things it does expose are ways to integrate with wgpu APIs, like egui_wgpu or anything else that might need access to wgpu structs.
GitHub:
https://github.com/Eul45/omni-search
Microsoft Store:
https://apps.microsoft.com/detail/9N7FQ8KPLRJ2?hl=en-us&gl=US&ocid=pdpshare
Portable Version: https://eyuel.com.et/omni-search
I'd really love feedback on what to improve next, especially around:
- keyboard-first UX
- preview performance
- indexing/search quality
- duplicate cleanup workflow
- overall desktop polish
I've been building Easy NATS, a desktop GUI client for NATS messaging. It covers the things I needed most day-to-day:
One feature I'm particularly happy with is the dockable, floatable tab system powered by egui_dock. You can undock any tab — Publisher, Subscriber, Stream viewer — into a floating window and tile them side by side. This turns out to be genuinely useful when you want to watch a stream and publish test messages at the same time without switching tabs.
Available on Windows (Scoop), macOS (Homebrew), and Linux (APT / AppImage / RPM).
GitHub: https://github.com/mcthesw/easy-nats
One thing I'd love advice on: the macOS Homebrew install currently requires running `xattr -dr com.apple.quarantine "/Applications/Easy NATS.app"` to bypass Gatekeeper. How have others handled this?
Hello! I just built a config/markup language in Rust as my first real project! It's called Oxide. The syntax is inspired by .toml and Valve .cfg files for Source engine games.
I'm really happy I learned Rust, and I will definitely make more projects in Rust.
It's published on crates.io as oxideconf. C and Python codegen coming soon.
I would love feedback on my code, because since this is my first Rust project, I do not know how to write readable Rust code.
GitHub: https://github.com/crazysal-0/oxideconf | crates.io: https://crates.io/crates/oxideconf
Repo: https://github.com/abhishekshree/tokio-fsm | Docs: https://docs.rs/tokio-fsm
What is this about?
Well I'm trying to build a macro library tokio-fsm that allows you to define complex asynchronous state machines using a declarative macro. It handles the boilerplate of event loops, channel management, and state transitions.
The original post: https://www.reddit.com/r/rust/s/KGUQgAAQe4
First off, thank you. The feedback on the original post was genuinely useful and pushed me to fix things I'd been hand-waving. A few of you flagged real ergonomic rough edges, and most of what's below came directly in DMs later when people used it.
Here's what changed:
- State-level timeouts are now first-class: You can attach #[state_timeout(duration = "30s")] to any transition and pair it with an #[on_timeout] handler. Under the hood it's a single stack-pinned tokio::time::Sleep - no Box::pin per transition, no extra allocations.
- TaskError<E> replaces the old opaque error: TaskError explicitly separates your FSM's logic errors from Tokio runtime failures (panics, cancellation). Before, a failed task join gave you no signal about what actually went wrong.
- Graceful shutdowns on the handle: I realised the shutdown flow was way too complex for any real use case, so I switched to a simpler mechanism with `CancellationToken`s that can be propagated from all parts of your code.
- Multi-state handlers: You can now stack multiple #[on] attributes on a single method:
```rust
#[on(state = Idle, event = Reset)]
#[on(state = Running, event = Reset)]
async fn on_reset(&mut self) -> Transition<Idle> {
    Transition::to(Idle)
}
```

Loving building in public. If the changes are useful, a star goes a long way, and if something feels off or missing, drop it in the issues or right here. Keep building, cheers!
So I recently decided to make a voxel engine/game (it is basically just what is soon to be a game). It really is not much, but I feel like my solutions for some stuff were pretty cool. Although I still get rather low FPS, at most about 120 with 2 as my chunk radius. I've never received feedback on my code, so please point out anything you find shitty. It is less than 1k lines, so it isn't super hard to look at.
Oh, and it would be really nice to learn why I can't manage to compile it for Windows on Linux.
Thanks for any feedback!
Link:
coming from typescript, prisma spoiled me. every rust orm felt like a step backwards in dx.
so i built saola, an experimental orm that tries to bring that same feel to rust.
```rust
let posts = post()
    .find_many()
    .where_clause(|w| {
        w.published().eq(true);
        w.user().is(|u| u.is_active().eq(true));
    })
    .include(|i| i.user())
    .order_by(|o| o.created_at(SortOrder::Desc))
    .take(10)
    .exec(&client)
    .await?;
```

uses your existing schema.prisma, the prisma cli for migrations, and the same databases prisma 6 supports. powered by prisma engine internals directly, no separate binary.
very experimental, built mostly for my own projects. if you've been missing prisma after switching to rust, give it a try.
github: https://github.com/saola-rs/saola | crates.io: https://crates.io/crates/saola-core
I've been working on a browser-based tool called ArchAlive and wanted to get some feedback on it. It is basically a visual sandbox where you can design backend systems — API gateways, load balancers, and servers — and then simulate HTTP traffic routing through them in real time.
You can try it here: https://archalive.com/ (free, no signups)
While the frontend canvas is React/TypeScript, the core simulation engine that calculates routing, handles queue bottlenecks, and tracks individual request states is written entirely in Rust and compiled to WebAssembly. It can simulate quite a few requests smoothly.
Let me know what you think. Not sure where to take this project from here.
Anyone else struggling with going back to other languages once you've experienced Rust?
It's not even about speed (though that's nice too). What drives me nuts is error handling in Python and JavaScript projects. Half the time I'm working with libraries that have zero documentation about what exceptions they might throw or when they fail. Just crossing fingers and hoping nothing breaks.
With Rust, I always know exactly what could go wrong at each step. Sure, there are still panic situations, but in my experience those are pretty rare. Meanwhile, with Python/JS I feel like I'm walking through a minefield without knowing where the bombs are.
Been doing more Rust lately during my free time and now when I have to touch JavaScript for work stuff it just feels so... unpredictable? Like how am I supposed to handle errors when I don't even know what errors are possible
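This is essentially the point about `Result`: the failure mode lives in the function's signature, so the caller cannot use the value without deciding what to do about the error. A tiny illustrative sketch (the `parse_port` helper is made up for this example):

```rust
use std::num::ParseIntError;

// The signature says exactly how this can fail; unlike an
// undocumented exception, the caller can't miss it.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    // The compiler forces us through the Ok/Err branches.
    match parse_port("8080") {
        Ok(p) => println!("listening on {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }

    // Out-of-range and non-numeric input are both ordinary Err values,
    // not surprises at some distant call site.
    assert!(parse_port("99999").is_err()); // > u16::MAX
    assert!(parse_port("oops").is_err());
}
```

In Python or JS, the equivalent function could raise anything (or silently coerce), and nothing in its signature tells you which.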
Hey Rustaceans -
I just pushed / published v0.8.0 of gametools on github & crates. It's a moderately big update.
The biggest changes:
Module with types and helpers for managing ordered queues and collections.
- `RankedOrder<R,T,D>` is a Vec-backed collection optimized for burst usage, small-to-moderate queue sizes, and cases where you may need the complete order.
- `PriorityQueue<P,T,O>` is BinaryHeap-backed, optimized for rapid push()/pop() cycling and for when you only care about the highest-priority item.

These collections can hold any type and be ordered by any type that is `Ord`. Rust's type system is also leveraged so that each can be used as either a min-first or max-first version without having to wrap your types in `Reverse<T>` or other such annoyance.
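I can't speak to gametools' internals, but the usual way to spare callers the `Reverse<T>` annoyance is to do the wrapping inside the collection itself. A hypothetical sketch (not the gametools API) of a min-first queue over std's max-first `BinaryHeap`:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// A min-first queue that hides the Reverse<T> wrapping from the caller.
// Illustrative only; gametools may do this differently.
struct MinQueue<T: Ord>(BinaryHeap<Reverse<T>>);

impl<T: Ord> MinQueue<T> {
    fn new() -> Self {
        Self(BinaryHeap::new())
    }

    // Callers push plain T values; the Reverse is internal.
    fn push(&mut self, item: T) {
        self.0.push(Reverse(item));
    }

    // pop unwraps the Reverse, returning the smallest item first.
    fn pop(&mut self) -> Option<T> {
        self.0.pop().map(|Reverse(t)| t)
    }
}

fn main() {
    let mut q = MinQueue::new();
    for n in [5, 1, 4] {
        q.push(n);
    }
    assert_eq!(q.pop(), Some(1)); // smallest first, no Reverse at call sites
    assert_eq!(q.pop(), Some(4));
    println!("ok");
}
```

Selecting min-first vs. max-first via a type parameter (as described above) is then a matter of making this wrapping conditional on a marker type instead of baked in.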
I started the old module when I was very new to Rust, and the API had multiple personality disorder -- part result generator, part physical die simulator, part game rule engine. It has now been refocused on broadly reusable result generation and analysis.
RefillingPool<T> is a collection of 1+ items of any type from which you can draw indefinitely, but it's more than just a random draw:
- You can choose whether `None` is returned if no preferred items are available, or whether to fall back to some other random item from the pool.
- …dice API to save it.

Feedback / contributions / suggestions always welcome!