Sunday, April 5, 2026
| Summary | ⛅️ Partly cloudy until night. |
|---|---|
| Temperature Range | 11°C to 18°C (51°F to 64°F) |
| Feels Like | Low: 47°F / High: 63°F |
| Humidity | 81% |
| Wind | 15 km/h (9 mph), Direction: 243° |
| Precipitation | Probability: 82%, Type: No precipitation expected |
| Sunrise / Sunset | 🌅 06:29 AM / 🌇 07:11 PM |
| Moon Phase | Waning Gibbous (61%) |
| Cloud Cover | 43% |
| Pressure | 1017.46 hPa |
| Dew Point | 52.43°F |
| Visibility | 5.95 miles |
context: I'm making an Andy's Apple Farm fan game (because there's barely any fangames), called Frankie's Fun Schoolhouse. This game has its own inspired story and acts kind of like AAF's story but different: the fictional developer is a CEO at a company that makes digital computer friends, with Frankie the Frog being the most popular one, and he and his employees form a cult around Frankie, sacrificing family and friends to him as they are put into Frankie's world (later on Marcus gets betrayed and killed by Frankie, but in the game world he gets the power; he's also not the protagonist like Andy is, but the antagonist). This is definitely far different from how Thomas Eastwood made a deal with an entity and killed his family under possession. I just need clarification on whether this is a fangame or a ripoff, because I'm very unsure. (Please note: I can guarantee that I WILL disclaim that it's a fangame and even credit M36 Games in the credits for being my inspiration.)
I'm a hobbyist, have never intended to quit my job to make games because I love my job, don't even want to monetize on itch or whatever as a side hustle. But I do love designing games. I've made hacks for TTRPGs, text-based games in Inform 7, I made a 4x city building type game in Google Sheets.
But I also realized recently that across my various creative projects, I often don't really want the process of writing / illustrating / coding / whatever, I just want the thing to exist so I can enjoy it. I want to playtest, not debug inventory systems.
So I'm just embracing it. I'mma stick to Google Sheets and Tabletop Simulator mods and solo TTRPGs, have fun playing my random shower-thought game ideas, and stop pretending I'm going to do Unity tutorials.
Idk, has anyone else had this revelation? I think this sub is mostly people who intend to publish, not hobbyists, but I was curious to see if there's anyone else.
Everyone talks about numbers, visibility, wishlists, and marketing. But in a tiny team, the way two people work together can shape everything.
Six months ago, Nick Talmers reached out to me about building IRON NEST: Heavy Turret Simulator together. Today we're at 227,816 net wishlists with 2 people and a $0 marketing budget.
That milestone means a lot, but what means just as much to me is finding someone whose work style fits so naturally with my own. In indie, that kind of partnership feels incredibly rare... and incredibly valuable.
I wouldn’t hesitate to say that it has been one of the most important factors in allowing us to achieve what we have so far with IRON NEST.
I’m rooting for all of you to find your other half, too.
I’m just curious if this is the case. Cause the ps2 just had sooo many games it’s crazy. And like every movie had a spinoff game. And the ps3 was the same way like almost everyday was a new game. So it leads to me believe that for some reason it was cheaper.
And if it was. Why was it? And why is it not now?
I've been prototyping a 3D game concept lately; however, I don't have the money to hire a technical artist or staff to make it look nice. I'm used to working with a team to cover skills I lack or am not good at, but with the lack of resources on my end, I figure it's time to learn things on my own. My background is in design and animation with a bit of programming, but lighting, shaders, and how to use Unity's cinematic system beyond the bare basics are not my expertise. With the progress happening right now, I'd like to work on the prototype's visuals and develop decent cutscenes.
There are a lot of tutorials online about these things, I know, but I find that many of the ones I've been looking at are either super old and deprecated, don't explain things well, or are way too specific in their use cases to help me actually understand how these things work. I'd love a few recommendations for tutorials on any of these topics, as I'd like to get a strong fundamental understanding of these skills beyond the basic trial and error I'm doing, especially for lighting and cinematics. Until I get funding or support, I will probably have to do this myself for the prototype, so any advice would be appreciated.
I'd be all over it for serious game dev if it were more official and straight from Valve... but this Garry's Mod/Sandbox wrapper thing around it feels janky af... anyone else looked into this?
| Hello everyone, I've been working on my little indie game for about 5 years now, and I am very happy to announce that it has now been released and is available on Steam! LOST INSIDE is a non-traditional RPG about a human child who stumbles their way into a Spirit World. You find out that the world has been cursed by a corruption spell where only you, the chosen one, can save the monsters for good. This game is very story-focused. If you like games like UNDERTALE or DELTARUNE, then this game might be for you. [link] [comments] |
Just wondering how often this actually happens; I'd presume it's only at larger developers? How do people look for this kind of work? I've been in animation for years, but the industry is rough right now, so I've been thinking about expanding my job search. Also, boarding for games does sound genuinely fun; in TV we're always rushing, so it'd be nice to actually get a chance to really polish some work.
But I know the game industry isn't doing too hot either right now...
How do multiplayer turn-based games manage version updates during the course of a match?
I'm thinking about something like a 4x/strategy game, where you could have a match with friends going on for weeks in some cases. You start on a version, but by the end of the match there could have been numerous updates.
Is the standard way to force all players to always be on the latest version of the game before they play their turn? But this would mean in some cases forcing mid-game gameplay changes that invalidate a previous strategy (I'm building 20 spearmen cause they're OP, but the new update nerfs them and now I'm screwed).
Or maybe they keep various versions of the game installed on the client as long as a player has an active match with that version? This looks like it could become unmanageable quickly.
I'm just a hobbyist working on simple single-player games, but I got curious about this problem!
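One common answer (an assumption on my part, not a claim about any specific game) is a middle path between the two options above: version the *rules data* separately from the client binary, pin each match to the ruleset it started on, and ship the last few ruleset tables with every client. Old matches then finish under old balance even after an update; only new matches pick up the nerf. A minimal sketch:

```rust
// Hypothetical data shapes: the balance numbers live in a versioned
// Ruleset table, and each Match records which version it started on.
#[derive(Debug, PartialEq)]
struct Ruleset {
    version: u32,
    spearman_attack: u32,
}

struct Match {
    ruleset_version: u32,
}

// The client looks up the match's pinned ruleset among the versions it
// still ships; only if the version is too old does it force an update.
fn ruleset_for<'a>(m: &Match, shipped: &'a [Ruleset]) -> Option<&'a Ruleset> {
    shipped.iter().find(|r| r.version == m.ruleset_version)
}
```

This keeps the binary single-versioned (everyone patches the client) while the simulation stays deterministic per match.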
I’ve been thinking about this lately and I’m curious how it is for everyone else. What’s that one thing in game development that just drains you every time?
For me, it's when something breaks for no reason. Everything was working perfectly, you didn't even touch that system, and suddenly it just stopped working. No clear error, no cause, nothing: just you sitting there wondering what happened. Then you spend hours trying to fix something that shouldn't even be broken in the first place. It honestly feels like the game is fighting you sometimes.
What about you? What’s the one thing that frustrates you the most in this whole process?
Basically the title, but to expand:
I've had this idea for a game sloshing around in my head for around 10 years, and before having put any effort into it, I already knew what I wanted the title to be: Hypernova. There were no games back then with that name, sounds cool, short, sets the theme, perfect.
Fast forward several years: after I finally started to actually work on it, I decided to check just to be sure, and to my dismay, there was already another game with the same title. Even the genre is similar. Now I've re-checked, and there's another game coming up with the exact same title.
I know a solution would be to add a subtitle or extend it with something, but I just can't come up with anything. Is this okay in the long run? Granted, both titles are very small, but I just want to avoid any kerfuffle in the future.
I have watched a lot of game clips and studied what separates the ones that stop people from scrolling versus the ones that get scrolled past. The pattern is consistent.
The first 2 seconds of a game clip need to do one of four things or most viewers will not stay:
Show something the viewer has never seen before. A mechanic that looks impossible, an art style that immediately stands out, a visual trick that makes someone go "wait, what?"
Start mid-action. Not gameplay starting, gameplay already happening at an interesting moment. The player is already three jumps into a difficult section. Something is already exploding. A dialogue option is already happening that feels unexpected.
Create immediate tension. The health bar is at 10 percent. There are three enemies and one bullet. Something is clearly about to go wrong in an interesting way.
Trigger a question. The viewer sees something that makes them think "wait, how does that work?" or "is that actually in the game?" Curiosity is a powerful scroll-stopper.
What does not work in the first 2 seconds:
Logo animation. Main menu. Character creation screen. Cut to the title card. Loading screen. Tutorial text. Opening cinematic. Any of these and you have already lost most of your audience.
Platform-specific differences:
TikTok: The hook matters most. TikTok viewers make the decision to stay in about 1.5 seconds. Start with your most visually surprising moment.
YouTube Shorts: Slightly more forgiving (about 3 seconds), but the thumbnail still needs to do work before anyone hits play.
Instagram Reels: Similar to TikTok but the audience skews slightly older. You can lead with something slightly slower if the visual is compelling.
X: The first frame is your thumbnail. Make it count.
The other consistent finding:
Captions increase watch time significantly across every platform. Not subtitles of what a character is saying. Additional context that explains what is happening. "This enemy can only be killed by its own projectiles" while showing that mechanic in action is more effective than just showing the mechanic.
That's all I got for today. Let me know if this info helped. :)
I'm writing a streaming xml parser that makes heavy use of nested coroutines and if the input buffer runs out of characters then I need to yield all the way back up to the entry.
I'm using a macro to achieve this same effect.
```rust
#[macro_export]
macro_rules! run_to_completion {
    ($coro:expr) => {{
        let complete;
        loop {
            match core::ops::Coroutine::resume(core::pin::Pin::new(&mut $coro), ()) {
                core::ops::CoroutineState::Yielded(y) => yield y,
                core::ops::CoroutineState::Complete(result) => {
                    complete = result;
                    break;
                }
            }
        }
        complete
    }};
}
```

And then using it looks like:
```rust
pub fn parse_doctype(
    char_buffer: CharBuffer,
) -> impl Coroutine<(), Yield = (), Return = Result<Option<DoctypeToken>, DoctypeError>> {
    #[coroutine]
    move || {
        if run_to_completion!(advance_if_matches_array(char_buffer.clone(), &DOCTYPE)).is_none() {
            return Ok(None);
        }
        ...
    }
}
```

Don't worry, CharBuffer is reference counted.
Async functions have '.await', maybe coroutines could reuse that or get a '.yield'. Idk, might be confusing.
Maybe a new `?`-like operator, but for CoroutineState, that yields instead of returning.
I would just settle for CoroutineState having some helper functions like Option and Result do. I could use map_yield and map_complete/map in so many ways. Currently they have no helper functions, and the only way to interact with them is match/if let statements.
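For the record, here's a sketch of the helpers I mean, written against a stand-in enum so it compiles on stable (the real `core::ops::CoroutineState` is nightly-only; the method names are my wishlist, not anything that exists):

```rust
// Stand-in for core::ops::CoroutineState, which is nightly-only.
enum State<Y, C> {
    Yielded(Y),
    Complete(C),
}

impl<Y, C> State<Y, C> {
    // Map the yielded value, leaving a completed value untouched.
    fn map_yield<Y2>(self, f: impl FnOnce(Y) -> Y2) -> State<Y2, C> {
        match self {
            State::Yielded(y) => State::Yielded(f(y)),
            State::Complete(c) => State::Complete(c),
        }
    }

    // Map the completion value, leaving a yielded value untouched.
    fn map_complete<C2>(self, f: impl FnOnce(C) -> C2) -> State<Y, C2> {
        match self {
            State::Yielded(y) => State::Yielded(y),
            State::Complete(c) => State::Complete(f(c)),
        }
    }
}
```

Exactly the shape Result's `map`/`map_err` already have, just split across the two variants.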
The joys of testing things out in nightly.
I use Claude Code as my primary coding agent, and one thing that bugged me was how much of its context window gets wasted on terminal noise -- ANSI escape codes, progress bars, spinner frames, duplicate blank lines, npm/cargo boilerplate warnings.
So I built cli-denoiser -- a Rust CLI proxy that sits between your shell and your AI agent, filtering out the noise before it reaches the LLM.
Two-pass filter pipeline (line-level then block-level):
Benchmarked against real command output:
| Scenario | Original | Filtered | Savings |
|---|---|---|---|
| cargo build | 15,847 tokens | 1,174 tokens | 92.6% |
| git clone | 8,432 tokens | 793 tokens | 90.6% |
| npm install | 12,891 tokens | 3,260 tokens | 74.7% |
| docker pull | 6,218 tokens | 1,234 tokens | 80.2% |
Overall average: 58.2% token savings across all tested scenarios.
Zero false positives. The type system enforces this -- FilterResult has four variants: Keep, Drop, Replace, and Uncertain. If any filter is uncertain, the original line passes through unchanged. We never silently eat meaningful output.
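To make that concrete, here's a rough sketch of what such a contract could look like (my guess at the shape based on the description above, not the crate's actual definitions):

```rust
// Hypothetical shape of the filter contract: four outcomes per line.
enum FilterResult {
    Keep,            // line is meaningful, pass it through
    Drop,            // line is pure noise, remove it
    Replace(String), // line is noisy but carries info, rewrite it
    Uncertain,       // filter can't decide
}

trait Filter {
    fn apply(&self, line: &str) -> FilterResult;
}

// The safety property: the moment any filter is Uncertain,
// the ORIGINAL line passes through unchanged.
fn run_filters(filters: &[Box<dyn Filter>], line: &str) -> Option<String> {
    let mut current = line.to_string();
    for f in filters {
        match f.apply(&current) {
            FilterResult::Keep => {}
            FilterResult::Drop => return None,
            FilterResult::Replace(s) => current = s,
            FilterResult::Uncertain => return Some(line.to_string()),
        }
    }
    Some(current)
}

// Example filter: drop cargo's "Compiling ..." progress lines.
struct DropCompiling;
impl Filter for DropCompiling {
    fn apply(&self, line: &str) -> FilterResult {
        if line.trim_start().starts_with("Compiling") {
            FilterResult::Drop
        } else {
            FilterResult::Keep
        }
    }
}
```

The nice part of this design is that "never silently eat meaningful output" is a local property of each filter, enforced by the enum rather than by convention.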
```bash
cargo install cli-denoiser

cdn -- cargo build 2>&1

cargo test 2>&1 | cdn

cdn hook install
```
Also has a built-in token savings tracker (cdn gain, cdn report, cdn log).
Repo: https://github.com/Orellius/cli-denoiser
Would love feedback on the filter architecture, especially if anyone has ideas for additional filters or edge cases I should handle. The Filter trait is designed to be easy to extend.
Hey r/rust,
This is my first open-source project, and I'd love some honest feedback!
**agent-desktop-interface** (`gui-tool`) is a **zero-dependency Rust CLI** for cross-platform GUI automation. It provides screenshots, window management, mouse/keyboard control, and strict JSON-in/JSON-out — built specifically for AI desktop agents (works great with Claude Code, Codex, Gemini CLI, etc.).
### Why it exists
Tools like `xdotool` and `pyautogui` break on modern **Wayland** (especially GNOME). This tool uses native OS APIs instead:
- Linux/Wayland → D-Bus + XDG Desktop Portal + `window-calls` GNOME extension
- macOS → CoreGraphics
- Windows → Win32
No brittle wrappers, no subprocesses, just direct FFI/syscalls.
### The killer feature (IMO)
**Recursive grid zoom**:
Take a screenshot with an overlaid labeled grid (like 8x6 cells: A1, B2, etc.). Then move/click by cell label — even recursively (`B2.C1` for sub-regions). No pixel math, no flaky OCR. The AI (or script) just says the label, and it works reliably. Screenshots are cached for instant zooms and invalidated on input.
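The label-to-coordinates math is simple enough to sketch; this is my own illustration of the idea, not the tool's actual code:

```rust
// Hypothetical sketch: resolve a (possibly recursive) cell label like
// "B2.C1" on a cols x rows grid to the center point of that cell.
// Each dotted segment narrows the region by another grid subdivision.
fn cell_center(label: &str, mut w: f64, mut h: f64, cols: u32, rows: u32) -> (f64, f64) {
    let (mut x, mut y) = (0.0, 0.0);
    for part in label.split('.') {
        let col = (part.as_bytes()[0] - b'A') as f64; // letter -> column index
        let row = part[1..].parse::<u32>().unwrap() as f64 - 1.0; // number -> row index
        w /= cols as f64; // shrink the region to one cell
        h /= rows as f64;
        x += col * w; // move to that cell's top-left corner
        y += row * h;
    }
    (x + w / 2.0, y + h / 2.0) // click the middle of the final cell
}
```

So `"B2.C1"` first selects cell B2, then cell C1 of the grid re-drawn inside B2, with no pixel coordinates ever exposed to the agent.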
Everything outputs clean, structured JSON so agents don't have to parse messy text.
### Quick examples
```bash
# List windows
gui-tool windows list
# Screenshot with grid overlay
gui-tool screenshot --window "Firefox" --grid --output /tmp/grid.png
# Precise control
gui-tool mouse move --cell B2 --window-id 123456
gui-tool mouse click
gui-tool key type "Hello from Rust!"
```
tldr ; githoob/slopc
So. Recently, I watched "No Boilerplate"'s video on Rust's macros and gave those a try after years of ignoring them, thinking they wouldn't help my productivity that much. Nah uh! Felt like discovering your car has heated seats after 2 years on a lease. But then the intrusive thoughts came in: what if those heated seats were LLM-driven, on fire, and the car was now driving itself into oncoming traffic?
The voices in my head drove me to draft up a cursed proc-macro that would make my coworkers (me and my 2 cats) lose all the respect and faith they have in my technical skills.
`#[slop]` is an attempt to see how far I could push codegen and how flexible the Rust compiler could be... and its limitations. This also made me realize that proc-macros could be a legitimate supply chain attack vector, though I'm not well versed in how one could mitigate this.
The boring stuff: `#[slop]` replaces `todo!()` with LLM-generated code at compile time, using the comments, fn signature, and project deps (with an arguably best-attempt at type discovery) as prompt context. It then feeds rustc errors back and retries until it compiles (or gives up). It also uses a flaky caching strategy (inspectable at ./target/slop-cache) to avoid burning your LLM budget too fast.
A while back I posted about deep diving into the Tokio runtime. To reinforce what I was learning, I started writing assignments for myself -- just a few at first, but the collection has grown into seven self-contained assignments that teach the Tokio runtime, each one building on the last. There's also a bonus eighth assignment where you build a mini async runtime from scratch -- no Tokio, just std::future::Future, Waker, and Poll.
If you're the kind of person who learns best by doing and wants a more hands-on, structured way to explore Tokio, I think you'll get a lot out of these!
EDIT: I forgot to mention that all the assignments include solutions but try to implement each one yourself before looking at them! :)
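To give a taste of what the bonus assignment builds toward: the core of a from-scratch executor fits in a few dozen lines. This is my own minimal park-the-thread `block_on` sketch, not the assignment's solution:

```rust
use std::future::Future;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// A parked-thread signal: wake() flips the flag and notifies the waiter.
struct Signal {
    ready: Mutex<bool>,
    cond: Condvar,
}

impl Wake for Signal {
    fn wake(self: Arc<Self>) {
        *self.ready.lock().unwrap() = true;
        self.cond.notify_one();
    }
}

// Poll the future in a loop, sleeping between polls until the waker fires.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let signal = Arc::new(Signal { ready: Mutex::new(false), cond: Condvar::new() });
    let waker = Waker::from(signal.clone()); // std gives us Waker from Arc<impl Wake>
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => {
                // Park until wake() runs (or has already run).
                let mut ready = signal.ready.lock().unwrap();
                while !*ready {
                    ready = signal.cond.wait(ready).unwrap();
                }
                *ready = false;
            }
        }
    }
}
```

Everything a real runtime adds (task queues, IO reactor, timers) is layered on top of exactly this poll/wake handshake.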
I'm working on a plugin system as part of a larger Tauri app, most likely using Node as the plugin runtime. I'd like to use something like WASM, but while the lack of a package ecosystem is good from a security perspective, it's also rough from a developer experience perspective (no SDKs, no libs, no utilities besides what I expose). Deno is compelling, but it's still a userspace permissions system, and no native addon compat means a lot of useful packages like sharp are off the table. So I'm trying to figure out if I can handle sandboxing through the OS instead, e.g. this plugin process can talk to Todoist, this one is allowed to look at the downloads folder, etc.
From what I've read, the right call is either Bubblewrap or Landlock on Linux, Seatbelt on Mac, and (probably) restricted tokens on Windows. I think all of these have good Rust bindings if I wanted to try.
That being said, this seems like a really hairy problem and I am not a security engineer. Is there a crate that solves this problem? Most of the ones I've looked at are either unmaintained (gaol) or very new (sandbox-rs, ai-sandbox, zerobox).
If the answer is no, is this something I should even attempt to write on my own?
Thanks!
Repo: https://github.com/cachebag/nmrs
I've shared this before, but many seem to have misunderstood what it is or why it's important.
The biggest confusion I see is that people seem to think it's shelling out to nmcli or some other interface/program. While I think taking a few seconds to read code is very crucial here, I understand why someone wouldn't jump to see what my library does. It's a bit niche.
It's an async-first Rust API: a set of bindings that lets you easily interact with NetworkManager over D-Bus on Linux, and I think I've done a thorough job of covering most of the major operations in NM (see example below). The goal is to cover basically everything that NM does, which will take time, but the project is now at a point where it's reliable and stable enough to use in your IoT devices, GUIs, etc.
We are also covering major VPN surfaces. WireGuard has first class support, and I'm very close to finishing OpenVPN support (see dev branch for progress and issue 288). Most recently, I've finished a recursive descent parser for .ovpn files, and a cert store for managing inline certs.
The best way to demonstrate how it can be used is the following example below, where we will list our networks, connect to one and then print the SSID we've connected to:
```rust
use nmrs::{NetworkManager, WifiSecurity};

#[tokio::main] // nmrs is runtime-agnostic; tokio shown here
async fn main() -> nmrs::Result<()> {
    let nm = NetworkManager::new().await?;

    // List networks
    let networks = nm.list_networks().await?;
    for net in &networks {
        println!("{} - Signal: {}%", net.ssid, net.strength.unwrap_or(0));
    }

    // Connect to WPA-PSK network
    nm.connect("MyNetwork", WifiSecurity::WpaPsk { psk: "password".into() }).await?;

    // Check current connection
    if let Some(ssid) = nm.current_ssid().await {
        println!("Connected to: {}", ssid);
    }

    Ok(())
}
```
To show how, or more importantly why this is better than alternatives, here's the exact same code but using raw DBus.
(gist because the example is 161 lines and I don't wanna dump)
Beyond the difference in pure LoC, you may have also noticed that in the DBus version:
- you hand-assemble a `HashMap<String, PropMap>` with `Variant(Box::new(...))` for every connection property
- you work with raw wire values (SSIDs are `Vec<u8>`, device type 2 means WiFi)
- you find the WiFi device by querying each device's `DeviceType` property and filtering accordingly
- `networkmanager-rs` and the `dbus` crate are inherently synchronous and blocking (not to mention abandoned); nmrs is async first, supporting tokio and every other framework out of the box

These are only a few of the reasons why I built nmrs, and the value it brings to someone who wants or needs to use Rust to write their network utilities.
I've spent a lot of time working on the code and reasoning through a lot of the design patterns I chose. (No, none of it was vibecoded).
While a project like this can and will be opinionated by nature, I still think this is the best option for anyone looking to interact with NM in Rust.
I am more than open to critiques, feedback, suggestions, etc. Contributions are very welcome.
I started this project about 6-7 months ago and I've very recently gotten a fire lit inside to continue fleshing it out. I'm very proud of where it has gotten so far and I hope I've been able to properly show the value proposition in using it.
Thanks!
I'm adding a TUI to my program, just to make some things simpler in certain scenarios, I found this to be interesting.
My PR if anyone is curious: https://github.com/boquila/boquilahub/pull/20
disclaimer: the TUI has slightly fewer features
Hi everyone, I've been using Rust for a few years now and normally use VSCode as my code editor and build and run my projects from terminal because of how fast and convenient cargo is. For debugging I mainly use the dbg! macro, but sometimes actual debugger is really needed.
Recently I checked some of the available options and didn't really like them:
In the last few years I've spent some time learning how debuggers work, so I think I would like to make my own debugger.
I want it to be able to run a compiled binary from the command line like:
```bash
some-debugger ./target/debug/my-binary --my-arguments
```

Ideally for Rust, with cargo integration like:

```bash
cargo debug --build-arguments -- --command-arguments
```

I think it's possible to sync breakpoints with VSCode even without plugins for that (if I migrate to neovim, I guess I'll need to develop a specialized integration plugin, haven't researched that yet).
And OFC I want it to support all of the features I mentioned as negatives in using VSCode.
I'm considering implementing the Debug Adapter Protocol for a start, and then migrating to the native Mach-O API and CoreSymbolication.framework. Currently I'm only interested in supporting macOS on ARM, because that's the kind of system I use.
I'm interested in whether anyone else wants to use something like that, and which features would you need. Are there any other good options available? What would you recommend using to debug programs specifically?
P.S. My first post on reddit
Back in January I pinged the Zenodo folks about a Rust client, never heard back, so I made one.
I have long-running experiments that periodically publish results on Zenodo, and instead of re-implementing the same flaky requests yet again, I wrote zenodo-rs.
It covers deposition create/update/publish flows, latest-version and DOI lookup, and artifact downloads.
Test coverage is at 97%, and there is also a daily live sandbox workflow that creates, versions, publishes, and downloads real sandbox records, so if anything breaks we should know fairly quickly.
I hope it will be helpful to other researchers.
Hi all,
I've worked with embedded Rust for the last year and a bit and I love it. embedded-hal is so good and the portability of drivers is amazing.
Usually when I make an I2C driver, I build it against the embedded-hal traits. This allows me to reuse the same driver across multiple microcontrollers, single-board computers, and my Mac.
However, when I try to do this with a serial-based device over UART, there is no trait in embedded-hal for that. There are some in embedded-io, but popular crates such as serialport don't support those traits, so my driver doesn't bind to that interface.
Am I missing something here? Any help would be great!
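For illustration, the pattern I'd reach for is writing the driver against small serial traits and adapting whatever backend (MCU UART, serialport, a mock) to them. A self-contained sketch with stand-in traits (the real ones would be embedded-io's blocking `Read`/`Write`; these simplified versions are just so the example compiles on its own):

```rust
// Stand-ins for embedded-io's blocking Read/Write traits.
trait SerialRead {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize, ()>;
}
trait SerialWrite {
    fn write_all(&mut self, buf: &[u8]) -> Result<(), ()>;
}

// The driver is generic over the traits, not a concrete UART, so the same
// code runs on a microcontroller, an SBC, or a desktop serial adapter.
struct MyDevice<U> {
    uart: U,
}

impl<U: SerialRead + SerialWrite> MyDevice<U> {
    // Hypothetical protocol: send 0x01, read back a one-byte device ID.
    fn query_id(&mut self) -> Result<u8, ()> {
        self.uart.write_all(&[0x01])?;
        let mut buf = [0u8; 1];
        self.uart.read(&mut buf)?;
        Ok(buf[0])
    }
}

// A loopback mock standing in for real hardware, also handy for tests.
struct Mock {
    reply: u8,
}
impl SerialRead for Mock {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize, ()> {
        buf[0] = self.reply;
        Ok(1)
    }
}
impl SerialWrite for Mock {
    fn write_all(&mut self, _buf: &[u8]) -> Result<(), ()> {
        Ok(())
    }
}
```

On the host side you'd then write one small adapter impl wrapping serialport's own `Read`/`Write`, which is the glue the ecosystem currently makes you write by hand.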
After a few years building Java/Spring microservices in enterprise settings, I kept hitting the same problems: N+1 queries that slip through code review because they span multiple services, and redundant HTTP calls nobody notices until production latency spikes. Every project, same patterns, different stack.
I also have a background in environmental science (I founded and ran a scientific association for 3 years, organizing conferences with climate researchers from the IPCC and IPBES). So when I started thinking about an N+1 detector, I naturally wanted to quantify the environmental cost of wasteful I/O too, not just the latency impact.
Existing tools are either runtime-specific (Hypersistence Optimizer only works with JPA), heavy and proprietary (Datadog, New Relic) or don't correlate across services. So I built perf-sentinel, a lightweight Rust CLI that analyzes runtime traces and flags these patterns automatically regardless of language or ORM.
How it works:
It takes OpenTelemetry traces (or Jaeger/Zipkin exports) and runs them through a pipeline: ingest -> normalize -> correlate -> detect -> score -> report. Detection is protocol-level, it sees the SQL queries and HTTP calls your code produces, not the code itself. So it works the same whether you're using JPA, EF Core, SeaORM or raw SQL.
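As an illustration of the detect stage (my own simplified sketch, not perf-sentinel's actual code): within one trace, many spans executing the same normalized SQL shape is the classic N+1 signature, which a group-and-count over spans catches:

```rust
use std::collections::HashMap;

// Minimal span model: just the fields the check needs.
struct Span {
    trace_id: String,
    db_statement: Option<String>,
}

// Real normalization strips literals and placeholders so "id = 1" and
// "id = 2" collapse to one shape; lowercasing stands in for that here.
fn normalize(sql: &str) -> String {
    sql.to_lowercase()
}

// Flag (trace, statement-shape) pairs repeated at least `threshold` times.
fn detect_n_plus_one(spans: &[Span], threshold: usize) -> Vec<String> {
    let mut counts: HashMap<(String, String), usize> = HashMap::new();
    for s in spans {
        if let Some(sql) = &s.db_statement {
            *counts.entry((s.trace_id.clone(), normalize(sql))).or_insert(0) += 1;
        }
    }
    counts
        .into_iter()
        .filter(|(_, n)| *n >= threshold)
        .map(|((trace, sql), n)| format!("{trace}: {n}x {sql}"))
        .collect()
}
```

Because the input is spans rather than source code, the same check works regardless of which ORM or language produced the queries, which is the point made above.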
What it detects:
What makes it different:
- `perf-sentinel analyze --ci` with configurable quality gate and exit codes. If someone introduces an N+1 the pipeline breaks.
- `perf-sentinel explain --trace-id abc123` shows a tree view of a trace with findings annotated inline.
- `perf-sentinel inspect` opens a TUI to browse findings interactively.
- `pg_stat_statements` data for DB-side validation.

Numbers:
I've been dogfooding it on a personal polyglot microservices project (Java Spring WebFlux + Virtual Threads, Quarkus/GraalVM Native, C# .NET NativeAOT, Rust Actix, all talking to each other) and it caught real N+1s across all stacks without any language-specific configuration.
Still early (v0.2.2), and I would really appreciate feedback on:
cargo install perf-sentinel or grab a binary from the releases page.
Hey r/rust! I built a Rust-based OpenTelemetry agent with a configurable metrics pipeline and would love your feedback! Supports Windows and Linux, (builds for Solaris, but not tested)
https://github.com/observantio/ojo
Happy to discuss the internals, design decisions, or anything you think could be done better!
Hey everyone! 👋
I recently finished the Rust book and I mean really finished it. Since there were no online resources available in my native language, I not only read through the entire book but also digitized all the written content alongside the code examples as inline comments. It was a big undertaking, but I'm glad I did it.
I should mention I'm someone with ADHD and a bit of a perfectionist streak, which means that when I'm focused on learning something, I really can't split my attention across multiple things at once. So throughout my reading journey, I kept my projects very minimal and intentional, rather than diving into serious builds. But now that the book is done, I'm excited to change that!
For context on my background: I do have some prior programming experience, though it's been fairly limited: mostly basic backend work and desktop/CLI tools built with Python, usually automation-oriented stuff. I work at a digital marketing company and approach programming and CS purely as a hobby. That said, it's genuinely paid off at work: the company's tech infrastructure was quite lacking when I joined, and I've slowly been modernizing and automating their systems, which has also helped me stand out there.
Now, to my actual question:
What beginner-friendly Rust projects would you recommend to solidify my knowledge and deepen my practical understanding?
I've thought about building small CLI tools with clap, but I worry those might just end up being automation scripts or mini utilities —fun, but not necessarily pushing my Rust knowledge further. Before I even properly started learning Rust, I did build a rough CLI tool that analyzed code and added inline comments in appropriate places, but it was pretty basic and I wouldn't call it a great showcase.
I'd really love to hear your suggestions especially projects that would help me grow specifically as a Rust developer, not just as a programmer in general. Any ideas are warmly welcome!
| i built this project from scratch in rust (engine, guts and all) to be a kind of dwarf fortress/civilization crossover, wdyt of the look? I would love your opinions and ideas! check out davesgames.io for more information on me :) [link] [comments] |
| submitted by /u/h888ing [link] [comments] |
Hello guys
I have a problem while building a restaurant backend service in Go.
I tried to share a struct between a parent folder and a subfolder with the same package name, but it didn't work. The only way I got it working was by deleting the subfolder and moving the file into the parent folder, which I don't want to do; I want to keep my project well organized without creating a new package just for this.
I could ask AI, but I prefer understanding how it actually works so I don't forget it later. I'd really appreciate it if someone could explain what's going on and the proper way to structure this.
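For what it's worth, Go ties a package to its directory's import path, not to its declared name: a subfolder is always a separate package, even if it declares the same `package` name as its parent, which is why the struct wasn't visible. The idiomatic fix is to give the shared types their own small package and import it by path; a layout sketch (module path hypothetical):

```go
// go.mod:        module example.com/restaurant
//
// models/dish.go — the shared types get their own package:
//
//   package models
//
//   type Dish struct {
//       Name  string
//       Price int
//   }
//
// main.go (parent folder) — any package imports it by path:
//
//   import "example.com/restaurant/models"
//
//   d := models.Dish{Name: "Pho", Price: 12}
```

Two files in different directories can never share one package, so "a new package just for this" is the mechanism Go gives you; keeping it tiny and well-named is the usual compromise.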
Thanks
Hi all,
I've never really done any performance optimization. I now have a pretty stable interface, properly tested etc., and have started to do bench tests.
My approach has been naive: I mostly focused on allocs first, and second on perceived variations in sec/op. But I really feel insecure in this approach; allocs are deterministic, but sec/op is not at all.
My question is: is there a way to more confidently optimize my code, in a way that feels more deterministic? Do you guys set up a controlled Docker env for optimization or something? Thank you for your attention.
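Allocs/op really is the deterministic axis, and you can even assert on it programmatically. A sketch (the `join` helper is made up for illustration):

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

func join(parts []string) string { return strings.Join(parts, ",") }

func main() {
	// testing.Benchmark runs a benchmark outside `go test`. Allocs/op is
	// deterministic, so it's a stable thing to gate on in CI; ns/op is
	// the noisy one and needs repeated runs plus statistics to compare.
	res := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		parts := []string{"a", "b", "c"}
		for i := 0; i < b.N; i++ {
			_ = join(parts)
		}
	})
	fmt.Println("allocs/op:", res.AllocsPerOp())
}
```

For sec/op, the usual approach is not a magic environment but statistics: run `go test -bench=. -count=10` on a quiet machine and feed the two outputs to `benchstat` (from golang.org/x/perf), which reports the delta with a p-value so you know whether a change is real or noise.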
| We open-sourced Spath and Splan to explore this topic and unpack what it means for the future of AI tool developers. As our first dialect we include an Spath spec for Go. [link] [comments] |
| Disclosure: I'm not the creator. Go has an amazing runtime. It's almost a perfect language for most networky things. The surface has things that could be improved, but having them in Go is probably not even a good idea at this point in time. Instead, something like TS-for-Go is probably what we will see more of in the future. Here's one project I stumbled upon that has additional typing features many devs consider a must-have for development. [link] [comments] |
Hi everyone. I'm using Wails for a desktop app, and it's been awesome so far. However, I'm having a hard time implementing and maintaining a full OIDC flow including a localhost callback server. I'm hitting problems like the callback server being terminated inadvertently, and if the app idles for a long time the server basically dies. I feel like I'm doing a lot of things by myself, including PKCE etc. Is there any framework for me to achieve this, or how do I do this better?
I've been thinking about that recent thread where people mentioned goroutines becoming the new "just add threads" mindset. I see it a lot in code reviews: coworkers spinning up a goroutine for every little task without thinking about limits. Sure, goroutines are cheap, but they aren't free. The runtime overhead adds up and downstream systems can get overwhelmed.
My question is: how do you actually measure this in production? I know about setting GOMAXPROCS and using semaphores or worker pools, but I'm curious what metrics people monitor. Do you look at scheduler latency, number of runnable goroutines, memory footprint, something else? I want to be able to point to real data when I push back on unnecessary goroutine usage. I know the typical advice is "don't prematurely optimize", but when I see a pattern that clearly doesn't scale I want evidence. The team I work with tends to argue that goroutines are so lightweight it doesn't matter. I'd like to show them otherwise in a constructive way without being the performance police. What do you actually monitor in your observability stack to catch goroutine explosion before it becomes an incident? Any specific Prometheus metrics or Go runtime stats you've found useful for this?
Hey r/golang! I’m the maintainer of GO Feature Flag, an open-source feature flag solution built on top of the OpenFeature standard.
We just shipped a feature I’m really proud of: in-process evaluation for our server-side OpenFeature providers.
The problem it solves:
Until now, every flag evaluation triggered a network call to the relay-proxy. That’s fine for most setups, but on hot paths it adds up fast — latency, throughput pressure, and fragility if the network hiccups.
How it works:
- The provider periodically fetches the flag configuration from the relay-proxy and stores it in memory
- Flag evaluation runs entirely inside your application process — no network call on the critical path
- Evaluation events are collected locally and sent back asynchronously, so you keep full observability

Supported providers: Go, Java, .NET, Python, JavaScript/TypeScript
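The pattern itself is simple enough to sketch; this is a hypothetical, stripped-down illustration of the mechanism, not the actual provider code:

```go
package main

import (
	"fmt"
	"sync"
)

// A periodically refreshed in-memory flag table: the poller swaps the map
// under a write lock, and every evaluation is just a read-locked lookup.
type Flags struct {
	mu    sync.RWMutex
	flags map[string]bool
}

// Refresh is what the periodic poll of the relay-proxy would call.
func (f *Flags) Refresh(latest map[string]bool) {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.flags = latest
}

// Bool evaluates a flag with no network call on the critical path.
func (f *Flags) Bool(key string, fallback bool) bool {
	f.mu.RLock()
	defer f.mu.RUnlock()
	if v, ok := f.flags[key]; ok {
		return v
	}
	return fallback
}

func main() {
	f := &Flags{flags: map[string]bool{}}
	f.Refresh(map[string]bool{"new-checkout": true}) // normally on a ticker
	fmt.Println(f.Bool("new-checkout", false))
}
```

The real providers layer targeting rules, typed flags, and the async event export on top, but the latency win comes from exactly this swap: a network round-trip per evaluation becomes a map lookup.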
When to use it:
- Latency-sensitive workloads → in-process is the way to go
- Sidecar deployments where the proxy sits right next to your app → remote evaluation still works great

Full blog post: https://gofeatureflag.org/blog/2026/03/31/in-process-openfeature-providers
GitHub: https://github.com/thomaspoignant/go-feature-flag
Happy to answer any questions!
| A Go language extension that turns HTML templates into typed Go expressions. In core principles it's similar to `templ`: it compiles to Go, has its own language server, CLI tool and IDE extensions, and can write to `io.Writer`. BUT it supports LSP edits (rename works!) and does not require you to run a separate generate command (it generates the .go file on save automatically). LSP navigation works across `.gox` and `.go` files seamlessly. Also, it supports HTML as a first-class expression. The main distinction from `templ` is that the template is converted into a stream of typed Jobs that you can preprocess before actual output. I basically tried to solve all the issues I had with `templ` and I think I succeeded. P.S. I built it primarily for my server-driven web app runtime, but it works standalone perfectly and is templ-compatible. [link] [comments] |