Monday, April 6, 2026
| Summary | ⛅️ Mostly clear until afternoon, returning overnight. |
|---|---|
| Temperature Range | 10°C to 19°C (51°F to 66°F) |
| Feels Like | Low: 48°F / High: 66°F |
| Humidity | 78% |
| Wind | 14 km/h (9 mph), Direction: 263° |
| Precipitation | Probability: 90%, Type: No precipitation expected |
| Sunrise / Sunset | 🌅 06:28 AM / 🌇 07:12 PM |
| Moon Phase | Waning Gibbous (64%) |
| Cloud Cover | 35% |
| Pressure | 1019.29 hPa |
| Dew Point | 51.91°F |
| Visibility | 5.97 miles |
I have a Go server built for REST endpoints with local authentication set up using JWT. I would like to implement OIDC using Authentik, but I'm not sure if this is a wise move. I need some advice on this.
Hey r/golang,
I've been working on this for a while and finally decided to open source it: agentflow.
I built it as a production-ready, streaming-first framework for developing agentic AI systems in Go. Most existing tools are Python-heavy and I often found them lacking when it came to real production needs like proper error handling, rate limiting, circuit breakers, and context management.
So I decided to create something that feels solid and reliable for actual services.
- Streaming-first design with 17+ different event types (ready for SSE)
- Works with OpenAI, Anthropic, Groq, Ollama and any custom provider
- Zero external dependencies in the core
- Built-in production features: rate limiting, circuit breaker, retry logic, tool timeouts
- Automatic tool calling and context compaction
- Clean and straightforward API
I'm still actively developing it. There are a few examples in the repo (basic chat, custom tools, full streaming, etc.).
GitHub: https://github.com/CanArslanDev/agentflow
Examples: https://github.com/CanArslanDev/agentflow/tree/main/examples
I'd really appreciate any feedback, suggestions, or honest opinions.
A few things I'm especially curious about:
- How are you currently building AI agents in Go?
- What production pain points do you usually run into?
- Any features you'd like to see in a framework like this?
Thanks in advance
I've been using FreeCodeCamp for a while and it helped me get comfortable with the basics, but I'm starting to feel like I want something more backend focused. I'm more interested in things like:

- how APIs actually work
- working with databases (queries, schema, etc.)
- CLI tools and Linux basics
- Git and real workflows
- how backend systems are structured

The issue I'm running into is that a lot of resources either:

- stay too beginner/tutorial based
- or jump straight into frameworks without explaining fundamentals

I don't mind paying if it's worth it, but I'm mainly looking for something structured where you actually build things rather than just follow along. For people who moved on from FreeCodeCamp, what worked for you?
Inspired by rtk (Rust Token Killer), I built snip to solve the same problem with a different approach: filters are YAML data files, not compiled Rust code.
AI coding agents burn tokens on verbose shell output. A passing go test is hundreds of lines the LLM never uses. git log dumps full metadata when a one-liner suffices.
snip sits between your AI tool and the shell, filtering output through declarative YAML pipelines before it hits the context window.
Before: `go test ./...` → 689 tokens
After: `10 passed, 0 failed` → 16 tokens (97.7% reduction)
Some design choices that might interest Go devs:
Real session: 128 commands filtered, 2.3M tokens saved.
Works with Claude Code, Cursor, Copilot, Gemini CLI, Aider, Windsurf, Cline.
GitHub: https://github.com/edouard-claude/snip
Feedback welcome, especially on the filter DSL design.
Hi gophers,
I just released oaswrap/spec v0.4.0.
New in this release:

- fiberv3openapi (fiber v3)
- echov5openapi (echo v5)

Current adapter support includes:

- net/http
- chi
- gin
- gorilla/mux
- fiber (v2 and v3)
- echo (v4 and v5)

oaswrap/spec is intentionally adapter-focused and docs-serving focused. In short: oaswrap/spec targets that use case.

submitted by /u/SnooWords9033
Database proxy in Go calling libSQL (a SQLite fork) through CGO. One connection doing SELECT 1 uses 4.2GB RSS. The macOS heap profile shows only 335KB allocated by C code. vmmap shows 12+ Go heap arenas of 128MB each, all via mmap, never released.
Same library called from Rust: 9MB. Tried purego instead of CGO: same 4.4GB. GOMEMLIMIT, GOGC=10, debug.FreeOSMemory() — none help. The arenas aren't tracked by Go's GC.
Is there any way to prevent Go from allocating these arenas on foreign function calls, or is the only fix not using Go for this workload?
Before testing the game with public players, I am thinking about testing the very early version (tutorial, first level) with paid playtesters. My hope is to eliminate bugs before I launch the playtest on Steam. Does that make sense? Does anyone have experience with paid playtesting? It's an automation game, if genre matters.
I've been working on a top-down survival game for a while now and somewhere along the way I started calling it an RPG. But recently I took a step back and asked myself — is it actually one?
It has XP, leveling, an inventory system, different weapon types with rarity tiers, equippable gear, a story with 18 missions. On paper that sounds like an RPG. But I keep going back and forth on whether slapping progression systems onto a game makes it an RPG or if there's something deeper that I'm missing.
Like, some of the best RPGs ever made are really about choice and consequence. Your decisions shape the world. Meanwhile my game is more about builds and loot and surviving encounters. Does that make it an action game with RPG elements? Or is that gatekeeping a genre that's evolved way past its tabletop roots?
I think about games like Diablo — most people call it an RPG but you're not really roleplaying, you're clicking on demons and watching numbers go up. Then there's something like Disco Elysium where there's barely any combat but it's one of the most RPG things ever made. The spectrum is wild.
For those of you who've shipped games or are deep in development — where do you personally draw the line? Is it meaningful player choice? Character progression? Stat-based combat? Some combination?
And more practically — if your game sits in that grey area between genres, how do you market it? I've seen "action RPG" and "RPG-lite" and "with RPG elements" thrown around so much that they almost mean nothing anymore. How did you decide what to call yours?
The GDC 2025 State of the Industry survey broke down layoff rates by discipline. Narrative design came in at 19%. That's higher than visual arts (16%), production (16%), programming (12%), game design (9%), and business (6%).
The total damage across the industry: roughly 29,000 jobs lost between 2023 and 2025. Embracer Group alone went from 15,701 employees to 7,873 in under a year.
At the same time, the games shipping right now have more written content than ever:
And live service games need content drops every 6 to 8 weeks to keep players around.
So the industry is cutting the people who write the content while demanding more content than ever. 28% of developers surveyed by GDC in 2026 said they personally lost their job in the past two years. For US respondents, it was 33%.
I don't think studios are cutting narrative because they don't value it. They're cutting it because it's expensive, hard to scale, and they're under pressure to reduce headcount everywhere. But the content expectations aren't going down. Players who experienced BG3's depth expect that level of narrative investment in other games.
Something has to give. Either budgets go up (they won't; AAA is already at $200M to $300M per title), team sizes grow back (we're already seeing this with the boom of indie/AA teams), or we have to find ways to do more with less. We need to spend more time being creative and less time on implementation.
How do you guys read these industry trends? Are you also seeing that players want more and deeper narrative? More meaningful choices and agency?
Aece - LoreWeaver
It’s interesting — I think everyone who once wanted, or still wants, to become a programmer started with some kind of idea.
Some dreamed of making games, others wanted to build websites, and some just wanted to “know everything” and be that person who can fix anything on the spot.
But I’m more curious about something else: did you have a specific idea that truly drove you?
Maybe it was your dream game?
Or a service/website you felt people really needed?
Or even a project that you thought could “change everything”?
Share your story:
What was your idea?
Did you start working on it?
And where is it now?
I’m really curious to hear what pushed people to keep going — and maybe there are some of you whose ideas turned into something bigger than you ever imagined back when it all started.
EDIT: Thanks to everyone who responded positively to this post — really appreciate it 🙌 But I realized I’m actually more interested in the ideas themselves rather than the full stories behind them.
I have an idea of my own, but it’s still pretty vague. That’s why I want to hear your ideas — to get a sense of how interesting or unique mine actually is, and whether it’s worth pursuing.
So if you can, focus on the idea itself. I’d love to compare and understand if mine is something worth building.
I'm sorry if this isn't the right subreddit to ask this question. I don't have any experience in gamedev, but I've wanted to try it. Is it a good idea to do it on my smartphone or is it not worth it? If it is worth it, can someone recommend an engine and how to get started?
Hey everyone, I've been working on a plugin called Filtr because I wanted a faster way to test out different "looks" (VHS, Film Noir, Retro, etc.) without messing around with complicated environment nodes every time. It uses a custom CShader setup that you can toggle with a single click or even trigger using 'FiltrZones' (think Area3D but for shaders). What's inside:
It's completely open source. I'm looking for feedback on what presets are missing, or if anyone has performance issues with the CRT/VHS stuff on lower-end hardware.
I've been working solo on this RPG for about 2 years now.
It's focused on helping a struggling village — less about combat, more about choices, survival, and interacting with people.
I recently updated the trailer and tried to improve the first few seconds to better show the main hooks and systems.
Would genuinely appreciate feedback — especially on whether the game concept comes across clearly.
I’ve been experimenting with a different way to approach game design.
Instead of writing everything in a traditional GDD, I tried structuring things as entities with relationships.
So instead of writing something like:
"Fireball applies burn which is affected by enemy resistance"
It becomes something more like:
Fireball
→ applies → Burn
→ interacts with → Enemy Fire Resistance
The idea is that instead of describing systems in text, you define how they connect directly.
I found it interesting because it makes relationships between mechanics more explicit, but it also feels a bit more rigid than just writing things out.
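To make the idea concrete, the Fireball example above can be encoded as plain data that you can query. This is a minimal illustration in Rust; all type, variant, and field names here are mine, not from the post.

```rust
// Encoding "Fireball -> applies -> Burn" as data instead of prose.
// All type, variant, and entity names are illustrative.
#[derive(Debug, PartialEq)]
enum Relation {
    Applies,
    InteractsWith,
}

struct Edge {
    from: &'static str,
    relation: Relation,
    to: &'static str,
}

fn main() {
    let design = [
        Edge { from: "Fireball", relation: Relation::Applies, to: "Burn" },
        Edge { from: "Burn", relation: Relation::InteractsWith, to: "Enemy Fire Resistance" },
    ];

    // Relationships become queryable: what does Fireball apply?
    let applied: Vec<&str> = design
        .iter()
        .filter(|e| e.from == "Fireball" && e.relation == Relation::Applies)
        .map(|e| e.to)
        .collect();
    assert_eq!(applied, ["Burn"]);
}
```

The upside of this shape is that consistency checks ("does every applied effect exist as an entity?") become simple queries over the edge list, at the cost of the rigidity mentioned above.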
Curious what others think:
Does structuring design like this make sense?
Or do you prefer keeping things more flexible in docs/spreadsheets?
Post-mortem devlog of my Chaos physics game, Backrooms break.
In a sad portent for the state of the games industry, a unicorn has fallen. Rec Room, a multiplatform UGC metaverse once valued at $3.5b, is shutting down. And in this shutdown, there is a crystal clear message for all VC-funded and VC-funding-aspiring founders in the games industry: we are firmly in the age of profitability.

There will be an acquihire of some talent and technology to Snap, and a high-integrity shutdown that includes the leadership specifically calling it early so they can do right by its people (and perhaps even return some capital to investors). It is sad news for the players, the founders, and the many employees and their families who have lost their jobs.

And the reasons are clear. "Despite [our] popularity, we never quite figured out how to make Rec Room a sustainably profitable business. Our costs always ended up overwhelming the revenue we brought in," reads the shutdown announcement.

My 20+ year career in the games industry has been spent primarily working for or contracting with VC-funded startups. And for most of those startups, the success story we were trying to tell was one of growth. We aimed to create traction that proved growth, and told investors: "if you pour money into this startup, we will convert it into rocket fuel that takes this ship to the stars." It's through this type of traction, this type of growth story, that a UGC metaverse play that brought joy to over 150 million players and creators could be valued at $3.5 billion despite being a money-losing venture.

For those purely in the AI space, hypergrowth is still the name of the game. But here in the games industry, where recent years have seen plenty of big-ticket failures and few substantial exits, profitability is top of mind.

When it comes to running companies and making games, I take a broad view of success. If you are measuring yourself only against the metric of a profitable exit, then statistically you will fail, since of course, nearly all startups fail. But there are many other forms of success, and Rec Room achieved many of them. Raising $300k is an achievement, let alone nearly $300m. Releasing on 1 platform is an achievement, let alone 6 of them. Serving 150 players is an achievement, let alone 150 million of them. Putting roofs over heads and food on the table for hundreds of employees and their families is an achievement.

But, despite all of these achievements, Rec Room did not succeed in the most important way for any VC-funded startup. Because the second you take money from investors, you are signing a contract: you will do everything in your power to bring your investors a multiple on their investment. If they're early stage, they might be hoping for a 100x or 50x return. In later and later series, it might be a 10x or 5x return. But still, a return is what you are promising to chase when you take their money.

In the past few years, we've seen plenty of 8- and 9-figure VC investments into gaming companies and platforms. And we've seen plenty of these investments go to zero without a meaningful game launch, or launch a game that was DOA, or launch a live service that petered out within months. The environment for raising money from LPs has changed, and in lockstep the environment for raising money from VCs has changed.

For one of my clients, I am assisting them with shaping the story for their VC pitch. In the first pitch feedback session, one of my strongest pieces of feedback, even at a pre-seed stage, was that the story was lacking a focus on profitability. For any pitch at this stage, I would strongly recommend two things:

1) A credible story for how this company will chase profitability with their pre-seed and seed funds
2) A feature roadmap and P&L that backs that story

Not that these elements will guarantee landing an investment. More that they are table stakes. If you are not achieving profitability and growth in your seed stage, the chances of you landing Series A investment in the current market are weak. Just showing signs of growth, in most instances, will not be enough.

So if you are an early-stage founder and will be chasing VC investment in the near future, do your diligence. Focus on profitability with slower growth, not hypergrowth at any cost. Hypergrowth might have worked a decade ago. It might still work for OpenAI and Anthropic. But it is highly unlikely to work for you.
I always dreaded linking my local Git repository to GitHub because I thought it would take ages and things would go wrong. Today I finally went ahead and tried making it work, and it only took about 20 minutes to figure everything out so that it all works together. I'm so glad this works! Now I can commit and push without having to look back :D
No idea how Nyanners found it, but she is so funny and I was just smiling watching all the way through! As you know, indie game dev can get lonely sometimes. We sit in a dark room with two monitors for most of the process and just silently work... so for the demo to come out and for Nyanners to give it a chance is just... crazy.
I'm working on and researching a CLI tool that diffs code at the entity level (functions, classes, structs) instead of raw lines.
Line-level diffs are optimized for human eyes scanning a terminal. But when you feed a git diff to an LLM, most of those tokens are context lines, hunk headers, and unchanged code. The model has to figure out what actually changed from the noise. I did some attention score calculations as well, and it increases significantly in the model when you feed semantic diffs instead of git diffs.
sem extracts entities using tree-sitter and diffs at that level. Instead of line counts with +/- noise, you get exact entity changes: which struct changed, which function was added, which ones were modified. Fewer tokens, more signal, better reasoning.
It also does impact analysis. sem impact match_entities shows everything that depends on that function, transitively, across the whole repo. Useful when you're about to change something and want to know what might break.
Commands:
Multiple language parsers (Rust, Python, TypeScript, Go, Java, C, C++, C#, Ruby, Bash, Swift, Kotlin), plus JSON, YAML, TOML, Markdown, and CSV.
Written in Rust. Open source.
I started learning Rust this week, and I wanted to port the https://github.com/Blizzard/heroprotocol project, which can be used to read replay files of Heroes of the Storm. It is written in Python, and I thought it would be a good first project.
As you can see, the project has a “versions” folder where each version file has an array with instructions for how to decode these files. The decoders then dynamically construct the structs by following the typeids.
For example, the user wants to decode the "header", which has a hardcoded entry point of 18 at the latest game version. When you look at index 18 in the typeinfos array, you see this is a "_struct", which has more info about its fields and their types.
My initial idea for representing this in Rust was to model structs based on these typeinfos, but I got stuck multiple times, so I wanted to ask if this is even the right approach.
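For what it's worth, one approach that avoids generating a Rust struct per typeinfo entry is to keep the typeinfos as data in Rust too, mirroring the Python arrays, and let the decoder walk the table by index. A minimal sketch, with illustrative variant and field names not taken from the real project:

```rust
// Model the typeinfos table as a recursive enum plus indices into a shared
// table, instead of one hardcoded struct per version.
#[derive(Debug)]
enum TypeInfo {
    Int { offset: i64, bits: u8 },
    Blob { len_bits: u8 },
    Struct(Vec<Field>),
}

#[derive(Debug)]
struct Field {
    name: &'static str,
    type_id: usize, // index back into the shared typeinfos array
}

fn main() {
    // Tiny stand-in for one version's typeinfos array; a real decoder would
    // walk it by index, exactly like the Python implementation does.
    let typeinfos = vec![
        TypeInfo::Int { offset: 0, bits: 32 },  // 0
        TypeInfo::Blob { len_bits: 8 },         // 1
        TypeInfo::Struct(vec![                  // 2: a "header"-like entry point
            Field { name: "signature", type_id: 1 },
            Field { name: "version", type_id: 0 },
        ]),
    ];

    // Decoding starts from a hardcoded entry point (here, index 2).
    if let TypeInfo::Struct(fields) = &typeinfos[2] {
        assert_eq!(fields.len(), 2);
        assert_eq!(fields[0].name, "signature");
    }
}
```

Decoded values would then live in a dynamic type (an enum of Int/Blob/Struct values) rather than per-version static structs, which sidesteps the place where the static-struct approach tends to get stuck.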
Thanks
Couldn't find a good hybrid of frecency (like zoxide) and a live fuzzy search (like fzf) being used for directory jumping so I decided to build one with a simple TUI.
fast-jump crawls through your directories and sorts them according to their fuzzy + frecency scores.
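As a rough illustration of what combining the two scores could look like (the weights, aging buckets, and function names below are hypothetical, in the spirit of zoxide's recency buckets, and are not fast-jump's actual implementation):

```rust
// Hypothetical frecency: visit count weighted by how recently the path was used.
fn frecency(visits: u32, secs_since_last: u64) -> f64 {
    let recency_weight = match secs_since_last {
        0..=3_600 => 4.0,        // within the last hour
        3_601..=86_400 => 2.0,   // within the last day
        86_401..=604_800 => 0.5, // within the last week
        _ => 0.25,
    };
    visits as f64 * recency_weight
}

// Naive subsequence fuzzy score: +1 per query char matched in order.
fn fuzzy(query: &str, path: &str) -> f64 {
    let mut chars = path.chars();
    let mut score = 0.0;
    for q in query.chars() {
        if chars.any(|c| c.eq_ignore_ascii_case(&q)) {
            score += 1.0;
        } else {
            return 0.0; // not a subsequence: no match at all
        }
    }
    score
}

fn combined(query: &str, path: &str, visits: u32, secs: u64) -> f64 {
    fuzzy(query, path) * frecency(visits, secs)
}

fn main() {
    let score = combined("prj", "/home/me/projects", 10, 100);
    assert_eq!(score, 120.0); // 3 matched chars * (10 visits * 4.0)
    assert_eq!(fuzzy("xyz", "/home"), 0.0);
}
```

Multiplying (rather than summing) means a non-matching path scores zero regardless of how often it was visited, which matches the jump-to-match behavior described above.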
It currently searches ~184k paths in about 400ms on my machine. Do let me know how it performs on yours :)
Would really appreciate any feedback or suggestions!
I just released v0.11.0 of Danube, an open-source messaging platform written in Rust. This release adds a complete security layer:
Danube is a messaging/streaming platform built from scratch in Rust with: embedded Raft consensus (no etcd/ZK dependency), sealed-segment WAL persistence (local/shared FS/S3/GCS/Azure), partitioned topics, multiple subscription types, schema registry, automated cluster rebalancing, and MCP integration for AI-assisted cluster management.
Details on: https://danube-docs.dev-state.com/concepts/security/
If you find Danube interesting, a star on GitHub goes a long way in helping the project grow.
Hey folks, looking for a piece of advice.
My goal is to provide a more general version of the phf crate. In a nutshell, phf lets you write:
```rust
static KEYWORDS: phf::Map<&'static str, Keyword> = phf_map! {
    "loop" => ...,
    "continue" => ...,
    "break" => ...,
    "fn" => ...,
    "extern" => ...,
};
```
...and that builds an efficient hash-map in compile time, producing something like
```rust
... = phf::Map { key: ..., disps: ..., entries: ... };
```
The issue is that phf_map! exclusively supports strings as keys, and that's kind of the worst case for PHFs -- they behave much better on e.g. integers or other fixed-size data, which I'm personally interested in.
So I'd really like it if I could write something like
```rust
static NUMBERS: phf::Map<(u32, u64), Keyword> = phf_map! {
    (1u32, 2u64) => ...,
    (3, 4) => ...,
};
```
There are two issues here:
phf_map! needs to somehow parse (1u32, 2u64) into a tuple and then execute the corresponding Hash implementation.
After building the hash table, it needs to emit a struct literal for phf::Map, which requires converting the produced data back into Rust code.
The first point would be solved by crabtime, which allows Rust code in macros to be executed in compile-time, but it seems to be unmaintained and buggy.
Codegen seems to be completely unsolved -- while quote! exists, there's nothing like a Codegen trait for primitive types that I could integrate with, and the few tools that do exist focus on types and items rather than expressions. crabtime is even more of a joke, asking users to format string literals by hand.
So I'm kind of stuck here. I could implement the parser and type system and provide the relevant codegen traits myself, but a PHF library feels like the wrong place to put them, and besides adding this complexity to the mix doesn't strike me as a good idea.
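To make the missing piece concrete, here is a sketch of the kind of Codegen trait for primitives described above, done with plain strings over std only. (At the token level, quote's ToTokens trait plays a similar role, since it emits suffixed literals for primitives.) The trait and method names are mine, purely illustrative:

```rust
// A sketch of the missing codegen piece: turn runtime values back into Rust
// source text for primitives and tuples, so a PHF builder can emit the
// computed keys/entries as a struct literal.
trait Codegen {
    fn to_rust(&self) -> String;
}

impl Codegen for u32 {
    fn to_rust(&self) -> String {
        format!("{}u32", self)
    }
}

impl Codegen for u64 {
    fn to_rust(&self) -> String {
        format!("{}u64", self)
    }
}

// Tuples compose from their parts, so (u32, u64) keys round-trip for free.
impl<A: Codegen, B: Codegen> Codegen for (A, B) {
    fn to_rust(&self) -> String {
        format!("({}, {})", self.0.to_rust(), self.1.to_rust())
    }
}

fn main() {
    let key = (1u32, 2u64);
    assert_eq!(key.to_rust(), "(1u32, 2u64)");
}
```

The open question in the post still stands: where such impls should live so that a PHF crate can depend on them without owning a whole expression-codegen subsystem.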
So here's my question: are there any long-term plans I'm unaware of that could help here, or perhaps crates dealing with this issue in a different manner, or any other solutions I'm missing? The most obvious answer would be to just use const and avoid macros entirely, but const is so limited (no traits, very slow, no allocation) that it's simply not enough for this task.
Here is the CSS override pasted into my Stylus browser extension.
```css
@-moz-document domain("doc.rust-lang.org") {
  @import url("https://fonts.googleapis.com/css2?family=JetBrains+Mono:ital,wght@0,100..800;1,100..800&family=Jost:ital,wght@0,100..900;1,100..900&family=Libre+Baskerville:ital,wght@0,400..700;1,400..700&display=swap");

  /* Apply to all websites */
  * {
    /* Set New Baskerville for the main body text */
    font-family: "Libre Baskerville", serif !important;
  }

  .header {
    font-family: "Jost", serif !important;
  }

  :not(pre) > .hljs {
    background-color: transparent;
    font-style: italic;
    font-family: "Libre Baskerville", serif !important;
    color: var(--fg) !important;
  }

  .hljs,
  [class^="hljs-"],
  [class*=" hljs-"] {
    font-family: "JetBrains Mono", monospace !important;
  }

  pre > code {
    border-top: 2px solid var(--fg);
    border-bottom: 2px solid var(--fg);
    padding-left: 0px;
    padding-right: 0px;
  }
}
```

I tried to copy the book's fonts to make reading easier, and even the code look, with the separator lines. For the text highlights, I also removed the background color so that you are encouraged to read a whole paragraph without being bombarded by highlights (it helps you focus more).
Key points:
- Trying to make it look like the No Starch version, at least in some parts (font, code format).
- Removing the highlight background (making it italic instead) to encourage readers to read a whole paragraph. In my experience, my brain tends to fly around until I realize I was already skipping a whole paragraph by only reading the highlighted parts.
Preview: https://imgur.com/3zfdXMv
A few days ago I made a post about whether or not Dioxus sparks joy yet. My conclusion was "no", but mainly because the ecosystem wasn't ready yet.
While writing the post, I stumbled over two Dioxus icon libraries, dioxus-free-icons and dioxus-iconify.

- dioxus-free-icons just puts all icons into a Rust file and calls it a day. This does not scale well: the more icon libraries you add, the more compile time increases. Arguably probably not by much, but that's not the point of making any project, right?
- dioxus-iconify, while vibe coded, has the idea of using a CLI approach to just "add" icons and fetch them on the go, kind of building your own component library. But this approach forces you to actually add the icon code to your project and requires a working internet connection.

Again, honestly, none of it is a big deal at all, given the huge flexibility.
But I thought, why not take a hybrid approach: store the icons as data in your manifest directory and parse that data at compile time. This way, the actual API code can be extremely thin (although I chose to still include an index), you can use as many libraries as you want, and it even works offline. Plus, the compiler does not need to parse any unneeded static icon code.
So I made this library with an additional adapter for Dioxus to resolve icons with a convenient API and provide an unstyled Icon component, ready to be consumed by any component library out there.
```toml
pictogram = { version = "*", features = ["material"] }
```

```rust
let svg = pictogram::svg!(pictogram::material::action_123::filled);
println!("{}", svg);
```
The adapter is available here.
```rust
rsx! {
    Icon {
        icon: pictogram::svg!(pictogram::material::action_123::filled),
        width: 48,
        height: 48,
        // ... other attributes of your liking ...
    }
}
```
Please let me know what you think, before I continue to add more libraries to it. What are must have features for you?
Why do lending iterators remain so painful to implement safely even with GATs, and will Polonius actually make them ergonomic without deeper borrow-checker changes?
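For context, this is the textbook GAT formulation and the canonical motivating case (overlapping mutable windows). The simple while-let loop below compiles today; the pain shows up in generic combinators and anything that tries to hold two items at once:

```rust
// A lending iterator with GATs: each item borrows from the iterator itself,
// so only one item can be alive at a time.
trait LendingIterator {
    type Item<'a>
    where
        Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// Overlapping mutable windows over a slice: impossible with std's Iterator,
// whose `next` cannot tie the item's lifetime to `&mut self`.
struct WindowsMut<'s, T> {
    slice: &'s mut [T],
    pos: usize,
    size: usize,
}

impl<'s, T> LendingIterator for WindowsMut<'s, T> {
    type Item<'a> = &'a mut [T] where Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>> {
        let end = self.pos + self.size;
        if end > self.slice.len() {
            return None;
        }
        let start = self.pos;
        self.pos += 1;
        Some(&mut self.slice[start..end])
    }
}

fn main() {
    let mut data = [1, 2, 3, 4];
    let mut w = WindowsMut { slice: &mut data, pos: 0, size: 2 };
    // Each window is dropped before the next `next` call, so this compiles.
    while let Some(win) = w.next() {
        win[0] += 10;
    }
    assert_eq!(data, [11, 12, 13, 4]);
}
```

Writing `fn map`, `fn for_each`, or returning `impl LendingIterator` over this trait is where the current borrow checker's limitations (and the motivation for Polonius) tend to bite.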
I've been working on the next set of sibling projects to the previously announced piaf (an EDID parser) and concordance (a mode negotiation and ranking system). The next step in the journey is FRL link training. To accomplish that, I devised three layered libraries:
My philosophy for building this stack has been taking a part, designing what I think at that point the future sibling projects will need, building it up to a feature-complete MVP, and then moving on to its siblings to see what they actually need to communicate with each other.
If anyone has any feedback, I'd love to hear it. In particular if you're involved in graphics or embedded dev, or working in (Wayland) compositor development. The entire stack has as one of its core design principles that an implementer should be able to extend and adapt it to their needs without the need to fork.
I'm working on a machine with 24GB RAM and running into serious memory issues. I use multiple Neovim sessions for different Rust projects, often jumping between various git branches in separate instances. Each session seems to spawn its own rust_analyzer process, which gradually consumes more and more memory until my system becomes unresponsive.
Currently I have a couple of browser tabs open plus maybe 4-5 Neovim instances active at the same time. The memory usage keeps climbing until everything starts using swap heavily and performance becomes terrible. I've already tried adjusting swap settings, but I'm wondering if there are better approaches.
Has anyone found effective ways to control rust_analyzer memory usage? Maybe some configuration options or different setup that prevents this memory explosion? Would appreciate any suggestions since this is really disrupting my workflow.
I've been building this for a while and just hit v0.1.1. The idea is simple: one command, testx, and it figures out what test framework you're using and runs it. No config needed. It supports 11 languages out of the box (Rust, Go, Python, JS/TS, Java, C++, Ruby, Elixir, PHP, Zig, .NET). For Rust it detects cargo test automatically, but the real value is in polyglot repos where you don't want to remember if it's pytest or go test or cargo test in each directory. The feature I love best is stress mode (testx stress -n 50), which hammers your test suite N times to surface flaky tests before they hit main. Way more useful than just retrying failures after the fact.
v0.1.1 also added monorepo support - testx workspace walks your repo, discovers all projects, and runs them in parallel with isolated adapters.
here: https://github.com/whoisdinanath/testx
I need some honest feedback on what should be improved or added.