Monday, March 30, 2026
| Summary | ⛅️ Windy in the afternoon. |
|---|---|
| Temperature Range | 12°C to 19°C (54°F to 66°F) |
| Feels Like | Low: 42°F / High: 57°F |
| Humidity | 63% |
| Wind | 26 km/h (16 mph), Direction: 245° |
| Precipitation | Probability: 0%, Type: No precipitation expected |
| Sunrise / Sunset | 🌅 06:38 AM / 🌇 07:07 PM |
| Moon Phase | Waxing Gibbous (41%) |
| Cloud Cover | 37% |
| Pressure | 1010.21 hPa |
| Dew Point | 46.8°F |
| Visibility | 6.07 miles |
hey guys! what would be the best platform for a south korean style fps? im thinking CrossFire would be the main reference, plus Sudden Attack, Special Force, Point Blank. can Godot work for this, or what would honestly be the best engine? i have professional Blender modelers and animators i could hire to make the assets, and i can code, but im from Roblox lol and don't have much experience with other engines (but i can learn) ... what would be your recommendation? thanks everyone!!
Apparently this is a thing people do: reviewing literally thousands of games with only a few minutes of playtime each. I only noticed because our game showed up on their list in the last couple of weeks.
I know in the grand scheme of things 1 review is negligible, but for some of those indies with low review counts, that could be disproportionately impactful. It's certainly not useful feedback for anyone.
My understanding was that Steam protects against players repeatedly refunding. So what's happening here? Is that person just buying the games and not refunding? Steam doesn't seem to reliably show "Game refunded" anymore, so it's hard to tell, but the playtimes are all 10-20 minutes each. Very odd.
Am I missing something?
Hi! A bit of an unusual problem but as you can see in the title I deal with that.
I love learning about development and seeing others make games, even just silly little projects, but when it comes to actually doing it myself, I can't.
How do you guys find your motivation?
I tried coming up with game ideas I really like. It's not that I'm aiming for GTA6-level realism or animation-heavy stuff; I've watched so much game development content that I know the steps. I know I have to focus on small things, that I have to make games I don't really care about at first in order to learn. But again, when it comes to actually doing it, I can't.
It's so nice talking to others about my ideas (maybe that's why I'm here), but I don't know how to make myself actually work on something like this, and it's frustrating because I enjoy seeing other people do it.
To top it all off, I don't have any particular skill in this domain either: no animation, no drawing, no sound, no coding. It's just me and my lazy brainstorming brain.
Has anyone else experienced this? Can I fix it?
I’ve confirmed that the music used in my game triggers YouTube copyright claims, though it doesn't cause any issues with broadcasting or strikes. However, since this means the videos cannot be monetized, I'm worried about whether YouTubers will still be willing to play it. ㅠㅠ What do you guys think?
Since 2020, I have created 3D abstract strategy games inspired by the Seven Wonders. Let me know what you think!
hey all, i'm curious whether you use asset packs for your games and what your opinion on them is? the general consensus among gamers seems pretty negative, with such games seen as just asset flips.
but i'm just one guy, i can't model everything and do the programming too, so asset packs are super helpful. i guess if gamers hate your game just because it has bought assets, though, then it's not much help.
I feel like I'm getting whiplash trying to understand the advice posted here. Now I understand that different games require different recipes for success, but here are some examples of what I'm talking about.
While the answer to all of this seems to point to "Just try it out and see what happens," some of these options seem very risky. And for those not working day jobs in the gaming industry it's hard to get non-reddit or influencer advice. What are your thoughts?
Some more experiments integrating head tracking into my OpenGL engine, starting from the OpenCV face detection sample. I corrected some mathematical errors that were causing distortion in the 3D model and implemented filtering on the detected head position to reduce high-frequency noise.
Hi everyone, I’ve applied for the Associate QA Tester role at Rockstar Games. I’m a fresher and I’ve been teaching myself Python to have a more technical approach to testing. Since this is a major studio with very high standards, I’m looking for advice from anyone who has gone through their recruitment process or currently works in Game QA. I’m looking for insights on:

- The Test: What should I expect from the initial assessment? Does it involve logical reasoning, writing bug reports for video clips, or a basic coding/scripting test?
- QA Scenarios: What kind of "edge case" or "out-of-the-box" testing questions do they typically ask during the interview?
- Manual vs. Automation: Since I have some Python knowledge, how much weight do they give to scripting skills for an Associate (Level 1) role?
- Preparation: Are there specific game mechanics or technical terms (like LOD, collision, or memory leaks) that I should be very familiar with before the drive?

I really want to make the most of this opportunity. Any tips on what the recruiters look for in terms of a "tester's mindset" would be greatly appreciated! Thanks!
One thing I’ve been struggling with lately as a solo developer has nothing to do with art, writing, programming, or puzzle design. It’s just staying motivated long enough to keep going when progress starts to feel invisible.
When you’re working on a game by yourself, especially in your spare time after work, it can sometimes feel like you’re building something in a vacuum. You spend hours solving problems, making rooms, writing dialogue, fixing bugs, and trying to make the whole thing feel coherent… and then you post something online and it disappears into the void.
I think one of the weirdest parts of solo development is that the amount of effort going into the work and the amount of visible reaction from other people often have absolutely nothing to do with each other. You can spend three nights trying to get one scene to finally feel right, and from the outside it looks like nothing happened at all.
I’ve been trying to remind myself that making the game is still the point. Not the likes, not the comments, not whether a post gets traction. But I’d be lying if I said that part wasn’t hard sometimes, especially when you’re also trying to convince yourself that what you’re making is worth finishing.
I’m still working on mine, and I do want to see it through. I think I’m just learning that staying motivated as a solo dev has less to do with constant inspiration and more to do with stubbornness, routine, and occasionally forcing yourself to keep going even when the internet seems completely uninterested.
For those of you who have worked on games, art, music, or really any long creative project on your own — what actually helps you keep going when momentum drops off?
I've been playtesting indie games for a few months now and I keep running into the same issues across completely different games. Thought I'd share what I've noticed in case it's useful.
1. Animated main menus make a bigger difference than you think
A static main menu is the easiest thing to build but it immediately signals "unfinished" to the player. Even a simple particle system on a loop or a subtle floating element adds life to the first thing your player sees. It sets the tone before they've even pressed start.
2. If you support both controller and keyboard, your tutorial needs to reflect both
This came up more than I expected: games with full controller and keyboard support that only show one set of keybindings in the tutorial, or worse, don't update when you switch mid-game. If you support both, test both.
3. Don't ship broken levels even in demos
Unfinished levels are fine for internal testing but if you know a level is unstable, keep it back. Players will remember the broken experience more than anything else in your game. A shorter polished demo beats a longer broken one every time.
4. Put your game's name on the main menu
Sounds obvious but you'd be surprised. The main menu is the first thing players see and it should make the game immediately recognizable. Logo, title, something. Don't assume people remember what they downloaded.
5. Don't ignore major bugs because you think players won't notice
They will. And they'll remember. If you know something is broken, fix it before you push it out. Playtesting friends and family before releasing publicly is one of the best things you can do, fresh eyes catch things you've stopped seeing.
6. Too little content and too much content are both problems
Too little and players feel like there's nothing to do. Too much and they get overwhelmed and quit. Both kill retention. Finding that balance is hard but it's worth thinking about early.
7. Punishing mechanics need to be earned
I played a game where one mechanic served as both health and energy. Then a new block was introduced that instantly killed you regardless of how much health you had. It felt arbitrary and unfair. Difficulty is good, but players need to feel like failure was their fault, not the game's.
8. If a mechanic isn't fun to play, it doesn't matter how good it sounds
I've playtested mechanics that were genuinely interesting ideas on paper but after 20 minutes I was bored or frustrated. If you can't enjoy your own mechanic for a full session, your players won't either.
9. Always include a tutorial, no matter how simple the mechanic seems
What feels obvious to you after months of development is not obvious to someone playing for the first time. Every mechanic needs at least a brief introduction. No exceptions.
10. Wall of text tutorials are just as bad as no tutorial
If your tutorial is 3-4 sentences per mechanic, players will stop reading after the first one. Keep it short, show don't tell where possible, and introduce mechanics gradually through play rather than upfront.
If any of this resonates and you want a proper outside perspective on your game, I do paid playtesting sessions at https://wildduckdev.github.io $20 for a full recorded session and timestamped bug report delivered within 48 hours.
How are you, devs? I'm a truly passionate fan of indie games because, in my opinion, that's where the most fascinating ideas come from. Do you have any demo projects on Steam or other gaming platforms where I can test and give feedback? I've played every kind of game; I'm 35 and have been gaming since I was 4. I'd love some recommendations. Thank you and have fun!
I know that sevo is there, but it's not complete and doesn't look good when I try to use it. I'm a Rust learner and really want to build something open source that's fast and light, so what's your suggestion?
https://i.redd.it/91lw85i493sg1.gif
It fetches their SSH public keys from GitHub and encrypts using the age format. No PGP, no key exchange, no accounts to create.
nvlp encrypt secret.env --to alice -o secret.env.age
echo "secret" | nvlp encrypt --to alice
nvlp send secret.env --to alice (uploads as a private Gist)
If you have any feedback or a better idea to send the encrypted files (current version leverages gists) do let me know!
cargo install nvlp
I’ve spent about 7 years working in C++, and recently I had one of those career moments that genuinely changed how I think about systems programming.
A few months ago, I was hired to maintain a security/safety software stack used in the oil sector — reliability-critical deployment, edge devices, real operational consequences if things fail.
- video ingest
- decode
- AI inference on edge devices (NUCs)
- sending raw detections to a central hub
- per-client integration logic
- recording / streaming / frontend / tickets / operational views
The stack:
- a C++ engine
- a custom message-based H2-style multiplexed protocol
- a Go backend with hardcoded client logic
- React frontend
- almost no documentation
The original team was gone because of internal politics, and the main maintainer had only 20 days left before leaving for a PhD.
So I got:
20 days of rushed knowledge transfer, no docs, fragmented explanations, then immediate operational deployment.
That’s when reality hit.
The deployment appliance ran Docker on hardware we fully owned, and somehow even with everything preconfigured it was painful:
- device passthrough issues
- deployment complexity
- hard debugging in field conditions
Then deeper problems appeared:
- file descriptor leaks
- thread leaks
- CPU thrashing
- repeated soft freezes
- watchdog didn’t fire because the system technically stayed alive
- SSH could take nearly an hour to respond
- The system couldn’t reliably handle more than 5 streams.
- Load average on a 4-core machine: 78
Inside the engine:
- threads everywhere
- GPU decode -> CPU copy -> GPU recopy -> another copy again
- custom protocol impossible to reason about
- partial duct-taped business logic
- every classic systems footgun present at once
Honestly, it felt like inheriting a museum of undefined behavior.
The hardest part wasn’t technical.
It was internal politics: getting approval to urgently re-architect a sellable production product.
Team available:
- me
- one junior systems engineer
- one frontend dev
I’m effectively the only systems engineer there with strong Rust experience.
I decided not to port component by component. I burned it down and restarted.
The new engine core was built around:
- a single event loop
- one large future
- select!
- FuturesUnordered
That immediately gave a much cleaner architecture than the previous thread-heavy model.
We split everything into decoupled crates.
After years of C++ firefighting, the difference was honestly shocking.
No:
- UB paranoia
- race-condition anxiety
- linker hell
- sanitizer archaeology
- build-system warfare
Cargo + mdBook + rustdoc from day one changed the pace completely.
What changed technically
- handcrafted dataflow framework -> direct Rust domain modeling (pipelines + systems)
- custom protocol -> gRPC for control plane
- QUIC streams for data plane
- Go -> Rust (axum, tonic, sqlx, etc.)
- hardcoded per-client logic -> schema-based rule engine
- React -> GTK (GdkDmabufTextureBuilder helped a lot; fully zero-copy backed by VAAPI and VPP)
- Ubuntu + Docker removed -> replaced with a custom Nix-based system image, stripped down to management only (React kept where practical)
- OpenVINO backend wrapped cleanly from C/C++ into Rust
In exactly 50 days:
The whole engine core was rebuilt.
- Stable.
- Deployable.
- Production-demo ready.
One engineer learned Rust during the project and contributed solid FFI work because he already knew libav deeply. He owns the engine now!
(Yes, some C habits leaked into code, but reviews fixed that.)
- 40 streams on video wall
- ~50% CPU on a single core
- GPU utilization ~92% with OpenVINO
- no leaks
- no freezes
- no sanitizers
- no mysterious deadlocks
It just worked.
I’m not claiming C++ cannot achieve the same performance; it absolutely can. But the result / development cost ratio was not even close. For a small team, under pressure, with a limited budget, Rust felt like an order-of-magnitude difference.
I didn’t expect this, but after years in C++ I genuinely had an emotional reaction.
For the first time in years:
the language stopped being the battlefield. Creativity became the only real barrier.
This wasn’t my first Rust project. I’ve been using Rust for about 2 years, mainly in side projects.
But this is the first high-stakes critical infrastructure system I’ve built where I truly felt:
I got tired of my CI running 200+ tests when I changed one file. Tools like Nx and Bazel solve this but require buying into a whole framework.
So I built affected, a zero-config Rust CLI that:
```
$ affected list --base main --explain
3 affected package(s) (base: main, 2 files changed):
● core (directly changed: src/lib.rs)
● api (depends on: core)
● cli (depends on: api → core)
```
Features:
It's ~5k lines of Rust, 160+ tests, passes CI on Linux/macOS/Windows.
GitHub: https://github.com/Rani367/affected
Would love feedback, especially on edge cases with Cargo workspaces. What features would make this useful for your projects?
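For a sense of the core idea (this is my own illustrative sketch, not affected's actual code), computing the affected set is a reverse-dependency closure over the workspace graph: start from the directly changed packages and walk backwards through "depends on" edges.

```rust
use std::collections::{HashMap, HashSet};

/// Given a workspace dependency graph (package -> its dependencies)
/// and the set of directly changed packages, return every package
/// that is transitively affected. Hypothetical sketch, std-only.
fn affected_set(deps: &HashMap<&str, Vec<&str>>, changed: &[&str]) -> HashSet<String> {
    // Invert the graph: dependency -> packages that depend on it.
    let mut rdeps: HashMap<&str, Vec<&str>> = HashMap::new();
    for (&pkg, ds) in deps {
        for &d in ds {
            rdeps.entry(d).or_default().push(pkg);
        }
    }
    // Walk from the changed packages through reverse edges.
    let mut seen: HashSet<String> = changed.iter().map(|s| s.to_string()).collect();
    let mut queue: Vec<&str> = changed.to_vec();
    while let Some(pkg) = queue.pop() {
        for &dependent in rdeps.get(pkg).map(|v| v.as_slice()).unwrap_or(&[]) {
            if seen.insert(dependent.to_string()) {
                queue.push(dependent);
            }
        }
    }
    seen
}

fn main() {
    // Mirrors the example output above: cli -> api -> core.
    let mut deps = HashMap::new();
    deps.insert("core", vec![]);
    deps.insert("api", vec!["core"]);
    deps.insert("cli", vec!["api"]);
    let affected = affected_set(&deps, &["core"]);
    assert!(affected.contains("api") && affected.contains("cli"));
    println!("{} affected package(s)", affected.len()); // 3 affected package(s)
}
```

The interesting part in a real tool is mapping changed files to packages and handling Cargo-specific cases (path deps, build scripts); the closure itself stays this simple.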
Hey everyone,
I've spent the last 4 months building an open-source Rust protocol called VEX. It intercepts AI agent traffic, checks it against governance rules, and if approved, generates a cryptographically signed capsule proving it was allowed.
I wanted to move from just software isolation to absolute hardware-rooted trust. I just finished building the TPM 2.0 integration for Windows using CNG. It takes the hash of our approved AI intent, extends it into a PCR on the TPM, and generates a TPM Quote proving both the machine state and the action are untampered.
It works perfectly, but here's the problem: I develop exclusively on Windows (and WSL, before people start roasting me).
I'm looking for someone who understands Linux systems programming and tss-esapi (or a similar Rust TPM 2.0 binding) to build the identical PCR extension and Quote generation logic for our Linux target. We already have all the cryptographic abstractions and the Windows implementation running as a reference.
If you enjoy wrestling with tss-esapi and want to help build a physical safety brake for AI, I'd really appreciate the help. (https://github.com/provnai/vex)
Feel free to DM me or reply if you want to see the specific Windows TPM implementations first!
Hey r/rust!
I got tired of spinning up a Jupyter server just to quickly test a snippet or jot down notes alongside code. So I built Runbook — a desktop app where you can write notes in Markdown and run code right next to them, all offline, all local.
You paste a snippet, hit run, and the output shows up right below the cell. One caveat worth knowing upfront: it doesn't bundle any runtimes. It uses whatever you already have installed on your machine — rustc, python3, node, bun — so if a runtime isn't installed, that language just won't run. For most Rust folks that's fine, but it's not a zero-setup experience for everything.
Stack:
It's early — v0.1.0, macOS only for now. I'd love feedback from anyone who knows Rust, especially on how I'm handling code execution. I'm not fully confident that's done well.
Hey everyone, I just released v0.3.0 of Proxelar, my MITM proxy written in Rust. The big addition is Lua scripting: you can now write on_request / on_response hooks to modify, block, or mock traffic as it flows through the proxy.
A simple example:
```lua
function on_request(request)
  if string.find(request.url, "ads%.example%.com") then
    return { status = 403, headers = {}, body = "Blocked" }
  end
end
```

Or mock an API endpoint without touching your backend:

```lua
function on_request(request)
  if string.find(request.url, "/api/user/me") then
    return {
      status = 200,
      headers = { ["Content-Type"] = "application/json" },
      body = '{"id": 1, "name": "Test User"}',
    }
  end
end
```

Lua is embedded via mlua with vendored Lua 5.4, so there are no system dependencies: just `cargo install proxelar` and you're good.
Would love to hear feedback, especially on the scripting API design. The whole thing is #![forbid(unsafe_code)] if anyone's curious
Hi, only the foundation of the project is done. I would like your advice on continuing this project; it needs a lot of my time to mature. Happy to help!
Here are a few renders:
https://cdn.xinoxis.com/Video%20Project%204.mp4
https://cdn.xinoxis.com/lorenz_attractor.mp4
you can find the source code in github
Old school C programmer here. I just started taking up Rust. Feels like C with someone (borrow checker) constantly minding my ass. (At other times - traits, etc. - feels a bit like Java - without the syntax monstrosity.)
So far so good. But then it seems everyone (transitioning, if that's a word in this context) has some beef with the borrow checker. And confusion.
I do not get it. The confusion. Is it because I happen to be a C programmer and everyone else came from ... JavaScript? Or will something come back to bite me later?
I just wanted to share this tool I’ve been working on called PNGToSVG. As I assume you can already imagine, it converts PNG images into SVG vectors.
I originally wrote a small Python script to be used for a frontend job I had in 2019, sometimes we were required to work with SVG, and I just needed a quick way to convert images without uploading them to any remote service.
When I switched jobs, I took that code, uploaded it to GitHub and didn't think about it too much. 5 years later, someone ("Kartik Nayak" on GitHub) made the first Rust implementation of the tool. 3 months later someone else ("Salman Sali" on GitHub) improved the code by including parallelization.
This was in late 2024. Since then, I have learnt a bit of Rust, polished the tool, and published a couple of new releases, mainly focusing on ease of use and performance. Yesterday, I did the latest release (v0.6.1), which included an amazing new shiny programmer-art icon hahaha.
What in 2019 was a forgotten Python script on my hard drive is now a tool I actively use for my projects. It used to convert 64x64 or 128x128 icons to SVG in a couple of seconds; it can now convert 8000x8000 images in a fraction of that time. According to my quick benchmarks, the performance increase on my machine from the first Python version I shared in 2019 to the latest release is around 2580x.
And that's not even the best part: thanks to randomly deciding to share a quick script, I have been able to see very useful contributions to my code from the community, learn a completely new language, and build an easy-to-use tool I am actually willing to use myself.
I'd love to hear some constructive feedback. I am sure I can still improve some rough edges, as I still don't fully know Rust conventions, and I'd love to learn as much as possible while building something useful.
Hello Rustaceans
After months of development, I’m happy to share LazyChess, a fast and memory-efficient chess engine library written in Rust. My goal was to create a chess engine that fully supports the FIDE ruleset and works well in performance-critical applications. The library implements everything from castling, en passant, and pawn promotion to advanced features like FEN/PGN serialization, opening detection, UCI engine communication, and human move classification analysis.
Features:
- Full FIDE Rules: Implements castling, en passant, pawn promotion, and all draw conditions (including threefold repetition, insufficient material, etc.).
- FEN/PGN Support: Easily serialize chess positions and games with FEN and PGN.
- UCI Compatibility: Communicate with UCI-compatible engines like Stockfish for advanced game analysis.
- Opening Detection: Built-in ECO table for detecting openings, and support for custom opening books.
- Move History: Track full game history with undo/redo support.
- Move Analysis: Evaluate moves and classify them based on engine evaluation.
- Draw Detection: Handle special cases like the 50-move rule and insufficient material.
Installation:
I have published two versions to crates.io, you can install via cargo:
```
cargo add lazychess
```

Here's a simple example showing how you can use LazyChess to play a few moves and display the board:

```rust
use lazychess::Game;

fn main() {
    let mut game = Game::new();

    let moves = ["e2e4", "e7e5", "g1f3", "b8c6", "f1b5"];
    for mv in &moves {
        game.do_move(mv).expect("move should be legal");
    }

    println!("{}", game.display_board());
    println!("Opening : {:?}", game.opening_name());
    println!("Status  : {}", game.get_game_status_str());
    println!("FEN     : {}", game.get_fen());
    println!("PGN     : {}", game.get_pgn());
}
```
Feedback:
LazyChess is still in its early development stages, so any feedback or suggestions are highly appreciated! If you encounter bugs or have ideas for new features, feel free to visit the issues page or contribute directly.
Repository: https://github.com/OhMyDitzzy/LazyChess
Just built a single-threaded Rust runtime; it contains an executor, reactor, waker, and timers, and uses mio for OS-driven wakeups.
what do you think?
+ I built a toy executor last month and posted it here (and got roasted), if you want to take a look:
https://github.com/omaremadcc/Toy_async_executor
Check out my emulator: https://github.com/kaezrr/starpsx
been working on this for nearly a year and it's going pretty well
Hey, I've built my first public mini-project - varz - a convenience CLI for env var management.
Found myself constantly verifying whether I have certain env vars set and, if so, what their values are (I'm looking at you, AWS). My other frequent case is 'export =' and/or .zshrc changes, with the corresponding need to source the file or restart the terminal.
None of that is a big deal, of course, and I'm already accustomed to doing `printenv | rg {FOO}` or terminal shortcuts, but I decided to play with the idea and explore what else is possible, plus my hands were itching for some Rust.
For example, I've added persistence between sessions, without the need to update .zshrc/.zshenv, also integrated with the shell to avoid restarts.
Repo: https://github.com/oharlem/varz
Crate: https://crates.io/crates/varz
I'd appreciate some feedback on the overall approach, code quality, and the shell integration in particular.
Best
D
I started learning Go last week and so far so good. I decided to read the book "Introduction to Go" by Caleb Doxsey, just a 124 page book.
It's quite brief.
Which other book do you guys recommend for a beginner?
I am new to Go and wanted to practice writing basic tests for simple functions.
this is my attempt:
```go
func TestHash(t *testing.T) {
	text1 := "password123"
	text2 := "password333"

	hash1, err := utils.Hash(text1)
	if err != nil {
		t.Fatalf("Hash(%q) failed: %v", text1, err)
	}

	match, err := utils.CompareHash(hash1, text1)
	if err != nil {
		t.Fatalf("CompareHash failed: %v", err)
	}
	if !match {
		t.Errorf("Password matching failed for %s", text1)
	} else {
		t.Log("Password matched for", text1)
	}

	hash2, err := utils.Hash(text2)
	if err != nil {
		t.Fatalf("Hash(%q) failed: %v", text2, err)
	}

	match, err = utils.CompareHash(hash2, text1)
	if err != nil {
		t.Fatalf("CompareHash failed: %v", err)
	}
	if match {
		t.Errorf("Password match should have failed for %s", text2)
	}
}
```

should i write tests as i did above?
also I haven't read much on writing tests in Go, so could you point me to helpful docs?
| Driver | Implementation | DB Generation | Query 1 | Query 2 |
|---|---|---|---|---|
| mattn/go-sqlite3 | CGO | 50.06s | 5.36s | 3.24s |
| modernc.org/sqlite | Pure Go (ccgo/v4) | 1m 05.66s | 12.02s | 5.36s |
| ncruces/go-sqlite3 | Pure Go (wasm2go) | 50.87s | 20.59s | 6.08s |
| Driver | Implementation | DB Generation | Query 1 | Query 2 |
|---|---|---|---|---|
| mattn/go-sqlite3 | CGO | 8m 48.30s | 58.83s | 49.80s |
| modernc.org/sqlite | Pure Go (ccgo/v4) | 11m 18.63s | 2m 08.97s | 1m 18.12s |
| ncruces/go-sqlite3 | Pure Go (wasm2go) | 9m 10.56s | 3m 39.33s | 2m 42.00s |
```
module modernc.org/tpch

go 1.26.0

require (
	github.com/mattn/go-sqlite3 v1.14.37
	github.com/ncruces/go-sqlite3 v0.33.1
	modernc.org/mathutil v1.7.1
	modernc.org/sqlite v1.47.0
)

require (
	github.com/dustin/go-humanize v1.0.1 // indirect
	github.com/google/uuid v1.6.0 // indirect
	github.com/mattn/go-isatty v0.0.20 // indirect
	github.com/ncruces/go-sqlite3-wasm v1.0.3-0.20260328094950-9a3c1e23fa90 // indirect
	github.com/ncruces/go-strftime v1.0.0 // indirect
	github.com/ncruces/julianday v1.0.0 // indirect
	github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
	golang.org/x/sys v0.42.0 // indirect
	modernc.org/libc v1.70.0 // indirect
	modernc.org/memory v1.11.0 // indirect
)
```

TL;DR: Most AI-generated code fails because developers give LLMs a "blank canvas," leading to abstraction drift and spaghetti logic. AI-assisted engineering (spec-first, validation-heavy) requires a language that "boxes in" the AI. Go is that box. Its strict package boundaries, lack of "magic" meta-programming, and near-instant compilation create a structural GPS that forces AI agents to write explicit, predictable, and high-performance code.
There is a growing realization among developers using AI agents like Cursor, Windsurf, or GitHub Copilot: the choice of programming language is no longer just about runtime performance or ecosystem. It is now about **LLM Steering.**
During the development of my recent projects, I’ve leaned heavily into **AI-assisted engineering**. I want to make a clear distinction here: this is not "vibe coding." To me, "vibing" is just going with whatever the AI suggests—a passive approach that often leads to technical debt and architectural drift.
**AI-assisted engineering** is a deliberate, high-rigor cycle:
In this workflow, Go is structurally unique. It doesn't just run well; it "boxes in" the AI during that final implementation phase, preventing the hallucination-filled "spaghetti" that often plagues AI-generated code in more flexible languages.
---
### 1. The "GPS" Effect: Forcing Explicit Intent
The greatest weakness of LLMs is **abstraction drift**. In languages with deep inheritance or highly flexible functional patterns (like TypeScript or Python), an AI often loses the architectural thread, suggesting three different ways to solve the same problem.
Go solves this by being **intentionally limited**:
* **Package Boundaries:** Go’s strict folder-to-package mapping acts as a physical guardrail. The LLM is structurally discouraged from creating complex, circular dependencies.
* **No "Magic":** Because Go lacks hidden meta-programming, complex decorators, or deep class hierarchies, the AI is forced to write **explicit code**.
> **My Opinion:** I believe that for a probabilistic model like an LLM, "explicit" is synonymous with "predictable." By narrowing the solution space to a few idiomatic paths, Go acts as a structural GPS. It doesn't let the AI get "too clever," which is usually when logic begins to break down.
---
### 2. The OODA Loop: Validating Theory at Scale
A core part of my engineering process is using AI to validate a theory in code before it ever touches the main repository. Go’s near-instant compilation makes this **Observe-Orient-Decide-Act (OODA)** loop incredibly tight.
* **Instant Feedback:** If a validation cycle takes 30 seconds (common in C++ or heavy Java apps), the momentum of the engineering process dies. Go allows me to test a theoretical concurrency pattern or a pointer-safety fix in milliseconds.
* **Tooling Synergy:** Because `go fmt`, `go vet`, and `go test -race` are standard and built-in, the AI can generate and run validation tests that match production standards immediately.
---
### 3. Logical Cross-Pollination (The C/C++ Factor)
I’ve noticed anecdotally that LLMs seem to leverage their massive training data in C and C++ to improve their Go logic. While the syntax differs, the **underlying systems logic**—concurrency patterns, pointer safety, and memory alignment—is highly transferable.
* **The Logic Transfer:** Algorithmic patterns translate beautifully from C++ logic into Go implementation.
* **The "Contamination" Risk (Criticism):** You must be the "Adult in the Room." Because Go looks like the C-family, LLMs will occasionally try to write "Go-flavored C," attempting manual memory management or pointer arithmetic that fights Go’s garbage collector. This is why the **Review** and **Whiteboarding** stages of my process are non-negotiable.
---
### Proof of Concept: High-Performance Infrastructure
Recently, I implemented a high-concurrency storage engine with Snapshot Isolation (SI). The AI didn't just "vibe" out the code; we went through a rigorous spec and validation phase for the transaction logic.
Because Go handles concurrency through core language constructs (channels and `select`), the AI-generated implementation of that spec was structurally sound from the first draft. In more permissive languages, the AI might have suggested five different async libraries or complex mutex wrappers; in Go, it just followed the spec into a simple `select` block.
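For illustration (a toy sketch, not the actual engine code), the kind of select-based coordination described above looks like this: a worker goroutine whose whole lifecycle is one `select` block, with no library choices to make.

```go
package main

import "fmt"

// worker doubles each job it receives until done is closed.
// The select block is the entire coordination story: jobs in,
// results out, shutdown via channel close.
func worker(jobs <-chan int, done <-chan struct{}, results chan<- int) {
	for {
		select {
		case j := <-jobs:
			results <- j * 2 // stand-in for real work
		case <-done:
			return
		}
	}
}

// sumDoubled feeds inputs through the worker and sums the results.
func sumDoubled(inputs []int) int {
	jobs := make(chan int)
	done := make(chan struct{})
	results := make(chan int)
	go worker(jobs, done, results)
	defer close(done)

	sum := 0
	for _, v := range inputs {
		jobs <- v
		sum += <-results
	}
	return sum
}

func main() {
	fmt.Println(sumDoubled([]int{1, 2, 3})) // prints 12
}
```

There is essentially one idiomatic way to write this, which is exactly the "narrow solution space" argument.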
**The result?** A system hitting sub-millisecond P50 latencies for complex search and retrieval tasks. The "box" didn't limit the performance—it ensured the AI built it correctly according to the plan.
---
### Conclusion: Boxes, Not Blank Canvases
If you’re struggling with AI-assisted development, stop giving your agents a blank canvas. A blank canvas is where hallucinations happen. Give them a **box**.
Go is that box. It isn’t opinionated in a way that restricts your freedom, but it is foundational in a way that forces the AI to implement your validated vision with rigor. When the language enforces the boundaries, the engineer is finally free to focus on the high-level architecture and the deep planning that "vibe coding" often skips.
Is Go the perfect language? No. But in my opinion, for a rigorous AI-assisted engineering workflow, it’s the most reliable one we have. Thoughts?
submitted by /u/atkrad
Apart from the usual go get <link> or brew install, I wanted to be able to pip install Go binaries.
Is there a standard Go-approved way?
I stumbled upon a Simon Willison repo: https://github.com/simonw/go-to-wheel.git.
Not sure if it is something new or just a wrapper over the binaries.
Hi,
I'm working on a terminal-based explorer for Azure Service Bus built with Go and BubbleTea/Lipgloss libraries. It's still in active development, but fully usable.
Features :
Authentication
Resource Browsing
Send/Resend
submitted by /u/quirky-lettuce-9
hey gophers, made something you might like if you code with AI assistants
it's called ai-setup. run npx ai-setup in your Go project and it detects your stack and generates all the AI config files: .cursorrules, claude.md, etc. it knows it's a Go project and generates rules specific to Go conventions, modules, idiomatic patterns, and so on
no more manually writing these context files at the start of every project
just hit 150 stars on github, 90 PRs merged. totally open source
would love go devs to give feedback or contribute
We just released liter-llm: https://github.com/kreuzberg-dev/liter-llm
The concept is similar to LiteLLM: one interface for 142 AI providers. The difference is the foundation: a compiled Rust core with native bindings for Python, TypeScript/Node.js, WASM, Go, Java, C#, Ruby, Elixir, PHP, and C. There's no interpreter, PyPI install hooks, or post-install scripts in the critical path. The attack vector that hit LiteLLM this week is structurally not possible here.
In liter-llm, API keys are stored as SecretString (zeroed on drop, redacted in debug output). The middleware stack is composable and zero-overhead when disabled. Provider coverage is the same as LiteLLM. Caching is powered by OpenDAL (40+ backends: Redis, S3, GCS, Azure Blob, PostgreSQL, SQLite, and more). Cost calculation uses an embedded pricing registry derived from the same source as LiteLLM, and streaming supports both SSE and AWS EventStream binary framing.
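The SecretString behavior described above (redacted in Debug output, zeroed on drop) is a common Rust pattern; here is a minimal std-only sketch of the idea, not liter-llm's actual implementation (real-world versions, e.g. the zeroize crate, also prevent the compiler from optimizing the zeroing away):

```rust
use std::fmt;

/// Hypothetical simplified secret holder: never appears in Debug
/// output, and its bytes are overwritten before being freed.
struct SecretString(String);

impl fmt::Debug for SecretString {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Redact instead of printing the inner value.
        f.write_str("SecretString(REDACTED)")
    }
}

impl Drop for SecretString {
    fn drop(&mut self) {
        // Overwrite the bytes before the allocation is released.
        // NUL bytes keep the String valid UTF-8.
        unsafe { self.0.as_bytes_mut() }.fill(0);
    }
}

fn main() {
    let key = SecretString("sk-very-secret".to_string());
    // Debug-printing a config struct containing the key won't leak it:
    assert_eq!(format!("{:?}", key), "SecretString(REDACTED)");
    println!("{:?}", key); // prints SecretString(REDACTED)
}
```

The point is that accidental logging (`{:?}` on a config struct) can't leak the key, and the plaintext doesn't linger in freed memory.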
One thing to be clear about: liter-llm is a client library, not a proxy. No admin dashboard, no virtual API keys, no team management. For Python users looking for an alternative right now, it's a drop-in in terms of provider coverage. For everyone else, you probably haven't had something like this before. And of course, full credit and thank you to LiteLLM for the provider configurations we derived from their work.