Thursday, April 16, 2026
| Summary | ⛅️ Partly cloudy throughout the day. |
|---|---|
| Temperature Range | 18°C to 27°C (64°F to 80°F) |
| Feels Like | Low: 58°F / High: 82°F |
| Humidity | 39% |
| Wind | 16 km/h (10 mph), Direction: 88° |
| Precipitation | Probability: 0%, Type: No precipitation expected |
| Sunrise / Sunset | 🌅 06:15 AM / 🌇 07:20 PM |
| Moon Phase | Waning Crescent (96%) |
| Cloud Cover | 44% |
| Pressure | 1009.72 hPa |
| Dew Point | 45.69°F |
| Visibility | 5.78 miles |
https://github.com/orneryd/NornicDB/releases/tag/v1.0.41
584 stars and counting. Neo4j-driver compatible, MIT licensed. Enjoy!
A few weeks ago I posted here about hedge, a Go library for adaptive hedged requests. Got some good discussion, kept working on it.
Then I pointed it at an LLM inference server and it completely fell apart. Turns out these servers flush `200 OK` immediately then take 50-200ms before the first token actually shows up. My library was using time-to-headers as the latency signal, so it saw ~1ms for every single request. The sketch thought everything was fast, the hedge threshold collapsed to basically nothing, and it started firing backup requests on literally everything. 100% overhead. Totally useless.
Interesting learning though. The fix was to measure time-to-first-readable-byte instead of headers. After that the sketch started tracking real latency again.
The difference (from 50k requests)
Wrong signal: 100% overhead, barely helps p99. Right signal: 17% overhead, actually catches stragglers.
Interestingly, this isn't even a hedging-specific thing. Anything that makes adaptive decisions based on header timing is going to break the same way on streaming workloads. Envoy, custom retry logic, whatever - if you're keying off headers and your server flushes early, your signal is garbage.
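To make the failure mode concrete, here's a minimal sketch of why the signal choice matters (illustrative Rust, not the actual hedge library, which is Go; the `LatencySketch` type and the plain p95-over-a-vector are stand-ins for the library's real streaming sketch). A hedge threshold computed from ~1ms header timings collapses to nothing, while one computed from time-to-first-byte lands near the real tail:

```rust
use std::time::Duration;

// Illustrative stand-in for an adaptive latency sketch. The hedge
// threshold is the observed p95: requests still pending past it get a
// backup request fired.
struct LatencySketch {
    samples: Vec<Duration>,
}

impl LatencySketch {
    fn new() -> Self {
        Self { samples: Vec::new() }
    }

    fn record(&mut self, sample: Duration) {
        self.samples.push(sample);
    }

    /// p95 of all recorded samples, used as the hedge-fire threshold.
    fn hedge_threshold(&mut self) -> Duration {
        self.samples.sort();
        let idx = (self.samples.len() * 95) / 100;
        self.samples[idx.min(self.samples.len() - 1)]
    }
}

fn main() {
    // Wrong signal: time-to-headers. A streaming server flushes `200 OK`
    // immediately, so every sample is ~1ms and the threshold collapses;
    // every in-flight request looks "slow" and gets a backup fired.
    let mut headers = LatencySketch::new();
    for _ in 0..1000 {
        headers.record(Duration::from_millis(1));
    }
    assert!(headers.hedge_threshold() <= Duration::from_millis(1));

    // Right signal: time-to-first-readable-byte, which includes the
    // 50-200ms before the first token arrives. The threshold now sits
    // near the tail of real latency, so only true stragglers hedge.
    let mut first_byte = LatencySketch::new();
    for i in 0..1000u64 {
        first_byte.record(Duration::from_millis(50 + (i % 150)));
    }
    assert!(first_byte.hedge_threshold() >= Duration::from_millis(150));
}
```

Same decision rule, two signals, two orders of magnitude apart in the resulting threshold, which is exactly the 100%-overhead-vs-17%-overhead difference described above.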
https://github.com/bhope/hedge
Has anyone else hit this with SSE or gRPC streams? What do you actually measure when headers show up before the work is done?
I recently learned about Go generics and I thought a good application of them might be to write something like this. It's a little more ergonomic coming from a JavaScript background
```go
type Slice[T any] []T

func (s Slice[T]) Len() int { return len(s) }
func (s Slice[T]) Cap() int { return cap(s) }
```

Any reason not to use this?
edit: formatting
Hey r/golang, I’ve been working on createos-cli, a deployment and infrastructure CLI written in Go and shipped as a single binary with no external runtime or daemon.
The problem
A lot of deployment workflows still break the terminal loop.
You write code locally, test locally, maybe automate parts of the workflow, then the deploy step moves into a browser dashboard. Fine for occasional use, less ideal when you want something scriptable, repeatable, or usable inside CI.
I wanted a CLI that keeps deployment where the work is already happening.
What it does
- Environment variable sync (`env pull` / `env push`)

Design choices

createos deploy keeps the session open while logs stream, then exits with the live URL once the deploy completes.

Quick example

```shell
brew install createos
createos login
createos init
createos deploy
```

Source: github.com/NodeOps-app/createos-cli
Docs: https://nodeops.network/createos/docs/CLI/Overview
Feedback and contributions welcome, especially interested in how others have handled auth flows, streaming logs, or CI-friendly UX in Go CLIs.
I'm looking for some good audiobooks to listen to, to help force my brain to remember definitions, because I've previously had issues in interviews where I lacked the words to explain things.
So if you know any, please comment with a link.
Preferred: Learning Go: An Idiomatic Approach
Hi all,
I’d like to share a Go CLI I’ve been working on, and get feedback on the design and implementation. The goal is to make it safer for AI coding agents to access services like GitHub, Stripe, or databases without pasting long‑lived API keys into .env files or chat windows.
Instead of handing raw credentials to the agent, you declare what a project needs in a .env.kontext file:
```
GITHUB_TOKEN={{kontext:github}}
STRIPE_KEY={{kontext:stripe}}
LINEAR_TOKEN={{kontext:linear}}
```

Then you run:
```shell
kontext start --agent claude
```

The CLI (written in Go) authenticates the user via OIDC, talks to a backend over ConnectRPC, and for each placeholder either does an RFC 8693 token exchange for a short‑lived token or injects a static key from the backend for the duration of the session. Secrets stay in memory on the developer machine and are not written to disk. Each tool call from the agent is logged with who ran it, what it did, and the result.
Kontext is intended to behave like a Security Token Service for agents: users authenticate once, and the backend issues short‑lived, scoped credentials on demand. Long‑lived OAuth refresh tokens and API keys stay on the server side; the CLI stores its own auth material in the system keyring.
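For readers unfamiliar with RFC 8693, the exchange boils down to a form-encoded POST with a fixed grant type. A minimal sketch of the parameters such a CLI would send (the parameter names and URNs come from the RFC itself; the token values, the audience, and the `token_exchange_params` helper are placeholders, and kontext's actual ConnectRPC wire format may well differ):

```rust
/// Build the form parameters for an RFC 8693 OAuth 2.0 token exchange.
/// Parameter names are defined by the RFC; values here are placeholders.
fn token_exchange_params(subject_token: &str, audience: &str) -> Vec<(String, String)> {
    vec![
        // Fixed grant type identifying a token-exchange request.
        ("grant_type".into(),
         "urn:ietf:params:oauth:grant-type:token-exchange".into()),
        // The token representing the already-authenticated user.
        ("subject_token".into(), subject_token.into()),
        ("subject_token_type".into(),
         "urn:ietf:params:oauth:token-type:access_token".into()),
        // Ask for a short-lived access token scoped to one service.
        ("requested_token_type".into(),
         "urn:ietf:params:oauth:token-type:access_token".into()),
        ("audience".into(), audience.into()),
    ]
}

fn main() {
    let params = token_exchange_params("user-oidc-token", "github");
    // The grant type is what distinguishes this from a normal OAuth flow.
    assert!(params.iter().any(|(k, v)| {
        k == "grant_type" && v.contains("token-exchange")
    }));
}
```

The appeal of the pattern is visible in the parameters: the long-lived credential never appears, only the caller's own subject token plus an audience naming the one service the short-lived result should work against.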
Current status: early but usable; we are using it with Claude Code and iterating on the policy/audit pieces. I’d describe it as “experimental, not production‑ready” at this point.
Install (macOS):

```shell
brew install kontext-dev/tap/kontext
```

Repo: https://github.com/kontext-security/kontext-cli
Site: https://kontext.security
I’m mainly looking for feedback on the Go side (CLI structure, error handling, keyring usage, RPC/client layout), but comments on the overall approach are also welcome.
Built a drop-in replacement for Linux crontab with a proper web interface, high availability, task dependencies, retry policies, and notification support (Slack/Email/Webhook). Written in Go, MIT licensed, easy to deploy via Docker.
Hi r/rust, I wanted to share a project I've been working on for the last couple of months. This is a very early release of my AWS service emulator, featuring emulation of SQS. https://github.com/SethPyle376/hiraeth
I've been a user of LocalStack for years. It was great for integration tests, but recent changes around pricing/licensing have had me looking for alternatives. Not satisfied with what I found, I thought it might be fun to roll my own :)
This is my first medium-ish size rust project, and first public rust project. Any feedback/advice is of course welcome and appreciated.
Some cool things I'm proud of:
They claim to be a "boutique rust recruiting shop", they're full of shit. Multiple recruiters have wasted my time with complete bullshit, they have no idea how technology works or even what makes Rust, Rust. If a recruiter from Lawrence Harvey reaches out, beware, and don't bother.
I'm making some CLI tools at my job in Rust to support our dev work. I have a robust CI/CD setup that generates Linux & Windows binaries, but for macOS I run a script on my local machine to build and publish. This is because of issues with needing the Apple SDK in my CI environment to create Apple binaries.
I'm wondering if there are any good solutions folks know of that would let me automate this away. Thanks.
Hey everyone!
Whenever my small VPS was hit by L7 HTTP botnets or simple DDoS attacks, traditional tools like Fail2ban + iptables would actually make things worse. The sheer overhead of the Linux kernel allocating sk_buff memory for 100,000 packets per second created an Interrupt Storm that crashed my databases and locked me out of SSH.
So, I spent some time building CrabShield — a hybrid firewall written entirely in Rust.
How it works: it uses an asynchronous Tokio daemon in user space to instantly analyze Nginx/Traefik logs (detecting 404 floods, brute-forcers, scrapers). But instead of adding iptables rules, it dynamically updates an eBPF map. The actual penalty (XDP_DROP) happens natively at the network interface card (NIC) driver level.
The result? The malicious packets are dropped before the heavy Linux TCP/IP stack even knows they exist. The CPU stays under 5%, and Nginx never wakes up.
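The log-analysis half of that architecture can be sketched in plain Rust. This hypothetical `FloodDetector` (not CrabShield's actual code; the type, threshold, and log shape are illustrative) shows the decision that would feed the eBPF map: once an IP crosses a 404 threshold, its address gets inserted into the map so XDP drops its packets at the driver:

```rust
use std::collections::HashMap;

/// Hypothetical sketch of the user-space detection side: count 404
/// responses per client IP and flag floods that cross a threshold.
struct FloodDetector {
    threshold: u32,
    counts: HashMap<String, u32>,
}

impl FloodDetector {
    fn new(threshold: u32) -> Self {
        Self { threshold, counts: HashMap::new() }
    }

    /// Feed one parsed access-log entry (client IP + HTTP status).
    /// Returns Some(ip) exactly once, when that IP crosses the 404
    /// threshold — the point where a tool like CrabShield would insert
    /// the address into its eBPF map for XDP_DROP.
    fn observe(&mut self, ip: &str, status: u16) -> Option<String> {
        if status != 404 {
            return None;
        }
        let count = self.counts.entry(ip.to_string()).or_insert(0);
        *count += 1;
        (*count == self.threshold).then(|| ip.to_string())
    }
}

fn main() {
    let mut detector = FloodDetector::new(3);
    assert_eq!(detector.observe("10.0.0.9", 200), None);
    detector.observe("10.0.0.9", 404);
    detector.observe("10.0.0.9", 404);
    // Third 404 crosses the threshold: ban decision fires once.
    assert_eq!(detector.observe("10.0.0.9", 404), Some("10.0.0.9".into()));
}
```

The split is the whole point of the design: this cheap bookkeeping runs in user space at log-line rates, while the per-packet hot path stays entirely in the kernel's XDP hook.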
I just open-sourced it, put together proper documentation on it, and added cross-compilation support so you can just drop a static binary on your Linux box (x86_64 or ARM) and be protected.
Check out the repo and the architecture here: https://github.com/aleksgrim/crab-shield
Would love to hear your feedback, issues, or code-review if anyone is into eBPF!
https://github.com/rdaum/micromeasure
This kind of grew over time inside of my main project, as I wanted reporting on (and tracking drift on etc) Linux hwcounter (PMU counters) metrics for various small operations in microbenches, and criterion wasn't much use for me here.
(There are, e.g., plugins for criterion that add Linux perf counter output, but because of criterion's architecture they only output one metric/counter at a time, and I wanted a bunch of stats.)
This thing measures the usual throughput/latency but also branch misses, cache misses, branches per operation, number of instructions/operations, etc.
screenshot of example output from `cargo bench`
Looking at this screenshot, there are also additional hwcounter stats missing that I probably need to add, but I'll get to them.
(And yes, there's some "LLM-generated" code here -- like almost all projects these days -- but tightly reviewed, focused, tested, and hand-crafted, and the original version was not agent-written. This comes from several months of prior work, which I used an agent to massage.)
In any case, the crate could use more eyes, so if it's useful to you, feel free to use and/or pitch-in. Apache licensed.
I'm trying to make a game with rust, and I want to see how much better it runs in release compared to dev, but I got a bunch of debug code here and there like print statements and such, is there a way to like tell it to skip certain code when building in release? Because otherwise I have to delete it and then put it back
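One standard mechanism for exactly this (assuming "dev" means an unoptimized cargo profile) is Rust's `debug_assertions` cfg flag, which is on in dev builds and off in `--release` by default, so gated code vanishes from release binaries without being deleted from the source:

```rust
/// True in dev (debug-assertions-enabled) builds, false in release.
fn is_debug_build() -> bool {
    cfg!(debug_assertions)
}

fn main() {
    // Attribute form: this statement is compiled out of release builds
    // entirely, so debug prints can stay in the source.
    #[cfg(debug_assertions)]
    println!("debug: game starting");

    // Macro form: debug_assert! is a no-op in release builds.
    debug_assert!(2 + 2 == 4, "math broke");

    // Runtime-branch form, for when both paths must still compile.
    if is_debug_build() {
        println!("running a dev build");
    } else {
        println!("running a release build");
    }
}
```

For debug tooling that should be toggleable independently of the profile, a custom cargo feature (e.g. a hypothetical `debug-tools` feature gated with `#[cfg(feature = "debug-tools")]`) works the same way.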
ChronoGrapher is a Workflow Orchestration / Job Scheduling engine written in Rust, focused on:
It operates in a similar problem space to Temporal, Apache Airflow, and Celery, but targets a different trade-off point:
"Lower complexity than full orchestration platforms, with more structure than lightweight schedulers".
From working with existing tools, a few recurring issues stand out:
ChronoGrapher is an attempt to explore a design that addresses these without overcommitting to one extreme.
One of the features ChronoGrapher introduces is the VirtualClock, a fully controllable time source where time does not advance unless explicitly instructed.
This allows for:
A simulation environment (“Simulacrum”, WIP) can also be paired to allow debugging and replay of workflow behavior.
Instead of a fixed scheduler, the system is split into interchangeable components:
Each component is defined via traits and can be replaced independently. Tasks are composed of the following:
Most tools fall into two categories:
ChronoGrapher aims to sit between these, working as an embedded scheduler initially but can be later extended towards more complex/distributed setups.
The current implementation is based on Tokio (though later planned to be runtime agnostic), with focus on predictable scaling under load.
The Preliminary Benchmark measures throughput under increasing load by adding batches of 1k Tasks/sec, where each Task is executing every 2ms.
The X axis are the batches added (previous batches stack on top) whereas the Y axis measures the Tasks executed per second.
The colors for each library benchmarked (individually) are as follows:
- tokio_schedule
- tokio_cron_scheduler

However, these numbers should be treated as early results.
Note: The proc-macro system is currently a work in progress. The syntax below reflects the intended API and is not yet finalized.
A minimal example of defining and scheduling a periodic task:
```rust
use chronographer::prelude::*;
use thiserror::Error;

#[derive(Error, Debug, PartialEq, Eq)]
pub enum MyErrors {
    // ...
}

#[task(schedule = interval(4s))]
async fn HelloWorldTask(ctx: &TaskContext) -> Result<(), MyErrors> {
    println!("Hello World");
    Ok(())
}

#[chronographer::main]
async fn main(scheduler: DefaultLiveScheduler<MyErrors>) {
    let task = HelloWorldTask::instance();
    scheduler.schedule(&task).await;
}
```

Under the hood the macro translates this to the Base API, which is significantly more verbose. HelloWorldTask is intentionally written in PascalCase, as it maps to a generated struct rather than a conventional Rust function.
The proc-macro layer is designed to provide a more ergonomic interface over the core primitives, including support such as:
Additionally it should be noted, the core API remains fully usable without macros (the proc-macro API is for ergonomics and what most interact with).
The current focus is stabilizing the core abstractions before an initial alpha release (version 0.0.1a), once the alpha version is out the core will be expanded upon until a certain point.
In parallel various extensions and language bindings (for both extensions and the core) will be worked on.
Currently we are a small team (2-3 people), and thus I'm looking for contributors interested in the following:
It's recommended to start with issues tagged with the label Difficulty: Easy and progressively move to more challenging and valuable issues.
If this sounds interesting, I’d appreciate feedback or collaboration. I can share more detailed design docs / internals if needed.
Feel free to open issues for additional explanations, bug reports, feature requests, etc.
For more information visit: https://github.com/GitBrincie212/ChronoGrapher
Hey everyone,
First want to say thank you so much for all the support from my first post announcing the project, the response has been overwhelming and I appreciate everyone who left feedback and tried it out!
Few updates I want to share since the last post:
- jsongrep now supports YAML, TOML, JSONL/NDJSON, CBOR, and MessagePack out of the box. (See #24)
- You can now try jsongrep queries without having to install first 🥳: https://micahkepe.com/jsongrep/playground
- jsongrep is also now in Homebrew, Scoop, Winget, Nix, and more!

Also wanted to shout out crowley, a fork of jsongrep that supports streaming, which is super cool!
As always, feedback and contributions are welcome! Though jsongrep is primarily a CLI tool, I am still working on trying to make the library as ergonomic as possible so that it can be used in other Rust projects, as well as continuing to add more features!
Thanks y'all!
Hello, I am looking for safe bindings for cusolver and cublas.
I know crate “cudarc” has very minimal / mostly incomplete bindings to cusolver.
Recently I found a crate “oxicuda” but it appears to be very new + mostly ai generated slop.
Does anyone have any suggestions?
Wanted to share a GitComet update we’ve been working on for a while :)
GitComet is our attempt at building a fast, local-first, open source Git GUI that still feels good to use on large repos. A lot of this release came directly from feedback, so this one is very much shaped by people using the app.
Stack: Rust workspace, gix for Git internals (certain operations still use the git CLI), a gpui community edition fork for the desktop UI, smol for async work. We keep obsessing over performance, and this update pushed that further with faster sidebar loading and better shortcut support.
A few things that shipped in recent releases:
Whether you are new to GitComet or already familiar with it, we would like to hear your feedback!
Code: https://github.com/Auto-Explore/GitComet
Discord: https://discord.com/invite/2ufDGP8RnA
I would like to share with you a project that I've been cooking for some time, a cosy place for devs to hang out :)
https://github.com/mpiorowski/late-sh
Stack: Rust workspace with 4 crates, russh for the SSH server, ratatui for the TUI, axum for the HTTP/WS side, Postgres, testcontainers for integration tests.
License is FSL-1.1-MIT (source-available now, flips to MIT after 2 years), wanted to be upfront about that since it's not classic OSS.
```shell
ssh late.sh
```

That's all. No passwords, no OAuth, no accounts; your SSH key is your identity.
Connect, chat, listen to some vibes and play some GAMES! Right now supporting: 2048, tetris, sudoku, nonograms, minesweeper, solitaire. Leaderboards, badges, streaks, everything with sweet ASCII ;) Multiplayer games coming after! Poker, chess, so much cool stuff :)
Imagine sitting at the blackjack table for a few minutes between your coding sessions, lofi music in the background, chat with people all around the globe, and just throw some chips....
A few things I learned building this in Rust:
- russh render loop backpressure is real, handle.data needs a short timeout (50ms) or a slow client will block your whole render task. Took me a while to figure out why some sessions felt laggy for everyone.
- since SSH doesn't allow streaming music, I had to come up with a solution: paired-client WS state sync (a browser/CLI client controls audio for the TUI), a token-keyed registry with mpsc channels into the app, and the TUI sends control events back over the same WS. Keeping the two sides from drifting was harder than I expected.
- ratatui + a 15fps render loop is shockingly comfortable to work with once you stop fighting it. AND I WANTED A VISUALIZER :D
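The token-keyed registry mentioned in the second point can be sketched roughly like this (a simplified, hypothetical version of the idea using std channels; the real project uses tokio mpsc inside axum WS handlers, and `PairingRegistry`/`ControlEvent` are illustrative names):

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Receiver, Sender};

/// Events the paired browser/CLI audio client sends to the TUI session.
#[derive(Debug, PartialEq)]
enum ControlEvent {
    Play(String),
    Pause,
}

/// Token-keyed registry pairing an audio client with a TUI session.
struct PairingRegistry {
    pairs: HashMap<String, Sender<ControlEvent>>,
}

impl PairingRegistry {
    fn new() -> Self {
        Self { pairs: HashMap::new() }
    }

    /// TUI side: register under the pairing token, keep the receiver
    /// and poll it from the render loop.
    fn register(&mut self, token: &str) -> Receiver<ControlEvent> {
        let (tx, rx) = channel();
        self.pairs.insert(token.to_string(), tx);
        rx
    }

    /// WS side: route an event from the paired client to its TUI
    /// session; false if no session holds that token.
    fn send(&self, token: &str, event: ControlEvent) -> bool {
        self.pairs
            .get(token)
            .map(|tx| tx.send(event).is_ok())
            .unwrap_or(false)
    }
}

fn main() {
    let mut registry = PairingRegistry::new();
    let rx = registry.register("token-abc");
    // Browser client sends a control event; the server routes it by token.
    assert!(registry.send("token-abc", ControlEvent::Play("lofi".into())));
    assert_eq!(rx.recv().unwrap(), ControlEvent::Play("lofi".into()));
    // Unknown token: no paired TUI session to deliver to.
    assert!(!registry.send("token-xyz", ControlEvent::Pause));
}
```

Keying the channel map on an opaque pairing token is what lets two transports (WS on one side, SSH on the other) meet in the middle without sharing any connection state.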
How to listen to music?
Trickier than you'd expect ;p. As I've mentioned, SSH doesn't allow streaming music, so here are your options:
(did I mention you can control the music from within the app)
What's more?
A landing page: https://late.sh
Code (source-available, FSL-1.1-MIT): https://github.com/mpiorowski/late-sh
Looking for contributors! Jump, chill and take a break. Have a good day everyone :)
Hello,
I know the title is a bit vague, but I could really use some help.
First of all, I’ve been coding for fun as a hobby for about 6 years. I started with Python and then branched out into other areas. I’ve done some LeetCode, learned the basics of Go and Rust, and even tried Vue.js.
Throughout all these experiences, I’ve been consistently disappointed by one thing: the UI.
Aside from the web, every library I’ve tried feels the same. The logic isn't truly separated from the UI. It’s difficult to visualize how the interface looks just by reading code like object.padding(10).color("blue").
Web development seemed like the solution at first: a clear structure in HTML, styling in CSS, and then the logic. However, I find JS horrible to use. There are a hundred ways to make a simple counter button and a hundred frameworks that all work differently. With many of them, the "clear structure" disappears, and you end up with messy code where the UI and logic are merged.
Because of this, I’ve spent the last 2 years avoiding UI entirely to build CLI tools. But I can't ignore the fact that I want to build UIs, and I need them for many of my project ideas.
I started learning Rust a month ago (I wanted to try something lower-level), and to help me progress, I want to build some basic apps. After some research, I discovered declarative UI libraries. If they are as good as they seem, it could be a game-changer for me.
So, in your opinion, what is the best declarative UI library?
I have a few criteria and questions:
I use VSCode for development with the rust analyzer plugin.
Debugging for me has been near impossible with that setup. For whatever reason, the debugger is too slow, and when it's not, it still doesn't show values for complex objects. I'm aware other people struggle with it too.
I've been enjoying developing in Rust, so this has been my solution to the problem.
This has helped me a lot. Hopefully, it can help someone else too.
I have spent almost one year polishing and refactoring this project; most of the code is written by hand. It increased from 500 stars to 3k stars this year!
It's now at the 1st spot on GitHub Trending (Rust language).
https://github.com/trending/rust?since=daily
Here is the repo if you are interested!
I'm working on a 2D MMORPG solo and wanted to know what the right way of getting validation / feedback early on is as well as figuring out how games are usually released (ex: early release -> playtest 1 -> playtest 2 -> full launch, etc). In my non-gamedev projects I usually throw together a MVP quickly and host a live website to get feedback from users and iterate on it. I see a lot of games have playtests where they open up the game to be played for a certain amount of time and then closed off again. How would I be able to get early feedback and validation for my game and what's the usual release cycle for games like this?
Hey! I just released my Rose Academy visual novel on Steam after a year of development.
Five days in — 563 copies sold, ~$3,279 after Steam's cut. On a $38,000 budget. Yeah.
To be clear upfront: I'm not claiming to be a businessman and I'm not pretending I had everything under control. This project is an expensive hobby. Very expensive. I deliberately paid for experiments, for curiosity, for experience. In this post I'll be honest about where I could've saved money, what turned out to be a waste, and what actually worked. Hope it helps someone!
Quick context
The game is a detective visual novel set in a girls' school. You play as a young agent investigating a suspicious death. Think mystery, romance flags, some fan service.
DEVELOPMENT — ~$31,000
Artist — $8,300
~20 backgrounds (AI-assisted), ~20 CG arts, 70+ sprites, UI elements, clue icons, point-and-click highlight assets. Honestly great value and I have zero regrets. The art style became our biggest marketing weakness — not bad, just not the hyper-polished anime look that goes viral on TikTok. In hindsight I should've hired a more experienced anime artist at 2-3x the rate, or offered a revenue share. But I wanted to work with this specific person and I don't regret it.
Developers (3 people) — $9,800
The big mistake here wasn't the people — it was using Agile for a narrative game. I'm a software engineer by day, so sprints and iterations felt natural. They are absolutely wrong for VN development. The script kept changing, every revision cost extra, bugs were billed to me.
Use Waterfall instead. Don't touch the code until the script is nearly final and the art is mostly done. Yes, it delays you by 2-3 months. But fixed-price contracts with the contractor covering bug fixes would've saved me ~$4,500.
Writer — $5,900
900 pages of script with branching paths (~70,000 words). Fair price, no complaints.
Composer — $1,300 → completely useless
The music is fine. But free stock music is just as fine. Skip this entirely.
Translations (EN/CN/JP) — $2,800 → mostly wasted
We paid professional translators for half the game, then used AI for the rest. Result: we had a few typos in character names and titles. Hotfixed day one.
Lesson: AI translation + native speaker proofreading is good enough and costs a fraction. Also — we dropped the Japanese localization entirely after seeing how few wishlists came from Japan. Always research the market before localizing for it.
Steam page + UI design — $1,500
Overpaid by about $900. Found out later you can get the same quality for ~$500.
Experiments & misc — $1,500
"What if we add animations?" We did. Removed them. Plus subscriptions, Steam Direct fee, paid consultations, test tasks during hiring.
Development result
| Item | Spent | Wasted | Could have been | Result |
|---|---|---|---|---|
| Artist | $8,300 | — | $8,300 | Worth it |
| Developer 1 | $5,000 | ~$2,000 | ~$3,000 | Partial |
| Developer 2 | $3,500 | ~$1,300 | ~$2,200 | Partial |
| Developer 3 | $1,300 | $1,300 | $0 | Wasted |
| Writer | $5,900 | — | $5,900 | Worth it |
| Composer | $1,300 | $1,300 | $0 | Wasted |
| Translations | $2,800 | ~$2,400 | ~$400 | Partial |
| Steam page + UI | $1,500 | ~$500 | ~$1,000 | Partial |
| Experiments & misc | $1,510 | ~$1,050 | ~$460 | Partial |
| Total | ~$31,110 | ~$9,850 | ~$21,260 | |
Total spent: ~$31,000. Wasted: ~$9,850. Could have been: ~$21,260.
MARKETING — ~$7,000
Hired marketer, 4 months — $2,300 → 700 wishlists
Ran Reddit, socials, TikToks. I kept him too long because I hate marketing and didn't want to deal with it. Classic mistake. When I took over, results improved immediately — because I actually cared about the outcome.
Lesson repeated: if you hate doing something, you'll pay dearly for avoiding it.
Reddit ads — $3,700 → 2,500 wishlists
About $1.48/wishlist — above the ~$1.00 I'd consider acceptable. But I was chasing 7,000 wishlists to hit Popular Upcoming (mistake!), so I overpaid deliberately. Became our main paid traffic source.
Side note: getting our ads approved was a nightmare. Our novel features schoolgirls, love and murder, so we often found ourselves in situations where we had to prove that everything was within the rules.
VK ads (Russian social, can assist you with this one btw) — $530 → 800 wishlists
~$0.65/wishlist. Worked, burned out fast.
Other social platforms — $120 → ~0 wishlists
Facebook, Twitter etc
Community ads (gaming channels) — $340 → ~100 wishlists
Niche community groups (visual novel fans etc.) will often post for free or a small donation.
What worked for free (but cost time)
Steam Next Fest — 1,600 wishlists
Detective-themed Steam festival — 800 wishlists
Key insight: Next Fest gives you roughly 40-80% of your existing wishlist count in new wishlists. It multiplies what you have — it doesn't create demand from nothing. Come in with 100 wishlists, leave with 180. Come in with 5,000, leave with maybe 7,000. Build your base before the fest, not during.
Manually reaching out to 500 streamers — ~300 wishlists + priceless playtesting
I spent two months finding and contacting streamers who play visual novels, up to 10k followers. Reviewed 2,000+ channels manually. Sent 10-20 messages a day.
Got 72 streams. Watched ~50 of them live. Average ~4-5 wishlists per stream.
The wishlist numbers sound small. But this was the most valuable thing we did:
1. Real playtesting. We saw exactly where players got bored, where they laughed, where they quit. Cut a lot of overwritten prose that streamers were visibly suffering through. Added more interactive moments. Fixed a ton of stuff.
2. Emotional fuel. Watching real people react to your game — laughing, getting frustrated, theorizing — is something I'd never experienced before. Completely addictive. Out of 72 streams, maybe 10 were negative. We took some feedback, ignored some. You can't please everyone.
We also added Easter eggs referencing every streamer who played and enjoyed the game. Small thing, meant a lot to them and to us.
Short-form video (TikTok/Reels) — complete failure
This is supposedly the main marketing channel for games right now. We talked to several influencer agencies. Nobody wanted our game. Visual novels don't clip well — the gameplay is reading. Our art style wasn't eye-catching enough either. This was the single biggest gap in our marketing. Next game will be designed from day one to look good in vertical video.
The Steam review situation
Our game involves a suicide investigation at a girls' school. Sensitive themes, some suggestive content — nothing explicit, but enough to flag.
Steam review took 5 weeks. We got greenlit 48 hours before launch. I was genuinely prepared to delay.
Steam also required Adult Content tags, which makes the game appear in search results next to actual hentai. We've already gotten negative reviews from people expecting explicit content who were disappointed.
| Channel | Spent | Wishlists | $/wishlist | Result |
|---|---|---|---|---|
| Hired marketer | $2,300 | 700 | ~$3.29 | Wasted |
| Reddit ads | $3,700 | 2,500 | ~$1.48 | Partial |
| VK ads | $530 | 800 | ~$0.65 | Works |
| Community channels | $340 | ~100 | ~$3.40 | Wasted |
| Other social platforms | $120 | ~0 | ∞ | Wasted |
| Streamers outreach | $0 | ~300 | $0 | Worth it |
| Steam Next Fest | $0 | 1,600 | $0 | Must have |
| Detective Steam fest | $0 | 800 | $0 | Must have |
| Total | ~$6,990 | 6,800 | ~$1.03 | |
Total spent: ~$7,000. Total wishlists: 6,800.
RESULTS
Total invested: ~$38,000. So yeah, financially: a disaster.
Why I finished it anyway
Everyone talks about how only 1,000 of Steam's ~18,000 annual releases are commercially successful. People use this to say most devs are wasting their time.
I think the framing is wrong. I'd guess hundreds of thousands of games start development every year and never ship. Most don't fail — they just stop. GitHub is a graveyard.
Finishing a game that loses money is infinitely better, for the first project, than having an unfinished game that never existed.
What I actually got for $38,000:
Someone once told me: "Don't make a 10k-wishlist game — a 50k-wishlist game costs the same to make and earns 5x more." I disagree. How else do you learn? One publisher later told me about a dev who spent 4 years on a 10k-wishlist game, then made a massive hit in 6 months using everything he'd learned. You need the first game to make the second one.
Happy to answer questions about any part of this!
I’ve been building a small app called Holoscape.
It uses your camera to track your head and creates a 3D background effect that moves with you.
It creates a sort of parallax/immersive effect without any special hardware.
One of my recent posts about it got around 2M impressions, and it’s still bringing traffic. So far it made about $100.
Now I’m wondering if Steam could actually be a better place for it.
I feel like the audience there might be more relevant, but I’m not sure if this is something people would expect to find on Steam or not.
Would love honest feedback:
Hey guys, I’ve started learning Godot and for my next project I want to make a variety of mini games and put them in an arcade. I’m curious about how you guys would go about balancing the games around all the players. I like the idea of making them based off of real mechanics you would use in other games, such as aim training, rhythm, etc. How would you guys go about balancing all of the different mechanics so a player doesn’t feel the need to grind 1 of them to reach the end, but doesn’t feel burdened playing the ones they don’t like?
My first thought was the group 1-8 games into a wario ware type system, where they play a collection of smaller games in 1 sitting, but I planned on having it set in an arcade, and that doesn’t really match the vibe of the games I’m going for.
Hello!
As you might know, there is a downright avalanche of Exit 8-like and similar spot-the-anomaly games in the last couple of years on PC. This is partially of course because this genre comes with reasonably low complexity and limited content scope, which makes it a very suitable project for solo & indie devs, even more so with purchasable assets and AI support readily available.
Well, turns out I got hooked, and by now, I happen to be a major fan of this genre and maintain both a Steam curator page as well as an Itch.io collection for Exit 8-likes, mostly for my own sake because I want to play them all. :D
As a result, I have played literally hundreds of these titles, and I am starting to see patterns in why so many of them are unnecessarily flawed and ultimately turn out to be not very good games. I figured I should share my perspective somewhere, so here I go.
Maybe someone finds this helpful:
Don't make it ambiguous whether the player is in the reference room (the one without anomalies) or not. There is no common paradigm for whether, after an anomaly-detection failure, the player is sent back to the reference room or just to the first randomized room (where there might be anomalies) again. Both are fine, but it's very annoying if that is not clear in your game. Make it clear!
Avoid any confusion about the rules of your game. There is no common paradigm for which direction means "found anomaly" and which means "no anomaly". Many games use the original Exit 8 pattern, but a good amount deviate from it, too, and many explain this poorly. It's very annoying if that is not clear in your game. Make it clear! (Exit 8 itself used the most simple but effective approach: a poster on the wall describing the mechanics...)
Learn from other games how anomaly repetition should be handled. It's not fun if the same anomaly has to be experienced 10 times. Many games have a rule of not repeating already-detected anomalies until the player has found them all. Consider something like that.
Please use a good random number generator. It's not fun at all if the same anomalies occur all the time while others never show up. It's also not fun if "no anomaly" occurs too many times in a row.
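A common way to get both properties at once is a "shuffle bag": every anomaly appears once before any repeats, and clean floors are capped so they can't stack up. Here's a minimal Python sketch; the class and parameter names (`AnomalyBag`, `max_clean_streak`, etc.) are made up for illustration and not from any particular engine:

```python
import random

class AnomalyBag:
    """Shuffle-bag picker: each anomaly is drawn once per bag before any
    repeats, and consecutive "no anomaly" floors are capped."""

    def __init__(self, anomalies, clean_weight=2, max_clean_streak=2, rng=None):
        self.anomalies = list(anomalies)
        self.clean_weight = clean_weight          # "no anomaly" tokens per bag
        self.max_clean_streak = max_clean_streak  # cap on clean floors in a row
        self.rng = rng or random.Random()
        self.clean_streak = 0
        self._refill()

    def _refill(self):
        # None represents a clean (anomaly-free) floor.
        self.bag = self.anomalies + [None] * self.clean_weight
        self.rng.shuffle(self.bag)

    def next_floor(self):
        if not self.bag:
            self._refill()
        pick = self.bag.pop()
        if pick is None and self.clean_streak >= self.max_clean_streak:
            # Too many clean floors in a row: swap in an anomaly instead.
            idx = next((i for i, p in enumerate(self.bag) if p is not None), None)
            if idx is None:
                self._refill()
                idx = next(i for i, p in enumerate(self.bag) if p is not None)
            self.bag.append(pick)       # return the clean token to the bag
            pick = self.bag.pop(idx)
        self.clean_streak = self.clean_streak + 1 if pick is None else 0
        return pick
```

Seed the `rng` if you want reproducible runs for testing. The `clean_weight` knob controls roughly how often anomaly-free floors show up overall.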
Don't accidentally reveal anomalies to me through poor asset and resource management. If the game suddenly hangs for half a second when entering a new floor iteration, I will know immediately that something new was loaded into memory, and that something will almost certainly be an anomaly. That destroys the suspense.
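One way to avoid the tell-tale hitch is to preload every asset the floor *could* need at the transition, before rolling the dice, so the pick itself touches no disk. A hedged Python sketch of the idea; `load_asset` is a stand-in for whatever loader your engine actually provides:

```python
import random

def enter_floor(load_asset, candidate_assets, rng):
    """Warm every candidate asset up front, then choose the anomaly.
    The expensive loads all happen at the floor transition (where a stall
    is expected anyway), so an anomaly floor never hitches differently
    from a clean one. `load_asset` is a hypothetical engine callback."""
    cache = {name: load_asset(name) for name in candidate_assets}  # warm all
    chosen = rng.choice(candidate_assets)   # selection itself does no I/O
    return chosen, cache[chosen]
```

The same principle applies whatever the engine: the loading pattern must look identical whether or not an anomaly spawned, or the frame-time graph leaks the answer.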
Make the loop transitions smooth. Most games manage to let you keep walking without a loading screen, visual disruption, or graphical glitches, but far too often a game has exactly such a disruption. That destroys the immersion of the loop.
Make your achievements interesting but not spoiling. Nobody wants to see achievements for "finish the game 1/5/10/20/50 times". Nor does anyone want to be spoiled about the anomalies by the achievement descriptions. (Oh, and of course the obligatory: have achievements in the first place!)
Show the controls. Somewhere, anywhere. If jump, crouch, interaction or whatnot are part of your game, tell the player! At the very minimum, show them in the options (and allow remapping...). It's not fun if playing cat on the keyboard is the only way to figure out which controls are available and where. And not to forget: please let me run. If I can only walk and there is no settings- or lore-related reason for it, the long looping corridors will get tedious faster than they should.
For crying out loud, optimize your game! Your game design and graphics could be the best in the world, but if it melts the GPU at 5 fps, people are not gonna enjoy it. (And contrary to what some people seem to believe, engines like Unreal will not optimize things out of the box for you...)
Provide basic graphics settings. If your game doesn't let players change graphics quality, cap the FPS at a fixed value, enable vsync, adjust brightness, or switch to windowed mode, people will hate it. (Let me tell you, I constantly have to use my GPU vendor's software to limit FPS manually, as my screen runs at 144 Hz ... I really don't need that many fps in your game, trust me. While I'm on the topic: ultrawide support is certainly only a nice-to-have and I can absolutely live with black bars; however, it's not okay if your game glitches out entirely just because the aspect ratio is not what it expected.)
Be aware of people who get motion sick. Some engines (looking at you, Unreal Engine) have head bobbing enabled by default, and if you don't disable it or add a setting that allows the player to do so, some people (myself included) will get motion sick and will neither enjoy nor play your game. The same goes for motion blur, btw.
If you localize your game by whatever means, make sure the result is tested and reviewed by native speakers. If I had a dollar for every time I found that a translation into my native language was available in a game, only to realize it's utter garbage because of apparently out-of-context, word-by-word translations... (That phenomenon is not limited to, but certainly prevalent in, AI translations.)
Read and react to your users' comments (e.g., on the Steam community discussion page of your game). I have made it a habit to give game creators feedback about bugs and other issues through these channels. You might be surprised how often either nobody replies, or a reply comes only weeks or months later. No wonder so many people just go ahead and leave a negative review over a bug that could easily have been fixed...
Well, I suppose the last six recommendations are probably true for most games. As someone said in a similar thread, at the end of the day all of this is mostly a combo of "just polish your game", "look at and mimic the successful competitors", and "use common sense". But, oh well, if it were that obvious, there wouldn't be so many flawed titles out there, would there? :)
Hi all! I have been working on compiling a big repository of 3D resources (both free and paid). The goal is to make a free resource that anyone can refer to. Please feel free to share your thoughts and suggestions; this is very much a work in progress.
The most common mistake I see on Steam pages is writing the description like it is an essay. You are telling the story of your game, its lore, its development history, its inspirations. Almost nobody reads it.
Here's why:
A Steam page visitor has usually already made about 60 percent of their buy or wishlist decision based on the first screenshot, the trailer thumbnail, and the short description at the top. By the time they scroll down to the full description, they are not reading for information. They are reading for confirmation.
They want a quick confirmation that this is the kind of game they thought it was. That it has the features they care about. That the developer knows what they are making and is serious about it.
This completely changes how to write the description.
The structure that works:
Line 1: One sentence. Present tense. The player is doing something. Not "a game about" or "you play as." Something like: "You are a blacksmith who accidentally discovered time travel and now your only tool for fixing history is a hammer."
Lines 2 to 4: What the player actually gets to do. Not the story. The experience. The verbs. "Craft weapons that do not exist yet. Negotiate with kings who will not remember you. Break the rules of causality with enough force."
Lines 5 to 6: The differentiator. One or two sentences about what makes this game different from everything else in its genre. Specific. Not "unique gameplay" because that means nothing. Something like: "Every item you craft can be used in ways the game did not intend. The physics system is fully simulated, which means if you figure out something clever, it actually works."
Bullet points (5 to 7): Feature-level confirmation. Short, active, specific. "50+ hours of handcrafted story" or "Full controller support" or "Procedural world generation with authored story events." These are what people scan.
Close: A single line that creates urgency or emotional connection. "The timeline is collapsing. It is up to you how much of it survives."
The words that hurt you:
Every game uses these words. They signal nothing. Every time you write one of these words, replace it with something specific.
Happy to do a quick critique of anyone's description in the comments if you want to share.
Started this project in mid-2024 thinking it would be a fun little challenge and probably take me a few months. Turns out I had absolutely no idea what I was getting myself into. Nearly 2 years later, it somehow grew into a full mobile remake/reimagining of the old GBA One Piece game, built entirely by myself in Unity. What was meant to be a small learning project became the thing that taught me more about game development than anything else ever could: combat systems, UI, optimization, scope creep, bug fixing, polish, all of it. Honestly just proud to have shipped the damn thing. Would love to hear what fellow devs think.
My mate and I are co-developing with studios in Thailand and Poland, and we send them revenue-share and milestone payments. Small amounts go through fine, but anything a bit bigger, like an actual milestone payment, either gets rejected or sits in limbo for a couple of days with no explanation from the bank. This is embarrassing when you're trying to run a professional operation, and we've had studios ask us if something was wrong because payments were late. What are other devs and publishers using for this?
Hello, I’m passionate about roads in games. I’ve been writing a blog to share my journey and the knowledge I’ve gained. If anyone is interested in this area, I’d love to hear other thoughts and exchange ideas.
Been going back and forth on this for a while and I think I just need to hear how other people think about it.
I'm working on a 2D action game, pretty small scope, mostly solo dev with a friend helping on art. The core loop is built around a parry mechanic. Timing based, pretty tight window, big payoff when you nail it. Internally it feels great. Satisfying, clear feedback, good risk/reward. I've been happy with it for months.
But lately I've been second guessing whether it's *enough*. Like the whole combat system basically revolves around this one interaction and I keep wondering if players are going to feel like it's shallow. I watch footage of other indie action games and they've got dodge cancels, combo trees, stance switching, meter management, all this layered stuff. And part of me thinks I should be adding more systems on top of what I have.
But then I think about games where the simplicity IS the design. Where one mechanic done really well carries the whole experience. And I go back to feeling like maybe what I have is fine and I'm just in my own head about it.
I spent like two hours the other night just brainstorming ideas for additional mechanics, half of them on paper and half just throwing stuff at StonedGPT trying to see if anything clicked. Some of the ideas were interesting in isolation but every time I thought about actually implementing one, it felt like it would dilute the core thing that already works. Adding complexity for complexity's sake.
The thing that keeps tripping me up is I can't tell if I'm being disciplined by keeping it simple, or if I'm being lazy. Those two things look really similar from the inside. And playtesting only helps so much because the people I've shown it to enjoy it, but they're also not the kind of players who would grind through a 40 hour action game. They're friends, they're being nice, they play for 20 minutes and say "yeah this is cool."
I think part of the problem is that as the developer you lose the ability to evaluate your own game's depth because you already know everything. You can't experience discovery anymore. So a mechanic that might feel fresh and interesting to a new player just feels like "the thing I've been testing for 6 months" to you.
Has anyone else dealt with this? How do you decide when a system is deep enough vs when you're just rationalizing not doing more work? And if you kept things simple, did players ever actually complain about it feeling thin?
I run a small indie game studio and I'm in the stage of looking for programmers to help us move forward.
However, I keep running into the same issue: many candidates rely almost entirely on AI-generated code, without really understanding what they are building. This often leads to poor code quality, lack of ownership, and problems when things need to be debugged or extended.
I'm not against using AI as a tool, but there needs to be real understanding behind it.
How do you handle this when hiring or working with developers? Any tips on filtering for actual problem-solving skills instead of just AI-generated code?
In academic language, Temporal Gauss-Seidel (TGS) is a bare-metal matrix solver designed to bring a massive, interconnected web of physical constraints (like stacked boxes or character joints) into balance without choking the CPU. It does this by sweeping through the chain of objects one by one: it calculates the net unbalanced force acting on a single object, shifts that object into local equilibrium using its mobility, and immediately uses the newly updated position to calculate the forces on the next connected object. To prevent this heavy, sequential chain reaction from exhausting CPU cycles, it injects the dimension of "Time" (Temporal) by recycling the exact final physics state from the previous frame as the "initial guess" for the new frame. Because objects barely move in 1/60th of a second, this shortcut saves the hardware dozens of iteration loops, quickly resolving complex physical webs into a stable state.
If we carefully analyze the TGS formula, the solver is essentially isolating a single object within an array. By separating out all the surrounding forces, it focuses exclusively on the inertia (mass) and the pure external gravity acting on that specific object. To understand exactly how TGS works under the hood, let us visualize a physical scenario.
Imagine a person hanging off the edge of a cliff. But he isn't alone: his legs are being grasped by a second person, whose legs are held by a third. This chain of people hanging helplessly below the cliff can be viewed as our first array: [B1, B2, B3], where B3 is at the very bottom and B1 is right at the top, holding onto the cliff edge. Now, suppose there are people safely standing on top of the cliff trying to pull them back up. One person is holding B1’s hand directly, a second person is pulling that person, and a third person is anchoring them all. This creates our upper array: [T3, T2, T1], where T1 is the one in direct physical contact with B1.
If we stitch this entire system together, we get a single appended array: [T3, T2, T1, B1, B2, B3]. In this massive interconnected chain, our Main Focus Point is strictly B1. Why? Because if B1's grip fails, everyone hanging below him will instantly fall into the abyss. Furthermore, immediately following T1, B1 is the very first element actually suspended in the air rather than standing solidly on the mountain. Until the solver properly stabilizes B1, it is physically impossible to accurately calculate the fate of B2 and B3.
Looking at the math, the very first term we see is x_i(k+1). If we are recording this entire scene like a movie, 'k' represents the previous frame (the past), and 'k+1' represents the present frame. If we call this entire human chain the X array, then 'x_i' represents our main focus element: B1. Basically, the CPU is calculating the specific forces acting exclusively on this one guy, x_i.
First, we have to determine B1's own mass and inertia, which is represented in the formula by the term a_ii. We know that mass and inertia provide 'stubbornness', a natural resistance to movement. This is exactly why gravity affects a heavy iron ball and a light foam ball differently. But to resolve a physics constraint, we don't want stubbornness; we desperately want movement. Therefore, the formula mathematically reverses this inertia by flipping it upside down: 1 / a_ii. And the exact inverse of inertia is Mobility.
Next, we have to account for gravity. But remember, B1 isn't floating in a vacuum. We need to find the pure gravitational force acting on him, but B2 and B3 are constantly dragging B1 further down (adding heavily to the downward force), while T1, T2, and T3 are pulling B1 up (fighting to cancel out that downward force). To find the true, raw state of B1, we must measure the forces of all these neighbors. This is exactly where the two Sigmas (the summation symbols) come into play.
In both of these Sigmas, the a_ij term represents the physical connection: the stiffness of the spring, or the tight grip of the hands, that transfers force between every pair of connected elements.
Now we arrive at a highly critical concept. Inside the main bracket, the very first term you see is b_i. This is strictly the pure external force acting on B1, meaning B1's own isolated weight (his mass multiplied by gravity) pulling him straight down.
So, what does the CPU actually do? Inside that bracket, it takes b_i (B1's pure gravity) and strictly subtracts those two Sigmas (the pulls from the guys above and the guys below). When the CPU executes this subtraction, the final result it spits out is the 'UNBALANCED FORCE' (also known as the Residual).
Let's apply some hard numbers to our analogy: Imagine B1's own gravity (b_i) combined with the drag of the people below him (the second Sigma) are pulling down with a massive total force of 100. Meanwhile, T1 hanging above (the first Sigma) is pulling up with a force of only 90. When we subtract these forces (100 minus 90), we are left with a net downward force of 10. This remaining force is our Unbalanced Force. It tells us that B1 is definitely not in equilibrium: he is actively experiencing a continuous, physical pull downwards.
Because B1 will inevitably have to change his position due to this Unbalanced Force, the CPU takes B1's Mobility (that 1 / a_ii we calculated earlier) and multiplies it directly by this exact Unbalanced Force:
Mobility (1 / a_ii) * Unbalanced Force
When this multiplication occurs, this is the exact physical moment we call 'B1's hand slipping from T1'. This slip is absolutely not some random external error; it is the direct physical displacement caused by that remaining force of 10. Driven entirely by the math, B1's hand will physically slide to a brand new position where all the surrounding forces temporarily cancel each other out. This newly calculated, fully stabilized position becomes our final output on the left side of the equation: x_i(k+1).
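Putting the whole walkthrough together, the update being described is the classic Gauss-Seidel step, x_i(k+1) = (1/a_ii) * (b_i - Σ_{j&lt;i} a_ij·x_j(k+1) - Σ_{j&gt;i} a_ij·x_j(k)), where the already-swept neighbors contribute their fresh positions and the not-yet-swept ones contribute last sweep's. A minimal Python sketch using plain dense lists; this illustrates the idea, not a production physics solver:

```python
def gauss_seidel_sweep(A, b, x):
    """One in-place Gauss-Seidel sweep. For each element i, compute the
    unbalanced force left over from its neighbors' pulls, then slide x[i]
    to its local equilibrium using its mobility (1 / a_ii)."""
    n = len(b)
    for i in range(n):
        # Elements before i already hold this sweep's updated positions;
        # elements after i still hold the previous values.
        pulls = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - pulls) / A[i][i]  # mobility * (b_i minus neighbor pulls)
    return x

def solve_warm_started(A, b, x_prev, sweeps=4):
    """The 'Temporal' part: seed the solve with last frame's final state.
    Since objects barely move between frames, a handful of sweeps is
    usually enough instead of iterating from scratch."""
    x = list(x_prev)  # initial guess = previous frame's solution
    for _ in range(sweeps):
        gauss_seidel_sweep(A, b, x)
    return x
```

Note that `x[i] = (b[i] - pulls) / A[i][i]` is algebraically the same as "old position plus mobility times unbalanced force" from the text: the a_ii·x_i term in the residual cancels the old position. Convergence of this plain form is only guaranteed for well-conditioned (e.g. diagonally dominant) systems, which is one reason real engines add damping and constraint-specific tricks on top.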